Advanced Computing Tools and Models for Accelerator Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryne, Robert D.
2008-06-11
This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large scale computing in accelerator physics.
High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics
Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis
2014-07-28
The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spentzouris, Panagiotis; /Fermilab; Cary, John
The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.
Computational Accelerator Physics. Proceedings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bisognano, J.J.; Mondelli, A.A.
1997-04-01
The sixty-two papers appearing in this volume were presented at CAP96, the Computational Accelerator Physics Conference held in Williamsburg, Virginia, from September 24-27, 1996. Science Applications International Corporation (SAIC) and the Thomas Jefferson National Accelerator Facility (Jefferson Lab) jointly hosted CAP96, with financial support from the U.S. Department of Energy's Office of Energy Research and the Office of Naval Research. Topics ranged from descriptions of specific codes to advanced computing techniques and numerical methods. Update talks were presented on nearly all of the accelerator community's major electromagnetic and particle tracking codes. Among all papers, thirty are abstracted for the Energy Science and Technology database. (AIP)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spentzouris, P.; /Fermilab; Cary, J.
The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization for software development and applications accounts for the natural domain areas (beam dynamics, electromagnetics, and advanced acceleration), and all areas depend on the enabling technologies activities, such as solvers and component technology, to deliver the desired performance and integrated simulation environment. The ComPASS applications focus on computationally challenging problems important for design or performance optimization to all major HEP, NP, and BES accelerator facilities. With the cost and complexity of particle accelerators rising, the use of computation to optimize their designs and find improved operating regimes becomes essential, potentially leading to significant cost savings with modest investment.
Bruno Garza, J L; Eijckelhof, B H W; Johnson, P W; Raina, S M; Rynell, P W; Huysmans, M A; van Dieën, J H; van der Beek, A J; Blatter, B M; Dennerlein, J T
2012-01-01
This study, a part of the PRedicting Occupational biomechanics in OFfice workers (PROOF) study, investigated whether there are differences in field-measured forces, muscle efforts, postures, velocities and accelerations across computer activities. These parameters were measured continuously for 120 office workers performing their own work for two hours each. There were differences in nearly all forces, muscle efforts, postures, velocities and accelerations across keyboard, mouse and idle activities. Keyboard activities showed a 50% increase in the median right trapezius muscle effort when compared to mouse activities. Median shoulder rotation changed from 25 degrees internal rotation during keyboard use to 15 degrees external rotation during mouse use. Only keyboard use was associated with median ulnar deviations greater than 5 degrees. Idle activities led to the greatest variability observed in all muscle efforts and postures measured. In future studies, measurements of computer activities could be used to provide information on the physical exposures experienced during computer use. Practitioner Summary: Computer users may develop musculoskeletal disorders due to their force, muscle effort, posture and wrist velocity and acceleration exposures during computer use. We report that many physical exposures are different across computer activities. This information may be used to estimate physical exposures based on patterns of computer activities over time.
Accelerating Innovation: How Nuclear Physics Benefits Us All
DOE R&D Accomplishments Database
2011-01-01
Innovation has been accelerated by nuclear physics in the areas of improving our health; making the world safer; electricity, environment, archaeology; better computers; contributions to industry; and training the next generation of innovators.
Enabling large-scale viscoelastic calculations via neural network acceleration
NASA Astrophysics Data System (ADS)
Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.
2017-12-01
One of the most significant challenges involved in efforts to understand the effects of repeated earthquake cycle activity are the computational costs of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have been previously possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.
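As a rough illustration of the surrogate approach described above (not the authors' code, data, or network architecture), the sketch below fits a small fully connected network to input-output pairs produced by a stand-in "expensive" response function and then evaluates the cheap network in its place; the toy response, sizes, and learning rate are all assumptions made for the example.

```python
# Hedged sketch of a neural-network surrogate: fit a tiny MLP to samples of an
# "expensive" forward model, then evaluate the network instead of the solver.
# The toy response function and all hyperparameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x):
    # Stand-in for a costly viscoelastic calculation.
    return np.exp(-x) * np.sin(4.0 * x)

X = rng.uniform(0.0, 2.0, size=(2000, 1))   # sampled inputs
Y = expensive_model(X)                       # "expensive" outputs

n_hidden, lr = 32, 0.05
W1 = rng.normal(0, 0.5, (1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

for _ in range(5000):                        # plain gradient descent on the MSE
    H = np.tanh(X @ W1 + b1)
    err = (H @ W2 + b2) - Y
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H**2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# The trained surrogate is now far cheaper to evaluate than the original model.
x_new = np.array([[0.7]])
print((np.tanh(x_new @ W1 + b1) @ W2 + b2)[0, 0], expensive_model(0.7))
```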
LHC@Home: a BOINC-based volunteer computing infrastructure for physics studies at CERN
NASA Astrophysics Data System (ADS)
Barranco, Javier; Cai, Yunhai; Cameron, David; Crouch, Matthew; Maria, Riccardo De; Field, Laurence; Giovannozzi, Massimo; Hermes, Pascal; Høimyr, Nils; Kaltchev, Dobrin; Karastathis, Nikos; Luzzi, Cinzia; Maclean, Ewen; McIntosh, Eric; Mereghetti, Alessio; Molson, James; Nosochkov, Yuri; Pieloni, Tatiana; Reid, Ivan D.; Rivkin, Lenny; Segal, Ben; Sjobak, Kyrre; Skands, Peter; Tambasco, Claudia; Veken, Frederik Van der; Zacharov, Igor
2017-12-01
The LHC@Home BOINC project has provided computing capacity for numerical simulations to researchers at CERN since 2004, and has since 2011 been expanded with a wider range of applications. The traditional CERN accelerator physics simulation code SixTrack enjoys continuing volunteer support, and thanks to virtualisation a number of applications from the LHC experiment collaborations and particle theory groups have joined the consolidated LHC@Home BOINC project. This paper addresses the challenges related to traditional and virtualized applications in the BOINC environment, and how volunteer computing has been integrated into the overall computing strategy of the laboratory through the consolidated LHC@Home service. Thanks to the computing power provided by volunteers joining LHC@Home, numerous accelerator beam physics studies have been carried out, yielding an improved understanding of charged particle dynamics in the CERN Large Hadron Collider (LHC) and its future upgrades. The main results are highlighted in this paper.
Nuclear Physics Laboratory 1979 annual report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adelberger, E.G.
1979-07-01
Research progress is reported in the following areas: astrophysics and cosmology, fundamental symmetries, nuclear structure, radiative capture, medium energy physics, heavy ion reactions, research by users and visitors, accelerator and ion source development, instrumentation and experimental techniques, and computers and computing. Publications are listed. (WHK)
A wireless breathing-training support system for kinesitherapy.
Tawa, Hiroki; Yonezawa, Yoshiharu; Maki, Hiromichi; Ogawa, Hidekuni; Ninomiya, Ishio; Sada, Kouji; Hamada, Shingo; Caldwell, W Morton
2009-01-01
We have developed a new wireless breathing-training support system for kinesitherapy. The system consists of an optical sensor, an accelerometer, a microcontroller, a Bluetooth module and a laptop computer. The optical sensor, which is attached to the patient's chest, measures chest circumference. The low frequency components of circumference are mainly generated by breathing. The optical sensor outputs the circumference as serial digital data. The accelerometer measures the dynamic acceleration force produced by exercise, such as walking. The microcontroller sequentially samples this force. The acceleration force and chest circumference are sent sequentially via Bluetooth to a physical therapist's laptop computer, which receives and stores the data. The computer simultaneously displays these data so that the physical therapist can monitor the patient's breathing and acceleration waveforms and give instructions to the patient in real time during exercise. Moreover, the system enables a quantitative training evaluation and calculation of the volume of air inspired and expired by the lungs.
TEACHING PHYSICS: Atwood's machine: experiments in an accelerating frame
NASA Astrophysics Data System (ADS)
Teck Chee, Chia; Hong, Chia Yee
1999-03-01
Experiments in an accelerating frame are often difficult to perform, but simple computer software allows sufficiently rapid and accurate measurements to be made on an arrangement of weights and pulleys known as Atwood's machine.
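For reference (the standard textbook result, not taken from the abstract), the ideal Atwood's machine with a massless, frictionless pulley and masses m_1 > m_2 gives

\[
a \;=\; \frac{(m_1 - m_2)\,g}{m_1 + m_2},
\qquad
T \;=\; \frac{2\,m_1 m_2\,g}{m_1 + m_2},
\]

so a computer-timed measurement of the acceleration a yields g = a\,(m_1 + m_2)/(m_1 - m_2), which is what rapid measurements on the weight-and-pulley arrangement can exploit.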
Terascale Computing in Accelerator Science and Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ko, Kwok
2002-08-21
We have entered the age of "terascale" scientific computing. Processors and system architecture both continue to evolve; hundred-teraFLOP computers are expected in the next few years, and petaFLOP computers toward the end of this decade are conceivable. This ever-increasing power to solve previously intractable numerical problems benefits almost every field of science and engineering and is revolutionizing some of them, notably including accelerator physics and technology. At existing accelerators, it will help us optimize performance, expand operational parameter envelopes, and increase reliability. Design decisions for next-generation machines will be informed by unprecedented comprehensive and accurate modeling, as well as computer-aided engineering; all this will increase the likelihood that even their most advanced subsystems can be commissioned on time, within budget, and up to specifications. Advanced computing is also vital to developing new means of acceleration and exploring the behavior of beams under extreme conditions. With continued progress it will someday become reasonable to speak of a complete numerical model of all phenomena important to a particular accelerator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1993-07-01
The Accelerator System Model (ASM) is a computer program developed to model proton radiofrequency accelerators and to carry out system level trade studies. The ASM FORTRAN subroutines are incorporated into an intuitive graphical user interface which provides for the "construction" of the accelerator in a window on the computer screen. The interface is based on the Shell for Particle Accelerator Related Codes (SPARC) software technology written for the Macintosh operating system in the C programming language. This User Manual describes the operation and use of the ASM application within the SPARC interface. The Appendix provides a detailed description of the physics and engineering models used in ASM. ASM Version 1.0 is a joint project of G. H. Gillespie Associates, Inc. and the Accelerator Technology (AT) Division of the Los Alamos National Laboratory. Neither the ASM Version 1.0 software nor this ASM Documentation may be reproduced without the expressed written consent of both the Los Alamos National Laboratory and G. H. Gillespie Associates, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fermilab
2017-09-01
Scientists, engineers and programmers at Fermilab are tackling today’s most challenging computational problems. Their solutions, motivated by the needs of worldwide research in particle physics and accelerators, help America stay at the forefront of innovation.
Efficient modeling of laser-plasma accelerator staging experiments using INF&RNO
NASA Astrophysics Data System (ADS)
Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.
2017-03-01
The computational framework INF&RNO (INtegrated Fluid & paRticle simulatioN cOde) allows for fast and accurate modeling, in 2D cylindrical geometry, of several aspects of laser-plasma accelerator physics. In this paper, we present some of the new features of the code, including the quasistatic Particle-In-Cell (PIC)/fluid modality, and describe using different computational grids and time steps for the laser envelope and the plasma wake. These and other features allow for a speedup of several orders of magnitude compared to standard full 3D PIC simulations while still retaining physical fidelity. INF&RNO is used to support the experimental activity at the BELLA Center, and we will present an example of the application of the code to the laser-plasma accelerator staging experiment.
Particle Identification on an FPGA Accelerated Compute Platform for the LHCb Upgrade
NASA Astrophysics Data System (ADS)
Färber, Christian; Schwemmer, Rainer; Machen, Jonathan; Neufeld, Niko
2017-07-01
The current LHCb readout system will be upgraded in 2018 to a “triggerless” readout of the entire detector at the Large Hadron Collider collision rate of 40 MHz. The corresponding bandwidth from the detector down to the foreseen dedicated computing farm (event filter farm), which acts as the trigger, has to be increased by a factor of almost 100 from currently 500 Gb/s up to 40 Tb/s. The event filter farm will preanalyze the data and will select the events on an event-by-event basis. This will reduce the bandwidth down to a manageable size to write the interesting physics data to tape. The design of such a system is a challenging task, which is why different new technologies are being considered and investigated for the different parts of the system. For use in the event building farm or in the event filter farm (trigger), an experimental field programmable gate array (FPGA) accelerated computing platform is considered and tested. FPGA compute accelerators are used more and more in standard servers such as for Microsoft Bing search or Baidu search. The platform we use hosts a general Intel CPU and a high-performance FPGA linked via the high-speed Intel QuickPath Interconnect. An accelerator is implemented on the FPGA. It is very likely that these platforms, which are built, in general, for high-performance computing, are also very interesting for the high-energy physics community. First, the performance results of smaller test cases performed at the beginning are presented. Afterward, part of the existing LHCb RICH particle identification is ported to the experimental FPGA accelerated platform and tested. We have compared the performance of the LHCb RICH particle identification running on a normal CPU with the performance of the same algorithm, which is running on the Xeon-FPGA compute accelerator platform.
Experiments Using Cell Phones in Physics Classroom Education: The Computer-Aided "g" Determination
ERIC Educational Resources Information Center
Vogt, Patrik; Kuhn, Jochen; Muller, Sebastian
2011-01-01
This paper continues the collection of experiments that describe the use of cell phones as experimental tools in physics classroom education. We describe a computer-aided determination of the free-fall acceleration "g" using the acoustical Doppler effect. The Doppler shift is a function of the speed of the source. Since a free-falling object's…
Generation of nanosecond neutron pulses in vacuum accelerating tubes
NASA Astrophysics Data System (ADS)
Didenko, A. N.; Shikanov, A. E.; Rashchikov, V. I.; Ryzhkov, V. I.; Shatokhin, V. L.
2014-06-01
The generation of neutron pulses with a duration of 1-100 ns using small vacuum accelerating tubes is considered. Two physical models of acceleration of short deuteron bunches in pulse neutron generators are described. The dependences of an instantaneous neutron flux in accelerating tubes on the parameters of pulse neutron generators are obtained using computer simulation. The results of experimental investigation of short-pulse neutron generators based on the accelerating tube with a vacuum-arc deuteron source, connected in the circuit with a discharge peaker, and an accelerating tube with a laser deuteron source, connected according to the Arkad'ev-Marx circuit, are given. In the experiments, the neutron yield per pulse reached 10⁷ for a pulse duration of 10-100 ns. The resultant experimental data are in satisfactory agreement with the results of computer simulation.
Educating and Training Accelerator Scientists and Technologists for Tomorrow
NASA Astrophysics Data System (ADS)
Barletta, William; Chattopadhyay, Swapan; Seryi, Andrei
2012-01-01
Accelerator science and technology is inherently an integrative discipline that combines aspects of physics, computational science, electrical and mechanical engineering. As few universities offer full academic programs, the education of accelerator physicists and engineers for the future has primarily relied on a combination of on-the-job training supplemented with intensive courses at regional accelerator schools. This article describes the approaches being used to satisfy the educational curiosity of a growing number of interested physicists and engineers.
Educating and Training Accelerator Scientists and Technologists for Tomorrow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barletta, William A.; Chattopadhyay, Swapan; Seryi, Andrei
2012-07-01
Accelerator science and technology is inherently an integrative discipline that combines aspects of physics, computational science, electrical and mechanical engineering. As few universities offer full academic programs, the education of accelerator physicists and engineers for the future has primarily relied on a combination of on-the-job training supplemented with intense courses at regional accelerator schools. This paper describes the approaches being used to satisfy the educational interests of a growing number of interested physicists and engineers.
NASA Astrophysics Data System (ADS)
Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; Rennich, Steven; Rogers, James H.
2017-02-01
The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that only uses accelerator memory. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU only code.
Load management strategy for Particle-In-Cell simulations in high energy particle acceleration
NASA Astrophysics Data System (ADS)
Beck, A.; Frederiksen, J. T.; Dérouillat, J.
2016-09-01
In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performances. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.
Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; ...
2016-07-12
The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn–Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. In this paper, we present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that only uses accelerator memory. Finally, using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU only code.
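The abstract does not spell out the algorithm, but block-inversion schemes of this kind rest on the standard Schur-complement identity for a partitioned matrix. The sketch below illustrates only that identity; the matrices and sizes are made up, and it is not the LSMS GPU kernel or its memory management.

```python
# Illustrative only: Schur-complement identity for the leading block of an
# inverse, the linear-algebra building block behind block matrix inversion.
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 4, 6
A = rng.normal(size=(n1 + n2, n1 + n2)) + (n1 + n2) * np.eye(n1 + n2)  # well conditioned

A11, A12 = A[:n1, :n1], A[:n1, n1:]
A21, A22 = A[n1:, :n1], A[n1:, n1:]

# (A^-1)_{11} = (A11 - A12 A22^-1 A21)^-1
inv11 = np.linalg.inv(A11 - A12 @ np.linalg.solve(A22, A21))

# Agrees with the corresponding block of the full inverse, which a large-scale
# code would avoid forming explicitly.
assert np.allclose(inv11, np.linalg.inv(A)[:n1, :n1])
```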
Neurons compute internal models of the physical laws of motion.
Angelaki, Dora E; Shaikh, Aasef G; Green, Andrea M; Dickman, J David
2004-07-29
A critical step in self-motion perception and spatial awareness is the integration of motion cues from multiple sensory organs that individually do not provide an accurate representation of the physical world. One of the best-studied sensory ambiguities is found in visual processing, and arises because of the inherent uncertainty in detecting the motion direction of an untextured contour moving within a small aperture. A similar sensory ambiguity arises in identifying the actual motion associated with linear accelerations sensed by the otolith organs in the inner ear. These internal linear accelerometers respond identically during translational motion (for example, running forward) and gravitational accelerations experienced as we reorient the head relative to gravity (that is, head tilt). Using new stimulus combinations, we identify here cerebellar and brainstem motion-sensitive neurons that compute a solution to the inertial motion detection problem. We show that the firing rates of these populations of neurons reflect the computations necessary to construct an internal model representation of the physical equations of motion.
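In the standard formulation underlying such internal-model studies (sign conventions vary across the literature), the otoliths transduce only the gravito-inertial acceleration, so the ambiguity and its resolution can be written as

\[
\mathbf{f} \;=\; \mathbf{a} \;-\; \mathbf{g},
\qquad
\dot{\mathbf{g}} \;=\; -\,\boldsymbol{\omega}\times\mathbf{g},
\]

i.e. the translational acceleration a can only be recovered as a = f + g if an internal estimate of gravity g is maintained by integrating the rotation signal ω supplied by the semicircular canals.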
Quantum Accelerators for High-performance Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S.; Britt, Keith A.; Mohiyaddin, Fahd A.
We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.
Implementing Computer Based Laboratories
NASA Astrophysics Data System (ADS)
Peterson, David
2001-11-01
Physics students at Francis Marion University will complete several required laboratory exercises utilizing computer-based Vernier probes. The simple pendulum, the acceleration due to gravity, simple harmonic motion, radioactive half lives, and radiation inverse square law experiments will be incorporated into calculus-based and algebra-based physics courses. Assessment of student learning and faculty satisfaction will be carried out by surveys and test results. Cost effectiveness and time effectiveness assessments will be presented. Majors in Computational Physics, Health Physics, Engineering, Chemistry, Mathematics and Biology take these courses, and assessments will be categorized by major. To enhance the computer skills of students enrolled in the courses, MAPLE will be used for further analysis of the data acquired during the experiments. Assessment of these enhancement exercises will also be presented.
Accelerator-based techniques for the support of senior-level undergraduate physics laboratories
NASA Astrophysics Data System (ADS)
Williams, J. R.; Clark, J. C.; Isaacs-Smith, T.
2001-07-01
Approximately three years ago, Auburn University replaced its aging Dynamitron accelerator with a new 2MV tandem machine (Pelletron) manufactured by the National Electrostatics Corporation (NEC). This new machine is maintained and operated for the University by Physics Department personnel, and the accelerator supports a wide variety of materials modification/analysis studies. Computer software is available that allows the NEC Pelletron to be operated from a remote location, and an Internet link has been established between the Accelerator Laboratory and the Upper-Level Undergraduate Teaching Laboratory in the Physics Department. Additional software supplied by Canberra Industries has also been used to create a second Internet link that allows live-time data acquisition in the Teaching Laboratory. Our senior-level undergraduates and first-year graduate students perform a number of experiments related to radiation detection and measurement as well as several standard accelerator-based experiments that have been added recently. These laboratory exercises will be described, and the procedures used to establish the Internet links between our Teaching Laboratory and the Accelerator Laboratory will be discussed.
Method for computationally efficient design of dielectric laser accelerator structures
Hughes, Tyler; Veronis, Georgios; Wootton, Kent P.; ...
2017-06-22
Here, dielectric microstructures have generated much interest in recent years as a means of accelerating charged particles when powered by solid state lasers. The acceleration gradient (or particle energy gain per unit length) is an important figure of merit. To design structures with high acceleration gradients, we explore the adjoint variable method, a highly efficient technique used to compute the sensitivity of an objective with respect to a large number of parameters. With this formalism, the sensitivity of the acceleration gradient of a dielectric structure with respect to its entire spatial permittivity distribution is calculated by the use of only two full-field electromagnetic simulations, the original and ‘adjoint’. The adjoint simulation corresponds physically to the reciprocal situation of a point charge moving through the accelerator gap and radiating. Using this formalism, we perform numerical optimizations aimed at maximizing acceleration gradients, which generate fabricable structures of greatly improved performance in comparison to previously examined geometries.
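In generic discrete form (the paper's specific acceleration-gradient objective is not reproduced here), the adjoint variable method reads: if the field solves A(ε) E = b and the objective is G(E), then

\[
A(\varepsilon)\,\mathbf{E} = \mathbf{b},
\qquad
A(\varepsilon)^{T}\boldsymbol{\lambda} = -\Big(\frac{\partial G}{\partial \mathbf{E}}\Big)^{T},
\qquad
\frac{dG}{d\varepsilon_i} = \boldsymbol{\lambda}^{T}\,\frac{\partial A}{\partial \varepsilon_i}\,\mathbf{E},
\]

so one forward solve and one adjoint solve yield the sensitivity of G with respect to every permittivity parameter ε_i at once.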
Opportunities for Computational Discovery in Basic Energy Sciences
NASA Astrophysics Data System (ADS)
Pederson, Mark
2011-03-01
An overview of the broad-ranging support of computational physics and computational science within the Department of Energy Office of Science will be provided. Computation as the third branch of physics is supported by all six offices (Advanced Scientific Computing, Basic Energy, Biological and Environmental, Fusion Energy, High-Energy Physics, and Nuclear Physics). Support focuses on hardware, software and applications. Most opportunities within the fields of condensed-matter physics, chemical physics and materials sciences are supported by the Office of Basic Energy Science (BES) or through partnerships between BES and the Office for Advanced Scientific Computing. Activities include radiation sciences, catalysis, combustion, materials in extreme environments, energy-storage materials, light-harvesting and photovoltaics, solid-state lighting and superconductivity. A summary of two recent reports by the computational materials and chemical communities on the role of computation during the next decade will be provided. In addition to materials and chemistry challenges specific to energy sciences, issues identified include a focus on the role of the domain scientist in integrating, expanding and sustaining applications-oriented capabilities on evolving high-performance computing platforms and on the role of computation in accelerating the development of innovative technologies.
Better physical activity classification using smartphone acceleration sensor.
Arif, Muhammad; Bilal, Mohsin; Kattan, Ahmed; Ahamed, S Iqbal
2014-09-01
Obesity is becoming one of the serious problems for the health of the worldwide population. Social interactions on mobile phones and computers via the internet through social e-networks are one of the major causes of lack of physical activity. For the health specialist, it is important to track the record of physical activities of obese or overweight patients to supervise weight loss control. In this study, the acceleration sensor present in the smartphone is used to monitor the physical activity of the user. Physical activities including Walking, Jogging, Sitting, Standing, Walking upstairs and Walking downstairs are classified. Time domain features are extracted from the acceleration data recorded by the smartphone during different physical activities. The time and space complexity of the whole framework is reduced by optimal feature subset selection and pruning of instances. Classification results of six physical activities are reported in this paper. Using simple time domain features, 99 % classification accuracy is achieved. Furthermore, attribute subset selection is used to remove the redundant features and to minimize the time complexity of the algorithm. A subset of 30 features produced more than 98 % classification accuracy for the six physical activities.
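The sketch below illustrates the general kind of windowed time-domain feature extraction described above; the specific features, window length, and classifier used in the study may differ, and the sampling rate and synthetic data are assumptions for the example.

```python
# Hedged sketch: windowed time-domain features from a 3-axis accelerometer trace.
import numpy as np

def window_features(acc, fs=50, win_s=2.0):
    """acc: (N, 3) array of x/y/z accelerations; returns (n_windows, n_features)."""
    win = int(fs * win_s)
    feats = []
    for start in range(0, len(acc) - win + 1, win):
        w = acc[start:start + win]
        mag = np.linalg.norm(w, axis=1)
        feats.append(np.concatenate([
            w.mean(axis=0),                          # mean per axis
            w.std(axis=0),                           # standard deviation per axis
            [mag.mean(), mag.std()],                 # magnitude statistics
            [np.corrcoef(w[:, 0], w[:, 1])[0, 1]],   # x-y correlation
        ]))
    return np.array(feats)

# Example with synthetic data standing in for a recorded activity.
rng = np.random.default_rng(0)
fake_walk = rng.normal([0.0, 0.0, 9.8], 1.0, size=(500, 3))
print(window_features(fake_walk).shape)              # (5, 9) feature vectors
```

Feature vectors of this form would then be fed to any standard classifier to label each window as one of the six activities.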
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnstad, H.
The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the Meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, database systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, database systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, C.; Yu, G.; Wang, K.
The physical designs of new concept reactors, which have complex structures, various materials and neutron energy spectra, have greatly raised the requirements on calculation methods and the corresponding computing hardware. Along with widely used parallel algorithms, heterogeneous platform architectures have been introduced into numerical computations in reactor physics. Because of their natural parallel characteristics, CPU-FPGA architectures are often used to accelerate numerical computation. This paper studies the application and features of this kind of heterogeneous platform in numerical calculations of reactor physics through practical examples. The designed neutron diffusion module based on the CPU-FPGA architecture achieves an 11.2 speedup factor, demonstrating that it is feasible to apply this kind of heterogeneous platform to reactor physics. (authors)
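For context (standard formulation, not quoted from the paper), the steady-state one-group neutron diffusion eigenvalue problem that such a module discretizes has the form

\[
-\,\nabla\!\cdot\! D(\mathbf{r})\,\nabla\phi(\mathbf{r}) \;+\; \Sigma_a(\mathbf{r})\,\phi(\mathbf{r})
\;=\; \frac{1}{k_{\mathrm{eff}}}\,\nu\Sigma_f(\mathbf{r})\,\phi(\mathbf{r}),
\]

with diffusion coefficient D, absorption cross section Σ_a, and fission source νΣ_f; the repeated sparse linear solves arising from its discretization are the kind of kernel a CPU-FPGA architecture is used to accelerate.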
Real-World Physics: A Portable MBL for Field Measurements.
ERIC Educational Resources Information Center
Albergotti, Clifton
1994-01-01
Uses a moderately priced digital multimeter that has output and software compatible with personal computers to make a portable, computer-based data-acquisition system. The system can measure voltage, current, frequency, capacitance, transistor hFE, and temperature. Describes field measures of velocity, acceleration, and temperature as function of…
Experiments Using Cell Phones in Physics Classroom Education: The Computer-Aided g Determination
NASA Astrophysics Data System (ADS)
Vogt, Patrik; Kuhn, Jochen; Müller, Sebastian
2011-09-01
This paper continues the collection of experiments that describe the use of cell phones as experimental tools in physics classroom education [1-4]. We describe a computer-aided determination of the free-fall acceleration g using the acoustical Doppler effect. The Doppler shift is a function of the speed of the source. Since a free-falling object's speed changes linearly with time, the Doppler shift also changes with time. It is possible to measure this shift using software that is both easy to use and readily available. Students will use the time-dependency of the Doppler shift to experimentally determine the acceleration due to gravity by using a cell phone as a freely falling object emitting a sound with constant frequency.
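For a source of constant frequency f₀ falling directly away from a stationary microphone (the standard receding-source geometry assumed here, with speed of sound c), the observed frequency and the resulting linear relation are

\[
f(t) \;=\; \frac{f_0\,c}{c + g\,t}
\quad\Longrightarrow\quad
\frac{1}{f(t)} \;=\; \frac{1}{f_0} \;+\; \frac{g}{f_0\,c}\,t,
\]

so plotting 1/f against t gives a straight line whose slope g/(f₀c), together with c and the emitted frequency f₀, determines g.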
UTDallas Offline Computing System for B Physics with the Babar Experiment at SLAC
NASA Astrophysics Data System (ADS)
Benninger, Tracy L.
1998-10-01
The University of Texas at Dallas High Energy Physics group is building a high performance, large storage computing system for B physics research with the BaBar experiment ("factory") at the Stanford Linear Accelerator Center. The goal of this system is to analyze one terabyte of complex Event Store data from BaBar in one to two days. The foundation of the computing system is a Sun E6000 Enterprise multiprocessor system, with additions of a Sun StorEdge L1800 Tape Library, a Sun Workstation for processing batch jobs, staging disks and interface cards. The design considerations, current status, projects underway, and possible upgrade paths will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lovelace, III, Henry H.
In accelerator physics, models of a given machine are used to predict the behaviors of the beam, magnets, and radiofrequency cavities. The use of computational models has become widespread to ease the development period of the accelerator lattice. There are various programs that are used to create lattices and run simulations of both transverse and longitudinal beam dynamics. The programs include Methodical Accelerator Design (MAD) MAD8, MADX, Zgoubi, the Polymorphic Tracking Code (PTC), and many others. In this discussion, BMAD (Baby Methodical Accelerator Design) is presented as an additional tool for creating and simulating accelerator lattices for the study of beam dynamics in the Relativistic Heavy Ion Collider (RHIC).
Glowacki, David R; O'Connor, Michael; Calabró, Gaetano; Price, James; Tew, Philip; Mitchell, Thomas; Hyde, Joseph; Tew, David P; Coughtrie, David J; McIntosh-Smith, Simon
2014-01-01
With advances in computational power, the rapidly growing role of computational/simulation methodologies in the physical sciences, and the development of new human-computer interaction technologies, the field of interactive molecular dynamics seems destined to expand. In this paper, we describe and benchmark the software algorithms and hardware setup for carrying out interactive molecular dynamics utilizing an array of consumer depth sensors. The system works by interpreting the human form as an energy landscape, and superimposing this landscape on a molecular dynamics simulation to chaperone the motion of the simulated atoms, affecting both graphics and sonified simulation data. GPU acceleration has been key to achieving our target of 60 frames per second (FPS), giving an extremely fluid interactive experience. GPU acceleration has also allowed us to scale the system for use in immersive 360° spaces with an array of up to ten depth sensors, allowing several users to simultaneously chaperone the dynamics. The flexibility of our platform for carrying out molecular dynamics simulations has been considerably enhanced by wrappers that facilitate fast communication with a portable selection of GPU-accelerated molecular force evaluation routines. In this paper, we describe a 360° atmospheric molecular dynamics simulation we have run in a chemistry/physics education context. We also describe initial tests in which users have been able to chaperone the dynamics of 10-alanine peptide embedded in an explicit water solvent. Using this system, both expert and novice users have been able to accelerate peptide rare event dynamics by 3-4 orders of magnitude.
High Energy Density Physics and Exotic Acceleration Schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cowan, T.; /General Atomics, San Diego; Colby, E.
2005-09-27
The High Energy Density and Exotic Acceleration working group took as our goal to reach beyond the community of plasma accelerator research with its applications to high energy physics, to promote exchange with other disciplines which are challenged by related and demanding beam physics issues. The scope of the group was to cover particle acceleration and beam transport that, unlike other groups at AAC, are not mediated by plasmas or by electromagnetic structures. At this Workshop, we saw an impressive advancement from years past in the area of Vacuum Acceleration, for example with the LEAP experiment at Stanford. And we saw an influx of exciting new beam physics topics involving particle propagation inside of solid-density plasmas or at extremely high charge density, particularly in the areas of laser acceleration of ions, and extreme beams for fusion energy research, including Heavy-ion Inertial Fusion beam physics. One example of the importance and extreme nature of beam physics in HED research is the requirement in the Fast Ignitor scheme of inertial fusion to heat a compressed DT fusion pellet to keV temperatures by injection of laser-driven electron or ion beams of giga-Amp current. Even in modest experiments presently being performed on the laser-acceleration of ions from solids, mega-amp currents of MeV electrons must be transported through solid foils, requiring almost complete return current neutralization, and giving rise to a wide variety of beam-plasma instabilities. As keynote talks our group promoted Ion Acceleration (plenary talk by A. MacKinnon), which historically has grown out of inertial fusion research, and HIF Accelerator Research (invited talk by A. Friedman), which will require impressive advancements in space-charge-limited ion beam physics and in understanding the generation and transport of neutralized ion beams. A unifying aspect of High Energy Density applications was the physics of particle beams inside of solids, which is proving to be a very important field for diverse applications such as muon cooling, fusion energy research, and ultra-bright particle and radiation generation with high intensity lasers. We had several talks on these and other subjects, and many joint sessions with the Computational group, the EM Structures group, and the Beam Generation group. We summarize our group's work in the following categories: vacuum acceleration schemes; ion acceleration; particle transport in solids; and applications to high energy density phenomena.
Parallel and Portable Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.
1997-08-01
We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
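As a simplified deterministic analogue (not MC++ itself), the sketch below runs the power iteration that k-eigenvalue calculations perform generation by generation on the fission source; the small matrix is made up and merely stands in for the transport-plus-fission operator.

```python
# Illustrative power iteration for a dominant eigenvalue k, mirroring how
# k-eigenvalue calculations iterate on the fission source between generations.
import numpy as np

F = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.2],
              [0.1, 0.2, 0.5]])   # made-up generation-to-generation operator

source = np.ones(3) / 3.0         # initial fission source guess
k = 1.0
for _ in range(200):
    new_source = F @ source
    k = new_source.sum() / source.sum()   # ratio of successive generation sizes
    source = new_source / new_source.sum()

print(k, np.max(np.abs(np.linalg.eigvals(F))))  # should agree closely
```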
NASA Astrophysics Data System (ADS)
Aiken, John; Schatz, Michael; Burk, John; Caballero, Marcos; Thoms, Brian
2012-03-01
We describe the assessment of computational modeling in a ninth grade classroom in the context of the Arizona Modeling Instruction physics curriculum. Using a high-level programming environment (VPython), students develop computational models to predict the motion of objects under a variety of physical situations (e.g., constant net force), to simulate real-world phenomena (e.g., a car crash), and to visualize abstract quantities (e.g., acceleration). The impact of teaching computation is evaluated through a proctored assignment that asks the students to complete a provided program to represent the correct motion. Using questions isomorphic to the Force Concept Inventory we gauge students' understanding of force in relation to the simulation. The students are given an open-ended essay question that asks them to explain the steps they would use to model a physical situation. We also investigate the attitudes and prior experiences of each student using the Computation Modeling in Physics Attitudinal Student Survey (COMPASS) developed at Georgia Tech as well as a prior computational experiences survey.
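A minimal sketch of the kind of motion-prediction model such students write is shown below in plain Python; classroom versions typically wrap the same update loop in VPython objects for visualization, and all numerical values here are arbitrary.

```python
# Velocity-update / position-update steps under a constant net force.
m = 2.0                 # mass (kg)
F_net = 4.0             # constant net force (N)
x, v = 0.0, 0.0         # initial position (m) and velocity (m/s)
t, dt = 0.0, 0.01       # time and time step (s)

while t < 3.0:
    a = F_net / m       # Newton's second law
    v = v + a * dt      # update velocity from acceleration
    x = x + v * dt      # update position from velocity
    t = t + dt

print(f"t = {t:.2f} s, x = {x:.2f} m, v = {v:.2f} m/s")  # compare with x = a t^2 / 2
```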
Selected topics in particle accelerators: Proceedings of the CAP meetings. Volume 5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parsa, Z.
1995-10-01
This Report includes copies of transparencies and notes from the presentations made at the Center for Accelerator Physics at Brookhaven National Laboratory. Editing and changes to the authors' contributions in this Report were made only to fulfill the publication requirements. This volume includes notes and transparencies on nine presentations: "The Energy Exchange and Efficiency Consideration in Klystrons", "Some Properties of Microwave RF Sources for Future Colliders + Overview of Microwave Generation Activity at the University of Maryland", "Field Quality Improvements in Superconducting Magnets for RHIC", "Hadronic B-Physics", "Spiking Pulses from Free Electron Lasers: Observations and Computational Models", "Crystalline Beams in Circular Accelerators", "Accumulator Ring for AGS & Recent AGS Performance", "RHIC Project Machine Status", and "Gamma-Gamma Colliders."
Exploring the Integration of Computational Modeling in the ASU Modeling Curriculum
NASA Astrophysics Data System (ADS)
Schatz, Michael; Aiken, John; Burk, John; Caballero, Marcos; Douglas, Scott; Thoms, Brian
2012-03-01
We describe the implementation of computational modeling in a ninth grade classroom in the context of the Arizona Modeling Instruction physics curriculum. Using a high-level programming environment (VPython), students develop computational models to predict the motion of objects under a variety of physical situations (e.g., constant net force), to simulate real-world phenomena (e.g., a car crash), and to visualize abstract quantities (e.g., acceleration). We discuss how VPython allows students to utilize all four structures that describe a model as given by the ASU Modeling Instruction curriculum. Implications for future work will also be discussed.
Introductory Physics Experiments Using the Wiimote
NASA Astrophysics Data System (ADS)
Somers, William; Rooney, Frank; Ochoa, Romulo
2009-03-01
The Wii, a video game console, is a very popular device with millions of units sold worldwide over the past two years. Although computationally it is not a powerful machine, to a physics educator its most important components can be its controllers. The Wiimote (or remote) controller contains three accelerometers, an infrared detector, and Bluetooth connectivity at a relatively low price. Thanks to available open source code, any PC with Bluetooth capability can detect the information sent out by the Wiimote. We have designed several experiments for introductory physics courses that make use of the accelerometers and Bluetooth connectivity. We have adapted the Wiimote to measure the: variable acceleration in simple harmonic motion, centripetal and tangential accelerations in circular motion, and the accelerations generated when students lift weights. We present the results of our experiments and compare them with those obtained when using motion and/or force sensors.
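The measured accelerations in these experiments are compared against the standard kinematic relations (not specific to the Wiimote):

\[
a_{\mathrm{SHM}}(t) = -\,\omega^{2} A\cos(\omega t),
\qquad
a_{c} = \frac{v^{2}}{r} = \omega^{2} r,
\qquad
a_{t} = r\,\frac{d\omega}{dt},
\]

for simple harmonic motion of amplitude A and angular frequency ω, and for the centripetal and tangential components in circular motion of radius r.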
ERIC Educational Resources Information Center
School Science Review, 1977
1977-01-01
Includes methods for demonstrating the Schlieren effect, measuring refractive index, measuring acceleration, presenting concepts of optics, automatically recording weather, constructing apparatus for sound experiments, using thermistor thermometers, using the 741 operational amplifier in analog computing, measuring inductance, electronically ringing…
Coupled Mechanical-Electrochemical-Thermal Modeling for Accelerated Design of EV Batteries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santhanagopalan, Shriram; Zhang, Chao; Kim, Gi-Heon
2015-05-03
This presentation provides an overview of the mechanical-electrochemical-thermal (M-ECT) modeling efforts. The physical phenomena occurring in a battery are many and complex and operate at different scales (particle, electrodes, cell, and pack). A better understanding of the interplay between different physics occurring at different scales through modeling could provide insight for designing improved batteries for electric vehicles. Work funded by the U.S. DOE has resulted in development of computer-aided engineering (CAE) tools to accelerate electrochemical and thermal design of batteries; mechanical modeling is under way. Three competitive CAE tools are now commercially available.
Chaotic dynamics in accelerator physics
NASA Astrophysics Data System (ADS)
Cary, J. R.
1992-11-01
Substantial progress was made in several areas of accelerator dynamics. We have completed a design of an FEL wiggler with adiabatic trapping and detrapping sections to develop an understanding of longitudinal adiabatic dynamics and to create efficiency enhancements for recirculating free-electron lasers. We developed a computer code for analyzing the critical KAM tori that bind the dynamic aperture in circular machines. Studies of modes that arise due to the interaction of coasting beams with a narrow-spectrum impedance have begun. During this research, educational and research ties with the accelerator community at large have been strengthened.
NASA Astrophysics Data System (ADS)
Faerber, Christian
2017-10-01
The LHCb experiment at the LHC will upgrade its detector by 2018/2019 to a ‘triggerless’ readout scheme, where all the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector down to the Event Filter farm to 40 TBit/s, which also has to be processed to select the interesting proton-proton collisions for later storage. The architecture of such a computing farm, which can process this amount of data as efficiently as possible, is a challenging task and several compute accelerator technologies are being considered for use inside the new Event Filter farm. In the high performance computing sector more and more FPGA compute accelerators are used to improve the compute performance and reduce the power consumption (e.g. in the Microsoft Catapult project and Bing search engine). Also for the LHCb upgrade the usage of an experimental FPGA accelerated computing platform in the Event Building or in the Event Filter farm is being considered and therefore tested. This platform from Intel hosts a general CPU and a high performance FPGA linked via a high speed link, which for this platform is a QPI link. On the FPGA an accelerator is implemented. The system used is a two-socket Intel platform with a Xeon CPU and an FPGA. The FPGA has cache-coherent memory access to the main memory of the server and can collaborate with the CPU. As a first step, a compute-intensive algorithm to reconstruct Cherenkov angles for the LHCb RICH particle identification was successfully ported in Verilog to the Intel Xeon/FPGA platform and accelerated by a factor of 35. The same algorithm was ported to the Intel Xeon/FPGA platform with OpenCL. The implementation work and the performance will be compared. Another FPGA accelerator, the Nallatech 385 PCIe accelerator with the same Stratix V FPGA, was also tested for performance. The results show that the Intel Xeon/FPGA platforms, which are built in general for high performance computing, are also very interesting for the High Energy Physics community.
Launch Pad Physics: Accelerate Interest With Model Rocketry.
ERIC Educational Resources Information Center
Key, LeRoy F.
1982-01-01
Student activities in an interdisciplinary, model rocket science program are described, including the construction of an Ohio Scientific computer system with graphic capabilities for use in the program and cooperative efforts with the Rocket Research Institute. (JN)
Pressure profiles of the BRing based on the simulation used in the CSRm
NASA Astrophysics Data System (ADS)
Wang, J. C.; Li, P.; Yang, J. C.; Yuan, Y. J.; Wu, B.; Chai, Z.; Luo, C.; Dong, Z. Q.; Zheng, W. H.; Zhao, H.; Ruan, S.; Wang, G.; Liu, J.; Chen, X.; Wang, K. D.; Qin, Z. M.; Yin, B.
2017-07-01
HIAF-BRing, a new multipurpose accelerator of the High Intensity heavy-ion Accelerator Facility project, requires an extremely high vacuum, lower than 10^-11 mbar, to fulfill the requirements of radioactive beam physics and high energy density physics. To achieve the required pressure, the benchmarked codes VAKTRAK and Molflow+ are used to simulate the pressure profiles of the BRing system. To ensure the accuracy of the VAKTRAK implementation, the computational results are verified against measured pressure data and compared with a new simulation code, BOLIDE, on the current synchrotron CSRm. Once VAKTRAK has been verified, the pressure profiles of the BRing are calculated with different parameters such as conductance, outgassing rates, and pumping speeds. According to the computational results, the optimal parameters are selected to achieve the required pressure for the BRing.
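As a rough illustration of the kind of steady-state balance such vacuum codes solve, the following minimal finite-difference sketch computes a 1D pressure profile from a specific conductance, an outgassing rate, and a distributed pumping speed (all names, units, and the boundary treatment here are illustrative assumptions, not parameters or code from the paper or from VAKTRAK):

    import numpy as np

    def pressure_profile(length, n, w, q, s, p_end):
        """Steady-state 1D pressure along a beam-pipe segment, solving
        w * d2P/dz2 = s*P - q with fixed end pressures (lumped pumps at both ends).
        w: specific conductance, q: outgassing per unit length,
        s: distributed pumping speed per unit length (illustrative units only)."""
        dz = length / (n - 1)
        A = np.zeros((n, n))
        b = np.full(n, -q)
        for i in range(1, n - 1):
            A[i, i - 1] = A[i, i + 1] = w / dz**2      # conductance (diffusion) term
            A[i, i] = -2.0 * w / dz**2 - s             # plus distributed pumping
        A[0, 0] = A[-1, -1] = 1.0                      # fixed pressure at the end pumps
        b[0] = b[-1] = p_end
        return np.linalg.solve(A, b)                   # pressure at the n grid points

A parameter scan over w, q, and s, as described in the abstract, then amounts to repeated calls of such a solver.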
Novel 3D/VR interactive environment for MD simulations, visualization and analysis.
Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P
2014-12-18
Advances in computing hardware and software over the last decades have impacted scientific research in many fields, including materials science, biology, chemistry, and physics, among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology, and specialized scientific codes to overcome processing-speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling, and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and its basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization, and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.
Multi-GPU Jacobian accelerated computing for soft-field tomography.
Borsic, A; Attardo, E A; Halter, R J
2012-10-01
Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use finite element models (FEMs) to represent the volume of interest and solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are 3D. Although the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in electrical impedance tomography (EIT) applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15-20 min with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Furthermore, providing high-speed reconstructions is essential for some promising clinical applications of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In this work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much higher memory bandwidth than CPUs and better parallelism. We are able to obtain acceleration factors of 20 times on a single NVIDIA S1070 GPU, and of 50 times on four GPUs, bringing the Jacobian computing time for a fine 3D mesh from 12 min to 14 s. We regard this as an important step toward gaining interactive reconstruction times in 3D imaging, particularly when coupled in the future with acceleration of the forward problem. While we demonstrate results for EIT, these results apply to any soft-field imaging modality where the Jacobian matrix is computed with the adjoint method.
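For readers unfamiliar with the adjoint method mentioned at the end of the abstract, a minimal NumPy sketch of the per-element Jacobian assembly is given below (the solver callable, the per-element derivative matrices, and all shapes are assumptions chosen for illustration, not the authors' implementation):

    import numpy as np

    def jacobian_adjoint(solve, dA_dsigma, drive_current, meas_patterns):
        """Adjoint-method Jacobian for a single drive pattern:
        J[m, k] = -adj_m^T (dA/dsigma_k) fwd."""
        fwd = solve(drive_current)                          # forward nodal potentials
        adj = np.stack([solve(p) for p in meas_patterns])   # one adjoint field per measurement

        J = np.empty((adj.shape[0], len(dA_dsigma)))
        for k, dA in enumerate(dA_dsigma):                  # per-element stiffness derivative
            J[:, k] = -adj @ (dA @ fwd)                     # small dense products; map well to GPUs
        return J

The loop over elements is embarrassingly parallel, which is what makes the Jacobian build a natural candidate for the single- and multi-GPU offloading described above.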
NASA Astrophysics Data System (ADS)
Eisenbach, Markus
The Locally Self-consistent Multiple Scattering (LSMS) code solves the first-principles density functional theory Kohn-Sham equation for a wide range of materials, with a special focus on metals, alloys, and metallic nanostructures. It has traditionally exhibited near-perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first-principles calculations of O(100,000) atoms and statistical physics sampling of finite-temperature properties. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility, we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code. This work has been sponsored by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Material Sciences and Engineering Division and by the Office of Advanced Scientific Computing. This work used resources of the Oak Ridge Leadership Computing Facility, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
Accelerator Based Tools of Stockpile Stewardship
NASA Astrophysics Data System (ADS)
Seestrom, Susan
2017-01-01
The Manhattan Project had to solve difficult challenges in physics and materials science. During the Cold War a large nuclear stockpile was developed. In both cases, the approach was largely empirical. Today that stockpile must be certified without nuclear testing, a task that becomes more difficult as the stockpile ages. I will discuss the role of modern accelerator-based experiments, such as x-ray radiography, proton radiography, and neutron and nuclear physics experiments, in stockpile stewardship. These new tools provide data of exceptional sensitivity and are answering questions about the stockpile, improving our scientific understanding, and providing validation for the computer simulations that are relied upon to certify today's stockpile.
Medical Applications at CERN and the ENLIGHT Network
Dosanjh, Manjit; Cirilli, Manuela; Myers, Steve; Navin, Sparsh
2016-01-01
State-of-the-art techniques derived from particle accelerators, detectors, and physics computing are routinely used in clinical practice and medical research centers: from imaging technologies to dedicated accelerators for cancer therapy and nuclear medicine, simulations, and data analytics. Principles of particle physics themselves are the foundation of a cutting-edge radiotherapy technique for cancer treatment: hadron therapy. This article is an overview of the involvement of CERN, the European Organization for Nuclear Research, in medical applications, with specific focus on hadron therapy. It also presents the history, achievements, and future scientific goals of the European Network for Light Ion Hadron Therapy, whose coordination office is at CERN.
Constraint methods that accelerate free-energy simulations of biomolecules.
Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A
2015-12-28
Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.
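As a toy example of the spring-like restraints mentioned above, a harmonic distance restraint can be added to the physical force field as follows (a hedged sketch; the force constant, target distance, and coordinate layout are illustrative assumptions, not any specific method from the review):

    import numpy as np

    def harmonic_distance_restraint(r_i, r_j, d0, k):
        """Energy and forces of a harmonic restraint pulling atoms i and j
        toward a target separation d0: E = 0.5 * k * (d - d0)**2."""
        dr = r_i - r_j
        d = np.linalg.norm(dr)
        energy = 0.5 * k * (d - d0) ** 2
        f_i = -k * (d - d0) * dr / d        # force on atom i; atom j receives -f_i
        return energy, f_i, -f_i

Principled approaches of the kind reviewed here differ mainly in how such restraint terms are weighted and combined when the external knowledge is noisy or mutually inconsistent.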
NASA Astrophysics Data System (ADS)
Young, Frederic; Siegel, Edward
Cook-Levin theorem algorithmic computational-complexity (C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points is exploited via Siegel FUZZYICS = CATEGORYICS = ANALOGYICS = PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle "square-of-opposition" tabular list-format truth-table matrix analytics, which predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus decelerate Harel [Algorithmics (1987)]-Sipser [Intro. Thy. Computation ('97)] algorithmic C-C: "NIT-picking", to optimize optimization-problems optimally (OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, "NIT-picking" is "noise" power-spectrum qualitative-type variation via quantitative critical-exponents variation. Computer-science algorithmic C-C models (Turing machines, finite-state models, finite automata, ..., discrete-maths graph-theory equivalence to physics Feynman diagrams) are identified as early-days, once-workable but limiting crutches that only impede latter-day new insights.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Chuan S.; Shao, Xi
2016-06-14
The main objective of our work is to provide a theoretical basis and modeling support for the design and experimental setup of a compact laser proton accelerator to produce high-quality proton beams tunable in energy from 50 to 250 MeV using a short-pulse sub-petawatt laser. We performed theoretical and computational studies of energy scaling and Rayleigh-Taylor instability development in laser radiation pressure acceleration (RPA) and developed novel RPA-based schemes to remedy/suppress instabilities for high-quality quasi-monoenergetic proton beam generation, as we proposed. During the project period, we published nine peer-reviewed journal papers and made twenty conference presentations, including six invited talks on our work. The project supported one graduate student, who received his PhD degree in physics in 2013, and supported two post-doctoral associates. We also mentored three high school students and one undergraduate physics major by inspiring their interest and involving them in the project.
Electromagnetic Physics Models for Parallel Computing Architectures
NASA Astrophysics Data System (ADS)
Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.
2016-10-01
The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next-generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and the multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and the type of parallelization needed to achieve optimal performance. In this paper we describe the implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.
Proceedings of the workshop on B physics at hadron accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
McBride, P.; Mishra, C.S.
1993-12-31
This report contains papers on the following topics: Measurement of Angle α; Measurement of Angle β; Measurement of Angle γ; Other B Physics; Theory of Heavy Flavors; Charged Particle Tracking and Vertexing; e and γ Detection; Muon Detection; Hadron ID; Electronics, DAQ, and Computing; and Machine Detector Interface. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.
Late-time structure of the Bunch-Davies FRW wavefunction
NASA Astrophysics Data System (ADS)
Konstantinidis, George; Mahajan, Raghu; Shaghoulian, Edgar
2016-10-01
In this short note we organize a perturbation theory for the Bunch-Davies wavefunction in flat, accelerating cosmologies. The calculational technique avoids the in-in formalism and instead uses an analytic continuation from Euclidean signature. We will consider both massless and conformally coupled self-interacting scalars. These calculations explicitly illustrate two facts. The first is that IR divergences get sharper as the acceleration slows. The second is that UV-divergent contact terms in the Euclidean computation can contribute to the absolute value of the wavefunction in Lorentzian signature. Here UV divergent refers to terms involving inverse powers of the radial cutoff in the Euclidean computation. In Lorentzian signature such terms encode physical time dependence of the wavefunction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Shujia; Duffy, Daniel; Clune, Thomas
The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive for fulfilling this requirement. However, the Cell's characteristics, 256 KB of local memory per SPE and the new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity, i.e., the ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~25% of total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.
A multiscale approach to accelerate pore-scale simulation of porous electrodes
NASA Astrophysics Data System (ADS)
Zheng, Weibo; Kim, Seung Hyun
2017-04-01
A new method to accelerate pore-scale simulation of porous electrodes is presented. The method combines the macroscopic approach with pore-scale simulation by decomposing a physical quantity into macroscopic and local variations. The multiscale method is applied to the potential equation in pore-scale simulation of a Proton Exchange Membrane Fuel Cell (PEMFC) catalyst layer, and validated with the conventional approach for pore-scale simulation. Results show that the multiscale scheme substantially reduces the computational cost without sacrificing accuracy.
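The decomposition idea can be illustrated in one dimension: a pore-scale field is split into a coarse, cell-averaged macroscopic part and a local fluctuation that the pore-scale solver then resolves (a minimal sketch under assumed names; it is not the authors' formulation of the potential equation):

    import numpy as np

    def split_macro_local(phi, n_coarse):
        """Decompose a pore-scale field into phi = Phi_macro + phi_local,
        where Phi_macro is piecewise-constant over n_coarse macroscopic cells."""
        phi = np.asarray(phi, dtype=float)
        cells = np.array_split(phi, n_coarse)
        macro = np.concatenate([np.full(c.shape, c.mean()) for c in cells])
        local = phi - macro
        return macro, local

Because the local fluctuation varies only within a macroscopic cell, it can be resolved more cheaply than the full field, which is, loosely, the intuition behind the reported cost reduction.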
Message From the Editor for Contributions to the 2012 Real Time Conference Issue of TNS
NASA Astrophysics Data System (ADS)
Schmeling, Sascha Marc
2013-10-01
The papers in this special issue were originally presented at the 18th IEEE-NPSS Real Time Conference (RT2012) on Computing Applications in Nuclear and Plasma Sciences, held in Berkeley, California, USA, in June 2012. These contributions come from a broad range of fields of application, including Astrophysics, Medical Imaging, Nuclear and Plasma Physics, Particle Accelerators, and Particle Physics Experiments.
Hardware accelerated high performance neutron transport computation based on AGENT methodology
NASA Astrophysics Data System (ADS)
Xiao, Shanjie
The spatial heterogeneity of next-generation Gen-IV nuclear reactor core designs brings challenges to neutron transport analysis. The Arbitrary Geometry Neutron Transport (AGENT) code is a three-dimensional neutron transport analysis code being developed at the Laboratory for Neutronics and Geometry Computation (NEGE) at Purdue University. It can accurately describe spatial heterogeneity in a hierarchical structure through the R-function solid modeler. The previous version of AGENT coupled the 2D transport MOC solver and the 1D diffusion NEM solver to solve the three-dimensional Boltzmann transport equation. In this research, the 2D/1D coupling methodology was expanded to couple two transport solvers, the radial 2D MOC solver and the axial 1D MOC solver, for better accuracy. The expansion was benchmarked with the widely applied C5G7 benchmark models and two fast breeder reactor models, and showed good agreement with the reference Monte Carlo results. In practice, accurate neutron transport analysis for a full reactor core is still time-consuming, which limits its application. Therefore, another part of this research focused on designing specialized hardware, based on reconfigurable computing techniques, to accelerate AGENT computations. This is the first time such an approach has been applied to reactor physics and neutron transport for reactor design. The most time-consuming part of the AGENT algorithm was identified, and the architecture of the AGENT acceleration system was designed based on this analysis. Through parallel computation on the specially designed, highly efficient architecture, the FPGA-based acceleration design achieves high performance at a much lower working frequency than CPUs. Design simulations show that the acceleration system would be able to speed up large-scale AGENT computations by about 20 times. The high-performance AGENT acceleration system will drastically shorten the computation time for 3D full-core neutron transport analysis, making the AGENT methodology unique and advantageous, and thus extending the applicability of neutron transport analysis in both industrial engineering and academic research.
Classification of physical activities based on body-segments coordination.
Fradet, Laetitia; Marin, Frederic
2016-09-01
Numerous innovations based on connected objects and physical activity (PA) monitoring have been proposed. However, recognition of PAs requires a robust algorithm and methodology. The current study presents an innovative approach for PA recognition. It is based on the heuristic definition of postures and the use of body-segment coordination obtained through external sensors. The first part of this study presents the methodology used to define the set of accelerations that best represents the particular body-segment coordination involved in the chosen PAs (here walking, running, and cycling). For that purpose, subjects of different ages and heterogeneous physical conditions walked, ran, cycled, and performed daily activities at different paces. From the 3D motion capture, vertical and horizontal accelerations of 8 anatomical landmarks representative of the body were computed. Then, the 680 combinations of up to 3 accelerations were compared to identify the set of accelerations that best discriminates the PAs in terms of body-segment coordination. The discrimination was based on the maximal Hausdorff distance obtained between the different sets of accelerations. The vertical accelerations of both knees provided the best discrimination of the PAs. The second step was a proof of concept, implementing the proposed algorithm to classify PAs of a new group of subjects. The originality of the proposed algorithm is the possibility of using the subject's own measurements as reference data. With the proposed algorithm, 94% of the trials were correctly classified. In conclusion, our study proposes a flexible and extendable methodology. At the current stage, the algorithm has been shown to be valid for heterogeneous subjects, which suggests that it could be deployed in clinical or health-related applications regardless of the subjects' physical abilities or characteristics.
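A minimal sketch of the Hausdorff-distance comparison used for the discrimination step might look as follows (signal layout, labels, and the nearest-reference decision rule are illustrative assumptions, not the published algorithm):

    import numpy as np

    def hausdorff(A, B):
        """Symmetric Hausdorff distance between two acceleration recordings;
        rows are time samples, columns the selected acceleration channels."""
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)   # pairwise distances
        return max(d.min(axis=1).max(), d.min(axis=0).max())

    def classify(trial, references):
        """Assign the activity whose subject-specific reference recording is closest."""
        return min(references, key=lambda label: hausdorff(trial, references[label]))

Here `references` would map activity labels (e.g. 'walking', 'running', 'cycling') to the subject's own reference recordings, reflecting the paper's use of subject-specific measures as reference data.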
NASA Technical Reports Server (NTRS)
Klein, K. E.; Backhausen, F.; Bruner, H.; Eichhorn, J.; Jovy, D.; Schotte, J.; Vogt, L.; Wegman, H. M.
1980-01-01
A group of 12 highly trained athletes and a group of 12 untrained students were subjected to passive changes of position on a tilt table and positive accelerations in a centrifuge. During a 20 min tilt, including two additional respiratory maneuvers, the number of faints and the average cardiovascular responses did not differ significantly between the groups. During the linear increase of acceleration, the average blackout level was almost identical in both groups. Statistically significant coefficients of product-moment correlation for various relations were obtained. The coefficient of multiple determination computed for the dependence of acceleration tolerance on heart-eye distance and systolic blood pressure at rest explains almost 50% of the variation in acceleration tolerance. The maximum oxygen uptake showed the expected significant correlation with the heart rate at rest, but not with the acceleration tolerance or with the cardiovascular responses to tilting.
Electromagnetic physics models for parallel computing architectures
Amadio, G.; Ananya, A.; Apostolakis, J.; ...
2016-11-21
The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Finally, the results of preliminary performance evaluation and physics validation are presented as well.
Feature Masking in Computer Game Promotes Visual Imagery
ERIC Educational Resources Information Center
Smith, Glenn Gordon; Morey, Jim; Tjoe, Edwin
2007-01-01
Can learning of mental imagery skills for visualizing shapes be accelerated with feature masking? Chemistry, physics, fine arts, military tactics, and laparoscopic surgery often depend on mentally visualizing shapes in their absence. Does working with "spatial feature-masks" (skeletal shapes, missing key identifying portions) encourage people to…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willert, Jeffrey; Taitano, William T.; Knoll, Dana
In this note we demonstrate that using Anderson Acceleration (AA) in place of a standard Picard iteration can not only increase the convergence rate but also make the iteration more robust for two transport applications. We also compare the convergence acceleration provided by AA to that provided by moment-based acceleration methods. Additionally, we demonstrate that those two acceleration methods can be used together in a nested fashion. We begin by describing the AA algorithm and then describe two application problems, one from neutronics and one from plasma physics, on which we apply AA. We provide computational results which highlight the benefits of using AA, namely that we can compute solutions using fewer function evaluations, take larger time-steps, and achieve a more robust iteration.
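For reference, a compact version of the Anderson Acceleration update for a fixed-point map x = g(x) is sketched below (a generic textbook-style formulation with mixing parameter beta = 1; the variable names and depth are illustrative, not the note's implementation):

    import numpy as np

    def anderson(g, x0, m=5, tol=1e-10, maxit=200):
        """Anderson Acceleration of depth m for the fixed point x = g(x).
        With no history (the first step) this reduces to the plain Picard update."""
        x = np.asarray(x0, dtype=float)
        G, F = [], []                                   # histories of g(x) and residuals
        for _ in range(maxit):
            gx = g(x)
            f = gx - x                                  # Picard residual
            if np.linalg.norm(f) < tol:
                return gx
            G.append(gx); F.append(f)
            G, F = G[-(m + 1):], F[-(m + 1):]
            if len(F) < 2:
                x = gx                                  # plain Picard step
                continue
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma                         # mixed update over the history window
        return x

The nested use described in the note would correspond, roughly, to wrapping a moment-based acceleration inside the evaluation of g while AA drives the outer fixed-point iteration.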
Physics through the 1990s: Gravitation, cosmology and cosmic-ray physics
NASA Technical Reports Server (NTRS)
1986-01-01
The volume contains recommendations for space- and ground-based programs in gravitational physics, cosmology, and cosmic-ray physics. The section on gravitation examines current and planned experimental tests of general relativity; the theory behind, and search for, gravitational waves, including sensitive laser-interferometric tests and other observations; and advances in gravitation theory (for example, incorporating quantum effects). The section on cosmology deals with the big-bang model, the standard model from elementary-particle theory, and the inflationary model of the Universe. Computational needs are presented for both gravitation and cosmology. Finally, cosmic-ray physics theory (nucleosynthesis, acceleration models, high-energy physics) and experiment (ground and spaceborne detectors) are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedman, A.; Barnard, J. J.; Cohen, R. H.
The Heavy Ion Fusion Science Virtual National Laboratory (a collaboration of LBNL, LLNL, and PPPL) is using intense ion beams to heat thin foils to the "warm dense matter" regime at ≲1 eV, and is developing capabilities for studying target physics relevant to ion-driven inertial fusion energy. The need for rapid target heating led to the development of plasma-neutralized pulse compression, with current amplification factors exceeding 50 now routine on the Neutralized Drift Compression Experiment (NDCX). Construction of an improved platform, NDCX-II, has begun at LBNL with planned completion in 2012. Using refurbished induction cells from the Advanced Test Accelerator at LLNL, NDCX-II will compress a ~500 ns pulse of Li+ ions to ~1 ns while accelerating it to 3-4 MeV over ~15 m. Strong space charge forces are incorporated into the machine design at a fundamental level. We are using analysis, an interactive 1D PIC code (ASP) with optimizing capabilities and centroid tracking, and multi-dimensional Warp code PIC simulations to develop the NDCX-II accelerator. This paper describes the computational models employed and the resulting physics design for the accelerator.
Computational electronics and electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, C C
The Computational Electronics and Electromagnetics thrust area serves as the focal point for Engineering R&D activities in developing computer-based design and analysis tools. Representative applications include design of particle accelerator cells and beamline components; design of transmission line components; engineering analysis and design of high-power (optical and microwave) components; photonics and optoelectronics circuit design; electromagnetic susceptibility analysis; and antenna synthesis. The FY-97 effort focuses on development and validation of (1) accelerator design codes; (2) 3-D massively parallel, time-dependent EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; and (5) development of beam control algorithms coupled to beam transport physics codes. These efforts are in association with technology development in the power conversion, nondestructive evaluation, and microtechnology areas. The efforts complement technology development in Lawrence Livermore National Laboratory programs.
Mock Data Challenge for the MPD/NICA Experiment on the HybriLIT Cluster
NASA Astrophysics Data System (ADS)
Gertsenberger, Konstantin; Rogachevsky, Oleg
2018-02-01
Simulation of data processing before receiving the first experimental data is an important issue in high-energy physics experiments. This article presents the current Event Data Model and the Mock Data Challenge for the MPD experiment at the NICA accelerator complex, which uses ongoing simulation studies to exercise and stress-test the distributed computing infrastructure and experiment software in the full production environment, from simulated data through to physics analysis.
Symplectic multi-particle tracking on GPUs
NASA Astrophysics Data System (ADS)
Liu, Zhicong; Qiang, Ji
2018-05-01
A symplectic multi-particle tracking model is implemented on Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) language. The symplectic tracking model can preserve phase-space structure and reduce non-physical effects in long-term simulation, which is important for beam property evaluation in particle accelerators. Though this model is computationally expensive, it is very suitable for parallelization and can be accelerated significantly by using GPUs. In this paper, we optimized the implementation of the symplectic tracking model on both a single GPU and multiple GPUs. Using a single GPU processor, the code achieves a factor of 2-10 speedup for a range of problem sizes compared with the time on a single state-of-the-art Central Processing Unit (CPU) node with similar power consumption and semiconductor technology. It also shows good scalability on a multi-GPU cluster at the Oak Ridge Leadership Computing Facility. In an application to beam dynamics simulation, the GPU implementation saves more than a factor of two in total computing time in comparison to the CPU implementation.
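The basic building block of such a symplectic tracking model is a split-operator map, for example a second-order drift-kick-drift step (a schematic, dimensionless sketch for a single transverse plane; it is not the paper's actual GPU kernel):

    import numpy as np

    def drift_kick_drift(x, px, length, kick):
        """One symplectic drift-kick-drift step for an array of particles.
        `kick(x)` returns the momentum change from the element at positions x."""
        x = x + 0.5 * length * px          # half drift
        px = px + kick(x)                  # thin-lens kick
        x = x + 0.5 * length * px          # half drift
        return x, px

    # Example: a linear (quadrupole-like) kick applied to one million particles.
    x = np.random.normal(0.0, 1e-3, 1_000_000)
    px = np.zeros_like(x)
    x, px = drift_kick_drift(x, px, length=1.0, kick=lambda x: -0.1 * x)

Because every particle is updated independently, the same map translates almost directly into a CUDA kernel, which is why this class of tracking model parallelizes so well on GPUs.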
Summary Report of Working Group 2: Computation
NASA Astrophysics Data System (ADS)
Stoltz, P. H.; Tsung, R. S.
2009-01-01
The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high-gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite difference and finite element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time by orders of magnitude, and (ii) new hardware architectures, like graphics processing units and Cell processors, that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite element particle-in-cell (PIC) code, order-of-magnitude speedups, and details of porting the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one-trillion-particle simulations, a sustained performance of 0.3 petaflops, and an eight-times speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV laser wakefield accelerator stages including external injection, and new developments in electromagnetic simulations of electron guns using finite difference and finite element approaches.
Computational Methods Development at Ames
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Smith, Charles A. (Technical Monitor)
1998-01-01
This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide appropriate-fidelity computational analysis and design capabilities. Current thrusts of the Ames research include: 1) methods to enhance and accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real-time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and the study of flow physics. The presentation gives historical precedents for the above research and speculates on its future course.
High-Speed Video Analysis in a Conceptual Physics Class
NASA Astrophysics Data System (ADS)
Desbien, Dwain M.
2011-09-01
The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to be easily captured at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper discusses using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular, this involves the use of model rockets to determine the acceleration during the boost period right at launch and compare it to a simple model of the expected acceleration.
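A minimal version of the analysis described, extracting the boost-phase acceleration from frame-by-frame positions, could look like this (the frame rate, the positions, and the central-difference scheme are illustrative assumptions, not the paper's worked example):

    import numpy as np

    def accel_from_frames(y_m, fps):
        """Estimate acceleration from successive vertical positions (in meters)
        digitized from high-frame-rate video, via a central second difference."""
        y_m = np.asarray(y_m, dtype=float)
        dt = 1.0 / fps
        return (y_m[2:] - 2.0 * y_m[1:-1] + y_m[:-2]) / dt**2

    # Example: hypothetical positions from 240 fps video of the first frames after launch.
    a = accel_from_frames([0.000, 0.002, 0.008, 0.018, 0.032], fps=240)

The resulting acceleration values can then be compared frame by frame against a simple thrust-minus-gravity model of the boost phase.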
Computation of linear acceleration through an internal model in the macaque cerebellum
Laurens, Jean; Meng, Hui; Angelaki, Dora E.
2013-01-01
A combination of theory and behavioral findings has supported a role for internal models in the resolution of sensory ambiguities and sensorimotor processing. Although the cerebellum has been proposed as a candidate for implementation of internal models, concrete evidence from neural responses is lacking. Here we exploit unnatural motion stimuli, which induce incorrect self-motion perception and eye movements, to explore the neural correlates of an internal model proposed to compensate for Einstein's equivalence principle and generate neural estimates of linear acceleration and gravity. We show that caudal cerebellar vermis Purkinje cells and cerebellar nuclei neurons selective for actual linear acceleration also encode erroneous linear acceleration, as expected from the internal model hypothesis, even when no actual linear acceleration occurs. These findings provide strong evidence that the cerebellum might be involved in the implementation of internal models that mimic physical principles to interpret sensory signals, as previously hypothesized by theorists.
NASA Astrophysics Data System (ADS)
Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri
2015-04-01
Improving the resolution of tomographic images is crucial to answering important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, in which seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy, and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities to further improve the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing, together with advances in multi-core central processing units (CPUs), can greatly accelerate scientific applications. There are mainly two choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted mainly for AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and to generate optimized source code for both CUDA and OpenCL, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware configurations.
Solving global shallow water equations on heterogeneous supercomputers
Fu, Haohuan; Gan, Lin; Yang, Chao; Xue, Wei; Wang, Lanning; Wang, Xinliang; Huang, Xiaomeng; Yang, Guangwen
2017-01-01
The scientific demand for more accurate modeling of the climate system calls for more computing power to support higher resolutions, inclusion of more component models, more complicated physics schemes, and larger ensembles. As the recent improvements in computing power mostly come from the increasing number of nodes in a system and the integration of heterogeneous accelerators, how to scale the computing problems onto more nodes and various kinds of accelerators has become a challenge for model development. This paper describes our efforts on developing a highly scalable framework for performing global atmospheric modeling on heterogeneous supercomputers equipped with various accelerators, such as GPU (Graphic Processing Unit), MIC (Many Integrated Core), and FPGA (Field Programmable Gate Array) cards. We propose a generalized partition scheme of the problem domain, so as to keep a balanced utilization of both CPU resources and accelerator resources. With optimizations on both computing and memory access patterns, we manage to achieve around 8 to 20 times speedup when comparing one hybrid GPU or MIC node with one CPU node with 12 cores. Using customized FPGA-based data-flow engines, we see the potential to gain another 5 to 8 times improvement in performance. On heterogeneous supercomputers such as Tianhe-1A and Tianhe-2, our framework is capable of achieving ideally linear scaling efficiency, and sustained double-precision performances of 581 Tflops on Tianhe-1A (using 3750 nodes) and 3.74 Pflops on Tianhe-2 (using 8644 nodes). Our study also provides an evaluation of the programming paradigm of various accelerator architectures (GPU, MIC, FPGA) for performing global atmospheric simulation, to form a picture of both the potential performance benefits and the programming efforts involved.
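The generalized partition scheme can be caricatured as dividing the domain's columns among devices in proportion to their measured throughput (a hedged sketch; the device names, throughput numbers, and column-block layout are illustrative assumptions, not the authors' scheme):

    def partition_columns(n_columns, throughput):
        """Assign contiguous column ranges to devices in proportion to throughput,
        e.g. throughput = {'cpu': 1.0, 'gpu': 15.0}. Returns {device: (start, stop)}."""
        total = sum(throughput.values())
        ranges, start = {}, 0
        devices = list(throughput.items())
        for i, (dev, t) in enumerate(devices):
            stop = n_columns if i == len(devices) - 1 else start + round(n_columns * t / total)
            ranges[dev] = (start, stop)      # last device absorbs any rounding remainder
            start = stop
        return ranges

    # Example: split 3000 latitude columns between one 12-core CPU and one GPU.
    print(partition_columns(3000, {'cpu': 1.0, 'gpu': 15.0}))

In practice the throughput ratios would be measured per node and per kernel, so that neither the CPU cores nor the attached accelerators sit idle.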
plasmaFoam: An OpenFOAM framework for computational plasma physics and chemistry
NASA Astrophysics Data System (ADS)
Venkattraman, Ayyaswamy; Verma, Abhishek Kumar
2016-09-01
As emphasized in the 2012 Roadmap for low temperature plasmas (LTP), scientific computing has emerged as an essential tool for the investigation and prediction of the fundamental physical and chemical processes associated with these systems. While several in-house and commercial codes exist, each with its own advantages and disadvantages, a common framework that can be developed by researchers from all over the world will likely accelerate the impact of computational studies on advances in low-temperature plasma physics and chemistry. In this regard, we present a finite volume computational toolbox to perform high-fidelity simulations of LTP systems. This framework, primarily based on the OpenFOAM solver suite, allows us to enhance our understanding of multiscale plasma phenomena by performing massively parallel, three-dimensional simulations on unstructured meshes using well-established high performance computing tools that are widely used in the computational fluid dynamics community. In this talk, we will present preliminary results obtained using the OpenFOAM-based solver suite with benchmark three-dimensional simulations of microplasma devices including both dielectric and plasma regions. We will also discuss the future outlook for the solver suite.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef
Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling feasible in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale sequential data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.
NASA Astrophysics Data System (ADS)
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special-purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large-scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general-purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH, a framework for reducing the complexity of programming heterogeneous computer systems; 2) geophysical inversion routines which can be used to characterize physical systems; and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
Comparison Between THOR Anthropomorphic Test Device and THOR Finite Element Model
NASA Technical Reports Server (NTRS)
Moore, Erik
2014-01-01
Extended time spent in reduced gravity can cause physiologic deconditioning of astronauts, reducing their ability to sustain excessive forces during dynamic phases of spaceflight such as landing. To make certain that the crew is safe during these phases, NASA must take caution when determining what types of landings are acceptable based on the accelerations applied to the astronaut. In order to test acceptable landings, various trials have been run accelerating humans, cadavers, and Anthropomorphic Test Devices (ATDs), or crash test dummies, at different acceleration and velocity rates on a sled testing platform. Using these tests, injury risk functions will be created and metrics will be developed for the likelihood of injuries due to the acceleration. A finite element model (FEM) of the Test Device for Human Occupant Restraint (THOR) ATD has been developed that can simulate these test trials and others (Putnam, 2014), reducing the need for human and ATD testing. Additionally, this will give researchers a more effective way to test the accelerations and orientations encountered during spaceflight landings during the design of new space vehicles for crewed missions. However, the FEM has not yet been proven and must be validated by comparing the forces, accelerations, and other measurements of all parts of the body between the physical tests already completed and computer-simulated trials. The purpose of my research was to validate the FEM for the ATD using previously run trials with the physical THOR ATD.
Fermilab computing at the Intensity Frontier
Group, Craig; Fuess, S.; Gutsche, O.; ...
2015-12-23
The Intensity Frontier refers to a diverse set of particle physics experiments using high-intensity beams. In this paper I will focus the discussion on the computing requirements and solutions of a set of neutrino and muon experiments in progress or planned to take place at the Fermi National Accelerator Laboratory located near Chicago, Illinois. The experiments face unique challenges, but also have overlapping computational needs. In principle, by exploiting the commonality and utilizing centralized computing tools and resources, requirements can be satisfied efficiently and scientists of individual experiments can focus more on the science and less on the development of tools and infrastructure.
EuCARD2: enhanced accelerator research and development in Europe
NASA Astrophysics Data System (ADS)
Romaniuk, Ryszard S.
2013-10-01
Accelerator science and technology is one of the key enablers of developments in particle physics, photon physics, and applications in medicine and industry. EuCARD2 is a European research project to be carried out during 2013-2017 within the EC FP7 framework. The project concerns the development and coordination of European accelerator research and development. It is particularly important to a number of domestic laboratories because of plans to build large accelerator infrastructure in Poland. Large accelerator infrastructure of fundamental and applied research character stimulates around it the development and industrial as well as biomedical applications of advanced accelerators, materials research and engineering, cryo-technology, mechatronics, robotics, and in particular electronics, such as networked measurement and control systems, sensors, computer systems, and automation and control systems. The paper presents a digest of the research results and assumptions in the domain of accelerator science and technology in Europe, shown during the final fourth annual meeting of EuCARD (European Coordination of Accelerator R&D) and the kick-off meeting of EuCARD2 (Enhanced European Coordination for Accelerator Research and Development). A few basic groups of accelerator system components are debated, including measurement and control networks of large geometrical extent, multichannel systems for acquisition of large amounts of metrological data, precision photonic networks for distribution of reference time, frequency and phase, high-field magnets, superconducting cavities, novel beam collimators, etc. The paper is based on the following materials: Internet and Intranet documents related to EuCARD2, the Description of Work FP7 EuCARD-2 DoW-312453 (2013-02-13), and discussions and preparatory materials worked on by the EuCARD2 initiators.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kandler A; Santhanagopalan, Shriram; Yang, Chuanbo
Computer models are helping to accelerate the design and validation of next generation batteries and provide valuable insights not possible through experimental testing alone. Validated 3-D physics-based models exist for predicting electrochemical performance, thermal and mechanical response of cells and packs under normal and abuse scenarios. The talk describes present efforts to make the models better suited for engineering design, including improving their computation speed, developing faster processes for model parameter identification including under aging, and predicting the performance of a proposed electrode material recipe a priori using microstructure models.
Personal supercomputing by using transputer and Intel 80860 in plasma engineering
NASA Astrophysics Data System (ADS)
Ido, S.; Aoki, K.; Ishine, M.; Kubota, M.
1992-09-01
A transputer (T800) or a 64-bit RISC Intel 80860 (i860) added to a personal computer can be used as an accelerator. When 32-bit T800s in a parallel system or 64-bit i860s are used, scientific calculations are carried out several tens of times faster than on commonly used 32-bit personal computers or UNIX workstations. Benchmark tests and examples of physical simulations using T800s and the i860 are reported.
[Experimental nuclear physics]. Annual report 1988
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1988-05-01
This is the May 1988 annual report of the Nuclear Physics Laboratory of the University of Washington. It contains chapters on astrophysics, giant resonances, heavy ion induced reactions, fundamental symmetries, polarization in nuclear reactions, medium energy reactions, accelerator mass spectrometry (AMS), research by outside users, Van de Graaff and ion sources, the Laboratory's booster linac project work, instrumentation, and computer systems. An appendix lists Laboratory personnel, Ph.D. degrees granted in the 1987-88 academic year, and publications. Refs., 27 figs., 4 tabs.
[Experimental nuclear physics]. Annual report 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1989-04-01
This is the April 1989 annual report of the Nuclear Physics Laboratory of the University of Washington. It contains chapters on astrophysics, giant resonances, heavy ion induced reactions, fundamental symmetries, polarization in nuclear reactions, medium energy reactions, accelerator mass spectrometry (AMS), research by outside users, Van de Graaff and ion sources, computer systems, instrumentation, and the Laboratory's booster linac work. An appendix lists Laboratory personnel, Ph.D. degrees granted in the 1988-1989 academic year, and publications. Refs., 23 figs., 3 tabs.
NASA Astrophysics Data System (ADS)
Uzdensky, Dmitri
Relativistic astrophysical plasma environments routinely produce intense high-energy emission, which is often observed to be nonthermal and rapidly flaring. The recently discovered gamma-ray (> 100 MeV) flares in the Crab Pulsar Wind Nebula (PWN) provide a quintessential illustration of this, but other notable examples include relativistic active galactic nuclei (AGN) jets, including blazars, and Gamma-ray Bursts (GRBs). Understanding the processes responsible for the very efficient and rapid relativistic particle acceleration and subsequent emission that occurs in these sources poses a strong challenge to modern high-energy astrophysics, especially in light of the necessity to overcome radiation reaction during the acceleration process. Magnetic reconnection and collisionless shocks have been invoked as possible mechanisms. However, the inferred extreme particle acceleration requires the presence of coherent electric-field structures. How such large-scale accelerating structures (such as reconnecting current sheets) can spontaneously arise in turbulent astrophysical environments still remains a mystery. The proposed project will conduct a first-principles computational and theoretical study of kinetic turbulence in relativistic collisionless plasmas with a special focus on nonthermal particle acceleration and radiation emission. The main computational tool employed in this study will be the relativistic radiative particle-in-cell (PIC) code Zeltron, developed by the team members at the Univ. of Colorado. This code has a unique capability to self-consistently include the synchrotron and inverse-Compton radiation reaction force on the relativistic particles, while simultaneously computing the resulting observable radiative signatures. This proposal envisions performing massively parallel, large-scale three-dimensional simulations of driven and decaying kinetic turbulence in physical regimes relevant to real astrophysical systems (such as the Crab PWN), including the radiation reaction effects. In addition to measuring the general fluid-level statistical properties of kinetic turbulence (e.g., the turbulent spectrum in the inertial and sub-inertial range), as well as the overall energy dissipation and particle acceleration, the proposed study will also investigate their intermittency and time variability, resulting in direction- and time-resolved emitted photon spectra and direction- and energy-resolved light curves, which can then be compared with observations. To gain deeper physical insight into the intermittent particle acceleration processes in turbulent astrophysical environments, the project will also identify and analyze statistically the current sheets, shocks, and other relevant localized particle-acceleration structures found in the simulations. In particular, it will assess whether relativistic kinetic turbulence in PWN can self-consistently generate such structures that are long and strong enough to accelerate large numbers of particles to the PeV energies required to explain the Crab gamma-ray flares, and where and under what conditions such acceleration can occur. The results of this research will also advance our understanding of the origin of ultra-rapid TeV flares in blazar jets and will have important implications for GRB prompt emission, as well as AGN radio-lobes and radiatively-inefficient accretion flows, such as the flow onto the supermassive black hole at our Galactic Center.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calhoon, E.C.; Starring, P.W. eds.
1959-08-01
Lectures given at the Ernest O. Lawrence Radiation Laboratory on physics, biophysics, and chemistry for high school science teachers are presented. Topics covered include a mathematics review, atomic physics, nuclear physics, solid-state physics, elementary particles, antiparticles, design of experiments, high-energy particle accelerators, survey of particle detectors, emulsion as a particle detector, counters used in high-energy physics, bubble chambers, computer programming, chromatography, the transuranium elements, health physics, photosynthesis, the chemistry and physics of viruses, the biology of viruses, lipoproteins and heart disease, origin and evolution of the solar system, the role of space satellites in gathering astronomical data, and radiation and life in space. (M.C.G.)
Controlling flexible robot arms using a high speed dynamics process
NASA Technical Reports Server (NTRS)
Jain, Abhinandan (Inventor); Rodriguez, Guillermo (Inventor)
1992-01-01
Described here is a robot controller for a flexible manipulator arm having plural bodies connected at respective movable hinges, and flexible in plural deformation modes. It is operated by computing articulated body quantities for each of the bodies from the respective modal spatial influence vectors, obtaining specified body forces for each of the bodies, and computing modal deformation accelerations of the nodes and hinge accelerations of the hinges from the specified body forces, from the articulated body quantities and from the modal spatial influence vectors. In one embodiment of the invention, the controller further operates by comparing the accelerations thus computed to desired manipulator motion to determine a motion discrepancy, and correcting the specified body forces so as to reduce the motion discrepancy. The manipulator bodies and hinges are characterized by respective vectors of deformation and hinge configuration variables. Computing modal deformation accelerations and hinge accelerations is carried out for each of the bodies, beginning with the outermost body, by computing a residual body force from a residual body force of a previous body and computing a resultant hinge acceleration from the body force, and then, for each one of the bodies beginning with the innermost body, computing a modal body acceleration from a modal body acceleration of a previous body, and computing a modal deformation acceleration and hinge acceleration from the resultant hinge acceleration and from the modal body acceleration.
Methods of geometrical integration in accelerator physics
NASA Astrophysics Data System (ADS)
Andrianov, S. N.
2016-12-01
In the paper we consider a method of geometric integration for the long-term evolution of a particle beam in cyclic accelerators, based on the matrix representation of the particle-evolution operator. This method allows us to calculate the corresponding beam evolution in terms of two-dimensional matrices, including nonlinear effects. The geometric-integration approach introduces into the computational algorithms the amendments necessary for preserving the qualitative properties of maps presented in the form of truncated series generated by the evolution operator. This formalism extends to both polarized and intense beams. Examples of practical applications are described.
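As a minimal illustration of the matrix representation of the evolution operator mentioned above (linear part only, not the paper's truncated-series formalism), a transverse phase-space vector can be tracked through many turns by repeated application of a one-turn matrix; the tune and beta value below are arbitrary illustrative choices.

```python
import numpy as np

def one_turn_matrix(phase_advance, beta):
    """Linear one-turn map for a single transverse plane (alpha = 0 assumed)."""
    c, s = np.cos(phase_advance), np.sin(phase_advance)
    return np.array([[c, beta * s],
                     [-s / beta, c]])

# Track a particle (x, x') for many turns by repeated application of the map.
M = one_turn_matrix(phase_advance=2 * np.pi * 0.31, beta=10.0)  # hypothetical tune and beta
state = np.array([1e-3, 0.0])          # initial offset of 1 mm, zero angle
trajectory = [state]
for _ in range(1000):
    state = M @ state
    trajectory.append(state)
```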
NASA Astrophysics Data System (ADS)
Siegel, J.; Siegel, Edward Carl-Ludwig
2011-03-01
Cook-Levin computational-"complexity"(C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS (SON of "TRIZ"): Category-Semantics(C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser [Intro. Theory Computation (1997)] algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally(OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models: Turing-machine, finite-state-models/automata, are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!
Overview of the SHIELDS Project at LANL
NASA Astrophysics Data System (ADS)
Jordanova, V.; Delzanno, G. L.; Henderson, M. G.; Godinez, H. C.; Jeffery, C. A.; Lawrence, E. C.; Meierbachtol, C.; Moulton, D.; Vernon, L.; Woodroffe, J. R.; Toth, G.; Welling, D. T.; Yu, Y.; Birn, J.; Thomsen, M. F.; Borovsky, J.; Denton, M.; Albert, J.; Horne, R. B.; Lemon, C. L.; Markidis, S.; Young, S. L.
2015-12-01
The near-Earth space environment is a highly dynamic and coupled system through a complex set of physical processes over a large range of scales, which responds nonlinearly to driving by the time-varying solar wind. Predicting variations in this environment that can affect technologies in space and on Earth, i.e. "space weather", remains a big space physics challenge. We present a recently funded project through the Los Alamos National Laboratory (LANL) Directed Research and Development (LDRD) program that is developing a new capability to understand, model, and predict Space Hazards Induced near Earth by Large Dynamic Storms, the SHIELDS framework. The project goals are to specify the dynamics of the hot (keV) particles (the seed population for the radiation belts) on both macro- and micro-scale, including important physics of rapid particle injection and acceleration associated with magnetospheric storms/substorms and plasma waves. This challenging problem is addressed using a team of world-class experts in the fields of space science and computational plasma physics and state-of-the-art models and computational facilities. New data assimilation techniques employing data from LANL instruments on the Van Allen Probes and geosynchronous satellites are developed in addition to physics-based models. This research will provide a framework for understanding of key radiation belt drivers that may accelerate particles to relativistic energies and lead to spacecraft damage and failure. The ability to reliably distinguish between various modes of failure is critically important in anomaly resolution and forensics. SHIELDS will enhance our capability to accurately specify and predict the near-Earth space environment where operational satellites reside.
Physics-Based Fragment Acceleration Modeling for Pressurized Tank Burst Risk Assessments
NASA Technical Reports Server (NTRS)
Manning, Ted A.; Lawrence, Scott L.
2014-01-01
As part of comprehensive efforts to develop physics-based risk assessment techniques for space systems at NASA, coupled computational fluid and rigid body dynamic simulations were carried out to investigate the flow mechanisms that accelerate tank fragments in bursting pressurized vessels. Simulations of several configurations were compared to analyses based on the industry-standard Baker explosion model, and were used to formulate an improved version of the model. The standard model, which neglects an external fluid, was found to agree best with simulation results only in configurations where the internal-to-external pressure ratio is very high and fragment curvature is small. The improved model introduces terms that accommodate an external fluid and better account for variations based on circumferential fragment count. Physics-based analysis was critical in increasing the model's range of applicability. The improved tank burst model can be used to produce more accurate risk assessments of space vehicle failure modes that involve high-speed debris, such as exploding propellant tanks and bursting rocket engines.
Fluid Physics Under a Stochastic Acceleration Field
NASA Technical Reports Server (NTRS)
Vinals, Jorge
2001-01-01
The research summarized in this report has involved a combined theoretical and computational study of fluid flow that results from the random acceleration environment present onboard space orbiters, also known as g-jitter. We have focused on a statistical description of the observed g-jitter, on the flows that such an acceleration field can induce in a number of experimental configurations of interest, and on extending previously developed methodology to boundary layer flows. Narrow band noise has been shown to describe many of the features of acceleration data collected during space missions. The scale of baroclinically induced flows when the driving acceleration is random is not given by the Rayleigh number. Spatially uniform g-jitter induces additional hydrodynamic forces among suspended particles in incompressible fluids. Stochastic modulation of the control parameter shifts the location of the onset of an oscillatory instability. Random vibration of solid boundaries leads to separation of boundary layers. Steady streaming ahead of a modulated solid-melt interface enhances solute transport, and modifies the stability boundaries of a planar front.
Anderson acceleration and application to the three-temperature energy equations
NASA Astrophysics Data System (ADS)
An, Hengbin; Jia, Xiaowei; Walker, Homer F.
2017-10-01
The Anderson acceleration method is an algorithm for accelerating the convergence of fixed-point iterations, including the Picard method. Anderson acceleration was first proposed in 1965 and, for some years, has been used successfully to accelerate the convergence of self-consistent field iterations in electronic-structure computations. Recently, the method has attracted growing attention in other application areas and among numerical analysts. Compared with a Newton-like method, an advantage of Anderson acceleration is that there is no need to form the Jacobian matrix, so the method is easy to implement. In this paper, an Anderson-accelerated Picard method is employed to solve the three-temperature energy equations, which are a type of strongly nonlinear radiation-diffusion equation. Two strategies are used to improve the robustness of the Anderson acceleration method. One strategy is to adjust the iterates when necessary to satisfy the physical constraint. Another strategy is to monitor and, if necessary, reduce the matrix condition number of the least-squares problem in the Anderson-acceleration implementation so that numerical stability can be guaranteed. Numerical results show that the Anderson-accelerated Picard method can solve the three-temperature energy equations efficiently. Compared with the Picard method without acceleration, Anderson acceleration can reduce the number of iterations by at least half. A comparison between a Jacobian-free Newton-Krylov method, the Picard method, and the Anderson-accelerated Picard method is conducted in this paper.
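A minimal sketch of Anderson acceleration applied to a scalar fixed-point problem may help make the idea concrete. This is the standard difference-based least-squares form of the method, not the authors' three-temperature solver, and the cosine test problem is illustrative only.

```python
import numpy as np

def anderson(g, x0, m=5, tol=1e-10, maxit=100):
    """Minimal Anderson acceleration of the fixed-point iteration x = g(x).

    Keeps the last m iterates; the new iterate combines stored g-values using
    coefficients from a small least-squares problem on residual differences.
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    G_hist, F_hist = [], []                 # g(x_k) and residuals f_k = g(x_k) - x_k
    for _ in range(maxit):
        gx = np.atleast_1d(g(x))
        fx = gx - x
        G_hist.append(gx); F_hist.append(fx)
        G_hist, F_hist = G_hist[-(m + 1):], F_hist[-(m + 1):]
        if np.linalg.norm(fx) < tol:
            return gx
        if len(F_hist) == 1:
            x = gx                          # plain Picard step on the first iteration
            continue
        dF = np.diff(np.array(F_hist), axis=0).T   # columns: residual differences
        dG = np.diff(np.array(G_hist), axis=0).T   # columns: g-value differences
        gamma, *_ = np.linalg.lstsq(dF, fx, rcond=None)
        x = gx - dG @ gamma
    return x

# Example: accelerate the Picard-like iteration x = cos(x); converges to ~0.739085.
print(anderson(np.cos, 1.0))
```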
Deploying electromagnetic particle-in-cell (EM-PIC) codes on Xeon Phi accelerators boards
NASA Astrophysics Data System (ADS)
Fonseca, Ricardo
2014-10-01
The complexity of the phenomena involved in several relevant plasma physics scenarios, where highly nonlinear and kinetic processes dominate, makes purely theoretical descriptions impossible. Further understanding of these scenarios requires detailed numerical modeling, but fully relativistic particle-in-cell codes such as OSIRIS are computationally intensive. The quest towards Exaflop computer systems has led to the development of HPC systems based on add-on accelerator cards, such as GPGPUs and, more recently, the Xeon Phi accelerators that power the current number 1 system in the world. These cards, also referred to as the Intel Many Integrated Core Architecture (MIC), offer peak theoretical performances of >1 TFlop/s for general purpose calculations in a single board, and are receiving significant attention as an attractive alternative to CPUs for plasma modeling. In this work we report on our efforts towards the deployment of an EM-PIC code on a Xeon Phi architecture system. We will focus on the parallelization and vectorization strategies followed, and present a detailed evaluation of code performance in comparison with the CPU code.
Theoretical and Computational Investigation of High-Brightness Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chiping
Theoretical and computational investigations of adiabatic thermal beams have been carried out in parameter regimes relevant to the development of advanced high-brightness, high-power accelerators for high-energy physics research and for various applications such as light sources. Most accelerator applications require high-brightness beams. This is true for high-energy accelerators such as linear colliders. It is also true for energy recovery linacs (ERLs) and free electron lasers (FELs) such as x-ray free electron lasers (XFELs). The breakthroughs and highlights in our research in the period from February 1, 2013 to November 30, 2013 were: a) completion of a preliminary theoretical and computational study of adiabatic thermal Child-Langmuir flow (Mok, 2013); and b) presentation of an invited paper entitled "Adiabatic Thermal Beams in a Periodic Focusing Field" at the Space Charge 2013 Workshop, CERN, April 16-19, 2013 (Chen, 2013). In this report, an introductory background for the research project is provided. Basic theory of adiabatic thermal Child-Langmuir flow is reviewed. Results of simulation studies of adiabatic thermal Child-Langmuir flows are discussed.
Advanced Design Concepts for Dense Plasma Focus Devices at LLNL
NASA Astrophysics Data System (ADS)
Povilus, Alexander; Podpaly, Yuri; Cooper, Christopher; Shaw, Brian; Chapman, Steve; Mitrani, James; Anderson, Michael; Pearson, Aric; Anaya, Enrique; Koh, Ed; Falabella, Steve; Link, Tony; Schmidt, Andrea
2017-10-01
The dense plasma focus (DPF) is a z-pinch device where a plasma sheath is accelerated down a coaxial railgun and ends in a radial implosion, pinch phase. During the pinch phase, the plasma generates intense, transient electric fields through physical mechanisms, similar to beam instabilities, that can accelerate ions in the plasma sheath to MeV-scale energies on millimeter length scales. Using kinetic modeling techniques developed at LLNL, we have gained insight into the formation of these accelerating fields and are using these observations to optimize the behavior of the generated ion beam for producing neutrons via beam-target interactions for kilojoule to megajoule-scale devices. Using a set of DPF's, both in operation and in development at LLNL, we have explored critical aspects of these devices, including plasma sheath formation behavior, power delivery to the plasma, and instability seeding during the implosion in order to improve the absolute yield and stability of the device. Prepared by LLNL under Contract DE-AC52-07NA27344. Computing support for this work came from the LLNL Institutional Computing Grand Challenge program.
Physical Scaffolding Accelerates the Evolution of Robot Behavior.
Buckingham, David; Bongard, Josh
2017-01-01
In some evolutionary robotics experiments, evolved robots are transferred from simulation to reality, while sensor/motor data flows back from reality to improve the next transferral. We envision a generalization of this approach: a simulation-to-reality pipeline. In this pipeline, increasingly embodied agents flow up through a sequence of increasingly physically realistic simulators, while data flows back down to improve the next transferral between neighboring simulators; physical reality is the last link in this chain. As a first proof of concept, we introduce a two-link chain: a fast yet low-fidelity (lo-fi) simulator hosts minimally embodied agents, which gradually evolve controllers and morphologies to colonize a slow yet high-fidelity (hi-fi) simulator. The agents are thus physically scaffolded. We show here that, given the same computational budget, these physically scaffolded robots reach higher performance in the hi-fi simulator than do robots that only evolve in the hi-fi simulator, but only for a sufficiently difficult task. These results suggest that a simulation-to-reality pipeline may strike a good balance between accelerating evolution in simulation while anchoring the results in reality, free the investigator from having to prespecify the robot's morphology, and pave the way to scalable, automated, robot-generating systems.
Toward GPGPU accelerated human electromechanical cardiac simulations
Vigueras, Guillermo; Roy, Ishani; Cookson, Andrew; Lee, Jack; Smith, Nicolas; Nordsletten, David
2014-01-01
In this paper, we look at the acceleration of weakly coupled electromechanics using the graphics processing unit (GPU). Specifically, we port to the GPU a number of components of Heart, a CPU-based finite element code developed for simulating multi-physics problems. On the basis of a criterion of computational cost, we implemented on the GPU the ODE and PDE solution steps for the electrophysiology problem and the Jacobian and residual evaluation for the mechanics problem. Performance of the GPU implementation is then compared with single-core CPU (SC) execution as well as multi-core CPU (MC) computations with equivalent theoretical performance. Results show that for a human-scale left ventricle mesh, GPU acceleration of the electrophysiology problem provided speedups of 164× compared with SC and 5.5× compared with MC for the solution of the ODE model. Speedups of up to 72× compared with SC and 2.6× compared with MC were also observed for the PDE solve. Using the same human geometry, the GPU implementation of the mechanics residual/Jacobian computation provided speedups of up to 44× compared with SC and 2.0× compared with MC. © 2013 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons, Ltd. PMID:24115492
Yang, Ting; Dong, Jianji; Lu, Liangjun; Zhou, Linjie; Zheng, Aoling; Zhang, Xinliang; Chen, Jianping
2014-07-04
Photonic integrated circuits for photonic computing open up the possibility of realizing ultrahigh-speed and ultra-wideband signal processing with compact size and low power consumption. Differential equations model and govern fundamental physical phenomena and engineering systems in virtually any field of science and engineering, such as temperature diffusion processes, physical problems of motion subject to acceleration inputs and frictional forces, and the response of different resistor-capacitor circuits. In this study, we experimentally demonstrate a feasible integrated scheme to solve a first-order linear ordinary differential equation with a tunable constant coefficient based on a single silicon microring resonator. In addition, we analyze the impact of the chirp and pulse width of the input signals on the computing deviation. This device can be compatible with electronic technology (typically complementary metal-oxide-semiconductor technology), which may motivate the development of integrated photonic circuits for optical computing. PMID:24993440
Molecular dynamics simulations through GPU video games technologies
Loukatou, Styliani; Papageorgiou, Louis; Fakourelis, Paraskevas; Filntisi, Arianna; Polychronidou, Eleftheria; Bassis, Ioannis; Megalooikonomou, Vasileios; Makałowski, Wojciech; Vlachakis, Dimitrios; Kossida, Sophia
2016-01-01
Bioinformatics is the scientific field that focuses on the application of computer technology to the management of biological information. Over the years, bioinformatics applications have been used to store, process and integrate biological and genetic information, using a wide range of methodologies. One of the most important techniques used to understand the physical movements of atoms and molecules is molecular dynamics (MD). MD is an in silico method to simulate the physical motions of atoms and molecules under certain conditions. It has become a strategically important technique and now plays a key role in many areas of the exact sciences, such as chemistry, biology, physics and medicine. Due to their complexity, MD calculations can require enormous amounts of computer memory and time, and their execution has therefore been a major problem. Despite the huge computational cost, molecular dynamics has traditionally been implemented on computers built around a central processing unit (CPU). Graphics processing unit (GPU) computing technology was first designed to improve video games by rapidly creating and displaying images in a frame buffer for output to a screen. The hybrid GPU-CPU implementation, combined with parallel computing, is a novel technology for performing a wide range of calculations. GPUs have been proposed and used to accelerate many scientific computations, including MD simulations. Herein, we describe these methodologies, developed initially for video games, and how they are now applied in MD simulations. PMID:27525251
NASA Astrophysics Data System (ADS)
Arons, Jonathan
The research proposed addresses understanding of the origin of non-thermal energy in the Universe, a subject that began with the discovery of Cosmic Rays and continues today, including the study of relativistic compact objects - neutron stars and black holes. Observed Rotation Powered Pulsars (RPPs) have rotational energy loss implying they have TeraGauss magnetic fields and electric potentials as large as 40 PetaVolts. The rotational energy lost is reprocessed into particles which manifest themselves in high energy gamma ray photon emission (GeV to TeV). Observations of pulsars from the FERMI Gamma Ray Observatory, launched into orbit in 2008, have revealed 130 of these stars (and still counting), thus demonstrating the presence of efficient cosmic accelerators within the strongly magnetized regions surrounding the rotating neutron stars. Understanding the physics of these and other Cosmic Accelerators is a major goal of astrophysical research. A new model for particle acceleration in the current sheets separating the closed and open field line regions of pulsars' magnetospheres, and separating regions of opposite magnetization in the relativistic winds emerging from those magnetospheres, will be developed. The currents established in recent global models of the magnetosphere will be used as input to a magnetic field aligned acceleration model that takes account of the current carrying particles' inertia, generalizing models of the terrestrial aurora to the relativistic regime. The results will be applied to the spectacular new results from the FERMI gamma ray observatory on gamma ray pulsars, to probe the physics of the generation of the relativistic wind that carries rotational energy away from the compact stars, illuminating the whole problem of how compact objects can energize their surroundings. The work to be performed if this proposal is funded involves extending and developing concepts from plasma physics on dissipation of magnetic energy in thin sheets of electric current that separate regions of differing magnetization into the domain of highly relativistic magnetic fields - those with energy density large compared to the rest mass energy of the charged particles - the plasma - caught in that field. The investigators will create theoretical and computational models of the magnetic dissipation - a form of viscous flow in the thin sheets of electric current that form in the magnetized regions around the rotating stars - using Particle-in-Cell plasma simulations. These simulations use a large computer to solve the equations of motion of many charged particles - millions to billions in the research that will be pursued - to unravel the dissipation of those fields and the acceleration of beams of particles in the thin sheets. The results will be incorporated into macroscopic MHD models of the magnetic structures around the stars which determine the location and strength of the current sheets, so as to model and analyze the pulsed gamma ray emission seen from hundreds of Rotation Powered Pulsars. The computational models will be assisted by "pencil and paper" theoretical modeling designed to motivate and interpret the computer simulations, and connect them to the observations.
Computational Design of Functional Ca-S-H and Oxide-Doped Alloy Systems
NASA Astrophysics Data System (ADS)
Yang, Shizhong; Chilla, Lokeshwar; Yang, Yan; Li, Kuo; Wicker, Scott; Zhao, Guang-Lin; Khosravi, Ebrahim; Bai, Shuju; Zhang, Boliang; Guo, Shengmin
Computer-aided functional materials design accelerates the discovery of novel materials. This presentation will cover our recent research advances on property prediction for the Ca-S-H system and on property simulation and experimental validation for oxide-doped high-entropy alloys. Several recently developed computational materials design methods were applied to predict the physical and chemical properties of the two systems. A comparison of simulation results with the corresponding experimental data will be presented. This research is partially supported by the NSF CIMM project (OIA-15410795 and the Louisiana BoR), NSF HBCU Supplement climate change and ecosystem sustainability subproject 3, and a LONI high performance computing time allocation, loni mat bio7.
Graphics Processing Units for HEP trigger systems
NASA Astrophysics Data System (ADS)
Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.; Neri, I.; Paolucci, P. S.; Piandani, R.; Pontisso, L.; Rescigno, M.; Simula, F.; Sozzi, M.; Vicini, P.
2016-07-01
General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPUs for a synchronous low-level trigger, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.
Identifying Wave-Particle Interactions in the Solar Wind using Statistical Correlations
NASA Astrophysics Data System (ADS)
Broiles, T. W.; Jian, L. K.; Gary, S. P.; Lepri, S. T.; Stevens, M. L.
2017-12-01
Heavy ions are a trace component of the solar wind, which can resonate with plasma waves, causing heating and acceleration relative to the bulk plasma. While wave-particle interactions are generally accepted as the cause of heavy ion heating and acceleration, observations to constrain the physics are lacking. In this work, we statistically link specific wave modes to heavy ion heating and acceleration. We have computed the Fast Fourier Transform (FFT) of transverse and compressional magnetic waves between 0 and 5.5 Hz using 9 days of ACE and Wind Magnetometer data. The FFTs are averaged over plasma measurement cycles to compute statistical correlations between magnetic wave power at each discrete frequency, and ion kinetic properties measured by ACE/SWICS and Wind/SWE. The results show that lower frequency transverse oscillations (< 0.2 Hz) and higher frequency compressional oscillations (> 0.4 Hz) are positively correlated with enhancements in the heavy ion thermal and drift speeds. Moreover, the correlation results for the He2+ and O6+ were similar on most days. The correlations were often weak, but most days had some frequencies that correlated with statistical significance. This work suggests that the solar wind heavy ions are possibly being heated and accelerated by both transverse and compressional waves at different frequencies.
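A rough sketch of the kind of analysis described above, assuming synthetic data: the wave power spectrum is averaged over plasma measurement cycles, and the power at each discrete frequency is correlated with an ion property. The sample rate, cycle length, and variable names are hypothetical stand-ins for the ACE/Wind quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for magnetometer samples (11 Hz) and per-cycle ion data.
fs, cycle = 11.0, 64                        # sample rate [Hz], samples per plasma cycle
n_cycles = 500
b_field = rng.standard_normal(n_cycles * cycle)          # one transverse B component
thermal_speed = rng.standard_normal(n_cycles)            # heavy-ion thermal speed proxy

# Average the wave power spectrum over each plasma measurement cycle.
segments = b_field.reshape(n_cycles, cycle)
power = np.abs(np.fft.rfft(segments, axis=1)) ** 2        # shape: (cycles, frequencies)
freqs = np.fft.rfftfreq(cycle, d=1.0 / fs)                # 0 to 5.5 Hz

# Correlate the power at each discrete frequency with the ion property.
corr = np.array([np.corrcoef(power[:, k], thermal_speed)[0, 1]
                 for k in range(power.shape[1])])
for f, r in zip(freqs, corr):
    print(f"{f:5.2f} Hz  r = {r:+.3f}")
```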
Report on the solar physics-plasma physics workshop
NASA Technical Reports Server (NTRS)
Sturrock, P. A.; Baum, P. J.; Beckers, J. M.; Newman, C. E.; Priest, E. R.; Rosenberg, H.; Smith, D. F.; Wentzel, D. G.
1976-01-01
The paper summarizes discussions held between solar physicists and plasma physicists on the interface between solar and plasma physics, with emphasis placed on the question of what laboratory experiments, or computer experiments, could be pursued to test proposed mechanisms involved in solar phenomena. Major areas discussed include nonthermal plasma on the sun, spectroscopic data needed in solar plasma diagnostics, types of magnetic field structures in the sun's atmosphere, the possibility of MHD phenomena involved in solar eruptive phenomena, the role of non-MHD instabilities in energy release in solar flares, particle acceleration in solar flares, shock waves in the sun's atmosphere, and mechanisms of radio emission from the sun.
CUDAEASY - a GPU accelerated cosmological lattice program
NASA Astrophysics Data System (ADS)
Sainio, J.
2010-05-01
This paper presents, to the author's knowledge, the first graphics processing unit (GPU) accelerated program that solves the evolution of interacting scalar fields in an expanding universe. We present the implementation in NVIDIA's Compute Unified Device Architecture (CUDA) and compare the performance to other similar programs in chaotic inflation models. We report speedups between one and two orders of magnitude depending on the used hardware and software while achieving small errors in single precision. Simulations that used to last roughly one day to compute can now be done in hours and this difference is expected to increase in the future. The program has been written in the spirit of LATTICEEASY and users of the aforementioned program should find it relatively easy to start using CUDAEASY in lattice simulations. The program is available at http://www.physics.utu.fi/theory/particlecosmology/cudaeasy/ under the GNU General Public License.
Checkpointing for a hybrid computing node
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cher, Chen-Yong
2016-03-08
According to an aspect, a method for checkpointing in a hybrid computing node includes executing a task in a processing accelerator of the hybrid computing node. A checkpoint is created in a local memory of the processing accelerator. The checkpoint includes state data to restart execution of the task in the processing accelerator upon a restart operation. Execution of the task is resumed in the processing accelerator after creating the checkpoint. The state data of the checkpoint are transferred from the processing accelerator to a main processor of the hybrid computing node while the processing accelerator is executing the task.
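The checkpoint/restart flow described in this abstract can be sketched schematically. The classes and functions below are toy stand-ins (plain Python objects, not an actual accelerator runtime) meant only to show the order of operations: run, checkpoint locally, resume, transfer to the host, and restart.

```python
import copy

class AcceleratorTask:
    """Toy stand-in for a task running on a processing accelerator."""
    def __init__(self, steps):
        self.state = {"step": 0, "total": 0}
        self.steps = steps

    def run_some(self, n):
        for _ in range(min(n, self.steps - self.state["step"])):
            self.state["step"] += 1
            self.state["total"] += self.state["step"]

def checkpoint(task):
    """Create a checkpoint in 'accelerator-local memory' (here, just a deep copy)."""
    return copy.deepcopy(task.state)

def transfer_to_host(local_checkpoint, host_store):
    """Move checkpoint state to the 'main processor' while the task keeps running."""
    host_store.append(copy.deepcopy(local_checkpoint))

# Schematic flow: run, checkpoint locally, resume, then drain state to the host.
task, host_checkpoints = AcceleratorTask(steps=100), []
task.run_some(40)
ckpt = checkpoint(task)          # checkpoint held in accelerator-local memory
task.run_some(60)                # execution resumes after the checkpoint
transfer_to_host(ckpt, host_checkpoints)

# On failure, a restart would reload the transferred state:
task.state = copy.deepcopy(host_checkpoints[-1])
```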
Accelerating molecular dynamic simulation on the cell processor and Playstation 3.
Luttmann, Edgar; Ensign, Daniel L; Vaidyanathan, Vishal; Houston, Mike; Rimon, Noam; Øland, Jeppe; Jayachandran, Guha; Friedrichs, Mark; Pande, Vijay S
2009-01-30
Implementation of molecular dynamics (MD) calculations on novel architectures will vastly increase its power to calculate the physical properties of complex systems. Herein, we detail algorithmic advances developed to accelerate MD simulations on the Cell processor, a commodity processor found in PlayStation 3 (PS3). In particular, we discuss issues regarding memory access versus computation and the types of calculations which are best suited for streaming processors such as the Cell, focusing on implicit solvation models. We conclude with a comparison of improved performance on the PS3's Cell processor over more traditional processors. (c) 2008 Wiley Periodicals, Inc.
Ion acceleration via TNSA near and beyond the relativistic transparency limit
NASA Astrophysics Data System (ADS)
Schumacher, Douglass; Poole, Patrick; Cochran, Ginevra; Willis, Christopher
2017-10-01
Ultra-intense laser-based ion acceleration can proceed via several mechanisms whose fundamental operation and interplay with each other are still not well understood. The details of Relativistically Induced Transparency (RIT) and its impact on ultra-thin target acceleration are of interest for fundamental studies and to progress toward applications requiring controlled, high energy secondary radiation, e.g. hadron cancer therapy. Liquid crystal film targets formed in-situ with thickness control between 10 nm and > 50 μm uniquely allow study of how ion acceleration varies with target thickness. Several recent studies have investigated Target Normal Sheath Acceleration (TNSA) down to the thickness at which RIT occurs, with a wide range of laser conditions (energy, pulse duration, and contrast), using various ion and optical diagnostics to ascertain acceleration mechanisms and quality. Observation of target-normal directed ion acceleration enhancement at the RIT thickness onset will be discussed, including analysis of ion spatial and spectral features as well as particle-in-cell simulations investigating the underlying physical processes. This material is based upon work supported by the AFOSR under Award Number FA9550-14-1-0085, by the NNSA under DE-NA0003107, and by computing time from the Ohio Supercomputer Center.
The Scanning Electron Microscope As An Accelerator For The Undergraduate Advanced Physics Laboratory
NASA Astrophysics Data System (ADS)
Peterson, Randolph S.; Berggren, Karl K.; Mondol, Mark
2011-06-01
Few universities or colleges have an accelerator for use with advanced physics laboratories, but many of these institutions have a scanning electron microscope (SEM) on site, often in the biology department. As an accelerator for the undergraduate, advanced physics laboratory, the SEM is an excellent substitute for an ion accelerator. Although there are no nuclear physics experiments that can be performed with a typical 30 kV SEM, there is an opportunity for experimental work on accelerator physics, atomic physics, electron-solid interactions, and the basics of modern e-beam lithography.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amadio, G.; et al.
An intensive R&D and programming effort is required to accomplish new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting in parallel particles in complex geometries exploiting instruction level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both the multi-core and the many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics model effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.
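As a hedged illustration of an automated statistical consistency check between two model implementations, the sketch below applies a two-sample Kolmogorov-Smirnov test to synthetic samples; it is not GeantV code, and the exponential distribution is an arbitrary stand-in for a sampled physics quantity.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic stand-ins for secondary-particle energies sampled by a reference
# model and by its vectorized reimplementation (same underlying distribution).
reference_sample = rng.exponential(scale=1.0, size=20_000)
vectorized_sample = rng.exponential(scale=1.0, size=20_000)

# Two-sample Kolmogorov-Smirnov test: a large p-value means the two samples
# are statistically consistent, so the vectorized model passes this check.
statistic, p_value = stats.ks_2samp(reference_sample, vectorized_sample)
verdict = "consistent" if p_value > 0.01 else "inconsistent"
print(f"KS statistic = {statistic:.4f}, p-value = {p_value:.3f} -> {verdict}")
```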
Scientific Discovery through Advanced Computing in Plasma Science
NASA Astrophysics Data System (ADS)
Tang, William
2005-03-01
Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research during the 21st Century. For example, the Department of Energy's ``Scientific Discovery through Advanced Computing'' (SciDAC) Program was motivated in large measure by the fact that formidable scientific challenges in its research portfolio could best be addressed by utilizing the combination of the rapid advances in super-computing technology together with the emergence of effective new algorithms and computational methodologies. The imperative is to translate such progress into corresponding increases in the performance of the scientific codes used to model complex physical systems such as those encountered in high temperature plasma research. If properly validated against experimental measurements and analytic benchmarks, these codes can provide reliable predictive capability for the behavior of a broad range of complex natural and engineered systems. This talk reviews recent progress and future directions for advanced simulations with some illustrative examples taken from the plasma science applications area. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by the combination of access to powerful new computational resources together with innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning a huge range in time and space scales. In particular, the plasma science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of plasma turbulence in magnetically-confined high temperature plasmas. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to the computational science area.
Design of experiments in medical physics: Application to the AAA beam model validation.
Dufreneix, S; Legrand, C; Di Bartolo, C; Bremaud, M; Mesgouez, J; Tiplica, T; Autret, D
2017-09-01
The purpose of this study is to evaluate the usefulness of the design of experiments in the analysis of multiparametric problems related to quality assurance in radiotherapy. The main motivation is to use this statistical method to optimize the quality assurance processes in the validation of beam models. Considering the Varian Eclipse system, eight parameters with several levels were selected: energy, MLC, depth, X, Y1 and Y2 jaw dimensions, wedge and wedge jaw. A Taguchi table was used to define 72 validation tests. Measurements were conducted in water using a CC04 on a TrueBeam STx, a TrueBeam Tx, a Trilogy and a 2300IX accelerator matched by the vendor. Dose was computed using the AAA algorithm. The same raw data were used for all accelerators during beam modelling. The mean difference between computed and measured doses was 0.1±0.5% for all beams and all accelerators, with a maximum difference of 2.4% (under the 3% tolerance level). For all beams, the measured doses agreed within 0.6% across accelerators. Energy was found to be an influencing parameter, but the deviations observed were smaller than 1% and not considered clinically significant. Designs of experiments can help define the optimal measurement set to validate a beam model. The proposed method can be used to identify the prognostic factors of dose accuracy. The beam models were validated for the 4 accelerators, which were found dosimetrically equivalent even though the accelerator characteristics differ. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
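As an illustration of how results from such a designed experiment can be screened for influencing parameters, the sketch below performs a simple main-effects analysis (mean dose deviation per factor level) on synthetic data. The 72-test layout and the injected energy effect are hypothetical and do not reproduce the study's Taguchi table.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic dose deviations (%) for 72 hypothetical tests, with an 'energy'
# factor taking three levels and a small systematic effect added for level 2.
levels = np.tile([0, 1, 2], 24)                 # factor level of each test
deviation = rng.normal(0.1, 0.5, size=72)
deviation[levels == 2] += 0.3                   # hypothetical energy effect

# Main-effects analysis: mean deviation per level of the factor.
for lvl in np.unique(levels):
    sel = deviation[levels == lvl]
    print(f"energy level {lvl}: mean = {sel.mean():+.2f}%  (n={sel.size})")
```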
RESEARCH AREA 7.1: Exploring the Systematics of Controlling Quantum Phenomena
2016-10-05
This research is concerned with the theoretical and experimental control of quantum dynamics phenomena. Advances include new ... algorithms to accelerate quantum control as well as provide physical insights into the controlled dynamics. The latter research includes the ... Computational analyses for simple model quantum systems are performed to ascertain the relative abundance of ... from the bottom to the top of the landscape.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alekhin, S.I.; Ezhela, V.V.; Filimonov, B.B.
We present an indexed guide to the literature of experimental particle physics for the years 1988-1992. About 4,000 papers are indexed by Beam/Target/Momentum, Reaction Momentum (including the final state), Final State Particle, and Accelerator/Detector/Experiment. All indices are cross-referenced to the paper's title and reference in the ID/Reference/Title Index. The information in this guide is also publicly available from a regularly updated computer database.
FERMILAB ACCELERATOR R&D PROGRAM TOWARDS INTENSITY FRONTIER ACCELERATORS: STATUS AND PROGRESS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiltsev, Vladimir
2016-11-15
The 2014 P5 report identified accelerator-based neutrino and rare-decay physics research as a centerpiece of the US domestic HEP program at Fermilab. Operation, upgrade and development of the accelerators for the near-term and longer-term particle physics program at the Intensity Frontier face formidable challenges. Here we discuss key elements of the accelerator physics and technology R&D program toward future multi-MW proton accelerators and present its status and progress.
Badal, Andreu; Badano, Aldo
2009-11-01
It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single-core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
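The sampling step at the heart of Monte Carlo photon transport can be shown with a toy serial example (exponential free-path sampling through a uniform slab). This is only a sketch under those assumptions, not the authors' GPU/PENELOPE code; the attenuation coefficient and slab thickness are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy 1D Monte Carlo photon transport: photons enter a 5 cm slab with a
# hypothetical attenuation coefficient and either pass through or interact.
mu = 0.2          # total attenuation coefficient [1/cm], illustrative value
thickness = 5.0   # slab thickness [cm]
n_photons = 1_000_000

free_paths = rng.exponential(scale=1.0 / mu, size=n_photons)
transmitted = np.count_nonzero(free_paths > thickness)

print(f"transmitted fraction (MC): {transmitted / n_photons:.4f}")
print(f"analytic exp(-mu*t):       {np.exp(-mu * thickness):.4f}")
```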
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candel, Arno; Li, Z.; Ng, C.
The Compact Linear Collider (CLIC) provides a path to a multi-TeV accelerator to explore the energy frontier of High Energy Physics. Its novel two-beam accelerator concept envisions rf power transfer to the accelerating structures from a separate high-current decelerator beam line consisting of power extraction and transfer structures (PETS). It is critical to numerically verify the fundamental and higher-order mode properties in and between the two beam lines with high accuracy and confidence. To solve these large-scale problems, SLAC's parallel finite element electromagnetic code suite ACE3P is employed. Using curvilinear conformal meshes and higher-order finite element vector basis functions, unprecedented accuracy and computational efficiency are achieved, enabling high-fidelity modeling of complex detuned structures such as the CLIC TD24 accelerating structure. In this paper, time-domain simulations of wakefield coupling effects in the combined system of PETS and the TD24 structures are presented. The results will help to identify potential issues and provide new insights on the design, leading to further improvements on the novel CLIC two-beam accelerator scheme.
Simulating Coupling Complexity in Space Plasmas: First Results from a new code
NASA Astrophysics Data System (ADS)
Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.
2005-12-01
The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial necessary development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing 3 simulation technologies: 1) computational fluid dynamics (hydrodynamics or magnetohydrodynamics, MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas; and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present, this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will advance our understanding of the physics of neutral and charged gases enormously. Besides making major advances in basic plasma physics and neutral gas problems, this project will address 3 Grand Challenge space physics problems that reflect our research interests: 1) To develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle. 2) To develop a coronal mass ejection and interplanetary shock propagation model for the inner and outer heliosphere, including, at a test-particle level, wave-particle interactions and particle acceleration at traveling shock waves and compression regions. 3) To develop an advanced Geospace General Circulation Model (GGCM) capable of realistically modeling space weather events, in particular the interaction with CMEs and geomagnetic storms. Furthermore, by implementing scalable run-time supports and sophisticated off- and on-line prediction algorithms, we anticipate important advances in the development of automatic and intelligent system software to optimize a wide variety of 'embedded' computations on parallel computers. Finally, public domain MHD and hydrodynamic codes had a transforming effect on space and astrophysics. We expect that our new generation, open source, public domain multi-scale code will have a similar transformational effect in a variety of disciplines, opening up new classes of problems to physicists and engineers alike.
Particle acceleration and transport at a 2D CME-driven shock using the HAFv3 and PATH Code
NASA Astrophysics Data System (ADS)
Li, G.; Ao, X.; Fry, C. D.; Verkhoglyadova, O. P.; Zank, G. P.
2012-12-01
We study particle acceleration at a 2D CME-driven shock and the subsequent transport in the inner heliosphere (up to 2 AU) by coupling the kinematic Hakamada-Akasofu-Fry version 3 (HAFv3) solar wind model (Hakamada and Akasofu, 1982, Fry et al. 2003) with the Particle Acceleration and Transport in the Heliosphere (PATH) model (Zank et al., 2000, Li et al., 2003, 2005, Verkhoglyadova et al. 2009). The HAFv3 provides the evolution of a two-dimensional shock geometry and other plasma parameters, which are fed into the PATH model to investigate the effect of a varying shock geometry on particle acceleration and transport. The transport module of the PATH model is parallelized and utilizes state-of-the-art GPU computing to achieve a rapid physics-based numerical description of the interplanetary energetic particles. Together with the fast execution of the HAFv3 model, the coupled code makes it possible to nowcast/forecast the interplanetary radiation environment.
Ion flux through membrane channels--an enhanced algorithm for the Poisson-Nernst-Planck model.
Dyrka, Witold; Augousti, Andy T; Kotulska, Malgorzata
2008-09-01
A novel algorithmic scheme for numerical solution of the 3D Poisson-Nernst-Planck model is proposed. The algorithmic improvements are universal and independent of the detailed physical model. They include three major steps: an adjustable gradient-based step value, an adjustable relaxation coefficient, and an optimized segmentation of the modeled space. The enhanced algorithm significantly accelerates the computation and reduces the computational demands. The theoretical model was tested on a regular artificial channel and validated on a real protein channel, alpha-hemolysin, proving its efficiency. (c) 2008 Wiley Periodicals, Inc.
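As a rough illustration of the "adjustable relaxation coefficient" idea only (this is not the authors' code, and the grid size, tolerance and names are placeholders), a Poisson-type iteration whose over-relaxation factor is damped whenever a sweep fails to shrink the update norm might look like the following sketch in Python:

    import numpy as np

    def adaptive_sor_poisson(rho, h=1.0, omega=1.8, tol=1e-6, max_sweeps=5000):
        # Solve laplacian(phi) = -rho on a square grid (phi = 0 on the boundary)
        # with SOR whose relaxation coefficient omega is adjusted on the fly:
        # if a sweep fails to reduce the update norm, omega is damped toward 1.
        n = rho.shape[0]
        phi = np.zeros_like(rho, dtype=float)
        prev_change = np.inf
        for sweep in range(max_sweeps):
            change = 0.0
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    new = 0.25 * (phi[i+1, j] + phi[i-1, j] +
                                  phi[i, j+1] + phi[i, j-1] + h*h*rho[i, j])
                    delta = omega * (new - phi[i, j])
                    phi[i, j] += delta
                    change += delta * delta
            change = np.sqrt(change)
            if change > prev_change:            # update grew: damp relaxation
                omega = 1.0 + 0.5 * (omega - 1.0)
            prev_change = change
            if change < tol:
                break
        return phi, sweep, omega

    phi, sweeps, w = adaptive_sor_poisson(np.ones((32, 32)))
    print(sweeps, w)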
Computed torque control of a free-flying cooperating-arm robot
NASA Technical Reports Server (NTRS)
Koningstein, Ross; Ullman, Marc; Cannon, Robert H., Jr.
1989-01-01
The unified approach to solving free-floating space robot manipulator end-point control problems is presented using a control formulation based on an extension of computed torque. Once the desired end-point accelerations have been specified, the kinematic equations are used with momentum conservation equations to solve for the joint accelerations in any of the robot's possible configurations: fixed base or free-flying with open/closed chain grasp. The joint accelerations can then be used to calculate the arm control torques and internal forces using a recursive order N algorithm. Initial experimental verification of these techniques has been performed using a laboratory model of a two-armed space robot. This fully autonomous spacecraft system experiences the drag-free, zero G characteristics of space in two dimensions through the use of an air cushion support system. Results of these initial experiments are included which validate the correctness of the proposed methodology. The further problem of control in the large where not only the manipulator tip positions but the entire system consisting of base and arms must be controlled is also presented. The availability of a physical testbed has brought a keener insight into the subtleties of the problem at hand.
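For orientation only, the textbook fixed-base computed-torque law that this work extends can be sketched as below; the inertia matrix M, Coriolis term C, gravity term g and the gain matrices are user-supplied placeholders, and the momentum-conservation bookkeeping needed for the free-flying, closed-chain case is deliberately omitted.

    import numpy as np

    def computed_torque(q, dq, q_des, dq_des, ddq_des, M, C, g, Kp, Kd):
        # Classic computed-torque (inverse-dynamics) control law:
        #   tau = M(q) * (ddq_des + Kd*(dq_des - dq) + Kp*(q_des - q)) + C(q, dq) + g(q)
        # M returns the inertia matrix, C the Coriolis/centrifugal torque vector,
        # g the gravity torque vector (zero for a free-floating space robot).
        e, de = q_des - q, dq_des - dq
        ddq_ref = ddq_des + Kd @ de + Kp @ e
        return M(q) @ ddq_ref + C(q, dq) + g(q)

    # 1-DOF example with unit inertia and no gravity (free-floating-like)
    tau = computed_torque(q=np.array([0.0]), dq=np.array([0.0]),
                          q_des=np.array([0.5]), dq_des=np.array([0.0]),
                          ddq_des=np.array([0.0]),
                          M=lambda q: np.eye(1), C=lambda q, dq: np.zeros(1),
                          g=lambda q: np.zeros(1),
                          Kp=10.0 * np.eye(1), Kd=2.0 * np.eye(1))
    print(tau)   # -> [5.0]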
The impact of Nordic walking training on the gait of the elderly.
Ben Mansour, Khaireddine; Gorce, Philippe; Rezzoug, Nasser
2018-03-27
The purpose of the current study was to define the impact of regular practice of Nordic walking on the gait of the elderly. Specifically, we aimed to determine whether the gait characteristics of active elderly persons practicing Nordic walking are more similar to those of healthy adults than to those of the sedentary elderly. The comparison was based on parameters computed from three inertial sensors during walking at a freely chosen velocity. Results showed differences in gait pattern in terms of the amplitude computed from acceleration and angular velocity at the lumbar region (root mean square), the distribution (skewness) quantified from the vertical component and Euclidean norm of the lumbar acceleration, the complexity (sample entropy) of the mediolateral component of lumbar angular velocity and the Euclidean norm of the shank acceleration and angular velocity, the regularity of the lower limbs, the spatiotemporal parameters, and the variability (standard deviation) of stance and stride durations. These findings reveal that the gait pattern of the active elderly differs significantly from that of the sedentary elderly of the same age, while similarity was observed between the active elderly and healthy adults. These results suggest that regular physical activity such as Nordic walking may counteract the deterioration of gait quality that occurs with aging.
NASA Astrophysics Data System (ADS)
Gröber, S.; Vetter, M.; Eckert, B.; Jodl, H.-J.
2007-05-01
We suggest that different string pendulums are positioned at different locations on Earth and measure at each place the gravitational acceleration (accuracy Δg ≈ 0.01 m s⁻²). Each pendulum can be remotely controlled via the internet by a computer located anywhere on Earth. The theoretical part describes the physical origin of the phenomenon g(φ), namely that the Earth's effective gravitational acceleration g depends on the angle of latitude φ. Then, we present all the formulae necessary to deduce g(φ) from the oscillations of a string pendulum. The technical part explains tips and tricks for realizing such an apparatus and measuring all the necessary values with sufficient accuracy. In addition, we justify the precise dimensions of a physical pendulum such that the formula for a mathematical pendulum is applicable to determine g(φ) without introducing errors. To conclude, we describe the internet version: the string pendulum as a remotely controlled laboratory. The teaching relevance and educational value are discussed in detail at the end of the paper, including global experimenting, the use of the internet and communication techniques in teaching, and new ways of teaching and learning.
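As an illustrative companion (not taken from the paper), the short sketch below deduces g from a measured pendulum length and period via T = 2π√(L/g) and compares it with normal gravity from the 1980 International Gravity Formula; the numerical example is hypothetical and the formula coefficients are quoted from memory, so verify them before reuse.

    import numpy as np

    def g_from_pendulum(length_m, period_s):
        # Small-angle mathematical pendulum: T = 2*pi*sqrt(L/g)  =>  g = 4*pi^2*L/T^2
        return 4.0 * np.pi**2 * length_m / period_s**2

    def g_normal(lat_deg):
        # Normal gravity vs. geographic latitude (1980 International Gravity Formula;
        # coefficients quoted from memory - verify before relying on them)
        s = np.sin(np.radians(lat_deg))
        s2 = np.sin(2.0 * np.radians(lat_deg))
        return 9.780327 * (1.0 + 0.0053024 * s**2 - 0.0000058 * s2**2)

    # Hypothetical measurement: a 2.000 m pendulum timed at T = 2.837 s
    print(g_from_pendulum(2.000, 2.837))   # ~9.81 m/s^2
    print(g_normal(49.4))                  # normal gravity at latitude 49.4 deg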
Resource Letter AFHEP-1: Accelerators for the Future of High-Energy Physics
NASA Astrophysics Data System (ADS)
Barletta, William A.
2012-02-01
This Resource Letter provides a guide to literature concerning the development of accelerators for the future of high-energy physics. Research articles, books, and Internet resources are cited for the following topics: motivation for future accelerators, present accelerators for high-energy physics, possible future machines, and laboratory and collaboration websites.
Time-dependent Electron Acceleration in Blazar Transients: X-Ray Time Lags and Spectral Formation
NASA Astrophysics Data System (ADS)
Lewis, Tiffany R.; Becker, Peter A.; Finke, Justin D.
2016-06-01
Electromagnetic radiation from blazar jets often displays strong variability, extending from radio to γ-ray frequencies. In a few cases, this variability has been characterized using Fourier time lags, such as those detected in the X-rays from Mrk 421 using BeppoSAX. The lack of a theoretical framework to interpret the data has motivated us to develop a new model for the formation of the X-ray spectrum and the time lags in blazar jets based on a transport equation including terms describing stochastic Fermi acceleration, synchrotron losses, shock acceleration, adiabatic expansion, and spatial diffusion. We derive the exact solution for the Fourier transform of the electron distribution and use it to compute the Fourier transform of the synchrotron radiation spectrum and the associated X-ray time lags. The same theoretical framework is also used to compute the peak flare X-ray spectrum, assuming that a steady-state electron distribution is achieved during the peak of the flare. The model parameters are constrained by comparing the theoretical predictions with the observational data for Mrk 421. The resulting integrated model yields, for the first time, a complete first-principles physical explanation for both the formation of the observed time lags and the shape of the peak flare X-ray spectrum. It also yields direct estimates of the strength of the shock and the stochastic magnetohydrodynamical wave acceleration components in the Mrk 421 jet.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Svetlana Shasharina
The goal of the Center for Technology for Advanced Scientific Component Software is to fundamentally change the way scientific software is developed and used by bringing component-based software development technologies to high-performance scientific and engineering computing. The role of Tech-X in the TASCS project is to provide outreach to accelerator physics and fusion applications by introducing TASCS tools into those applications, testing the tools in the applications, and modifying the tools to be more usable.
Translations on USSR Science and Technology Physical Sciences and Technology No. 7
1977-02-28
cybernetics. [Answer] Immediately after the war, when the restoration of the national economy, which had been wrecked by the enemy, was started, Soviet ... cyberneticization of economics and science will be developed at accelerated rates. ... CYBERNETICS, COMPUTERS AND AUTOMATION TECHNOLOGY ... working storage of the machine exceeds 64 thousand alphanumeric characters. Communication with the external world is effected by means of a main ...
Analytical investigation of the dynamics of tethered constellations in earth orbit
NASA Technical Reports Server (NTRS)
Lorenzini, Enrico C.; Gullahorn, Gordon E.; Estes, Robert D.
1988-01-01
This Quarterly Report on Tethering in Earth Orbit deals with three topics: (1) Investigation of the propagation of longitudinal and transverse waves along the upper tether. Specifically, the upper tether is modeled as three massive platforms connected by two perfectly elastic continua (tether segments). The tether attachment point to the station is assumed to vibrate both longitudinally and transversely at a given frequency. Longitudinal and transverse waves propagate along the tethers affecting the acceleration levels at the elevator and at the upper platform. The displacement and acceleration frequency-response functions at the elevator and at the upper platform are computed for both longitudinal and transverse waves. An analysis to optimize the damping time of the longitudinal dampers is also carried out in order to select optimal parameters. The analytical evaluation of the performance of tuned vs. detuned longitudinal dampers is also part of this analysis. (2) The use of the Shuttle primary Reaction Control System (RCS) thrusters for blowing away a recoiling broken tether is discussed. A microcomputer system was set up to support this operation. (3) Most of the effort in the tether plasma physics study was devoted to software development. A particle simulation code has been integrated into the Macintosh II computer system and will be utilized for studying the physics of hollow cathodes.
NASA Astrophysics Data System (ADS)
Strauss, R. Du Toit; Effenberger, Frederic
2017-10-01
In this review, an overview of the recent history of stochastic differential equations (SDEs) in application to particle transport problems in space physics and astrophysics is given. The aim is to present a helpful working guide to the literature and at the same time introduce key principles of the SDE approach via "toy models". Using these examples, we hope to provide an easy way for newcomers to the field to use such methods in their own research. Aspects covered are the solar modulation of cosmic rays, diffusive shock acceleration, galactic cosmic ray propagation and solar energetic particle transport. We believe that the SDE method, due to its simplicity and computational efficiency on modern computer architectures, will be of significant relevance in energetic particle studies in the years to come.
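In the spirit of the review's "toy models" (this particular example is ours, not from the paper), an Euler-Maruyama integration of a one-dimensional advection-diffusion SDE for particle transport might look as follows; the drift and diffusion values are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)

    def euler_maruyama_paths(x0, drift, diff, t_end, dt, n_paths):
        # Integrate dX = drift(X) dt + sqrt(2*diff(X)) dW with Euler-Maruyama
        # and return the final positions of all stochastic paths.
        x = np.full(n_paths, x0, dtype=float)
        n_steps = int(t_end / dt)
        for _ in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dt), n_paths)
            x += drift(x) * dt + np.sqrt(2.0 * diff(x)) * dW
        return x

    # Toy model: constant solar-wind advection plus spatial diffusion
    x_final = euler_maruyama_paths(x0=1.0,
                                   drift=lambda x: 0.4,   # advection speed
                                   diff=lambda x: 0.1,    # diffusion coefficient
                                   t_end=5.0, dt=1e-3, n_paths=20000)
    print(x_final.mean(), x_final.std())   # ~ x0 + V*t and ~ sqrt(2*kappa*t)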
NASA Astrophysics Data System (ADS)
Pi, E. I.; Siegel, E.
2010-03-01
Siegel[AMS Natl.Mtg.(2002)-Abs.973-60-124] digits logarithmic-law inversion to ONLY BEQS BEC:Quanta/Bosons=#: EMP-like SEVERE VULNERABILITY of ONLY #-networks(VS.ANALOG INvulnerability) via Barabasi NP(VS.dynamics[Not.AMS(5/2009)] critique);(so called)``quantum-computing''(QC) = simple-arithmetic (sans division); algorithmic complexities: INtractibility/UNdecidability/INefficiency/NONcomputability/HARDNESS(so MIScalled) ``noise''-induced-phase-transition(NIT)ACCELERATION:Cook-Levin theorem Reducibility = RG fixed-points; #-Randomness DEFINITION via WHAT? Query(VS. Goldreich[Not.AMS(2002)] How? mea culpa)= ONLY MBCS hot-plasma v #-clumping NON-random BEC; Modular-Arithmetic Congruences = Signal x Noise PRODUCTS = clock-model; NON-Shor[Physica A,341,586(04)]BEC logarithmic-law inversion factorization: Watkins #-theory U statistical-physics); P=/=NP C-S TRIVIAL Proof: Euclid!!! [(So Miscalled) computational-complexity J-O obviation(3 millennia AGO geometry: NO:CC,``CS'';``Feet of Clay!!!'']; Query WHAT?:Definition: (so MIScalled)``complexity''=UTTER-SIMPLICITY!! v COMPLICATEDNESS MEASURE(S).
NASA Astrophysics Data System (ADS)
Kutt, P. H.; Balamuth, D. P.
1989-10-01
Summary form only given, as follows. A multiprocessor system based on commercially available VMEbus components has been developed for the acquisition and reduction of event-mode data in nuclear physics experiments. The system contains seven 68000 CPUs and 14 Mbyte of memory. A minimal operating system handles data transfer and task allocation, and a compiler for a specially designed event analysis language produces code for the processors. The system has been in operation for four years at the University of Pennsylvania Tandem Accelerator Laboratory. Computation rates over three times that of a MicroVAX II have been achieved at a fraction of the cost. The use of WORM optical disks for event recording allows the processing of gigabyte data sets without operator intervention. A more powerful system is being planned which will make use of recently developed RISC (reduced instruction set computer) processors to obtain an order of magnitude increase in computing power per node.
Simulation of General Physics laboratory exercise
NASA Astrophysics Data System (ADS)
Aceituno, P.; Hernández-Aceituno, J.; Hernández-Cabrera, A.
2015-01-01
Laboratory exercises are an important part of general Physics teaching, both during the last years of high school and the first year of college education. Due to the need to acquire enough laboratory equipment for all the students, and the widespread access to computer rooms in teaching, we propose the development of computer-simulated laboratory exercises. A representative exercise in general Physics is the calculation of the gravity acceleration value, through the free fall motion of a metal ball. Using a model of the real exercise, we have developed an interactive system which allows students to alter the starting height of the ball to obtain different fall times. The simulation was programmed in ActionScript 3, so that it can be freely executed on any operating system; to ensure the accuracy of the calculations, all the input parameters of the simulations were modelled using digital measurement units, and to allow a statistical management of the resulting data, measurement errors are simulated through limited randomization.
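A minimal sketch of the kind of simulated exercise described here (ours, not the authors' ActionScript code): ideal fall times t = sqrt(2h/g) are perturbed by a hypothetical timing error, and g is recovered by least squares from h = g t²/2.

    import numpy as np

    rng = np.random.default_rng(0)
    g_true = 9.81

    # Simulated exercise: for each starting height the ideal fall time
    # t = sqrt(2 h / g) is perturbed by a timing error (assumed 2 ms sigma)
    heights = np.linspace(0.2, 1.0, 9)                        # metres
    times = np.sqrt(2.0 * heights / g_true) + rng.normal(0, 0.002, heights.size)

    # Least-squares estimate: h = (g/2) * t^2 is linear in X = t^2 / 2
    X = 0.5 * times**2
    g_est = np.sum(heights * X) / np.sum(X**2)
    print(f"estimated g = {g_est:.3f} m/s^2")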
Bayesian analysis of caustic-crossing microlensing events
NASA Astrophysics Data System (ADS)
Cassan, A.; Horne, K.; Kains, N.; Tsapras, Y.; Browne, P.
2010-06-01
Aims: Caustic-crossing binary-lens microlensing events are important anomalous events because they are capable of detecting an extrasolar planet companion orbiting the lens star. Fast and robust modelling methods are thus of prime interest in helping to decide whether a planet is detected by an event. Cassan introduced a new set of parameters to model binary-lens events, which are closely related to properties of the light curve. In this work, we explain how Bayesian priors can be added to this framework, and investigate interesting options. Methods: We develop a mathematical formulation that allows us to compute analytically the priors on the new parameters, given some previous knowledge about other physical quantities. We explicitly compute the priors for a number of interesting cases, and show how this can be implemented in a fully Bayesian, Markov chain Monte Carlo algorithm. Results: Using Bayesian priors can accelerate microlens-fitting codes by reducing the time spent considering physically implausible models, and helps us to discriminate between alternative models based on the physical plausibility of their parameters.
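As a generic illustration of how priors prune physically implausible proposals in an MCMC fit (a sketch, not the authors' microlensing code), consider a random-walk Metropolis sampler; the likelihood, prior and step size below are placeholders.

    import numpy as np

    rng = np.random.default_rng(3)

    def metropolis(log_likelihood, log_prior, theta0, n_samples, step):
        # Random-walk Metropolis for log_posterior = log_likelihood + log_prior.
        # Proposals with log_prior = -inf (physically implausible) are rejected
        # immediately, so no likelihood evaluation is wasted on them.
        theta = np.asarray(theta0, dtype=float)
        logp = log_likelihood(theta) + log_prior(theta)
        chain = []
        for _ in range(n_samples):
            prop = theta + step * rng.normal(size=theta.size)
            lp_prior = log_prior(prop)
            if np.isfinite(lp_prior):
                lp = log_likelihood(prop) + lp_prior
                if np.log(rng.uniform()) < lp - logp:
                    theta, logp = prop, lp
            chain.append(theta.copy())
        return np.array(chain)

    # Toy usage: Gaussian likelihood centred at 2 with a positivity prior
    chain = metropolis(lambda t: -0.5 * np.sum((t - 2.0)**2),
                       lambda t: 0.0 if np.all(t > 0) else -np.inf,
                       theta0=[1.0], n_samples=5000, step=0.5)
    print(chain.mean(), chain.std())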
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S.Y.; Tepikian, S.
1985-01-01
Nonlinear magnetic forces become more important for particles in modern large accelerators. These nonlinear elements are introduced either intentionally, to control beam dynamics, or by uncontrollable random errors. Equations of motion in the nonlinear Hamiltonian are usually non-integrable. Because of the nonlinear part of the Hamiltonian, the tune diagram of accelerators is a jungle. Nonlinear magnet multipoles are important in keeping the accelerator operating point in the safe quarter of the hostile jungle of resonant tunes. Indeed, all modern accelerator designs have taken advantage of nonlinear mechanics. On the other hand, the effect of the uncontrollable random multipoles should be evaluated carefully. A powerful method of studying the effect of these nonlinear multipoles is a particle tracking calculation, in which a group of test particles is tracked through these magnetic multipoles for hundreds to millions of turns in order to test the dynamic aperture of the machine. These methods are extremely useful in the design of large accelerators such as the SSC, LEP, HERA and RHIC. These calculations unfortunately take a tremendous amount of computing time. In this review, the method of determining chaotic orbits and its application to nonlinear problems in accelerator physics is discussed. We then discuss the scaling properties and the effect of random sextupoles.
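A toy version of the tracking calculations described here (not the review's code): a one-turn linear rotation followed by a thin sextupole kick, with a crude scan of launch amplitudes to locate the dynamic aperture; the tune and sextupole strength are arbitrary.

    import numpy as np

    def track(x0, px0, mu=0.252 * 2 * np.pi, k2=1.0, n_turns=1000, x_max=10.0):
        # Toy one-turn map: linear phase-space rotation (tune = mu/2pi)
        # followed by a thin sextupole kick px -= k2*x^2. Returns how many
        # turns the particle survives before |x| exceeds x_max.
        c, s = np.cos(mu), np.sin(mu)
        x, px = x0, px0
        for turn in range(n_turns):
            x, px = c * x + s * px, -s * x + c * px
            px -= k2 * x * x
            if abs(x) > x_max:
                return turn
        return n_turns

    # Crude dynamic-aperture scan: largest launch amplitude that survives
    for amp in np.linspace(0.1, 1.2, 12):
        print(f"x0 = {amp:.2f}  survived {track(amp, 0.0)} turns")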
GPU COMPUTING FOR PARTICLE TRACKING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishimura, Hiroshi; Song, Kai; Muriki, Krishna
2011-03-25
This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize an accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performance, issues, and challenges from introducing the GPU are also discussed. General Purpose Computation on Graphics Processing Units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and programming model in order to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time-consuming part of accelerator modeling and simulation, can benefit from the GPU because it is embarrassingly parallel, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multi-processors (MPs) with 32 cores on each MP, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] is developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.
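The thread-per-particle idea can be sketched with numba's CUDA support (this is not TracyGPU; it assumes an NVIDIA GPU plus the numba package, and the map being tracked is the same toy rotation-plus-sextupole model with arbitrary parameters):

    import math
    import numpy as np
    from numba import cuda

    @cuda.jit
    def track_kernel(x, px, n_turns, k2, mu):
        # One GPU thread tracks one particle: embarrassingly parallel
        i = cuda.grid(1)
        if i < x.shape[0]:
            c = math.cos(mu)
            s = math.sin(mu)
            xi = x[i]
            pi = px[i]
            for _ in range(n_turns):
                x_new = c * xi + s * pi          # linear one-turn rotation
                p_new = -s * xi + c * pi
                xi = x_new
                pi = p_new - k2 * x_new * x_new  # thin-lens sextupole kick
            x[i] = xi
            px[i] = pi

    n = 100_000
    x = cuda.to_device(np.random.uniform(-1.0, 1.0, n))
    px = cuda.to_device(np.zeros(n))
    threads = 256
    blocks = (n + threads - 1) // threads
    track_kernel[blocks, threads](x, px, 1000, 1.0, 0.252 * 2.0 * math.pi)
    x_final = x.copy_to_host()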
Accelerators, Beams And Physical Review Special Topics - Accelerators And Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siemann, R.H.; /SLAC
Accelerator science and technology have evolved as accelerators became larger and important to a broad range of science. Physical Review Special Topics - Accelerators and Beams was established to serve the accelerator community as a timely, widely circulated, international journal covering the full breadth of accelerators and beams. The history of the journal and the innovations associated with it are reviewed.
NASA Astrophysics Data System (ADS)
Mishra, Rohini
Present ultra-high-power lasers are capable of producing high energy density (HED) plasmas, in a controlled way, with a density greater than solid density and at a high temperature of the order of keV (1 keV ≈ 11,000,000 K). Matter in such extreme states is particularly interesting for HED physics, such as laboratory studies of planetary and stellar astrophysics, laser fusion research, pulsed neutron sources, etc. To date, however, the physics of HED plasmas, especially the energy transport, which is crucial to realize applications, has not been well understood. Intense laser-produced plasmas are complex systems involving two widely distinct temperature distributions and are difficult to model by a single approach. Both kinetic and collisional processes are equally important for understanding the entire process of laser-solid interaction. Implementing atomic physics models, such as collisions, ionization, and radiation damping, self-consistently in the state-of-the-art particle-in-cell code PICLS has made it possible to explore the physics involved in HED plasmas. Laser absorption, hot electron transport, and isochoric heating physics in laser-produced hot dense plasmas are studied with the help of PICLS simulations. In particular, a novel mode of electron acceleration, namely DC-ponderomotive acceleration, is identified in the super-intense laser regime, which plays an important role in the coupling of laser energy to a dense plasma. Geometric effects on hot electron transport and target heating processes are examined in the reduced-mass-target experiments. Further, pertinent to fast ignition, laser-accelerated fast electron divergence and transport in the experiments using warm dense matter (low-temperature plasma) are characterized and explained.
Horta, Bernardo Lessa; Schaan, Beatriz D; Bielemann, Renata Moraes; Vianna, Carolina Ávila; Gigante, Denise Petrucci; Barros, Fernando C; Ekelund, Ulf; Hallal, Pedro Curi
2015-11-01
To examine the associations of objectively measured physical activity and sedentary time with pulse wave velocity (PWV) in Brazilian young adults. Cross-sectional analysis with participants of the 1982 Pelotas (Brazil) Birth Cohort who were followed up from birth to 30 years of age. Overall physical activity (PA), assessed as the average acceleration (mg), time spent in moderate-to-vigorous physical activity (MVPA, min/day) and sedentary time (min/day) were calculated from acceleration data. Carotid-femoral PWV (m/s) was assessed using a portable ultrasound. Systolic and diastolic blood pressure (SBP/DBP), waist circumference (WC) and body mass index (BMI) were analyzed as possible mediators. Multiple linear regression and the g-computation formula were used in the analyses. Complete data were available for 1241 individuals. PWV was significantly lower in the two highest quartiles of overall PA (0.26 m/s) compared with the lowest quartile. Participants in the highest quartile of sedentary time had 0.39 m/s higher PWV (95%CI: 0.20; 0.57) than those in the lowest quartile. Individuals achieving ≥30 min/day of MVPA had lower PWV (β = -0.35; 95%CI: -0.56; -0.14). Mutually adjusted analyses between MVPA and sedentary time and PWV changed the coefficients, although results for sedentary time remained more consistent. WC captured 44% of the association between MVPA and PWV. DBP explained 46% of the association between acceleration and PWV. Physical activity was inversely related to PWV in young adults, whereas sedentary time was positively associated. Such associations were only partially mediated by WC and DBP. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Electron-Beam Dynamics for an Advanced Flash-Radiography Accelerator
Ekdahl, Carl
2015-11-17
Beam dynamics issues were assessed for a new linear induction electron accelerator being designed for multipulse flash radiography of large explosively driven hydrodynamic experiments. Special attention was paid to equilibrium beam transport, possible emittance growth, and beam stability. Especially problematic would be high-frequency beam instabilities that could blur individual radiographic source spots, low-frequency beam motion that could cause pulse-to-pulse spot displacement, and emittance growth that could enlarge the source spots. Furthermore, beam physics issues were examined through theoretical analysis and computer simulations, including particle-in-cell codes. Beam instabilities investigated included beam breakup, image displacement, diocotron, parametric envelope, ion hose, and the resistive wall instability. The beam corkscrew motion and emittance growth from beam mismatch were also studied. It was concluded that a beam with radiographic quality equivalent to the present accelerators at Los Alamos National Laboratory will result if the same engineering standards and construction details are upheld.
Electron-beam dynamics for an advanced flash-radiography accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ekdahl, Carl August Jr.
2015-06-22
Beam dynamics issues were assessed for a new linear induction electron accelerator. Special attention was paid to equilibrium beam transport, possible emittance growth, and beam stability. Especially problematic would be high-frequency beam instabilities that could blur individual radiographic source spots, low-frequency beam motion that could cause pulse-to-pulse spot displacement, and emittance growth that could enlarge the source spots. Beam physics issues were examined through theoretical analysis and computer simulations, including particle-in-cell (PIC) codes. Beam instabilities investigated included beam breakup (BBU), image displacement, diocotron, parametric envelope, ion hose, and the resistive wall instability. Beam corkscrew motion and emittance growth from beam mismatch were also studied. It was concluded that a beam with radiographic quality equivalent to the present accelerators at Los Alamos will result if the same engineering standards and construction details are upheld.
The Influence of Accelerator Science on Physics Research
NASA Astrophysics Data System (ADS)
Haussecker, Enzo F.; Chao, Alexander W.
2011-06-01
We evaluate accelerator science in the context of its contributions to the physics community. We address the problem of quantifying these contributions and present a scheme for a numerical evaluation of them. We show by using a statistical sample of important developments in modern physics that accelerator science has influenced 28% of post-1938 physicists and also 28% of post-1938 physics research. We also examine how the influence of accelerator science has evolved over time, and show that on average it has contributed to a physics Nobel Prize-winning research every 2.9 years.
GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing
Fang, Ye; Ding, Yun; Feinstein, Wei P.; Koppelman, David M.; Moreno, Juana; Jarrell, Mark; Ramanujam, J.; Brylinski, Michal
2016-01-01
Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiltsev, Vladimir
The 2014 P5 report identified accelerator-based neutrino and rare-decay physics research as a centerpiece of the US domestic HEP program. Operation, upgrade and development of the accelerators for the near-term and longer-term particle physics program at the Intensity Frontier face formidable challenges. Here we discuss key elements of the accelerator physics and technology R&D program toward future multi-MW proton accelerators.
Efficient Phase Unwrapping Architecture for Digital Holographic Microscopy
Hwang, Wen-Jyi; Cheng, Shih-Chang; Cheng, Chau-Jern
2011-01-01
This paper presents a novel phase unwrapping architecture for accelerating the computational speed of digital holographic microscopy (DHM). A fast Fourier transform (FFT) based phase unwrapping algorithm providing a minimum squared error solution is adopted for hardware implementation because of its simplicity and robustness to noise. The proposed architecture is realized in a pipeline fashion to maximize throughput of the computation. Moreover, the number of hardware multipliers and dividers are minimized to reduce the hardware costs. The proposed architecture is used as a custom user logic in a system on programmable chip (SOPC) for physical performance measurement. Experimental results reveal that the proposed architecture is effective for expediting the computational speed while consuming low hardware resources for designing an embedded DHM system. PMID:22163688
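For readers unfamiliar with the underlying algorithm, a software sketch (NumPy/SciPy, mirroring the mathematics rather than the paper's pipelined FPGA architecture) of weight-free minimum-squared-error phase unwrapping is given below: the discrete Poisson right-hand side is built from wrapped phase differences and inverted with a type-II DCT. The test signal and tolerances are illustrative only.

    import numpy as np
    from scipy.fft import dctn, idctn

    def wrap(a):
        # Wrap values into (-pi, pi]
        return (a + np.pi) % (2 * np.pi) - np.pi

    def unwrap_lsq(psi):
        # Minimum-L2-norm (least-squares) phase unwrapping: solve the discrete
        # Poisson equation built from wrapped phase differences with a DCT
        # (Neumann boundaries), following the classic FFT/DCT formulation.
        M, N = psi.shape
        dx = np.zeros_like(psi)
        dy = np.zeros_like(psi)
        dx[:-1, :] = wrap(np.diff(psi, axis=0))
        dy[:, :-1] = wrap(np.diff(psi, axis=1))
        rho = (dx - np.vstack([np.zeros((1, N)), dx[:-1, :]]) +
               dy - np.hstack([np.zeros((M, 1)), dy[:, :-1]]))
        rho_hat = dctn(rho, norm='ortho')
        i = np.arange(M)[:, None]
        j = np.arange(N)[None, :]
        denom = 2 * np.cos(np.pi * i / M) + 2 * np.cos(np.pi * j / N) - 4
        denom[0, 0] = 1.0            # avoid division by zero at the DC term
        phi_hat = rho_hat / denom
        phi_hat[0, 0] = 0.0          # unwrapped phase is defined up to a constant
        return idctn(phi_hat, norm='ortho')

    # Quick self-check on a smooth synthetic phase ramp
    true = np.add.outer(np.linspace(0, 12, 128), np.linspace(0, 8, 128))
    est = unwrap_lsq(wrap(true))
    print(np.max(np.abs((est - est.mean()) - (true - true.mean()))))  # should be ~0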
Modeling and Simulation of Explosively Driven Electromechanical Devices
NASA Astrophysics Data System (ADS)
Demmie, Paul N.
2002-07-01
Components that store electrical energy in ferroelectric materials and produce currents when their permittivity is explosively reduced are used in a variety of applications. The modeling and simulation of such devices is a challenging problem since one has to represent the coupled physics of detonation, shock propagation, and electromagnetic field generation. The high fidelity modeling and simulation of complicated electromechanical devices was not feasible prior to having the Accelerated Strategic Computing Initiative (ASCI) computers and the ASCI developed codes at Sandia National Laboratories (SNL). The EMMA computer code is used to model such devices and simulate their operation. In this paper, I discuss the capabilities of the EMMA code for the modeling and simulation of one such electromechanical device, a slim-loop ferroelectric (SFE) firing set.
Advantages of GPU technology in DFT calculations of intercalated graphene
NASA Astrophysics Data System (ADS)
Pešić, J.; Gajić, R.
2014-09-01
Over the past few years, the expansion of general-purpose graphics-processing unit (GPGPU) technology has had a great impact on computational science. GPGPU is the utilization of a graphics-processing unit (GPU) to perform calculations in applications usually handled by the central processing unit (CPU). Use of GPGPUs as a way to increase computational power in the material sciences has significantly decreased computational costs in already highly demanding calculations. The level of acceleration and parallelization depends on the problem itself. Some problems can benefit from GPU acceleration and parallelization, such as the finite-difference time-domain (FDTD) algorithm and density-functional theory (DFT), while others cannot take advantage of these modern technologies. A number of GPU-supported applications have emerged in the past several years (www.nvidia.com/object/gpu-applications.html). Quantum Espresso (QE) is an integrated suite of open source computer codes for electronic-structure calculations and materials modeling at the nano-scale. It is based on DFT, the use of a plane-wave basis and a pseudopotential approach. Since QE version 5.0, GPU support has been implemented as a plug-in component for the standard QE packages, which allows the capabilities of NVIDIA GPU graphics cards to be exploited (www.qe-forge.org/gf/proj). In this study, we have examined the impact of the usage of GPU acceleration and parallelization on the numerical performance of DFT calculations. Graphene has been attracting attention worldwide and has already shown some remarkable properties. We have studied intercalated graphene, using the QE package PHonon, which employs the GPU. The term 'intercalation' refers to a process whereby foreign adatoms are inserted onto a graphene lattice. In addition, by intercalating different atoms between graphene layers, it is possible to tune their physical properties. Our experiments have shown there are benefits from using GPUs, and we reached an acceleration of several times compared to standard CPU calculations.
Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy
NASA Astrophysics Data System (ADS)
Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li
2018-03-01
In dendritic growth simulation, the computational efficiency and the problem scale strongly influence the usefulness of a three-dimensional phase-field model. Thus, seeking a high-performance calculation method that improves computational efficiency and expands the achievable problem scale is of great significance for research on the microstructure of materials. A high-performance calculation method based on an MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to carry out quantitative numerical simulations of a three-dimensional phase-field model of a binary alloy with coupled multi-physical processes. The acceleration achieved with different numbers of GPU nodes at different calculation scales is explored. Building on this multi-GPU calculation model, two optimization schemes are proposed: non-blocking communication optimization and overlap of MPI and GPU computing. The results of the two optimization schemes and the basic multi-GPU model are compared. The results show that the multi-GPU calculation model clearly improves the computational efficiency of the three-dimensional phase-field simulation, reaching 13 times the speed of a single GPU, and the problem scale has been expanded to 8193. The feasibility of the two optimization schemes is demonstrated, and the overlap of MPI and GPU computing performs better, reaching 1.7 times the performance of the basic multi-GPU model when 21 GPUs are used.
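The "overlap of MPI and GPU computing" idea can be illustrated schematically with mpi4py (an assumption on our part; the original uses MPI+CUDA in C): non-blocking halo exchange is posted first, interior work proceeds while messages are in flight, and the ghost layers are filled after Waitall. The stencil update below is a NumPy placeholder standing in for the GPU kernel, and all sizes are arbitrary.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    left, right = (rank - 1) % size, (rank + 1) % size

    # Local slab of the phase field plus one ghost layer on each side
    field = np.random.rand(66, 64, 64)
    send_lo, send_hi = field[1].copy(), field[-2].copy()
    recv_lo, recv_hi = np.empty_like(send_lo), np.empty_like(send_hi)

    # 1) start the non-blocking halo exchange ...
    reqs = [comm.Isend(send_lo, dest=left), comm.Isend(send_hi, dest=right),
            comm.Irecv(recv_lo, source=left), comm.Irecv(recv_hi, source=right)]

    # 2) ... update the interior (on a GPU in the real code) while messages fly
    field[2:-2] += 0.01 * np.gradient(field[2:-2], axis=0)   # placeholder stencil work

    # 3) wait for the halos, then update the layers that needed them
    MPI.Request.Waitall(reqs)
    field[0], field[-1] = recv_lo, recv_hi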
NASA Astrophysics Data System (ADS)
Aoki, K.; Ohuchi, N.; Zong, Z.; Arimoto, Y.; Wang, X.; Yamaoka, H.; Kawai, M.; Kondou, Y.; Makida, Y.; Hirose, M.; Endou, T.; Iwasaki, M.; Nakamura, T.
2017-12-01
A remote monitoring system was developed based on the software infrastructure of the Experimental Physics and Industrial Control System (EPICS) for the cryogenic system of superconducting magnets in the interaction region of the SuperKEKB accelerator. SuperKEKB has been constructed to conduct high-energy physics experiments at KEK. These superconducting magnets consist of three apparatuses: the Belle II detector solenoid and the QCSL and QCSR accelerator magnets. They are housed in three cryostats cooled by dedicated helium cryogenic systems. The monitoring system was developed to read data from the EX-8000, which is an integrated instrumentation system that controls all cryogenic components. The monitoring system uses the I/O control tools of the EPICS software for TCP/IP, archiving techniques using a relational database, and an easy human-computer interface. Using this monitoring system, it is possible to remotely monitor all real-time data of the superconducting magnets and cryogenic systems. It is also convenient to share data among multiple groups.
Fifty years of accelerator based physics at Chalk River
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKay, John W.
1999-04-26
The Chalk River Laboratories of Atomic Energy of Canada Ltd. was a major centre for accelerator-based physics for the last fifty years. As early as 1946, nuclear structure studies were started on Cockcroft-Walton accelerators. A series of accelerators followed, including the world's first Tandem, and the MP Tandem-Superconducting Cyclotron (TASCC) facility that was opened in 1986. The nuclear physics program was shut down in 1996. This paper will describe some of the highlights of the accelerators and the research of the laboratory.
NASA Astrophysics Data System (ADS)
Qiang, Ji
2017-10-01
A three-dimensional (3D) Poisson solver with longitudinal periodic and transverse open boundary conditions can have important applications in beam physics of particle accelerators. In this paper, we present a fast, efficient method to solve the Poisson equation using a spectral finite-difference method. This method uses a computational domain that contains the charged particle beam only and has a computational complexity of O(N_u log(N_mode)), where N_u is the total number of unknowns and N_mode is the maximum number of longitudinal or azimuthal modes. This saves both the computational time and the memory required when an artificial boundary condition is imposed on a large extended computational domain. The new 3D Poisson solver is parallelized using a message passing interface (MPI) on multi-processor computers and shows a reasonable parallel performance up to hundreds of processor cores.
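A one-dimensional analogue of the longitudinally periodic part of such a solver (a sketch for orientation, not the paper's algorithm) solves the Poisson equation spectrally with an FFT:

    import numpy as np

    def poisson_periodic_1d(rho, L):
        # Solve d^2(phi)/dz^2 = -rho(z) with periodic boundary conditions by FFT:
        # each Fourier mode obeys -k^2 * phi_k = -rho_k, so phi_k = rho_k / k^2.
        n = rho.size
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
        rho_k = np.fft.fft(rho)
        phi_k = np.zeros_like(rho_k)
        nz = k != 0
        phi_k[nz] = rho_k[nz] / k[nz]**2
        return np.fft.ifft(phi_k).real

    z = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    rho = np.cos(3 * z)                          # test source
    phi = poisson_periodic_1d(rho, 2 * np.pi)
    print(np.allclose(phi, np.cos(3 * z) / 9))   # analytic solution: cos(3z)/9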
NASA Astrophysics Data System (ADS)
Subramaniam, Vivek; Underwood, Thomas C.; Raja, Laxminarayan L.; Cappelli, Mark A.
2018-02-01
We present a magnetohydrodynamic (MHD) numerical simulation to study the physical mechanisms underlying plasma acceleration in a coaxial plasma gun. Coaxial plasma accelerators are known to exhibit two distinct modes of operation depending on the delay between gas loading and capacitor discharging. Shorter delays lead to a high velocity plasma deflagration jet and longer delays produce detonation shocks. During a single operational cycle that typically consists of two discharge events, the plasma acceleration exhibits a behavior characterized by a mode transition from deflagration to detonation. The first of the discharge events, a deflagration that occurs when the discharge expands into an initially evacuated domain, requires a modification of the standard MHD algorithm to account for rarefied regions of the simulation domain. The conventional approach of using a low background density gas to mimic the vacuum background results in the formation of an artificial shock, inconsistent with the physics of free expansion. To this end, we present a plasma-vacuum interface tracking framework with the objective of predicting a physically consistent free expansion, devoid of the spurious shock obtained with the low background density approach. The interface tracking formulation is integrated within the MHD framework to simulate the plasma deflagration and the second discharge event, a plasma detonation, formed due to its initiation in a background prefilled with gas remnant from the deflagration. The mode transition behavior obtained in the simulations is qualitatively compared to that observed in the experiments using high framing rate Schlieren videography. The deflagration mode is further investigated to understand the jet formation process and the axial velocities obtained are compared against experimentally obtained deflagration plasma front velocities. The simulations are also used to provide insight into the conditions responsible for the generation and sustenance of the magnetic pinch. The pinch width and number density distribution are compared to experimentally obtained data to calibrate the inlet boundary conditions used to set up the plasma acceleration problem.
NASA Astrophysics Data System (ADS)
McClure, J. E.; Prins, J. F.; Miller, C. T.
2014-07-01
Multiphase flow implementations of the lattice Boltzmann method (LBM) are widely applied to the study of porous medium systems. In this work, we construct a new variant of the popular "color" LBM for two-phase flow in which a three-dimensional, 19-velocity (D3Q19) lattice is used to compute the momentum transport solution while a three-dimensional, seven-velocity (D3Q7) lattice is used to compute the mass transport solution. Based on this formulation, we implement a novel heterogeneous GPU-accelerated algorithm in which the mass transport solution is computed by multiple shared-memory CPU cores programmed using OpenMP while a concurrent solution of the momentum transport is performed using a GPU. The heterogeneous solution is demonstrated to provide a speedup of 2.6× compared to the multi-core CPU solution and 1.8× compared to the GPU-only solution, due to concurrent utilization of both CPU and GPU bandwidths. Furthermore, we verify that the proposed formulation provides an accurate physical representation of multiphase flow processes and demonstrate that the approach can be applied to perform heterogeneous simulations of two-phase flow in porous media using a typical GPU-accelerated workstation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, Andreu; Badano, Aldo
Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
Multilevel Summation of Electrostatic Potentials Using Graphics Processing Units*
Hardy, David J.; Stone, John E.; Schulten, Klaus
2009-01-01
Physical and engineering practicalities involved in microprocessor design have resulted in flat performance growth for traditional single-core microprocessors. The urgent need for continuing increases in the performance of scientific applications requires the use of many-core processors and accelerators such as graphics processing units (GPUs). This paper discusses GPU acceleration of the multilevel summation method for computing electrostatic potentials and forces for a system of charged atoms, which is a problem of paramount importance in biomolecular modeling applications. We present and test a new GPU algorithm for the long-range part of the potentials that computes a cutoff pair potential between lattice points, essentially convolving a fixed 3-D lattice of “weights” over all sub-cubes of a much larger lattice. The implementation exploits the different memory subsystems provided on the GPU to stream optimally sized data sets through the multiprocessors. We demonstrate for the full multilevel summation calculation speedups of up to 26 using a single GPU and 46 using multiple GPUs, enabling the computation of a high-resolution map of the electrostatic potential for a system of 1.5 million atoms in under 12 seconds. PMID:20161132
Chabiniok, Radomir; Wang, Vicky Y; Hadjicharalambous, Myrianthi; Asner, Liya; Lee, Jack; Sermesant, Maxime; Kuhl, Ellen; Young, Alistair A; Moireau, Philippe; Nash, Martyn P; Chapelle, Dominique; Nordsletten, David A
2016-04-06
With heart and cardiovascular diseases continually challenging healthcare systems worldwide, translating basic research on cardiac (patho)physiology into clinical care is essential. Exacerbating this already extensive challenge is the complexity of the heart, relying on its hierarchical structure and function to maintain cardiovascular flow. Computational modelling has been proposed and actively pursued as a tool for accelerating research and translation. Allowing exploration of the relationships between physics, multiscale mechanisms and function, computational modelling provides a platform for improving our understanding of the heart. Further integration of experimental and clinical data through data assimilation and parameter estimation techniques is bringing computational models closer to use in routine clinical practice. This article reviews developments in computational cardiac modelling and how their integration with medical imaging data is providing new pathways for translational cardiac modelling.
Assessment of computational issues associated with analysis of high-lift systems
NASA Technical Reports Server (NTRS)
Balasubramanian, R.; Jones, Kenneth M.; Waggoner, Edgar G.
1992-01-01
Thin-layer Navier-Stokes calculations for wing-fuselage configurations from subsonic to hypersonic flow regimes are now possible. However, efficient, accurate solutions for using these codes for two- and three-dimensional high-lift systems have yet to be realized. A brief overview of salient experimental and computational research is presented. An assessment of the state-of-the-art relative to high-lift system analysis and identification of issues related to grid generation and flow physics which are crucial for computational success in this area are also provided. Research in support of the high-lift elements of NASA's High Speed Research and Advanced Subsonic Transport Programs which addresses some of the computational issues is presented. Finally, fruitful areas of concentrated research are identified to accelerate overall progress for high lift system analysis and design.
Analytical study of laser-supported combustion waves in hydrogen
NASA Technical Reports Server (NTRS)
Kemp, N. H.; Root, R. G.
1978-01-01
Laser supported combustion (LSC) waves are an important ingredient in the fluid mechanics of CW laser propulsion using a hydrogen propellant and 10.6 micron lasers. Therefore, a computer model has been constructed to solve the one-dimensional energy equation with constant pressure and area. Physical processes considered include convection, conduction, absorption of laser energy, radiation energy loss, and accurate properties of equilibrium hydrogen. Calculations for 1, 3, 10 and 30 atm were made for intensities of 10⁴ to 10⁶ W/cm², which gave temperature profiles, wave speed, etc. To pursue the propulsion application, a second computer model was developed to describe the acceleration of the gas emerging from the LSC wave into a variable-pressure, converging streamtube, still including all the above-mentioned physical processes. The results show very high temperatures in LSC waves which absorb all the laser energy, and high radiative losses.
Real-time dose computation: GPU-accelerated source modeling and superposition/convolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacques, Robert; Wong, John; Taylor, Russell
Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material centric approach, which increased performance by over 80%. Superposition performance was improved by ≈24% to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation. Pinnacle³ times were 8.3 and 94 s, respectively, on an AMD (Sunnyvale, CA) Opteron 254 (two cores, 2.8 GHz). Conclusions: The authors have completed a comprehensive, GPU-accelerated dose engine in order to provide a substantial performance gain over CPU based implementations. Real-time dose computation is feasible with the accuracy levels of the superposition/convolution algorithm.
Accelerated Application Development: The ORNL Titan Experience
Joubert, Wayne; Archibald, Richard K.; Berrill, Mark A.; ...
2015-05-09
The use of computational accelerators such as NVIDIA GPUs and Intel Xeon Phi processors is now widespread in the high performance computing community, with many applications delivering impressive performance gains. However, programming these systems for high performance, performance portability and software maintainability has been a challenge. In this paper we discuss experiences porting applications to the Titan system. Titan, which began planning in 2009 and was deployed for general use in 2013, was the first multi-petaflop system based on accelerator hardware. To ready applications for accelerated computing, a preparedness effort was undertaken prior to delivery of Titan. In this paper we report experiences and lessons learned from this process and describe how users are currently making use of computational accelerators on Titan.
On some variational acceleration techniques and related methods for local refinement
NASA Astrophysics Data System (ADS)
Teigland, Rune
1998-10-01
This paper shows that the well-known variational acceleration method described by Wachspress (E. Wachspress, Iterative Solution of Elliptic Systems and Applications to the Neutron Diffusion Equations of Reactor Physics, Prentice-Hall, Englewood Cliffs, NJ, 1966), and later generalized to multiple levels (known as the additive correction multigrid method (B.R. Hutchinson and G.D. Raithby, Numer. Heat Transf., 9, 511-537 (1986))), is similar to the FAC method of McCormick and Thomas (S.F. McCormick and J.W. Thomas, Math. Comput., 46, 439-456 (1986)) and related multilevel methods. The performance of the method is demonstrated for some simple model problems using local refinement, and suggestions for improving the performance of the method are given.
MC21 analysis of the MIT PWR benchmark: Hot zero power results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly III, D. J.; Aviles, B. N.; Herman, B. R.
2013-07-01
MC21 Monte Carlo results have been compared with hot zero power measurements from an operating pressurized water reactor (PWR), as specified in a new full core PWR performance benchmark from the MIT Computational Reactor Physics Group. Included in the comparisons are axially integrated full core detector measurements, axial detector profiles, control rod bank worths, and temperature coefficients. Power depressions from grid spacers are seen clearly in the MC21 results. Application of Coarse Mesh Finite Difference (CMFD) acceleration within MC21 has been accomplished, resulting in a significant reduction of inactive batches necessary to converge the fission source. CMFD acceleration has also been shown to work seamlessly with the Uniform Fission Site (UFS) variance reduction method. (authors)
The radiation fields around a proton therapy facility: A comparison of Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Ottaviano, G.; Picardi, L.; Pillon, M.; Ronsivalle, C.; Sandri, S.
2014-02-01
A proton therapy test facility with an average beam current below 10 nA and an energy up to 150 MeV is planned to be sited at the Frascati ENEA Research Center, in Italy. The accelerator is composed of a sequence of linear sections. The first one is a commercial 7 MeV proton linac, from which the beam is injected into a SCDTL (Side Coupled Drift Tube Linac) structure reaching the energy of 52 MeV. A conventional CCL (Coupled Cavity Linac) with side coupling cavities then completes the accelerator. The linear structure has the important advantage that the main radiation losses during the acceleration process occur to protons with energy below 20 MeV, with a consequent low production of neutrons and secondary radiation. From the radiation protection point of view, the source of radiation for this facility is therefore almost completely located at the final target. Physical and geometrical models of the device have been developed and implemented into radiation transport computer codes based on the Monte Carlo method. The aim is to assess the radiation field around the main source in support of the safety analysis. For the assessment, independent researchers used two different Monte Carlo computer codes, FLUKA (FLUktuierende KAskade) and MCNPX (Monte Carlo N-Particle eXtended). Both are general purpose tools for calculations of particle transport and interactions with matter, covering an extended range of applications including proton beam analysis. Nevertheless, each one utilizes its own nuclear cross section libraries and uses specific physics models for particle types and energies. The models implemented into the codes are described and the results are presented. The differences between the two calculations are reported and discussed, pointing out disadvantages and advantages of each code in the specific application.
NASA Astrophysics Data System (ADS)
Pashaei, Ali; Piella, Gemma; Planes, Xavier; Duchateau, Nicolas; de Caralt, Teresa M.; Sitges, Marta; Frangi, Alejandro F.
2013-03-01
It has been demonstrated that the acceleration signal has potential to monitor heart function and adaptively optimize Cardiac Resynchronization Therapy (CRT) systems. In this paper, we propose a non-invasive method for computing myocardial acceleration from 3D echocardiographic sequences. Displacement of the myocardium was estimated using a two-step approach: (1) 3D automatic segmentation of the myocardium at end-diastole using 3D Active Shape Models (ASM); (2) propagation of this segmentation along the sequence using non-rigid 3D+t image registration (temporal diffeomorphic free-form deformation, TDFFD). Acceleration was obtained locally at each point of the myocardium from local displacement. The framework has been tested on images from a realistic physical heart phantom (DHP-01, Shelley Medical Imaging Technologies, London, ON, CA) in which the displacement of some control regions was known. Good correlation has been demonstrated between the displacement estimated by the algorithms and the phantom setup. Due to the limited temporal resolution, the acceleration signals are sparse and highly noisy. The study suggests a non-invasive technique to measure cardiac acceleration that may be used to improve the monitoring of cardiac mechanics and optimization of CRT.
TIME-DEPENDENT ELECTRON ACCELERATION IN BLAZAR TRANSIENTS: X-RAY TIME LAGS AND SPECTRAL FORMATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Tiffany R.; Becker, Peter A.; Finke, Justin D., E-mail: pbecker@gmu.edu, E-mail: tlewis13@gmu.edu, E-mail: justin.finke@nrl.navy.mil
2016-06-20
Electromagnetic radiation from blazar jets often displays strong variability, extending from radio to γ-ray frequencies. In a few cases, this variability has been characterized using Fourier time lags, such as those detected in the X-rays from Mrk 421 using BeppoSAX. The lack of a theoretical framework to interpret the data has motivated us to develop a new model for the formation of the X-ray spectrum and the time lags in blazar jets based on a transport equation including terms describing stochastic Fermi acceleration, synchrotron losses, shock acceleration, adiabatic expansion, and spatial diffusion. We derive the exact solution for the Fourier transform of the electron distribution and use it to compute the Fourier transform of the synchrotron radiation spectrum and the associated X-ray time lags. The same theoretical framework is also used to compute the peak flare X-ray spectrum, assuming that a steady-state electron distribution is achieved during the peak of the flare. The model parameters are constrained by comparing the theoretical predictions with the observational data for Mrk 421. The resulting integrated model yields, for the first time, a complete first-principles physical explanation for both the formation of the observed time lags and the shape of the peak flare X-ray spectrum. It also yields direct estimates of the strength of the shock and the stochastic magnetohydrodynamical wave acceleration components in the Mrk 421 jet.
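For orientation, an electron transport equation containing the terms the abstract names (stochastic Fermi acceleration, synchrotron losses, shock acceleration, adiabatic expansion, and escape by spatial diffusion) can be written schematically as follows; the exact coefficients and conventions used by the authors may differ.

\[
\frac{\partial N_e}{\partial t}
= \frac{\partial}{\partial E}\!\left[D_0 E^{2}\frac{\partial N_e}{\partial E}\right]
+ \frac{\partial}{\partial E}\!\left[\left(b_{\rm syn}E^{2} + b_{\rm ad}E - a_{\rm sh}E\right)N_e\right]
- \frac{N_e}{t_{\rm esc}} + Q(E,t),
\]

where the first term represents second-order (stochastic) Fermi acceleration, the second collects synchrotron, adiabatic, and shock terms, the escape time t_esc stands in for spatial diffusion out of the acceleration region, and Q is the injection rate.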
Acceleration and Velocity Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Truax, Roger
2015-01-01
A simple approach for computing acceleration and velocity of a structure from the strain is proposed in this study. First, deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the time histories of strain using a parameter estimation technique together with an autoregressive moving average model. From deflection, slope, and frequencies of the structure, acceleration and velocity of the structure can be obtained using the proposed approach. Simple harmonic motion is assumed for the acceleration computations, and the central difference equation with a linear autoregressive model is used for the computations of velocity. A cantilevered rectangular wing model is used to validate the simple approach. The quality of the computed deflection, acceleration, and velocity values is independent of the number of fibers. The central difference equation with a linear autoregressive model proposed in this study follows the target response with reasonable accuracy. Therefore, the handicap of the backward difference equation, phase shift, is successfully overcome.
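A minimal numerical sketch of the two estimates described above: acceleration under a simple-harmonic-motion assumption and velocity from a central difference of the reconstructed deflection. The two-step strain-to-deflection theory and the autoregressive refinement of the paper are omitted, and the deflection signal and 5 Hz modal frequency below are hypothetical.

    import numpy as np

    # Hypothetical deflection signal reconstructed from strain (illustrative values).
    dt = 0.001                      # sample interval [s]
    t = np.arange(0.0, 1.0, dt)
    omega = 2.0 * np.pi * 5.0       # assumed dominant modal frequency, 5 Hz
    x = 0.01 * np.sin(omega * t)    # deflection [m]

    # Simple-harmonic-motion assumption for acceleration: a(t) = -omega^2 * x(t)
    a_shm = -omega**2 * x

    # Central-difference estimate of velocity: v_n = (x_{n+1} - x_{n-1}) / (2 dt)
    v_cd = np.empty_like(x)
    v_cd[1:-1] = (x[2:] - x[:-2]) / (2.0 * dt)
    v_cd[0], v_cd[-1] = v_cd[1], v_cd[-2]   # simple end-point fill

    print(a_shm[:3], v_cd[:3])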
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zuwei; Zhao, Haibo, E-mail: klinsmannzhb@163.com; Zheng, Chuguang
2015-01-15
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining Markov jump model, weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of particle size distribution with low statistical noise over the full size range and as far as possible to reduce the number of time loopings. Here three coagulation rules are highlighted and it is found that constructing appropriate coagulation rule provides a route to attain the compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering the two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates being used for acceptance–rejection processes by single-looping over all particles, and meanwhile the mean time-step of coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly to be proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are parallel processed by multi-cores on a GPU that can implement the massively threaded data-parallel tasks to obtain remarkable speedup ratio (comparing with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are demonstrated in a physically realistic Brownian coagulation case. The computational accuracy is validated with benchmark solution of discrete-sectional method. The simulation results show that the comprehensive approach can attain very favorable improvement in cost without sacrificing computational accuracy.
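The acceptance-rejection step built on a majorant (upper-bound) kernel can be illustrated with a deliberately simplified, equal-weight sketch; the Brownian kernel, the Markov jump bookkeeping, and the differential weighting of the paper are not reproduced here, and the kernel and particle volumes below are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def kernel(v1, v2):
        # Illustrative, monotonically increasing coagulation kernel (not the paper's)
        return (v1 ** (1.0 / 3.0) + v2 ** (1.0 / 3.0)) ** 2

    v = rng.uniform(1.0, 2.0, 1000)        # particle volumes (equal-weight stand-ins)
    k_max = kernel(v.max(), v.max())       # majorant bound found by a single pass

    # Acceptance-rejection selection of one coagulating pair under the majorant bound
    while True:
        i, j = rng.integers(0, v.size, size=2)
        if i != j and rng.random() < kernel(v[i], v[j]) / k_max:
            break

    v[i] += v[j]                           # merge particle j into particle i
    v = np.delete(v, j)
    print(v.size)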
"SMART": A Compact and Handy FORTRAN Code for the Physics of Stellar Atmospheres
NASA Astrophysics Data System (ADS)
Sapar, A.; Poolamäe, R.
2003-01-01
A new computer code SMART (Spectra from Model Atmospheres by Radiative Transfer) for computing stellar spectra forming in plane-parallel atmospheres has been compiled by us and A. Aret. To guarantee wide compatibility of the code with the shell environment, we chose FORTRAN-77 as the programming language and tried to confine ourselves to the common part of its numerous versions both in WINDOWS and LINUX. SMART can be used for studies of several processes in stellar atmospheres. The current version of the programme is undergoing rapid changes due to our goal to elaborate a simple, handy and compact code. Instead of linearisation (a mathematical method of recurrent approximations) we propose to use physical evolutionary changes, in other words relaxation of quantum state populations from LTE to NLTE, which has been studied using a small number of NLTE states. This computational scheme is essentially simpler and more compact than linearisation. The relaxation scheme enables using, instead of the Λ-iteration procedure, a physically changing emissivity (or source function) which incorporates the changing Menzel coefficients for NLTE quantum state populations. However, the light scattering on free electrons is, in terms of Feynman graphs, a real second-order quantum process and cannot be reduced to consequent processes of absorption and emission as in the case of radiative transfer in spectral lines. With duly chosen input parameters the code SMART enables computing the radiative acceleration imparted to the matter of the stellar atmosphere in turbulence clumps. This also enables connecting the model atmosphere in more detail with the problem of stellar wind triggering. Another problem which has been incorporated into the computer code SMART is diffusion of chemical elements and their isotopes in the atmospheres of chemically peculiar (CP) stars due to the usual radiative acceleration and the essential additional acceleration generated by the light-induced drift. As a special case, using duly chosen pixels on the stellar disk, the spectrum of a rotating star can be computed. No instrumental broadening has been incorporated in the code of SMART. To facilitate the study of stellar spectra, a GUI (Graphical User Interface) with selection of labels by ions has been compiled to study the spectral lines of different elements and ions in the computed emergent flux. An amazing feature of SMART is that its code is very short: it occupies only 4 two-sided two-column A4 sheets in landscape format. In addition, if well commented, it is quite easily readable and understandable. We have used the tactic of writing the comments on the right-side margin (columns starting from 73). Such a short code has been composed by widely using unified input physics (for example the ionisation cross-sections for bound-free transitions and the electron and ion collision rates). A current restriction on the application area of the present version of SMART is that molecules are so far ignored. Thus, it can be used only for warm and hot stellar atmospheres. In the computer code we have tried to avoid bulky, often over-optimised methods, primarily meant to spare computation time. For instance, we compute the continuous absorption coefficient at every wavelength. Nevertheless, within an hour on the personal computer at our disposal (AMD Athlon XP 1700+, 512 MB DDRAM) a stellar spectrum with spectral resolution λ/dλ = 100,000 for the spectral interval 700 -- 30,000 Å is computed.
The model input data and the line data used by us are both computed and compiled by R. Kurucz. In order to follow the presence and representability of quantum states and to enumerate them for NLTE studies, a C++ code transforming the needed data to LaTeX has been compiled. Thus we have composed a quantum state list for all neutrals and ions in the Kurucz file 'gfhyperall.dat'. The list enables the concept of super-states, including partly correlating super-states, to be composed more adequately. We are grateful to R. Kurucz for making his computer codes ATLAS and SYNTHE available on CD-ROM and the Internet; they were used by us as a starting point in composing the new computer code. We are also grateful to the Estonian Science Foundation for grant ESF-4701.
Brookhaven highlights for fiscal year 1991, October 1, 1990--September 30, 1991
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rowe, M.S.; Cohen, A.; Greenberg, D.
1991-12-31
This report highlights Brookhaven National Laboratory's activities for fiscal year 1991. Topics from the four research divisions: Computing and Communications, Instrumentation, Reactors, and Safety and Environmental Protection are presented. The research programs at Brookhaven are diverse, as is reflected by the nine different scientific departments: Accelerator Development, Alternating Gradient Synchrotron, Applied Science, Biology, Chemistry, Medical, National Synchrotron Light Source, Nuclear Energy, and Physics. Administrative and managerial information about Brookhaven is also disclosed. (GHH)
Laboratory directed research and development program FY 1997
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-03-01
This report compiles the annual reports of Laboratory Directed Research and Development projects supported by the Berkeley Lab. Projects are arranged under the following topical sections: (1) Accelerator and fusion research division; (2) Chemical sciences division; (3) Computing sciences; (4) Earth sciences division; (5) Environmental energy technologies division; (6) Life sciences division; (7) Materials sciences division; (8) Nuclear science division; (9) Physics division; (10) Structural biology division; and (11) Cross-divisional. A total of 66 projects are summarized.
Challenges and Opportunities in Propulsion Simulations
2015-09-24
Selected slide content: leverage NVIDIA GPU accelerators; release common computational infrastructure as Distro A for collaboration; add physics modules as either... A Titan vs. Summit comparison is also given: interconnect, Gemini (6.4 GB/s) vs. dual-rail EDR-IB (23 GB/s); interconnect topology, 3D torus vs. non-blocking fat tree; processors, AMD Opteron + NVIDIA Kepler vs. IBM POWER9 + NVIDIA Volta; file system, 32 PB at 1 TB/s (Lustre) vs. 120 PB at 1 TB/s (GPFS); peak power consumption, 9 MW vs. 10 MW.
Application of Plasma Waveguides to High Energy Accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milchberg, Howard M
2013-03-30
The eventual success of laser-plasma based acceleration schemes for high-energy particle physics will require the focusing and stable guiding of short intense laser pulses in reproducible plasma channels. For this goal to be realized, many scientific issues need to be addressed. These issues include an understanding of the basic physics of, and an exploration of various schemes for, plasma channel formation. In addition, the coupling of intense laser pulses to these channels and the stable propagation of pulses in the channels require study. Finally, new theoretical and computational tools need to be developed to aid in the design and analysis of experiments and future accelerators. Here we propose a 3-year renewal of our combined theoretical and experimental program on the applications of plasma waveguides to high-energy accelerators. During the past grant period we have made a number of significant advances in the science of laser-plasma based acceleration. We pioneered the development of clustered gases as a new highly efficient medium for plasma channel formation. Our contributions here include theoretical and experimental studies of the physics of cluster ionization, heating, explosion, and channel formation. We have demonstrated for the first time the generation of and guiding in a corrugated plasma waveguide. The fine structure demonstrated in these guides is only possible with cluster jet heating by lasers. The corrugated guide is a slow wave structure operable at arbitrarily high laser intensities, allowing direct laser acceleration, a process we have explored in detail with simulations. The development of these guides opens the possibility of direct laser acceleration, a true miniature analogue of the SLAC RF-based accelerator. Our theoretical studies during this period have also contributed to the further development of the simulation codes, Wake and QuickPIC, which can be used for both laser driven and beam driven plasma based acceleration schemes. We will continue our development of advanced simulation tools by modifying the QuickPIC algorithm to allow for the simulation of plasma particle pick-up by the wake fields. We have also performed extensive simulations of plasma slow wave structures for efficient THz generation by guided laser beams or accelerated electron beams. We will pursue experimental studies of direct laser acceleration, and THz generation by two methods, ponderomotive-induced THz polarization, and THz radiation by laser accelerated electron beams. We also plan to study both conventional and corrugated plasma channels using our new 30 TW laser system in our new lab facilities. We will investigate production of very long hydrogen plasma waveguides (5 cm). We will study guiding at increasing power levels through the onset of laser-induced cavitation (bubble regime) to assess the role played by the preformed channel. Experiments in direct acceleration will be performed, using laser plasma wakefields as the electron injector. Finally, we will use 2-colour ionization of gases as a high frequency THz source (<60 THz) in order to make femtosecond measurements of low plasma densities in waveguides and beams.
Quantum speedup of Monte Carlo methods.
Montanaro, Ashley
2015-09-08
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
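In symbols, the near-quadratic speedup amounts roughly to the following sample-cost comparison for estimating a mean to additive accuracy ε when the subroutine's output has standard deviation σ (constants and logarithmic factors suppressed; this is a paraphrase of the stated result, not its precise statement):

\[
N_{\rm classical} = O\!\left(\frac{\sigma^{2}}{\epsilon^{2}}\right)
\qquad\longrightarrow\qquad
N_{\rm quantum} = \tilde{O}\!\left(\frac{\sigma}{\epsilon}\right).
\]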
The GeantV project: Preparing the future of simulation
Amadio, G.; J. Apostolakis; Bandieramonte, M.; ...
2015-12-23
Detector simulation is consuming at least half of the HEP computing cycles, and even so, experiments have to take hard decisions on what to simulate, as their needs greatly surpass the availability of computing resources. New experiments still in the design phase such as FCC, CLIC and ILC as well as upgraded versions of the existing LHC detectors will push further the simulation requirements. Since the increase in computing resources is not likely to keep pace with our needs, it is therefore necessary to explore innovative ways of speeding up simulation in order to sustain the progress of High Energy Physics. The GeantV project aims at developing a high performance detector simulation system integrating fast and full simulation that can be ported on different computing architectures, including CPU accelerators. After more than two years of R&D the project has produced a prototype capable of transporting particles in complex geometries exploiting micro-parallelism, SIMD and multithreading. Portability is obtained via C++ template techniques that allow the development of machine-independent computational kernels. Furthermore, a set of tables derived from Geant4 for cross sections and final states provides a realistic shower development and, having been ported into a Geant4 physics list, can be used as a basis for a direct performance comparison.
Physical activities to enhance an understanding of acceleration
NASA Astrophysics Data System (ADS)
Lee, S. A.
2006-03-01
On the basis of their everyday experiences, students have developed an understanding of many of the concepts of mechanics by the time they take their first physics course. However, an accurate understanding of acceleration remains elusive. Many students have difficulties distinguishing between velocity and acceleration. In this report, a set of physical activities to highlight the differences between acceleration and velocity are described. These activities involve running and walking on sand (such as an outdoor volleyball court).
Controlling Flexible Robot Arms Using High Speed Dynamics Process
NASA Technical Reports Server (NTRS)
Jain, Abhinandan (Inventor)
1996-01-01
A robot manipulator controller for a flexible manipulator arm having plural bodies connected at respective movable hinges and flexible in plural deformation modes corresponding to respective modal spatial influence vectors relating deformations of plural spaced nodes of respective bodies to the plural deformation modes, operates by computing articulated body quantities for each of the bodies from respective modal spatial influence vectors, obtaining specified body forces for each of the bodies, and computing modal deformation accelerations of the nodes and hinge accelerations of the hinges from the specified body forces, from the articulated body quantities and from the modal spatial influence vectors. In one embodiment of the invention, the controller further operates by comparing the accelerations thus computed to desired manipulator motion to determine a motion discrepancy, and correcting the specified body forces so as to reduce the motion discrepancy. The manipulator bodies and hinges are characterized by respective vectors of deformation and hinge configuration variables, and computing modal deformation accelerations and hinge accelerations is carried out for each one of the bodies beginning with the outermost body by computing a residual body force from a residual body force of a previous body and from the vector of deformation and hinge configuration variables, computing a resultant hinge acceleration from the body force, the residual body force and the articulated hinge inertia, and revising the residual body force and modal body acceleration.
Advanced computations in plasma physics
NASA Astrophysics Data System (ADS)
Tang, W. M.
2002-05-01
Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.
Discovering chemistry with an ab initio nanoreactor
Wang, Lee-Ping; Titov, Alexey; McGibbon, Robert; ...
2014-11-02
Chemical understanding is driven by the experimental discovery of new compounds and reactivity, and is supported by theory and computation that provides detailed physical insight. While theoretical and computational studies have generally focused on specific processes or mechanistic hypotheses, recent methodological and computational advances harken the advent of their principal role in discovery. Here we report the development and application of the ab initio nanoreactor – a highly accelerated, first-principles molecular dynamics simulation of chemical reactions that discovers new molecules and mechanisms without preordained reaction coordinates or elementary steps. Using the nanoreactor we show new pathways for glycine synthesis from primitive compounds proposed to exist on the early Earth, providing new insight into the classic Urey-Miller experiment. Ultimately, these results highlight the emergence of theoretical and computational chemistry as a tool for discovery in addition to its traditional role of interpreting experimental findings.
Accelerating Full Configuration Interaction Calculations for Nuclear Structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Chao; Sternberg, Philip; Maris, Pieter
2008-04-14
One of the emerging computational approaches in nuclear physics is the full configuration interaction (FCI) method for solving the many-body nuclear Hamiltonian in a sufficiently large single-particle basis space to obtain exact answers - either directly or by extrapolation. The lowest eigenvalues and corresponding eigenvectors for very large, sparse and unstructured nuclear Hamiltonian matrices are obtained and used to evaluate additional experimental quantities. These matrices pose a significant challenge to the design and implementation of efficient and scalable algorithms for obtaining solutions on massively parallel computer systems. In this paper, we describe the computational strategies employed in a state-of-the-art FCI code MFDn (Many Fermion Dynamics - nuclear) as well as techniques we recently developed to enhance the computational efficiency of MFDn. We will demonstrate the current capability of MFDn and report the latest performance improvement we have achieved. We will also outline our future research directions.
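The underlying mathematical task, extracting a few of the lowest eigenpairs of a very large sparse symmetric matrix, can be sketched on a single node with an off-the-shelf Lanczos-type solver. MFDn's distributed-memory implementation, matrix construction, and nuclear physics content are not represented here; the random matrix below is purely a stand-in.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    n = 10000
    rng = np.random.default_rng(0)
    diag = rng.uniform(0.0, 50.0, n)                  # stand-in diagonal "energies"
    off = sp.random(n, n, density=1e-4, random_state=0)
    H = sp.diags(diag) + off + off.T                  # sparse, symmetric stand-in matrix

    vals, vecs = eigsh(H, k=5, which='SA')            # five smallest algebraic eigenvalues
    print(vals)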
Future Directions in Medical Physics: Models, Technology, and Translation to Medicine
NASA Astrophysics Data System (ADS)
Siewerdsen, Jeffrey
The application of physics in medicine has been integral to major advances in diagnostic and therapeutic medicine. Two primary areas represent the mainstay of medical physics research in the last century: in radiation therapy, physicists have propelled advances in conformal radiation treatment and high-precision image guidance; and in diagnostic imaging, physicists have advanced an arsenal of multi-modality imaging that includes CT, MRI, ultrasound, and PET as indispensable tools for noninvasive screening, diagnosis, and assessment of treatment response. In addition to their role in building such technologically rich fields of medicine, physicists have also become integral to daily clinical practice in these areas. The future suggests new opportunities for multi-disciplinary research bridging physics, biology, engineering, and computer science, and collaboration in medical physics carries a strong capacity for identification of significant clinical needs, access to clinical data, and translation of technologies to clinical studies. In radiation therapy, for example, the extraction of knowledge from large datasets on treatment delivery, image-based phenotypes, genomic profile, and treatment outcome will require innovation in computational modeling and connection with medical physics for the curation of large datasets. Similarly in imaging physics, the demand for new imaging technology capable of measuring physical and biological processes over orders of magnitude in scale (from molecules to whole organ systems) and exploiting new contrast mechanisms for greater sensitivity to molecular agents and subtle functional / morphological change will benefit from multi-disciplinary collaboration in physics, biology, and engineering. Also in surgery and interventional radiology, where needs for increased precision and patient safety meet constraints in cost and workflow, development of new technologies for imaging, image registration, and robotic assistance can leverage collaboration in physics, biomedical engineering, and computer science. In each area, there is major opportunity for multi-disciplinary collaboration with medical physics to accelerate the translation of such technologies to clinical use. Research supported by the National Institutes of Health, Siemens Healthcare, and Carestream Health.
Perspectives on Emerging/Novel Computing Paradigms and Future Aerospace Workforce Environments
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
2003-01-01
The accelerating pace of computing technology development shows no signs of abating. Computing power of 100 Tflop/s is likely to be reached by 2004 and Pflop/s (10^15 Flop/s) by 2007. The fundamental physical limits of computation, including information storage limits, communication limits and computation rate limits, will likely be reached by the middle of the present millennium. To overcome these limits, novel technologies and new computing paradigms will be developed. An attempt is made in this overview to put the diverse activities related to new computing paradigms in perspective and to set the stage for the succeeding presentations. The presentation is divided into five parts. In the first part, a brief historical account is given of the development of computer and networking technologies. The second part provides brief overviews of the three emerging computing paradigms: grid, ubiquitous and autonomic computing. The third part lists future computing alternatives and the characteristics of the future computing environment. The fourth part describes future aerospace workforce research, learning and design environments. The fifth part lists the objectives of the workshop and some of the sources of information on future computing paradigms.
Influence of Ionization and Beam Quality on Interaction of TW-Peak CO2 Laser with Hydrogen Plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samulyak, Roman
3D numerical simulations of the interaction of a powerful CO2 laser with hydrogen jets demonstrating the role of ionization and laser beam quality are presented. Simulations are performed in support of the plasma wakefield accelerator experiments being conducted at the BNL Accelerator Test Facility (ATF). The CO2 laser at BNL ATF has several potential advantages for laser wakefield acceleration compared to widely used solid-state lasers. SPACE, a parallel relativistic Particle-in-Cell code, developed at SBU and BNL, has been used in these studies. A novelty of the code is its set of efficient atomic physics algorithms that compute ionization and recombination rates on the grid and transfer them to particles. The primary goal of the initial BNL experiments was to characterize the plasma density by measuring the sidebands in the spectrum of the probe laser. Simulations that resolve hydrogen ionization and laser spectra help explain several trends that were observed in the experiments.
Faraday's Law, Lenz's Law, and Conservation of Energy
NASA Astrophysics Data System (ADS)
Wood, Lowell; Rottmann, Ray; Barrera, Regina
2003-03-01
A magnet accelerates upward through a coil and generates an emf that is recorded by a data acquisition system and a computer. Simultaneously, the position of the magnet as a function of time is recorded using a photogate/pulley system. When the circuit is completed by adding an appropriate load resistor, a current that opposes the flux change is generated in the coil. This current causes a magnetic field in the coil that decreases the acceleration of the rising magnet, a fact that is evident from the position versus time data. The energy dissipated by the resistance in the circuit is shown experimentally to equal the loss in mechanical energy of the system to within a few percent, thus demonstrating conservation of energy. The graphs of speed squared versus displacement show the changes in acceleration produced by the interaction of the induced current and the magnet. Students in introductory physics laboratories have successfully performed this experiment and are able to see many relevant features of Faraday's law.
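One way to state the energy bookkeeping behind the demonstration (R here denotes the total circuit resistance, and the comparison is between the loaded run and an open-circuit run with the same launch):

\[
\varepsilon(t) = -\frac{d\Phi_B}{dt},
\qquad
\int \frac{\varepsilon^{2}(t)}{R}\,dt
\;\approx\;
\left[\tfrac{1}{2}mv^{2} + mgh\right]_{\rm open\ circuit}
-\left[\tfrac{1}{2}mv^{2} + mgh\right]_{\rm loaded},
\]

i.e., the Joule heating in the circuit accounts for the mechanical energy the rising magnet loses when the induced current opposes its motion.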
An Experimental Study of a Pulsed Electromagnetic Plasma Accelerator
NASA Technical Reports Server (NTRS)
Thio, Y. C. Francis; Eskridge, Richard; Lee, Mike; Smith, James; Martin, Adam; Markusic, Tom E.; Cassibry, Jason T.; Rodgers, Stephen L. (Technical Monitor)
2002-01-01
Experiments are being performed on the NASA Marshall Space Flight Center (MSFC) pulsed electromagnetic plasma accelerator (PEPA-0). Data produced from the experiments provide an opportunity to further understand the plasma dynamics in these thrusters via detailed computational modeling. The detailed and accurate understanding of the plasma dynamics in these devices holds the key towards extending their capabilities in a number of applications, including their applications as high power (greater than 1 MW) thrusters, and their use for producing high-velocity, uniform plasma jets for experimental purposes. For this study, the 2-D MHD modeling code, MACH2, is used to provide detailed interpretation of the experimental data. At the same time, a 0-D physics model of the plasma initial phase is developed to guide our 2-D modeling studies.
High-performance dynamic quantum clustering on graphics processors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wittek, Peter, E-mail: peterwittek@acm.org
2013-01-15
Clustering methods in machine learning may benefit from borrowing metaphors from physics. Dynamic quantum clustering associates a Gaussian wave packet with the multidimensional data points and regards them as eigenfunctions of the Schroedinger equation. The clustering structure emerges by letting the system evolve and the visual nature of the algorithm has been shown to be useful in a range of applications. Furthermore, the method only uses matrix operations, which readily lend themselves to parallelization. In this paper, we develop an implementation on graphics hardware and investigate how this approach can accelerate the computations. We achieve a speedup of up to two orders of magnitude over a multicore CPU implementation, which proves that quantum-like methods and acceleration by graphics processing units have great relevance to machine learning.
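Schematically, the construction sums a Gaussian over the data points and reads the result as a quantum state; a common formulation (not necessarily the exact one used in this implementation) is

\[
\psi(\vec{x}) = \sum_{i} \exp\!\left(-\frac{\|\vec{x}-\vec{x}_i\|^{2}}{2\sigma^{2}}\right),
\qquad
\hat{H}\psi = \left(-\frac{\sigma^{2}}{2}\nabla^{2} + V(\vec{x})\right)\psi = E\,\psi,
\]

where the potential V is chosen so that ψ is an eigenfunction of Ĥ, and clusters emerge by evolving the data points under exp(-iĤt) and following their drift toward the minima of V. The matrix-only character of these operations is what makes the GPU mapping natural.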
On a thermal analysis of a second stripper for rare isotope accelerator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Momozaki, Y.; Nolen, J.; Nuclear Engineering Division
2008-08-04
This memo summarizes simple calculations and results of the thermal analysis on the second stripper to be used in the driver linac of the Rare Isotope Accelerator (RIA). Both liquid (sodium) and solid (titanium and vanadium) stripper concepts were considered. These calculations were intended to provide basic information to evaluate the feasibility of liquid (thick film) and solid (rotating wheel) second strippers. Nuclear physics calculations to estimate the volumetric heat generation in the stripper material were performed by 'LISE for Excel'. In the thermal calculations, the strippers were modeled as a thin 2D plate with uniform heat generation within the beam spot. Then, temperature distributions were computed by assuming that the heat spreads conductively in the plate in the radial direction without radiative heat losses to surroundings.
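The plate model described above amounts to steady radial conduction in a thin plate of thickness t_p with uniform volumetric heating q''' confined to a beam spot of radius r_b (a schematic statement of the model, not the memo's exact equations):

\[
\frac{k\,t_p}{r}\frac{d}{dr}\!\left(r\frac{dT}{dr}\right) =
\begin{cases}
-\,q'''\,t_p, & r \le r_b,\\[4pt]
0, & r > r_b,
\end{cases}
\]

with q''' taken from the energy-deposition estimate and radiative losses neglected.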
Computer animation challenges for computational fluid dynamics
NASA Astrophysics Data System (ADS)
Vines, Mauricio; Lee, Won-Sook; Mavriplis, Catherine
2012-07-01
Computer animation requirements differ from those of traditional computational fluid dynamics (CFD) investigations in that visual plausibility and rapid frame update rates trump physical accuracy. We present an overview of the main techniques for fluid simulation in computer animation, starting with Eulerian grid approaches, the Lattice Boltzmann method, Fourier transform techniques and Lagrangian particle introduction. Adaptive grid methods, precomputation of results for model reduction, parallelisation and computation on graphical processing units (GPUs) are reviewed in the context of accelerating simulation computations for animation. A survey of current specific approaches for the application of these techniques to the simulation of smoke, fire, water, bubbles, mixing, phase change and solid-fluid coupling is also included. Adding plausibility to results through particle introduction, turbulence detail and concentration on regions of interest by level set techniques has elevated the degree of accuracy and realism of recent animations. Basic approaches are described here. Techniques to control the simulation to produce a desired visual effect are also discussed. Finally, some references to rendering techniques and haptic applications are mentioned to provide the reader with a complete picture of the challenges of simulating fluids in computer animation.
A comparison of acceleration methods for solving the neutron transport k-eigenvalue problem
NASA Astrophysics Data System (ADS)
Willert, Jeffrey; Park, H.; Knoll, D. A.
2014-10-01
Over the past several years a number of papers have been written describing modern techniques for numerically computing the dominant eigenvalue of the neutron transport criticality problem. These methods fall into two distinct categories. The first category of methods rewrite the multi-group k-eigenvalue problem as a nonlinear system of equations and solve the resulting system using either a Jacobian-Free Newton-Krylov (JFNK) method or Nonlinear Krylov Acceleration (NKA), a variant of Anderson Acceleration. These methods are generally successful in significantly reducing the number of transport sweeps required to compute the dominant eigenvalue. The second category of methods utilize Moment-Based Acceleration (or High-Order/Low-Order (HOLO) Acceleration). These methods solve a sequence of modified diffusion eigenvalue problems whose solutions converge to the solution of the original transport eigenvalue problem. This second class of methods is, in our experience, always superior to the first, as most of the computational work is eliminated by the acceleration from the LO diffusion system. In this paper, we review each of these methods. Our computational results support our claim that the choice of which nonlinear solver to use, JFNK or NKA, should be secondary. The primary computational savings result from the implementation of a HOLO algorithm. We display computational results for a series of challenging multi-dimensional test problems.
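For context, the baseline that all of these schemes accelerate is the classical fission-source power iteration, sketched here on a tiny stand-in problem. The 2x2 operators below are illustrative numbers, not transport data, and real codes replace the linear solve by transport sweeps.

    import numpy as np

    # Power iteration for the generalized eigenproblem  L phi = (1/k) F phi
    L = np.array([[1.0, 0.0], [-0.5, 1.2]])   # loss/streaming operator (illustrative)
    F = np.array([[0.0, 1.9], [0.0, 0.0]])    # fission operator (illustrative)

    phi = np.ones(2)
    k = 1.0
    for _ in range(200):
        src = F @ phi / k                      # fission source with current eigenvalue
        phi_new = np.linalg.solve(L, src)      # stand-in for a transport sweep
        k *= (F @ phi_new).sum() / (F @ phi).sum()
        phi = phi_new / np.linalg.norm(phi_new)

    print(k)

JFNK/NKA recast this fixed-point iteration as a nonlinear residual to be solved, while HOLO methods converge a cheap low-order diffusion eigenproblem between sweeps, which is where most of the savings reported above come from.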
TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics
Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...
2015-04-16
Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software–hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators—parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software–hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.
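The core trick, replacing time-intensive compute stages by the passage of simulated time, is ordinary discrete event simulation; a toy sketch with hypothetical stage names and durations (not TAD's actual stages) follows.

    import heapq

    # Each pending stage is an event (finish_time, name) on a priority queue; the
    # simulator jumps from event to event instead of doing the real work.
    events = []
    now = 0.0
    for rank in range(4):                                  # four hypothetical workers
        heapq.heappush(events, (now + 5.0 + rank, f"md_stage_rank{rank}"))

    while events:
        now, name = heapq.heappop(events)
        print(f"t={now:6.2f}s  finished {name}")
        if name.startswith("md_stage"):
            # a finished MD stage spawns a (cheaper) transition-check stage
            heapq.heappush(events, (now + 1.0, name.replace("md_stage", "check")))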
Longitudinal bunch monitoring at the Fermilab Tevatron and Main Injector synchrotrons
Thurman-Keup, R.; Bhat, C.; Blokland, W.; ...
2011-10-17
The measurement of the longitudinal behavior of the accelerated particle beams at Fermilab is crucial to the optimization and control of the beam and the maximizing of the integrated luminosity for the particle physics experiments. Longitudinal measurements in the Tevatron and Main Injector synchrotrons are based on the analysis of signals from resistive wall current monitors. This study describes the signal processing performed by a 2 GHz-bandwidth oscilloscope together with a computer running a LabVIEW program which calculates the longitudinal beam parameters.
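As an illustration only (not the Fermilab LabVIEW analysis), reducing a digitized wall-current-monitor trace to basic longitudinal parameters might look like the sketch below, using a synthetic Gaussian bunch as the stand-in signal; the sample spacing and bunch width are hypothetical.

    import numpy as np

    dt = 5e-11                               # 50 ps samples, roughly a 2 GHz-class scope
    t = np.arange(0.0, 20e-9, dt)
    i_wcm = np.exp(-0.5 * ((t - 10e-9) / 1.5e-9) ** 2)   # stand-in bunch current

    i_wcm = i_wcm - np.median(i_wcm[:50])    # crude baseline subtraction
    q = (i_wcm * dt).sum()                   # integrated charge (arbitrary units)
    t_bar = (t * i_wcm * dt).sum() / q       # charge-weighted arrival time
    sigma_t = np.sqrt(((t - t_bar) ** 2 * i_wcm * dt).sum() / q)   # RMS bunch length
    print(q, t_bar, sigma_t)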
Fast solar radiation pressure modelling with ray tracing and multiple reflections
NASA Astrophysics Data System (ADS)
Li, Zhen; Ziebart, Marek; Bhattarai, Santosh; Harrison, David; Grey, Stuart
2018-05-01
Physics based SRP (Solar Radiation Pressure) models using ray tracing methods are powerful tools when modelling the forces on complex real world space vehicles. Currently high resolution (1 mm) ray tracing with secondary intersections is done on high performance computers at UCL (University College London). This study introduces the BVH (Bounding Volume Hierarchy) into the ray tracing approach for physics based SRP modelling and makes it possible to run high resolution analysis on personal computers. The ray tracer is both general and efficient enough to cope with the complex shape of satellites and multiple reflections (three or more, with no upper limit). In this study, the traditional ray tracing technique is introduced first and then the BVH is integrated into the ray tracing. Four aspects of the ray tracer were tested to investigate its performance: runtime, accuracy, the effects of multiple reflections and the effects of pixel array resolution. Test results in runtime on GPS IIR and Galileo IOV (In Orbit Validation) satellites show that the BVH can make the force model computation 30-50 times faster. The ray tracer has an absolute accuracy of several nanonewtons, as shown by comparing the test results for spheres and planes with analytical computations. The multiple reflection effects are investigated both in terms of the number of intersections and of the acceleration on the GPS IIR, Galileo IOV and Sentinel-1 spacecraft. Considering the number of intersections, the 3rd reflection captures 99.12%, 99.14%, and 91.34% of the total reflections for the GPS IIR and Galileo IOV satellite buses and the Sentinel-1 spacecraft respectively. In terms of the multiple reflection effects on the acceleration, the secondary reflection effect for the Galileo IOV satellite and Sentinel-1 can reach 0.2 nm/s² and 0.4 nm/s² respectively. The error percentage in the acceleration magnitude results shows that the 3rd reflection should be considered in order to keep it below 0.035%. The pixel array resolution tests show that the dimensions of the components have to be considered when choosing the pixel spacing in order not to miss some components of the satellite in ray tracing. This paper presents the first systematic and quantitative study of the secondary and higher order intersection effects. It shows conclusively that the effect is non-negligible for certain classes of mission.
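The force such a ray tracer accumulates per illuminated surface element is typically the standard flat-plate expression below, evaluated once per pixel/ray intersection and again for each reflected bounce; this is a generic form, and the coefficients and conventions of the UCL software may differ.

\[
d\vec{F} = -\frac{S}{c}\,dA\cos\theta
\left[(1-\rho_s)\,\hat{s} + 2\left(\rho_s\cos\theta + \frac{\rho_d}{3}\right)\hat{n}\right],
\]

with S the solar irradiance at the spacecraft, c the speed of light, dA the element area, ŝ the unit vector from the surface toward the Sun, n̂ the outward surface normal, cos θ = n̂·ŝ, and ρ_s and ρ_d the specular and diffuse reflectivities.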
Application of Intel Many Integrated Core (MIC) accelerators to the Pleim-Xiu land surface scheme
NASA Astrophysics Data System (ADS)
Huang, Melin; Huang, Bormin; Huang, Allen H.
2015-10-01
The land-surface model (LSM) is one physics process in the Weather Research and Forecasting (WRF) model. The LSM includes atmospheric information from the surface layer scheme, radiative forcing from the radiation scheme, and precipitation forcing from the microphysics and convective schemes, together with internal information on the land's state variables and land-surface properties. The LSM provides heat and moisture fluxes over land points and sea-ice points. The Pleim-Xiu (PX) scheme is one such LSM. The PX LSM features three pathways for moisture fluxes: evapotranspiration, soil evaporation, and evaporation from wet canopies. To accelerate the computation of this scheme, we employ the Intel Xeon Phi Many Integrated Core (MIC) architecture, a many-core processor design offering efficient parallelization and vectorization. Our results show that the MIC-based optimization of this scheme running on a Xeon Phi coprocessor 7120P improves the performance by 2.3x and 11.7x compared to the original code running on one CPU socket (eight cores) and on one CPU core, respectively, of an Intel Xeon E5-2670.
G-cueing microcontroller (a microprocessor application in simulators)
NASA Technical Reports Server (NTRS)
Horattas, C. G.
1980-01-01
A g cueing microcontroller is described which consists of a tandem pair of microprocessors, dedicated to the task of simulating pilot sensed cues caused by gravity effects. This task includes execution of a g cueing model which drives actuators that alter the configuration of the pilot's seat. The g cueing microcontroller receives acceleration commands from the aerodynamics model in the main computer and creates the stimuli that produce physical acceleration effects of the aircraft seat on the pilot's anatomy. One of the two microprocessors is a fixed instruction processor that performs all control and interface functions. The other, a specially designed bipolar bit slice microprocessor, is a microprogrammable processor dedicated to all arithmetic operations. The two processors communicate with each other by a shared memory. The g cueing microcontroller contains its own dedicated I/O conversion modules for interface with the seat actuators and controls, and a DMA controller for interfacing with the simulation computer. Any application which can be microcoded within the available memory, the available real time and the available I/O channels could be implemented in the same controller.
Accelerating the design of solar thermal fuel materials through high throughput simulations.
Liu, Yun; Grossman, Jeffrey C
2014-12-10
Solar thermal fuels (STF) store the energy of sunlight, which can then be released later in the form of heat, offering an emission-free and renewable solution for both solar energy conversion and storage. However, this approach is currently limited by the lack of low-cost materials with high energy density and high stability. In this Letter, we present an ab initio high-throughput computational approach to accelerate the design process and allow for searches over a broad class of materials. The high-throughput screening platform we have developed can run through large numbers of molecules composed of earth-abundant elements and identifies possible metastable structures of a given material. Corresponding isomerization enthalpies associated with the metastable structures are then computed. Using this high-throughput simulation approach, we have discovered molecular structures with high isomerization enthalpies that have the potential to be new candidates for high-energy density STF. We have also discovered physical principles to guide further STF materials design through structural analysis. More broadly, our results illustrate the potential of using high-throughput ab initio simulations to design materials that undergo targeted structural transitions.
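The ranking quantity in such a screen is essentially the isomerization enthalpy per unit mass; in generic form (symbols here are illustrative, not the paper's notation):

\[
\Delta H_{\rm storage} = H_{\rm metastable} - H_{\rm ground},
\qquad
u_{\rm grav} = \frac{\Delta H_{\rm storage}}{M_{\rm molar}},
\]

so candidates are retained when the stored enthalpy is large and the metastable isomer is protected by a sufficient reverse-reaction barrier, the stability criterion mentioned above.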
Simon van der Meer (1925-2011): A Modest Genius of Accelerator Science
NASA Astrophysics Data System (ADS)
Chohan, Vinod C.
2011-02-01
Simon van der Meer was a brilliant scientist and a true giant of accelerator science. His seminal contributions to accelerator science have been essential to this day in our quest for satisfying the demands of modern particle physics. Whether we talk of long base-line neutrino physics or antiproton-proton physics at Fermilab or proton-proton physics at the LHC, his techniques and inventions have been a vital part of the modern day successes. Simon van der Meer and Carlo Rubbia were the first CERN scientists to become Nobel laureates in Physics, in 1984. Van der Meer's lesser-known contributions spanned a whole range of subjects in accelerator science, from magnet design to power supply design, beam measurements, slow beam extraction, sophisticated programs and controls.
Proceedings of the 1982 DPF summer study on elementary particle physics and future facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donaldson, R.; Gustafson, R.; Paige, F.
1982-01-01
This book presents the papers given at a conference on high energy physics. Topics considered at the conference included synchrotron radiation, testing the standard model, beyond the standard model, exploring the limits of accelerator technology, novel detector ideas, lepton-lepton colliders, lepton-hadron colliders, hadron-hadron colliders, fixed-target accelerators, non-accelerator physics, and sociology.
Physical activity classification using the GENEA wrist-worn accelerometer.
Zhang, Shaoyan; Rowlands, Alex V; Murray, Peter; Hurst, Tina L
2012-04-01
Most accelerometer-based activity monitors are worn on the waist or lower back for assessment of habitual physical activity. Output is in arbitrary counts that can be classified by activity intensity according to published thresholds. The purpose of this study was to develop methods to classify physical activities into walking, running, household, or sedentary activities based on raw acceleration data from the GENEA (Gravity Estimator of Normal Everyday Activity) and compare classification accuracy from a wrist-worn GENEA with a waist-worn GENEA. Sixty participants (age = 49.4 ± 6.5 yr, body mass index = 24.6 ± 3.4 kg·m⁻²) completed an ordered series of 10-12 semistructured activities in the laboratory and outdoor environment. Throughout, three GENEA accelerometers were worn: one at the waist, one on the left wrist, and one on the right wrist. Acceleration data were collected at 80 Hz. Features obtained from both fast Fourier transform and wavelet decomposition were extracted, and machine learning algorithms were used to classify four types of daily activities including sedentary, household, walking, and running activities. The computational results demonstrated that the algorithm we developed can accurately classify certain types of daily activities, with high overall classification accuracy for both waist-worn GENEA (0.99) and wrist-worn GENEA (right wrist = 0.97, left wrist = 0.96). We have successfully developed algorithms suitable for use with wrist-worn accelerometers for detecting certain types of physical activities; the performance is comparable to waist-worn accelerometers for assessment of physical activity.
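A compact sketch of the kind of pipeline the abstract describes: window the raw wrist acceleration signal, extract a few frequency-domain features, and train a standard classifier. The feature set, the classifier, and the synthetic data below are illustrative only; the study used FFT and wavelet features with its own machine-learning algorithms on 80 Hz GENEA data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    fs, win = 80, 80 * 5                              # 80 Hz sampling, 5 s windows

    def features(window):
        spec = np.abs(np.fft.rfft(window * np.hanning(window.size)))
        return [window.mean(), window.std(),
                spec[1:10].sum(),                     # low-frequency spectral energy
                spec.argmax() * fs / window.size]     # dominant frequency [Hz]

    # Synthetic stand-in data: label 0 = sedentary-like, 1 = walking-like
    rng = np.random.default_rng(1)
    X, y = [], []
    for label, freq in [(0, 0.2), (1, 2.0)]:
        for _ in range(50):
            t = np.arange(win) / fs
            sig = np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(win)
            X.append(features(sig)); y.append(label)

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(clf.score(X, y))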
Accelerating Climate Simulations Through Hybrid Computing
NASA Technical Reports Server (NTRS)
Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark
2009-01-01
Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPUs) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for connection, we identified two challenges: (1) an identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors, two IBM QS22 Cell blades, connected with Infiniband), allowing for seamlessly offloading compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.
The history and future of accelerator radiological protection.
Thomas, R H
2001-01-01
The development of accelerator radiological protection from the mid-1930s, just after the invention of the cyclotron, to the present day is described. Three major themes--physics, personalities and politics--are developed. In the sections on physics, the development of shielding design through measurement, radiation transport calculations, the impact of accelerators on the environment, and dosimetry in accelerator radiation fields are described. The discussion is limited to high-energy, high-intensity electron and proton accelerators. The impact of notable personalities on the development of both the basic science and the accelerator health physics profession itself is described. The important role played by scholars and teachers is discussed. In the final section, which discusses the future of accelerator radiological protection, some emphasis is given to the social and political aspects that must be faced in the years ahead.
NASA Astrophysics Data System (ADS)
Dalichaouch, Thamine; Davidson, Asher; Xu, Xinlu; Yu, Peicheng; Tsung, Frank; Mori, Warren; Li, Fei; Zhang, Chaojie; Lu, Wei; Vieira, Jorge; Fonseca, Ricardo
2016-10-01
In the past few decades, there has been much progress in theory, simulation, and experiment towards using laser wakefield acceleration (LWFA) as the basis for designing and building compact x-ray free-electron-lasers (XFEL) as well as a next generation linear collider. Recently, ionization injection and density downramp injection have been proposed and demonstrated as controllable injection schemes for creating higher quality and ultra-bright relativistic electron beams using LWFA. However, full-3D simulations of plasma-based accelerators are computationally intensive, sometimes taking 100 million core-hours on today's computers. A more efficient quasi-3D algorithm was developed and implemented into OSIRIS using a particle-in-cell description with a charge conserving current deposition scheme in r - z and a gridless Fourier expansion in ϕ. Due to the azimuthal symmetry in LWFA, quasi-3D simulations are computationally more efficient than 3D Cartesian simulations since only the first few harmonics in ϕ are needed to capture the 3D physics of LWFA. Using the quasi-3D approach, we present preliminary results of ionization- and downramp-triggered injection and compare the results against 3D LWFA simulations. This work was supported by DOE and NSF.
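The core of the quasi-3D representation is that each field is stored as a few complex azimuthal harmonics F_m(r, z) on an r-z grid, and the full 3D field at any angle ϕ is recovered from those few modes. The sketch below illustrates that reconstruction with dummy amplitudes; the sign and normalization conventions are assumptions and differ between codes.

    import numpy as np

    # Hedged sketch of the quasi-3D field representation: complex amplitudes
    # F_m(r, z) are stored for a few azimuthal harmonics m on an r-z grid, and the
    # 3D field is reconstructed as F(r, phi, z) = Re sum_m F_m(r, z) exp(-i m phi).
    nr, nz, n_modes = 64, 128, 2           # only m = 0 and m = 1 kept, as in LWFA
    modes = np.zeros((n_modes, nr, nz), dtype=complex)
    modes[0] = 1.0                          # dummy m = 0 amplitude
    modes[1] = 0.3 + 0.1j                   # dummy m = 1 amplitude

    def field_at(phi, modes):
        """Reconstruct the real field on the r-z grid at azimuthal angle phi."""
        m = np.arange(modes.shape[0]).reshape(-1, 1, 1)
        return np.real(np.sum(modes * np.exp(-1j * m * phi), axis=0))

    slice_at_0 = field_at(0.0, modes)
    slice_at_90 = field_at(np.pi / 2, modes)
    print(slice_at_0.shape, slice_at_90[0, 0])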
ERIC Educational Resources Information Center
Paisley, William; Butler, Matilda
This study of the computer/user interface investigated the role of the computer in performing information tasks that users now perform without computer assistance. Users' perceptual/cognitive processes are to be accelerated or augmented by the computer; a long term goal is to delegate information tasks entirely to the computer. Cybernetic and…
Analysis and control of high-speed wheeled vehicles
NASA Astrophysics Data System (ADS)
Velenis, Efstathios
In this work we reproduce driving techniques to mimic expert race drivers and obtain the open-loop control signals that may be used by auto-pilot agents driving autonomous ground wheeled vehicles. Race drivers operate their vehicles at the limits of the acceleration envelope. An accurate characterization of the acceleration capacity of the vehicle is required. Understanding and reproduction of such complex maneuvers also require a physics-based mathematical description of the vehicle dynamics. While most of the modeling issues of ground-vehicles/automobiles are already well established in the literature, lack of understanding of the physics associated with friction generation results in ad-hoc approaches to tire friction modeling. In this work we revisit this aspect of the overall vehicle modeling and develop a tire friction model that provides physical interpretation of the tire forces. The new model is free of those singularities at low vehicle speed and wheel angular rate that are inherent in the widely used empirical static models. In addition, the dynamic nature of the tire model proposed herein allows the study of dynamic effects such as transients and hysteresis. The trajectory-planning problem for an autonomous ground wheeled vehicle is formulated in an optimal control framework aiming to minimize the time of travel and maximize the use of the available acceleration capacity. The first approach to solve the optimal control problem is using numerical techniques. Numerical optimization allows incorporation of a vehicle model of high fidelity and generates realistic solutions. Such an optimization scheme provides an ideal platform to study the limit operation of the vehicle, which would not be possible via straightforward simulation. In this work we emphasize the importance of online applicability of the proposed methodologies. This underlines the need for optimal solutions that require little computational cost and are able to incorporate real, unpredictable environments. A semi-analytic methodology is developed to generate the optimal velocity profile for minimum time travel along a prescribed path. The semi-analytic nature ensures minimal computational cost while a receding horizon implementation allows application of the methodology in uncertain environments. Extensions to increase fidelity of the vehicle model are finally provided.
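One standard way to realize a minimum-time velocity profile along a prescribed path, in the spirit of the semi-analytic method described above, is a forward/backward pass under a friction-circle acceleration limit. The sketch below uses that simple limit with an assumed total acceleration capacity and a dummy curvature profile; it is illustrative only and does not reproduce the thesis's tyre model.

    import numpy as np

    # Hedged sketch of a minimum-time velocity profile along a fixed path: at each
    # arc length s the speed is capped by lateral grip, then forward and backward
    # passes enforce the longitudinal acceleration and braking limits.
    a_max = 8.0                                            # m/s^2, assumed acceleration capacity
    s = np.linspace(0.0, 200.0, 2001)                      # arc length samples [m]
    ds = s[1] - s[0]
    kappa = 0.01 * (1 + np.sin(2 * np.pi * s / 200.0))     # dummy path curvature [1/m]

    v = np.sqrt(a_max / np.maximum(np.abs(kappa), 1e-6))   # lateral-grip speed cap
    v = np.minimum(v, 60.0)                                # assumed top-speed cap [m/s]

    for i in range(len(s) - 1):                            # forward pass: acceleration limit
        a_lat = v[i] ** 2 * abs(kappa[i])
        a_long = np.sqrt(max(a_max ** 2 - a_lat ** 2, 0.0))
        v[i + 1] = min(v[i + 1], np.sqrt(v[i] ** 2 + 2 * a_long * ds))

    for i in range(len(s) - 1, 0, -1):                     # backward pass: braking limit
        a_lat = v[i] ** 2 * abs(kappa[i])
        a_long = np.sqrt(max(a_max ** 2 - a_lat ** 2, 0.0))
        v[i - 1] = min(v[i - 1], np.sqrt(v[i] ** 2 + 2 * a_long * ds))

    print(v.min(), v.max())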
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yokosawa, A.
Spin physics activities at medium and high energies became significantly more active when polarized targets and polarized beams became accessible for hadron-hadron scattering experiments. My overview of spin physics will be inclined toward the study of the strong interaction using facilities at Argonne ZGS, Brookhaven AGS (including RHIC), CERN, Fermilab, LAMPF, and SATURNE. In 1960 accelerator physicists had already been convinced that the ZGS could be unique in accelerating a polarized beam; polarized beams were being accelerated through linear accelerators elsewhere at that time. However, there was much concern about going ahead with the construction of a polarized beam because (i) the source intensity was not high enough to accelerate in the accelerator, (ii) the use of the accelerator would be limited to only polarized-beam physics, that is, proton-proton interaction, and (iii) p-p elastic scattering was not the most popular topic in high-energy physics. In fact, within spin physics, π-nucleon physics looked attractive, since the determination of spin and parity of possible πp resonances attracted much attention. To proceed we needed more data besides total cross sections and elastic differential cross sections; measurements of polarization and other parameters were urgently needed. Polarization measurements had traditionally been performed by analyzing the spin of recoil protons. The drawbacks of this technique are: (i) it involves double scattering, resulting in poor accuracy of the data, and (ii) a carbon analyzer can only be used for a limited region of energy.
Petascale supercomputing to accelerate the design of high-temperature alloys
Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; ...
2017-10-25
Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. As a result, the approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.
Parallel computing in experimental mechanics and optical measurement: A review (II)
NASA Astrophysics Data System (ADS)
Wang, Tianyi; Kemao, Qian
2018-05-01
With advantages such as non-destructiveness, high sensitivity and high accuracy, optical techniques have been successfully applied to the measurement of various important physical quantities in experimental mechanics (EM) and optical measurement (OM). However, in pursuit of higher image resolutions for higher accuracy, the computation burden of optical techniques has become much heavier. Therefore, in recent years, heterogeneous platforms composed of hardware such as CPUs and GPUs have been widely employed to accelerate these techniques due to their cost-effectiveness, short development cycle, easy portability, and high scalability. In this paper, we analyze various works by first illustrating their different architectures, followed by introducing their various parallel patterns for high speed computation. Next, we review the effects of CPU and GPU parallel computing specifically in EM & OM applications in a broad scope, which includes digital image/volume correlation, fringe pattern analysis, tomography, hyperspectral imaging, computer-generated holograms, and integral imaging. In our survey, we have found that high parallelism can always be exploited in such applications for the development of high-performance systems.
Petascale supercomputing to accelerate the design of high-temperature alloys
NASA Astrophysics Data System (ADS)
Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; Haynes, J. Allen
2017-12-01
Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. The approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.
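The last step described above, predicting segregation energies from materials descriptors instead of running new DFT calculations, is ordinary supervised regression. The sketch below shows that step with a random-forest regressor on made-up descriptors and energies; the descriptor names, values, and model choice are assumptions, not the paper's dataset or method.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    # Hedged sketch: learn a mapping from simple elemental descriptors to segregation
    # energies so new solutes can be screened without further first-principles runs.
    rng = np.random.default_rng(1)
    n_solutes = 34
    X = np.column_stack([
        rng.uniform(0.8, 1.8, n_solutes),    # e.g. atomic radius [Angstrom] (placeholder)
        rng.uniform(0.7, 2.5, n_solutes),    # e.g. electronegativity (placeholder)
        rng.uniform(20, 200, n_solutes),     # e.g. bulk modulus [GPa] (placeholder)
    ])
    # Fake "DFT" segregation energies with a simple hidden dependence plus noise.
    y = 0.5 * X[:, 0] - 0.2 * X[:, 1] + 0.001 * X[:, 2] + 0.05 * rng.standard_normal(n_solutes)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    print(cross_val_score(model, X, y, cv=5).mean())   # rough estimate of predictive skill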
NASA Astrophysics Data System (ADS)
Khankhasayev, Mikhail Kh.; Kurmanov, Zhanat B.; Plendl, Hans
1996-12-01
The Table of Contents for the full book PDF is as follows: * Preface * I. Review of Current Status of Nuclear Transmutation Projects * Accelerator-Driven Systems — Survey of the Research Programs in the World * The Los Alamos Accelerator-Driven Transmutation of Nuclear Waste Concept * Nuclear Waste Transmutation Program in the Czech Republic * Tentative Results of the ISTC Supported Study of the ADTT Plutonium Disposition * Recent Neutron Physics Investigations for the Back End of the Nuclear Fuel Cycle * Optimisation of Accelerator Systems for Transmutation of Nuclear Waste * Proton Linac of the Moscow Meson Factory for the ADTT Experiments * II. Computer Modeling of Nuclear Waste Transmutation Methods and Systems * Transmutation of Minor Actinides in Different Nuclear Facilities * Monte Carlo Modeling of Electro-nuclear Processes with Nonlinear Effects * Simulation of Hybrid Systems with a GEANT Based Program * Computer Study of 90Sr and 137Cs Transmutation by Proton Beam * Methods and Computer Codes for Burn-Up and Fast Transients Calculations in Subcritical Systems with External Sources * New Model of Calculation of Fission Product Yields for the ADTT Problem * Monte Carlo Simulation of Accelerator-Reactor Systems * III. Data Basis for Transmutation of Actinides and Fission Products * Nuclear Data in the Accelerator Driven Transmutation Problem * Nuclear Data to Study Radiation Damage, Activation, and Transmutation of Materials Irradiated by Particles of Intermediate and High Energies * Radium Institute Investigations on the Intermediate Energy Nuclear Data on Hybrid Nuclear Technologies * Nuclear Data Requirements in Intermediate Energy Range for Improvement of Calculations of ADTT Target Processes * IV. Experimental Studies and Projects * ADTT Experiments at the Los Alamos Neutron Science Center * Neutron Multiplicity Distributions for GeV Proton Induced Spallation Reactions on Thin and Thick Targets of Pb and U * Solid State Nuclear Track Detector and Radiochemical Studies on the Transmutation of Nuclei Using Relativistic Heavy Ions * Experimental and Theoretical Study of Radionuclide Production on the Electronuclear Plant Target and Construction Materials Irradiated by 1.5 GeV and 130 MeV Protons * Neutronics and Power Deposition Parameters of the Targets Proposed in the ISTC Project 17 * Multicycle Irradiation of Plutonium in Solid Fuel Heavy-Water Blanket of ADS * Compound Neutron Valve of Accelerator-Driven System Sectioned Blanket * Subcritical Channel-Type Reactor for Weapon Plutonium Utilization * Accelerator Driven Molten-Fluoride Reactor with Modular Heat Exchangers on PB-BI Eutectic * A New Conception of High Power Ion Linac for ADTT * Pions and Accelerator-Driven Transmutation of Nuclear Waste? * V. Problems and Perspectives * Accelerator-Driven Transmutation Technologies for Resolution of Long-Term Nuclear Waste Concerns * Closing the Nuclear Fuel-Cycle and Moving Toward a Sustainable Energy Development * Workshop Summary * List of Participants
Advanced Computation in Plasma Physics
NASA Astrophysics Data System (ADS)
Tang, William
2001-10-01
Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. This talk will review recent progress and future directions for advanced simulations in magnetically-confined plasmas with illustrative examples chosen from areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop MPP's to produce 3-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for tens of thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.
Accelerating evaluation of converged lattice thermal conductivity
NASA Astrophysics Data System (ADS)
Qin, Guangzhao; Hu, Ming
2018-01-01
High-throughput computational materials design is an emerging area in materials science, which is based on the fast evaluation of physics-related properties. The lattice thermal conductivity (κ) is a key property of materials with enormous implications. However, the high-throughput evaluation of κ remains a challenge due to the large resource costs and time-consuming procedures. In this paper, we propose a concise strategy to efficiently accelerate the evaluation process of obtaining accurate and converged κ. The strategy is in the framework of the phonon Boltzmann transport equation (BTE) coupled with first-principles calculations. Based on the analysis of harmonic interatomic force constants (IFCs), a large enough cutoff radius (rcutoff), a critical parameter involved in calculating the anharmonic IFCs, can be directly determined to get satisfactory results. Moreover, we find a simple way to largely (~10 times) accelerate the computations by fast reconstruction of the anharmonic IFCs in the convergence test of κ with respect to the rcutoff, which finally confirms that the chosen rcutoff is appropriate. Two-dimensional graphene and phosphorene along with bulk SnSe are presented to validate our approach, and the long-debated divergence problem of thermal conductivity in low-dimensional systems is studied. The quantitative strategy proposed herein can be a good candidate for fast evaluation of reliable κ and thus provides a useful tool for high-throughput materials screening and design with targeted thermal transport properties.
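The convergence test described above can be organized as a simple loop: recompute κ for increasing anharmonic-IFC cutoff radii until the change falls below a tolerance. In the sketch below, kappa_from_cutoff is a placeholder returning a fake saturating curve, standing in for the full anharmonic-IFC reconstruction plus BTE solve; the tolerance and units are assumptions.

    # Hedged sketch of the cutoff-radius convergence test. kappa_from_cutoff is a
    # placeholder for the anharmonic-IFC + Boltzmann-transport calculation; a
    # saturating dummy curve is used so the loop can be run as-is.
    def kappa_from_cutoff(r_cutoff):
        return 100.0 * (1.0 - 2.0 ** (-r_cutoff / 3.0))   # fake kappa [W/mK]

    tolerance = 0.01       # accept < 1% relative change between successive cutoffs
    r_cutoff, kappa_prev = 3.0, kappa_from_cutoff(3.0)
    while True:
        r_cutoff += 1.0
        kappa = kappa_from_cutoff(r_cutoff)
        if abs(kappa - kappa_prev) / kappa_prev < tolerance:
            break
        kappa_prev = kappa
    print(f"converged kappa ~ {kappa:.1f} W/mK at r_cutoff = {r_cutoff} A")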
An introduction to the physics of high energy accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, D.A.; Syphers, J.J.
1993-01-01
This book is an outgrowth of a course given by the authors at various universities and particle accelerator schools. It starts from the basic physics principles governing particle motion inside an accelerator, and leads to a full description of the complicated phenomena and analytical tools encountered in the design and operation of a working accelerator. The book covers acceleration and longitudinal beam dynamics, transverse motion and nonlinear perturbations, intensity dependent effects, emittance preservation methods and synchrotron radiation. These subjects encompass the core concerns of a high energy synchrotron. The authors apparently do not assume the reader has much previous knowledge about accelerator physics. Hence, they take great care to introduce the physical phenomena encountered and the concepts used to describe them. The mathematical formulae and derivations are deliberately kept at a level suitable for beginners. After mastering this course, any interested reader will not find it difficult to follow subjects of more current interest. Useful homework problems are provided at the end of each chapter. Many of the problems are based on actual activities associated with the design and operation of existing accelerators.
NASA Technical Reports Server (NTRS)
Cranmer, Steven R.; Wagner, William (Technical Monitor)
2004-01-01
The PI (Cranmer) and Co-I (A. van Ballegooijen) made substantial progress toward the goal of producing a unified model of the basic physical processes responsible for solar wind acceleration. The approach outlined in the original proposal comprised two complementary pieces: (1) to further investigate individual physical processes under realistic coronal and solar wind conditions, and (2) to extract the dominant physical effects from simulations and apply them to a 1D model of plasma heating and acceleration. The accomplishments in Year 2 are divided into these two categories: 1a. Focused Study of Kinetic Magnetohydrodynamic (MHD) Turbulence; 1b. Focused Study of Non-WKB Alfvén Wave Reflection; and 2. The Unified Model Code. We have continued the development of the computational model of a time-steady open flux tube in the extended corona. The proton-electron Monte Carlo model is being tested, and collisionless wave-particle interactions are being included. In order to better understand how to easily incorporate various kinds of wave-particle processes into the code, the PI performed a detailed study of the so-called "Ito Calculus", i.e., the mathematical theory of how to update the positions of particles in a probabilistic manner when their motions are governed by diffusion in velocity space.
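The "Ito Calculus" remark above refers to advancing Monte Carlo particles stochastically when the kinetic equation contains velocity-space diffusion. A standard way to do this (a hedged illustration, not necessarily the report's scheme) is an Euler-Maruyama step dv = A dt + sqrt(2 D dt) N(0, 1); the constant drift A and diffusion coefficient D below are placeholders for coefficients that in reality depend on position, species, and the wave spectrum.

    import numpy as np

    # Euler-Maruyama update for velocity-space diffusion with drift A and
    # diffusion coefficient D (both assumed constant here for illustration).
    rng = np.random.default_rng(42)
    n_particles, n_steps, dt = 10_000, 1_000, 1e-3
    A, D = 0.5, 0.1                           # assumed drift and diffusion coefficients

    v = np.zeros(n_particles)                 # initial velocities
    for _ in range(n_steps):
        v += A * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)

    print(v.mean(), v.var())                  # mean ~ A*t, variance ~ 2*D*t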
Low-Cost Alternative for Signal Generators in the Physics Laboratory
NASA Astrophysics Data System (ADS)
Pathare, Shirish Rajan; Raghavendra, M. K.; Huli, Saurabhee
2017-05-01
Recently devices such as the optical mouse of a computer, webcams, Wii remote, and digital cameras have been used to record and analyze different physical phenomena quantitatively. Devices like tablets and smartphones are also becoming popular. Different scientific applications available at Google Play (Android devices) or the App Store (iOS devices) make them versatile. One can find many websites that provide information regarding various scientific applications compatible with these systems. A variety of smartphones/tablets are available with different types of sensors embedded. Some of them have sensors that are capable of measuring intensity of light, sound, and magnetic field. The camera of these devices has been used to study projectile motion, and the same device, along with a sensor, has been used to study the physical pendulum. Accelerometers have been used to study free and damped harmonic oscillations and to measure acceleration due to gravity. Using accelerometers and gyroscopes, angular velocity and centripetal acceleration have been measured. The coefficient of restitution for a ball bouncing on the floor has been measured using the application Oscilloscope on the iPhone. In this article, we present the use of an Android device as a low-cost alternative for a signal generator. We use the Signal Generator application installed on the Android device along with an amplifier circuit.
Neveu, Emilie; Ritchie, David W; Popov, Petr; Grudinin, Sergei
2016-09-01
Docking prediction algorithms aim to find the native conformation of a complex of proteins from knowledge of their unbound structures. They rely on a combination of sampling and scoring methods, adapted to different scales. Polynomial Expansion of Protein Structures and Interactions for Docking (PEPSI-Dock) improves the accuracy of the first stage of the docking pipeline, which will sharpen up the final predictions. Indeed, PEPSI-Dock benefits from the precision of a very detailed data-driven model of the binding free energy used with a global and exhaustive rigid-body search space. As well as being accurate, our computations are among the fastest by virtue of the sparse representation of the pre-computed potentials and FFT-accelerated sampling techniques. Overall, this is the first demonstration of an FFT-accelerated docking method coupled with an arbitrary-shaped distance-dependent interaction potential. First, we present a novel learning process to compute data-driven distance-dependent pairwise potentials, adapted from our previous method used for rescoring of putative protein-protein binding poses. The potential coefficients are learned by combining machine-learning techniques with physically interpretable descriptors. Then, we describe the integration of the deduced potentials into an FFT-accelerated spherical sampling provided by the Hex library. Overall, on a training set of 163 heterodimers, PEPSI-Dock achieves a success rate of 91% mid-quality predictions in the top-10 solutions. On a subset of the protein docking benchmark v5, it achieves 44.4% mid-quality predictions in the top-10 solutions when starting from bound structures and 20.5% when starting from unbound structures. The method runs in 5-15 min on a modern laptop and can easily be extended to other types of interactions. https://team.inria.fr/nano-d/software/PEPSI-Dock sergei.grudinin@inria.fr.
Accelerating EPI distortion correction by utilizing a modern GPU-based parallel computation.
Yang, Yao-Hao; Huang, Teng-Yi; Wang, Fu-Nien; Chuang, Tzu-Chao; Chen, Nan-Kuei
2013-04-01
The combination of phase demodulation and field mapping is a practical method to correct echo planar imaging (EPI) geometric distortion. However, since phase dispersion accumulates in each phase-encoding step, the calculation complexity of phase demodulation is Ny-fold higher than that of conventional image reconstruction. Thus, correcting EPI images via phase demodulation is generally a time-consuming task. Parallel computing by employing general-purpose calculations on graphics processing units (GPU) can accelerate scientific computing if the algorithm is parallelized. This study proposes a method that incorporates the GPU-based technique into phase demodulation calculations to reduce computation time. The proposed parallel algorithm was applied to a PROPELLER-EPI diffusion tensor data set. The GPU-based phase demodulation method corrected the EPI distortion and accelerated the computation. The total reconstruction time of the 16-slice PROPELLER-EPI diffusion tensor images with matrix size of 128 × 128 was reduced from 1,754 seconds to 101 seconds by utilizing the parallelized 4-GPU program. GPU computing is a promising method to accelerate EPI geometric correction. The resulting reduction in computation time of phase demodulation should accelerate postprocessing for studies performed with EPI, and should help make the PROPELLER-EPI technique practical for clinical use.
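To see why the correction cost grows Ny-fold, note that each phase-encoding line is acquired at a different time after excitation and therefore carries a different accrued off-resonance phase, so every line must be demodulated separately before the lines are recombined. The sketch below shows a conjugate-phase-style version of this idea on random data; the field map, echo spacing, and sign convention are assumptions, and this is not the paper's implementation (which parallelizes the per-pixel work across GPUs).

    import numpy as np

    # Heavily hedged illustration: demodulate each phase-encode line with its own
    # field-map phase before recombination, so the per-pixel work grows with Ny.
    ny, nx = 128, 128
    rng = np.random.default_rng(0)
    kspace = rng.standard_normal((ny, nx)) + 1j * rng.standard_normal((ny, nx))
    field_map = 20.0 * rng.standard_normal((ny, nx))        # assumed off-resonance [Hz]
    echo_spacing = 0.5e-3                                    # assumed echo spacing [s]

    corrected = np.zeros((ny, nx), dtype=complex)
    for n in range(ny):                                      # loop over phase-encode lines
        line = np.zeros_like(kspace)
        line[n, :] = kspace[n, :]
        contrib = np.fft.ifft2(line)                         # image-domain contribution of line n
        t_n = n * echo_spacing                               # acquisition time of line n
        corrected += contrib * np.exp(2j * np.pi * field_map * t_n)   # undo accrued phase

    print(np.abs(corrected).mean())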
Automatic Beam Path Analysis of Laser Wakefield Particle Acceleration Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubel, Oliver; Geddes, Cameron G.R.; Cormier-Michel, Estelle
2009-10-19
Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert in a pipeline fashion to automatically locate and analyze high energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and finally, based on the derived information, analyzing temporally dynamic features. This combination of techniques supports accurate detection of particle beams, enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, A., E-mail: davidsoa@physics.ucla.edu; Tableman, A., E-mail: Tableman@physics.ucla.edu; An, W., E-mail: anweiming@ucla.edu
2015-01-15
For many plasma physics problems, three-dimensional and kinetic effects are very important. However, such simulations are very computationally intensive. Fortunately, there is a class of problems for which there is nearly azimuthal symmetry and the dominant three-dimensional physics is captured by the inclusion of only a few azimuthal harmonics. Recently, it was proposed [1] to model one such problem, laser wakefield acceleration, by expanding the fields and currents in azimuthal harmonics and truncating the expansion. The complex amplitudes of the fundamental and first harmonic for the fields were solved on an r–z grid and a procedure for calculating the complex current amplitudes for each particle based on its motion in Cartesian geometry was presented using a Marder correction to maintain the validity of Gauss's law. In this paper, we describe an implementation of this algorithm into OSIRIS using a rigorous charge conserving current deposition method to maintain the validity of Gauss's law. We show that this algorithm is a hybrid method which uses a particle-in-cell description in r–z and a gridless description in ϕ. We include the ability to keep an arbitrary number of harmonics and higher order particle shapes. Examples for laser wakefield acceleration, plasma wakefield acceleration, and beam loading are also presented and directions for future work are discussed.
160-fold acceleration of the Smith-Waterman algorithm using a field programmable gate array (FPGA)
Li, Isaac TS; Shum, Warren; Truong, Kevin
2007-01-01
Background To infer homology and subsequently gene function, the Smith-Waterman (SW) algorithm is used to find the optimal local alignment between two sequences. When searching sequence databases that may contain hundreds of millions of sequences, this algorithm becomes computationally expensive. Results In this paper, we focused on accelerating the Smith-Waterman algorithm by using FPGA-based hardware that implemented a module for computing the score of a single cell of the SW matrix. Then using a grid of this module, the entire SW matrix was computed at the speed of field propagation through the FPGA circuit. These modifications dramatically accelerated the algorithm by up to 160-fold compared to a pure software implementation running on the same FPGA with an Altera Nios II soft processor. Conclusion This design of FPGA-accelerated hardware offers a promising new direction for improving the computational performance of genomic database searching. PMID:17555593
160-fold acceleration of the Smith-Waterman algorithm using a field programmable gate array (FPGA).
Li, Isaac T S; Shum, Warren; Truong, Kevin
2007-06-07
To infer homology and subsequently gene function, the Smith-Waterman (SW) algorithm is used to find the optimal local alignment between two sequences. When searching sequence databases that may contain hundreds of millions of sequences, this algorithm becomes computationally expensive. In this paper, we focused on accelerating the Smith-Waterman algorithm by using FPGA-based hardware that implemented a module for computing the score of a single cell of the SW matrix. Then using a grid of this module, the entire SW matrix was computed at the speed of field propagation through the FPGA circuit. These modifications dramatically accelerated the algorithm by up to 160-fold compared to a pure software implementation running on the same FPGA with an Altera Nios II soft processor. This design of FPGA-accelerated hardware offers a promising new direction for improving the computational performance of genomic database searching.
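For reference, the per-cell recurrence that the FPGA module evaluates in hardware is the standard Smith-Waterman update shown below, here in plain Python with a simple match/mismatch score and linear gap penalty; the numeric scoring parameters are illustrative, since the abstract does not restate the exact scheme used.

    # Reference (software) version of the per-cell recurrence implemented in hardware.
    MATCH, MISMATCH, GAP = 2, -1, 2   # illustrative scoring parameters

    def smith_waterman_score(a: str, b: str) -> int:
        rows, cols = len(a) + 1, len(b) + 1
        H = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                s = MATCH if a[i - 1] == b[j - 1] else MISMATCH
                H[i][j] = max(0,
                              H[i - 1][j - 1] + s,   # diagonal: align a[i-1] with b[j-1]
                              H[i - 1][j] - GAP,     # gap in b
                              H[i][j - 1] - GAP)     # gap in a
                best = max(best, H[i][j])
        return best

    print(smith_waterman_score("ACGT", "ACGT"))   # identical strings: four matches x 2 = 8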
Frontier applications of electrostatic accelerators
NASA Astrophysics Data System (ADS)
Liu, Ke-Xin; Wang, Yu-Gang; Fan, Tie-Shuan; Zhang, Guo-Hui; Chen, Jia-Er
2013-10-01
The electrostatic accelerator is a powerful tool in many research fields, such as nuclear physics, radiation biology, materials science, archaeology and earth sciences. Two electrostatic accelerators, a single-stage Van de Graaff with a terminal voltage of 4.5 MV and an EN tandem with a terminal voltage of 6 MV, were installed in the 1980s and have been in operation since the early 1990s at the Institute of Heavy Ion Physics. Many applications have been carried out since then. These two accelerators are described, and summaries of the most important applications in neutron physics and technology, radiation biology and materials science, as well as accelerator mass spectrometry (AMS), are presented.
Computing Models for FPGA-Based Accelerators
Herbordt, Martin C.; Gu, Yongfeng; VanCourt, Tom; Model, Josh; Sukhwani, Bharat; Chiu, Matt
2011-01-01
Field-programmable gate arrays are widely considered as accelerators for compute-intensive applications. A critical phase of FPGA application development is finding and mapping to the appropriate computing model. FPGA computing enables models with highly flexible fine-grained parallelism and associative operations such as broadcast and collective response. Several case studies demonstrate the effectiveness of using these computing models in developing FPGA applications for molecular modeling. PMID:21603152
Distilling free-form natural laws from experimental data.
Schmidt, Michael; Lipson, Hod
2009-04-03
For centuries, scientists have attempted to identify and document analytical laws that underlie physical phenomena in nature. Despite the prevalence of computing power, the process of finding natural laws and their corresponding equations has resisted automation. A key challenge to finding analytic relations automatically is defining algorithmically what makes a correlation in observed data important and insightful. We propose a principle for the identification of nontriviality. We demonstrated this approach by automatically searching motion-tracking data captured from various physical systems, ranging from simple harmonic oscillators to chaotic double-pendula. Without any prior knowledge about physics, kinematics, or geometry, the algorithm discovered Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation. The discovery rate accelerated as laws found for simpler systems were used to bootstrap explanations for more complex systems, gradually uncovering the "alphabet" used to describe those systems.
Cloud Computing and Validated Learning for Accelerating Innovation in IoT
ERIC Educational Resources Information Center
Suciu, George; Todoran, Gyorgy; Vulpe, Alexandru; Suciu, Victor; Bulca, Cristina; Cheveresan, Romulus
2015-01-01
Innovation in Internet of Things (IoT) requires more than just creation of technology and use of cloud computing or big data platforms. It requires accelerated commercialization or aptly called go-to-market processes. To successfully accelerate, companies need a new type of product development, the so-called validated learning process.…
Network-Physics(NP) Bec DIGITAL(#)-VULNERABILITY Versus Fault-Tolerant Analog
NASA Astrophysics Data System (ADS)
Alexander, G. K.; Hathaway, M.; Schmidt, H. E.; Siegel, E.
2011-03-01
Siegel[AMS Joint Mtg.(2002)-Abs.973-60-124] digits logarithmic-(Newcomb(1881)-Weyl(1914; 1916)-Benford(1938)-"NeWBe"/"OLDbe")-law algebraic-inversion to ONLY BEQS BEC:Quanta/Bosons= digits: Synthesis reveals EMP-like SEVERE VULNERABILITY of ONLY DIGITAL-networks(VS. FAULT-TOLERANT ANALOG INvulnerability) via Barabasi "Network-Physics" relative-``statics''(VS.dynamics-[Willinger-Alderson-Doyle(Not.AMS(5/09)]-]critique); (so called)"Quantum-computing is simple-arithmetic(sans division/ factorization); algorithmic-complexities: INtractibility/ UNdecidability/ INefficiency/NONcomputability / HARDNESS(so MIScalled) "noise"-induced-phase-transitions(NITS) ACCELERATION: Cook-Levin theorem Reducibility is Renormalization-(Semi)-Group fixed-points; number-Randomness DEFINITION via WHAT? Query(VS. Goldreich[Not.AMS(02)] How? mea culpa)can ONLY be MBCS "hot-plasma" versus digit-clumping NON-random BEC; Modular-arithmetic Congruences= Signal X Noise PRODUCTS = clock-model; NON-Shor[Physica A,341,586(04)] BEC logarithmic-law inversion factorization:Watkins number-thy. U stat.-phys.); P=/=NP TRIVIAL Proof: Euclid!!! [(So Miscalled) computational-complexity J-O obviation via geometry.
GPU acceleration of particle-in-cell methods
NASA Astrophysics Data System (ADS)
Cowan, Benjamin; Cary, John; Meiser, Dominic
2015-11-01
Graphics processing units (GPUs) have become key components in many supercomputing systems, as they can provide more computations relative to their cost and power consumption than conventional processors. However, to take full advantage of this capability, they require a strict programming model which involves single-instruction multiple-data execution as well as significant constraints on memory accesses. To bring the full power of GPUs to bear on plasma physics problems, we must adapt the computational methods to this new programming model. We have developed a GPU implementation of the particle-in-cell (PIC) method, one of the mainstays of plasma physics simulation. This framework is highly general and enables advanced PIC features such as high order particles and absorbing boundary conditions. The main elements of the PIC loop, including field interpolation and particle deposition, are designed to optimize memory access. We describe the performance of these algorithms and discuss some of the methods used. Work supported by DARPA contract W31P4Q-15-C-0061 (SBIR).
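A concrete example of the particle-to-grid work mentioned above is charge deposition. The sketch below shows 1D cloud-in-cell (linear-weighting) deposition in serial NumPy; on a GPU the two scattered "+=" updates are where threads collide, which is why atomic adds or per-thread buffers are needed. The grid size, particle count, and periodic wrap are assumptions for illustration.

    import numpy as np

    # Minimal 1D cloud-in-cell charge deposition: the scattered writes below are
    # the memory-access pattern that dominates GPU PIC performance.
    nx, dx, q = 64, 1.0, 1.0
    rng = np.random.default_rng(3)
    x = rng.uniform(0.0, nx * dx, 100_000)       # particle positions
    rho = np.zeros(nx)

    cell = np.floor(x / dx).astype(int)          # index of the grid point to the left
    frac = x / dx - cell                         # normalized distance into that cell
    np.add.at(rho, cell % nx, q * (1.0 - frac))  # weight to the left grid point
    np.add.at(rho, (cell + 1) % nx, q * frac)    # weight to the right grid point (periodic)

    print(rho.sum(), q * x.size)                 # total deposited charge is conserved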
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, H.; Li, G., E-mail: gli@clemson.edu
2014-08-28
An accelerated Finite Element Contact Block Reduction (FECBR) approach is presented for computational analysis of ballistic transport in nanoscale electronic devices with arbitrary geometry and unstructured mesh. Finite element formulation is developed for the theoretical CBR/Poisson model. The FECBR approach is accelerated through eigen-pair reduction, lead mode space projection, and component mode synthesis techniques. The accelerated FECBR is applied to perform quantum mechanical ballistic transport analysis of a DG-MOSFET with taper-shaped extensions and a DG-MOSFET with Si/SiO2 interface roughness. The computed electrical transport properties of the devices obtained from the accelerated FECBR approach and associated computational cost as a function of system degrees of freedom are compared with those obtained from the original CBR and direct inversion methods. The performance of the accelerated FECBR in both its accuracy and efficiency is demonstrated.
ERIC Educational Resources Information Center
Rowland, David R.
2010-01-01
A core topic in graduate courses in electrodynamics is the description of radiation from an accelerated charge and the associated radiation reaction. However, contemporary papers still express a diversity of views on the question of whether or not a uniformly accelerating charge radiates suggesting that a complete "physical" understanding of the…
Measurement of Coriolis Acceleration with a Smartphone
ERIC Educational Resources Information Center
Shaku, Asif; Kraft, Jakob
2016-01-01
Undergraduate physics laboratories seldom have experiments that measure the Coriolis acceleration. This has traditionally been the case owing to the inherent complexities of making such measurements. Articles on the experimental determination of the Coriolis acceleration are few and far between in the physics literature. However, because modern…
Final Technical Report for "High Energy Physics at The University of Iowa"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallik, Usha; Meurice, Yannick; Nachtman, Jane
2013-07-31
Particle physics explores the most fundamental building blocks of our universe: the nature of forces, of space and time. By exploring very energetic collisions of sub-nuclear particles with sophisticated detectors at colliding beam accelerators (as well as others), experimental particle physicists have established the current theory known as the Standard Model (SM), one of several theoretical postulates to explain our everyday world. It explains all phenomena known up to a very small fraction of a second after the Big Bang to high precision; the Higgs boson, discovered recently, was the last of the particles predicted by the SM. However, many other phenomena, like the existence of dark energy and dark matter, the absence of anti-matter, the parameters in the SM, neutrino masses, etc., are not explained by the SM. So, in order to find out what lies beyond the SM, i.e., what conditions at the earliest fractions of the first second of the universe gave rise to the SM, we constructed the Large Hadron Collider (LHC) at CERN after the Tevatron collider at Fermi National Accelerator Laboratory. Each of these projects helped us push the boundary further with new insights as we explore a yet higher energy regime. The experiments are extremely complex, and as we push the boundaries of our existing knowledge, we must also push the boundaries of our technical knowhow. So, not only do we pursue humankind's most basic intellectual pursuit of knowledge, we help develop technology that benefits today's highly technical society. Our trained Ph.D. students become experts at fast computing, manipulation of large data volumes and databases, developing cloud computing, fast electronics, advanced detector developments, and complex interfaces in several of these areas. Many particle physics Ph.D.s build their careers at technology and computing facilities; even financial institutions draw on their simulation and statistical skills. Additionally, last but not least, today's discoveries make for tomorrow's practical improvements in everyday life, as with internet technology, fiber optics, and many other examples. At The University of Iowa we are involved in the LHC experiments, ATLAS and CMS, building equipment, performing calibration and maintenance, supporting the infrastructure in hardware, software and analysis, as well as participating in various aspects of data analyses. Our theory group works on the fundamentals of field theories and on the exploration of non-accelerator high-energy neutrinos and possible dark matter searches.
Jalas, S.; Dornmair, I.; Lehe, R.; ...
2017-03-20
Particle in Cell (PIC) simulations are a widely used tool for the investigation of both laser- and beam-driven plasma acceleration. It is a known issue that the beam quality can be artificially degraded by numerical Cherenkov radiation (NCR) resulting primarily from an incorrectly modeled dispersion relation. Pseudo-spectral solvers featuring infinite order stencils can strongly reduce NCR - or even suppress it - and are therefore well suited to correctly model the beam properties. For efficient parallelization of the PIC algorithm, however, localized solvers are inevitable. Arbitrary order pseudo-spectral methods provide this needed locality. Yet, these methods can again be prone to NCR. In this paper, we show that acceptably low solver orders are sufficient to correctly model the physics of interest, while allowing for parallel computation by domain decomposition.
Ultrarelativistic electromagnetic pulses in plasmas
NASA Technical Reports Server (NTRS)
Ashour-Abdalla, M.; Leboeuf, J. N.; Tajima, T.; Dawson, J. M.; Kennel, C. F.
1981-01-01
The physical processes of a linearly polarized electromagnetic pulse of highly relativistic amplitude in an underdense plasma accelerating particles to very high energies are studied through computer simulation. An electron-positron plasma is considered first. The maximum momenta achieved scale as the square of the wave amplitude. This acceleration stops when the bulk of the wave energy is converted to particle energy. The pulse leaves behind as a wake a vacuum region whose length scales as the amplitude of the wave. The results can be explained in terms of a snow plow or piston-like action of the radiation on the plasma. When a mass ratio other than unity is chosen and electrostatic effects begin to play a role, first the ion energy increases faster than the electron energy and then the electron energy catches up later, eventually reaching the same value.
Classical Mechanics Experiments using Wiimotes
NASA Astrophysics Data System (ADS)
Lopez, Alexander; Ochoa, Romulo
2010-02-01
The Wii, a video game console, is a very popular device. Although computationally it is not a powerful machine by today's standards, to a physics educator the controllers are its most important components. The Wiimote (or remote) controller contains a three-axis accelerometer, an infrared detector, and Bluetooth connectivity at a relatively low price. Thanks to available open source code, such as GlovePie, any PC or laptop with Bluetooth capability can detect the information sent out by the Wiimote. We present experiments that use two or three Wiimotes simultaneously to measure the variable accelerations in two-mass systems interacting via springs. Normal modes are determined from the data obtained. Masses and spring constants are varied to analyze their impact on the accelerations of the systems. We present the results of our experiments and compare them with those predicted using Lagrangian mechanics.
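For the symmetric case of two equal masses m coupled to each other and to fixed supports by three identical springs of stiffness k (one plausible arrangement; the article does not specify the configuration), the Lagrangian treatment gives the equations of motion and normal-mode frequencies below, which is what the Wiimote acceleration spectra would be compared against:

    m\ddot{x}_1 = -k x_1 + k(x_2 - x_1), \qquad m\ddot{x}_2 = -k x_2 - k(x_2 - x_1),

    \omega_{\text{in-phase}} = \sqrt{\frac{k}{m}}, \qquad \omega_{\text{out-of-phase}} = \sqrt{\frac{3k}{m}}.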
Centripetal Acceleration: Often Forgotten or Misinterpreted
ERIC Educational Resources Information Center
Singh, Chandralekha
2009-01-01
Acceleration is a fundamental concept in physics which is taught in mechanics at all levels. Here, we discuss some challenges in teaching this concept effectively when the path along which the object is moving has a curvature and centripetal acceleration is present. We discuss examples illustrating that both physics teachers and students have…
Computationally efficient methods for modelling laser wakefield acceleration in the blowout regime
NASA Astrophysics Data System (ADS)
Cowan, B. M.; Kalmykov, S. Y.; Beck, A.; Davoine, X.; Bunkers, K.; Lifschitz, A. F.; Lefebvre, E.; Bruhwiler, D. L.; Shadwick, B. A.; Umstadter, D. P.; Umstadter
2012-08-01
Electron self-injection and acceleration until dephasing in the blowout regime is studied for a set of initial conditions typical of recent experiments with 100-terawatt-class lasers. Two different approaches to computationally efficient, fully explicit, 3D particle-in-cell modelling are examined. First, the Cartesian code vorpal (Nieter, C. and Cary, J. R. 2004 VORPAL: a versatile plasma simulation code. J. Comput. Phys. 196, 538) using a perfect-dispersion electromagnetic solver precisely describes the laser pulse and bubble dynamics, taking advantage of coarser resolution in the propagation direction, with a proportionally larger time step. Using third-order splines for macroparticles helps suppress the sampling noise while keeping the usage of computational resources modest. The second way to reduce the simulation load is using reduced-geometry codes. In our case, the quasi-cylindrical code calder-circ (Lifschitz, A. F. et al. 2009 Particle-in-cell modelling of laser-plasma interaction using Fourier decomposition. J. Comput. Phys. 228(5), 1803-1814) uses decomposition of fields and currents into a set of poloidal modes, while the macroparticles move in the Cartesian 3D space. Cylindrical symmetry of the interaction allows using just two modes, reducing the computational load to roughly that of a planar Cartesian simulation while preserving the 3D nature of the interaction. This significant economy of resources allows using fine resolution in the direction of propagation and a small time step, making numerical dispersion vanishingly small, together with a large number of particles per cell, enabling good particle statistics. Quantitative agreement of two simulations indicates that these are free of numerical artefacts. Both approaches thus retrieve the physically correct evolution of the plasma bubble, recovering the intrinsic connection of electron self-injection to the nonlinear optical evolution of the driver.
Electron Production and Collective Field Generation in Intense Particle Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molvik, A W; Vay, J; Cohen, R
Electron cloud effects (ECEs) are increasingly recognized as important, but incompletely understood, dynamical phenomena, which can severely limit the performance of present electron colliders, the next generation of high-intensity rings, such as PEP-II upgrade, LHC, and the SNS, the SIS 100/200, or future high-intensity heavy ion accelerators such as envisioned in Heavy Ion Inertial Fusion (HIF). Deleterious effects include ion-electron instabilities, emittance growth, particle loss, increase in vacuum pressure, added heat load at the vacuum chamber walls, and interference with certain beam diagnostics. Extrapolation of present experience to significantly higher beam intensities is uncertain given the present level of understanding. With coordinated LDRD projects at LLNL and LBNL, we undertook a comprehensive R&D program including experiments, theory and simulations to better understand the phenomena, establish the essential parameters, and develop mitigating mechanisms. This LDRD project laid the essential groundwork for such a program. We developed insights into the essential processes, modeled the relevant physics, and implemented these models in computational production tools that can be used for self-consistent study of the effect on ion beams. We validated the models and tools through comparison with experimental data, including data from new diagnostics that we developed as part of this work and validated on the High-Current Experiment (HCX) at LBNL. We applied these models to High-Energy Physics (HEP) and other advanced accelerators. This project was highly successful, as evidenced by the two paragraphs above, and six paragraphs following that are taken from our 2003 proposal with minor editing that mostly consisted of changing the tense. Further benchmarks of outstanding performance are: we had 13 publications with 8 of them in refereed journals, our work was recognized by the accelerator and plasma physics communities by 8 invited papers and we have 5 additional invitations for invited papers at upcoming conferences, we attracted collaborators who had SBIR funding, we are collaborating with scientists at CERN and GSI Darmstadt on gas desorption physics for submission to Physical Review Letters, and another PRL on absolute measurements of electron cloud density and Phys. Rev. ST-AB on electron emission physics are also being readied for submission.
Future HEP Accelerators: The US Perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhat, Pushpalatha; Shiltsev, Vladimir
2015-11-02
Accelerator technology has advanced tremendously since the introduction of accelerators in the 1930s, and particle accelerators have become indispensable instruments in high energy physics (HEP) research to probe Nature at smaller and smaller distances. At present, accelerator facilities can be classified into Energy Frontier colliders that enable direct discoveries and studies of high mass scale particles and Intensity Frontier accelerators for exploration of extremely rare processes, usually at relatively low energies. The near term strategies of the global energy frontier particle physics community are centered on fully exploiting the physics potential of the Large Hadron Collider (LHC) at CERN through its high-luminosity upgrade (HL-LHC), while the intensity frontier HEP research is focused on studies of neutrinos at the MW-scale beam power accelerator facilities, such as Fermilab Main Injector with the planned PIP-II SRF linac project. A number of next generation accelerator facilities have been proposed and are currently under consideration for the medium- and long-term future programs of accelerator-based HEP research. In this paper, we briefly review the post-LHC energy frontier options, both for lepton and hadron colliders in various regions of the world, as well as possible future intensity frontier accelerator facilities.
Real-time skin feature identification in a time-sequential video stream
NASA Astrophysics Data System (ADS)
Kramberger, Iztok
2005-04-01
Skin color can be an important feature when tracking skin-colored objects. Particularly this is the case for computer-vision-based human-computer interfaces (HCI). Humans have a highly developed feeling of space and, therefore, it is reasonable to support this within intelligent HCI, where the importance of augmented reality can be foreseen. Joining human-like interaction techniques within multimodal HCI could, or will, become a feature of modern mobile telecommunication devices. On the other hand, real-time processing plays an important role in achieving more natural and physically intuitive ways of human-machine interaction. The main scope of this work is the development of a stereoscopic computer-vision hardware-accelerated framework for real-time skin feature identification in the sense of a single-pass image segmentation process. The hardware-accelerated preprocessing stage is presented with the purpose of color and spatial filtering, where the skin color model within the hue-saturation-value (HSV) color space is given with a polyhedron of threshold values representing the basis of the filter model. An adaptive filter management unit is suggested to achieve better segmentation results. This enables the adaptation of filter parameters to the current scene conditions. Implementation of the suggested hardware structure is given at the level of field programmable system-level integrated circuit (FPSLIC) devices using an embedded microcontroller as their main feature. A stereoscopic cue is achieved using a time-sequential video stream, but this shows no difference for real-time processing requirements in terms of hardware complexity. The experimental results for the hardware-accelerated preprocessing stage are given by efficiency estimation of the presented hardware structure using a simple motion-detection algorithm based on a binary function.
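The color filter described above reduces, in its simplest axis-aligned form, to testing each HSV pixel against fixed bounds (the paper's polyhedron of thresholds generalizes this box). The sketch below shows that degenerate case in NumPy; the numeric bounds are illustrative assumptions, not the thresholds used in the paper.

    import numpy as np

    # Simplest axis-aligned special case of the HSV threshold polyhedron; the
    # bounds below are illustrative assumptions only.
    H_MIN, H_MAX = 0.0, 0.14      # hue (0..1), reddish range
    S_MIN, S_MAX = 0.2, 0.7       # saturation
    V_MIN, V_MAX = 0.35, 1.0      # value (brightness)

    def skin_mask(hsv):
        """hsv: float array of shape (height, width, 3) with channels in 0..1."""
        h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
        return ((h >= H_MIN) & (h <= H_MAX) &
                (s >= S_MIN) & (s <= S_MAX) &
                (v >= V_MIN) & (v <= V_MAX))

    frame = np.random.default_rng(7).random((240, 320, 3))   # stand-in HSV frame
    print(skin_mask(frame).mean())                            # fraction of pixels kept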
Computational modelling of cosmic rays in the neighbourhood of the Sun
NASA Astrophysics Data System (ADS)
Potgieter, M. S.; Strauss, R. D.
2017-10-01
The heliosphere is defined as the plasmatic influence sphere of the Sun and stretches far beyond the solar system. Cosmic rays, as charged particles with energy between about 1 MeV and millions of GeV, arriving from our own Galaxy and beyond, penetrate the heliosphere and encounter the solar wind and embedded magnetic field so that when observed they contain useful information about the basic features of the heliosphere. In order to interpret these observations, obtained on and near the Earth and farther away by several space missions, and to gain understanding of the underlying physics, called heliophysics, we need to simulate the heliosphere and the acceleration, propagation and transport of these astroparticles with numerical models. These types of models vary from magnetohydrodynamic based approaches for simulating the heliosphere to using standard finite-difference numerical schemes to solve transport-type partial differential equations with varying complexity. A large number of these models have been developed locally to do internationally competitive research and have become as such an important training tool for human capacity development in computational physics in South Africa. How these models are applied to various aspects of heliospheric space physics, with illustrative examples, is discussed in this overview.
NASA Technical Reports Server (NTRS)
Jules, Kenol; Lin, Paul P.; Weiss, Daniel S.
2002-01-01
This paper presents the preliminary performance results of the artificial intelligence monitoring system in full operational mode using near real time acceleration data downlinked from the International Space Station. Preliminary microgravity environment characterization analysis results for the International Space Station (Increment-2), obtained using the monitoring system, are presented. Also presented is a comparison between the system's predicted performance, based on ground test data for the US laboratory "Destiny" module, and its actual on-orbit performance, based on measured acceleration data from the U.S. laboratory module of the International Space Station. Finally, preliminary on-orbit disturbance magnitude levels are presented for the Experiment of Physics of Colloids in Space, which are compared with ground test data. The ground test data for the Experiment of Physics of Colloids in Space were acquired from the Microgravity Emission Laboratory, located at the NASA Glenn Research Center, Cleveland, Ohio. The artificial intelligence system was developed by the NASA Glenn Principal Investigator Microgravity Services Project to help the principal investigator teams identify the primary vibratory disturbance sources that are active, at any moment in time, on board the International Space Station and that might impact the microgravity environment their experiments are exposed to. From the Principal Investigator Microgravity Services' web site, the principal investigator teams can monitor, via a dynamic graphical display implemented in Java, in near real time, which event(s) is/are on, such as crew activities, pumps, fans, centrifuges, compressors, crew exercise, structural modes, etc., and decide whether or not to run their experiments, whenever that is an option, based on the acceleration magnitude and frequency sensitivity associated with each experiment. This monitoring system detects primarily the vibratory disturbance sources. The system has built-in capability to detect both known and unknown vibratory disturbance sources. Several soft computing techniques such as Kohonen's Self-Organizing Feature Map, Learning Vector Quantization, Back-Propagation Neural Networks, and Fuzzy Logic were used to design the system.
A coarse-grid projection method for accelerating incompressible flow computations
NASA Astrophysics Data System (ADS)
San, Omer; Staples, Anne E.
2013-01-01
We present a coarse-grid projection (CGP) method for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. After solving the Poisson equation on a coarsened grid, an interpolation scheme is used to obtain the fine data for subsequent time stepping on the full grid. A particular version of the method is applied here to the vorticity-stream function, primitive variable, and vorticity-velocity formulations of incompressible Navier-Stokes equations. We compute several benchmark flow problems on two-dimensional Cartesian and non-Cartesian grids, as well as a three-dimensional flow problem. The method is found to accelerate these computations while retaining a level of accuracy close to that of the fine resolution field, which is significantly better than the accuracy obtained for a similar computation performed solely using a coarse grid. A linear acceleration rate is obtained for all the cases we consider due to the linear-cost elliptic Poisson solver used, with reduction factors in computational time between 2 and 42. The computational savings are larger when a suboptimal Poisson solver is used. We also find that the computational savings increase with increasing distortion ratio on non-Cartesian grids, making the CGP method a useful tool for accelerating generalized curvilinear incompressible flow solvers.
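A minimal sketch of the coarse-grid projection idea, under the assumption of a simple Dirichlet Poisson problem: restrict the fine-grid source to a coarser grid, solve the Poisson equation there with a generic iterative solver, and interpolate the result back to the fine grid for the next time step. The Jacobi solver and bilinear interpolation below are stand-ins, not the authors' implementation.

```python
import numpy as np

def jacobi_poisson(rhs, h, iters=2000):
    """Plain Jacobi iteration for -laplace(u) = rhs with u = 0 on the boundary."""
    u = np.zeros_like(rhs)
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                u[1:-1, 2:] + u[1:-1, :-2] + h**2 * rhs[1:-1, 1:-1])
    return u

def restrict(f):
    """Injection onto a grid with half the resolution (assumes odd grid sizes)."""
    return f[::2, ::2]

def prolong(fc, shape):
    """Bilinear interpolation from the coarse grid back to the fine grid."""
    ny, nx = shape
    yc, xc = np.linspace(0, 1, fc.shape[0]), np.linspace(0, 1, fc.shape[1])
    yf, xf = np.linspace(0, 1, ny), np.linspace(0, 1, nx)
    tmp = np.array([np.interp(yf, yc, col) for col in fc.T]).T   # along y
    return np.array([np.interp(xf, xc, row) for row in tmp])     # then along x

# Fine-grid Poisson source (e.g. divergence of an intermediate velocity field).
n = 65
x = np.linspace(0, 1, n)
rhs = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))

# CGP step: solve the pressure Poisson equation on the coarse grid only.
u_coarse = jacobi_poisson(restrict(rhs), h=2.0 / (n - 1))
u_fine = prolong(u_coarse, rhs.shape)      # fine-grid field used for the projection
print("interpolated solution at centre:", u_fine[n // 2, n // 2])
```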
Analysis of the cylinder’s movement characteristics after entering water based on CFD
NASA Astrophysics Data System (ADS)
Liu, Xianlong
2017-10-01
The motion of a cylinder after vertical water entry proceeds at varying speed. Dynamic-mesh approaches typically rely on unstructured grids, give results that are not ideal, and consume large computing resources. Here, a CFD method is used to calculate the resistance on the cylinder at different velocities, cubic spline interpolation is used to obtain the resistance at intermediate speeds, and the finite difference method is used to solve the equation of motion, yielding the acceleration, velocity, displacement, and other physical quantities after the cylinder enters the water.
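A minimal sketch of that procedure with made-up numbers: resistance values tabulated at a few fixed speeds (standing in for the CFD results) are interpolated with a cubic spline, and the vertical equation of motion is integrated with an explicit finite-difference step to recover acceleration, velocity, and displacement. The mass, buoyancy, and drag table below are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Resistance F(v) tabulated at a few fixed speeds (placeholder values).
v_tab = np.array([0.0, 2.0, 4.0, 6.0, 8.0])           # m/s
F_tab = np.array([0.0, 30.0, 130.0, 300.0, 540.0])     # N (assumed)
drag = CubicSpline(v_tab, F_tab)

m, g = 50.0, 9.81            # cylinder mass (assumed) and gravity
buoyancy = 300.0             # constant buoyant force once submerged (assumed)
dt, v, z = 1e-3, 5.0, 0.0    # time step, entry velocity, initial depth

v_hist, a_hist = [], []
for step in range(5000):
    a = g - (buoyancy + drag(v)) / m     # downward positive
    v += a * dt                          # explicit (forward Euler) update
    z += v * dt
    v_hist.append(v); a_hist.append(a)

print(f"velocity after {5000 * dt:.2f} s: {v_hist[-1]:.3f} m/s, depth {z:.3f} m")
```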
NASA Technical Reports Server (NTRS)
Sturrock, Peter A.
1993-01-01
The aim of the research activity was to increase our understanding of solar activity through data analysis, theoretical analysis, and computer modeling. Because the research subjects were diverse and many researchers were supported by this grant, a select few key areas of research are described in detail. Areas of research include: (1) energy storage and force-free magnetic field; (2) energy release and particle acceleration; (3) radiation by nonthermal electrons; (4) coronal loops; (5) flare classification; (6) longitude distributions of flares; (7) periodicities detected in the solar activity; (8) coronal heating and related problems; and (9) plasma processes.
NASA Technical Reports Server (NTRS)
Barghouty, A. F.
2014-01-01
Accurate estimates of electron-capture cross sections at energies relevant to the modeling of the transport, acceleration, and interaction of energetic neutral atoms (ENA) in space (approximately a few MeV per nucleon), and especially for multi-electron ions, must rely on a detailed, but computationally expensive, quantum-mechanical description of the collision process. Kuang's semi-classical approach is an elegant and efficient way to arrive at these estimates. Motivated by ENA modeling efforts for space applications, we shall briefly present this approach along with sample applications and report on current progress.
Particle Acceleration and Heating Processes at the Dayside Magnetopause
NASA Astrophysics Data System (ADS)
Berchem, J.; Lapenta, G.; Richard, R. L.; El-Alaoui, M.; Walker, R. J.; Schriver, D.
2017-12-01
It is well established that electrons and ions are accelerated and heated during magnetic reconnection at the dayside magnetopause. However, a detailed description of the actual physical mechanisms driving these processes and where they are operating is still incomplete. Many basic mechanisms are known to accelerate particles, including resonant wave-particle interactions as well as stochastic, Fermi, and betatron acceleration. In addition, acceleration and heating processes can occur over different scales. We have carried out kinetic simulations to investigate the mechanisms by which electrons and ions are accelerated and heated at the dayside magnetopause. The simulation model uses the results of global magnetohydrodynamic (MHD) simulations to set the initial state and the evolving boundary conditions of fully kinetic implicit particle-in-cell (iPic3D) simulations for different solar wind and interplanetary magnetic field conditions. This approach allows us to include large domains both in space and energy. In particular, some of these regional simulations include both the magnetopause and bow shock in the kinetic domain, encompassing a range of particle energies from a few eV in the solar wind to keV in the magnetospheric boundary layer. We analyze the results of the iPic3D simulations by discussing wave spectra and particle velocity distribution functions observed in the different regions of the simulation domain, as well as using large-scale kinetic (LSK) computations to follow particles' time histories. We discuss the relevance of our results by comparing them with local observations by the MMS spacecraft.
Centripetal Acceleration Reaction: An Effective and Robust Mechanism for Flapping Flight in Insects
Zhang, Chao; Hedrick, Tyson L.; Mittal, Rajat
2015-01-01
Despite intense study by physicists and biologists, we do not fully understand the unsteady aerodynamics that relate insect wing morphology and kinematics to lift generation. Here, we formulate a force partitioning method (FPM) and implement it within a computational fluid dynamic model to provide an unambiguous and physically insightful division of aerodynamic force into components associated with wing kinematics, vorticity, and viscosity. Application of the FPM to hawkmoth and fruit fly flight shows that the leading-edge vortex is the dominant mechanism for lift generation for both these insects and contributes between 72–85% of the net lift. However, there is another, previously unidentified mechanism, the centripetal acceleration reaction, which generates up to 17% of the net lift. The centripetal acceleration reaction is similar to the classical inviscid added-mass in that it depends only on the kinematics (i.e. accelerations) of the body, but is different in that it requires the satisfaction of the no-slip condition, and a combination of tangential motion and rotation of the wing surface. Furthermore, the classical added-mass force is identically zero for cyclic motion but this is not true of the centripetal acceleration reaction. Furthermore, unlike the lift due to vorticity, centripetal acceleration reaction lift is insensitive to Reynolds number and to environmental flow perturbations, making it an important contributor to insect flight stability and miniaturization. This force mechanism also has broad implications for flow-induced deformation and vibration, underwater locomotion and flows involving bubbles and droplets. PMID:26252016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castellano, T.; De Palma, L.; Laneve, D.
2015-07-01
A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) is written. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The main aim of the code is to obtain useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure, assisted by this approach, seems very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)
NASA Technical Reports Server (NTRS)
Broderick, Daniel
2010-01-01
A computational model calculates the excitation of water rotational levels and emission-line spectra in a cometary coma with applications for the Microwave Instrument for the Rosetta Orbiter (MIRO). MIRO is a millimeter-submillimeter spectrometer that will be used to study the nature of cometary nuclei, the physical processes of outgassing, and the formation of the head region of a comet (coma). The computational model is a means to interpret the data measured by MIRO. The model is based on the accelerated Monte Carlo method, which performs a random angular, spatial, and frequency sampling of the radiation field to calculate the local average intensity of the field. With the model, the water rotational level populations in the cometary coma and the line profiles for the emission from the water molecules as a function of cometary parameters (such as outgassing rate, gas temperature, and gas and electron density) and observation parameters (such as distance to the comet and beam width) are calculated.
Computing Properties of Hadrons, Nuclei and Nuclear Matter from Quantum Chromodynamics (LQCD)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Negele, John W.
Building on the success of two preceding generations of Scientific Discovery through Advanced Computing (SciDAC) projects, this grant supported the MIT component (P.I. John Negele) of a multi-institutional SciDAC-3 project that also included Brookhaven National Laboratory, the lead laboratory with P. I. Frithjof Karsch serving as Project Director, Thomas Jefferson National Accelerator Facility with P. I. David Richards serving as Co-director, University of Washington with P. I. Martin Savage, University of North Carolina with P. I. Rob Fowler, and College of William and Mary with P. I. Andreas Stathopoulos. Nationally, this multi-institutional project coordinated the software development effort that the nuclear physics lattice QCD community needs to ensure that lattice calculations can make optimal use of forthcoming leadership-class and dedicated hardware, including that at the national laboratories, and to exploit future computational resources in the Exascale era.
NASA Technical Reports Server (NTRS)
Perkins, D. H.
1986-01-01
Elementary particle physics is discussed. Status of the Standard Model of electroweak and strong interactions; phenomena beyond the Standard Model; new accelerator projects; and possible contributions from non-accelerator experiments are examined.
W. W. Hansen, Microwave Physics, and Silicon Valley
NASA Astrophysics Data System (ADS)
Leeson, David
2009-03-01
The Stanford physicist W. W. Hansen (b. 1909, AB '29 and PhD '32, MIT post-doc 1933-4, Prof. physics '35-'49, d. 1949) played a seminal role in the development of microwave electronics. His contributions underlay Silicon Valley's postwar ``microwave'' phase, when numerous companies, acknowledging their unique scientific debt to Hansen, flourished around Stanford University. As had the prewar ``radio'' companies, they furthered the regional entrepreneurial culture and prepared the ground for the later semiconductor and computer developments we know as Silicon Valley. In the 1930's, Hansen invented the cavity resonator. He applied this to his concept of the radio-frequency (RF) linear accelerator and, with the Varian brothers, to the invention of the klystron, which made microwave radar practical. As WWII loomed, Hansen was asked to lecture on microwaves to the physicists recruited to the MIT Radiation Laboratory. Hansen's ``Notes on Microwaves,'' the Rad Lab ``bible'' on the subject, had a seminal impact on subsequent works, including the Rad Lab Series. Because of Hansen's failing health, his postwar work, and MIT-Stanford rivalries, the Notes were never published, languishing as an underground classic. I have located remaining copies, and will publish the Notes with a biography honoring the centenary of Hansen's birth. After the war, Hansen founded Stanford's Microwave Laboratory to develop powerful klystrons and linear accelerators. He collaborated with Felix Bloch in the discovery of nuclear magnetic resonance. Hansen experienced first-hand Stanford's evolution from its depression-era physics department to corporate, then government funding. Hansen's brilliant career was cut short by his death in 1949, after his induction in the National Academy of Sciences. His ideas were carried on in Stanford's two-mile long linear accelerator and the development of Silicon Valley.
NASA Technical Reports Server (NTRS)
Teren, F.
1977-01-01
Minimum time accelerations of aircraft turbofan engines are presented. The calculation of these accelerations was made by using a piecewise linear engine model, and an algorithm based on nonlinear programming. Use of this model and algorithm allows such trajectories to be readily calculated on a digital computer with a minimal expenditure of computer time.
Convergence acceleration of viscous flow computations
NASA Technical Reports Server (NTRS)
Johnson, G. M.
1982-01-01
A multiple-grid convergence acceleration technique introduced for application to the solution of the Euler equations by means of Lax-Wendroff algorithms is extended to treat compressible viscous flow. Computational results are presented for the solution of the thin-layer version of the Navier-Stokes equations using the explicit MacCormack algorithm, accelerated by a convective coarse-grid scheme. Extensions and generalizations are mentioned.
Gravity Modeling for Variable Fidelity Environments
NASA Technical Reports Server (NTRS)
Madden, Michael M.
2006-01-01
Aerospace simulations can model worlds, such as the Earth, with differing levels of fidelity. The simulation may represent the world as a plane, a sphere, an ellipsoid, or a high-order closed surface. The world may or may not rotate. The user may select lower fidelity models based on computational limits, a need for simplified analysis, or comparison to other data. However, the user will also wish to retain a close semblance of behavior to the real world. The effects of gravity on objects are an important component of modeling real-world behavior. Engineers generally equate the term gravity with the observed free-fall acceleration. However, free-fall acceleration is not the same for all observers. To observers on the surface of a rotating world, free-fall acceleration is the sum of gravitational attraction and the centrifugal acceleration due to the world's rotation. On the other hand, free-fall acceleration equals gravitational attraction to an observer in inertial space. Surface-observed simulations (e.g. aircraft), which use non-rotating world models, may choose to model observed free-fall acceleration as the gravity term; such a model actually combines gravitational attraction with centrifugal acceleration due to the Earth's rotation. However, this modeling choice invites confusion as one evolves the simulation to higher fidelity world models or adds inertial observers. Care must be taken to model gravity in concert with the world model to avoid denigrating the fidelity of modeling observed free fall. The paper will go into greater depth on gravity modeling and the physical disparities and synergies that arise when coupling specific gravity models with world models.
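A small worked illustration of the distinction drawn above, assuming a spherical, uniformly rotating Earth: the free-fall acceleration measured by a surface observer is the inertial gravitational attraction reduced by the centrifugal contribution of the planet's rotation, so the two notions of "gravity" differ by a latitude-dependent term.

```python
import math

MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R = 6_371_000.0          # mean Earth radius, m (spherical approximation)
OMEGA = 7.2921159e-5     # Earth's rotation rate, rad/s

def observed_free_fall(lat_deg):
    """Approximate surface free-fall acceleration at a given latitude."""
    lat = math.radians(lat_deg)
    g_attraction = MU / R**2                      # inertial-frame attraction
    centrifugal = OMEGA**2 * R * math.cos(lat)    # magnitude, directed off the spin axis
    # Only the component along the local vertical reduces the measured value.
    return g_attraction - centrifugal * math.cos(lat)

for lat in (0.0, 45.0, 90.0):
    print(f"latitude {lat:4.1f} deg: g_observed ~ {observed_free_fall(lat):.4f} m/s^2")
```

At the equator the centrifugal term removes roughly 0.034 m/s^2 from the attraction and at the poles it vanishes, which is exactly the disparity a simulation must keep consistent between rotating and non-rotating world models.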
The chaotic dynamical aperture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S.Y.; Tepikian, S.
1985-10-01
Nonlinear magnetic forces become more important for particles in the modern large accelerators. These nonlinear elements are introduced either intentionally to control beam dynamics or by uncontrollable random errors. Equations of motion in the nonlinear Hamiltonian are usually non-integrable. Because of the nonlinear part of the Hamiltonian, the tune diagram of accelerators is a jungle. Nonlinear magnet multipoles are important in keeping the accelerator operation point in the safe quarter of the hostile jungle of resonant tunes. Indeed, all modern accelerator designs have taken advantage of nonlinear mechanics. On the other hand, the effect of the uncontrollable random multipoles should be evaluated carefully. A powerful method of studying the effect of these nonlinear multipoles is a particle tracking calculation, in which a group of test particles is traced through these magnetic multipoles in the accelerator for hundreds to millions of turns in order to test the dynamical aperture of the machine. These methods are extremely useful in the design of large accelerators such as the SSC, LEP, HERA and RHIC. These calculations unfortunately take a tremendous amount of computing time. In this paper, we try to apply existing methods in nonlinear dynamics to study a possible alternative solution. When the Hamiltonian motion becomes chaotic, the tune of the machine becomes undefined. The aperture related to the chaotic orbit can be identified as the chaotic dynamical aperture. We review the method of determining chaotic orbits and apply the method to nonlinear problems in accelerator physics. We then discuss the scaling properties and the effect of random sextupoles.
Effective correlator for RadioAstron project
NASA Astrophysics Data System (ADS)
Sergeev, Sergey
This paper presents the implementation of an FX software correlator for Very Long Baseline Interferometry, adapted for the RadioAstron project. The software correlator is implemented for heterogeneous computing systems using graphics accelerators, and it is shown that the interferometry task maps onto graphics hardware with high efficiency. The host processor of the heterogeneous computing system forms the data flow for the graphics accelerators, whose number corresponds to the number of frequency channels; for the RadioAstron project there are seven such channels. Each accelerator computes the correlation matrix over all baselines for a single frequency channel. The initial data are converted to floating-point format, corrected with the corresponding delay function, and the entire correlation matrix is computed simultaneously using the sliding Fourier transform. Thanks to the match between the problem and the architecture of graphics accelerators, the performance obtained on one Kepler-platform processor corresponds to that of a four-node Intel computing cluster for this task. The task scales successfully not only to a large number of graphics accelerators but also to a large number of nodes with multiple accelerators.
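As a minimal, CPU-only sketch of the FX correlation pattern described above (Fourier-transform each station's segment, then cross-multiply and accumulate a station-by-station correlation matrix per spectral bin), the toy below uses synthetic voltages and plain FFTs; the delay correction, sliding Fourier transform, and GPU mapping of the RadioAstron correlator are all omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stations, n_samples, n_segments = 4, 256, 50
n_bins = n_samples // 2 + 1                     # real-FFT spectral channels

# Synthetic voltage streams: a common signal plus independent station noise,
# standing in for delay-corrected samples from each telescope.
common = rng.standard_normal((n_segments, n_samples))
voltages = common[None] + 0.5 * rng.standard_normal((n_stations, n_segments, n_samples))

corr = np.zeros((n_stations, n_stations, n_bins), dtype=complex)
for seg in range(n_segments):
    spectra = np.fft.rfft(voltages[:, seg, :], axis=-1)        # "F" step
    # "X" step: accumulate visibilities for every station pair and spectral bin.
    corr += np.einsum('if,jf->ijf', spectra, np.conj(spectra))

corr /= n_segments
print("correlation matrix shape (station x station x channel):", corr.shape)
```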
Acceleration and torque feedback for robotic control - Experimental results
NASA Technical Reports Server (NTRS)
McInroy, John E.; Saridis, George N.
1990-01-01
Gross motion control of robotic manipulators typically requires significant on-line computations to compensate for nonlinear dynamics due to gravity, Coriolis, centripetal, and friction nonlinearities. One controller proposed by Luo and Saridis avoids these computations by feeding back joint acceleration and torque. This study implements the controller on a Puma 600 robotic manipulator. Joint acceleration measurement is obtained by measuring linear accelerations of each joint, and deriving a computationally efficient transformation from the linear measurements to the angular accelerations. Torque feedback is obtained by using the previous torque sent to the joints. The implementation has stability problems on the Puma 600 due to the extremely high gains inherent in the feedback structure. Since these high gains excite frequency modes in the Puma 600, the algorithm is modified to decrease the gain inherent in the feedback structure. The resulting compensator is stable and insensitive to high frequency unmodeled dynamics. Moreover, a second compensator is proposed which uses acceleration and torque feedback, but still allows nonlinear terms to be fed forward. Thus, by feeding the increment in the easily calculated gravity terms forward, improved responses are obtained. Both proposed compensators are implemented, and the real time results are compared to those obtained with the computed torque algorithm.
NASA Astrophysics Data System (ADS)
Del McDaniel, Floyd; Doyle, Barney L.
Jerry Duggan was an experimental MeV-accelerator-based nuclear and atomic physicist who, over the past few decades, played a key role in the important transition of this field from basic to applied physics. His fascination for and application of particle accelerators spanned almost 60 years, and led to important discoveries in the following fields: accelerator-based analysis (accelerator mass spectrometry, ion beam techniques, nuclear-based analysis, nuclear microprobes, neutron techniques); accelerator facilities, stewardship, and technology development; accelerator applications (industrial, medical, security and defense, and teaching with accelerators); applied research with accelerators (advanced synthesis and modification, radiation effects, nanosciences and technology); physics research (atomic and molecular physics, and nuclear physics); and many other areas and applications. Here we describe Jerry’s physics education at the University of North Texas (B. S. and M. S.) and Louisiana State University (Ph.D.). We also discuss his research at UNT, LSU, and Oak Ridge National Laboratory, his involvement with the industrial aspects of accelerators, and his impact on many graduate students, colleagues at UNT and other universities, national laboratories, and industry and acquaintances around the world. Along the way, we found it hard not to also talk about his love of family, sports, fishing, and other recreational activities. While these were significant accomplishments in his life, Jerry will be most remembered for his insight in starting and his industry in maintaining and growing what became one of the most diverse accelerator conferences in the world — the International Conference on the Application of Accelerators in Research and Industry, or what we all know as CAARI. Through this conference, which he ran almost single-handed for decades, Jerry came to know, and became well known by, literally thousands of atomic and nuclear physicists, accelerator engineers and vendors, medical doctors, cultural heritage experts... the list goes on and on. While thousands of his acquaintances already miss Jerry, this is being felt most by his family and us (B.D. and F.D.M).
Yoo, Won-Gyu
2015-01-01
[Purpose] This study showed the effects of different computer typing speeds on acceleration and peak contact pressure of the fingertips during computer typing. [Subjects] Twenty-one male computer workers voluntarily consented to participate in this study. They consisted of 7 workers who could type 200-300 characters/minute, 7 workers who could type 300-400 characters/minute, and 7 workers who could type 400-500 characters/minute. [Methods] The acceleration and peak contact pressure of the fingertips were measured for the different typing speed groups using an accelerometer and the CONFORMat system. [Results] The fingertip contact pressure was increased in the high typing speed group compared with the low and medium typing speed groups. The fingertip acceleration was increased in the high typing speed group compared with the low and medium typing speed groups. [Conclusion] The results of the present study indicate that a fast typing speed causes continuous pressure stress to be applied to the fingers, thereby creating pain in the fingers.
A large-scale solar dynamics observatory image dataset for computer vision applications.
Kucuk, Ahmet; Banda, Juan M; Angryk, Rafal A
2017-01-01
The National Aeronautics and Space Administration (NASA) Solar Dynamics Observatory (SDO) mission has given us unprecedented insight into the Sun's activity. By capturing approximately 70,000 images a day, this mission has created one of the richest and biggest repositories of solar image data available to mankind. With such massive amounts of information, researchers have been able to produce great advances in detecting solar events. In this resource, we compile SDO solar data into a single repository in order to provide the computer vision community with a standardized and curated large-scale dataset of several hundred thousand solar events found on high resolution solar images. This publicly available resource, along with the generation source code, will accelerate computer vision research on NASA's solar image data by reducing the amount of time spent performing data acquisition and curation from the multiple sources we have compiled. By improving the quality of the data with thorough curation, we anticipate wider adoption and interest from both the computer vision and solar physics communities.
A new method for computing the reliability of consecutive k-out-of-n:F systems
NASA Astrophysics Data System (ADS)
Gökdere, Gökhan; Gürcan, Mehmet; Kılıç, Muhammet Burak
2016-01-01
Consecutive k-out-of-n system models have been applied to reliability evaluation in many physical systems, such as those encountered in telecommunications, the design of integrated circuits, microwave relay stations, oil pipeline systems, vacuum systems in accelerators, computer ring networks, and spacecraft relay stations. These systems are characterized as logical connections among the components of the systems placed in lines or circles. In the literature, a great deal of attention has been paid to the study of the reliability evaluation of consecutive k-out-of-n systems. In this paper, we propose a new method to compute the reliability of consecutive k-out-of-n:F systems with n linearly and circularly arranged components. The proposed method provides a simple way of determining the system failure probability. We also provide R-Project code based on the proposed method to compute the reliability of linear and circular systems with a large number of components.
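The paper's own method and R code are not reproduced here, but a small dynamic-programming sketch shows how the reliability of a linear consecutive k-out-of-n:F system with i.i.d. components can be computed by tracking the probability of each "number of trailing failed components" state; the parameters below are arbitrary.

```python
import random

def linear_consec_k_out_of_n_F(n, k, p):
    """Reliability of a linear consecutive k-out-of-n:F system.

    Components are i.i.d. with working probability p; the system fails as
    soon as k consecutive components have failed.
    """
    q = 1.0 - p
    # state[j] = P(system still working and exactly j trailing components failed)
    state = [1.0] + [0.0] * (k - 1)
    for _ in range(n):
        new = [0.0] * k
        new[0] = p * sum(state)           # current component works: failure run resets
        for j in range(k - 1):
            new[j + 1] = q * state[j]     # it fails: run grows (reaching k => system down)
        state = new
    return sum(state)

# Sanity check against brute-force Monte Carlo (illustrative parameters).
n, k, p, trials = 10, 3, 0.9, 200_000
ok = 0
for _ in range(trials):
    run, failed = 0, False
    for _ in range(n):
        run = run + 1 if random.random() > p else 0
        if run >= k:
            failed = True
            break
    ok += not failed
print("DP reliability :", round(linear_consec_k_out_of_n_F(n, k, p), 4))
print("MC reliability :", round(ok / trials, 4))
```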
NASA Technical Reports Server (NTRS)
Barnett, Alan R.; Widrick, Timothy W.; Ludwiczak, Damian R.
1996-01-01
Solving for dynamic responses of free-free launch vehicle/spacecraft systems acted upon by buffeting winds is commonly performed throughout the aerospace industry. Due to the unpredictable nature of this wind loading event, these problems are typically solved using frequency response random analysis techniques. To generate dynamic responses for spacecraft with statically-indeterminate interfaces, spacecraft contractors prefer to develop models which have response transformation matrices developed for mode acceleration data recovery. This method transforms spacecraft boundary accelerations and displacements into internal responses. Unfortunately, standard MSC/NASTRAN modal frequency response solution sequences cannot be used to combine acceleration- and displacement-dependent responses required for spacecraft mode acceleration data recovery. External user-written computer codes can be used with MSC/NASTRAN output to perform such combinations, but these methods can be labor and computer resource intensive. Taking advantage of the analytical and computer resource efficiencies inherent within MSC/NASTRAN, a DMAP Alter has been developed to combine acceleration- and displacement-dependent modal frequency responses for performing spacecraft mode acceleration data recovery. The Alter has been used successfully to efficiently solve a common aerospace buffeting wind analysis.
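A small frequency-domain sketch of the combination step described above, assuming random placeholder data: internal responses are recovered by applying an acceleration-dependent and a displacement-dependent transformation matrix to the boundary accelerations and displacements and summing the two contributions at every frequency. The matrices, spectra, and the harmonic relation used to derive displacements are illustrative assumptions, not NASTRAN output.

```python
import numpy as np

rng = np.random.default_rng(1)
n_freq, n_boundary, n_internal = 64, 6, 20

# Placeholder response transformation matrices (would come from the spacecraft model).
T_acc = rng.standard_normal((n_internal, n_boundary))
T_disp = rng.standard_normal((n_internal, n_boundary))

# Placeholder boundary acceleration frequency responses and the corresponding
# displacements via the harmonic relation x(w) = -a(w) / w^2.
freqs = np.linspace(1.0, 100.0, n_freq)
acc = rng.standard_normal((n_freq, n_boundary)) + 1j * rng.standard_normal((n_freq, n_boundary))
disp = -acc / (2 * np.pi * freqs[:, None]) ** 2

# Mode acceleration data recovery: combine both contributions at every frequency.
internal = acc @ T_acc.T + disp @ T_disp.T       # shape (n_freq, n_internal)
print("internal response matrix shape:", internal.shape)
```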
NASA Astrophysics Data System (ADS)
Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.
2013-12-01
A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ~10^6 cores and sustained performance over ~2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios.
Acceleration and Velocity Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Truax, Roger
2016-01-01
A simple approach for computing acceleration and velocity of a structure from the strain is proposed in this study. First, deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the time histories of strain using a parameter estimation technique together with an Autoregressive Moving Average model. From deflection, slope, and frequencies of the structure, acceleration and velocity of the structure can be obtained using the proposed approach. Keywords: shape sensing; fiber optic strain sensor; system equivalent reduction and expansion process.
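A minimal single-mode illustration of the final step (not the paper's two-step/ARMA procedure): once a deflection time history and its dominant frequency are known, harmonic motion gives the acceleration directly as a = -w^2 x and the velocity amplitude as w*A, so the deflection signal can be scaled rather than numerically differentiated. The synthetic signal and FFT-based frequency estimate below are assumptions.

```python
import numpy as np

fs, f0, A = 200.0, 4.0, 0.01        # sample rate, modal frequency, deflection amplitude (assumed)
t = np.arange(0, 5, 1 / fs)
deflection = A * np.sin(2 * np.pi * f0 * t)   # stands in for deflection recovered from strain

# Estimate the dominant frequency from the deflection record (FFT peak instead of ARMA).
spec = np.abs(np.fft.rfft(deflection))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
f_est = freqs[np.argmax(spec[1:]) + 1]        # skip the DC bin
w = 2 * np.pi * f_est

velocity = np.gradient(deflection, t)         # numerical check
acceleration = -w**2 * deflection             # scaled directly from the deflection

print(f"estimated frequency: {f_est:.2f} Hz")
print(f"peak velocity      : {velocity.max():.3f} m/s   (theory {A * w:.3f})")
print(f"peak acceleration  : {acceleration.max():.3f} m/s^2 (theory {A * w**2:.3f})")
```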
Advanced Accelerators: Particle, Photon and Plasma Wave Interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Ronald L.
2017-06-29
The overall objective of this project was to study the acceleration of electrons to very high energies over very short distances based on trapping slowly moving electrons in the fast moving potential wells of large amplitude plasma waves, which have relativistic phase velocities. These relativistic plasma waves, or wakefields, are the basis of table-top accelerators that have been shown to accelerate electrons to the same high energies as kilometer-length linear particle colliders operating using traditional decades-old acceleration techniques. The accelerating electrostatic fields of the relativistic plasma wave accelerators can be as large as GigaVolts/meter, and our goal was to study techniques for remotely measuring these large fields by injecting low energy probe electron beams across the plasma wave and measuring the beam’s deflection. Our method of study was via computer simulations, and these results suggested that the deflection of the probe electron beam was directly proportional to the amplitude of the plasma wave. This is the basis of a proposed diagnostic technique, and numerous studies were performed to determine the effects of changing the electron beam, plasma wave and laser beam parameters. Further simulation studies included copropagating laser beams with the relativistic plasma waves. New interesting results came out of these studies including the prediction that very small scale electron beam bunching occurs, and an anomalous line focusing of the electron beam occurs under certain conditions. These studies were summarized in the dissertation of a graduate student who obtained the Ph.D. in physics. This past research program has motivated ideas for further research to corroborate these results using particle-in-cell simulation tools which will help design a test-of-concept experiment in our laboratory and a scaled up version for testing at a major wakefield accelerator facility.
USPAS | U.S. Particle Accelerator School
U.S. Particle Accelerator School (USPAS): education in beam physics and accelerator technology, offering university-style and symposium-style programs, university credits, and a Joint International Accelerator School.
NASA Astrophysics Data System (ADS)
Zhu, B.; Lin, J.; Yuan, X.; Li, Y.; Shen, C.
2016-12-01
The role of turbulent acceleration and heating in the fractal magnetic reconnection of solar flares is still not clear, especially at the X-point in the diffusion region. From a numerical-experiment standpoint, it is difficult to quantitatively analyze vortex generation, turbulence evolution, and particle acceleration and heating as magnetic islands coalesce in a fractal manner, form a largest plasmoid, and are ejected from the diffusion region using classical magnetohydrodynamic methods. With the development of particle-based numerical methods (the particle-in-cell [PIC] and lattice Boltzmann [LBM] methods) and of high-performance computing over the past two decades, kinetic simulation has become an effective means of exploring the role of magnetic- and electric-field turbulence in charged-particle acceleration and heating, since all the physical aspects relating to turbulent reconnection are taken into account. In this paper, an LBM DxQy lattice and extended distribution are added to the charged-particle-to-grid interpolation of a PIC finite-difference time-domain scheme on a Yee grid, and the resulting hybrid PIC-LBM simulation tool is used to investigate turbulent acceleration on TIANHE-2. Realistic solar coronal conditions (L≈10^5 km, B≈50-500 G, T≈5×10^6 K, n≈10^8-10^9, mi/me≈500-1836) are applied to study turbulent acceleration and heating in the fractal current sheet of a solar flare. At stage I, magnetic islands shrink due to magnetic tension forces; the shrinking halts when the kinetic energy of the accelerated particles is sufficient to stop further collapse, so the particle energy gain is naturally a large fraction of the released magnetic energy. At stages II and III, particles from the energized group move into the center of the diffusion region and stay there longer; in contrast, particles from the non-energized group only skim the outer part of the diffusion region. At stage IV, the reconnection-produced plasmoid (about 200 km) stops expanding and carries enough energy to eject particles at constant velocity. Finally, the role of magnetic-field and electric-field turbulence in electron and ion acceleration in the diffusion regions of the solar flare fractal current sheet is given.
Physics of the inner heliosphere 1-10R sub O plasma diagnostics and models
NASA Technical Reports Server (NTRS)
Withbroe, G. L.
1984-01-01
The physics of solar wind flow in the acceleration region and of impulsive phenomena in the solar corona is studied. The study of magnetohydrodynamic wave propagation in the corona and the solutions of the steady state and time dependent solar wind equations give insight into the physics of the solar wind acceleration region, plasma heating and plasma acceleration processes, and the formation of shocks. Also studied is the development of techniques for placing constraints on the mechanisms responsible for coronal heating.
The International Committee for Future Accelerators (ICFA): 1976 to the present
Rubinstein, Roy
2016-12-14
The International Committee for Future Accelerators (ICFA) has been in existence now for four decades. It plays an important role in allowing discussions by the world particle physics community on the status and future of very large particle accelerators and the particle physics and related fields associated with them. Here, this paper gives some indication of what ICFA is and does, and also describes its involvement in some of the more important developments in the particle physics field since its founding.
Kadam, Shantanu; Vanka, Kumar
2013-02-15
Methods based on the stochastic formulation of chemical kinetics have the potential to accurately reproduce the dynamical behavior of various biochemical systems of interest. However, the computational expense makes them impractical for the study of real systems. Attempts to render these methods practical have led to the development of accelerated methods, where the reaction numbers are modeled by Poisson random numbers. However, for certain systems, such methods give rise to physically unrealistic negative numbers for species populations. The methods which make use of binomial variables, in place of Poisson random numbers, have since become popular, and have been partially successful in addressing this problem. In this manuscript, the development of two new computational methods, based on the representative reaction approach (RRA), has been discussed. The new methods endeavor to solve the problem of negative numbers, by making use of tools like the stochastic simulation algorithm and the binomial method, in conjunction with the RRA. It is found that these newly developed methods perform better than other binomial methods used for stochastic simulations, in resolving the problem of negative populations. Copyright © 2012 Wiley Periodicals, Inc.
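A toy comparison illustrating the negative-population problem described above for the single reaction A -> B: a Poisson-based leap can draw more reaction events than there are A molecules left, while a binomial-based leap is bounded by the current population and can never go negative. This is a generic tau-leaping sketch with arbitrary parameters, not the RRA-based methods of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
k, tau, steps = 0.5, 0.4, 30      # rate constant, leap size, number of leaps (assumed)

def simulate(sampler):
    """Tau-leap the reaction A -> B starting from 40 A molecules."""
    a = 40
    for _ in range(steps):
        if a <= 0:
            break
        mean_events = k * a * tau
        a -= sampler(a, mean_events)
    return a

poisson = lambda a, m: rng.poisson(m)                      # unbounded: can overshoot a
binomial = lambda a, m: rng.binomial(a, min(1.0, m / a))   # at most a events, never negative

print("worst-case final A, Poisson leap :", min(simulate(poisson) for _ in range(200)))
print("worst-case final A, binomial leap:", min(simulate(binomial) for _ in range(200)))
```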
Lattice Boltzmann accelerated direct simulation Monte Carlo for dilute gas flow simulations.
Di Staso, G; Clercx, H J H; Succi, S; Toschi, F
2016-11-13
Hybrid particle-continuum computational frameworks permit the simulation of gas flows by locally adjusting the resolution to the degree of non-equilibrium displayed by the flow in different regions of space and time. In this work, we present a new scheme that couples the direct simulation Monte Carlo (DSMC) with the lattice Boltzmann (LB) method in the limit of isothermal flows. The former handles strong non-equilibrium effects, as they typically occur in the vicinity of solid boundaries, whereas the latter is in charge of the bulk flow, where non-equilibrium can be dealt with perturbatively, i.e. according to Navier-Stokes hydrodynamics. The proposed concurrent multiscale method is applied to the dilute gas Couette flow, showing major computational gains when compared with the full DSMC scenarios. In addition, it is shown that the coupling with LB in the bulk flow can speed up the DSMC treatment of the Knudsen layer with respect to the full DSMC case. In other words, LB acts as a DSMC accelerator. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2016 The Author(s).
Accelerating the Design of Solar Thermal Fuel Materials through High Throughput Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y; Grossman, JC
2014-12-01
Solar thermal fuels (STF) store the energy of sunlight, which can then be released later in the form of heat, offering an emission-free and renewable solution for both solar energy conversion and storage. However, this approach is currently limited by the lack of low-cost materials with high energy density and high stability. In this Letter, we present an ab initio high-throughput computational approach to accelerate the design process and allow for searches over a broad class of materials. The high-throughput screening platform we have developed can run through large numbers of molecules composed of earth-abundant elements and identifies possible metastable structures of a given material. Corresponding isomerization enthalpies associated with the metastable structures are then computed. Using this high-throughput simulation approach, we have discovered molecular structures with high isomerization enthalpies that have the potential to be new candidates for high-energy density STF. We have also discovered physical principles to guide further STF materials design through structural analysis. More broadly, our results illustrate the potential of using high-throughput ab initio simulations to design materials that undergo targeted structural transitions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Massimo, F., E-mail: francesco.massimo@ensta-paristech.fr; Dipartimento SBAI, Università di Roma “La Sapienza“, Via A. Scarpa 14, 00161 Roma; Atzeni, S.
Architect, a time explicit hybrid code designed to perform quick simulations for electron driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma-dynamics is necessary. Particle in Cell (PIC) codes represent the state-of-the-art technique to investigate the underlying physics and possible experimental scenarios; however PIC codes demand heavy computational resources. Architect code substantially reduces the need for computational resources by using a hybrid approach: relativistic electron bunches are treated kinetically as in a PIC code and the background plasma as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and fluid equations. In this paper both the underlying algorithms as well as a comparison with a fully three dimensional particle in cell code are reported. The comparison highlights the good agreement between the two models up to the weakly non-linear regimes. In highly non-linear regimes the two models only disagree in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.
Special issue on compact x-ray sources
NASA Astrophysics Data System (ADS)
Hooker, Simon; Midorikawa, Katsumi; Rosenzweig, James
2014-04-01
Journal of Physics B: Atomic, Molecular and Optical Physics is delighted to announce a forthcoming special issue on compact x-ray sources, to appear in the winter of 2014, and invites you to submit a paper. The potential for high-brilliance x- and gamma-ray sources driven by advanced, compact accelerators has gained increasing attention in recent years. These novel sources—sometimes dubbed 'fifth generation sources'—will build on the revolutionary advance of the x-ray free-electron laser (FEL). New radiation sources of this type have widespread applications, including in ultra-fast imaging, diagnostic and therapeutic medicine, and studies of matter under extreme conditions. Rapid advances in compact accelerators and in FEL techniques make this an opportune moment to consider the opportunities which could be realized by bringing these two fields together. Further, the successful development of compact radiation sources driven by compact accelerators will be a significant milestone on the road to the development of high-gradient colliders able to operate at the frontiers of particle physics. Thus the time is right to publish a peer-reviewed collection of contributions concerning the state-of-the-art in: advanced and novel acceleration techniques; sophisticated physics at the frontier of FELs; and the underlying and enabling techniques of high brightness electron beam physics. Interdisciplinary research connecting two or more of these fields is also increasingly represented, as exemplified by entirely new concepts such as plasma based electron beam sources, and coherent imaging with fs-class electron beams. We hope that in producing this special edition of Journal of Physics B: Atomic, Molecular and Optical Physics (iopscience.iop.org/0953-4075/) we may help further a challenging mission and ongoing intellectual adventure: the harnessing of newly emergent, compact advanced accelerators to the creation of new, agile light sources with unprecedented capabilities. New schemes for compact accelerators: laser- and beam-driven plasma accelerators; dielectric laser accelerators; THz accelerators. Latest results for compact accelerators. Target design and staging of advanced accelerators. Advanced injection and phase space manipulation techniques. Novel diagnostics: single-shot measurement of sub-fs bunch duration; measurement of ultra-low emittance. Generation and characterization of incoherent radiation: betatron and undulator radiation; Thomson/Compton scattering sources, novel THz sources. Generation and characterization of coherent radiation. Novel FEL simulation techniques. Advances in simulations of novel accelerators: simulations of injection and acceleration processes; simulations of coherent and incoherent radiation sources; start-to-end simulations of fifth generation light sources. Novel undulator schemes. Novel laser drivers for laser-driven accelerators: high-repetition rate laser systems; high wall-plug efficiency systems. Applications of compact accelerators: imaging; radiography; medical applications; electron diffraction and microscopy. Please submit your article by 15 May 2014 (expected web publication: winter 2014); submissions received after this date will be considered for the journal, but may not be included in the special issue.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chauvin, J. P.; Lebrat, J. F.; Soule, R.
Since 1991, the CEA has studied the physics of hybrid systems, involving a sub-critical reactor coupled with an accelerator. These studies have provided information on the potential of hybrid systems to transmute actinides and long-lived fission products. The potential of such a system remains to be proven, specifically in terms of the physical understanding of the different phenomena involved and their modelling, as well as in terms of experimental validation of coupled systems, sub-critical environment/accelerator. This validation must be achieved through mock-up studies of sub-critical environments coupled to a source of external neutrons. The MUSE-4 mock-up experiment is planned at the MASURCA facility and will use an accelerator coupled to a tritium target. The great step from the generator used in the past to the accelerator will allow the knowledge of hybrid physics to be increased and the experimental biases and measurement uncertainties to be decreased.
Miller, Kenneth L
2005-06-01
A review of the operational health physics papers published in Health Physics and Operational Radiation Safety over the past fifteen years indicated seventeen general categories or areas into which the topics could be readily separated. These areas include academic research programs, use of computers in operational health physics, decontamination and decommissioning, dosimetry, emergency response, environmental health physics, industrial operations, medical health physics, new procedure development, non-ionizing radiation, radiation measurements, radioactive waste disposal, radon measurement and control, risk communication, shielding evaluation and specification, staffing levels for health physics programs, and unwanted or orphan sources. That is not to say that there are no operational papers dealing with specific areas of health physics, such as power reactor health physics, accelerator health physics, or governmental health physics. On the contrary, there have been a number of excellent operational papers from individuals in these specialty areas and they are included in the broader topics listed above. A listing and review of all the operational papers that have been published is beyond the scope of this discussion. However, a sampling of the excellent operational papers that have appeared in Health Physics and Operational Radiation Safety is presented to give the reader the flavor of the wide variety of concerns to the operational health physicist and the current areas of interest where procedures are being refined and solutions to problems are being developed.
Controlling under-actuated robot arms using a high speed dynamics process
NASA Technical Reports Server (NTRS)
Jain, Abhinandan (Inventor); Rodriguez, Guillermo (Inventor)
1994-01-01
The invention controls an under-actuated manipulator by first obtaining predetermined active joint accelerations of the active joints and the passive joint friction forces of the passive joints, then computing articulated body quantities for each of the joints from the current positions of the links, and finally computing, from the articulated body quantities and from the active joint accelerations and the passive joint forces, the active joint forces of the active joints. Ultimately, the invention transmits servo commands for the active joint forces thus computed to the respective joint servos. The computation of the active joint forces is accomplished using a recursive dynamics algorithm. In this computation, an inward recursion is first carried out for each link, beginning with the outermost link, in order to compute the residual link force of each link from the active joint acceleration if the corresponding joint is active, or from the known passive joint force if the corresponding joint is passive. Then, an outward recursion is carried out for each link in which the active joint force is computed from the residual link force if the corresponding joint is active, or the passive joint acceleration is computed from the residual link force if the corresponding joint is passive.
Evolution of accelerometer methods for physical activity research.
Troiano, Richard P; McClain, James J; Brychta, Robert J; Chen, Kong Y
2014-07-01
The technology and application of current accelerometer-based devices in physical activity (PA) research allow the capture and storage or transmission of large volumes of raw acceleration signal data. These rich data not only provide opportunities to improve PA characterisation, but also bring logistical and analytic challenges. We discuss how researchers and developers from multiple disciplines are responding to the analytic challenges and how advances in data storage, transmission and big data computing will minimise logistical challenges. These new approaches also bring the need for several paradigm shifts for PA researchers, including a shift from count-based approaches and regression calibrations for PA energy expenditure (PAEE) estimation to activity characterisation and EE estimation based on features extracted from raw acceleration signals. Furthermore, a collaborative approach towards analytic methods is proposed to facilitate PA research, which requires a shift away from multiple independent calibration studies. Finally, we make the case for a distinction between PA represented by accelerometer-based devices and PA assessed by self-report.
CERN-derived analysis of lunar radiation backgrounds
NASA Technical Reports Server (NTRS)
Wilson, Thomas L.; Svoboda, Robert
1993-01-01
The Moon produces radiation which background-limits scientific experiments there. Early analyses of these backgrounds have either failed to take into consideration the effect of charm in particle physics (because they pre-dated its discovery), or have used branching ratios which are no longer strictly valid (due to new accelerator data). We are presently investigating an analytical program for deriving muon and neutrino spectra generated by the Moon, by converting an existing CERN computer program known as GEANT, which does the same for the Earth. In so doing, this will (1) determine an accurate prompt neutrino spectrum produced by the lunar surface; (2) determine the lunar subsurface particle flux; (3) determine the consequence of charm production physics upon the lunar background radiation environment; and (4) provide an analytical tool for the NASA astrophysics community with which to begin an assessment of the Moon as a scientific laboratory versus its particle radiation environment. This will be done on a recurring basis with the latest experimental results of the particle data groups at Earth-based high-energy accelerators, in particular with the latest branching ratios for charmed meson decay. This will be accomplished for the first time as a full 3-dimensional simulation.
NASA Astrophysics Data System (ADS)
Torrungrueng, Danai; Johnson, Joel T.; Chou, Hsi-Tseng
2002-03-01
The novel spectral acceleration (NSA) algorithm has been shown to produce an O(Ntot) efficient iterative method of moments for the computation of radiation/scattering from both one-dimensional (1-D) and two-dimensional large-scale quasi-planar structures, where Ntot is the total number of unknowns to be solved. This method accelerates the matrix-vector multiplication in an iterative method of moments solution and divides contributions between points into "strong" (exact matrix elements) and "weak" (NSA algorithm) regions. The NSA method is based on a spectral representation of the electromagnetic Green's function and appropriate contour deformation, resulting in a fast multipole-like formulation in which contributions from large numbers of points to a single point are evaluated simultaneously. In the standard NSA algorithm the NSA parameters are derived on the basis of the assumption that the outermost possible saddle point, φs,max, along the real axis in the complex angular domain is small. For given height variations of quasi-planar structures, this assumption can be satisfied by adjusting the size of the strong region Ls. However, for quasi-planar structures with large height variations, the adjusted size of the strong region is typically large, resulting in significant increases in computational time for the computation of the strong-region contribution and degrading overall efficiency of the NSA algorithm. In addition, for the case of extremely large scale structures, studies based on the physical optics approximation and a flat surface assumption show that the given NSA parameters in the standard NSA algorithm may yield inaccurate results. In this paper, analytical formulas associated with the NSA parameters for an arbitrary value of φs,max are presented, resulting in more flexibility in selecting Ls to compromise between the computation of the contributions of the strong and weak regions. In addition, a "multilevel" algorithm, decomposing 1-D extremely large scale quasi-planar structures into more than one weak region and appropriately choosing the NSA parameters for each weak region, is incorporated into the original NSA method to improve its accuracy.
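A rough conceptual sketch of the strong/weak splitting behind such accelerated matrix-vector products is given below. For clarity the weak-region sum is evaluated directly here; in the NSA algorithm that sum is precisely the part replaced by the spectral (contour-deformed) evaluation, which is what yields the O(Ntot) cost. The Green function, discretization, and strong-region radius Ls are toy assumptions, not taken from the paper.

import numpy as np

def split_matvec(positions, amplitudes, green, strong_radius):
    """Sum Green-function contributions to each point on a 1-D surface,
    split into a strong region (|x - x'| <= Ls, exact matrix elements)
    and a weak region (|x - x'| > Ls, the target of the NSA acceleration)."""
    n = len(positions)
    strong = np.zeros(n, dtype=complex)
    weak = np.zeros(n, dtype=complex)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            contribution = green(positions[i], positions[j]) * amplitudes[j]
            if abs(positions[i] - positions[j]) <= strong_radius:
                strong[i] += contribution
            else:
                weak[i] += contribution   # replaced by the spectral evaluation in NSA
    return strong, weak

# Toy 1-D quasi-planar discretization with a scalar Green function.
k = 2.0 * np.pi
green = lambda x, xp: np.exp(1j * k * abs(x - xp)) / max(abs(x - xp), 1e-6)
x = np.linspace(0.0, 10.0, 200)
currents = np.ones_like(x, dtype=complex)
strong_part, weak_part = split_matvec(x, currents, green, strong_radius=1.0)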
NASA Astrophysics Data System (ADS)
Ptitsyna, Kseniya V.; Troitsky, Sergei V.
2010-10-01
We review basic constraints on the acceleration of ultra-high-energy (UHE) cosmic rays (CRs) in astrophysical sources, namely, the geometric (Hillas) criterion and the restrictions from radiation losses in different acceleration regimes. Using the latest available astrophysical data, we redraw the Hillas plot and find potential UHECR accelerators. For the acceleration in the central engines of active galactic nuclei, we constrain the maximal UHECR energy for a given black hole mass. Among active galaxies, only the most powerful ones, radio galaxies and blazars, are able to accelerate protons to UHE, although acceleration of heavier nuclei is possible in much more abundant lower-power Seyfert galaxies.
ERIC Educational Resources Information Center
Nagasinghe, Iranga
2010-01-01
This thesis investigates and develops a few acceleration techniques for the search engine algorithms used in PageRank and HITS computations. PageRank and HITS are two highly successful applications of modern linear algebra in computer science and engineering. They constitute essential technologies that account for the immense growth and…
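Since the abstract is truncated, a brief reminder of the baseline that such acceleration techniques target may help: the standard PageRank power iteration on the damped, column-stochastic link matrix. The graph, damping factor, and tolerance below are illustrative, not taken from the thesis.

import numpy as np

def pagerank(adjacency, damping=0.85, tol=1e-10, max_iter=1000):
    """Power iteration on the Google matrix built from a 0/1 adjacency matrix."""
    n = adjacency.shape[0]
    # Column-stochastic transition matrix; dangling nodes spread rank uniformly.
    out_degree = adjacency.sum(axis=0)
    P = np.where(out_degree > 0, adjacency / np.maximum(out_degree, 1), 1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = damping * P @ rank + (1.0 - damping) / n
        if np.abs(new_rank - rank).sum() < tol:
            return new_rank
        rank = new_rank
    return rank

# Example: a 4-page web graph (adjacency[i, j] = 1 if page j links to page i).
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(pagerank(A))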
Intervention for an Adolescent With Cerebral Palsy During Period of Accelerated Growth.
Reubens, Rebecca; Silkwood-Sherer, Debbie J
2016-01-01
The purpose of this case report was to describe changes in body functions and structures, activities, and participation after a biweekly 10-week program of home physical therapy and hippotherapy using a weighted compressor belt. A 13-year-old boy with spastic diplegic cerebral palsy, Gross Motor Function Classification System level II, was referred because of accelerated growth and functional impairments that limited daily activities. The Modified Ashworth Scale, passive range of motion, 1-Minute Walk Test, Timed Up and Down Stairs, Pediatric Balance Scale, Pediatric Evaluation of Disability Inventory Computer Adaptive Test, and Dimensions of Mastery Questionnaire 17 were examined at baseline, 5, and 10 weeks. Data at 5 and 10 weeks demonstrated positive changes in passive range of motion, balance, strength, functional activities, and motivation, with additional improvements in endurance and speed after 10 weeks. This report reveals enhanced body functions and structures and activities and improved participation and motivation.
Neural representation of orientation relative to gravity in the macaque cerebellum
Laurens, Jean; Meng, Hui; Angelaki, Dora E.
2013-01-01
A fundamental challenge for maintaining spatial orientation and interacting with the world is knowledge of our orientation relative to gravity, i.e. tilt. Sensing gravity is complicated because of Einstein's equivalence principle, where gravitational and translational accelerations are physically indistinguishable. Theory has proposed that this ambiguity is solved by tracking head tilt through multisensory integration. Here we identify a group of Purkinje cells in the caudal cerebellar vermis with responses that reflect an estimate of head tilt. These tilt-selective cells are complementary to translation-selective Purkinje cells, such that their population activities sum to the net gravito-inertial acceleration encoded by the otolith organs, as predicted by theory. These findings reflect the remarkable ability of the cerebellum for neural computation and provide novel quantitative evidence for a neural representation of gravity, whose calculation relies on long-postulated theoretical concepts such as internal models and Bayesian priors. PMID:24360549
Design and Construction of Detector and Data Acquisition Elements for Proton Computed Tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fermi Research Alliance; Northern Illinois University
2015-07-15
Proton computed tomography (pCT) offers an alternative to x-ray imaging with potential for three-dimensional imaging, reduced radiation exposure, and in-situ imaging. Northern Illinois University (NIU) is developing a second-generation proton computed tomography system with a goal of demonstrating the feasibility of three-dimensional imaging within clinically realistic imaging times. The second-generation pCT system is comprised of a tracking system, a calorimeter, data acquisition, a computing farm, and software algorithms. The proton beam encounters the upstream tracking detectors, the patient or phantom, the downstream tracking detectors, and a calorimeter. The schematic layout of the pCT system is shown. The data acquisition sends the proton scattering information to an offline computing farm. Major innovations of the second generation pCT project involve an increased data acquisition rate (MHz range) and development of three-dimensional imaging algorithms. The Fermilab Particle Physics Division and Northern Illinois Center for Accelerator and Detector Development at Northern Illinois University worked together to design and construct the tracking detectors, calorimeter, readout electronics and detector mounting system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1996-01-01
Papers from the sixteenth biennial Particle Accelerator Conference, an international forum on accelerator science and technology held May 1–5, 1995, in Dallas, Texas, organized by Los Alamos National Laboratory (LANL) and Stanford Linear Accelerator Center (SLAC), jointly sponsored by the Institute of Electrical and Electronics Engineers (IEEE) Nuclear and Plasma Sciences Society (NPSS), the American Physical Society (APS) Division of Particles and Beams (DPB), and the International Union of Pure and Applied Physics (IUPAP), and conducted with support from the US Department of Energy, the National Science Foundation, and the Office of Naval Research.
Unaligned instruction relocation
Bertolli, Carlo; O'Brien, John K.; Sallenave, Olivier H.; Sura, Zehra N.
2018-01-23
In one embodiment, a computer-implemented method includes receiving source code to be compiled into an executable file for an unaligned instruction set architecture (ISA). Aligned assembled code is generated, by a computer processor. The aligned assembled code complies with an aligned ISA and includes aligned processor code for a processor and aligned accelerator code for an accelerator. A first linking pass is performed on the aligned assembled code, including relocating a first relocation target in the aligned accelerator code that refers to a first object outside the aligned accelerator code. Unaligned assembled code is generated in accordance with the unaligned ISA and includes unaligned accelerator code for the accelerator and unaligned processor code for the processor. A second linking pass is performed on the unaligned assembled code, including relocating a second relocation target outside the unaligned accelerator code that refers to an object in the unaligned accelerator code.
Accelerator science in medical physics.
Peach, K; Wilson, P; Jones, B
2011-12-01
The use of cyclotrons and synchrotrons to accelerate charged particles in hospital settings for the purpose of cancer therapy is increasing. Consequently, there is a growing demand from medical physicists, radiographers, physicians and oncologists for articles that explain the basic physical concepts of these technologies. There are unique advantages and disadvantages to all methods of acceleration. Several promising alternative methods of accelerating particles also have to be considered since they will become increasingly available with time; however, there are still many technical problems with these that require solving. This article serves as an introduction to this complex area of physics, and will be of benefit to those engaged in cancer therapy, or who intend to acquire such technologies in the future.
Mean-state acceleration of cloud-resolving models and large eddy simulations
Jones, C. R.; Bretherton, C. S.; Pritchard, M. S.
2015-10-29
Large eddy simulations and cloud-resolving models (CRMs) are routinely used to simulate boundary layer and deep convective cloud processes, to aid in the development of moist physical parameterizations for global models, to study cloud-climate feedbacks and cloud-aerosol interaction, and as the heart of superparameterized climate models. These models are computationally demanding, placing practical constraints on their use in these applications, especially for long, climate-relevant simulations. In many situations, the horizontal-mean atmospheric structure evolves slowly compared to the turnover time of the most energetic turbulent eddies. We develop a simple scheme to reduce this time scale separation to accelerate the evolution of the mean state. Using this approach we are able to accelerate the model evolution by a factor of 2–16 or more in idealized stratocumulus, shallow and deep cumulus convection without substantial loss of accuracy in simulating mean cloud statistics and their sensitivity to climate change perturbations. As a culminating test, we apply this technique to accelerate the embedded CRMs in the Superparameterized Community Atmosphere Model by a factor of 2, thereby showing that the method is robust and stable to realistic perturbations across spatial and temporal scales typical in a GCM.
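A conceptual sketch of mean-state acceleration, with assumed array layout and variable names rather than the model's actual code: after each model step, the change in the horizontal-mean profile is amplified by the acceleration factor while the eddy (deviation) field is left untouched, so the slowly evolving mean state advances faster than the turbulence.

import numpy as np

def accelerate_mean_state(field_before, field_after, acceleration_factor):
    """Return an updated 3-D field (z, y, x) whose horizontal mean has been
    advanced acceleration_factor times farther than one model step."""
    mean_before = field_before.mean(axis=(1, 2), keepdims=True)
    mean_after = field_after.mean(axis=(1, 2), keepdims=True)
    eddies = field_after - mean_after                         # turbulence unchanged
    mean_accelerated = mean_before + acceleration_factor * (mean_after - mean_before)
    return mean_accelerated + eddies

# Toy usage: a 2x acceleration of the mean of a random temperature-like field.
rng = np.random.default_rng(0)
before = 290.0 + rng.normal(0.0, 0.5, size=(10, 16, 16))
after = before + 0.01 + rng.normal(0.0, 0.05, size=before.shape)
accelerated = accelerate_mean_state(before, after, acceleration_factor=2.0)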
Limiting technologies for particle beams and high energy physics
NASA Astrophysics Data System (ADS)
Panofsky, W. K. H.
1985-07-01
Since 1930 the energy of accelerators has grown by an order of magnitude roughly every 7 years. Like all exponential growths, be they human population, the size of computers, or anything else, this eventually will have to come to an end. When will this happen to the growth of the energy of particle accelerators and colliders? Fortunately, as the energy of accelerators has grown the cost per unit energy has decreased almost as fast as has the increase in energy. The result is that while the energy has increased so dramatically the cost per new installation has increased only by roughly an order of magnitude since the 1930's (corrected for inflation), while the number of accelerators operating at the frontier of the field has shrunk. As is shown in the by now familiar Livingston chart, this dramatic decrease in cost has been achieved largely by a succession of new technologies, in addition to the more moderate gains in efficiency due to improved design, economies of scale, etc. We are therefore facing two questions: (1) Is there good reason scientifically to maintain the exponential growth, and (2) Are there new technologies in sight which promise continued decreases in unit costs? The answer to the first question is definitely yes; the answer to the second question is maybe.
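The stated growth rate, one order of magnitude in energy roughly every 7 years since 1930, corresponds to a Livingston-type scaling of the form written below; this is only a restatement of the quoted rate, not a fit taken from the paper.

E(t) \approx E_{1930}\,10^{(t-1930)/7}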
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, Alexander Wu; /SLAC
2012-03-01
As accelerator technology advances, the requirements on accelerator beam quality become increasingly demanding. Facing these new demands, the topic of phase space gymnastics is becoming a new focus of accelerator physics R&D. In phase space gymnastics, the beam's phase space distribution is manipulated and precision-tailored to meet the required beam qualities. On the other hand, any realization of such gymnastics will have to obey accelerator physics principles as well as technological limitations. Recent examples of phase space gymnastics include emittance exchanges, phase space exchanges, emittance partitioning, seeded FELs, and microbunched beams. The emittance-related topics of this list are reviewed in this report. The accelerator physics basis, the optics design principles that provide these phase space manipulations, and the possible applications of these gymnastics, are discussed. This fascinating new field promises to be a powerful tool of the future.
Accelerator Technology Division annual report, FY 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1990-06-01
This paper discusses: accelerator physics and special projects; experiments and injectors; magnetic optics and beam diagnostics; accelerator design and engineering; radio-frequency technology; accelerator theory and simulation; free-electron laser technology; accelerator controls and automation; and high power microwave sources and effects.
Particle acceleration, transport and turbulence in cosmic and heliospheric physics
NASA Technical Reports Server (NTRS)
Matthaeus, W.
1992-01-01
In this progress report, the long term goals, recent scientific progress, and organizational activities are described. The scientific focus of this annual report is in three areas: first, the physics of particle acceleration and transport, including heliospheric modulation and transport, shock acceleration and galactic propagation and reacceleration of cosmic rays; second, the development of theories of the interaction of turbulence and large scale plasma and magnetic field structures, as in winds and shocks; third, the elucidation of the nature of magnetohydrodynamic turbulence processes and the role such turbulence processes might play in heliospheric, galactic, cosmic ray physics, and other space physics applications.
Laboratory laser acceleration and high energy astrophysics: γ-ray bursts and cosmic rays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tajima, T.; Takahashi, Y.
1998-08-20
Recent experimental progress in laser acceleration of charged particles (electrons) and its associated processes has shown that intense electromagnetic pulses can promptly accelerate charged particles to high energies and that their energy spectrum is quite hard. On the other hand some of the high energy astrophysical phenomena such as extremely high energy cosmic rays and energetic components of γ-ray bursts cry for new physical mechanisms for promptly accelerating particles to high energies. The authors suggest that the basic physics involved in laser acceleration experiments sheds light on some of the underlying mechanisms and their energy spectral characteristics of the promptly accelerated particles in these high energy astrophysical phenomena.
The nature of the (visualization) game: Challenges and opportunities from computational geophysics
NASA Astrophysics Data System (ADS)
Kellogg, L. H.
2016-12-01
As the geosciences enters the era of big data, modeling and visualization become increasingly vital tools for discovery, understanding, education, and communication. Here, we focus on modeling and visualization of the structure and dynamics of the Earth's surface and interior. The past decade has seen accelerated data acquisition, including higher resolution imaging and modeling of Earth's deep interior, complex models of geodynamics, and high resolution topographic imaging of the changing surface, with an associated acceleration of computational modeling through better scientific software, increased computing capability, and the use of innovative methods of scientific visualization. The role of modeling is to describe a system, answer scientific questions, and test hypotheses; the term "model" encompasses mathematical models, computational models, physical models, conceptual models, statistical models, and visual models of a structure or process. These different uses of the term require thoughtful communication to avoid confusion. Scientific visualization is integral to every aspect of modeling. Not merely a means of communicating results, the best uses of visualization enable scientists to interact with their data, revealing the characteristics of the data and models to enable better interpretation and inform the direction of future investigation. Innovative immersive technologies like virtual reality, augmented reality, and remote collaboration techniques, are being adapted more widely and are a magnet for students. Time-varying or transient phenomena are especially challenging to model and to visualize; researchers and students may need to investigate the role of initial conditions in driving phenomena, while nonlinearities in the governing equations of many Earth systems make the computations and resulting visualization especially challenging. Training students how to use, design, build, and interpret scientific modeling and visualization tools prepares them to better understand the nature of complex, multiscale geoscience data.
NASA Astrophysics Data System (ADS)
Harfst, S.; Portegies Zwart, S.; McMillan, S.
2008-12-01
We present MUSE, a software framework for combining existing computational tools from different astrophysical domains into a single multi-physics, multi-scale application. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying an interface between each module and the framework that represents a balance between generality and computational efficiency. This approach allows scientists to use combinations of codes to solve highly-coupled problems without the need to write new codes for other domains or significantly alter their existing codes. MUSE currently incorporates the domains of stellar dynamics, stellar evolution and stellar hydrodynamics for studying generalized stellar systems. We have now reached a "Noah's Ark" milestone, with (at least) two available numerical solvers for each domain. MUSE can treat multi-scale and multi-physics systems in which the time- and size-scales are well separated, like simulating the evolution of planetary systems, small stellar associations, dense stellar clusters, galaxies and galactic nuclei. In this paper we describe two examples calculated using MUSE: the merger of two galaxies and an N-body simulation with live stellar evolution. In addition, we demonstrate an implementation of MUSE on a distributed computer which may also include special-purpose hardware, such as GRAPEs or GPUs, to accelerate computations. The current MUSE code base is publicly available as open source at http://muse.li.
NASA's Microgravity Fluid Physics Program: Tolerability to Residual Accelerations
NASA Technical Reports Server (NTRS)
Skarda, J. Raymond
1998-01-01
An overview of the NASA microgravity fluid physics program is presented. The necessary quality of a reduced-gravity environment in terms of tolerable residual acceleration or g levels is a concern that is inevitably raised for each new microgravity experiment. Methodologies have been reported in the literature that provide guidance in obtaining reasonable estimates of residual acceleration sensitivity for a broad range of fluid physics phenomena. Furthermore, a relatively large and growing database of microgravity experiments that have successfully been performed in terrestrial reduced-gravity facilities and orbiting platforms exists. Similarity of experimental conditions and hardware, in some cases, leads to new experiments adopting prior experiments' g-requirements. Rationale applied to other experiments can, in principle, be a valuable guide to assist new Principal Investigators (PIs) in determining the residual acceleration tolerability of their flight experiments. The availability of g-requirements rationale from prior μg experiments is discussed. An example of establishing g tolerability requirements is demonstrated, using a current microgravity fluid physics flight experiment. The Fluids and Combustion Facility (FCF), which is currently manifested on the US Laboratory of the International Space Station (ISS), will provide opportunities for fluid physics and combustion experiments throughout the life of the ISS. Although the FCF is not intended to accommodate all fluid physics experiments, it is expected to meet the science requirements of approximately 80% of the new PIs that enter the microgravity fluid physics program. The residual acceleration requirements for the FCF fluid physics experiments are based on a set of fourteen reference fluid physics experiments which are discussed.
NASA Astrophysics Data System (ADS)
Sokolov, I.; van der Holst, B.; Jin, M.; Gombosi, T. I.; Taktakishvili, A.; Khazanov, G. V.
2013-12-01
In numerical simulations of the solar corona, both for the ambient state and especially for dynamical processes, most of the computational resources are spent on maintaining the numerical solution in the Low Solar Corona and in the transition region, where the temperature gradients are very sharp and the magnetic field has a complicated topology. The degraded computational efficiency is caused by the need for the highest resolution as well as by the use of a fully three-dimensional implicit solver for electron heat conduction. On the other hand, the physical nature of the processes involved is rather simple (which by itself does not simplify the numerical methods), as long as the heat fluxes as well as the slow plasma motional velocities are aligned with the magnetic field. The Alfven wave turbulence, which is often believed to be the main driver of the solar wind and the main source of coronal heating, is characterized by the Poynting flux of the waves, which is also aligned with the magnetic field. Therefore, the plasma state at any point of the three-dimensional grid in the Low Solar Corona can be found by solving a set of one-dimensional equations along the magnetic field line ('thread') that passes through this point and connects it to the chromosphere and to the global Solar Corona. In the present paper we describe an innovative computational technology based upon the use of magnetic-field-line threads to find the local solution. We present the development of the AWSoM code of the University of Michigan with the field-lines-threaded Low Solar Corona. In the transition region, where an essentially kinetic description of the electron energy fluxes is required, we solve the Fokker-Planck equation on the system of threads to achieve a physically consistent description of chromospheric evaporation. The third application of the field-lines-threaded model is Solar Energetic Particle (SEP) acceleration and transport. Being the natural extension of the Field-Line-Advection Model for Particle Acceleration (FLAMPA), earlier suggested for a single magnetic field line advected with the plasma motion, the multiple-field-lines model allows us to simulate SEP fluxes at multiple points of possible observation (at the Earth's location, at the STEREO spacecraft, at Mercury).
Message From the Editor for Contributions to the 2016 Real Time Conference Issue of TNS
NASA Astrophysics Data System (ADS)
Schmeling, Sascha Marc
2017-06-01
This issue of the IEEE Transactions on Nuclear Science (TNS) is devoted to the 20th IEEE-NPSS Real Time Conference (RT2016) on Computing Applications in Nuclear and Plasma Sciences held in Padua, Italy, in June 2016. A total of 90 papers presented at the conference were submitted for possible publication in TNS. This conference issue presents 46 papers, which have been accepted so far after a thorough peer review process. These contributions come from a very broad range of fields of application, including Astrophysics, Medical Imaging, Nuclear and Plasma Physics, Particle Accelerators, and Particle Physics Experiments. Several papers were close to being accepted but did not make it into this special issue. They will be considered for further publication.
Accelerating Climate and Weather Simulations through Hybrid Computing
NASA Technical Reports Server (NTRS)
Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark
2011-01-01
Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.
A Radiation Laboratory Curriculum Development at Western Kentucky University
NASA Astrophysics Data System (ADS)
Barzilov, Alexander P.; Novikov, Ivan S.; Womble, Phil C.
2009-03-01
We present the latest developments for the radiation laboratory curriculum at the Department of Physics and Astronomy of Western Kentucky University. During the last decade, the Applied Physics Institute (API) at WKU accumulated various equipment for radiation experimentation. This includes various neutron sources (computer-controlled d-t and d-d neutron generators, and isotopic 252Cf and PuBe sources), a set of gamma sources with various intensities, gamma detectors with various energy resolutions (NaI, BGO, GSO, LaBr and HPGe) and a 2.5-MeV Van de Graaff particle accelerator. XRF and XRD apparatuses are also available for students and members at the API. This equipment is currently used in numerous scientific and teaching activities. Members of the API also developed a set of laboratory activities for undergraduate students taking classes from the physics curriculum (Nuclear Physics, Atomic Physics, and Radiation Biophysics). Our goal is to develop a set of radiation laboratories, which will strengthen the curriculum of physics, chemistry, geology, biology, and environmental science at WKU. The teaching and research activities are integrated into real-world projects and hands-on activities to engage students. The proposed experiments and their relevance to the modern status of physical science are discussed.
Implementation of an accelerated physical examination course in a doctor of pharmacy program.
Ho, Jackie; Bidwal, Monica K; Lopes, Ingrid C; Shah, Bijal M; Ip, Eric J
2014-12-15
To describe the implementation of a 1-day accelerated physical examination course for a doctor of pharmacy program and to evaluate pharmacy students' knowledge, attitudes, and confidence in performing physical examination. Using a flipped teaching approach, course coordinators collaborated with a physician faculty member to design and develop the objectives of the course. Knowledge, attitude, and confidence survey questions were administered before and after the practical laboratory. Following the practical laboratory, knowledge improved by 8.3% (p<0.0001). Students' perceived ability and confidence to perform a physical examination significantly improved (p<0.0001). A majority of students responded that reviewing the training video (81.3%) and reading material (67.4%) prior to the practical laboratory was helpful in learning the physical examination. An accelerated physical examination course using a flipped teaching approach was successful in improving students' knowledge of, attitudes about, and confidence in using physical examination skills in pharmacy practice.
Preoperative predictors of returning to work following primary total knee arthroplasty.
Styron, Joseph F; Barsoum, Wael K; Smyth, Kathleen A; Singer, Mendel E
2011-01-05
There is little in the literature to guide clinicians in advising patients regarding their return to work following a primary total knee arthroplasty. In this study, we aimed to identify which factors are important in estimating a patient's time to return to work following primary total knee arthroplasty, how long patients can anticipate being off from work, and the types of jobs to which patients are able to return following primary total knee arthroplasty. A prospective cohort study was performed in which patients scheduled for a primary total knee arthroplasty completed a validated questionnaire preoperatively and at four to six weeks, three months, and six months postoperatively. The questionnaire assessed the patient's occupational physical demands, ability to perform job responsibilities, physical status, and motivation to return to work as well as factors that may impact his or her recovery and other workplace characteristics. Two survival analysis models were constructed to evaluate the time to return to work either at least part-time or full-time. Acceleration factors were calculated to indicate the relative percentage of time until the patient returned to work. The median time to return to work was 8.9 weeks. Patients who reported a sense of urgency about returning to work were found to return in half the time taken by other employees (acceleration factor = 0.468; p < 0.001). Other preoperative factors associated with a faster return to work included being female (acceleration factor = 0.783), self-employment (acceleration factor = 0.792), higher mental health scores (acceleration factor = 0.891), higher physical function scores (acceleration factor = 0.809), higher Functional Comorbidity Index scores (acceleration factor = 0.914), and a handicap accessible workplace (acceleration factor = 0.736). A slower return to work was associated with having less pain preoperatively (acceleration factor = 1.132), having a more physically demanding job (acceleration factor = 1.116), and receiving Workers' Compensation (acceleration factor = 4.360). Although the physical demands of a patient's job have a moderate influence on the patient's ability to return to work following a primary total knee arthroplasty, the patient's characteristics, particularly motivation, play a more important role.
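The acceleration factors quoted above have the usual accelerated-failure-time reading; the abstract does not state the exact model specification, so the log-linear form below is the standard hedged interpretation rather than the authors' published model:

\log T = \beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x} + \sigma\varepsilon, \qquad \mathrm{AF}_j = e^{\beta_j}

so that a covariate with AF = 0.468 roughly halves the expected time to return to work, while AF = 4.360 (Workers' Compensation) multiplies it by about 4.4.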
Billing, M. G.; Conway, J. V.; Crittenden, J. A.; ...
2016-04-28
Cornell's electron/positron storage ring (CESR) was modified over a series of accelerator shutdowns beginning in May 2008, which substantially improves its capability for research and development for particle accelerators. CESR's energy span from 1.8 to 5.6 GeV with both electrons and positrons makes it ideal for the study of a wide spectrum of accelerator physics issues and instrumentation related to present light sources and future lepton damping rings. Additionally a number of these are also relevant for the beam physics of proton accelerators. This paper is the third in a series of four describing the conversion of CESR to the test accelerator, CESRTA. The first two papers discuss the overall plan for the conversion of the storage ring to an instrument capable of studying advanced accelerator physics issues [1] and the details of the vacuum system upgrades [2]. This paper focuses on the necessary development of new instrumentation, situated in four dedicated experimental regions, capable of studying such phenomena as electron clouds (ECs) and methods to mitigate EC effects. The fourth paper in this series describes the vacuum system modifications of the superconducting wigglers to accommodate the diagnostic instrumentation for the study of EC behavior within wigglers. Lastly, while the initial studies of CESRTA focused on questions related to the International Linear Collider damping ring design, CESRTA is a very versatile storage ring, capable of studying a wide range of accelerator physics and instrumentation questions.
Llewellyn Hilleth Thomas: An appraisal of an under-appreciated polymath
NASA Astrophysics Data System (ADS)
Jackson, John David
2010-02-01
Llewellyn Hilleth Thomas was born in 1903 and died in 1992 at the age of 88. His name is known by most for only two things, Thomas precession and the Thomas-Fermi atom. The many other facets of his career - astrophysics, atomic and molecular physics, nonlinear problems, accelerator physics, magnetohydrodynamics, computer design principles and software and hardware - are largely unknown or forgotten. I review his whole career - his early schooling, his time at Cambridge, then Copenhagen in 1925-26, and back to Cambridge, his move to the US as an assistant professor at Ohio State University in 1929, his wartime years at the Ballistic Research Laboratory, Aberdeen Proving Grounds, then in 1946 his new career as a unique resource at IBM's Watson Scientific Computing Laboratory and Columbia University until his first retirement in 1968, and his twilight years at North Carolina State University. Although the Thomas precession and the Thomas-Fermi atom may be the jewels in his crown, his many other accomplishments add to our appreciation of this consummate applied mathematician and physicist.
Large calculation of the flow over a hypersonic vehicle using a GPU
NASA Astrophysics Data System (ADS)
Elsen, Erich; LeGresley, Patrick; Darve, Eric
2008-12-01
Graphics processing units are capable of impressive computing performance, up to 518 Gflops peak. Various groups have been using these processors for general purpose computing; most efforts have focused on demonstrating relatively basic calculations, e.g. numerical linear algebra, or physical simulations for visualization purposes with limited accuracy. This paper describes the simulation of a hypersonic vehicle configuration with detailed geometry and accurate boundary conditions using the compressible Euler equations. To the authors' knowledge, this is the most sophisticated calculation of this kind in terms of complexity of the geometry, the physical model, the numerical methods employed, and the accuracy of the solution. The Navier-Stokes Stanford University Solver (NSSUS) was used for this purpose. NSSUS is a multi-block structured code with a provably stable and accurate numerical discretization which uses a vertex-based finite-difference method. A multi-grid scheme is used to accelerate the solution of the system. Based on a comparison of the Intel Core 2 Duo and NVIDIA 8800GTX, speed-ups of over 40× were demonstrated for simple test geometries and 20× for complex geometries.
Accelerating artificial intelligence with reconfigurable computing
NASA Astrophysics Data System (ADS)
Cieszewski, Radoslaw
Reconfigurable computing is emerging as an important area of research in computer architectures and software systems. Many algorithms can be greatly accelerated by placing the computationally intense portions of an algorithm into reconfigurable hardware. Reconfigurable computing combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible, and can be changed over the lifetime of the system. Similar to an ASIC, reconfigurable systems provide a method to map circuits into hardware. Reconfigurable systems therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. One such field, in which many different algorithms can be accelerated, is artificial intelligence. This paper presents example hardware implementations of Artificial Neural Networks, Genetic Algorithms and Expert Systems.
An ion accelerator for undergraduate research and teaching
NASA Astrophysics Data System (ADS)
Monce, Michael
1997-04-01
We have recently upgraded our 400kV, single beam line ion accelerator to a 1MV, multiple beam line machine. This upgrade has greatly expanded the opportunities for student involvement in the laboratory. We will describe four areas of work in which students now participate. The first is the continuing research being conducted in excitations produced in ion-molecule collisions, which recently involved the use of digital imaging. The second area of research now opened up by the new accelerator involves PIXE. We are currently beginning a cross disciplinary study of archaeological specimens using PIXE and involving students from both anthropology and physics. Finally, two beam lines from the accelerator will be used for basic work in nuclear physics: Rutherford scattering and nuclear resonances. These two nuclear physics experiments will be integrated into our sophomore-junior level, year-long course in experimental physics.
Transport, Acceleration and Spatial Access of Solar Energetic Particles
NASA Astrophysics Data System (ADS)
Borovikov, D.; Sokolov, I.; Effenberger, F.; Jin, M.; Gombosi, T. I.
2017-12-01
Solar Energetic Particles (SEPs) are a major branch of space weather. Often driven by Coronal Mass Ejections (CMEs), SEPs have a very high destructive potential, which includes but is not limited to disrupting communication systems on Earth, inflicting harmful and potentially fatal radiation doses to crew members onboard spacecraft and, in extreme cases, to people aboard high altitude flights. However, currently the research community lacks efficient tools to predict such hazardous SEP events. Such a tool would serve as the first step towards improving humanity's preparedness for SEP events and ultimately its ability to mitigate their effects. The main goal of the presented research is to develop a computational tool that provides the said capabilities and meets the community's demand. Our model has forecasting capability and can be the basis for an operational system that will provide live information on the current potential threats posed by SEPs based on observations of the Sun. The tool comprises several numerical models, which are designed to simulate different physical aspects of SEPs. The background conditions in the interplanetary medium, in particular the Coronal Mass Ejection driving the particle acceleration, play a defining role and are simulated with the state-of-the-art MHD solver, the Block-Adaptive-Tree Solar-wind Roe-type Upwind Scheme (BATS-R-US). The newly developed particle code, the Multiple-Field-Line-Advection Model for Particle Acceleration (M-FLAMPA), simulates the actual transport and acceleration of SEPs and is coupled to the MHD code. The special property of SEPs, the tendency to follow magnetic lines of force, is fully taken advantage of in the computational model, which replaces a complicated 3-D model with a multitude of 1-D models. This approach significantly simplifies computations and improves the time performance of the overall model. It also plays an important role in mapping the affected region by connecting it with the origin of SEPs at the solar surface. Our model incorporates the effects of the near-Sun field line meandering that affects the perpendicular transport of SEPs and can explain the occurrence of the large longitudinal spread observed even in the early phases of such events.
Accelerator science and technology in Europe: EuCARD 2012
NASA Astrophysics Data System (ADS)
Romaniuk, Ryszard S.
2012-05-01
Accelerator science and technology is one of the key enablers of developments in particle physics and photon physics, and also of applications in medicine and industry. The paper presents a digest of research results in the domain of accelerator science and technology in Europe, presented during the third annual meeting of EuCARD - European Coordination of Accelerator Research and Development. The conference concerns the building of research infrastructure, including advanced photonic and electronic systems for servicing large high-energy physics experiments. A few basic groups of such systems are discussed: measurement and control networks of large geometrical extent, multichannel systems for acquiring large amounts of metrological data, and precision photonic networks for the distribution of reference time, frequency, and phase.
Measurement of Coriolis Acceleration with a Smartphone
NASA Astrophysics Data System (ADS)
Shakur, Asif; Kraft, Jakob
2016-05-01
Undergraduate physics laboratories seldom have experiments that measure the Coriolis acceleration. This has traditionally been the case owing to the inherent complexities of making such measurements. Articles on the experimental determination of the Coriolis acceleration are few and far between in the physics literature. However, because modern smartphones come with a raft of built-in sensors, we have a unique opportunity to experimentally determine the Coriolis acceleration conveniently in a pedagogically enlightening environment at modest cost by using student-owned smartphones. Here we employ the gyroscope and accelerometer in a smartphone to verify the dependence of Coriolis acceleration on the angular velocity of a rotating track and the speed of the sliding smartphone.
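The dependence verified in the experiment, linear in both the track's angular velocity (from the gyroscope) and the phone's sliding speed, is the standard Coriolis relation, written here for reference rather than quoted from the article:

\mathbf{a}_{\mathrm{Cor}} = -2\,\boldsymbol{\omega} \times \mathbf{v}, \qquad |\mathbf{a}_{\mathrm{Cor}}| = 2\,\omega v \quad (\mathbf{v} \perp \boldsymbol{\omega})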
Status and Prospects of Hirfl Experiments on Nuclear Physics
NASA Astrophysics Data System (ADS)
Xu, H. S.; Zheng, C.; Xiao, G. Q.; Zhan, W. L.; Zhou, X. H.; Zhang, Y. H.; Sun, Z. Y.; Wang, J. S.; Gan, Z. G.; Huang, W. X.; Ma, X. W.
HIRFL is an accelerator complex consisting of 3 accelerators, 2 radioactive beam lines, 1 storage ring and a number of experimental setups. The research activities at HIRFL cover the fields of radio-biology, material science, atomic physics, and nuclear physics. This report mainly concentrates on nuclear physics experiments with the existing and planned experimental setups such as SHANS, RIBLL1, ETF, CSRe, PISA and HPLUS at HIRFL.
OCCAM: a flexible, multi-purpose and extendable HPC cluster
NASA Astrophysics Data System (ADS)
Aldinucci, M.; Bagnasco, S.; Lusso, S.; Pasteris, P.; Rabellino, S.; Vallero, S.
2017-10-01
The Open Computing Cluster for Advanced data Manipulation (OCCAM) is a multipurpose flexible HPC cluster designed and operated by a collaboration between the University of Torino and the Sezione di Torino of the Istituto Nazionale di Fisica Nucleare. It is aimed at providing a flexible, reconfigurable and extendable infrastructure to cater to a wide range of different scientific computing use cases, including ones from solid-state chemistry, high-energy physics, computer science, big data analytics, computational biology, genomics and many others. Furthermore, it will serve as a platform for R&D activities on computational technologies themselves, with topics ranging from GPU acceleration to Cloud Computing technologies. A heterogeneous and reconfigurable system like this poses a number of challenges related to the frequency at which heterogeneous hardware resources might change their availability and shareability status, which in turn affect methods and means to allocate, manage, optimize, bill, monitor VMs, containers, virtual farms, jobs, interactive bare-metal sessions, etc. This work describes some of the use cases that prompted the design and construction of the HPC cluster, its architecture and resource provisioning model, along with a first characterization of its performance by some synthetic benchmark tools and a few realistic use-case tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaskey, Alexander J.
Hybrid programming models for beyond-CMOS technologies will prove critical for integrating new computing technologies alongside our existing infrastructure. Unfortunately the software infrastructure required to enable this is lacking or not available. XACC is a programming framework for extreme-scale, post-exascale accelerator architectures that integrates alongside existing conventional applications. It is a pluggable framework for programming languages developed for next-gen computing hardware architectures like quantum and neuromorphic computing. It lets computational scientists efficiently off-load classically intractable work to attached accelerators through user-friendly Kernel definitions. XACC makes post-exascale hybrid programming approachable for domain computational scientists.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Church, M.; Edwards, H.; Harms, E.
2013-10-01
Fermilab is the nation's particle physics laboratory, supported by the DOE Office of High Energy Physics (OHEP). Fermilab is a world leader in accelerators, with a demonstrated track record—spanning four decades—of excellence in accelerator science and technology. We describe the significant opportunity to complete, in a highly leveraged manner, a unique accelerator research facility that supports the broad strategic goals in accelerator science and technology within the OHEP. While the US accelerator-based HEP program is oriented toward the Intensity Frontier, which requires modern superconducting linear accelerators and advanced high-intensity storage rings, there are no accelerator test facilities that support the accelerator science of the Intensity Frontier. Further, nearly all proposed future accelerators for Discovery Science will rely on superconducting radiofrequency (SRF) acceleration, yet there are no dedicated test facilities to study SRF capabilities for beam acceleration and manipulation in prototypic conditions. Finally, there is a wide range of experiments and research programs beyond particle physics that require the unique beam parameters that will only be available at Fermilab's Advanced Superconducting Test Accelerator (ASTA). To address these needs we submit this proposal for an Accelerator R&D User Facility at ASTA. The ASTA program is based on the capability provided by an SRF linac (which provides electron beams from 50 MeV to nearly 1 GeV) and a small storage ring (with the ability to store either electrons or protons) to enable a broad range of beam-based experiments to study fundamental limitations to beam intensity and to develop transformative approaches to particle-beam generation, acceleration and manipulation which cannot be done elsewhere. It will also establish a unique resource for R&D towards Energy Frontier facilities and a test-bed for SRF accelerators and high brightness beam applications in support of the OHEP mission of Accelerator Stewardship.
Computational Studies of Magnetic Nozzle Performance
NASA Technical Reports Server (NTRS)
Ebersohn, Frans H.; Longmier, Benjamin W.; Sheehan, John P.; Shebalin, John B.; Raja, Laxminarayan
2013-01-01
An extensive literature review of magnetic nozzle research has been performed, examining previous work, as well as a review of fundamental principles. This has allowed us to catalog all basic physical mechanisms which we believe underlie the thrust generation process. Energy conversion mechanisms include the approximate conservation of the magnetic moment adiabatic invariant, generalized Hall and thermoelectric acceleration, swirl acceleration, thermal energy transformation into directed kinetic energy, and Joule heating. Momentum transfer results from the interaction of the applied magnetic field with currents induced in the plasma plume, while plasma detachment mechanisms include resistive diffusion, recombination and charge exchange collisions, magnetic reconnection, loss of adiabaticity, inertial forces, current closure, and self-field detachment. We have performed a preliminary study of Hall effects on magnetic nozzle jets with weak guiding magnetic fields and weak expansions (p_jet ≈ p_background). The conclusion from this study is that the Hall effect creates an azimuthal rotation of the plasma jet and, more generally, creates helical structures in the induced current, velocity field, and magnetic fields. We have studied plasma jet expansion to near vacuum without a guiding magnetic field, and are presently including a guiding magnetic field using a resistive MHD solver. This research is progressing toward the implementation of a full generalized Ohm's law solver. In our paper, we will summarize the basic principles and the literature survey, and briefly review our previous results. Our most recent results at the time of submittal will also be included. Efforts are currently underway to construct an experiment at the University of Michigan Plasmadynamics and Electric Propulsion Laboratory (PEPL) to study magnetic nozzle physics for an RF thruster. Our computational study will work directly with this experiment to validate the numerical model, in order to study magnetic nozzle physics and optimize magnetic nozzle design. Preliminary results from the PEPL experiment will also be presented.
ERIC Educational Resources Information Center
Mac Iver, Douglas J.; Balfanz, Robert; Plank, Stephen B.
In Talent Development Middle Schools, students needing extra help in mathematics participate in the Computer- and Team-Assisted Mathematics Acceleration (CATAMA) course. CATAMA is an innovative combination of computer-assisted instruction and structured cooperative learning that students receive in addition to their regular math course for about…
Electrostatic plasma lens for focusing negatively charged particle beams.
Goncharov, A A; Dobrovolskiy, A M; Dunets, S M; Litovko, I V; Gushenets, V I; Oks, E M
2012-02-01
We describe the current status of ongoing research and development of the electrostatic plasma lens for focusing and manipulating intense beams of negatively charged particles, electrons and negative ions. The physical principle of this kind of plasma lens is based on magnetic insulation of electrons, which enables the creation of a dynamic cloud of positive space charge within the short, restricted volume traversed by the propagating beam. Here, new results of experimental investigations and computer simulations are presented for the focusing of a wide-aperture, intense electron beam by a plasma lens whose positive space-charge cloud is produced by a cylindrical anode-layer accelerator that directs a positive ion stream toward the system axis.
Symplectic modeling of beam loading in electromagnetic cavities
Abell, Dan T.; Cook, Nathan M.; Webb, Stephen D.
2017-05-22
Simulating beam loading in radio frequency accelerating structures is critical for understanding higher-order mode effects on beam dynamics, such as beam break-up instability in energy recovery linacs. Full-wave simulations of beam loading in radio frequency structures are computationally expensive, while reduced models can miss essential physics and can be difficult to generalize. Here, we present a self-consistent algorithm derived from the least-action principle which can model an arbitrary number of cavity eigenmodes with a generic beam distribution. It has been implemented in our new Open Library for Investigating Vacuum Electronics (OLIVE).
Kernel and divergence techniques in high energy physics separations
NASA Astrophysics Data System (ADS)
Bouř, Petr; Kůs, Václav; Franc, Jiří
2017-10-01
Binary decision trees under the Bayesian decision technique are used for supervised classification of high-dimensional data. We present the great potential of adaptive kernel density estimation as the nested separation method of the supervised binary divergence decision tree. We also provide a proof of an alternative computing approach for kernel estimates utilizing the Fourier transform. Further, we apply our method to Monte Carlo data sets from the DØ experiment at the Tevatron particle accelerator at Fermilab and provide final top-antitop signal separation results. We have achieved up to 82% AUC while using the restricted feature selection entering the signal separation procedure.
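As an illustration of the Fourier-transform approach to kernel estimates mentioned above, the following is a minimal Python/NumPy sketch, assuming a one-dimensional feature and a Gaussian kernel; the function and variable names are illustrative and do not come from the paper.

    import numpy as np

    def kde_fft(samples, grid_min, grid_max, n_grid=1024, bandwidth=0.1):
        """Gaussian kernel density estimate on a regular grid via FFT.

        The samples are binned into a histogram; the KDE is the convolution
        of that histogram with the kernel, evaluated as a product in
        Fourier space instead of a direct O(N*M) sum.
        """
        grid = np.linspace(grid_min, grid_max, n_grid, endpoint=False)
        dx = grid[1] - grid[0]
        hist, _ = np.histogram(samples, bins=n_grid, range=(grid_min, grid_max))
        hist = hist / (len(samples) * dx)                              # empirical density
        freqs = np.fft.fftfreq(n_grid, d=dx)
        kernel_ft = np.exp(-2.0 * (np.pi * freqs * bandwidth) ** 2)    # FT of Gaussian
        density = np.fft.ifft(np.fft.fft(hist) * kernel_ft).real
        return grid, np.clip(density, 0.0, None)

    # Toy separation: label as "signal" where its estimated density dominates.
    signal = np.random.normal(1.0, 0.5, 5000)
    background = np.random.normal(-1.0, 0.8, 5000)
    x, f_sig = kde_fft(signal, -5.0, 5.0)
    _, f_bkg = kde_fft(background, -5.0, 5.0)
    decision = f_sig > f_bkg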
Can Accelerators Accelerate Learning?
NASA Astrophysics Data System (ADS)
Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.
2009-03-01
The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ) [1] makes it possible for students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools like the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily manageable by the students, who can perform simple hands-on activities, stimulating interest in physics and getting the students close to modern laboratory techniques.
Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A
2017-04-01
In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 Gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine was a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
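The analytical performance model itself is not reproduced in the abstract; the sketch below is an assumed, simplified model of that kind, in which the computational kernel scales with the number of GPUs while communication and memory transfers do not (all names and numbers are illustrative).

    def predicted_time(compute_time_1gpu, transfer_time, n_gpus):
        """Estimated wall time when the compute kernel is split across n_gpus.

        compute_time_1gpu : kernel time on a single GPU
        transfer_time     : data-transfer/communication time that does not parallelize
        """
        return compute_time_1gpu / n_gpus + transfer_time

    def speedup(compute_time_1gpu, transfer_time, n_gpus):
        serial = compute_time_1gpu + transfer_time
        return serial / predicted_time(compute_time_1gpu, transfer_time, n_gpus)

    # Compute-bound task: speedup approaches the ideal factor of 14 GPUs.
    print(speedup(100.0, 1.0, 14))   # ~12.4
    # Transfer-bound task: speedup saturates well below the GPU count.
    print(speedup(10.0, 10.0, 14))   # ~1.9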
The ZPIC educational code suite
NASA Astrophysics Data System (ADS)
Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.
2017-10-01
Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, such as fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter space exploration. We also invite contributions to this repository of test problems, which will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.
NASA Technical Reports Server (NTRS)
Lichtenstein, J. H.
1975-01-01
Power-spectral-density calculations were made of the lateral responses to atmospheric turbulence for several conventional and short take-off and landing (STOL) airplanes. The turbulence was modeled as three orthogonal velocity components, which were uncorrelated, and each was represented with a one-dimensional power spectrum. Power spectral densities were computed for displacements, rates, and accelerations in roll, yaw, and sideslip. In addition, the power spectral density of the transverse acceleration was computed. Evaluation of ride quality based on a specific ride quality criterion was also made. The results show that the STOL airplanes generally had larger values for the rate and acceleration power spectra (and, consequently, larger corresponding root-mean-square values) than the conventional airplanes. The ride quality criterion gave poorer ratings to the STOL airplanes than to the conventional airplanes.
Acceleration of neutrons in a scheme of a tautochronous mathematical pendulum (physical principles)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivlin, Lev A
We consider the physical principles of neutron acceleration through a multiple synchronous interaction with a gradient rf magnetic field in a scheme of a tautochronous mathematical pendulum. (laser applications and other aspects of quantum electronics)
NASA Astrophysics Data System (ADS)
Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Hong, Yang; Zuo, Depeng; Ren, Minglei; Lei, Tianjie; Liang, Ke
2018-01-01
Hydrological model calibration has been a hot issue for decades. The shuffled complex evolution method developed at the University of Arizona (SCE-UA) has been proved to be an effective and robust optimization approach. However, its computational efficiency deteriorates significantly when the amount of hydrometeorological data increases. In recent years, the rise of heterogeneous parallel computing has brought hope for the acceleration of hydrological model calibration. This study proposed a parallel SCE-UA method and applied it to the calibration of a watershed rainfall-runoff model, the Xinanjiang model. The parallel method was implemented on heterogeneous computing systems using OpenMP and CUDA. Performance testing and sensitivity analysis were carried out to verify its correctness and efficiency. Comparison results indicated that heterogeneous parallel computing-accelerated SCE-UA converged much more quickly than the original serial version and possessed satisfactory accuracy and stability for the task of fast hydrological model calibration.
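A minimal sketch of how one outer iteration of such a parallel SCE-UA step might look, with Python multiprocessing standing in for the OpenMP/CUDA back-ends used in the paper; the toy objective function, the complex sizes, and all names are assumptions for illustration only.

    import numpy as np
    from multiprocessing import Pool

    def objective(params):
        # Stand-in for the calibration error of the rainfall-runoff model
        # (e.g., RMSE of simulated vs. observed discharge); here a toy bowl.
        return float(np.sum((params - 0.5) ** 2))

    def evolve_complex(points):
        """One simplified competitive-evolution step inside a single complex."""
        scores = np.array([objective(p) for p in points])
        order = np.argsort(scores)
        worst = points[order[-1]]
        centroid = points[order[:-1]].mean(axis=0)
        reflected = 2.0 * centroid - worst                # reflection move
        if objective(reflected) < scores[order[-1]]:
            points[order[-1]] = reflected
        return points

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        population = rng.random((64, 8))                  # 64 candidate parameter sets
        complexes = np.array_split(population, 8)         # shuffle into 8 complexes
        with Pool() as pool:                              # complexes evolve concurrently
            complexes = pool.map(evolve_complex, complexes)
        population = np.vstack(complexes)                 # re-shuffle for the next loop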
None
2018-01-16
Take a virtual tour of the campus of Thomas Jefferson National Accelerator Facility. You can see inside our two accelerators, three experimental areas, accelerator component fabrication and testing areas, high-performance computing areas and laser labs.
Research for the Fluid Field of the Centrifugal Compressor Impeller in Accelerating Startup
NASA Astrophysics Data System (ADS)
Li, Xiaozhu; Chen, Gang; Zhu, Changyun; Qin, Guoliang
2013-03-01
In order to study the flow field in the impeller during the accelerating start-up process of a centrifugal compressor, the 3-D and 1-D transient accelerated-flow governing equations along a streamline in the impeller are derived in detail, an assumption on the pressure-gradient distribution is presented, and a solving method for the 1-D transient accelerating flow field is given based on this assumption. The method is implemented in a program and computational results are obtained. Comparison shows that the computed results agree with the test data, demonstrating the feasibility and effectiveness of the proposed method for solving the accelerating start-up problem of a centrifugal compressor.
Milankovic, Ivan L; Mijailovic, Nikola V; Filipovic, Nenad D; Peulic, Aleksandar S
2017-01-01
Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. The breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that existing image databases hold large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of an existing algorithm for region-of-interest based image segmentation of mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As the dataflow engine (DFE) of such an HPRDC, Maxeler's acceleration card is used. The experiments examining the acceleration of that algorithm on Reconfigurable Dataflow Computers (RDCs) were performed with two types of mammogram images with different resolutions. There were also several DFE configurations, each of which gave a different acceleration of the algorithm's execution. Those acceleration values are presented, and the experimental results showed good acceleration.
galario: Gpu Accelerated Library for Analyzing Radio Interferometer Observations
NASA Astrophysics Data System (ADS)
Tazzari, Marco; Beaujean, Frederik; Testi, Leonardo
2017-10-01
The galario library exploits the computing power of modern graphics cards (GPUs) to accelerate the comparison of model predictions to radio interferometer observations. It speeds up the computation of the synthetic visibilities given a model image (or an axisymmetric brightness profile) and their comparison to the observations.
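The essence of that computation can be sketched as follows (a hedged illustration only; this is not galario's actual API, and the nearest-neighbour sampling is a simplification of the interpolation a real implementation would use):

    import numpy as np

    def synthetic_visibilities(image, dxy, u, v):
        """Sample the Fourier transform of a model image at observed (u, v) points.

        image : 2-D model brightness map, square
        dxy   : pixel size in radians
        u, v  : observed spatial frequencies in wavelengths
        """
        n = image.shape[0]
        vis_grid = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(image)))
        freqs = np.fft.fftshift(np.fft.fftfreq(n, d=dxy))
        iu = np.clip(np.searchsorted(freqs, u), 0, n - 1)   # nearest grid point
        iv = np.clip(np.searchsorted(freqs, v), 0, n - 1)
        return vis_grid[iv, iu]

    def chi2(model_vis, obs_vis, weights):
        """Weighted comparison of model visibilities to the observed ones."""
        return float(np.sum(weights * np.abs(model_vis - obs_vis) ** 2))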
Computer-Assisted Learning in Elementary Reading: A Randomized Control Trial
ERIC Educational Resources Information Center
Shannon, Lisa Cassidy; Styers, Mary Koenig; Wilkerson, Stephanie Baird; Peery, Elizabeth
2015-01-01
This study evaluated the efficacy of Accelerated Reader, a computer-based learning program, at improving student reading. Accelerated Reader is a progress-monitoring, assessment, and practice tool that supports classroom instruction and guides independent reading. Researchers used a randomized controlled trial to evaluate the program with 344…
NASA Technical Reports Server (NTRS)
Derkevorkian, Armen; Peterson, Lee; Kolaini, Ali R.; Hendricks, Terry J.; Nesmith, Bill J.
2016-01-01
An analytic approach is demonstrated to reveal potential pyroshock-driven dynamic effects causing power losses in the Thermo-Electric (TE) module bars of the Mars Science Laboratory (MSL) Multi-Mission Radioisotope Thermoelectric Generator (MMRTG). This study utilizes high-fidelity finite element analysis with SIERRA/PRESTO codes to estimate wave propagation effects due to large-amplitude, suddenly-applied pyroshock loads in the MMRTG. A high-fidelity model of the TE module bar was created with approximately 30 million degrees-of-freedom (DOF). First, a quasi-static preload was applied on top of the TE module bar, then transient tri-axial acceleration inputs were simultaneously applied on the preloaded module. The applied input acceleration signals were measured during MMRTG shock qualification tests performed at the Jet Propulsion Laboratory. An explicit finite element solver in the SIERRA/PRESTO computational environment, along with a 3000-processor parallel supercomputing framework at NASA Ames, was used for the simulation. The simulation results were investigated both qualitatively and quantitatively. The predicted shock wave propagation results provide detailed structural responses throughout the TE module bar, and key insights into the dynamic response (i.e., loads, displacements, accelerations) of critical internal spring/piston compression systems, TE materials, and internal component interfaces in the MMRTG TE module bar. They also provide confidence in the viability of this high-fidelity modeling scheme to accurately predict shock wave propagation patterns within complex structures. This analytic approach is envisioned for modeling shock-sensitive hardware susceptible to intense shock environments positioned near shock separation devices in modern space vehicles and systems.
Chirped pulse inverse free-electron laser vacuum accelerator
Hartemann, Frederic V.; Baldis, Hector A.; Landahl, Eric C.
2002-01-01
A chirped pulse inverse free-electron laser (IFEL) vacuum accelerator for high-gradient laser acceleration in vacuum. By the use of an ultrashort (femtosecond), ultrahigh-intensity chirped laser pulse, both the IFEL interaction bandwidth and the accelerating gradient are increased, thus yielding large gains in a compact system. In addition, the IFEL resonance condition can be maintained throughout the interaction region by using a chirped drive laser wave. Furthermore, diffraction can be alleviated by taking advantage of the laser optical bandwidth with negative-dispersion focusing optics to produce a chromatic line focus. The combination of these features results in a compact, efficient vacuum laser accelerator which finds many applications, including high energy physics, compact table-top laser accelerators for medical imaging and therapy, materials science, and basic physics.
Discrete event performance prediction of speculatively parallel temperature-accelerated dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zamora, Richard James; Voter, Arthur F.; Perez, Danny
Due to its unrivaled ability to predict the dynamical evolution of interacting atoms, molecular dynamics (MD) is a widely used computational method in theoretical chemistry, physics, biology, and engineering. Despite its success, MD is only capable of modeling time scales within several orders of magnitude of thermal vibrations, leaving out many important phenomena that occur at slower rates. The Temperature Accelerated Dynamics (TAD) method overcomes this limitation by thermally accelerating the state-to-state evolution captured by MD. Due to the algorithmically complex nature of the serial TAD procedure, implementations have yet to improve performance by parallelizing the concurrent exploration of multiple states. Here we utilize a discrete event-based application simulator to introduce and explore a new Speculatively Parallel TAD (SpecTAD) method. We investigate the SpecTAD algorithm, without a full-scale implementation, by constructing an application simulator proxy (SpecTADSim). Finally, following this method, we discover that a nontrivial relationship exists between the optimal SpecTAD parameter set and the number of CPU cores available at run-time. Furthermore, we find that a majority of the available SpecTAD boost can be achieved within an existing TAD application using relatively simple algorithm modifications.
Discrete event performance prediction of speculatively parallel temperature-accelerated dynamics
Zamora, Richard James; Voter, Arthur F.; Perez, Danny; ...
2016-12-01
Due to its unrivaled ability to predict the dynamical evolution of interacting atoms, molecular dynamics (MD) is a widely used computational method in theoretical chemistry, physics, biology, and engineering. Despite its success, MD is only capable of modeling time scales within several orders of magnitude of thermal vibrations, leaving out many important phenomena that occur at slower rates. The Temperature Accelerated Dynamics (TAD) method overcomes this limitation by thermally accelerating the state-to-state evolution captured by MD. Due to the algorithmically complex nature of the serial TAD procedure, implementations have yet to improve performance by parallelizing the concurrent exploration of multiple states. Here we utilize a discrete event-based application simulator to introduce and explore a new Speculatively Parallel TAD (SpecTAD) method. We investigate the SpecTAD algorithm, without a full-scale implementation, by constructing an application simulator proxy (SpecTADSim). Finally, following this method, we discover that a nontrivial relationship exists between the optimal SpecTAD parameter set and the number of CPU cores available at run-time. Furthermore, we find that a majority of the available SpecTAD boost can be achieved within an existing TAD application using relatively simple algorithm modifications.
Current-Voltage Characteristic of Nanosecond - Duration Relativistic Electron Beam
NASA Astrophysics Data System (ADS)
Andreev, Andrey
2005-10-01
The pulsed electron-beam accelerator SINUS-6 was used to measure the current-voltage characteristic of a nanosecond-duration thin annular relativistic electron beam accelerated in vacuum along the axis of a smooth uniform metal tube immersed in a strong axial magnetic field. Results of these measurements, as well as results of computer simulations performed using the 3D MAGIC code, show that the dependence of the electron-beam current on the accelerating voltage at the front of the nanosecond-duration pulse is different from the analogous dependence at the flat part of the pulse. In the steady-state (flat) part of the pulse, the measured electron-beam current is close to the Fedosov current [1], which is governed by the conservation law of electron momentum flow for any constant voltage. In the non-steady-state part (front) of the pulse, the electron-beam current is higher than the corresponding steady-state (Fedosov) current for a given voltage. [1] A. I. Fedosov, E. A. Litvinov, S. Ya. Belomytsev, and S. P. Bugaev, "Characteristics of electron beam formed in diodes with magnetic insulation," Soviet Physics Journal (a translation of Izvestiya VUZ. Fizika), vol. 20, no. 10, October 1977 (April 20, 1978), pp. 1367-1368.
An FPGA computing demo core for space charge simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Jinyuan; Huang, Yifei; /Fermilab
2009-01-01
In accelerator physics, space charge simulation requires a large amount of computing power. In a particle system, each calculation requires time- and resource-consuming operations such as multiplications, divisions, and square roots. Because of the flexibility of field programmable gate arrays (FPGAs), we implemented this task with efficient use of the available computing resources and completely eliminated non-calculating operations that are indispensable in regular micro-processors (e.g. instruction fetch, instruction decoding, etc.). We designed and tested a 16-bit demo core for computing Coulomb's force in an Altera Cyclone II FPGA device. To save resources, the inverse square-root cube operation in our design is computed using a memory look-up table addressed with the nine to ten most significant non-zero bits. At a 200 MHz internal clock, our demo core reaches a throughput of 200 M pairs/s/core, faster than a typical 2 GHz micro-processor by about a factor of 10. Temperature and power consumption of FPGAs were also lower than those of micro-processors. Fast and convenient, FPGAs can serve as alternatives to time-consuming micro-processors for space charge simulation.
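A software sketch of the table-lookup idea used in the FPGA core is shown below: the pairwise Coulomb force needs (r^2)^(-3/2), which is read from a precomputed table addressed by the leading mantissa bits of r^2, with the exponent handled separately. This is an assumed illustration of the technique, not the actual fixed-point FPGA design.

    import numpy as np

    TABLE_BITS = 10                                   # table addressed by 10 leading bits
    _N = 2 ** TABLE_BITS
    # (r^2)^(-3/2) tabulated over one octave of the mantissa, [0.5, 1).
    _table = (0.5 + (np.arange(_N) + 0.5) / (2 * _N)) ** -1.5

    def inv_sqrt_cube(r2):
        """Approximate (r2)**(-1.5) with a mantissa look-up table."""
        r2 = np.asarray(r2, dtype=float)
        mant, expo = np.frexp(r2)                     # r2 = mant * 2**expo, mant in [0.5, 1)
        index = ((mant - 0.5) * 2 * _N).astype(int)
        return _table[index] * 2.0 ** (-1.5 * expo)

    def coulomb_force(xi, xj, q=1.0):
        """Pairwise force ~ q * (xi - xj) / r^3 using the table-based 1/r^3."""
        d = np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float)
        r2 = np.dot(d, d)
        return q * d * inv_sqrt_cube(r2)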
Accelerating sino-atrium computer simulations with graphic processing units.
Zhang, Hong; Xiao, Zheng; Lin, Shien-fong
2015-01-01
Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and their interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach with Graphics Processing Units (GPUs) is proposed in a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer, and partitioning. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased by 62% with respect to a serial program running on a CPU. The execution time decreased by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrated the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations.
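A rough sketch of the operator-splitting idea is given below: each time step alternates a per-cell reaction (membrane dynamics) step, which is the part that maps naturally onto one GPU thread per cell, with a diffusion step coupling neighbouring cells. The cell model here is a toy cubic nonlinearity, not the actual SANC/atrial model of the paper, and NumPy vectorization stands in for the GPU kernels.

    import numpy as np

    def reaction_step(v, dt):
        """Per-cell membrane dynamics (toy cubic term); one GPU thread per cell."""
        return v + dt * (v - v ** 3 / 3.0 + 0.5)

    def diffusion_step(v, dt, d_coeff, dx):
        """Explicit 1-D coupling between neighbouring cells."""
        lap = np.zeros_like(v)
        lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx ** 2
        return v + dt * d_coeff * lap

    v = np.full(530, -1.0)                 # e.g. 500 SAN cells + 30 atrial cells
    for _ in range(1000):                  # operator splitting: alternate half-problems
        v = reaction_step(v, dt=0.01)
        v = diffusion_step(v, dt=0.01, d_coeff=0.1, dx=0.1)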
NASA Astrophysics Data System (ADS)
Heller, Johann; Flisgen, Thomas; van Rienen, Ursula
The computation of electromagnetic fields and parameters derived thereof for lossless radio frequency (RF) structures filled with isotropic media is an important task for the design and operation of particle accelerators. Unfortunately, these computations are often highly demanding with regard to computational effort. The entire computational demand of the problem can be reduced using decomposition schemes in order to solve the field problems on standard workstations. This paper presents one of the first detailed comparisons between the recently proposed state-space concatenation approach (SSC) and a direct computation for an accelerator cavity with coupler-elements that break the rotational symmetry.
New Concepts and Fermilab Facilities for Antimatter Research
NASA Astrophysics Data System (ADS)
Jackson, Gerald
2008-04-01
There has long been significant interest in continuing antimatter research at the Fermi National Accelerator Laboratory. Beam kinetic energies ranging from 10 GeV all the way down to the eV scale and below are of interest. There are three physics missions currently being developed: the continuation of charmonium physics utilizing an internal target; atomic physics with in-flight generated antihydrogen atoms; and deceleration to thermal energies and passage of antiprotons through a grating system to determine their gravitational acceleration. Non-physics missions include the study of medical applications, tests of deep-space propulsion concepts, low-risk testing of nuclear fuel elements, and active interrogation for smuggled nuclear materials in support of homeland security. This paper reviews recent beam physics and accelerator technology innovations in the development of methods and new Fermilab facilities for the above missions.
NASA Technical Reports Server (NTRS)
Swisshelm, Julie M.
1989-01-01
An explicit flow solver, applicable to the hierarchy of model equations ranging from Euler to full Navier-Stokes, is combined with several techniques designed to reduce computational expense. The computational domain consists of local grid refinements embedded in a global coarse mesh, where the locations of these refinements are defined by the physics of the flow. Flow characteristics are also used to determine which set of model equations is appropriate for solution in each region, thereby reducing not only the number of grid points at which the solution must be obtained, but also the computational effort required to get that solution. Acceleration to steady-state is achieved by applying multigrid on each of the subgrids, regardless of the particular model equations being solved. Since each of these components is explicit, advantage can readily be taken of the vector- and parallel-processing capabilities of machines such as the Cray X-MP and Cray-2.
Graphical processors for HEP trigger systems
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.
2017-02-01
General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to employ GPUs as accelerators in offline computations. With the steady decrease of GPU latencies and the increase in link and memory throughputs, time is ripe for real-time applications using GPUs in high-energy physics data acquisition and trigger systems. We will discuss the use of online parallel computing on GPUs for synchronous low level trigger systems, focusing on tests performed on the trigger of the CERN NA62 experiment. Latencies of all components need analysing, networking being the most critical. To keep it under control, we envisioned NaNet, an FPGA-based PCIe Network Interface Card (NIC) enabling GPUDirect connection. Moreover, we discuss how specific trigger algorithms can be parallelised and thus benefit from a GPU implementation, in terms of increased execution speed. Such improvements are particularly relevant for the foreseen LHC luminosity upgrade where highly selective algorithms will be crucial to maintain sustainable trigger rates with very high pileup.
Particle tracking acceleration via signed distance fields in direct-accelerated geometry Monte Carlo
Shriwise, Patrick C.; Davis, Andrew; Jacobson, Lucas J.; ...
2017-08-26
Computer-aided design (CAD)-based Monte Carlo radiation transport is of value to the nuclear engineering community for its ability to conduct transport on high-fidelity models of nuclear systems, but it is more computationally expensive than native geometry representations. This work describes the adaptation of a rendering data structure, the signed distance field, as a geometric query tool for accelerating CAD-based transport in the direct-accelerated geometry Monte Carlo toolkit. Demonstrations of its effectiveness are shown for several problems. The beginnings of a predictive model for the data structure's utilization based on various problem parameters are also introduced.
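The query pattern that makes a signed distance field useful for particle tracking can be sketched as follows ("sphere tracing": each step is as long as the distance to the nearest surface, so no expensive CAD intersection test is needed until a boundary is about to be crossed). The geometry and names below are illustrative assumptions, not the toolkit's API.

    import numpy as np

    def sphere_sdf(point, center, radius):
        """Signed distance to a sphere: negative inside, positive outside."""
        return np.linalg.norm(point - center) - radius

    def track(start, direction, sdf, max_dist=100.0, eps=1e-6):
        """Advance a particle through an SDF until it reaches a surface."""
        direction = np.asarray(direction, dtype=float)
        direction = direction / np.linalg.norm(direction)
        point = np.array(start, dtype=float)
        travelled = 0.0
        while travelled < max_dist:
            d = abs(sdf(point))
            if d < eps:                    # at a surface: hand off to the full
                return point, travelled    # CAD-based intersection routine
            point = point + d * direction  # safe step: no surface within distance d
            travelled += d
        return point, travelled

    hit, dist = track(start=(0.0, 0.0, -5.0), direction=(0.0, 0.0, 1.0),
                      sdf=lambda p: sphere_sdf(p, np.zeros(3), 1.0))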
An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators
Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; ...
2017-10-17
Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF & RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF & RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.
openPSTD: The open source pseudospectral time-domain method for acoustic propagation
NASA Astrophysics Data System (ADS)
Hornikx, Maarten; Krijnen, Thomas; van Harten, Louis
2016-06-01
An open source implementation of the Fourier pseudospectral time-domain (PSTD) method for computing the propagation of sound is presented, which is geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, but maintains efficiency in processing time and memory usage as it allows spatial sampling close to the Nyquist criterion, thus keeping both the required spatial and temporal resolution coarse. In the implementation, the physical geometry is modeled as a composition of rectangular two-dimensional subdomains, hence initially restricting the implementation to orthogonal and two-dimensional situations. The strategy of using subdomains divides the problem domain into local subsets, which enables the simulation software to be built according to Object-Oriented Programming best practices and allows room for further computational parallelization. The software is built using the open source components Blender, Numpy and Python, and has itself been published under an open source license. An option has been included to accelerate the calculations through a partial implementation of the code on the Graphics Processing Unit (GPU), which increases the throughput by up to fifteen times. The details of the implementation are reported, as well as the accuracy of the code.
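The core PSTD operation, evaluating spatial derivatives in the wavenumber domain so that only about two points per wavelength are needed, can be sketched in a few lines; the 1-D periodic toy update below is an assumed illustration, not openPSTD code, and the grid and material parameters are arbitrary.

    import numpy as np

    def pstd_derivative(p, dx):
        """Spatial derivative of a field via the Fourier pseudospectral method."""
        k = 2.0 * np.pi * np.fft.fftfreq(p.size, d=dx)       # wavenumbers
        return np.real(np.fft.ifft(1j * k * np.fft.fft(p)))

    # Simple explicit update of the 1-D linear acoustic equations.
    n, dx, dt, rho, c = 256, 0.05, 5e-5, 1.2, 343.0
    x = np.arange(n) * dx
    p = np.exp(-((x - x.mean()) / 0.5) ** 2)                 # initial pressure pulse
    u = np.zeros(n)                                          # particle velocity
    for _ in range(200):
        u -= dt / rho * pstd_derivative(p, dx)
        p -= dt * rho * c ** 2 * pstd_derivative(u, dx)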
An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.
Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF & RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF & RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.
An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators
NASA Astrophysics Data System (ADS)
Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.
2018-01-01
Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF&RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.
NASA Astrophysics Data System (ADS)
Sasaya, Tenta; Sunaguchi, Naoki; Seo, Seung-Jum; Hyodo, Kazuyuki; Zeniya, Tsutomu; Kim, Jong-Ki; Yuasa, Tetsuya
2018-04-01
Gold nanoparticles (GNPs) have recently attracted attention in nanomedicine as novel contrast agents for cancer imaging. A decisive tomographic imaging technique has not yet been established to depict the 3-D distribution of GNPs in an object. An imaging technique known as pinhole-based X-ray fluorescence computed tomography (XFCT) is a promising method that can be used to reconstruct the distribution of GNPs from the X-ray fluorescence emitted by GNPs. We address the acceleration of data acquisition in pinhole-based XFCT for preclinical use using a multiple pinhole scheme. In this scheme, multiple projections are simultaneously acquired through a multi-pinhole collimator with a 2-D detector and full-field volumetric beam to enhance the signal-to-noise ratio of the projections; this enables fast data acquisition. To demonstrate the efficacy of this method, we performed an imaging experiment using a physical phantom with an actual multi-pinhole XFCT system that was constructed using the beamline AR-NE7A at KEK. The preliminary study showed that the multi-pinhole XFCT achieved a data acquisition time of 20 min at a theoretical detection limit of approximately 0.1 Au mg/ml and at a spatial resolution of 0.4 mm.
Implementation of an Accelerated Physical Examination Course in a Doctor of Pharmacy Program
Ho, Jackie; Lopes, Ingrid C.; Shah, Bijal M.; Ip, Eric J.
2014-01-01
Objective. To describe the implementation of a 1-day accelerated physical examination course for a doctor of pharmacy program and to evaluate pharmacy students’ knowledge, attitudes, and confidence in performing physical examination. Design. Using a flipped teaching approach, course coordinators collaborated with a physician faculty member to design and develop the objectives of the course. Knowledge, attitude, and confidence survey questions were administered before and after the practical laboratory. Assessment. Following the practical laboratory, knowledge improved by 8.3% (p<0.0001). Students’ perceived ability and confidence to perform a physical examination significantly improved (p<0.0001). A majority of students responded that reviewing the training video (81.3%) and reading material (67.4%) prior to the practical laboratory was helpful in learning the physical examination. Conclusion. An accelerated physical examination course using a flipped teaching approach was successful in improving students’ knowledge of, attitudes about, and confidence in using physical examination skills in pharmacy practice.
Dimension-dependent stimulated radiative interaction of a single electron quantum wavepacket
NASA Astrophysics Data System (ADS)
Gover, Avraham; Pan, Yiming
2018-06-01
In the foundations of quantum mechanics, the spatial dimensions of an electron wavepacket are understood only in terms of an expectation value, the probability distribution of the particle's location. One can still inquire how the size of the quantum electron wavepacket affects a physical process. Here we address the fundamental physics problem of particle-wave duality and the measurability of a free-electron quantum wavepacket. Our analysis of the stimulated radiative interaction of an electron wavepacket, accompanied by numerical computations, reveals two limits. In the quantum regime of long wavepacket size relative to the radiation wavelength, one obtains only quantum-recoil multiphoton sidebands in the electron energy spectrum. In the opposite regime, the wavepacket interaction approaches the limit of classical point-particle acceleration. The wavepacket features can be revealed in experiments carried out in the intermediate regime of wavepacket size commensurate with the radiation wavelength.
Development of stable Grid service at the next generation system of KEKCC
NASA Astrophysics Data System (ADS)
Nakamura, T.; Iwai, G.; Matsunaga, H.; Murakami, K.; Sasaki, T.; Suzuki, S.; Takase, W.
2017-10-01
Many experiments in the field of accelerator-based science are actively running at the High Energy Accelerator Research Organization (KEK), using the SuperKEKB and J-PARC accelerators in Japan. At KEK, the computing demand from the various experiments for data processing, analysis, and MC simulation is steadily increasing. This is not only the case for high-energy experiments; the computing requirements of the hadron and neutrino experiments and of some astro-particle physics projects are also rapidly increasing due to very high precision measurements. Under this situation, several projects supported by KEK, the Belle II, T2K, ILC and KAGRA experiments, are going to utilize the Grid computing infrastructure as their main computing resource. The Grid system and services at KEK, which are already in production, were upgraded for more stable operation at the same time as the whole-scale hardware replacement of the KEK Central Computer System (KEKCC). The next-generation KEKCC system started operation at the beginning of September 2016. The basic Grid services, e.g. BDII, VOMS, LFC, the CREAM computing element and the StoRM storage element, are deployed on a more robust hardware configuration. Since raw data transfer is one of the most important tasks for the KEKCC, two redundant GridFTP servers are attached to the StoRM service instances with 40 Gbps network bandwidth on the LHCONE routing. These are dedicated to Belle II raw data transfer to other sites, apart from the servers used for data transfer by the other VOs. Additionally, we prepared a redundant configuration for the database-oriented services such as LFC and AMGA using LifeKeeper. The LFC service consists of two read/write servers and two read-only servers for the Belle II experiment, each with an individual database for load balancing. The FTS3 service is newly deployed as a service for Belle II data distribution. A CVMFS stratum-0 service was started for the Belle II software repository, and a stratum-1 service is provided for the other VOs. In this way, there are many upgrades to the production Grid services at the KEK Computing Research Center. In this paper, we introduce the detailed configuration of the hardware for the Grid instances and several mechanisms used to construct a robust Grid system in the next-generation KEKCC system.
Software package for modeling spin-orbit motion in storage rings
NASA Astrophysics Data System (ADS)
Zyuzin, D. V.
2015-12-01
A software package providing a graphical user interface for computer experiments on the motion of charged particle beams in accelerators, as well as analysis of obtained data, is presented. The software package was tested in the framework of the international project on electric dipole moment measurement JEDI (Jülich Electric Dipole moment Investigations). The specific features of particle spin motion imply the requirement to use a cyclic accelerator (storage ring) consisting of electrostatic elements, which makes it possible to preserve horizontal polarization for a long time. Computer experiments study the dynamics of 10^6-10^9 particles in a beam during 10^9 turns in an accelerator (about 10^12-10^15 integration steps for the equations of motion). For designing an optimal accelerator structure, a large number of computer experiments on polarized beam dynamics are required. The numerical core of the package is COSY Infinity, a program for modeling spin-orbit dynamics.
An acceleration framework for synthetic aperture radar algorithms
NASA Astrophysics Data System (ADS)
Kim, Youngsoo; Gloster, Clay S.; Alexander, Winser E.
2017-04-01
Algorithms for radar signal processing, such as Synthetic Aperture Radar (SAR), are computationally intensive and require considerable execution time on a general purpose processor. Reconfigurable logic can be used to off-load the primary computational kernel onto a custom computing machine in order to reduce execution time by an order of magnitude as compared to kernel execution on a general purpose processor. Specifically, Field Programmable Gate Arrays (FPGAs) can be used to accelerate these kernels using hardware-based custom logic implementations. In this paper, we demonstrate a framework for algorithm acceleration. We used SAR as a case study to illustrate the potential for algorithm acceleration offered by FPGAs. Initially, we profiled the SAR algorithm and implemented a homomorphic filter using a hardware implementation of the natural logarithm. Experimental results show a linear speedup from adding reasonably small processing elements in a Field Programmable Gate Array (FPGA), as opposed to using a software implementation running on a typical general purpose processor.
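A homomorphic filter of the kind profiled in the paper can be sketched as follows: the logarithm turns multiplicative (speckle-like) variation into an additive component that a frequency-domain filter can suppress, and the exponential restores the image; it is the logarithm step that the FPGA implementation accelerates. The filter shape and parameter values below are illustrative assumptions.

    import numpy as np

    def homomorphic_filter(image, cutoff=0.1, low_gain=0.5, high_gain=1.5):
        """Suppress multiplicative variation by filtering the log-image."""
        log_img = np.log1p(image)                      # the step mapped to hardware
        spec = np.fft.fft2(log_img)
        fy = np.fft.fftfreq(image.shape[0])[:, None]
        fx = np.fft.fftfreq(image.shape[1])[None, :]
        r = np.sqrt(fx ** 2 + fy ** 2)
        # Emphasize high frequencies (reflectance) over low ones (illumination).
        h = low_gain + (high_gain - low_gain) * (1.0 - np.exp(-(r / cutoff) ** 2))
        filtered = np.real(np.fft.ifft2(spec * h))
        return np.expm1(filtered)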
Evidence for object permanence in the smooth-pursuit eye movements of monkeys.
Churchland, Mark M; Chou, I-Han; Lisberger, Stephen G
2003-10-01
We recorded the smooth-pursuit eye movements of monkeys in response to targets that were extinguished (blinked) for 200 ms in mid-trajectory. Eye velocity declined considerably during the target blinks, even when the blinks were completely predictable in time and space. Eye velocity declined whether blinks were presented during steady-state pursuit of a constant-velocity target, during initiation of pursuit before target velocity was reached, or during eye accelerations induced by a change in target velocity. When a physical occluder covered the trajectory of the target during blinks, creating the impression that the target moved behind it, the decline in eye velocity was reduced or abolished. If the target was occluded once the eye had reached target velocity, pursuit was only slightly poorer than normal, uninterrupted pursuit. In contrast, if the target was occluded during the initiation of pursuit, while the eye was accelerating toward target velocity, pursuit during occlusion was very different from normal pursuit. Eye velocity remained relatively stable during target occlusion, showing much less acceleration than normal pursuit and much less of a decline than was produced by a target blink. Anticipatory or predictive eye acceleration was typically observed just prior to the reappearance of the target. Computer simulations show that these results are best understood by assuming that a mechanism of eye-velocity memory remains engaged during target occlusion but is disengaged during target blinks.
Will there be energy frontier colliders after LHC?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiltsev, Vladimir
2016-09-15
High energy particle colliders have been at the forefront of particle physics for more than three decades. At present, the near-term US, European and international strategies of the particle physics community are centered on full exploitation of the physics potential of the Large Hadron Collider (LHC) through its high-luminosity upgrade (HL-LHC). The future of the world-wide HEP community critically depends on the feasibility of possible post-LHC colliders. The concept of feasibility is complex and includes at least three factors: feasibility of energy, feasibility of luminosity, and feasibility of cost. Here we overview all current options for post-LHC colliders from this perspective (ILC, CLIC, Muon Collider, plasma colliders, CEPC, FCC, HE-LHC) and discuss the major challenges and accelerator R&D required to demonstrate the feasibility of an energy frontier accelerator facility following the LHC. We conclude by taking a look at ultimate-energy-reach accelerators based on plasmas and crystals, and a discussion of the perspectives for the far future of accelerator-based particle physics.
Compensation Techniques in Accelerator Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayed, Hisham Kamal
2011-05-01
Accelerator physics is one of the most diverse multidisciplinary fields of physics, wherein the dynamics of particle beams is studied. It takes more than an understanding of basic electromagnetic interactions to be able to predict the beam dynamics and to develop new techniques to produce, maintain, and deliver high quality beams for different applications. In this work, some basic theory regarding particle beam dynamics in accelerators will be presented. This basic theory, along with state-of-the-art techniques in beam dynamics, will be used in this dissertation to study and solve accelerator physics problems. Two problems involving compensation are studied in the context of the MEIC (Medium Energy Electron Ion Collider) project at Jefferson Laboratory. Several chromaticity (the energy dependence of the particle tune) compensation methods are evaluated numerically and deployed in a figure-eight ring designed for the electrons in the collider. Furthermore, transverse coupling optics have been developed to compensate the coupling introduced by the spin rotators in the MEIC electron ring design.
NASA Technical Reports Server (NTRS)
Hansman, R. J., Jr.
1982-01-01
The feasibility of computerized simulation of the physics of advanced microwave anti-icing systems, which preheat impinging supercooled water droplets prior to impact, was investigated. Theoretical and experimental work performed to create a physically realistic simulation is described. The behavior of the absorption cross section for melting ice particles was measured by a resonant cavity technique and found to agree with theoretical predictions. Values of the dielectric parameters of supercooled water were measured by a similar technique at lambda = 2.82 cm down to -17 C. The hydrodynamic behavior of accelerated water droplets was studied photographically in a wind tunnel. Droplets were found to initially deform as oblate spheroids and to eventually become unstable and break up in Bessel function modes for large values of acceleration or droplet size. This confirms the theory as to the maximum stable droplet size in the atmosphere. A computer code which predicts droplet trajectories in an arbitrary flow field was written and confirmed experimentally. The results were consolidated into a simulation to study the heating by electromagnetic fields of droplets impinging onto an object such as an airfoil. It was determined that there is sufficient time to heat droplets prior to impact for typical parameter values. Design curves for such a system are presented.
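A minimal sketch of the kind of droplet-trajectory integration described (a droplet accelerated by aerodynamic drag while carried through a prescribed flow field) is given below; the Stokes-drag model, the explicit Euler integrator, and all parameter values are assumptions for illustration.

    import numpy as np

    def drag_accel(v_drop, v_air, diameter=20e-6, rho_water=1000.0, mu=1.8e-5):
        """Acceleration of a small spherical droplet from Stokes drag."""
        tau = rho_water * diameter ** 2 / (18.0 * mu)   # droplet response time
        return (v_air - v_drop) / tau

    def trajectory(x0, v0, flow_field, dt=1e-4, steps=5000):
        """Integrate a droplet path through an arbitrary flow field."""
        x, v = np.array(x0, dtype=float), np.array(v0, dtype=float)
        path = [x.copy()]
        for _ in range(steps):
            v = v + dt * drag_accel(v, flow_field(x))
            x = x + dt * v
            path.append(x.copy())
        return np.array(path)

    # Example: droplet released into a uniform 80 m/s free stream.
    path = trajectory(x0=(-1.0, 0.02), v0=(80.0, 0.0),
                      flow_field=lambda x: np.array([80.0, 0.0]))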
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albright, Brian James; Yin, Lin; Stark, David James
This proposal sought of order 1M core-hours of Institutional Computing time intended to enable computing by a new LANL Postdoc (David Stark) working under LDRD ER project 20160472ER (PI: Lin Yin) on laser-ion acceleration. The project was “off-cycle,” initiating in June of 2016 with a postdoc hire.
Physical and digital simulations for IVA robotics
NASA Technical Reports Server (NTRS)
Hinman, Elaine; Workman, Gary L.
1992-01-01
Space-based materials processing experiments can be enhanced through the use of IVA robotic systems. A program to determine requirements for the implementation of robotic systems in a microgravity environment, and to develop some preliminary concepts for acceleration control of small, lightweight arms, has been initiated with the development of physical and digital simulation capabilities. The physical simulation facilities incorporate a robotic workcell containing a Zymark Zymate II robot instrumented for acceleration measurements, which is able to perform materials transfer functions while flying on NASA's KC-135 aircraft during parabolic maneuvers to simulate reduced gravity. Measurements of accelerations occurring during the reduced-gravity periods will be used to characterize the impact of robotic accelerations in a microgravity environment in space. Digital simulations are being performed with TREETOPS, a NASA-developed software package which is used for the dynamic analysis of systems with a tree topology. Extensive use of both simulation tools will enable the design of robotic systems with enhanced acceleration control for use in the space manufacturing environment.
Modern Computational Techniques for the HMMER Sequence Analysis
2013-01-01
This paper focuses on the latest research and critical reviews of modern computing architectures and of software- and hardware-accelerated algorithms for bioinformatics data analysis, with an emphasis on one of the most important sequence analysis applications: hidden Markov models (HMMs). We show a detailed performance comparison of sequence analysis tools on various computing platforms recently developed in the bioinformatics community. The characteristics of sequence analysis, such as its data- and compute-intensive nature, make it very attractive to optimize and parallelize using both traditional software approaches and innovative hardware acceleration technologies.
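At its core, the HMM scoring that such tools accelerate is a dynamic-programming recurrence; the sketch below is a generic log-space Viterbi for a small discrete HMM, not the profile-HMM architecture of HMMER, and all names are illustrative.

    import numpy as np

    def viterbi(obs, log_trans, log_emit, log_start):
        """Most probable state path for a discrete HMM (log-space Viterbi).

        obs       : sequence of observed symbol indices
        log_trans : (n_states, n_states) log transition probabilities
        log_emit  : (n_states, n_symbols) log emission probabilities
        log_start : (n_states,) log initial state probabilities
        """
        score = log_start + log_emit[:, obs[0]]
        back = []
        for sym in obs[1:]:
            cand = score[:, None] + log_trans        # every previous-state -> state move
            back.append(cand.argmax(axis=0))
            score = cand.max(axis=0) + log_emit[:, sym]
        path = [int(score.argmax())]
        for best_prev in reversed(back):
            path.append(int(best_prev[path[-1]]))
        return list(reversed(path)), float(score.max())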
DeepX: Deep Learning Accelerator for Restricted Boltzmann Machine Artificial Neural Networks.
Kim, Lok-Won
2018-05-01
Although there have been many decades of research and commercial presence on high performance general purpose processors, there are still many applications that require fully customized hardware architectures for further computational acceleration. Recently, deep learning has been successfully used in a wide variety of applications, but its heavy computation demand has considerably limited its practical application. This paper proposes a fully pipelined acceleration architecture to alleviate the high computational demand of a class of artificial neural network (ANN), the restricted Boltzmann machine (RBM). The implemented RBM ANN accelerator (integrating network size, using 128 input cases per batch, and running at a 303-MHz clock frequency), integrated in a state-of-the-art field-programmable gate array (FPGA) (Xilinx Virtex 7 XC7V-2000T), provides a computational performance of 301 billion connection-updates-per-second and about 193 times higher performance than a software solution running on general purpose processors. Most importantly, the architecture enables over 4 times (12 times in batch learning) higher performance compared with a previous work when both are implemented in an FPGA device (XC2VP70).
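The computation that such an accelerator pipelines is, per mini-batch, a handful of dense matrix products and sigmoid evaluations; a hedged NumPy sketch of one contrastive-divergence (CD-1) update with 128 cases per batch is shown below (the layer sizes and learning rate are assumptions, and biases are omitted for brevity).

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(v0, weights, rng, lr=0.01):
        """One CD-1 weight update for a restricted Boltzmann machine.

        v0      : (batch, n_visible) binary input cases
        weights : (n_visible, n_hidden) connection weights
        """
        h0_prob = sigmoid(v0 @ weights)                      # up-pass
        h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
        v1_prob = sigmoid(h0 @ weights.T)                    # down-pass (reconstruction)
        h1_prob = sigmoid(v1_prob @ weights)                 # second up-pass
        grad = (v0.T @ h0_prob - v1_prob.T @ h1_prob) / v0.shape[0]
        return weights + lr * grad                           # connection updates

    rng = np.random.default_rng(1)
    batch = (rng.random((128, 784)) < 0.5).astype(float)     # 128 input cases per batch
    w = rng.normal(0.0, 0.01, size=(784, 256))
    w = cd1_update(batch, w, rng)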
TOPICAL REVIEW: Advances and challenges in computational plasma science
NASA Astrophysics Data System (ADS)
Tang, W. M.; Chan, V. S.
2005-02-01
Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. This should produce the scientific excitement which will help to (a) stimulate enhanced cross-cutting collaborations with other fields and (b) attract the bright young talent needed for the future health of the field of plasma science.
Advances and challenges in computational plasma science
NASA Astrophysics Data System (ADS)
Tang, W. M.
2005-02-01
Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. This should produce the scientific excitement which will help to (a) stimulate enhanced cross-cutting collaborations with other fields and (b) attract the bright young talent needed for the future health of the field of plasma science.
Heavy ion linear accelerator for radiation damage studies of materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kutsaev, Sergey V.; Mustapha, Brahim; Ostroumov, Peter N.
A new eXtreme MATerial (XMAT) research facility is being proposed at Argonne National Laboratory to enable rapid in situ mesoscale bulk analysis of ion radiation damage in advanced materials and nuclear fuels. This facility combines a new heavy-ion accelerator with the existing high-energy X-ray analysis capability of the Argonne Advanced Photon Source. The heavy-ion accelerator and target complex will enable experimenters to emulate the environment of a nuclear reactor making possible the study of fission fragment damage in materials. Material scientists will be able to use the measured material parameters to validate computer simulation codes and extrapolate the response of the material in a nuclear reactor environment. Utilizing a new heavy-ion accelerator will provide the appropriate energies and intensities to study these effects with beam intensities which allow experiments to run over hours or days instead of years. The XMAT facility will use a CW heavy-ion accelerator capable of providing beams of any stable isotope with adjustable energy up to 1.2 MeV/u for U-238(50+) and 1.7 MeV for protons. This energy is crucial to the design since it well mimics fission fragments that provide the major portion of the damage in nuclear fuels. The energy also allows damage to be created far from the surface of the material allowing bulk radiation damage effects to be investigated. The XMAT ion linac includes an electron cyclotron resonance ion source, a normal-conducting radio-frequency quadrupole and four normal-conducting multi-gap quarter-wave resonators operating at 60.625 MHz. This paper presents the 3D multi-physics design and analysis of the accelerating structures and beam dynamics studies of the linac.
Heavy ion linear accelerator for radiation damage studies of materials
NASA Astrophysics Data System (ADS)
Kutsaev, Sergey V.; Mustapha, Brahim; Ostroumov, Peter N.; Nolen, Jerry; Barcikowski, Albert; Pellin, Michael; Yacout, Abdellatif
2017-03-01
A new eXtreme MATerial (XMAT) research facility is being proposed at Argonne National Laboratory to enable rapid in situ mesoscale bulk analysis of ion radiation damage in advanced materials and nuclear fuels. This facility combines a new heavy-ion accelerator with the existing high-energy X-ray analysis capability of the Argonne Advanced Photon Source. The heavy-ion accelerator and target complex will enable experimenters to emulate the environment of a nuclear reactor making possible the study of fission fragment damage in materials. Material scientists will be able to use the measured material parameters to validate computer simulation codes and extrapolate the response of the material in a nuclear reactor environment. Utilizing a new heavy-ion accelerator will provide the appropriate energies and intensities to study these effects with beam intensities which allow experiments to run over hours or days instead of years. The XMAT facility will use a CW heavy-ion accelerator capable of providing beams of any stable isotope with adjustable energy up to 1.2 MeV/u for 238U50+ and 1.7 MeV for protons. This energy is crucial to the design since it well mimics fission fragments that provide the major portion of the damage in nuclear fuels. The energy also allows damage to be created far from the surface of the material allowing bulk radiation damage effects to be investigated. The XMAT ion linac includes an electron cyclotron resonance ion source, a normal-conducting radio-frequency quadrupole and four normal-conducting multi-gap quarter-wave resonators operating at 60.625 MHz. This paper presents the 3D multi-physics design and analysis of the accelerating structures and beam dynamics studies of the linac.
Laser Wakefield Acceleration: Structural and Dynamic Studies. Final Technical Report ER40954
DOE Office of Scientific and Technical Information (OSTI.GOV)
Downer, Michael C.
2014-04-30
Particle accelerators enable scientists to study the fundamental structure of the universe, but have become the largest and most expensive of scientific instruments. In this project, we advanced the science and technology of laser-plasma accelerators, which are thousands of times smaller and less expensive than their conventional counterparts. In a laser-plasma accelerator, a powerful laser pulse exerts light pressure on an ionized gas, or plasma, thereby driving an electron density wave, which resembles the wake behind a boat. Electrostatic fields within this plasma wake reach tens of billions of volts per meter, fields far stronger than ordinary non-plasma matter (such as the matter that a conventional accelerator is made of) can withstand. Under the right conditions, stray electrons from the surrounding plasma become trapped within these “wake-fields”, surf them, and acquire energy much faster than is possible in a conventional accelerator. Laser-plasma accelerators thus might herald a new generation of compact, low-cost accelerators for future particle physics, x-ray and medical research. In this project, we made two major advances in the science of laser-plasma accelerators. The first of these was to accelerate electrons beyond 1 gigaelectronvolt (1 GeV) for the first time. In experimental results reported in Nature Communications in 2013, about 1 billion electrons were captured from a tenuous plasma (about 1/100 of atmosphere density) and accelerated to 2 GeV within about one inch, while maintaining less than 5% energy spread, and spreading out less than ½ milliradian (i.e. ½ millimeter per meter of travel). Low energy spread and high beam collimation are important for applications of accelerators as coherent x-ray sources or particle colliders. This advance was made possible by exploiting unique properties of the Texas Petawatt Laser, a powerful laser at the University of Texas at Austin that produces pulses of 150 femtoseconds (1 femtosecond is 10^-15 seconds) in duration and 150 Joules in energy (equivalent to the muzzle energy of a small pistol bullet). This duration was well matched to the natural electron density oscillation period of plasma of 1/100 atmospheric density, enabling efficient excitation of a plasma wake, while this energy was sufficient to drive a high-amplitude wake of the right shape to produce an energetic, collimated electron beam. Continuing research is aimed at increasing electron energy even further, increasing the number of electrons captured and accelerated, and developing applications of the compact, multi-GeV accelerator as a coherent, hard x-ray source for materials science, biomedical imaging and homeland security applications. The second major advance under this project was to develop new methods of visualizing the laser-driven plasma wake structures that underlie laser-plasma accelerators. Visualizing these structures is essential to understanding, optimizing and scaling laser-plasma accelerators. Yet prior to work under this project, computer simulations based on estimated initial conditions were the sole source of detailed knowledge of the complex, evolving internal structure of laser-driven plasma wakes. In this project we developed and demonstrated a suite of optical visualization methods based on well-known methods such as holography, streak cameras, and coherence tomography, but adapted to the ultrafast, light-speed, microscopic world of laser-driven plasma wakes.
Our methods output images of laser-driven plasma structures in a single laser shot. We first reported snapshots of low-amplitude laser wakes in Nature Physics in 2006. We subsequently reported images of high-amplitude laser-driven plasma “bubbles”, which are important for producing electron beams with low energy spread, in Physical Review Letters in 2010. More recently, we have figured out how to image laser-driven structures that change shape while propagating in a single laser shot. The latter techniques, which use the methods of computerized tomography, were demonstrated on test objects – e.g. laser-driven filaments in air and glass – and reported in Optics Letters in 2013 and Nature Communications in 2014. Their output is a multi-frame movie rather than a snapshot. Continuing research is aimed at applying these tomographic methods directly to evolving laser-driven plasma accelerator structures in our laboratory, then, once perfected, to exporting them to plasma-based accelerator laboratories around the world as standard in-line metrology instruments.
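The matching of the laser pulse duration to the plasma oscillation period mentioned above can be checked with a short calculation. The sketch below is an illustration only, not part of the reported work; the assumed electron density of about 2.5e23 m^-3 simply stands in for "roughly 1/100 of atmospheric density".

```python
import math

# Physical constants (SI units)
E_CHARGE = 1.602176634e-19   # electron charge [C]
E_MASS = 9.1093837015e-31    # electron mass [kg]
EPS0 = 8.8541878128e-12      # vacuum permittivity [F/m]

def plasma_period(n_e):
    """Return the electron plasma oscillation period [s] for density n_e [m^-3]."""
    omega_p = math.sqrt(n_e * E_CHARGE**2 / (EPS0 * E_MASS))
    return 2.0 * math.pi / omega_p

# Assumed density: roughly 1/100 of the atmospheric number density (~2.5e25 m^-3).
n_e = 2.5e23
print(f"Plasma period ~ {plasma_period(n_e) * 1e15:.0f} fs")
# ~220 fs, the same order as the 150 fs Texas Petawatt pulse duration quoted above.
```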
ΛCDM is Consistent with SPARC Radial Acceleration Relation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, B. W.; Wadsley, J. W., E-mail: kellerbw@mcmaster.ca
2017-01-20
Recent analysis of the Spitzer Photometry and Accurate Rotation Curve (SPARC) galaxy sample found a surprisingly tight relation between the radial acceleration inferred from the rotation curves and the acceleration due to the baryonic components of the disk. It has been suggested that this relation may be evidence for new physics, beyond ΛCDM. In this Letter, we show that 32 galaxies from the MUGS2 simulations match the SPARC acceleration relation. These cosmological simulations of star-forming, rotationally supported disks were simulated with a WMAP3 ΛCDM cosmology, and match the SPARC acceleration relation with less scatter than the observational data. These results show that this acceleration relation is a consequence of the dissipative collapse of baryons, rather than being evidence for exotic dark-sector physics or new dynamical laws.
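For context, the SPARC radial acceleration relation is often summarized by a one-parameter fitting function linking the observed and baryonic accelerations. The sketch below evaluates that commonly quoted form, with the characteristic acceleration scale g_dagger ≈ 1.2e-10 m/s^2 taken as an assumption for illustration; it is not code or data from the Letter.

```python
import numpy as np

G_DAGGER = 1.2e-10  # characteristic acceleration scale [m/s^2] (assumed value)

def g_obs(g_bar, g_dagger=G_DAGGER):
    """Radial acceleration relation fitting function:
    g_obs = g_bar / (1 - exp(-sqrt(g_bar / g_dagger)))."""
    g_bar = np.asarray(g_bar, dtype=float)
    return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / g_dagger)))

# At high baryonic acceleration the relation approaches g_obs ~ g_bar;
# at low acceleration it approaches g_obs ~ sqrt(g_bar * g_dagger).
g_bar = np.logspace(-12, -8, 5)
for gb, go in zip(g_bar, g_obs(g_bar)):
    print(f"g_bar = {gb:.1e}  ->  g_obs = {go:.1e}  m/s^2")
```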
Specification of the Surface Charging Environment with SHIELDS
NASA Astrophysics Data System (ADS)
Jordanova, V.; Delzanno, G. L.; Henderson, M. G.; Godinez, H. C.; Jeffery, C. A.; Lawrence, E. C.; Meierbachtol, C.; Moulton, J. D.; Vernon, L.; Woodroffe, J. R.; Brito, T.; Toth, G.; Welling, D. T.; Yu, Y.; Albert, J.; Birn, J.; Borovsky, J.; Denton, M.; Horne, R. B.; Lemon, C.; Markidis, S.; Thomsen, M. F.; Young, S. L.
2016-12-01
Predicting variations in the near-Earth space environment that can lead to spacecraft damage and failure, i.e. "space weather", remains a major challenge in space physics. A project recently funded through the Los Alamos National Laboratory (LANL) Laboratory Directed Research and Development (LDRD) program aims at developing a new capability to understand, model, and predict Space Hazards Induced near Earth by Large Dynamic Storms: the SHIELDS framework. The project goals are to understand the dynamics of the surface charging environment (SCE), the hot (keV) electrons representing the source and seed populations for the radiation belts, on both macro- and microscales. Important physics questions related to rapid particle injection and acceleration associated with magnetospheric storms and substorms, as well as plasma waves, are investigated. These challenging problems are addressed by a team of experts in space science and computational plasma physics using state-of-the-art models and computational facilities. In addition to physics-based models (such as RAM-SCB, BATS-R-US, and iPIC3D), new data assimilation techniques employing data from LANL instruments on the Van Allen Probes and geosynchronous satellites are developed. Simulations with the SHIELDS framework of the near-Earth space environment where operational satellites reside are presented. Further model development and the organization of a "Spacecraft Charging Environment Challenge" by the SHIELDS project at LANL, in collaboration with the NSF Geospace Environment Modeling (GEM) Workshop and the multi-agency Community Coordinated Modeling Center (CCMC), to assess the accuracy of SCE predictions are also discussed.
Graham, David F; Carty, Christopher P; Lloyd, David G; Barrett, Rod S
2017-01-01
The purpose of this study was to determine the muscular contributions to the acceleration of the whole body centre of mass (COM) of older compared to younger adults that were able to recover from forward loss of balance with a single step. Forward loss of balance was achieved by releasing participants (14 older adults and 6 younger adults) from a static whole-body forward lean angle of approximately 18 degrees. 10 older adults and 6 younger adults were able to recover with a single step and included in subsequent analysis. A scalable anatomical model consisting of 36 degrees-of-freedom was used to compute kinematics and joint moments from motion capture and force plate data. Forces for 92 muscle actuators were computed using Static Optimisation and Induced Acceleration Analysis was used to compute individual muscle contributions to the three-dimensional acceleration of the whole body COM. There were no significant differences between older and younger adults in step length, step time, 3D COM accelerations or muscle contributions to 3D COM accelerations. The stance and stepping leg Gastrocnemius and Soleus muscles were primarily responsible for the vertical acceleration experienced by the COM. The Gastrocnemius and Soleus from the stance side leg together with bilateral Hamstrings accelerated the COM forwards throughout balance recovery while the Vasti and Soleus of the stepping side leg provided the majority of braking accelerations following foot contact. The Hip Abductor muscles provided the greatest contribution to medial-lateral accelerations of the COM. Deficits in the neuromuscular control of the Gastrocnemius, Soleus, Vasti and Hip Abductors in particular could adversely influence balance recovery and may be important targets in interventions to improve balance recovery performance.
Graham, David F.; Carty, Christopher P.; Lloyd, David G.
2017-01-01
The purpose of this study was to determine the muscular contributions to the acceleration of the whole body centre of mass (COM) of older compared to younger adults that were able to recover from forward loss of balance with a single step. Forward loss of balance was achieved by releasing participants (14 older adults and 6 younger adults) from a static whole-body forward lean angle of approximately 18 degrees. 10 older adults and 6 younger adults were able to recover with a single step and included in subsequent analysis. A scalable anatomical model consisting of 36 degrees-of-freedom was used to compute kinematics and joint moments from motion capture and force plate data. Forces for 92 muscle actuators were computed using Static Optimisation and Induced Acceleration Analysis was used to compute individual muscle contributions to the three-dimensional acceleration of the whole body COM. There were no significant differences between older and younger adults in step length, step time, 3D COM accelerations or muscle contributions to 3D COM accelerations. The stance and stepping leg Gastrocnemius and Soleus muscles were primarily responsible for the vertical acceleration experienced by the COM. The Gastrocnemius and Soleus from the stance side leg together with bilateral Hamstrings accelerated the COM forwards throughout balance recovery while the Vasti and Soleus of the stepping side leg provided the majority of braking accelerations following foot contact. The Hip Abductor muscles provided the greatest contribution to medial-lateral accelerations of the COM. Deficits in the neuromuscular control of the Gastrocnemius, Soleus, Vasti and Hip Abductors in particular could adversely influence balance recovery and may be important targets in interventions to improve balance recovery performance. PMID:29069097
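Static Optimisation, as used in the study above, distributes a net joint moment across redundant muscle actuators by minimizing a cost such as the sum of squared activations, subject to the moment-balance constraint. The toy sketch below shows the idea with two muscles and one joint; all moment arms, maximum forces, and the target moment are made-up illustrative values, not the study's 92-actuator model.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical single-joint example: two muscles share one net joint moment.
moment_arms = np.array([0.05, 0.03])   # [m] (illustrative values)
f_max = np.array([1500.0, 900.0])      # maximum isometric forces [N] (illustrative)
target_moment = 60.0                   # net joint moment from inverse dynamics [N*m]

def cost(a):
    """Sum of squared activations - a common static-optimisation objective."""
    return np.sum(a**2)

constraints = [{
    "type": "eq",
    # Muscle forces (activation * F_max) times moment arms must reproduce the joint moment.
    "fun": lambda a: np.dot(moment_arms, a * f_max) - target_moment,
}]
bounds = [(0.0, 1.0)] * 2  # activations bounded between 0 and 1

result = minimize(cost, x0=np.array([0.5, 0.5]), bounds=bounds, constraints=constraints)
activations = result.x
muscle_forces = activations * f_max
print("activations:", activations, "forces [N]:", muscle_forces)
```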
On the upscaling of process-based models in deltaic applications
NASA Astrophysics Data System (ADS)
Li, L.; Storms, J. E. A.; Walstra, D. J. R.
2018-03-01
Process-based numerical models are increasingly used to study the evolution of marine and terrestrial depositional environments. Whilst a detailed description of small-scale processes provides an accurate representation of reality, application on geological timescales is constrained by the associated increase in computational time. In order to reduce the computational time, a number of acceleration methods are combined and evaluated for a schematic supply-driven delta (static base level) and an accommodation-driven delta (variable base level). The performance of the combined acceleration methods is evaluated by comparing morphological indicators such as distributary channel networking and delta volumes derived from the model predictions for various levels of acceleration. The results of the accelerated models are compared to the outcomes from a series of simulations to capture autogenic variability. Autogenic variability is quantified by re-running identical models on an initial bathymetry with 1 cm added noise. The overall results show that the variability of the accelerated models falls within the autogenic variability range, suggesting that the application of acceleration methods does not significantly affect the simulated delta evolution. The Time-scale compression method (the acceleration method introduced in this paper) results in an increased computational efficiency of 75% without adversely affecting the simulated delta evolution compared to a base case. The combination of the Time-scale compression method with the existing acceleration methods has the potential to extend the application range of process-based models towards geologic timescales.
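The general idea behind such acceleration methods is to multiply the bed-level change computed over each (short) hydrodynamic step by a factor, so that a long morphological period is represented by a much shorter hydrodynamic simulation. The sketch below is a generic morphological-acceleration illustration under that assumption, with an invented flux-divergence field; it is not an implementation of the paper's Time-scale compression method.

```python
import numpy as np

def accelerated_bed_update(bed, sediment_flux_divergence, dt, morfac, porosity=0.4):
    """Advance bed elevation one hydrodynamic step, scaling the bed change
    by the morphological acceleration factor `morfac` (Exner-type update)."""
    dz = -dt / (1.0 - porosity) * sediment_flux_divergence
    return bed + morfac * dz

# Toy 1D example: an arbitrary flux-divergence field over a 1 km profile.
x = np.linspace(0.0, 1000.0, 101)                 # [m]
bed = np.zeros_like(x)                            # initial flat bed [m]
div_qs = 1e-7 * np.sin(2 * np.pi * x / 1000.0)    # assumed flux divergence [m/s]

dt = 60.0       # hydrodynamic time step [s]
morfac = 100.0  # each hydrodynamic step represents 100 steps of bed evolution
for _ in range(1000):
    bed = accelerated_bed_update(bed, div_qs, dt, morfac)
print("max bed change [m]:", float(np.max(np.abs(bed))))
```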
Sci—Fri PM: Topics — 05: Experience with linac simulation software in a teaching environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlone, Marco; Harnett, Nicole; Jaffray, David
Medical linear accelerator education is usually restricted to use of academic textbooks and supervised access to accelerators. To facilitate the learning process, simulation software was developed to reproduce the effect of medical linear accelerator beam adjustments on resulting clinical photon beams. The purpose of this report is to briefly describe the method of operation of the software as well as the initial experience with it in a teaching environment. To first and higher orders, all components of medical linear accelerators can be described by analytical solutions. When appropriate calibrations are applied, these analytical solutions can accurately simulate the performance of all linear accelerator sub-components. Grouped together, an overall medical linear accelerator model can be constructed. Fifteen expressions in total were coded using MATLAB v 7.14. The program was called SIMAC. The SIMAC program was used in an accelerator technology course offered at our institution; 14 delegates attended the course. The professional breakdown of the participants was: 5 physics residents, 3 accelerator technologists, 4 regulators and 1 physics associate. The course consisted of didactic lectures supported by labs using SIMAC. At the conclusion of the course, eight of thirteen delegates were able to successfully perform advanced beam adjustments after two days of theory and use of the linac simulator program. We suggest that this demonstrates good proficiency in understanding of the accelerator physics, which we hope will translate to a better ability to understand real world beam adjustments on a functioning medical linear accelerator.
Unsteady Aerodynamic Force Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2016-01-01
A simple approach for computing unsteady aerodynamic forces from simulated measured strain data is proposed in this study. First, the deflection and slope of the structure are computed from the unsteady strain using the two-step approach. Velocities and accelerations of the structure are computed using the autoregressive moving average model, on-line parameter estimator, low-pass filter, and a least-squares curve fitting method together with analytical derivatives with respect to time. Finally, aerodynamic forces over the wing are computed using modal aerodynamic influence coefficient matrices, a rational function approximation, and a time-marching algorithm. A cantilevered rectangular wing built and tested at the NASA Langley Research Center (Hampton, Virginia, USA) in 1959 is used to validate the simple approach. Unsteady aerodynamic forces as well as wing deflections, velocities, accelerations, and strains are computed using the CFL3D computational fluid dynamics (CFD) code and an MSC/NASTRAN code (MSC Software Corporation, Newport Beach, California, USA), and these CFL3D-based results are treated as measured quantities. Based on the measured strains, wing deflections, velocities, accelerations, and aerodynamic forces are computed using the proposed approach. These computed deflections, velocities, accelerations, and unsteady aerodynamic forces are compared with the CFL3D/NASTRAN-based results. In general, computed aerodynamic forces based on the lifting surface theory at subsonic speeds are in good agreement with the target aerodynamic forces generated using CFL3D code with the Euler equation. Excellent aeroelastic responses are obtained even with unsteady strain data at a signal-to-noise ratio of -9.8 dB. The deflections, velocities, and accelerations at each sensor location are independent of structural and aerodynamic models. Therefore, the distributed strain data together with the current proposed approaches can be used as distributed deflection, velocity, and acceleration sensors. This research demonstrates the feasibility of obtaining induced drag and lift forces through the use of distributed sensor technology with measured strain data. An active induced drag control system thus can be designed using the two computed aerodynamic forces, induced drag and lift, to improve the fuel efficiency of an aircraft. Interpolation elements between structural finite element grids and the CFD grids and centroids are successfully incorporated with the unsteady aeroelastic computation scheme. The most critical technology for the success of the proposed approach is the robust on-line parameter estimator, since the least-squares curve fitting method depends heavily on aeroelastic system frequencies and damping factors.
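One simple way to obtain smooth velocities and accelerations from sampled deflection (or strain-derived deflection) data is a sliding least-squares polynomial fit differentiated analytically, which a Savitzky-Golay filter implements. The sketch below illustrates that generic step on synthetic data; it is a stand-in for the differentiation stage only, not the paper's ARMA/on-line-estimator pipeline.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic "measured" deflection: a 2 Hz oscillation sampled at 200 Hz with noise.
fs = 200.0
t = np.arange(0.0, 5.0, 1.0 / fs)
deflection = 0.01 * np.sin(2 * np.pi * 2.0 * t) + 1e-4 * np.random.randn(t.size)

# Local least-squares polynomial fit (31-sample window, cubic),
# differentiated analytically to give velocity and acceleration.
window, order = 31, 3
velocity = savgol_filter(deflection, window, order, deriv=1, delta=1.0 / fs)
acceleration = savgol_filter(deflection, window, order, deriv=2, delta=1.0 / fs)

# Compare against the analytic second derivative of the noise-free signal.
true_acc = -0.01 * (2 * np.pi * 2.0) ** 2 * np.sin(2 * np.pi * 2.0 * t)
rms_err = np.sqrt(np.mean((acceleration - true_acc) ** 2))
print(f"RMS acceleration error: {rms_err:.3e} m/s^2")
```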
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katsouleas, Thomas; Decyk, Viktor
Final Report for grant DE-FG02-06ER54888, "Simulation of Beam-Electron Cloud Interactions in Circular Accelerators Using Plasma Models" Viktor K. Decyk, University of California, Los Angeles Los Angeles, CA 90095-1547 The primary goal of this collaborative proposal was to modify the code QuickPIC and apply it to study the long-time stability of beam propagation in low density electron clouds present in circular accelerators. The UCLA contribution to this collaborative proposal was in supporting the development of the pipelining scheme for the QuickPIC code, which extended the parallel scaling of this code by two orders of magnitude. The USC work described here was the PhD research of Ms. Bing Feng, lead author of reference [2] below, who performed the research at USC under the guidance of the PI Tom Katsouleas and in collaboration with Dr. Decyk. The QuickPIC code [1] is a multi-scale Particle-in-Cell (PIC) code. The outer 3D code contains a beam which propagates through a long region of plasma and evolves slowly. The plasma response to this beam is modeled by slices of a 2D plasma code. This plasma response is then fed back to the beam code, and the process repeats. The pipelining is based on the observation that once the beam has passed a 2D slice, its response can be fed back to the beam immediately without waiting for the beam to pass all the other slices. Thus independent blocks of 2D slices from different time steps can be running simultaneously. The major difficulty arose when particles at the edges needed to communicate with other blocks. Two versions of the pipelining scheme were developed: one for the full quasi-static code and the other for the basic quasi-static code used by this e-cloud proposal. Details of the pipelining scheme were published in [2]. The new version of QuickPIC was able to run with more than 1,000 processors, and was successfully applied in modeling e-clouds by our collaborators in this proposal [3-8]. Jean-Luc Vay at Lawrence Berkeley National Lab later implemented a similar basic quasistatic scheme including pipelining in the code WARP [9] and found good to very good quantitative agreement between the two codes in modeling e-clouds. References [1] C. Huang, V. K. Decyk, C. Ren, M. Zhou, W. Lu, W. B. Mori, J. H. Cooley, T. M. Antonsen, Jr., and T. Katsouleas, "QUICKPIC: A highly efficient particle-in-cell code for modeling wakefield acceleration in plasmas," J. Computational Phys. 217, 658 (2006). [2] B. Feng, C. Huang, V. K. Decyk, W. B. Mori, P. Muggli, and T. Katsouleas, "Enhancing parallel quasi-static particle-in-cell simulations with a pipelining algorithm," J. Computational Phys, 228, 5430 (2009). [3] C. Huang, V. K. Decyk, M. Zhou, W. Lu, W. B. Mori, J. H. Cooley, T. M. Antonsen, Jr., and B. Feng, T. Katsouleas, J. Vieira, and L. O. Silva, "QUICKPIC: A highly efficient fully parallelized PIC code for plasma-based acceleration," Proc. of the SciDAC 2006 Conf., Denver, Colorado, June, 2006 [Journal of Physics: Conference Series, W. M. Tang, Editor, vol. 46, Institute of Physics, Bristol and Philadelphia, 2006], p. 190. [4] B. Feng, C. Huang, V. Decyk, W. B. Mori, T. Katsouleas, P. Muggli, "Enhancing Plasma Wakefield and E-cloud Simulation Performance Using a Pipelining Algorithm," Proc. 12th Workshop on Advanced Accelerator Concepts, Lake Geneva, WI, July, 2006, p. 201 [AIP Conf. Proceedings, vol. 877, Melville, NY, 2006]. [5] B. Feng, P. Muggli, T. Katsouleas, V. Decyk, C. Huang, and W. 
Mori, "Long Time Electron Cloud Instability Simulation Using QuickPIC with Pipelining Algorithm," Proc. of the 2007 Particle Accelerator Conference, Albuquerque, NM, June, 2007, p. 3615. [6] B. Feng, C. Huang, V. Decyk, W. B. Mori, G. H. Hoffstaetter, P. Muggli, T. Katsouleas, "Simulation of Electron Cloud Effects on Electron Beam at ERL with Pipelined QuickPIC," Proc. 13th Workshop on Advanced Accelerator Concepts, Santa Cruz, CA, July-August, 2008, p. 340 [AIP Conf. Proceedings, vol. 1086, Melville, NY, 2008]. [7] B. Feng, C. Huang, V. K. Decyk, W. B. Mori, P. Muggli, and T. Katsouleas, "Enhancing parallel quasi-static particle-in-cell simulations with a pipelining algorithm," J. Computational Phys, 228, 5430 (2009). [8] C. Huang, W. An, V. K. Decyk, W. Lu, W. B. Mori, F. S. Tsung, M. Tzoufras, S. Morshed, T. Antonsen, B. Feng, T. Katsouleas, R., A. Fonseca, S. F. Martins, J. Vieira, L. O. Silva, E. Esarey, C. G. R. Geddes, W. P. Leemans, E. Cormier-Michel, J.-L. Vay, D. L. Bruhwiler, B. Cowan, J. R. Cary, and K. Paul, "Recent results and future challenges for large scale particleion- cell simulations of plasma-based accelerator concepts," Proc. of the SciDAC 2009 Conf., San Diego, CA, June, 2009 [Journal of Physics: Conference Series, vol. 180, Institute of Physics, Bristol and Philadelphia, 2009], p. 012005. [9] J.-L. Vay, C. M. Celata, M. A. Furman, G. Penn, M. Venturini, D. P. Grote, and K. G. Sonnad, ?Update on Electron-Cloud Simulations Using the Package WARP-POSINST.? Proc. of the 2009 Particle Accelerator Conference PAC09, Vancouver, Canada, June, 2009, paper FR5RFP078.« less
Classification accuracies of physical activities using smartphone motion sensors.
Wu, Wanmin; Dasgupta, Sanjoy; Ramirez, Ernesto E; Peterson, Carlyn; Norman, Gregory J
2012-10-05
Over the past few years, the world has witnessed an unprecedented growth in smartphone use. With sensors such as accelerometers and gyroscopes on board, smartphones have the potential to enhance our understanding of health behavior, in particular physical activity or the lack thereof. However, reliable and valid activity measurement using only a smartphone in situ has not been realized. To examine the validity of the iPod Touch (Apple, Inc.) and particularly to understand the value of using gyroscopes for classifying types of physical activity, with the goal of creating a measurement and feedback system that easily integrates into individuals' daily living. We collected accelerometer and gyroscope data for 16 participants on 13 activities with an iPod Touch, a device that has essentially the same sensors and computing platform as an iPhone. The 13 activities were sitting, walking, jogging, and going upstairs and downstairs at different paces. We extracted time and frequency features, including mean and variance of acceleration and gyroscope on each axis, vector magnitude of acceleration, and fast Fourier transform magnitude for each axis of acceleration. Different classifiers were compared using the Waikato Environment for Knowledge Analysis (WEKA) toolkit, including C4.5 (J48) decision tree, multilayer perception, naive Bayes, logistic, k-nearest neighbor (kNN), and meta-algorithms such as boosting and bagging. The 10-fold cross-validation protocol was used. Overall, the kNN classifier achieved the best accuracies: 52.3%-79.4% for up and down stair walking, 91.7% for jogging, 90.1%-94.1% for walking on a level ground, and 100% for sitting. A 2-second sliding window size with a 1-second overlap worked the best. Adding gyroscope measurements proved to be more beneficial than relying solely on accelerometer readings for all activities (with improvement ranging from 3.1% to 13.4%). Common categories of physical activity and sedentary behavior (walking, jogging, and sitting) can be recognized with high accuracies using both the accelerometer and gyroscope onboard the iPod touch or iPhone. This suggests the potential of developing just-in-time classification and feedback tools on smartphones.
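A minimal version of the feature pipeline described above (2-second windows with 1-second overlap; per-axis mean and variance from both sensors, acceleration vector magnitude, and per-axis FFT magnitude of the acceleration; a k-nearest neighbor classifier with 10-fold cross-validation) can be sketched as below. The array shapes, the sampling rate, and the synthetic labels are assumptions for illustration; this is not the authors' WEKA setup.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

FS = 30  # assumed sampling rate [Hz]

def window_features(accel, gyro, fs=FS, win_s=2.0, overlap_s=1.0):
    """Slice (N, 3) accelerometer and gyroscope streams into overlapping
    windows and return one feature vector per window."""
    win, step = int(win_s * fs), int((win_s - overlap_s) * fs)
    feats = []
    for start in range(0, len(accel) - win + 1, step):
        a, g = accel[start:start + win], gyro[start:start + win]
        vec = np.concatenate([
            a.mean(axis=0), a.var(axis=0),                 # per-axis acceleration stats
            g.mean(axis=0), g.var(axis=0),                 # per-axis gyroscope stats
            [np.linalg.norm(a, axis=1).mean()],            # mean acceleration vector magnitude
            np.abs(np.fft.rfft(a, axis=0)).mean(axis=0),   # FFT magnitude per acceleration axis
        ])
        feats.append(vec)
    return np.array(feats)

# Synthetic stand-in data: two "activities" with different noise levels.
rng = np.random.default_rng(0)
accel_a, gyro_a = rng.normal(0, 0.1, (600, 3)), rng.normal(0, 0.1, (600, 3))
accel_b, gyro_b = rng.normal(0, 1.0, (600, 3)), rng.normal(0, 1.0, (600, 3))
X = np.vstack([window_features(accel_a, gyro_a), window_features(accel_b, gyro_b)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))

# kNN with 10-fold cross-validation, mirroring the protocol in the abstract.
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=10)
print(f"mean accuracy: {scores.mean():.2f}")
```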
Bakrania, Kishan; Yates, Thomas; Rowlands, Alex V.; Esliger, Dale W.; Bunnewell, Sarah; Sanders, James; Davies, Melanie; Khunti, Kamlesh; Edwardson, Charlotte L.
2016-01-01
Objectives (1) To develop and internally-validate Euclidean Norm Minus One (ENMO) and Mean Amplitude Deviation (MAD) thresholds for separating sedentary behaviours from common light-intensity physical activities using raw acceleration data collected from both hip- and wrist-worn tri-axial accelerometers; and (2) to compare and evaluate the performances between the ENMO and MAD metrics. Methods Thirty-three adults [mean age (standard deviation (SD)) = 27.4 (5.9) years; mean BMI (SD) = 23.9 (3.7) kg/m2; 20 females (60.6%)] wore four accelerometers; an ActiGraph GT3X+ and a GENEActiv on the right hip; and an ActiGraph GT3X+ and a GENEActiv on the non-dominant wrist. Under laboratory-conditions, participants performed 16 different activities (11 sedentary behaviours and 5 light-intensity physical activities) for 5 minutes each. ENMO and MAD were computed from the raw acceleration data, and logistic regression and receiver-operating-characteristic (ROC) analyses were implemented to derive thresholds for activity discrimination. Areas under ROC curves (AUROC) were calculated to summarise performances and thresholds were assessed via executing leave-one-out-cross-validations. Results For both hip and wrist monitor placements, in comparison to the ActiGraph GT3X+ monitors, the ENMO and MAD values derived from the GENEActiv devices were observed to be slightly higher, particularly for the lower-intensity activities. Monitor-specific hip and wrist ENMO and MAD thresholds showed excellent ability for separating sedentary behaviours from motion-based light-intensity physical activities (in general, AUROCs >0.95), with validation indicating robustness. However, poor classification was experienced when attempting to isolate standing still from sedentary behaviours (in general, AUROCs <0.65). The ENMO and MAD metrics tended to perform similarly across activities and accelerometer brands. Conclusions Researchers can utilise these robust monitor-specific hip and wrist ENMO and MAD thresholds, in order to accurately separate sedentary behaviours from common motion-based light-intensity physical activities. However, caution should be taken if isolating sedentary behaviours from standing is of particular interest. PMID:27706241
Bakrania, Kishan; Yates, Thomas; Rowlands, Alex V; Esliger, Dale W; Bunnewell, Sarah; Sanders, James; Davies, Melanie; Khunti, Kamlesh; Edwardson, Charlotte L
2016-01-01
(1) To develop and internally-validate Euclidean Norm Minus One (ENMO) and Mean Amplitude Deviation (MAD) thresholds for separating sedentary behaviours from common light-intensity physical activities using raw acceleration data collected from both hip- and wrist-worn tri-axial accelerometers; and (2) to compare and evaluate the performances between the ENMO and MAD metrics. Thirty-three adults [mean age (standard deviation (SD)) = 27.4 (5.9) years; mean BMI (SD) = 23.9 (3.7) kg/m2; 20 females (60.6%)] wore four accelerometers; an ActiGraph GT3X+ and a GENEActiv on the right hip; and an ActiGraph GT3X+ and a GENEActiv on the non-dominant wrist. Under laboratory-conditions, participants performed 16 different activities (11 sedentary behaviours and 5 light-intensity physical activities) for 5 minutes each. ENMO and MAD were computed from the raw acceleration data, and logistic regression and receiver-operating-characteristic (ROC) analyses were implemented to derive thresholds for activity discrimination. Areas under ROC curves (AUROC) were calculated to summarise performances and thresholds were assessed via executing leave-one-out-cross-validations. For both hip and wrist monitor placements, in comparison to the ActiGraph GT3X+ monitors, the ENMO and MAD values derived from the GENEActiv devices were observed to be slightly higher, particularly for the lower-intensity activities. Monitor-specific hip and wrist ENMO and MAD thresholds showed excellent ability for separating sedentary behaviours from motion-based light-intensity physical activities (in general, AUROCs >0.95), with validation indicating robustness. However, poor classification was experienced when attempting to isolate standing still from sedentary behaviours (in general, AUROCs <0.65). The ENMO and MAD metrics tended to perform similarly across activities and accelerometer brands. Researchers can utilise these robust monitor-specific hip and wrist ENMO and MAD thresholds, in order to accurately separate sedentary behaviours from common motion-based light-intensity physical activities. However, caution should be taken if isolating sedentary behaviours from standing is of particular interest.
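The two metrics named above have simple definitions on the raw tri-axial signal: ENMO is the acceleration vector magnitude minus 1 g, truncated at zero, and MAD is the mean absolute deviation of the vector magnitude about its epoch mean. A minimal sketch follows; the epoch length and sampling rate are assumptions, and the thresholds themselves would be the monitor-specific values reported by the authors, which are not reproduced here.

```python
import numpy as np

def enmo(acc_g):
    """Euclidean Norm Minus One per sample, in g; negative values truncated to zero."""
    r = np.linalg.norm(acc_g, axis=1)
    return np.maximum(r - 1.0, 0.0)

def mad(acc_g):
    """Mean Amplitude Deviation of the vector magnitude over one epoch, in g."""
    r = np.linalg.norm(acc_g, axis=1)
    return np.mean(np.abs(r - r.mean()))

def epoch_summaries(acc_g, fs=100, epoch_s=5):
    """Split a raw (N, 3) recording in g units into epochs and return
    (mean ENMO, MAD) per epoch, ready to compare against chosen thresholds."""
    n = int(fs * epoch_s)
    epochs = [acc_g[i:i + n] for i in range(0, len(acc_g) - n + 1, n)]
    return np.array([(enmo(e).mean(), mad(e)) for e in epochs])

# Synthetic example: quiet sitting (gravity plus small noise) for 10 seconds.
rng = np.random.default_rng(1)
acc = np.tile([0.0, 0.0, 1.0], (1000, 1)) + rng.normal(0, 0.005, (1000, 3))
print(epoch_summaries(acc))
```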
The conversion of CESR to operate as the Test Accelerator, CesrTA. Part 1: overview
NASA Astrophysics Data System (ADS)
Billing, M. G.
2015-07-01
Cornell's electron/positron storage ring (CESR) was modified over a series of accelerator shutdowns beginning in May 2008, which substantially improved its capability for research and development for particle accelerators. CESR's energy span from 1.8 to 5.6 GeV with both electrons and positrons makes it ideal for the study of a wide spectrum of accelerator physics issues and instrumentation related to present light sources and future lepton damping rings. Additionally, a number of these are also relevant for the beam physics of proton accelerators. This paper outlines the motivation, design and conversion of CESR to a test accelerator, CesrTA, enhanced to study such subjects as low emittance tuning methods, electron cloud (EC) effects, intra-beam scattering, fast ion instabilities as well as general improvements to beam instrumentation. While the initial studies of CesrTA focussed on questions related to the International Linear Collider (ILC) damping ring design, CesrTA is a very flexible storage ring, capable of studying a wide range of accelerator physics and instrumentation questions. This paper contains the outline and the basis for a set of papers documenting the reconfiguration of the storage ring and the associated instrumentation required for the studies described above. Further details may be found in these papers.
Optimizations of Human Restraint Systems for Short-Period Acceleration
NASA Technical Reports Server (NTRS)
Payne, P. R.
1963-01-01
A restraint system's main function is to restrain its occupant when his vehicle is subjected to acceleration. If the restraint system is rigid and well-fitting (to eliminate slack) then it will transmit the vehicle acceleration to its occupant without modifying it in any way. Few present-day restraint systems are stiff enough to give this one-to-one transmission characteristic, and depending upon their dynamic characteristics and the nature of the vehicle's acceleration-time history, they will either magnify or attenuate the acceleration. Obviously an optimum restraint system will give maximum attenuation of an input acceleration. In the general case of an arbitrary acceleration input, a computer must be used to determine the optimum dynamic characteristics for the restraint system. Analytical solutions can be obtained for certain simple cases, however, and these cases are considered in this paper, after the concept of dynamic models of the human body is introduced. The paper concludes with a description of an analog computer specially developed for the Air Force to handle completely general mechanical restraint optimization programs of this type, where the acceleration input may be any arbitrary function of time.
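A common starting point for the analytical cases mentioned above is a single-degree-of-freedom occupant model: a mass coupled to the vehicle through the restraint's stiffness and damping, driven by the vehicle's acceleration-time history. The sketch below numerically integrates that model for a half-sine crash pulse (a modern stand-in for the analog computation described in the paper); all parameter values are illustrative assumptions, not data from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative occupant/restraint parameters.
m = 75.0     # occupant mass [kg]
k = 6.0e4    # restraint stiffness [N/m]
c = 2.0e3    # restraint damping [N*s/m]

def vehicle_accel(t, peak=150.0, duration=0.1):
    """Half-sine vehicle acceleration pulse [m/s^2]."""
    return peak * np.sin(np.pi * t / duration) if 0.0 <= t <= duration else 0.0

def relative_motion(t, y):
    """State y = [x, v]: occupant displacement and velocity relative to the vehicle.
    Equation of motion: m*x'' + c*x' + k*x = -m*a_vehicle(t)."""
    x, v = y
    return [v, -(c * v + k * x) / m - vehicle_accel(t)]

sol = solve_ivp(relative_motion, (0.0, 0.4), [0.0, 0.0], max_step=1e-3)
x, v = sol.y
occupant_accel = -(c * v + k * x) / m  # acceleration transmitted to the occupant
peak_input = 150.0
print(f"transmission ratio (peak occupant / peak vehicle): "
      f"{np.max(np.abs(occupant_accel)) / peak_input:.2f}")
```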
Evolutionary optimization methods for accelerator design
NASA Astrophysics Data System (ADS)
Poklonskiy, Alexey A.
Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as: ease of implementation, modest requirements on the objective function, a good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe GATool, the evolutionary algorithm and software package used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on the problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained optimization test problems for EA with a variety of different configurations and suggest optimal default parameter values based on the results. Then we study the performance of the REPA method on the same set of test problems and compare the obtained results with those of several commonly used constrained optimization methods with EA. Based on the obtained results, particularly on the outstanding performance of REPA on a test problem that presents significant difficulty for other reviewed EAs, we conclude that the proposed method is useful and competitive. We discuss REPA parameter tuning for difficult problems and critically review some of the problems from the de-facto standard test problem set for the constrained optimization with EA. In order to demonstrate the practical usefulness of the developed method, we study several problems of accelerator design and demonstrate how they can be solved with EAs. These problems include a simple accelerator design problem (design a quadrupole triplet to be stigmatically imaging, find all possible solutions), a complex real-life accelerator design problem (an optimization of the front end section for the future neutrino factory), and a problem of the normal form defect function optimization which is used to rigorously estimate the stability of the beam dynamics in circular accelerators. The positive results we obtained suggest that the application of EAs to problems from accelerator theory can be very beneficial and has large potential.
The developed optimization scenarios and tools can be used to approach similar problems.
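For readers unfamiliar with this family of methods, the core loop of a real-coded evolutionary algorithm is small: evaluate a population, select the better members, recombine and mutate them, and repeat. The sketch below is a generic illustration of that loop on a standard test function; it is not GATool, REPA, or any of the accelerator design problems studied in the dissertation.

```python
import numpy as np

def sphere(x):
    """Simple unconstrained test objective: minimum 0 at the origin."""
    return np.sum(x**2, axis=-1)

def evolve(objective, dim=5, pop_size=40, generations=200,
           mutation_sigma=0.1, bounds=(-5.0, 5.0), seed=0):
    """Minimal real-coded evolutionary algorithm with truncation selection,
    uniform crossover, and Gaussian mutation."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(*bounds, size=(pop_size, dim))
    for _ in range(generations):
        fitness = objective(pop)
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # keep the better half
        # Uniform crossover between randomly paired parents.
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, dim)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        # Gaussian mutation, clipped to the search domain.
        children += rng.normal(0.0, mutation_sigma, children.shape)
        pop = np.clip(children, *bounds)
    best = pop[np.argmin(objective(pop))]
    return best, float(objective(best))

best_x, best_f = evolve(sphere)
print("best objective value:", best_f)
```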
Design and Flight Tests of an Adaptive Control System Employing Normal-Acceleration Command
NASA Technical Reports Server (NTRS)
McNeill, Water E.; McLean, John D.; Hegarty, Daniel M.; Heinle, Donovan R.
1961-01-01
An adaptive control system employing normal-acceleration command has been designed with the aid of an analog computer and has been flight tested. The design of the system was based on the concept of using a mathematical model in combination with a high gain and a limiter. The study was undertaken to investigate the application of a system of this type to the task of maintaining nearly constant dynamic longitudinal response of a piloted airplane over the flight envelope without relying on air data measurements for gain adjustment. The range of flight conditions investigated was between Mach numbers of 0.36 and 1.15 and altitudes of 10,000 and 40,000 feet. The final adaptive system configuration was derived from analog computer tests, in which the physical airplane control system and much of the control circuitry were included in the loop. The method employed to generate the feedback signals resulted in a model whose characteristics varied somewhat with changes in flight condition. Flight results showed that the system limited the variation in longitudinal natural frequency of the adaptive airplane to about half that of the basic airplane and that, for the subsonic cases, the damping ratio was maintained between 0.56 and 0.69. The system also automatically compensated for the transonic trim change. Objectionable features of the system were an exaggerated sensitivity of pitch attitude to gust disturbances, abnormally large pitch attitude response for a given pilot input at low speeds, and an initial delay in normal-acceleration response to pilot control at all flight conditions. The adaptive system chatter of ±0.05 to ±0.10 of elevon at about 9 cycles per second (resulting in a maximum airplane normal-acceleration response of ±0.025 g to ±0.035 g) was considered by the pilots to be mildly objectionable but tolerable.
Simulation-based MDP verification for leading-edge masks
NASA Astrophysics Data System (ADS)
Su, Bo; Syrel, Oleg; Pomerantsev, Michael; Hagiwara, Kazuyuki; Pearman, Ryan; Pang, Leo; Fujimura, Aki
2017-07-01
For IC design starts below the 20nm technology node, the assist features on photomasks shrink well below 60nm and the printed patterns of those features on masks written by VSB eBeam writers start to show a large deviation from the mask designs. Traditional geometry-based fracturing starts to show large errors for those small features. As a result, other mask data preparation (MDP) methods have become available and adopted, such as rule-based Mask Process Correction (MPC), model-based MPC and eventually model-based MDP. The new MDP methods may place shot edges slightly differently from target to compensate for mask process effects, so that the final patterns on a mask are much closer to the design (which can be viewed as the ideal mask), especially for those assist features. Such an alteration generally produces better masks that are closer to the intended mask design. Traditional XOR-based MDP verification cannot detect problems caused by eBeam effects. Much like model-based OPC verification which became a necessity for OPC a decade ago, we see the same trend in MDP today. Simulation-based MDP verification solution requires a GPU-accelerated computational geometry engine with simulation capabilities. To have a meaningful simulation-based mask check, a good mask process model is needed. The TrueModel® system is a field tested physical mask model developed by D2S. The GPU-accelerated D2S Computational Design Platform (CDP) is used to run simulation-based mask check, as well as model-based MDP. In addition to simulation-based checks such as mask EPE or dose margin, geometry-based rules are also available to detect quality issues such as slivers or CD splits. Dose margin related hotspots can also be detected by setting a correct detection threshold. In this paper, we will demonstrate GPU-acceleration for geometry processing, and give examples of mask check results and performance data. GPU-acceleration is necessary to make simulation-based mask MDP verification acceptable.
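An edge-placement-error (EPE) check of the kind described reduces, in one dimension, to locating where the simulated profile crosses a print threshold and comparing that position with the target edge. The sketch below shows that comparison on a synthetic rising edge; it is a schematic illustration under assumed values, not the D2S CDP or TrueModel system.

```python
import numpy as np

def edge_position(x, intensity, threshold):
    """Return the x position where a monotonically rising profile crosses the threshold."""
    i = np.argmax(intensity >= threshold)
    # Linear interpolation between the bracketing samples.
    x0, x1 = x[i - 1], x[i]
    y0, y1 = intensity[i - 1], intensity[i]
    return x0 + (threshold - y0) * (x1 - x0) / (y1 - y0)

# Synthetic rising edge: target edge at x = 0 nm, simulated profile slightly shifted.
x = np.linspace(-50.0, 50.0, 501)                      # [nm]
profile = 0.5 * (1.0 + np.tanh((x - 1.8) / 10.0))      # assumed simulated edge profile
target_edge = 0.0

epe = edge_position(x, profile, threshold=0.5) - target_edge
print(f"EPE = {epe:.2f} nm", "-> flag" if abs(epe) > 1.0 else "-> pass")
```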
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spentzouris, Linda
The objective of the proposal was to develop graduate student training in materials and engineering research relevant to the development of particle accelerators. Many components used in today's accelerators or storage rings are at the limit of performance. The path forward in many cases requires the development of new materials or fabrication techniques, or a novel engineering approach. Often, accelerator-based laboratories find it difficult to get top-level engineers or materials experts with the motivation to work on these problems. The three years of funding provided by this grant was used to support development of accelerator components through a multidisciplinary approach that cut across the disciplinary boundaries of accelerator physics, materials science, and surface chemistry. The following results were achieved: (1) significant scientific results on fabrication of novel photocathodes, (2) application of surface science and superconducting materials expertise to accelerator problems through faculty involvement, (3) development of instrumentation for fabrication and characterization of materials for accelerator components, (4) student involvement with problems at the interface of material science and accelerator physics.
Seryi, Andrei
2017-12-22
Plasma wakefield acceleration is one of the most promising approaches to advancing accelerator technology. This approach offers a potential 1,000-fold or more increase in acceleration over a given distance, compared to existing accelerators. FACET, enabled by the Recovery Act funds, will study plasma acceleration, using short, intense pulses of electrons and positrons. In this lecture, the physics of plasma acceleration and features of FACET will be presented.
Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, W Michael; Wang, Peng; Plimpton, Steven J
The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines - 1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, 2) minimizing the amount of code that must be ported for efficient acceleration, 3) utilizing the available processing power from both many-core CPUs and accelerators, and 4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.
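Point 3 above, splitting work between CPU cores and an accelerator, is often handled by timing both sides and adjusting the fraction of particles assigned to the accelerator each step. The sketch below shows that feedback rule in isolation, with fake timing numbers; it is a generic illustration of dynamic load balancing, not the LAMMPS/Geryon implementation.

```python
def rebalance(gpu_fraction, t_gpu, t_cpu, relaxation=0.5):
    """Update the fraction of work sent to the accelerator so that the
    measured GPU and CPU times converge (simple proportional feedback)."""
    # Per-unit-work cost on each device, inferred from the last step's timings.
    cost_gpu = t_gpu / max(gpu_fraction, 1e-9)
    cost_cpu = t_cpu / max(1.0 - gpu_fraction, 1e-9)
    # Fraction that would equalize the two times if costs stayed constant.
    target = cost_cpu / (cost_cpu + cost_gpu)
    return gpu_fraction + relaxation * (target - gpu_fraction)

# Toy model: the GPU partition is 4x faster per particle than the CPU partition.
def fake_times(fraction, n_particles=1_000_000, gpu_rate=4e6, cpu_rate=1e6):
    return fraction * n_particles / gpu_rate, (1 - fraction) * n_particles / cpu_rate

fraction = 0.5
for step in range(10):
    t_gpu, t_cpu = fake_times(fraction)
    fraction = rebalance(fraction, t_gpu, t_cpu)
print(f"converged GPU work fraction: {fraction:.2f}")  # ~0.8 for a 4x faster GPU
```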
Summary of the Physics Opportunities Working Group
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Pisin; McDonald, K.T.
The Physics Opportunities Working Group was convened with the rather general mandate to explore physics opportunities that may arise as new accelerator technologies and facilities come into play. Five topics were considered during the workshop: QED at critical field strength, novel positron sources, crystal accelerators, suppression of beamstrahlung, and muon colliders. Of particular interest was the sense that a high energy muon collider might be technically feasible and certainly deserves serious study.
Summary of the Physics Opportunities Working Group
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Pisin; McDonald, K.T.
1992-12-01
The Physics Opportunities Working Group was convened with the rather general mandate to explore physics opportunities that may arise as new accelerator technologies and facilities come into play. Five topics were considered during the workshop: QED at critical field strength, novel positron sources, crystal accelerators, suppression of beamstrahlung, and muon colliders. Of particular interest was the sense that a high energy muon collider might be technically feasible and certainly deserves serious study.
Which Accelerates Faster--A Falling Ball or a Porsche?
ERIC Educational Resources Information Center
Rall, James D.; Abdul-Razzaq, Wathiq
2012-01-01
An introductory physics experiment has been developed to address the issues seen in conventional physics lab classes including assumption verification, technological dependencies, and real world motivation for the experiment. The experiment has little technology dependence and compares the acceleration due to gravity by using position versus time…
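The data analysis implied by the abstract, extracting g from position-versus-time measurements of a falling ball, amounts to a quadratic fit. A minimal sketch with made-up sample data (not data from the experiment described):

```python
import numpy as np

# Hypothetical position-vs-time data for a dropped ball (positions in metres).
t = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25])
y = np.array([0.000, 0.013, 0.050, 0.111, 0.197, 0.308])

# Fit y = 0.5*g*t^2 + v0*t + y0; the leading coefficient is g/2.
coeffs = np.polyfit(t, y, deg=2)
g_measured = 2.0 * coeffs[0]
print(f"measured g ~ {g_measured:.2f} m/s^2")
```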
Teaching Physics with Basketball
NASA Astrophysics Data System (ADS)
Chanpichai, N.; Wattanakasiwich, P.
2010-07-01
Recently, technology and computers have taken on important roles in learning and teaching, including physics. Advances in technology can help us better relate physics taught in the classroom to the real world. In this study, we developed a module on teaching projectile motion through shooting a basketball. Students learned about the physics of projectile motion, and then they took videos of their classmates shooting a basketball using a high-speed camera. They then analyzed the videos using Tracker, a video analysis and modeling tool. While working with Tracker, students learned about the relationships between the three kinematics graphs. Moreover, they learned about real projectile motion (with air resistance) through modeling tools. Students' abilities to interpret kinematics graphs were investigated before and after the instruction by using the Test of Understanding Graphs in Kinematics (TUG-K). The maximum normalized gain or
Essay: In Memory of Robert Siemann
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, Alexander W.; /SLAC
Bob Siemann came to SLAC from Cornell in 1991. With the support from Burton Richter, then Director of SLAC, he took on a leadership role to formulate an academic program in accelerator physics at SLAC and the development of its accelerator faculty. Throughout his career he championed accelerator physics as an independent academic discipline, a vision that he fought so hard for and never retreated from. He convinced Stanford University and SLAC to create a line of tenured accelerator physics faculty and over the years he also regularly taught classes at Stanford and the U.S. Particle Accelerator School. After the shutdown of the SSC Laboratory, I returned to SLAC in 1993 to join the accelerator faculty he was forming. He had always visualized a need to have a professional academic journal for the accelerator field, and played a pivotal role in creating the journal Physical Review Special Topics - Accelerators and Beams, now the community standard for accelerator physics after nine years of his editorship. Today, Bob's legacy of accelerator physics as an independent academic discipline continues at SLAC as well as in the community, from which we all benefit. Bob was a great experimentalist. He specialized in experimental techniques and instrumentation, but what he wanted to learn is physics. If he had to learn theory - heaven forbid - to reach that goal, he would not hesitate one second to do so. In fact, he had written several theoretical papers as results of these efforts. Now this is what I call a true experimentalist! Ultimately, however, I think it was experimental instruments that he loved most. His eyes widened when he talked about his instruments. Prompted by a question, he would proceed to a nearby blackboard, with a satisfying grin, and draw his experimental device in a careful thinking manner, then describe his experiment and educate the questioner with some insightful physics. These moments were most enjoyable, to him and the questioner alike. When I think of Bob today, it is these moments that first come to mind, and it is these moments I will miss the most. I should like to mention another curious thing about Bob, namely he had a special talent of finding persuasive arguments that went his way. It was difficult to argue with Bob because it was so difficult to win. Generally quiet otherwise, he was too good and too methodical a debater. I had never seen him losing a debate on a policy issue or in a committee setting. However, when it comes to physics, his soft spot, he occasionally let go some weakness. When so doing, he would lose the debate, but his grin revealed that the loss was more than compensated by the physics he gained together with his debater. It is hard to believe that the office around the corner is now empty. The dear colleague we have come to know, to talk to, and to seek advice from, together with the feet-on-the-desk posture and the familiar grin, are no longer there. I wonder, who will now occupy that office next? And who will continue to carry on Bob Siemann's legacy? Many of us are waiting.
NASA Astrophysics Data System (ADS)
George, Michael G.
Characterization of gas diffusion layers (GDLs) for polymer electrolyte membrane (PEM) fuel cells informs modeling studies and the manufacturers of next generation fuel cell materials. Identifying the physical properties related to the primary functions of the modern GDL (thermal, electrical, and mass transport) is necessary for understanding the impact of GDL design choices. X-ray micro-computed tomographic reconstructions of GDLs were studied to isolate GDL surface morphologies. Surface roughness was measured for a wide variety of samples and a sensitivity study highlighted the scale-dependence of surface roughness measurements. Furthermore, a spatially resolved distribution map of polytetrafluoroethylene (PTFE) in the microporous layer (MPL), critical for water management and mass transport, was identified and the existence of PTFE agglomerations was highlighted. Finally, the impact of accelerated GDL degradation on wettability and water transport was quantified, showing increases in liquid water accumulation and oxygen mass transport resistance.
A redshift survey of IRAS galaxies. V - The acceleration on the Local Group
NASA Technical Reports Server (NTRS)
Strauss, Michael A.; Yahil, Amos; Davis, Marc; Huchra, John P.; Fisher, Karl
1992-01-01
The acceleration on the Local Group is calculated based on a full-sky redshift survey of 5288 galaxies detected by IRAS. A formalism is developed to compute the distribution function of the IRAS acceleration for a given power spectrum of initial perturbations. The computed acceleration on the Local Group points 18-28 deg from the direction of the Local Group peculiar velocity vector. The data suggest that the CMB dipole is indeed due to the motion of the Local Group, that this motion is gravitationally induced, and that the distribution of IRAS galaxies on large scales is related to that of dark matter by a simple linear biasing model.
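In linear theory, the gravitational acceleration on the Local Group can be estimated from a flux-limited survey by summing inverse-square contributions from each galaxy, weighting each by the inverse of the survey selection function to correct for galaxies missed at large distance. The sketch below shows that sum schematically; the positions, weights, and selection function are invented placeholders, not the IRAS catalogue or the formalism of the paper.

```python
import numpy as np

def acceleration_direction(positions_mpc, selection, weights=None):
    """Direction of the linear-theory acceleration at the origin:
    g is proportional to sum_i  w_i / phi(r_i) * r_hat_i / r_i^2,
    where phi is the survey selection function."""
    r = np.linalg.norm(positions_mpc, axis=1)
    w = np.ones(len(r)) if weights is None else weights
    contrib = (w / selection(r))[:, None] * positions_mpc / r[:, None] ** 3
    g = contrib.sum(axis=0)
    return g / np.linalg.norm(g)  # unit vector toward the net acceleration

# Invented toy catalogue and selection function, purely for illustration.
rng = np.random.default_rng(2)
positions = rng.normal(0.0, 50.0, size=(5000, 3))   # galaxy positions [Mpc]

def phi(r):
    """Assumed fraction of galaxies detected at distance r [Mpc]."""
    return np.exp(-r / 100.0)

print("acceleration direction (unit vector):", acceleration_direction(positions, phi))
```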
Schach Von Wittenau, Alexis E.
2003-01-01
A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally both fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.
Propulsion Physics Under the Changing Density Field Model
NASA Technical Reports Server (NTRS)
Robertson, Glen A.
2011-01-01
To grow as a spacefaring race, future spaceflight systems will require new propulsion physics; specifically, a propulsion physics model that does not require mass ejection yet does not limit the high thrust necessary to accelerate within or beyond our solar system and return within a normal work period or lifetime. In 2004 Khoury and Weltman produced a density-dependent cosmology theory they called Chameleon Cosmology because, by its nature, it is hidden within known physics. This theory represents a scalar field within and about an object, even in the vacuum. These scalar fields can be viewed as vacuum energy fields with definable densities that permeate all matter; they have implications for dark matter/energy and universe acceleration properties, and they imply a new force mechanism for propulsion physics. Using Chameleon Cosmology, the author has developed a new propulsion physics model, called the Changing Density Field (CDF) Model. This model relates changes in the density of these fields to the acceleration of matter within an object; these density changes in turn change how the object couples to the surrounding density fields. Thrust is achieved by causing a differential in the coupling to these density fields about an object. Since the model indicates that the density of the density field in an object can be changed by internal mass acceleration, even without exhausting mass, the CDF model implies a new propellant-less propulsion physics model.
Eisenbach, Markus
2017-01-01
A major impediment to deploying next-generation high-performance computational systems is the required electrical power, often measured in units of megawatts. The solution to this problem is driving the introduction of novel machine architectures, such as those employing many-core processors and specialized accelerators. In this article, we describe the use of a hybrid accelerated architecture to achieve both reduced time to solution and the associated reduction in the electrical cost for a state-of-the-art materials science computation.
NASA Astrophysics Data System (ADS)
Angius, S.; Bisegni, C.; Ciuffetti, P.; Di Pirro, G.; Foggetta, L. G.; Galletti, F.; Gargana, R.; Gioscio, E.; Maselli, D.; Mazzitelli, G.; Michelotti, A.; Orrù, R.; Pistoni, M.; Spagnoli, F.; Spigone, D.; Stecchi, A.; Tonto, T.; Tota, M. A.; Catani, L.; Di Giulio, C.; Salina, G.; Buzzi, P.; Checcucci, B.; Lubrano, P.; Piccini, M.; Fattibene, E.; Michelotto, M.; Cavallaro, S. R.; Diana, B. F.; Enrico, F.; Pulvirenti, S.
2016-01-01
This paper presents the !CHAOS open source project, which aims to develop a prototype of a national private Cloud Computing infrastructure devoted to accelerator control systems and large experiments in High Energy Physics (HEP). The !CHAOS project has been financed by MIUR (Italian Ministry of Research and Education) and aims to develop a new concept of control system and data acquisition framework by providing, with a high level of abstraction, all the services needed for controlling and managing a large scientific, or non-scientific, infrastructure. A beta version of the !CHAOS infrastructure will be released at the end of December 2015 and will run on private Cloud infrastructures based on OpenStack.
NASA Technical Reports Server (NTRS)
Krishen, Kumar (Editor); Burnham, Calvin (Editor)
1995-01-01
The papers presented at the 4th International Conference Exhibition: World Congress on Superconductivity, held at the Marriott Orlando World Center, Orlando, Florida, are contained in this document and encompass the research, technology, applications, funding, political, and social aspects of superconductivity. Specifically, the areas covered included: high-temperature materials; thin films; C-60 based superconductors; persistent magnetic fields and shielding; fabrication methodology; space applications; physical applications; performance characterization; device applications; weak link effects and flux motion; accelerator technology; superconductivity energy storage; future research and development directions; medical applications; granular superconductors; wire fabrication technology; computer applications; technical and commercial challenges; and power and energy applications.
Computational tools and lattice design for the PEP-II B-Factory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Y.; Irwin, J.; Nosochkov, Y.
1997-02-01
Several accelerator codes were used to design the PEP-II lattices, ranging from matrix-based codes, such as MAD and DIMAD, to symplectic-integrator codes, such as TRACY and DESPOT. In addition to element-by-element tracking, we constructed maps to determine aberration strengths. Furthermore, we have developed a fast and reliable method (nPB tracking) to track particles with a one-turn map. This new technique allows us to evaluate the performance of the lattices over the entire tune plane. Recently, we designed and implemented an object-oriented code in C++ called LEGO, which integrates and expands upon TRACY and DESPOT. © 1997 American Institute of Physics.
Finite-element approach to Brownian dynamics of polymers.
Cyron, Christian J; Wall, Wolfgang A
2009-12-01
In the last decades, simulation tools for Brownian dynamics of polymers have attracted more and more interest. Such simulation tools have been applied to a large variety of problems and have accelerated scientific progress significantly. However, the currently most frequently used explicit bead models exhibit severe limitations, especially with respect to time step size, the necessity of artificial constraints, and the lack of a sound mathematical foundation. Here we present a framework for simulations of Brownian polymer dynamics based on the finite-element method. This approach allows simulating a wide range of physical phenomena at a highly attractive computational cost on the basis of a well-developed mathematical foundation.
Remediation of Groundwater Contaminated by Nuclear Waste
NASA Astrophysics Data System (ADS)
Parker, Jack; Palumbo, Anthony
2008-07-01
A Workshop on Accelerating Development of Practical Field-Scale Bioremediation Models; An Online Meeting, 23 January to 20 February 2008; A Web-based workshop sponsored by the U.S. Department of Energy Environmental Remediation Sciences Program (DOE/ERSP) was organized in early 2008 to assess the state of the science and knowledge gaps associated with the use of computer models to facilitate remediation of groundwater contaminated by wastes from Cold War era nuclear weapons development and production. Microbially mediated biological reactions offer a potentially efficient means to treat these sites, but considerable uncertainty exists in the coupled biological, chemical, and physical processes and their mathematical representation.
NASA Technical Reports Server (NTRS)
Krishen, Kumar (Editor); Burnham, Calvin (Editor)
1995-01-01
This document contains papers presented at the 4th International Conference Exhibition: World Congress on Superconductivity held June 27-July 1, 1994 in Orlando, Florida. These documents encompass research, technology, applications, funding, political, and social aspects of superconductivity. The areas covered included: high-temperature materials; thin films; C-60 based superconductors; persistent magnetic fields and shielding; fabrication methodology; space applications; physical applications; performance characterization; device applications; weak link effects and flux motion; accelerator technology; superconductivity energy storage; future research and development directions; medical applications; granular superconductors; wire fabrication technology; computer applications; technical and commercial challenges; and power and energy applications.
Exploratory Research and Development Fund, FY 1990
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1992-05-01
The Lawrence Berkeley Laboratory Exploratory R&D Fund FY 1990 report is compiled from annual reports submitted by principal investigators following the close of the fiscal year. This report describes the projects supported and summarizes their accomplishments. It constitutes a part of an Exploratory R&D Fund (ERF) planning and documentation process that includes an annual planning cycle, project selection, implementation, and review. The research areas covered in this report are: accelerator and fusion research; applied science; cell and molecular biology; chemical biodynamics; chemical sciences; earth sciences; engineering; information and computing sciences; materials sciences; nuclear science; physics; and research medicine and radiation biophysics.
Electron Accelerator Shielding Design of KIPT Neutron Source Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong, Zhaopeng; Gohar, Yousry
The Argonne National Laboratory of the United States and the Kharkov Institute of Physics and Technology of the Ukraine have been collaborating on the design, development and construction of a neutron source facility at Kharkov Institute of Physics and Technology utilizing an electron-accelerator-driven subcritical assembly. The electron beam power is 100 kW using 100-MeV electrons. The facility was designed to perform basic and applied nuclear research, produce medical isotopes, and train nuclear specialists. The biological shield of the accelerator building was designed to reduce the biological dose to less than 5.0e-03 mSv/h during operation. The main source of the biological dose for the accelerator building is the photons and neutrons generated from different interactions of leaked electrons from the electron gun and the accelerator sections with the surrounding components and materials. The Monte Carlo N-particle extended code (MCNPX) was used for the shielding calculations because of its capability to perform electron-, photon-, and neutron-coupled transport simulations. The photon dose was tallied using the MCNPX calculation, starting with the leaked electrons. However, it is difficult to accurately tally the neutron dose directly from the leaked electrons. The neutron yield per electron from the interactions with the surrounding components is very small, approximately 0.01 neutrons for a 100-MeV electron and even smaller for lower-energy electrons. This causes difficulties for the Monte Carlo analyses and consumes tremendous computational resources for tallying the neutron dose outside the shield boundary with acceptable accuracy. To avoid these difficulties, the SOURCE and TALLYX user subroutines of MCNPX were utilized for this study. The generated neutrons were banked, together with all related parameters, for a subsequent MCNPX calculation to obtain the neutron dose. The weight windows variance reduction technique was also utilized for both neutron and photon dose calculations. Two shielding materials, heavy concrete and ordinary concrete, were considered for the shield design. The main goal is to maintain the total dose outside the shield boundary below 5.0e-03 mSv/h during operation. The shield configuration and parameters of the accelerator building were determined and are presented in this paper. Copyright (C) 2016, Published by Elsevier Korea LLC on behalf of Korean Nuclear Society.
GPU-Accelerated Molecular Modeling Coming Of Age
Stone, John E.; Hardy, David J.; Ufimtsev, Ivan S.
2010-01-01
Graphics processing units (GPUs) have traditionally been used in molecular modeling solely for visualization of molecular structures and animation of trajectories resulting from molecular dynamics simulations. Modern GPUs have evolved into fully programmable, massively parallel co-processors that can now be exploited to accelerate many scientific computations, typically providing about one order of magnitude speedup over CPU code and in special cases providing speedups of two orders of magnitude. This paper surveys the development of molecular modeling algorithms that leverage GPU computing, the advances already made and remaining issues to be resolved, and the continuing evolution of GPU technology that promises to become even more useful to molecular modeling. Hardware acceleration with commodity GPUs is expected to benefit the overall computational biology community by bringing teraflops performance to desktop workstations and in some cases potentially changing what were formerly batch-mode computational jobs into interactive tasks. PMID:20675161
GPU-accelerated molecular modeling coming of age.
Stone, John E; Hardy, David J; Ufimtsev, Ivan S; Schulten, Klaus
2010-09-01
Graphics processing units (GPUs) have traditionally been used in molecular modeling solely for visualization of molecular structures and animation of trajectories resulting from molecular dynamics simulations. Modern GPUs have evolved into fully programmable, massively parallel co-processors that can now be exploited to accelerate many scientific computations, typically providing about one order of magnitude speedup over CPU code and in special cases providing speedups of two orders of magnitude. This paper surveys the development of molecular modeling algorithms that leverage GPU computing, the advances already made and remaining issues to be resolved, and the continuing evolution of GPU technology that promises to become even more useful to molecular modeling. Hardware acceleration with commodity GPUs is expected to benefit the overall computational biology community by bringing teraflops performance to desktop workstations and in some cases potentially changing what were formerly batch-mode computational jobs into interactive tasks. (c) 2010 Elsevier Inc. All rights reserved.
Studies on the ionospheric-thermospheric coupling mechanisms using SLR
NASA Astrophysics Data System (ADS)
Panzetta, Francesca; Erdogan, Eren; Bloßfeld, Mathis; Schmidt, Michael
2016-04-01
Several Low Earth Orbiters (LEOs) have been used by different research groups to model the thermospheric neutral density distribution at various altitudes performing Precise Orbit Determination (POD) in combination with satellite accelerometry. This approach is, in principle, based on satellite drag analysis, driven by the fact that the drag force is one of the major perturbing forces acting on LEOs. The satellite drag itself is physically related to the thermospheric density. The present contribution investigates the possibility to compute the thermospheric density from Satellite Laser Ranging (SLR) observations. SLR is commonly used to compute very accurate satellite orbits. As a prerequisite, a very high precise modelling of gravitational and non-gravitational accelerations is necessary. For this investigation, a sensitivity study of SLR observations to thermospheric density variations is performed using the DGFI Orbit and Geodetic parameter estimation Software (DOGS). SLR data from satellites at altitudes lower than 500 km are processed adopting different thermospheric models. The drag coefficients which describe the interaction of the satellite surfaces with the atmosphere are analytically computed in order to obtain scaling factors purely related to the thermospheric density. The results are reported and discussed in terms of estimates of scaling coefficients of the thermospheric density. Besides, further extensions and improvements in thermospheric density modelling obtained by combining a physics-based approach with ionospheric observations are investigated. For this purpose, the coupling mechanisms between the thermosphere and ionosphere are studied.
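For reference, the drag acceleration that ties a satellite orbit to the thermospheric density is, in its standard textbook form (a sketch only; the force model actually implemented in DOGS is more detailed), with C_D the drag coefficient, A/m the area-to-mass ratio, ρ the local thermospheric density, and v_r the velocity relative to the co-rotating atmosphere:

```latex
\vec{a}_{\mathrm{drag}} \;=\; -\tfrac{1}{2}\, C_{D}\,\frac{A}{m}\,\rho\, v_{r}^{2}\,\hat{v}_{r} .
```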
Overview of Particle and Heavy Ion Transport Code System PHITS
NASA Astrophysics Data System (ADS)
Sato, Tatsuhiko; Niita, Koji; Matsuda, Norihiro; Hashimoto, Shintaro; Iwamoto, Yosuke; Furuta, Takuya; Noda, Shusaku; Ogawa, Tatsuhiko; Iwase, Hiroshi; Nakashima, Hiroshi; Fukahori, Tokio; Okumura, Keisuke; Kai, Tetsuya; Chiba, Satoshi; Sihver, Lembit
2014-06-01
A general purpose Monte Carlo Particle and Heavy Ion Transport code System, PHITS, is being developed through the collaboration of several institutes in Japan and Europe. The Japan Atomic Energy Agency is responsible for managing the entire project. PHITS can deal with the transport of nearly all particles, including neutrons, protons, heavy ions, photons, and electrons, over wide energy ranges using various nuclear reaction models and data libraries. It is written in Fortran and can be executed on almost all computers. All components of PHITS, such as its source, executable and data-library files, are assembled in one package and then distributed to many countries via the Research Organization for Information Science and Technology, the Data Bank of the Organization for Economic Co-operation and Development's Nuclear Energy Agency, and the Radiation Safety Information Computational Center. More than 1,000 researchers have been registered as PHITS users, and they apply the code to various research and development fields such as nuclear technology, accelerator design, medical physics, and cosmic-ray research. This paper briefly summarizes the physics models implemented in PHITS and introduces some important functions useful for specific applications, such as an event generator mode and beam transport functions.
Hypersonic merged layer blunt body flows with wakes
NASA Technical Reports Server (NTRS)
Jain, Amolak C.; Dahm, Werner K.
1991-01-01
An attempt is made here to understand the basic physics of the flowfield with wake on a blunt body of revolution under hypersonic rarefied conditions. A merged-layer model of the flow is envisioned. The full steady-state Navier-Stokes equations in a spherical polar coordinate system are solved, from the surface with slip and temperature-jump conditions to the free stream, by the Accelerated Successive Replacement method of numerical integration. The analysis is developed for bodies of arbitrary shape, but actual computations have been carried out for a sphere and a sphere-cone body. Particular attention is paid to establishing the limits of the onset of separation, wake closure, shear-layer impingement, and the formation and dissipation of shocks in the flowfield. The validity of the results is established by comparing the present results for the sphere with the corresponding results of the SOFIA code in the common region of their validity and with the experimental data.
Simulating the formation of cosmic structure.
Frenk, C S
2002-06-15
A timely combination of new theoretical ideas and observational discoveries has brought about significant advances in our understanding of cosmic evolution. Computer simulations have played a key role in these developments by providing the means to interpret astronomical data in the context of physical and cosmological theory. In the current paradigm, our Universe has a flat geometry, is undergoing accelerated expansion and is gravitationally dominated by elementary particles that make up cold dark matter. Within this framework, it is possible to simulate in a computer the emergence of galaxies and other structures from small quantum fluctuations imprinted during an epoch of inflationary expansion shortly after the Big Bang. The simulations must take into account the evolution of the dark matter as well as the gaseous processes involved in the formation of stars and other visible components. Although many unresolved questions remain, a coherent picture for the formation of cosmic structure is now beginning to emerge.
The Center for Multiscale Plasma Dynamics, Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gombosi, Tamas I.
The University of Michigan participated in the joint UCLA/Maryland fusion science center focused on plasma physics problems for which the traditional separation of the dynamics into microscale and macroscale processes breaks down. These processes involve large scale flows and magnetic fields tightly coupled to the small scale, kinetic dynamics of turbulence, particle acceleration and energy cascade. The interaction between these vastly disparate scales controls the evolution of the system. The enormous range of temporal and spatial scales associated with these problems renders direct simulation intractable even in computations that use the largest existing parallel computers. Our efforts focused on two main problems: the development of Hall MHD solvers on solution adaptive grids and the development of solution adaptive grids using generalized coordinates so that the proper geometry of inertial confinement can be taken into account and efficient refinement strategies can be obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, David; Agarwal, Deborah A.; Sun, Xin
2011-09-01
The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, D.; Agarwal, D.; Sun, X.
2011-01-01
The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.
ERIC Educational Resources Information Center
Education Commission of the States, Denver, CO.
This paper provides an overview of Accelerated Reader, a system of computerized testing and record-keeping that supplements the regular classroom reading program. Accelerated Reader's primary goal is to increase literature-based reading practice. The program offers a computer-aided reading comprehension and management program intended to motivate…
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1992-01-01
Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
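To make the multigrid idea concrete, the following is a minimal geometric-multigrid V-cycle for a 1D Poisson problem. It is an illustrative sketch only, not the Proteus implementation; all function names, the Jacobi weight, and the smoothing counts are assumptions.

```python
import numpy as np

def smooth(u, f, h, iters=3):
    # weighted Jacobi smoothing for -u'' = f with homogeneous Dirichlet BCs
    for _ in range(iters):
        u[1:-1] += 0.8 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    return r[::2].copy()                       # injection onto the coarse grid

def prolong(e):
    fine = np.zeros(2 * len(e) - 1)
    fine[::2] = e                              # copy coarse values
    fine[1::2] = 0.5 * (e[:-1] + e[1:])        # linear interpolation in between
    return fine

def v_cycle(u, f, h):
    if len(u) <= 3:                            # coarsest grid: solve exactly
        u[1:-1] = 0.5 * h * h * f[1:-1]
        return u
    u = smooth(u, f, h)                        # pre-smoothing
    e = v_cycle(np.zeros(len(u) // 2 + 1), restrict(residual(u, f, h)), 2 * h)
    u += prolong(e)                            # coarse-grid correction
    return smooth(u, f, h)                     # post-smoothing

# toy usage: solve -u'' = 1 on (0, 1) with u(0) = u(1) = 0 on a 129-point grid
n = 129
x = np.linspace(0.0, 1.0, n)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, np.ones(n), x[1] - x[0])
```

Repeated V-cycles drive the residual down far faster than smoothing alone, which is the essence of the convergence acceleration discussed above.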
NASA Technical Reports Server (NTRS)
Moravec, Hans
1993-01-01
Exploration and colonization of the universe awaits, but Earth-adapted biological humans are ill-equipped to respond to the challenge. Machines have gone farther and seen more, limited though they presently are by insect-like behavioral inflexibility. As they become smarter over the coming decades, space will be theirs. Organizations of robots of ever increasing intelligence and sensory and motor ability will expand and transform what they occupy, working with matter, space and time. As they grow, a smaller and smaller fraction of their territory will be undeveloped frontier. Competitive success will depend more and more on using already available matter and space in ever more refined and useful forms. The process, analogous to the miniaturization that makes today's computers a trillion times more powerful than the mechanical calculators of the past, will gradually transform all activity from grossly physical homesteading of raw nature to minimum-energy quantum transactions of computation. The final frontier will be urbanized, ultimately into an arena where every bit of activity is a meaningful computation: the inhabited portion of the universe will be transformed into a cyberspace. Because it will use resources more efficiently, a mature cyberspace of the distant future will be effectively much bigger than the present physical universe. While only an infinitesimal fraction of existing matter and space is doing interesting work, in a well developed cyberspace every bit will be part of a relevant computation or storing a useful datum. Over time, more compact and faster ways of using space and matter will be invented, and used to restructure the cyberspace, effectively increasing the amount of computational spacetime per unit of physical spacetime. Computational speed-ups will affect the subjective experience of entities in the cyberspace in a paradoxical way. At first glance, there is no subjective effect, because everything, inside and outside the individual, speeds up equally. But, more subtly, speed-up produces an expansion of the cyber universe, because, as thought accelerates, more subjective time passes during the fixed (probably lightspeed) physical transit time of a message between a given pair of locations - so those fixed locations seem to grow farther apart. Also, as information storage is made continually more efficient through both denser utilization of matter and more efficient encodings, there will be increasingly more cyber-stuff between any two points. The effect may somewhat resemble the continuous-creation process in the old steady-state theory of the physical universe of Hoyle, Bondi and Gold, where hydrogen atoms appear just fast enough throughout the expanding cosmos to maintain a constant density.
Design and evaluation of a hybrid storage system in HEP environment
NASA Astrophysics Data System (ADS)
Xu, Qi; Cheng, Yaodong; Chen, Gang
2017-10-01
Nowadays, High Energy Physics experiments produce a large amount of data. These data are stored in mass storage systems, which need to balance cost, performance, and manageability. In this paper, a hybrid storage system including SSDs (solid-state drives) and HDDs (hard disk drives) is designed to accelerate data analysis while maintaining a low cost. The performance of accessing files is a decisive factor for the HEP computing system. A new deployment model for a hybrid storage system in High Energy Physics is proposed and shown to have higher I/O performance. Detailed evaluation methods, along with evaluations of the SSD/HDD ratio and the size of the logical block, are also given. In all evaluations, sequential read, sequential write, random read, and random write are tested to obtain comprehensive results. The results show that the hybrid storage system performs well in areas such as accessing big files in HEP.
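As a toy illustration of the tiered idea described above, the sketch below keeps recently read files in an SSD cache directory and falls back to an HDD directory otherwise. The directory layout, the LRU eviction policy, and the class and method names are illustrative assumptions, not the system evaluated in the paper.

```python
import os
import shutil

class HybridStore:
    """Toy tiered store: hot files are served from an SSD cache, the rest from HDD."""

    def __init__(self, ssd_dir, hdd_dir, ssd_capacity_bytes):
        self.ssd_dir, self.hdd_dir = ssd_dir, hdd_dir
        self.capacity = ssd_capacity_bytes
        self.lru = []                              # file names, least recently used first

    def fetch(self, name):
        ssd_path = os.path.join(self.ssd_dir, name)
        if not os.path.exists(ssd_path):           # cache miss: promote from HDD to SSD
            self._promote(name, ssd_path)
        self.lru = [n for n in self.lru if n != name] + [name]
        return open(ssd_path, "rb")

    def _promote(self, name, ssd_path):
        shutil.copy(os.path.join(self.hdd_dir, name), ssd_path)
        while self._ssd_usage() > self.capacity and self.lru:
            victim = self.lru.pop(0)               # evict the coldest cached file
            os.remove(os.path.join(self.ssd_dir, victim))

    def _ssd_usage(self):
        return sum(os.path.getsize(os.path.join(self.ssd_dir, f))
                   for f in os.listdir(self.ssd_dir))
```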
Pendulums, Pedagogy, and Matter: Lessons from the Editing of Newton's Principia
NASA Astrophysics Data System (ADS)
Biener, Zvi; Smeenk, Chris
Teaching Newtonian physics involves the replacement of students' ideas about physical situations with precise concepts appropriate for mathematical applications. This paper focuses on the concepts of 'matter' and 'mass'. We suggest that students, like some pre-Newtonian scientists we examine, use these terms in a way that conflicts with their Newtonian meaning. Specifically, 'matter' and 'mass' indicate to them the sorts of things that are tangible, bulky, and take up space. In Newtonian mechanics, however, the terms are defined by Newton's Second Law: 'mass' is simply a measure of the acceleration generated by an impressed force. We examine the relationship between these conceptions as it was discussed by Newton and his editor, Roger Cotes, when analyzing a series of pendulum experiments. We suggest that these experiments, as well as more sophisticated computer simulations, can be used in the classroom to sufficiently differentiate the colloquial and precise meaning of these terms.
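In symbols, the operational definition the authors contrast with the colloquial one is just Newton's Second Law read as a definition of mass (a sketch of the standard statement, not a quotation from the paper):

```latex
\vec{F} = m\,\vec{a} \quad\Longrightarrow\quad m = \frac{|\vec{F}|}{|\vec{a}|}.
```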
Figuring the Acceleration of the Simple Pendulum
ERIC Educational Resources Information Center
Lieberherr, Martin
2011-01-01
The centripetal acceleration has been known since Huygens' (1659) and Newton's (1684) time. The physics to calculate the acceleration of a simple pendulum has been around for more than 300 years, and a fairly complete treatise has been given by C. Schwarz in this journal. But sentences like "the acceleration is always directed towards the…
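For completeness, the standard decomposition for an ideal simple pendulum of length L released from rest at angle θ₀ is sketched below (a textbook result consistent with the treatment cited above, not a quotation from the article):

```latex
a_{\mathrm{tan}} = g\sin\theta, \qquad
a_{\mathrm{cent}} = \frac{v^{2}}{L} = 2g\,(\cos\theta - \cos\theta_{0}), \qquad
a = \sqrt{a_{\mathrm{tan}}^{2} + a_{\mathrm{cent}}^{2}} .
```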
Accelerated test plan for nickel cadmium spacecraft batteries
NASA Technical Reports Server (NTRS)
Hennigan, T. J.
1973-01-01
An accelerated test matrix is outlined that includes acceptance, baseline and post-cycling tests, chemical and physical analyses, and the data analysis procedures to be used in determining the feasibility of an accelerated test for sealed, nickel cadmium cells.
Modeling landslide runout dynamics and hazards: crucial effects of initial conditions
NASA Astrophysics Data System (ADS)
Iverson, R. M.; George, D. L.
2016-12-01
Physically based numerical models can provide useful tools for forecasting landslide runout and associated hazards, but only if the models employ initial conditions and parameter values that faithfully represent the states of geological materials on slopes. Many models assume that a landslide begins from a heap of granular material poised on a slope and held in check by an imaginary dam. A computer instruction instantaneously removes the dam, unleashing a modeled landslide that accelerates under the influence of a large force imbalance. Thus, an unrealistically large initial acceleration influences all subsequent modeled motion. By contrast, most natural landslides are triggered by small perturbations of statically balanced effective stress states, which are commonly caused by rainfall, snowmelt, or earthquakes. Landslide motion begins with an infinitesimal force imbalance and commensurately small acceleration. However, a small initial force imbalance can evolve into a much larger imbalance if feedback causes a reduction in resisting forces. A well-documented source of such feedback involves dilatancy coupled to pore-pressure evolution, which may either increase or decrease effective Coulomb friction—contingent on initial conditions. Landslide dynamics models that account for this feedback include our D-Claw model (Proc. Roy. Soc. Lon., Ser. A, 2014, doi: 10.1098/rspa.2013.0819 and doi:10.1098/rspa.2013.0820) and a similar model presented by Bouchut et al. (J. Fluid Mech., 2016, doi:10.1017/jfm.2016.417). We illustrate the crucial effects of initial conditions and dilatancy coupled to pore-pressure feedback by using D-Claw to perform simple test calculations and also by computing alternative behaviors of the well-documented Oso, Washington, and West Salt Creek, Colorado, landslides of 2014. We conclude that realistic initial conditions and feedbacks are essential elements in numerical models used to forecast landslide runout dynamics and hazards.
NASA Astrophysics Data System (ADS)
Choi, Sunghoon; Lee, Haenghwa; Lee, Donghoon; Choi, Seungyeon; Shin, Jungwook; Jang, Woojin; Seo, Chang-Woo; Kim, Hee-Joung
2017-03-01
A compressed-sensing (CS) technique has been rapidly adopted in the medical imaging field for retrieving volumetric data from highly under-sampled projections. Among its many variant forms, the CS technique based on a total-variation (TV) regularization strategy shows fairly reasonable results in cone-beam geometry. In this study, we implemented the TV-based CS image reconstruction strategy in our prototype chest digital tomosynthesis (CDT) R/F system. Because of the iterative nature of the time-consuming processes in solving a cost function, we took advantage of parallel computing on graphics processing units (GPUs) using compute unified device architecture (CUDA) programming to accelerate our algorithm. To compare algorithmic performance with our proposed CS algorithm, conventional filtered back-projection (FBP) and simultaneous algebraic reconstruction technique (SART) reconstruction schemes were also studied. The results indicated that the CS produced better contrast-to-noise ratios (CNRs) in the physical phantom images (Teflon region-of-interest) by factors of 3.91 and 1.93 compared with the FBP and SART images, respectively. The resulting human chest phantom images, including lung nodules with different diameters, also showed better visual appearance in the CS images. Our proposed GPU-accelerated CS reconstruction scheme could produce volumetric data up to 80 times faster than CPU programming. The total elapsed time for producing 50 coronal planes with a 1024×1024 image matrix using 41 projection views was 216.74 seconds for the proposed CS algorithm with our GPU programming, which could meet the clinically feasible time (about 3 min). Consequently, our results demonstrated that the proposed CS method shows potential for additional dose reduction in digital tomosynthesis with reasonable image quality in a fast time.
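The essence of a TV-regularized CS reconstruction can be sketched on a CPU with plain (sub)gradient descent. This is a simplified illustration only, not the GPU-accelerated SART-plus-TV pipeline of the study; the dense system matrix A, the regularization weight, and the step size are all assumptions.

```python
import numpy as np

def tv_subgradient(x):
    # subgradient of anisotropic total variation for a 2D image
    dx = np.diff(x, axis=0, append=x[-1:, :])
    dy = np.diff(x, axis=1, append=x[:, -1:])
    sx, sy = np.sign(dx), np.sign(dy)
    g = -(sx + sy)
    g[1:, :] += sx[:-1, :]
    g[:, 1:] += sy[:, :-1]
    return g

def cs_reconstruct(A, b, shape, lam=0.05, step=1e-3, iters=500):
    # minimize 0.5 * ||A x - b||^2 + lam * TV(x) by (sub)gradient descent
    x = np.zeros(shape)
    for _ in range(iters):
        data_grad = (A.T @ (A @ x.ravel() - b)).reshape(shape)
        x -= step * (data_grad + lam * tv_subgradient(x))
    return x

# toy usage: recover a 32x32 piecewise-constant image from ~40% random projections
rng = np.random.default_rng(0)
truth = np.zeros((32, 32)); truth[8:20, 10:24] = 1.0
A = rng.standard_normal((int(0.4 * 32 * 32), 32 * 32)) / 32.0
recon = cs_reconstruct(A, A @ truth.ravel(), truth.shape)
```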
Software package for modeling spin–orbit motion in storage rings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zyuzin, D. V., E-mail: d.zyuzin@fz-juelich.de
2015-12-15
A software package providing a graphical user interface for computer experiments on the motion of charged particle beams in accelerators, as well as analysis of obtained data, is presented. The software package was tested in the framework of the international project on electric dipole moment measurement JEDI (Jülich Electric Dipole moment Investigations). The specific features of particle spin motion imply the requirement to use a cyclic accelerator (storage ring) consisting of electrostatic elements, which makes it possible to preserve horizontal polarization for a long time. Computer experiments study the dynamics of 10^6–10^9 particles in a beam during 10^9 turns in an accelerator (about 10^12–10^15 integration steps for the equations of motion). For designing an optimal accelerator structure, a large number of computer experiments on polarized beam dynamics are required. The numerical core of the package is COSY Infinity, a program for modeling spin–orbit dynamics.
Accelerated spike resampling for accurate multiple testing controls.
Harrison, Matthew T
2013-02-01
Controlling for multiple hypothesis tests using standard spike resampling techniques often requires prohibitive amounts of computation. Importance sampling techniques can be used to accelerate the computation. The general theory is presented, along with specific examples for testing differences across conditions using permutation tests and for testing pairwise synchrony and precise lagged-correlation between many simultaneously recorded spike trains using interval jitter.
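For orientation, a plain Monte Carlo version of an interval-jitter synchrony test looks like the sketch below; the importance-sampling acceleration that is the subject of the paper is deliberately omitted, and the window width, tolerance, and function names are assumptions.

```python
import numpy as np

def interval_jitter(spikes, width, rng):
    # resample each spike uniformly within its own jitter interval of the given width
    bins = np.floor(spikes / width)
    return np.sort((bins + rng.random(len(spikes))) * width)

def sync_count(a, b, tol):
    # number of spikes in train a with a partner in sorted train b within +/- tol
    idx = np.searchsorted(b, a)
    left = np.abs(a - b[np.clip(idx - 1, 0, len(b) - 1)])
    right = np.abs(a - b[np.clip(idx, 0, len(b) - 1)])
    return int(np.sum(np.minimum(left, right) <= tol))

def jitter_pvalue(a, b, width=0.02, tol=0.005, n_resamples=1000, seed=0):
    # one-sided p-value for excess synchrony under the interval-jitter null
    rng = np.random.default_rng(seed)
    observed = sync_count(a, b, tol)
    null = [sync_count(interval_jitter(a, width, rng), b, tol)
            for _ in range(n_resamples)]
    return (1 + sum(s >= observed for s in null)) / (n_resamples + 1)
```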
NASA Technical Reports Server (NTRS)
Klein, M.; Reynolds, J.; Ricks, E.
1989-01-01
Load and stress recovery from transient dynamic studies are improved by using an extended acceleration vector in the modal acceleration technique applied to structural analysis. An extension of the normal LTM (load transformation matrices) stress recovery to automatically compute margins of safety is presented, with an application to the Hubble Space Telescope.
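As background, one common textbook form of the modal acceleration method (undamped system, mass-normalized modes φ_i with natural frequencies ω_i) is shown below; the extended acceleration vector and the LTM-based margin-of-safety computation of the paper are not reproduced here.

```latex
M\ddot{u} + Ku = F(t), \qquad
u(t) \;\approx\; K^{-1}F(t) \;-\; \sum_{i=1}^{n}\frac{\phi_{i}\,\ddot{q}_{i}(t)}{\omega_{i}^{2}} .
```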
NASA Astrophysics Data System (ADS)
Poehlman, W. F. S.; Garland, Wm. J.; Stark, J. W.
1993-06-01
In an era of downsizing and a limited pool of skilled accelerator personnel from which to draw replacements for an aging workforce, the impetus to integrate intelligent computer automation into the accelerator operator's repertoire is strong. However, successful deployment of an "Operator's Companion" is not trivial. Both graphical and human factors need to be recognized as critical areas that require extra care when formulating the Companion: the interactive graphical user interface must mimic, for the operator, familiar accelerator controls; the knowledge acquisition phases during development must acknowledge the expert's mental model of machine operation; and automated operations must be seen as improvements to the operator's environment rather than threats of ultimate replacement. Experiences with the PACES Accelerator Operator Companion, developed at two sites over the past three years, are related and graphical examples are given. The scale of the work involves multi-computer control of various start-up/shutdown and tuning procedures for Model FN and KN Van de Graaff accelerators. The response from licensing agencies has been encouraging.
Utilizing GPUs to Accelerate Turbomachinery CFD Codes
NASA Technical Reports Server (NTRS)
MacCalla, Weylin; Kulkarni, Sameer
2016-01-01
GPU computing has established itself as a way to accelerate parallel codes in the high performance computing world. This work focuses on speeding up APNASA, a legacy CFD code used at NASA Glenn Research Center, while also drawing conclusions about the nature of GPU computing and the requirements to make GPGPU worthwhile on legacy codes. Rewriting and restructuring of the source code was avoided to limit the introduction of new bugs. The code was profiled and investigated for parallelization potential, then OpenACC directives were used to indicate parallel parts of the code. The use of OpenACC directives was not able to reduce the runtime of APNASA on either the NVIDIA Tesla discrete graphics card, or the AMD accelerated processing unit. Additionally, it was found that in order to justify the use of GPGPU, the amount of parallel work being done within a kernel would have to greatly exceed the work being done by any one portion of the APNASA code. It was determined that in order for an application like APNASA to be accelerated on the GPU, it should not be modular in nature, and the parallel portions of the code must contain a large portion of the code's computation time.
NASA Technical Reports Server (NTRS)
Gorospe, George E., Jr.; Daigle, Matthew J.; Sankararaman, Shankar; Kulkarni, Chetan S.; Ng, Eley
2017-01-01
Prognostic methods enable operators and maintainers to predict the future performance for critical systems. However, these methods can be computationally expensive and may need to be performed each time new information about the system becomes available. In light of these computational requirements, we have investigated the application of graphics processing units (GPUs) as a computational platform for real-time prognostics. Recent advances in GPU technology have reduced cost and increased the computational capability of these highly parallel processing units, making them more attractive for the deployment of prognostic software. We present a survey of model-based prognostic algorithms with considerations for leveraging the parallel architecture of the GPU and a case study of GPU-accelerated battery prognostics with computational performance results.
GPU accelerated dynamic functional connectivity analysis for functional MRI data.
Akgün, Devrim; Sakoğlu, Ünal; Esquivel, Johnny; Adinoff, Bryon; Mete, Mutlu
2015-07-01
Recent advances in multi-core processors and graphics card based computational technologies have paved the way for an improved and dynamic utilization of parallel computing techniques. Numerous applications have been implemented for the acceleration of computationally-intensive problems in various computational science fields including bioinformatics, in which big data problems are prevalent. In neuroimaging, dynamic functional connectivity (DFC) analysis is a computationally demanding method used to investigate dynamic functional interactions among different brain regions or networks identified with functional magnetic resonance imaging (fMRI) data. In this study, we implemented and analyzed a parallel DFC algorithm based on thread-based and block-based approaches. The thread-based approach was designed to parallelize DFC computations and was implemented in both Open Multi-Processing (OpenMP) and Compute Unified Device Architecture (CUDA) programming platforms. Another approach developed in this study to better utilize the CUDA architecture is the block-based approach, where parallelization involves smaller parts of fMRI time-courses obtained by sliding windows. Experimental results showed that the proposed parallel design solutions enabled by the GPUs significantly reduce the computation time for DFC analysis. Multicore implementation using OpenMP on an 8-core processor provides up to 7.7× speed-up. GPU implementation using CUDA yielded substantial accelerations ranging from 18.5× to 157× speed-up once thread-based and block-based approaches were combined in the analysis. The proposed parallel programming solutions showed that multi-core processor and CUDA-supported GPU implementations accelerate DFC analysis significantly, and the developed algorithms make DFC analysis more practical for multi-subject studies with more dynamic analyses. Copyright © 2015 Elsevier Ltd. All rights reserved.
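Stripped of the GPU details, the sliding-window core of a DFC computation is compact. The sketch below is a serial NumPy illustration (the window length and step are assumptions); this inner loop over windows is the part the OpenMP and CUDA implementations parallelize.

```python
import numpy as np

def dynamic_fc(timecourses, window=30, step=1):
    """Sliding-window functional connectivity.

    timecourses: (T, R) array, T time points for R regions or networks.
    Returns an array of shape (n_windows, R, R) of correlation matrices.
    """
    T, R = timecourses.shape
    starts = list(range(0, T - window + 1, step))
    dfc = np.empty((len(starts), R, R))
    for k, s in enumerate(starts):
        dfc[k] = np.corrcoef(timecourses[s:s + window].T)
    return dfc

# toy usage: 300 time points, 10 regions
data = np.random.default_rng(0).standard_normal((300, 10))
windows = dynamic_fc(data, window=40, step=5)
print(windows.shape)  # (53, 10, 10)
```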
Ma, Hsiang-Yang; Lin, Ying-Hsiu; Wang, Chiao-Yin; Chen, Chiung-Nien; Ho, Ming-Chih; Tsui, Po-Hsiang
2016-08-01
Ultrasound Nakagami imaging is an attractive method for visualizing changes in envelope statistics. Window-modulated compounding (WMC) Nakagami imaging was reported to improve image smoothness. The sliding window technique is typically used for constructing ultrasound parametric and Nakagami images. Using a large window overlap ratio may improve the WMC Nakagami image resolution but reduces computational efficiency. Therefore, the objectives of this study were: (i) to explore the effects of the window overlap ratio on the resolution and smoothness of WMC Nakagami images; and (ii) to propose a fast algorithm based on the convolution operator (FACO) to accelerate WMC Nakagami imaging. Computer simulations and preliminary clinical tests on liver fibrosis samples (n=48) were performed to validate FACO-based WMC Nakagami imaging. The results demonstrated that the width of the autocorrelation function and the spread of the parameter distribution of the WMC Nakagami image decrease as the window overlap ratio increases. One-pixel shifting (i.e., sliding the window over the image data in steps of one pixel for parametric imaging) as the maximum overlap ratio significantly improves WMC Nakagami image quality. Concurrently, the proposed FACO method, combined with a computational platform that optimizes matrix computation, can accelerate WMC Nakagami imaging, allowing the detection of liver fibrosis-induced changes in envelope statistics. FACO-accelerated WMC Nakagami imaging is a new-generation Nakagami imaging technique with improved image quality and fast computation. Copyright © 2016 Elsevier B.V. All rights reserved.
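The convolution idea behind window-based Nakagami imaging can be sketched with moment estimates computed by uniform filters, and compounding several window sizes mirrors the WMC step. This is an illustrative sketch only (window sizes and function names are assumptions), not the FACO implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nakagami_map(envelope, win):
    # moment-based Nakagami m-parameter map: m = E[R^2]^2 / Var(R^2),
    # with local moments obtained by uniform-filter (convolution) windows
    r2 = envelope.astype(float) ** 2
    mean_r2 = uniform_filter(r2, size=win)
    mean_r4 = uniform_filter(r2 ** 2, size=win)
    var_r2 = np.maximum(mean_r4 - mean_r2 ** 2, 1e-12)
    return mean_r2 ** 2 / var_r2

def wmc_nakagami(envelope, windows=(3, 5, 7, 9)):
    # window-modulated compounding: average the m-maps from several window sizes
    return np.mean([nakagami_map(envelope, w) for w in windows], axis=0)
```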
Gatignon, L
2018-05-01
The CERN Super Proton Synchrotron (SPS) has delivered a variety of beams to a vigorous fixed target physics program since 1978. In this paper, we restrict ourselves to the description of a few illustrative examples in the ongoing physics program at the SPS. We will outline the physics aims of the COmmon Muon Proton Apparatus for Structure and Spectroscopy (COMPASS), north area 64 (NA64), north area 62 (NA62), north area 61 (NA61), and advanced proton driven plasma wakefield acceleration experiment (AWAKE). COMPASS studies the structure of the proton and more specifically of its spin. NA64 searches for the dark photon A', which is the messenger for interactions between normal and dark matter. The NA62 experiment aims at a 10% precision measurement of the very rare decay K+ → π+νν. As this decay mode can be calculated very precisely in the Standard Model, it offers a very good opportunity to look for new physics beyond the Standard Model. The NA61/SHINE experiment studies the phase transition to Quark Gluon Plasma, a state in which the quarks and gluons that form the proton and the neutron are de-confined. Finally, AWAKE investigates proton-driven wake field acceleration: a promising technique to accelerate electrons with very high accelerating gradients. The Physics Beyond Colliders study at CERN is paving the way for a significant and diversified continuation of this already rich and compelling physics program that is complementary to the one at the big colliders like the Large Hadron Collider.
NASA Astrophysics Data System (ADS)
Gatignon, L.
2018-05-01
The CERN Super Proton Synchrotron (SPS) has delivered a variety of beams to a vigorous fixed target physics program since 1978. In this paper, we restrict ourselves to the description of a few illustrative examples in the ongoing physics program at the SPS. We will outline the physics aims of the COmmon Muon Proton Apparatus for Structure and Spectroscopy (COMPASS), north area 64 (NA64), north area 62 (NA62), north area 61 (NA61), and advanced proton driven plasma wakefield acceleration experiment (AWAKE). COMPASS studies the structure of the proton and more specifically of its spin. NA64 searches for the dark photon A', which is the messenger for interactions between normal and dark matter. The NA62 experiment aims at a 10% precision measurement of the very rare decay K+ → π+νν. As this decay mode can be calculated very precisely in the Standard Model, it offers a very good opportunity to look for new physics beyond the Standard Model. The NA61/SHINE experiment studies the phase transition to Quark Gluon Plasma, a state in which the quarks and gluons that form the proton and the neutron are de-confined. Finally, AWAKE investigates proton-driven wake field acceleration: a promising technique to accelerate electrons with very high accelerating gradients. The Physics Beyond Colliders study at CERN is paving the way for a significant and diversified continuation of this already rich and compelling physics program that is complementary to the one at the big colliders like the Large Hadron Collider.
DOE Office of Scientific and Technical Information (OSTI.GOV)
England, Joel
2014-06-30
SLAC's Joel England explains how the same fabrication techniques used for silicon computer microchips allowed their team to create the new laser-driven particle accelerator chips. (SLAC Multimedia Communications)
England, Joel
2018-01-16
SLAC's Joel England explains how the same fabrication techniques used for silicon computer microchips allowed their team to create the new laser-driven particle accelerator chips. (SLAC Multimedia Communications)
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
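The first acceleration technique listed above, replacing linear searches with binary versions, is easy to illustrate. The sketch below uses a hypothetical ascending energy-grid lookup; it is not taken from the ITS source.

```python
from bisect import bisect_right

def find_group_linear(grid, energy):
    # original-style linear scan over an ascending energy grid: O(N) comparisons
    for i in range(len(grid) - 1):
        if grid[i] <= energy < grid[i + 1]:
            return i
    raise ValueError("energy outside grid")

def find_group_binary(grid, energy):
    # binary-search replacement: identical result in O(log N) comparisons
    i = bisect_right(grid, energy) - 1
    if i < 0 or i >= len(grid) - 1:
        raise ValueError("energy outside grid")
    return i

grid = [0.01, 0.1, 1.0, 10.0, 100.0]   # MeV, hypothetical
assert find_group_linear(grid, 3.0) == find_group_binary(grid, 3.0) == 2
```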
NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT
NASA Astrophysics Data System (ADS)
Sohlberg, A.; Watabe, H.; Iida, H.
2008-07-01
Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate MC-based scatter compensation using coarse-grid and intermittent scatter modelling. The acceleration methods were compared to an un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake, and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered-subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that coarse-grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.
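For context, the OS-EM skeleton that a scatter model plugs into is shown below. The Monte Carlo scatter estimate, the coarse grid, and the intermittent update that the note is actually about are omitted, and the matrix-based system model and function name are assumptions for illustration.

```python
import numpy as np

def os_em(A, y, subsets, n_iter=10):
    """Ordered-subset EM reconstruction skeleton.

    A: (M, N) system matrix, y: (M,) measured projections,
    subsets: list of index arrays that partition the M projection bins.
    """
    x = np.ones(A.shape[1])                          # uniform initial estimate
    for _ in range(n_iter):
        for s in subsets:
            As = A[s]
            forward = np.maximum(As @ x, 1e-12)      # forward-project current estimate
            x *= (As.T @ (y[s] / forward)) / np.maximum(As.T @ np.ones(len(s)), 1e-12)
    return x
```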
Extended Task Space Control for Robotic Manipulators
NASA Technical Reports Server (NTRS)
Backes, Paul G. (Inventor); Long, Mark K. (Inventor)
1996-01-01
The invention is a method of operating a robot in successive sampling intervals to perform a task, the robot having joints and joint actuators with actuator control loops, by decomposing the task into behavior forces, accelerations, velocities, and positions of plural behaviors to be exhibited by the robot simultaneously; computing actuator accelerations of the joint actuators for the current sampling interval from both the behavior forces, accelerations, velocities, and positions of the current sampling interval and the actuator velocities and positions of the previous sampling interval; computing actuator velocities and positions of the joint actuators for the current sampling interval from the actuator velocities and positions of the previous sampling interval; and, finally, controlling the actuators in accordance with the actuator accelerations, velocities, and positions of the current sampling interval. The actuator accelerations, velocities, and positions of the current sampling interval are stored for use during the next sampling interval.
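A minimal per-sample update consistent with the description above might look like the following sketch; the mapping from task-space behaviors to joint-space accelerations (e.g., through a manipulator Jacobian) is omitted, and the names and the integration rule are assumptions.

```python
import numpy as np

def control_step(a_joint, v_prev, p_prev, dt):
    # a_joint: actuator accelerations for the current sampling interval,
    # already composed from the simultaneous behaviors and the previous state
    v = v_prev + a_joint * dt                             # current-interval velocities
    p = p_prev + v_prev * dt + 0.5 * a_joint * dt * dt    # current-interval positions
    return a_joint, v, p                                  # stored for the next interval
```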
Covariant Uniform Acceleration
NASA Astrophysics Data System (ADS)
Friedman, Yaakov; Scarr, Tzvi
2013-04-01
We derive a 4D covariant Relativistic Dynamics Equation. This equation canonically extends the 3D relativistic dynamics equation F = dp/dt, where F is the 3D force and p = m0γv is the 3D relativistic momentum. The standard 4D equation is only partially covariant. To achieve full Lorentz covariance, we replace the four-force F by a rank 2 antisymmetric tensor acting on the four-velocity. By taking this tensor to be constant, we obtain a covariant definition of uniformly accelerated motion. This solves a problem of Einstein and Planck. We compute explicit solutions for uniformly accelerated motion. The solutions are divided into four Lorentz-invariant types: null, linear, rotational, and general. For null acceleration, the worldline is cubic in the time. Linear acceleration covariantly extends 1D hyperbolic motion, while rotational acceleration covariantly extends pure rotational motion. We use Generalized Fermi-Walker transport to construct a uniformly accelerated family of inertial frames which are instantaneously comoving to a uniformly accelerated observer. We explain the connection between our approach and that of Mashhoon. We show that our solutions of uniformly accelerated motion have constant acceleration in the comoving frame. Assuming the Weak Hypothesis of Locality, we obtain local spacetime transformations from a uniformly accelerated frame K' to an inertial frame K. The spacetime transformations between two uniformly accelerated frames with the same acceleration are Lorentz. We compute the metric at an arbitrary point of a uniformly accelerated frame. We obtain velocity and acceleration transformations from a uniformly accelerated system K' to an inertial frame K. We introduce the 4D velocity, an adaptation of Horwitz and Piron's notion of "off-shell." We derive the general formula for the time dilation between accelerated clocks. We obtain a formula for the angular velocity of a uniformly accelerated object. Every rest point of K' is uniformly accelerated, and its acceleration is a function of the observer's acceleration and its position. We obtain an interpretation of the Lorentz-Abraham-Dirac equation as an acceleration transformation from K' to K.
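A plausible rendering of the tensor equation described above (a sketch consistent with the abstract, not a quotation of the paper's notation) is the following, where u^μ is the four-velocity and uniformly accelerated motion corresponds to a constant antisymmetric A:

```latex
\frac{du^{\mu}}{d\tau} \;=\; A^{\mu}{}_{\nu}\,u^{\nu},
\qquad A_{\mu\nu} = -A_{\nu\mu} = \text{const}.
```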
White paper on nuclear astrophysics and low energy nuclear physics Part 1: Nuclear astrophysics
Arcones, Almudena; Bardayan, Dan W.; Beers, Timothy C.; ...
2016-12-28
This white paper informs the nuclear astrophysics community and funding agencies about the scientific directions and priorities of the field and provides input from this community for the 2015 Nuclear Science Long Range Plan. It also summarizes the outcome of the nuclear astrophysics town meeting that was held on August 21–23, 2014 in College Station at the campus of Texas A&M University in preparation of the NSAC Nuclear Science Long Range Plan. It also reflects the outcome of an earlier town meeting of the nuclear astrophysics community organized by the Joint Institute for Nuclear Astrophysics (JINA) on October 9–10, 2012 in Detroit, Michigan, with the purpose of developing a vision for nuclear astrophysics in light of the recent NRC decadal surveys in nuclear physics (NP2010) and astronomy (ASTRO2010). Our white paper is informed by the town meeting of the Association of Research at University Nuclear Accelerators (ARUNA) that took place at the University of Notre Dame on June 12–13, 2014. In summary we find that nuclear astrophysics is a modern and vibrant field addressing fundamental science questions at the intersection of nuclear physics and astrophysics. These questions relate to the origin of the elements, the nuclear engines that drive life and death of stars, and the properties of dense matter. A broad range of nuclear accelerator facilities, astronomical observatories, theory efforts, and computational capabilities are needed. Answers to long-standing key questions are well within reach in the coming decade because of the developments outlined in this white paper.
White paper on nuclear astrophysics and low energy nuclear physics Part 1: Nuclear astrophysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arcones, Almudena; Bardayan, Dan W.; Beers, Timothy C.
This white paper informs the nuclear astrophysics community and funding agencies about the scientific directions and priorities of the field and provides input from this community for the 2015 Nuclear Science Long Range Plan. It also summarizes the outcome of the nuclear astrophysics town meeting that was held on August 21–23, 2014 in College Station at the campus of Texas A&M University in preparation of the NSAC Nuclear Science Long Range Plan. It also reflects the outcome of an earlier town meeting of the nuclear astrophysics community organized by the Joint Institute for Nuclear Astrophysics (JINA) on October 9–10, 2012 in Detroit, Michigan, with the purpose of developing a vision for nuclear astrophysics in light of the recent NRC decadal surveys in nuclear physics (NP2010) and astronomy (ASTRO2010). Our white paper is informed by the town meeting of the Association of Research at University Nuclear Accelerators (ARUNA) that took place at the University of Notre Dame on June 12–13, 2014. In summary we find that nuclear astrophysics is a modern and vibrant field addressing fundamental science questions at the intersection of nuclear physics and astrophysics. These questions relate to the origin of the elements, the nuclear engines that drive life and death of stars, and the properties of dense matter. A broad range of nuclear accelerator facilities, astronomical observatories, theory efforts, and computational capabilities are needed. Answers to long-standing key questions are well within reach in the coming decade because of the developments outlined in this white paper.
White Paper on Nuclear Astrophysics and Low Energy Nuclear Physics - Part 1. Nuclear Astrophysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arcones, Almudena; Escher, Jutta E.; Others, M.
This white paper informs the nuclear astrophysics community and funding agencies about the scientific directions and priorities of the field and provides input from this community for the 2015 Nuclear Science Long Range Plan. It summarizes the outcome of the nuclear astrophysics town meeting that was held on August 21-23, 2014 in College Station at the campus of Texas A&M University in preparation of the NSAC Nuclear Science Long Range Plan. It also reflects the outcome of an earlier town meeting of the nuclear astrophysics community organized by the Joint Institute for Nuclear Astrophysics (JINA) on October 9-10, 2012 in Detroit, Michigan, with the purpose of developing a vision for nuclear astrophysics in light of the recent NRC decadal surveys in nuclear physics (NP2010) and astronomy (ASTRO2010). The white paper is furthermore informed by the town meeting of the Association of Research at University Nuclear Accelerators (ARUNA) that took place at the University of Notre Dame on June 12-13, 2014. In summary, we find that nuclear astrophysics is a modern and vibrant field addressing fundamental science questions at the intersection of nuclear physics and astrophysics. These questions relate to the origin of the elements, the nuclear engines that drive the life and death of stars, and the properties of dense matter. A broad range of nuclear accelerator facilities, astronomical observatories, theory efforts, and computational capabilities are needed. With the developments outlined in this white paper, answers to long-standing key questions are well within reach in the coming decade.
White paper on nuclear astrophysics and low energy nuclear physics Part 1: Nuclear astrophysics
NASA Astrophysics Data System (ADS)
Arcones, Almudena; Bardayan, Dan W.; Beers, Timothy C.; Bernstein, Lee A.; Blackmon, Jeffrey C.; Messer, Bronson; Brown, B. Alex; Brown, Edward F.; Brune, Carl R.; Champagne, Art E.; Chieffi, Alessandro; Couture, Aaron J.; Danielewicz, Pawel; Diehl, Roland; El-Eid, Mounib; Escher, Jutta E.; Fields, Brian D.; Fröhlich, Carla; Herwig, Falk; Hix, William Raphael; Iliadis, Christian; Lynch, William G.; McLaughlin, Gail C.; Meyer, Bradley S.; Mezzacappa, Anthony; Nunes, Filomena; O'Shea, Brian W.; Prakash, Madappa; Pritychenko, Boris; Reddy, Sanjay; Rehm, Ernst; Rogachev, Grigory; Rutledge, Robert E.; Schatz, Hendrik; Smith, Michael S.; Stairs, Ingrid H.; Steiner, Andrew W.; Strohmayer, Tod E.; Timmes, F. X.; Townsley, Dean M.; Wiescher, Michael; Zegers, Remco G. T.; Zingale, Michael
2017-05-01
This white paper informs the nuclear astrophysics community and funding agencies about the scientific directions and priorities of the field and provides input from this community for the 2015 Nuclear Science Long Range Plan. It summarizes the outcome of the nuclear astrophysics town meeting that was held on August 21-23, 2014 in College Station at the campus of Texas A&M University in preparation of the NSAC Nuclear Science Long Range Plan. It also reflects the outcome of an earlier town meeting of the nuclear astrophysics community organized by the Joint Institute for Nuclear Astrophysics (JINA) on October 9-10, 2012 Detroit, Michigan, with the purpose of developing a vision for nuclear astrophysics in light of the recent NRC decadal surveys in nuclear physics (NP2010) and astronomy (ASTRO2010). The white paper is furthermore informed by the town meeting of the Association of Research at University Nuclear Accelerators (ARUNA) that took place at the University of Notre Dame on June 12-13, 2014. In summary we find that nuclear astrophysics is a modern and vibrant field addressing fundamental science questions at the intersection of nuclear physics and astrophysics. These questions relate to the origin of the elements, the nuclear engines that drive life and death of stars, and the properties of dense matter. A broad range of nuclear accelerator facilities, astronomical observatories, theory efforts, and computational capabilities are needed. With the developments outlined in this white paper, answers to long standing key questions are well within reach in the coming decade.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vesztergombi, G.
Present-day accelerators are working well at the multi-TeV energy scale, and exciting results are expected in the coming years. Conventional technologies, however, can offer only incremental (factor 2 or 3) increases in beam energy, which does not keep pace with the usual speed of progress at the frontiers of high-energy physics. Laser plasma accelerators theoretically provide unique possibilities to achieve orders-of-magnitude increases, entering the petaelectronvolt (PeV) energy range. We discuss what new perspectives could be opened for physics at this new energy scale, and what type of accelerators would be required.
NASA Technical Reports Server (NTRS)
Holman, Gordon
2010-01-01
Accelerated electrons play an important role in the energetics of solar flares. Understanding the process or processes that accelerate these electrons to high, nonthermal energies also depends on understanding the evolution of these electrons between the acceleration region and the region where they are observed through their hard X-ray or radio emission. Energy losses in the co-spatial electric field that drives the current-neutralizing return current can flatten the electron distribution toward low energies. This in turn flattens the corresponding bremsstrahlung hard X-ray spectrum toward low energies. The lost electron beam energy also enhances heating in the coronal part of the flare loop. Extending earlier work by Knight & Sturrock (1977), Emslie (1980), Diakonov & Somov (1988), and Litvinenko & Somov (1991), I have derived analytical and semi-analytical results for the nonthermal electron distribution function and the self-consistent electric field strength in the presence of a steady-state return current. I review these results, presented previously at the 2009 SPD Meeting in Boulder, CO, and compare them and computed X-ray spectra with numerical results obtained by Zharkova & Gordovskii (2005, 2006). The physical significance of similarities and differences in the results is emphasized. This work is supported by NASA's Heliophysics Guest Investigator Program and the RHESSI Project.
NASA Technical Reports Server (NTRS)
Liu, Shih-Ching
1994-01-01
The goal of this research was to determine kinematic parameters of the lower limbs of a subject pedaling a bicycle. An existing measurement system was used as the basis to develop the model to determine position and acceleration of the limbs. The system consists of an ergometer instrumented to provide position of the pedal (foot), accelerometers to be attached to the lower limbs to measure accelerations, a recorder used for filtering, and a computer instrumented with an A/D board and a decoder board. The system is designed to read and record data from accelerometers and encoders. Software has been developed for data collection, analysis and presentation. Based on the measurement system, a two dimensional analytical model has been developed to determine configuration (position, orientation) and kinematics (velocities, accelerations). The model has been implemented in software and verified by simulation. An error analysis to determine the system's accuracy shows that the expected error is well within the specifications of practical applications. When the physical hardware is completed, NASA researchers hope to use the system developed to determine forces exerted by muscles and forces at articulations. This data will be useful in the development of countermeasures to minimize bone loss experienced by astronauts in microgravity conditions.
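As a purely illustrative companion to the planar kinematic model described above, the following Python sketch derives pedal (foot) position, velocity, and acceleration from a crank-angle history, differentiating the sampled positions numerically. The crank length, cadence, and sampling rate are hypothetical values, not taken from the instrumented ergometer or the measurement system described in the abstract.

import numpy as np

# Illustrative planar pedal kinematics from a measured crank angle.
# Crank length and sample values below are hypothetical assumptions.
def pedal_kinematics(theta, dt, crank_length=0.17):
    """Return pedal position, velocity, and acceleration from crank-angle samples."""
    # Pedal position on a circle of radius crank_length (meters)
    x = crank_length * np.cos(theta)
    y = crank_length * np.sin(theta)
    pos = np.stack([x, y], axis=1)
    # Finite-difference velocity and acceleration from the sampled positions
    vel = np.gradient(pos, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    return pos, vel, acc

if __name__ == "__main__":
    dt = 0.01                          # assumed 100 Hz sampling
    t = np.arange(0.0, 2.0, dt)
    theta = 2 * np.pi * 1.5 * t        # assumed constant 1.5 rev/s cadence
    pos, vel, acc = pedal_kinematics(theta, dt)
    print(pos[0], vel[0], acc[0])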
Kishimoto, M; Yoshida, T; Hayasaka, T; Mori, D; Imai, Y; Matsuki, N; Ishikawa, T; Yamaguchi, T
2009-01-01
An effective way of preventing injuries and diseases among the elderly is to monitor their daily lives. In this regard, we propose the use of a "Hyper Hospital Network", which is an information support system for elderly people and patients. In the current study, we developed a wearable system for monitoring electromyography (EMG) and acceleration under the Hyper Hospital Network plan. The current system is an upgraded version of our previous system for gait analysis (Yoshida et al. [13], Telemedicine and e-Health 13 703-714), and lets us monitor decreases in exercise and the presence of a hemiplegic gait more accurately. To clarify the capabilities and reliability of the system, we performed three experimental evaluations: one to verify the performance of the wearable system, a second to detect a hemiplegic gait, and a third to monitor EMG and accelerations simultaneously. Our system successfully detected a lack of exercise by monitoring the iEMG in healthy volunteers. Moreover, by using EMG and acceleration signals simultaneously, the reliability of the Hampering Index (HI) for detecting hemiplegic walking was improved significantly. The present study provides useful knowledge for the development of a wearable computer designed to monitor the physical condition of older persons and patients.
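For readers unfamiliar with the integrated-EMG (iEMG) measure mentioned above, the short Python sketch below computes one common form of it, the time integral of the rectified, zero-mean EMG signal. The signal and sampling rate are synthetic placeholders; this is not the authors' wearable-system pipeline, and their Hampering Index is not reproduced here.

import numpy as np

# Minimal iEMG estimate from a raw EMG trace (synthetic data, illustrative only).
def iemg(raw_emg, fs):
    """Integrate the rectified, zero-mean EMG signal over its duration."""
    centered = raw_emg - np.mean(raw_emg)     # remove DC offset
    rectified = np.abs(centered)              # full-wave rectification
    return np.trapz(rectified, dx=1.0 / fs)   # numerical integration (V*s)

if __name__ == "__main__":
    fs = 1000.0                               # assumed 1 kHz sampling
    t = np.arange(0.0, 5.0, 1.0 / fs)
    synthetic_emg = 0.1 * np.random.randn(t.size)   # placeholder signal
    print("iEMG:", iemg(synthetic_emg, fs))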
A novel platform to study magnetized high-velocity collisionless shocks
Higginson, D. P.; Korneev, Ph; Béard, J.; ...
2014-12-13
An experimental platform to study the interaction of two colliding high-velocity (0.01–0.2c; 0.05–20 MeV) proton plasmas in a high-strength (20 T) magnetic field is introduced. This platform aims to study the collision of magnetized plasmas accelerated via the Target-Normal-Sheath-Acceleration mechanism and initially separated by distances of a few hundred microns. The plasmas are accelerated from solid targets positioned inside a few-cubic-millimeter cavity located within a Helmholtz coil that provides up to 20 T magnetic fields. Various parameters of the plasmas at their interaction location are estimated. These show an interaction that is highly non-collisional, and that becomes more and more dominated by the magnetic fields as time progresses (from 5 to 60 ps). Particle-in-cell simulations are used to reproduce the initial acceleration of the plasma both via simulations including the laser interaction and via simulations that start with preheated electrons (to save dramatically on computational expense). The benchmarking of such simulations with the experiment and with each other will be used to understand the physical interaction when a magnetic field is applied. In conclusion, the experimental density profile of the interacting plasmas is shown for the case without an applied magnetic field, demonstrating that without an applied field the development of high-velocity shocks, as a result of particle-to-particle collisions, is not achievable in the configuration considered.
The Science Training Program for Young Italian Physicists and Engineers at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barzi, Emanuela; Bellettini, Giorgio; Donati, Simone
2015-03-12
Since 1984 Fermilab has been hosting a two-month summer training program for selected undergraduate and graduate Italian students in physics and engineering. Building on the traditional close collaboration between the Italian National Institute of Nuclear Physics (INFN) and Fermilab, the program is supported by INFN, by the DOE and by the Scuola Superiore di Sant'Anna of Pisa (SSSA), and is run by the Cultural Association of Italians at Fermilab (CAIF). This year the University of Pisa has qualified it as a “University of Pisa Summer School” and will grant successful students European Supplementary Credits. Physics students join the Fermilab HEP research groups, while engineers join the Particle Physics, Accelerator, Technical, and Computing Divisions. Some students have also been sent to other U.S. laboratories and universities for special trainings. The programs cover topics of great interest for science and for social applications in general, such as advanced computing, distributed data analysis, nanoelectronics, particle detectors for earth and space experiments, high-precision mechanics, and applied superconductivity. Over the years, more than 350 students have been trained and are now employed in the most diverse fields in Italy, Europe, and the U.S. In addition, the existing Laurea Program in the Fermilab Technical Division was extended to the whole laboratory, with presently two students in Master’s thesis programs on neutrino physics and detectors in the Neutrino Division. Finally, a joint venture with the Italian Scientists and Scholars North-America Foundation (ISSNAF) provided Fermilab this year with four professional engineers free of charge. More details on all of the above can be found below.
Ravignani, Andrea; Olivera, Vicente Matellán; Gingras, Bruno; Hofer, Riccardo; Hernández, Carlos Rodríguez; Sonnweber, Ruth-Sophie; Fitch, W. Tecumseh
2013-01-01
The possibility of achieving experimentally controlled, non-vocal acoustic production in non-human primates is a key step toward enabling the testing of a number of hypotheses on primate behavior and cognition. However, no device or solution is currently available, with the use of sensors in non-human animals being almost exclusively devoted to applications in the food industry and animal surveillance. Specifically, no device exists which simultaneously allows: (i) spontaneous production of sound or music by non-human animals via object manipulation, (ii) systematic recording of data sensed from these movements, and (iii) the possibility to alter the acoustic feedback properties of the object using remote control. We present two prototypes we developed for application with chimpanzees (Pan troglodytes) which, while fulfilling the aforementioned requirements, allow sounds to be arbitrarily associated with physical object movements. The prototypes differ in sensing technology, cost, intended use and construction requirements. One prototype uses four piezoelectric elements embedded between layers of Plexiglas and foam. Strain data is sent to a computer running Python through an Arduino board. A second prototype consists of a modified Wii Remote contained in a gum toy. Acceleration data is sent via Bluetooth to a computer running Max/MSP. We successfully pilot-tested the first device with a group of chimpanzees. We foresee using these devices for a range of cognitive experiments. PMID:23912427
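To make the Arduino-to-Python data path concrete, here is a minimal Python sketch that logs sensor samples streamed over a USB serial link using the pyserial package. The port name, baud rate, and comma-separated line format are assumptions for illustration only; the prototype's actual firmware and message framing are not described in the abstract.

import serial  # pyserial; assumed to be installed

# Minimal logger for sensor samples streamed by an Arduino over USB serial.
def log_strain_samples(port="/dev/ttyACM0", baud=115200, n_samples=1000):
    samples = []
    with serial.Serial(port, baud, timeout=1.0) as link:
        while len(samples) < n_samples:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            try:
                # assume one comma-separated reading per piezo element per line
                samples.append([float(v) for v in line.split(",")])
            except ValueError:
                continue  # skip malformed lines
    return samples

if __name__ == "__main__":
    data = log_strain_samples(n_samples=100)
    print("collected", len(data), "samples")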
GPU-based Branchless Distance-Driven Projection and Backprojection
Liu, Rui; Fu, Lin; De Man, Bruno; Yu, Hengyong
2017-01-01
Projection and backprojection operations are essential in a variety of image reconstruction and physical correction algorithms in CT. The distance-driven (DD) projection and backprojection are widely used for their highly sequential memory access pattern and low arithmetic cost. However, a typical DD implementation has an inner loop that adjusts the calculation depending on the relative position between voxel and detector cell boundaries. The irregularity of the branch behavior makes it inefficient to implement on massively parallel computing devices such as graphics processing units (GPUs). Such irregular branch behaviors can be eliminated by factorizing the DD operation into three branchless steps: integration, linear interpolation, and differentiation, all of which are highly amenable to massive vectorization. In this paper, we implement and evaluate a highly parallel branchless DD algorithm for 3D cone beam CT. The algorithm utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed. The developed branchless DD algorithm achieved 137-fold speedup for forward projection and 188-fold speedup for backprojection relative to a single-thread CPU implementation. Compared with a state-of-the-art 32-thread CPU implementation, the proposed branchless DD achieved 8-fold acceleration for forward projection and 10-fold acceleration for backprojection. The GPU-based branchless DD method was evaluated in iterative reconstruction algorithms with both simulated and real datasets. It produced images visually identical to those of the CPU reference algorithm. PMID:29333480
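To illustrate the integration/interpolation/differentiation factorization described above, the Python sketch below applies it to a 1D rebinning problem: branchy per-boundary overlap bookkeeping is replaced by a cumulative sum, an interpolation at destination-cell boundaries, and a difference. This is only a schematic of the idea, not the paper's 3D cone-beam GPU implementation; the bin edges and values are made up for illustration.

import numpy as np

# Schematic 1D "branchless" redistribution of bin values onto new bins.
def branchless_resample(values, src_edges, dst_edges):
    """Redistribute bin 'values' defined on src_edges onto bins given by dst_edges."""
    # Step 1: integration -> cumulative sum evaluated at source boundaries
    cumulative = np.concatenate(([0.0], np.cumsum(values)))
    # Step 2: linear interpolation of the cumulative signal at destination edges
    cumulative_at_dst = np.interp(dst_edges, src_edges, cumulative)
    # Step 3: differentiation -> per-destination-bin contributions
    return np.diff(cumulative_at_dst)

if __name__ == "__main__":
    src_edges = np.linspace(0.0, 1.0, 9)   # 8 source bins (hypothetical)
    dst_edges = np.linspace(0.0, 1.0, 6)   # 5 destination bins (hypothetical)
    values = np.arange(8, dtype=float)
    out = branchless_resample(values, src_edges, dst_edges)
    print(out, out.sum(), values.sum())    # total mass is preserved

Because every step is a simple array operation with no data-dependent branching, the same structure vectorizes cleanly, which is what makes the GPU mapping attractive.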
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sefkow, Adam B.; Bennett, Guy R.
2010-09-01
Under the auspices of the Science of Extreme Environments LDRD program, a <2 year theoretical- and computational-physics study was performed (LDRD Project 130805) by Guy R. Bennett (formerly in Center-01600) and Adam B. Sefkow (Center-01600) to investigate novel target designs by which a short-pulse, PW-class beam could create a brighter Kα x-ray source than by simple, direct laser irradiation of a flat foil, i.e., Direct-Foil-Irradiation (DFI). The computational studies - which are still ongoing at this writing - were performed primarily on the RedStorm supercomputer at Sandia National Laboratories' Albuquerque site. The motivation for a higher-efficiency Kα emitter was very clear: as the backlighter flux for any x-ray imaging technique on the Z accelerator increases, the signal-to-noise and signal-to-background ratios improve. This ultimately allows the imaging system to reach its full quantitative potential as a diagnostic. Depending on the particular application/experiment this would imply, for example, that the system would have reached its full design spatial resolution and thus the capability to see features that might otherwise be indiscernible with a traditional DFI-like x-ray source. This LDRD began in FY09 and ended in FY10.
Experimental and numerical investigation of reactive shock-accelerated flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonazza, Riccardo
2016-12-20
The main goal of this program was to establish a qualitative and quantitative connection, based on the appropriate dimensionless parameters and scaling laws, between shock-induced distortion of astrophysical plasma density clumps and their earthbound analog in a shock tube. These objectives were pursued by carrying out laboratory experiments and numerical simulations to study the evolution of two gas bubbles accelerated by planar shock waves and compare the results to available astrophysical observations. The experiments were carried out in a vertical, downward-firing shock tube, 9.2 m long, with a square internal cross section (25×25 cm²). Specific goals were to quantify the effect of the shock strength (Mach number, M) and the density contrast between the bubble gas and its surroundings (usually quantified by the Atwood number, i.e. the dimensionless density difference between the two gases) upon some of the most important flow features (e.g. macroscopic properties; turbulence and mixing rates). The computational component of the work performed through this program was aimed at (a) studying the physics of multi-phase compressible flows in the context of astrophysical plasmas and (b) providing a computational connection between laboratory experiments and the astrophysical application of shock-bubble interactions. Throughout the study, we used the FLASH4.2 code to run hydrodynamical and magnetohydrodynamical simulations of shock-bubble interactions on an adaptive mesh.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergmann, Ryan M.; Rowland, Kelly L.
2017-04-12
WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed at UC Berkeley to efficiently execute on NVIDIA graphics processing unit (GPU) platforms. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, that very few physical and geometrical simplifications are applied. WARP is able to calculate multiplication factors, neutron flux distributions (in both space and energy), and fission source distributions for time-independent neutron transport problems. It can run in either criticality or fixed source mode, but fixed source mode is currently not robust, optimized, or maintained in the newest version. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. The goal of developing WARP is to investigate algorithms that can grow into a full-featured, continuous energy, Monte Carlo neutron transport code that is accelerated by running on GPUs. The crux of the effort is to make Monte Carlo calculations faster while producing accurate results. Modern supercomputers are commonly being built with GPU coprocessor cards in their nodes to increase their computational efficiency and performance. GPUs execute efficiently on data-parallel problems, but most CPU codes, including those for Monte Carlo neutral particle transport, are predominantly task-parallel. WARP uses a data-parallel neutron transport algorithm to take advantage of the computing power GPUs offer.
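The contrast between task-parallel and data-parallel transport can be made concrete with a toy example: instead of following one neutron at a time, a whole batch of particles is advanced in one vectorized operation. The Python sketch below samples exponential free-flight distances for an entire batch at once. It is purely illustrative of the data-parallel style; the cross section, geometry, and physics are made up and have nothing to do with WARP's actual algorithms.

import numpy as np

# Toy data-parallel Monte Carlo free-flight step for a batch of neutrons.
def free_flight_step(positions, directions, sigma_total, rng):
    """Advance every neutron by a sampled distance; returns new positions."""
    n = positions.shape[0]
    # Exponentially distributed path lengths, one per particle, in one call
    distances = rng.exponential(scale=1.0 / sigma_total, size=n)
    return positions + directions * distances[:, None]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 100_000
    positions = np.zeros((n, 3))
    directions = rng.normal(size=(n, 3))               # isotropic directions
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    sigma_total = 0.5                                   # assumed macroscopic cross section, 1/cm
    positions = free_flight_step(positions, directions, sigma_total, rng)
    print("mean path length:", np.mean(np.linalg.norm(positions, axis=1)))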
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habte, A.; Sengupta, M.; Wilcox, S.
This report was part of a multiyear collaboration with the University of Wisconsin and the National Oceanic and Atmospheric Administration (NOAA) to produce high-quality, satellite-based solar resource datasets for the United States. High-quality solar resource assessment accelerates technology deployment by making a positive impact on decision making and reducing uncertainty in investment decisions. Satellite-based solar resource datasets are used as a primary source in solar resource assessment, mainly because satellites provide larger areal coverage and longer periods of record than ground-based measurements. With the advent of newer satellites with increased information content and faster computers that can process increasingly higher data volumes, methods that were considered too computationally intensive are now feasible. One class of sophisticated methods for retrieving solar resource information from satellites is a two-step, physics-based method that computes cloud properties and uses the information in a radiative transfer model to compute solar radiation. This method has the advantage of adding additional information as satellites with newer channels come on board. This report evaluates the two-step method developed at NOAA and adapted for solar resource assessment for renewable energy, with the goal of identifying areas that can be improved in the future.
GPU-based acceleration of computations in nonlinear finite element deformation analysis.
Mafi, Ramin; Sirouspour, Shahin
2014-03-01
The physics of deformation for biological soft tissue is best described by nonlinear continuum mechanics-based models, which can then be discretized by the FEM for a numerical solution. However, the computational complexity of such models has limited their use in applications requiring real-time or fast response. In this work, we propose a graphics processing unit-based implementation of the FEM using implicit time integration for dynamic nonlinear deformation analysis. This is the most general formulation of the deformation analysis: it is valid for large deformations and strains and can account for material nonlinearities. The data-parallel nature and the intense arithmetic computations of nonlinear FEM equations make them particularly suitable for implementation on a parallel computing platform such as a graphics processing unit. In this work, we present and compare two different designs based on the matrix-free and conventional preconditioned conjugate gradients algorithms for solving the FEM equations arising in deformation analysis. The speedup achieved with the proposed parallel implementations of the algorithms will be instrumental in the development of advanced surgical simulators and medical image registration methods involving soft-tissue deformation. Copyright © 2013 John Wiley & Sons, Ltd.
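For readers unfamiliar with the matrix-free variant mentioned above, the Python sketch below shows its basic structure: the conjugate-gradient solver receives the system operator as a function (a mat-vec) rather than an assembled matrix, so only operator applications are needed. The toy operator here is a 1D Laplacian, not a nonlinear FEM stiffness operator, and the sketch omits preconditioning; it is a minimal illustration under those assumptions, not the paper's GPU implementation.

import numpy as np

# Minimal matrix-free conjugate gradients: apply_A is a function, not a matrix.
def conjugate_gradient(apply_A, b, tol=1e-8, max_iter=500):
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

def laplacian_1d(v):
    """Matrix-free application of a 1D Dirichlet Laplacian (symmetric positive definite)."""
    out = 2.0 * v
    out[1:] -= v[:-1]
    out[:-1] -= v[1:]
    return out

if __name__ == "__main__":
    b = np.ones(100)
    x = conjugate_gradient(laplacian_1d, b)
    print("residual norm:", np.linalg.norm(laplacian_1d(x) - b))

The appeal on a GPU is that the expensive step, applying the operator, can be evaluated element by element without ever storing a global matrix.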
Graphics processing unit (GPU)-based computation of heat conduction in thermally anisotropic solids
NASA Astrophysics Data System (ADS)
Nahas, C. A.; Balasubramaniam, Krishnan; Rajagopal, Prabhu
2013-01-01
Numerical modeling of anisotropic media is a computationally intensive task since it brings additional complexity to the field problem, in that the physical properties are different in different directions. Largely used in the aerospace industry because of their lightweight nature, composite materials are a very good example of thermally anisotropic media. With advancements in video gaming technology, parallel processors are much cheaper today and accessibility to higher-end graphical processing devices has increased dramatically over the past couple of years. Since these massively parallel GPUs are very good at handling floating-point arithmetic, they provide a new platform for engineers and scientists to accelerate their numerical models using commodity hardware. In this paper we implement a parallel finite difference model of thermal diffusion through anisotropic media using NVIDIA CUDA (Compute Unified Device Architecture). We use the NVIDIA GeForce GTX 560 Ti as our primary computing device, which consists of 384 CUDA cores clocked at 1645 MHz, with a standard desktop PC as the host platform. We compare the results against a standard CPU implementation for accuracy and speed, and draw implications for simulation using the GPU paradigm.
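The finite-difference scheme underlying such a model can be sketched compactly. The Python example below performs an explicit time step of 2D heat conduction with a diagonal (orthotropic) diffusivity tensor, i.e. different values along x and y. Grid spacing and diffusivities are assumed for illustration, and the NumPy slicing stands in for the paper's CUDA kernel; the key point is that each node's update is independent, which is exactly what maps to one GPU thread per node.

import numpy as np

# Explicit finite-difference step for 2D orthotropic heat conduction (illustrative values).
def diffuse_step(T, kx, ky, dx, dy, dt):
    Tn = T.copy()
    Tn[1:-1, 1:-1] += dt * (
        kx * (T[1:-1, 2:] - 2 * T[1:-1, 1:-1] + T[1:-1, :-2]) / dx**2
        + ky * (T[2:, 1:-1] - 2 * T[1:-1, 1:-1] + T[:-2, 1:-1]) / dy**2
    )
    return Tn  # fixed (Dirichlet) boundary values are left unchanged

if __name__ == "__main__":
    nx = ny = 128
    dx = dy = 1e-3                          # assumed 1 mm grid spacing
    kx, ky = 1.0e-4, 2.5e-5                 # assumed orthotropic diffusivities, m^2/s
    dt = 0.2 * min(dx**2 / kx, dy**2 / ky)  # conservative explicit stability limit
    T = np.zeros((ny, nx))
    T[ny // 2, nx // 2] = 100.0             # initial hot spot
    for _ in range(500):
        T = diffuse_step(T, kx, ky, dx, dy, dt)
    print("peak temperature:", T.max())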
A heterogeneous system based on GPU and multi-core CPU for real-time fluid and rigid body simulation
NASA Astrophysics Data System (ADS)
da Silva Junior, José Ricardo; Gonzalez Clua, Esteban W.; Montenegro, Anselmo; Lage, Marcos; Dreux, Marcelo de Andrade; Joselli, Mark; Pagliosa, Paulo A.; Kuryla, Christine Lucille
2012-03-01
Computational fluid dynamics in simulation has become an important field not only for physics and engineering areas but also for simulation, computer graphics, virtual reality and even video game development. Many efficient models have been developed over the years, but when many contact interactions must be processed, most models present difficulties or cannot achieve real-time results when executed. The advent of parallel computing has enabled the development of many strategies for accelerating the simulations. Our work proposes a new system which uses some successful algorithms already proposed, as well as a data structure organisation based on a heterogeneous architecture using CPUs and GPUs, in order to process the simulation of the interaction of fluids and rigid bodies. This successfully results in a two-way interaction between them and their surrounding objects. As far as we know, this is the first work that presents a computational collaborative environment which makes use of two different paradigms of hardware architecture for this specific kind of problem. Since our method achieves real-time results, it is suitable for virtual reality, simulation and video game fluid simulation problems.
GPU acceleration of Dock6's Amber scoring computation.
Yang, Hailong; Zhou, Qiongqiong; Li, Bo; Wang, Yongjian; Luan, Zhongzhi; Qian, Depei; Li, Hanlu
2010-01-01
Addressing the problem of virtual screening is a long-term goal in the drug discovery field which, if properly solved, can significantly shorten the R&D cycle of new drugs. The scoring functionality that evaluates the fitness of the docking result is one of the major challenges in virtual screening. In general, scoring functionality in docking requires a large amount of floating-point calculations, which usually takes several weeks or even months to finish. This time-consuming procedure is unacceptable, especially when a highly fatal and infectious virus such as SARS or H1N1 arises, which forces the scoring task to be done in a limited time. This paper presents how to leverage the computational power of the GPU to accelerate Dock6's (http://dock.compbio.ucsf.edu/DOCK_6/) Amber (J. Comput. Chem. 25: 1157-1174, 2004) scoring with the NVIDIA CUDA (NVIDIA Corporation Technical Staff, Compute Unified Device Architecture - Programming Guide, NVIDIA Corporation, 2008) platform. We also discuss many factors that greatly influence the performance after porting the Amber scoring to the GPU, including thread management, data transfer, and divergence hiding. Our experiments show that the GPU-accelerated Amber scoring achieves a 6.5× speedup with respect to the original version running on an AMD dual-core CPU for the same problem size. This acceleration makes the Amber scoring more competitive and efficient for large-scale virtual screening problems.
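The floating-point-heavy, data-parallel character of such a score can be illustrated with a generic all-pairs nonbonded energy. The Python sketch below evaluates a 12-6 Lennard-Jones-like sum between ligand and receptor atoms in one vectorized sweep. The functional form, parameters, and atom counts are placeholders chosen for illustration only; this is not Dock6's Amber scoring function or its parameter sets.

import numpy as np

# Schematic all-pairs nonbonded score between ligand and receptor atoms.
def pairwise_score(ligand_xyz, receptor_xyz, epsilon=0.1, sigma=3.4):
    # Distance matrix between every ligand atom and every receptor atom
    diff = ligand_xyz[:, None, :] - receptor_xyz[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    r = np.maximum(r, 1e-6)                   # avoid division by zero
    s6 = (sigma / r) ** 6
    return np.sum(4.0 * epsilon * (s6**2 - s6))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ligand = rng.uniform(0, 20, size=(50, 3))       # 50 ligand atoms (synthetic)
    receptor = rng.uniform(0, 20, size=(2000, 3))   # 2000 receptor atoms (synthetic)
    print("score:", pairwise_score(ligand, receptor))

Because every ligand-receptor pair is independent, this kind of sum is naturally split across GPU threads, which is why thread management and divergence are the factors the paper highlights.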
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perlmutter, Saul
2012-01-13
The Department of Energy (DOE) hosted an event Friday, January 13, with 2011 Physics Nobel Laureate Saul Perlmutter. Dr. Perlmutter, a physicist at the Department’s Lawrence Berkeley National Laboratory and a professor of physics at the University of California at Berkeley, won the 2011 Nobel Prize in Physics “for the discovery of the accelerating expansion of the Universe through observations of distant supernovae.” DOE’s Office of Science has supported Dr. Perlmutter’s research at Berkeley Lab since 1983. After the introduction from Secretary of Energy Steven Chu, Dr. Perlmutter delivered a presentation entitled "Supernovae, Dark Energy and the Accelerating Universe: How DOE Helped to Win (yet another) Nobel Prize." [Copied with editing from DOE Media Advisory issued January 10th, found at http://energy.gov/articles/energy-department-host-event-2011-physics-nobel-laureate-saul-perlmutter]
Calude, Cristian S; Păun, Gheorghe
2004-11-01
Are there 'biologically computing agents' capable of computing Turing-uncomputable functions? It is perhaps tempting to dismiss this question with a negative answer. Quite the opposite: for the first time in the literature on molecular computing, we contend that the answer is not theoretically negative. Our results will be formulated in the language of membrane computing (P systems). Some mathematical results presented here are interesting in themselves. In contrast with most speed-up methods, which are based on non-determinism, our results rest upon some universality results proved for deterministic P systems. These results will be used for building "accelerated P systems". In contrast with the case of Turing machines, acceleration is a part of the hardware (not a quality of the environment) and it is realised either by decreasing the size of "reactors" or by speeding up the communication channels. Consequently, two acceleration postulates of biological inspiration are introduced; each of them poses specific questions to biology. Finally, in a more speculative part of the paper, we deal with Turing non-computable activity of the brain and possible forms of (extraterrestrial) intelligence.
Acceleration of FDTD mode solver by high-performance computing techniques.
Han, Lin; Xi, Yanping; Huang, Wei-Ping
2010-06-21
A two-dimensional (2D) compact finite-difference time-domain (FDTD) mode solver is developed based on wave equation formalism in combination with the matrix pencil method (MPM). The method is validated for calculation of both real guided and complex leaky modes of typical optical waveguides against the benchmark finite-difference (FD) eigenmode solver. By taking advantage of the inherent parallel nature of the FDTD algorithm, the mode solver is implemented on graphics processing units (GPUs) using the compute unified device architecture (CUDA). It is demonstrated that the high-performance computing technique leads to significant acceleration of the FDTD mode solver, with more than a 30-fold improvement in computational efficiency in comparison with the conventional FDTD mode solver running on the CPU of a standard desktop computer. The computational efficiency of the accelerated FDTD method is of the same order of magnitude as that of the standard finite-difference eigenmode solver, yet it requires much less memory (e.g., less than 10%). Therefore, the new method may serve as an efficient, accurate and robust tool for mode calculation of optical waveguides even when conventional eigenvalue mode solvers are no longer applicable due to memory limitations.
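The "inherent parallel nature" of FDTD comes from the fact that every grid cell is updated from its neighbors with the same local stencil at each time step. The Python sketch below shows this structure for a toy 1D leapfrog update of the scalar wave equation; it is not the paper's 2D compact mode solver, nor does it include the matrix-pencil post-processing, and the grid size, step count, and Courant number are assumptions for illustration.

import numpy as np

# Toy 1D leapfrog (FDTD-style) update for the scalar wave equation.
def fdtd_1d(n=400, steps=800, courant=0.5):
    u_prev = np.zeros(n)
    u = np.zeros(n)
    u[n // 2] = 1.0                            # assumed initial pulse
    c2 = courant**2
    for _ in range(steps):
        u_next = np.zeros(n)
        # every interior cell is updated independently from its neighbors
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + c2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next                  # fixed (PEC-like) boundaries stay zero
    return u

if __name__ == "__main__":
    field = fdtd_1d()
    print("max |u| after propagation:", np.abs(field).max())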
Hansen, Kirk; Dau, Nathan; Feist, Florian; Deck, Caroline; Willinger, Rémy; Madey, Steven M.; Bottlang, Michael
2013-01-01
Angular acceleration of the head is a known cause of traumatic brain injury (TBI), but contemporary bicycle helmets lack dedicated mechanisms to mitigate angular acceleration. A novel Angular Impact Mitigation (AIM) system for bicycle helmets has been developed that employs an elastically suspended aluminum honeycomb liner to absorb linear acceleration in normal impacts as well as angular acceleration in oblique impacts. This study tested bicycle helmets with and without AIM technology to comparatively assess impact mitigation. Normal impact tests were performed to measure linear head acceleration. Oblique impact tests were performed to measure angular head acceleration and neck loading. Furthermore, acceleration histories of oblique impacts were analyzed in a computational head model to predict the resulting risk of TBI in the form of concussion and diffuse axonal injury (DAI). Compared to standard helmets, AIM helmets resulted in a 14% reduction in peak linear acceleration (p < 0.001), a 34% reduction in peak angular acceleration (p < 0.001), and a 22% to 32% reduction in neck loading (p < 0.001). Computational results predicted that AIM helmets reduced the risk of concussion and DAI by 27% and 44%, respectively. In conclusion, these results demonstrated that AIM technology could effectively improve impact mitigation compared to a contemporary expanded polystyrene-based bicycle helmet, and may enhance prevention of bicycle-related TBI. Further research is required. PMID:23770518
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turinsky, Paul J., E-mail: turinsky@ncsu.edu; Kothe, Douglas B., E-mail: kothe@ornl.gov
The Consortium for the Advanced Simulation of Light Water Reactors (CASL), the first Energy Innovation Hub of the Department of Energy, was established in 2010 with the goal of providing modeling and simulation (M&S) capabilities that support and accelerate the improvement of nuclear energy's economic competitiveness and the reduction of spent nuclear fuel volume per unit energy, all while assuring nuclear safety. Accomplishing this requires advances in M&S capabilities in radiation transport, thermal-hydraulics, fuel performance and corrosion chemistry. To focus CASL's R&D, industry challenge problems have been defined, which equate to long-standing issues of the nuclear power industry that M&S can assist in addressing. To date CASL has developed a multi-physics “core simulator” based upon pin-resolved radiation transport and subchannel (within fuel assembly) thermal-hydraulics, capitalizing on the capabilities of high performance computing. CASL's fuel performance M&S capability can also be optionally integrated into the core simulator, yielding a coupled multi-physics capability with untapped predictive potential. Material models have been developed to enhance predictive capabilities of fuel clad creep and growth, along with deeper understanding of zirconium alloy clad oxidation and hydrogen pickup. Understanding of corrosion chemistry (e.g., CRUD formation) has evolved at all scales: micro, meso and macro. CFD R&D has focused on improvement in closure models for subcooled boiling and bubbly flow, and the formulation of robust numerical solution algorithms. For multiphysics integration, several iterative acceleration methods have been assessed, illuminating areas where further research is needed. Finally, uncertainty quantification and data assimilation techniques, based upon sampling approaches, have been made more feasible for practicing nuclear engineers via R&D on dimensional reduction and biased sampling. Industry adoption of CASL's evolving M&S capabilities, which is in progress, will assist in addressing long-standing and future operational and safety challenges of the nuclear industry. Highlights: • The complexity of physics-based modeling of light water reactor cores is being addressed. • A capability has been developed to help address problems that have challenged the nuclear power industry. • Simulation capabilities that take advantage of high performance computing have been developed.
Accelerated Reader. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2009
2009-01-01
"Accelerated Reader" is a computer-based reading management system designed to complement an existing classroom literacy program for grades pre-K-12. It is designed to increase the amount of time students spend reading independently. Students choose reading-level appropriate books or short stories for which Accelerated Reader tests are…
Computer generated hologram from point cloud using graphics processor.
Chen, Rick H-Y; Wilkinson, Timothy D
2009-12-20
Computer generated holography is an extremely demanding and complex task when it comes to providing realistic reconstructions with full parallax, occlusion, and shadowing. We present an algorithm designed for data-parallel computing on modern graphics processing units to alleviate the computational burden. We apply Gaussian interpolation to create a continuous surface representation from discrete input object points. The algorithm maintains a potential occluder list for each individual hologram plane sample to keep the number of visibility tests to a minimum. We experimented with two approximations that simplify and accelerate occlusion computation. It is observed that letting several neighboring hologram plane samples share visibility information on object points leads to significantly faster computation without causing noticeable artifacts in the reconstructed images. Computing a reduced sample set via nonuniform sampling is also found to be an effective acceleration technique.
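The core superposition step of a point-cloud hologram can be sketched briefly: each object point contributes a spherical wavefront to every hologram-plane sample, and the contributions are summed. The Python example below shows only this basic accumulation; the paper's Gaussian surface interpolation, occluder lists, and GPU parallelization are not reproduced, and the wavelength, pixel pitch, and point positions are assumed values.

import numpy as np

# Minimal point-source hologram accumulation (illustrative parameters only).
def point_cloud_hologram(points, amplitudes, nx=256, ny=256,
                         pitch=8e-6, wavelength=532e-9):
    k = 2 * np.pi / wavelength
    xs = (np.arange(nx) - nx / 2) * pitch
    ys = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros((ny, nx), dtype=complex)
    for (px, py, pz), amp in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz**2)
        field += amp * np.exp(1j * k * r) / r   # spherical wave from the point
    return field

if __name__ == "__main__":
    pts = [(0.0, 0.0, 0.05), (2e-4, -1e-4, 0.06)]   # two assumed points 5-6 cm from the plane
    holo = point_cloud_hologram(pts, [1.0, 0.8])
    print("hologram amplitude range:", np.abs(holo).min(), np.abs(holo).max())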
On the physics of waves in the solar atmosphere: Wave heating and wind acceleration
NASA Technical Reports Server (NTRS)
Musielak, Z. E.
1994-01-01
This paper presents work performed on the generation and physics of acoustic waves in the solar atmosphere. The investigators have incorporated spatial and temporal turbulent energy spectra in a newly corrected version of the Lighthill-Stein theory of acoustic wave generation in order to calculate the acoustic wave energy fluxes generated in the solar convective zone. The investigators have also revised and improved the treatment of the generation of magnetic flux tube waves, which can carry energy along the tubes far away from the region of their origin, and have calculated the tube wave energy fluxes for the Sun. They also examine the transfer of the wave energy originating in the solar convective zone to the outer atmospheric layers through computation of wave propagation and dissipation in the highly nonhomogeneous solar atmosphere. These waves may efficiently heat the solar atmosphere, and the heating will be especially significant in the chromospheric network. It is also shown that the role played by Alfven waves in solar wind acceleration and coronal hole heating is dominant. The second part of the project concerned investigation of wave propagation in highly inhomogeneous stellar atmospheres using an approach based on an analytic tool developed by Musielak, Fontenla, and Moore. In addition, a new technique based on Dirac equations has been developed to investigate coupling between different MHD waves propagating in stratified stellar atmospheres.