Science.gov

Sample records for accelerator physics code

  1. ACCELERATION PHYSICS CODE WEB REPOSITORY.

    SciTech Connect

    WEI, J.

    2006-06-26

    In the framework of the CARE HHH European Network, we have developed a web-based dynamic accelerator-physics code repository. We describe the design, structure and contents of this repository, illustrate its usage, and discuss our future plans, with emphasis on code benchmarking.

  2. Accelerator Physics Code Web Repository

    SciTech Connect

    Zimmermann, F.; Basset, R.; Bellodi, G.; Benedetto, E.; Dorda, U.; Giovannozzi, M.; Papaphilippou, Y.; Pieloni, T.; Ruggiero, F.; Rumolo, G.; Schmidt, F.; Todesco, E.; Zotter, B.W.; Payet, J.; Bartolini, R.; Farvacque, L.; Sen, T.; Chin, Y.H.; Ohmi, K.; Oide, K.; Furman, M.; /LBL, Berkeley /Oak Ridge /Pohang Accelerator Lab. /SLAC /TRIUMF /Tech-X, Boulder /UC, San Diego /Darmstadt, GSI /Rutherford /Brookhaven

    2006-10-24

    In the framework of the CARE HHH European Network, we have developed a web-based dynamic accelerator-physics code repository. We describe the design, structure and contents of this repository, illustrate its usage, and discuss our future plans, with emphasis on code benchmarking.

  3. Applications of the ARGUS code in accelerator physics

    SciTech Connect

    Petillo, J.J.; Mankofsky, A.; Krueger, W.A.; Kostas, C.; Mondelli, A.A.; Drobot, A.T.

    1993-12-31

    ARGUS is a three-dimensional, electromagnetic, particle-in-cell (PIC) simulation code that is being distributed to U.S. accelerator laboratories in a collaboration between SAIC and the Los Alamos Accelerator Code Group. It uses a modular architecture that allows multiple physics modules to share common utilities for grid and structure input, memory management, disk I/O, and diagnostics. Physics modules are in place for electrostatic and electromagnetic field solutions, frequency-domain (eigenvalue) solutions, time-dependent PIC, and steady-state PIC simulations. All of the modules are implemented with a domain-decomposition architecture that allows large problems to be broken up into pieces that fit in core and that facilitates the adaptation of ARGUS for parallel processing. ARGUS operates on either Cray or workstation platforms, and a MOTIF-based user interface is available for X-windows terminals. Applications of ARGUS in accelerator physics and design are described in this paper.

  4. Applications of FLUKA Monte Carlo Code for Nuclear and Accelerator Physics

    SciTech Connect

    Battistoni, Giuseppe; Broggi, Francesco; Brugger, Markus; Campanella, Mauro; Carboni, Massimo; Empl, Anton; Fasso, Alberto; Gadioli, Ettore; Cerutti, Francesco; Ferrari, Alfredo; Ferrari, Anna; Lantz, Matthias; Mairani, Andrea; Margiotta, M.; Morone, Christina; Muraro, Silvia; Parodi, Katerina; Patera, Vincenzo; Pelliccioni, Maurizio; Pinsky, Lawrence; Ranft, Johannes; /Siegen U. /CERN /Seibersdorf, Reaktorzentrum /INFN, Milan /Milan U. /SLAC /INFN, Legnaro /INFN, Bologna /Bologna U. /CERN /HITS, Heidelberg /CERN /CERN /Frascati /CERN /CERN /CERN /CERN /NASA, Houston

    2012-04-17

    FLUKA is a general-purpose Monte Carlo code capable of handling all radiation components from thermal energies (for neutrons) or 1 keV (for all other particles) up to cosmic-ray energies, and can be applied in many different fields. Presently the code is maintained on Linux. The validity of the physical models implemented in FLUKA has been benchmarked against a variety of experimental data over a wide energy range, from accelerator data to cosmic-ray showers in the Earth's atmosphere. FLUKA is widely used for studies related both to basic research and to applications in particle accelerators, radiation protection and dosimetry (including the specific issue of radiation damage in space missions), radiobiology (including radiotherapy), and cosmic-ray calculations. After a short description of the main features that make FLUKA valuable for these topics, the present paper summarizes some of the recent applications of the FLUKA Monte Carlo code in nuclear as well as high-energy physics, with particular attention to accelerator-related applications.

  5. Physics models in the MARS15 code for accelerator and space applications.

    SciTech Connect

    Mokhov, N. V.; Gudima, K. K.; Mashnik, S. G.; Rakhno, I. L.; Sierk, A. J.; Striganov, S.

    2004-01-01

    The MARS code system, developed over 30 years, is a set of Monte Carlo programs for detailed simulation of hadronic and electromagnetic cascades in an arbitrary geometry of accelerator, detector and spacecraft components with particle energy ranging from a fraction of an electron volt up to 100 TeV. The new MARS15 (2004) version is described with an emphasis on modeling physics processes. This includes an extended list of elementary particles and arbitrary heavy ions, their interaction cross-sections, inclusive and exclusive nuclear event generators, photo-hadron production, correlated ionization energy loss and multiple Coulomb scattering, nuclide production and residual activation, and radiation damage (DPA). In particular, the details of a new model for leading baryon production and implementation of advanced versions of the Cascade-Exciton Model (CEM03) and the Los Alamos version of the Quark-Gluon String Model (LAQGSM03) are given. The applications that are motivating these developments, needs for better nuclear data, and future physics improvements are described.

  6. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    SciTech Connect

    Courau, T.; Plagne, L.; Ponicot, A.; Sjoden, G.

    2012-07-01

    When dealing with nuclear reactor calculation schemes, the need for three-dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best-estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for the k{sub eff}, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)

  7. COLAcode: COmoving Lagrangian Acceleration code

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin V.

    2016-02-01

    COLAcode is a serial particle mesh-based N-body code illustrating the COLA (COmoving Lagrangian Acceleration) method; it solves for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). It differs from standard N-body codes by trading accuracy at small scales to gain computational speed without sacrificing accuracy at large scales. This is useful for generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing; such catalogs are needed to perform detailed error analysis for ongoing and future surveys of LSS.

  8. GPU Acceleration of the Locally Selfconsistent Multiple Scattering Code for First Principles Calculation of the Ground State and Statistical Physics of Materials

    SciTech Connect

    Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; Rennich, Steven; Rogers, James H

    2016-01-01

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first-principles density functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near-perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first-principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code.

  9. Code comparison for accelerator design and analysis

    SciTech Connect

    Parsa, Z.

    1988-01-01

    We present a comparison between results obtained from standard accelerator physics codes used for the design and analysis of synchrotrons and storage rings, with programs SYNCH, MAD, HARMON, PATRICIA, PATPET, BETA, DIMAD, MARYLIE and RACE-TRACK. In our analysis we have considered five lattices of various sizes with large and small angles, including the AGS Booster (10 degree bend), RHIC (2.24 degree), SXLS, XLS (XUV ring with 45 degree bend) and X-RAY rings. The differences in the integration methods used and the treatment of the fringe fields in these codes could lead to different results. The inclusion of nonlinear (e.g., dipole) terms may be necessary in these calculations, especially for a small ring. 12 refs., 6 figs., 10 tabs.

  10. The Los Alamos accelerator code group

    SciTech Connect

    Krawczyk, F.L.; Billen, J.H.; Ryne, R.D.; Takeda, Harunori; Young, L.M.

    1995-05-01

    The Los Alamos Accelerator Code Group (LAACG) is a national resource for members of the accelerator community who use and/or develop software for the design and analysis of particle accelerators, beam transport systems, light sources, storage rings, and components of these systems. Below the authors describe the LAACG's activities in high performance computing, maintenance and enhancement of POISSON/SUPERFISH and related codes and the dissemination of information on the Internet.

  11. VLHC accelerator physics

    SciTech Connect

    Michael Blaskiewicz et al.

    2001-11-01

    A six-month design study for a future high energy hadron collider was initiated by the Fermilab director in October 2000. The request was to study a staged approach where a large circumference tunnel is built that initially would house a low field ({approx}2 T) collider with center-of-mass energy greater than 30 TeV and a peak (initial) luminosity of 10{sup 34} cm{sup -2}s{sup -1}. The tunnel was to be scoped, however, to support a future upgrade to a center-of-mass energy greater than 150 TeV with a peak luminosity of 2 x 10{sup 34} cm{sup -2} sec{sup -1} using high field ({approx} 10 T) superconducting magnet technology. In a collaboration with Brookhaven National Laboratory and Lawrence Berkeley National Laboratory, a report of the Design Study was produced by Fermilab in June 2001. The Design Study focused on a Stage 1, 20 x 20 TeV collider using a 2-in-1 transmission line magnet and leads to a Stage 2, 87.5 x 87.5 TeV collider using 10 T Nb{sub 3}Sn magnet technology. The article that follows is a compilation of accelerator physics designs and computational results which contributed to the Design Study. Many of the parameters found in this report evolved during the study, and thus slight differences between this text and the Design Study report can be found. The present text, however, presents the major accelerator physics issues of the Very Large Hadron Collider as examined by the Design Study collaboration and provides a basis for discussion and further studies of VLHC accelerator parameters and design philosophies.

  12. Computational Accelerator Physics Working Group Summary

    SciTech Connect

    Cary, John R.; Bohn, Courtlandt L.

    2004-08-27

    The working group on computational accelerator physics at the 11th Advanced Accelerator Concepts Workshop held a series of meetings during the Workshop. Verification, i.e., showing that a computational application correctly solves the assumed model, and validation, i.e., showing that the model correctly describes the modeled system, were discussed for a number of systems. In particular, the predictions of the massively parallel codes, OSIRIS and VORPAL, used for modeling advanced accelerator concepts, were compared and shown to agree, thereby establishing some verification of both codes. In addition, a number of talks on the status and frontiers of computational accelerator physics were presented, to include the modeling of ultrahigh-brightness electron photoinjectors and the physics of beam halo production. Finally, talks discussing computational needs were presented.

  13. Computational Accelerator Physics Working Group Summary

    SciTech Connect

    Cary, John R.; Bohn, Courtlandt L.

    2004-12-07

    The working group on computational accelerator physics at the 11th Advanced Accelerator Concepts Workshop held a series of meetings during the Workshop. Verification, i.e., showing that a computational application correctly solves the assumed model, and validation, i.e., showing that the model correctly describes the modeled system, were discussed for a number of systems. In particular, the predictions of the massively parallel codes, OSIRIS and VORPAL, used for modeling advanced accelerator concepts, were compared and shown to agree, thereby establishing some verification of both codes. In addition, a number of talks on the status and frontiers of computational accelerator physics were presented, to include the modeling of ultrahigh-brightness electron photoinjectors and the physics of beam halo production. Finally, talks discussing computational needs were presented.

  14. Utilizing GPUs to Accelerate Turbomachinery CFD Codes

    NASA Technical Reports Server (NTRS)

    MacCalla, Weylin; Kulkarni, Sameer

    2016-01-01

    GPU computing has established itself as a way to accelerate parallel codes in the high performance computing world. This work focuses on speeding up APNASA, a legacy CFD code used at NASA Glenn Research Center, while also drawing conclusions about the nature of GPU computing and the requirements to make GPGPU worthwhile on legacy codes. Rewriting and restructuring of the source code was avoided to limit the introduction of new bugs. The code was profiled and investigated for parallelization potential, then OpenACC directives were used to indicate parallel parts of the code. The use of OpenACC directives was not able to reduce the runtime of APNASA on either the NVIDIA Tesla discrete graphics card, or the AMD accelerated processing unit. Additionally, it was found that in order to justify the use of GPGPU, the amount of parallel work being done within a kernel would have to greatly exceed the work being done by any one portion of the APNASA code. It was determined that in order for an application like APNASA to be accelerated on the GPU, it should not be modular in nature, and the parallel portions of the code must contain a large portion of the code's computation time.

  15. Code generation of RHIC accelerator device objects

    SciTech Connect

    Olsen, R.H.; Hoff, L.; Clifford, T.

    1995-12-01

    A RHIC Accelerator Device Object is an abstraction which provides a software view of a collection of collider control points known as parameters. A grammar has been defined which allows these parameters, along with code describing methods for acquiring and modifying them, to be specified efficiently in compact definition files. These definition files are processed to produce C++ source code. This source code is compiled to produce an object file which can be loaded into a front end computer. Each loaded object serves as an Accelerator Device Object class definition. The collider will be controlled by applications which set and get the parameters in instances of these classes using a suite of interface routines. Significant features of the grammar are described with details about the generated C++ code.
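
    The pattern described above (compact definition files expanded into C++ accessor code) can be illustrated with a small sketch. The definition format, the AdoBase base class, and the generated method names below are hypothetical stand-ins for illustration, not the actual RHIC grammar or generated code:

```python
# Minimal sketch of definition-file-driven code generation, loosely in the
# spirit described above. The definition format and the emitted C++ are
# hypothetical, not the actual RHIC ADO grammar.

# A hypothetical device definition: parameter name -> C++ type
definition = {
    "device": "QuadrupolePS",
    "parameters": {
        "currentSetpoint": "double",
        "currentReadback": "double",
        "onOffState": "int",
    },
}

def generate_cpp(defn):
    """Emit a C++ class with get/set accessors for each parameter."""
    cls = defn["device"]
    lines = [f"class {cls} : public AdoBase {{", "public:"]
    for name, ctype in defn["parameters"].items():
        member = f"m_{name}"
        lines.append(f"    {ctype} get_{name}() const {{ return {member}; }}")
        lines.append(f"    void set_{name}({ctype} v) {{ {member} = v; }}")
    lines.append("private:")
    for name, ctype in defn["parameters"].items():
        lines.append(f"    {ctype} m_{name};")
    lines.append("};")
    return "\n".join(lines)

print(generate_cpp(definition))
```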

  16. Mapa-an object oriented code with a graphical user interface for accelerator design and analysis

    SciTech Connect

    Shasharina, S.G.; Cary, J.R.

    1997-02-01

    We developed a code for accelerator modeling which will allow users to create and analyze accelerators through a graphical user interface (GUI). The GUI can read an accelerator from files or create it by adding, removing and changing elements. It also creates 4D orbits and lifetime plots. The code includes a set of accelerator elements classes, C++ utility and GUI libraries. Due to the GUI, the code is easy to use and expand. © 1997 American Institute of Physics.

  17. Acceleration of a Monte Carlo radiation transport code

    SciTech Connect

    Hochstedler, R.D.; Smith, L.M.

    1996-03-01

    Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines were examined and nine of them modified to accelerate computations with equivalent numerical output to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. © 1996 American Institute of Physics.

  18. Accelerator physics and modeling: Proceedings

    SciTech Connect

    Parsa, Z.

    1991-01-01

    This report contains papers on the following topics: Physics of high brightness beams; radio frequency beam conditioner for fast-wave free-electron generators of coherent radiation; wake-field and space-charge effects on high brightness beams. Calculations and measured results for BNL-ATF; non-linear orbit theory and accelerator design; general problems of modeling for accelerators; development and application of dispersive soft ferrite models for time-domain simulation; and bunch lengthening in the SLC damping rings.

  19. Accelerator physics and modeling: Proceedings

    SciTech Connect

    Parsa, Z.

    1991-12-31

    This report contains papers on the following topics: Physics of high brightness beams; radio frequency beam conditioner for fast-wave free-electron generators of coherent radiation; wake-field and space-charge effects on high brightness beams. Calculations and measured results for BNL-ATF; non-linear orbit theory and accelerator design; general problems of modeling for accelerators; development and application of dispersive soft ferrite models for time-domain simulation; and bunch lengthening in the SLC damping rings.

  20. Accelerators, Beams And Physical Review Special Topics - Accelerators And Beams

    SciTech Connect

    Siemann, R.H.; /SLAC

    2011-10-24

    Accelerator science and technology have evolved as accelerators became larger and important to a broad range of science. Physical Review Special Topics - Accelerators and Beams was established to serve the accelerator community as a timely, widely circulated, international journal covering the full breadth of accelerators and beams. The history of the journal and the innovations associated with it are reviewed.

  1. Accelerator science in medical physics.

    PubMed

    Peach, K; Wilson, P; Jones, B

    2011-12-01

    The use of cyclotrons and synchrotrons to accelerate charged particles in hospital settings for the purpose of cancer therapy is increasing. Consequently, there is a growing demand from medical physicists, radiographers, physicians and oncologists for articles that explain the basic physical concepts of these technologies. There are unique advantages and disadvantages to all methods of acceleration. Several promising alternative methods of accelerating particles also have to be considered since they will become increasingly available with time; however, there are still many technical problems with these that require solving. This article serves as an introduction to this complex area of physics, and will be of benefit to those engaged in cancer therapy, or who intend to acquire such technologies in the future.

  2. Accelerator science in medical physics

    PubMed Central

    Peach, K; Wilson, P; Jones, B

    2011-01-01

    The use of cyclotrons and synchrotrons to accelerate charged particles in hospital settings for the purpose of cancer therapy is increasing. Consequently, there is a growing demand from medical physicists, radiographers, physicians and oncologists for articles that explain the basic physical concepts of these technologies. There are unique advantages and disadvantages to all methods of acceleration. Several promising alternative methods of accelerating particles also have to be considered since they will become increasingly available with time; however, there are still many technical problems with these that require solving. This article serves as an introduction to this complex area of physics, and will be of benefit to those engaged in cancer therapy, or who intend to acquire such technologies in the future. PMID:22374548

  3. Compensation Techniques in Accelerator Physics

    SciTech Connect

    Sayed, Hisham Kamal

    2011-05-01

    Accelerator physics is one of the most diverse multidisciplinary fields of physics, wherein the dynamics of particle beams is studied. It takes more than the understanding of basic electromagnetic interactions to be able to predict the beam dynamics, and to be able to develop new techniques to produce, maintain, and deliver high-quality beams for different applications. In this work, some basic theory regarding particle beam dynamics in accelerators will be presented. This basic theory, along with state-of-the-art techniques in beam dynamics, will be used in this dissertation to study and solve accelerator physics problems. Two problems involving compensation are studied in the context of the MEIC (Medium Energy Electron Ion Collider) project at Jefferson Laboratory. Several chromaticity (the energy dependence of the particle tune) compensation methods are evaluated numerically and deployed in a figure-eight ring designed for the electrons in the collider. Furthermore, transverse coupling optics have been developed to compensate the coupling introduced by the spin rotators in the MEIC electron ring design.
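
    As a small illustration of one compensation problem mentioned above, chromaticity correction with two sextupole families reduces to solving a 2x2 linear system for the family strengths. The lattice values below are invented for illustration and have no connection to the MEIC design; sign and normalization conventions for the sextupole strength vary between codes.

```python
import numpy as np

# Two-family sextupole chromaticity correction: solve a 2x2 system for the
# integrated sextupole strengths (k2*L) of an SF and an SD family so the
# ring reaches a target chromaticity. All lattice values are illustrative.

xi_natural = np.array([-20.0, -18.0])   # natural (xi_x, xi_y), made-up numbers
xi_target  = np.array([+1.0, +1.0])     # desired chromaticity after correction

n_sf, n_sd = 24, 24                                # sextupoles per family (assumed)
beta_x_sf, beta_y_sf, disp_sf = 18.0, 4.0, 1.2     # optics at SF locations [m]
beta_x_sd, beta_y_sd, disp_sd = 4.0, 18.0, 1.0     # optics at SD locations [m]

# Thin-lens contribution of one sextupole with integrated strength s = k2*L:
#   d(xi_x) = +beta_x * D * s / (4*pi),   d(xi_y) = -beta_y * D * s / (4*pi)
A = (1.0 / (4.0 * np.pi)) * np.array([
    [+n_sf * beta_x_sf * disp_sf, +n_sd * beta_x_sd * disp_sd],
    [-n_sf * beta_y_sf * disp_sf, -n_sd * beta_y_sd * disp_sd],
])

s_sf, s_sd = np.linalg.solve(A, xi_target - xi_natural)
print(f"SF family k2*L = {s_sf:+.4f} 1/m^2, SD family k2*L = {s_sd:+.4f} 1/m^2")
```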

  4. Analytical tools in accelerator physics

    SciTech Connect

    Litvinenko, V.N.

    2010-09-01

    This paper is a subset of my lectures presented in the Accelerator Physics course (USPAS, Santa Rosa, California, January 14-25, 2008). It is based on notes I wrote during the period from 1976 to 1979 in Novosibirsk; only a few copies (in Russian) were distributed to my colleagues at the Novosibirsk Institute of Nuclear Physics. The goal of these notes is a complete description starting from an arbitrary reference orbit, through explicit expressions for the 4-potential and the accelerator Hamiltonian, and finishing with the parameterization in action and angle variables. To a large degree they follow the logic developed in Theory of Cyclic Particle Accelerators by A.A. Kolomensky and A.N. Lebedev [Kolomensky], but go beyond the book in a number of directions. One unusual feature is the use of matrix functions and the Sylvester formula for calculating the matrices of arbitrary elements. Teaching the USPAS course motivated me to translate a significant part of my notes into English. I also included some introductory material following Classical Theory of Fields by L.D. Landau and E.M. Lifshitz [Landau]. A large number of short notes covering various techniques are placed in the Appendices.
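
    The Sylvester formula mentioned above evaluates a function of a matrix with distinct eigenvalues by Lagrange interpolation on its spectrum, f(A) = sum_i f(lambda_i) prod_{j != i} (A - lambda_j I)/(lambda_i - lambda_j). A minimal numerical sketch (not taken from the notes themselves):

```python
import numpy as np
from scipy.linalg import expm

def sylvester_matrix_function(A, f):
    """Evaluate f(A) via Sylvester's formula; assumes distinct eigenvalues."""
    lam = np.linalg.eigvals(A)
    n = A.shape[0]
    result = np.zeros_like(A, dtype=complex)
    for i in range(n):
        P = np.eye(n, dtype=complex)
        for j in range(n):
            if j != i:
                P = P @ (A - lam[j] * np.eye(n)) / (lam[i] - lam[j])
        result += f(lam[i]) * P
    return result

# A small 2x2 example with distinct (complex) eigenvalues; compare the
# matrix exponential from Sylvester's formula against scipy's expm.
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
print(np.allclose(sylvester_matrix_function(A, np.exp), expm(A)))  # True
```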

  5. GeoPhysical Analysis Code

    SciTech Connect

    2011-05-21

    GPAC is a code that integrates open source libraries for element formulations, linear algebra, and I/O with two main LLNL-written components: (i) a set of standard finite element physics solvers for resolving Darcy fluid flow, explicit mechanics, implicit mechanics, and fluid-mediated fracturing, including resolution of contact both implicitly and explicitly, and (ii) an MPI-based parallelization implementation for use on generic HPC distributed memory architectures. The resultant code can be used alone for linearly elastic problems and problems involving hydraulic fracturing, where the mesh topology is dynamically changed. The key application domain is low-rate stimulation and fracture control in subsurface reservoirs (e.g., enhanced geothermal sites and unconventional shale gas stimulation). GPAC also has interfaces to call external libraries for, e.g., material models and equations of state; however, LLNL-developed EOS and material models will not be part of the current release.

  6. GeoPhysical Analysis Code

    2011-05-21

    GPAC is a code that integrates open source libraries for element formulations, linear algebra, and I/O with two main LLNL-written components: (i) a set of standard finite element physics solvers for resolving Darcy fluid flow, explicit mechanics, implicit mechanics, and fluid-mediated fracturing, including resolution of contact both implicitly and explicitly, and (ii) an MPI-based parallelization implementation for use on generic HPC distributed memory architectures. The resultant code can be used alone for linearly elastic problems and problems involving hydraulic fracturing, where the mesh topology is dynamically changed. The key application domain is low-rate stimulation and fracture control in subsurface reservoirs (e.g., enhanced geothermal sites and unconventional shale gas stimulation). GPAC also has interfaces to call external libraries for, e.g., material models and equations of state; however, LLNL-developed EOS and material models will not be part of the current release.

  7. Accelerator Physics Working Group Summary

    NASA Astrophysics Data System (ADS)

    Li, D.; Uesugi, T.; Wildner, E.

    2010-03-01

    The Accelerator Physics Working Group addressed the worldwide R&D activities performed in support of future neutrino facilities. These studies cover R&D activities for Super Beam, Beta Beam and muon-based Neutrino Factory facilities. Beta Beam activities reported the important progress made, together with the research activity planned for the coming years. Discussion sessions were also organized jointly with other working groups in order to define common ground for the optimization of a future neutrino facility. Lessons learned from already operating neutrino facilities provide key information for the design of any future neutrino facility, and were also discussed in this meeting. Radiation damage, remote handling for equipment maintenance and exchange, and primary proton beam stability and monitoring were among the important subjects presented and discussed. Status reports for each of the facility subsystems were presented: proton drivers, targets, capture systems, and muon cooling and acceleration systems. The preferred scenario for each type of possible future facility was presented, together with the challenges and remaining issues. The baseline specification for the muon-based Neutrino Factory was reviewed and updated where required. This report will emphasize new results and ideas and discuss possible changes in the baseline scenarios of the facilities. A list of possible future steps is proposed that should be followed up at NuFact10.

  8. Acceleration techniques for explicit Euler codes

    NASA Astrophysics Data System (ADS)

    Tai, Chang Hsien

    Two steps in the acceleration of Euler computations to steady solutions are studied: (1) using full multi-grid to march from an arbitrary initial guess to within the range of attraction of the steady solution; and (2) using vector-sequencing to converge to the steady solution from a nearby state. Regarding the first step, in order to design schemes that combine well with multi-grid acceleration, a method was developed for designing optimally smoothing multi-stage time-marching schemes, given any spatial-differencing operator. The analysis was extended to the Euler and Navier-Stokes equations in one space-dimension by use of characteristic time-stepping. Convergence rates independent of the number of cells in the finest grid were achieved with these optimal schemes, for transonic flow with and without a shock. Besides characteristic time-stepping, local time-stepping was tested with these schemes. Good convergence was obtained with local time-stepping. Finally, the analysis was extended to scalar, Burgers, and Euler equations in two space dimensions. The successful application to multi-dimensional scalar equations turns out to depend on the possibility of damping numerical signals that move normal to the physical transport direction. Several techniques that were tested do this. Regarding the second step, two vector-sequencing strategies, generalized minimum residuals (GMRES) and minimum polynomial extrapolation (MPE), which can quickly converge to the steady solution from a nearby state, were explored and applied to linear and nonlinear problems. The results obtained with GMRES and MPE in nested iterations suggest that there is an advantage in the combination of the multi-grid strategy with vector-sequencing ideas.
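
    Of the two vector-sequencing strategies mentioned, minimum polynomial extrapolation (MPE) is easy to sketch: given iterates of a fixed-point map, it extrapolates toward the fixed point with a least-squares combination of the iterates. The toy contractive linear iteration below only illustrates the idea; it is not the Euler solver studied in the thesis.

```python
import numpy as np

def mpe(x_seq):
    """Minimum polynomial extrapolation from a list of fixed-point iterates.

    x_seq: list of k+2 vectors x_0 .. x_{k+1} from x_{i+1} = G(x_i).
    Returns the extrapolated approximation to the fixed point.
    """
    X = np.array(x_seq).T                 # columns are the iterates
    U = np.diff(X, axis=1)                # first differences u_j = x_{j+1} - x_j
    k = U.shape[1] - 1
    c, *_ = np.linalg.lstsq(U[:, :k], -U[:, k], rcond=None)
    c = np.append(c, 1.0)
    gamma = c / c.sum()                   # normalized extrapolation weights
    return X[:, :k + 1] @ gamma

# Toy contractive linear iteration x <- M x + b; exact fixed point solves (I - M) x = b.
rng = np.random.default_rng(0)
M = 0.9 * rng.random((6, 6))
M /= np.linalg.norm(M, 2) * 1.2           # make the iteration contractive
b = rng.random(6)

x = np.zeros(6)
iterates = [x.copy()]
for _ in range(7):
    x = M @ x + b
    iterates.append(x.copy())

x_star = np.linalg.solve(np.eye(6) - M, b)
print("plain iterate error:", np.linalg.norm(iterates[-1] - x_star))
print("MPE error:          ", np.linalg.norm(mpe(iterates) - x_star))
```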

  9. GeoPhysical Analysis Code

    2012-12-21

    GPAC is a code that integrates open source libraries for element formulations, linear algebra, and I/O with two main LLNL-written components: (i) a set of standard finite, discrete, and discontinuous displacement element physics solvers for resolving Darcy fluid flow, explicit mechanics, implicit mechanics, fault rupture and earthquake nucleation, and fluid-mediated fracturing, including resolution of physical behaviors both implicitly and explicitly, and (ii) an MPI-based parallelization implementation for use on generic HPC distributed memory architectures. The resultant code can be used alone for linearly elastic problems; problems involving hydraulic fracturing, where the mesh topology is dynamically changed; fault rupture modeling and seismic risk assessment; and general granular materials behavior. The key application domain is low-rate stimulation and fracture control in subsurface reservoirs (e.g., enhanced geothermal sites and unconventional shale gas stimulation). GPAC also has interfaces to call external libraries for, e.g., material models and equations of state; however, LLNL-developed EOS and material models will not be part of the current release. GPAC's secondary applications include modeling fault evolution for predicting the statistical distribution of earthquake events and capturing granular materials behavior under different load paths.

  10. GeoPhysical Analysis Code

    SciTech Connect

    2012-12-21

    GPAC is a code that integrates open source libraries for element formulations, linear algebra, and I/O with two main LLNL-written components: (i) a set of standard finite, discrete, and discontinuous displacement element physics solvers for resolving Darcy fluid flow, explicit mechanics, implicit mechanics, fault rupture and earthquake nucleation, and fluid-mediated fracturing, including resolution of physical behaviors both implicitly and explicitly, and (ii) an MPI-based parallelization implementation for use on generic HPC distributed memory architectures. The resultant code can be used alone for linearly elastic problems; problems involving hydraulic fracturing, where the mesh topology is dynamically changed; fault rupture modeling and seismic risk assessment; and general granular materials behavior. The key application domain is low-rate stimulation and fracture control in subsurface reservoirs (e.g., enhanced geothermal sites and unconventional shale gas stimulation). GPAC also has interfaces to call external libraries for, e.g., material models and equations of state; however, LLNL-developed EOS and material models will not be part of the current release. GPAC's secondary applications include modeling fault evolution for predicting the statistical distribution of earthquake events and capturing granular materials behavior under different load paths.

  11. Nonlinear Krylov acceleration of reacting flow codes

    SciTech Connect

    Kumar, S.; Rawat, R.; Smith, P.; Pernice, M.

    1996-12-31

    We are working on computational simulations of three-dimensional reactive flows in applications encompassing a broad range of chemical engineering problems. Examples of such processes are coal (pulverized and fluidized bed) and gas combustion, petroleum processing (cracking), and metallurgical operations such as smelting. These simulations involve an interplay of various physical and chemical factors such as fluid dynamics with turbulence, convective and radiative heat transfer, multiphase effects such as fluid-particle and particle-particle interactions, and chemical reaction. The governing equations resulting from modeling these processes are highly nonlinear and strongly coupled, thereby rendering their solution by traditional iterative methods (such as nonlinear line Gauss-Seidel methods) very difficult and sometimes impossible. Hence we are exploring the use of nonlinear Krylov techniques (such as GMRES and Bi-CGSTAB) to accelerate and stabilize the existing solver. This strategy allows us to take advantage of the problem-definition capabilities of the existing solver. The overall approach amounts to using the SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) method and its variants as nonlinear preconditioners for the nonlinear Krylov method. We have also adapted a backtracking approach for inexact Newton methods to damp the Newton step in the nonlinear Krylov method. This will be a report on work in progress. Preliminary results with nonlinear GMRES have been very encouraging: in many cases the number of line Gauss-Seidel sweeps has been reduced by about a factor of 5, and increased robustness of the underlying solver has also been observed.
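
    The general idea of wrapping a nonlinear problem in a matrix-free Newton-Krylov iteration can be sketched with SciPy's newton_krylov solver on a toy 1-D reaction-diffusion problem. This is only an illustration of the acceleration concept, not the SIMPLE-preconditioned solver described above:

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy steady 1-D reaction-diffusion problem, -u'' + u**3 = 1 on (0,1) with
# u(0) = u(1) = 0, discretized with central differences. The residual F(u)
# stands in for the strongly coupled nonlinear equations mentioned above;
# newton_krylov solves F(u) = 0 matrix-free with an inner LGMRES iteration.
n = 100
h = 1.0 / (n + 1)

def residual(u):
    upad = np.concatenate(([0.0], u, [0.0]))            # Dirichlet boundaries
    d2u = (upad[2:] - 2.0 * upad[1:-1] + upad[:-2]) / h**2
    return -d2u + u**3 - 1.0

u0 = np.zeros(n)                                         # initial guess
u = newton_krylov(residual, u0, method="lgmres", f_tol=1e-10)
print("max residual:", np.max(np.abs(residual(u))))
```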

  12. The Particle Accelerator Simulation Code PyORBIT

    SciTech Connect

    Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M; Shishlo, Andrei P

    2015-01-01

    The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at the Oak Ridge National Laboratory. The PyORBIT code has a two-level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower-level code implemented in the C++ language. The parallel capabilities are based on MPI communications. PyORBIT is an open-source code accessible to the public through the Google Open Source Projects Hosting service.

  13. [Accelerator physics R&D

    SciTech Connect

    Krisch, A.D.

    1994-08-22

    This report discusses the NEPTUN-A experiment that will study spin effects in violent proton-proton collisions; the Siberian snake tests at IUCF cooler ring; polarized gas jets; and polarized proton acceleration to 1 TeV at Fermilab.

  14. Accelerator physics R and D

    NASA Astrophysics Data System (ADS)

    Krisch, A. D.

    1994-08-01

    This report discusses the NEPTUN-A experiment that will study spin effects in violent proton-proton collisions; the Siberian snake tests at IUCF cooler ring; polarized gas jets; and polarized proton acceleration to 1 TeV at Fermilab.

  15. Theoretical problems in accelerator physics. Progress report

    SciTech Connect

    Kroll, N.M.

    1993-08-01

    This report discusses the following topics in accelerator physics: radio frequency pulse compression and power transport; computational methods for the computer analysis of microwave components; persistent wakefields associated with waveguide damping of higher order modes; and photonic band gap cavities.

  16. GMRES acceleration of computational fluid dynamics codes

    NASA Technical Reports Server (NTRS)

    Wigton, L. B.; Yu, N. J.; Young, D. P.

    1985-01-01

    The generalized minimal residual algorithm (GMRES) is a conjugate-gradient-like method that applies directly to nonsymmetric linear systems of equations. In this paper, GMRES is modified to handle nonlinear equations characteristic of computational fluid dynamics. Attention is devoted to the concept of preconditioning and the role it plays in assuring rapid convergence. A formulation is developed that allows GMRES to be preconditioned by the solution procedures already built into existing computer codes. Examples are provided that demonstrate the ability of GMRES to greatly improve the robustness and rate of convergence of current state-of-the-art fluid dynamics codes. Theoretical aspects of GMRES are presented that explain why it works. Finally, the advantage GMRES enjoys over related methods such as conjugate gradients is discussed.
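
    The central idea, using whatever approximate solution procedure a code already contains as a preconditioner for GMRES, can be sketched with SciPy. The "existing solver" below is a few Jacobi sweeps on a toy nonsymmetric tridiagonal system, standing in for the built-in procedures of a flow code:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, LinearOperator

# 1-D convection-diffusion-like model problem: a nonsymmetric tridiagonal
# system standing in for the linearized equations of a flow solver.
n = 200
A = diags([-1.2, 2.0, -0.8], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# "Existing solver" used as preconditioner: a few Jacobi sweeps approximating A^-1.
D_inv = 1.0 / A.diagonal()

def jacobi_sweeps(r, nsweeps=3):
    x = np.zeros_like(r)
    for _ in range(nsweeps):
        x = x + D_inv * (r - A @ x)
    return x

M = LinearOperator((n, n), matvec=jacobi_sweeps)

x_plain, info_plain = gmres(A, b, restart=30, maxiter=200)
x_prec, info_prec = gmres(A, b, restart=30, maxiter=200, M=M)
print(info_plain, info_prec)   # 0 means converged; preconditioning typically helps
```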

  17. Physics and Accelerator Applications of RF Superconductivity

    SciTech Connect

    H. Padamsee; K. W. Shepard; Ron Sundelin

    1993-12-01

    A key component of any particle accelerator is the device that imparts energy gain to the charged particle. This is usually an electromagnetic cavity resonating at a microwave frequency, chosen between 100 and 3000 MHz. Serious attempts to utilize superconductors for accelerating cavities were initiated more than 25 years ago with the acceleration of electrons in a lead-plated resonator at Stanford University (1). The first full-scale accelerator, the Stanford SCA, was completed in 1978 at the High Energy Physics Laboratory (HEPL) (2). Over the intervening one and a half decades, superconducting cavities have become increasingly important to particle accelerators for nuclear physics and high energy physics. For continuous operation, as is required for many applications, the power dissipation in the walls of a copper structure is quite substantial, for example, 0.1 megawatts per meter of structure operating at an accelerating field of 1 million volts/meter (MV/m). Since losses increase as the square of the accelerating field, copper cavities become severely uneconomical as demand for higher fields grows with the higher energies called for by experimenters to probe ever deeper into the structure of matter. RF superconductivity has become an important technology for particle accelerators. Practical structures with attractive performance levels have been developed for a variety of applications, installed in the targeted accelerators, and operated over significant lengths of time. Substantial progress has been made in understanding field and Q limitations and in inventing cures to advance performance. The technical and economic potential of RF superconductivity makes it an important candidate for future advanced accelerators for free electron lasers, for nuclear physics, and for high energy physics, at the luminosity as well as at the energy frontiers.

  18. New accelerators in high-energy physics

    SciTech Connect

    Blewett, J.P.

    1982-01-01

    First, I should like to mention a few new ideas that have appeared during the last few years in the accelerator field. A couple are of importance in the design of injectors, usually linear accelerators, for high-energy machines. Then I shall review some of the somewhat sensational accelerator projects, now in operation, under construction or just being proposed. Finally, I propose to mention a few applications of high-energy accelerators in fields other than high-energy physics. I realize that this is a digression from my title but I hope that you will find it interesting.

  19. A Dual-Sided Coded-Aperture Radiation Detection System, Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment

    SciTech Connect

    Ziock, Klaus-Peter; Fabris, Lorenzo

    2010-01-01

    We report the development of a large-area, mobile, coded-aperture radiation imaging system for localizing compact radioactive sources in three dimensions while rejecting distributed background. The 3D Stand-Off Radiation Detection System (SORDS-3D) has been tested at speeds up to 95 km/h and has detected and located sources in the millicurie range at distances of over 100 m. Radiation data are imaged to a geospatially mapped world grid with a nominal 1.25- to 2.5-m pixel pitch at distances out to 120 m on either side of the platform. Source elevation is also extracted. Imaged radiation alarms are superimposed on a side-facing video log that can be played back for direct localization of sources in buildings in urban environments. The system utilizes a 37-element array of 5 x 5 x 50 cm{sup 3} cesium-iodide (sodium) detectors. Scintillation light is collected by a pair of photomultiplier tubes placed at either end of each detector, with the detectors achieving an energy resolution of 6.15% FWHM (662 keV) and a position resolution along their length of 5 cm FWHM. The imaging system generates a dual-sided two-dimensional image allowing users to efficiently survey a large area. Imaged radiation data and raw spectra are forwarded to the RadioNuclide Analysis Kit (RNAK), developed by our collaborators, for isotope ID. An intuitive real-time display aids users in performing searches. Detector calibration is dynamically maintained by monitoring the potassium-40 peak and digitally adjusting individual detector gains. We have recently realized improvements, both in isotope identification and in distinguishing compact sources from background, through the installation of optimal-filter reconstruction kernels.

  20. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1992-01-01

    Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
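
    The multigrid concept being added to Proteus can be illustrated by a generic two-grid correction cycle for a 1-D Poisson problem (smooth, restrict the residual, solve the coarse problem, prolong the correction, smooth again). This sketch is not the Proteus implementation:

```python
import numpy as np

# Two-grid correction scheme for -u'' = f on (0,1), u(0) = u(1) = 0.
# Generic illustration of the multigrid idea, not the Proteus code.

def poisson_matrix(n, h):
    A = np.zeros((n, n))
    np.fill_diagonal(A, 2.0 / h**2)
    np.fill_diagonal(A[1:], -1.0 / h**2)        # subdiagonal
    np.fill_diagonal(A[:, 1:], -1.0 / h**2)     # superdiagonal
    return A

def jacobi(A, u, f, sweeps, omega=2.0 / 3.0):
    D = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (f - A @ u) / D
    return u

nf, nc = 63, 31                                 # fine/coarse interior points
hf, hc = 1.0 / (nf + 1), 1.0 / (nc + 1)
Af, Ac = poisson_matrix(nf, hf), poisson_matrix(nc, hc)
x = np.linspace(hf, 1.0 - hf, nf)
f = np.sin(np.pi * x)

u = np.zeros(nf)
for cycle in range(10):
    u = jacobi(Af, u, f, sweeps=3)                          # pre-smoothing
    r = f - Af @ u                                          # fine-grid residual
    rc = 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]  # full weighting
    ec = np.linalg.solve(Ac, rc)                            # coarse-grid solve
    e = np.zeros(nf)                                        # linear interpolation back
    e[1::2] = ec
    e[0::2] = 0.5 * (np.concatenate(([0.0], ec)) + np.concatenate((ec, [0.0])))
    u = jacobi(Af, u + e, f, sweeps=3)                      # correct and post-smooth
    print(cycle, np.linalg.norm(f - Af @ u))                # residual norm drops per cycle
```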

  1. Non-accelerator particle physics

    SciTech Connect

    Steinberg, R.I.; Lane, C.E.

    1991-09-01

    The goals of this research are the experimental testing of fundamental theories of physics such as grand unification and the exploration of cosmic phenomena through the techniques of particle physics. We are working on the MACRO experiment, which employs a large area underground detector to search for grand unification magnetic monopoles and dark matter candidates and to study cosmic ray muons as well as low and high energy neutrinos: the {nu}IMB project, which seeks to refurbish and upgrade the IMB water Cerenkov detector to perform an improved proton decay search together with a long baseline reactor neutrino oscillation experiment using a kiloton liquid scintillator (the Perry experiment); and development of technology for improved liquid scintillators and for very low background materials in support of the MACRO and Perry experiments and for new solar neutrino experiments. 21 refs., 19 figs., 6 tabs.

  2. Advances in Parallel Electromagnetic Codes for Accelerator Science and Development

    SciTech Connect

    Ko, Kwok; Candel, Arno; Ge, Lixin; Kabel, Andreas; Lee, Rich; Li, Zenghai; Ng, Cho; Rawat, Vineet; Schussman, Greg; Xiao, Liling; /SLAC

    2011-02-07

    Over a decade of concerted effort in code development for accelerator applications has resulted in a new set of electromagnetic codes which are based on higher-order finite elements for superior geometry fidelity and better solution accuracy. SLAC's ACE3P code suite is designed to harness the power of massively parallel computers to tackle large complex problems with the increased memory and solve them at greater speed. The US DOE supports the computational science R&D under the SciDAC project to improve the scalability of ACE3P, and provides the high performance computing resources needed for the applications. This paper summarizes the advances in the ACE3P set of codes, explains the capabilities of the modules, and presents results from selected applications covering a range of problems in accelerator science and development important to the Office of Science.

  3. Accelerator-based validation of shielding codes

    SciTech Connect

    Zeitlin, Cary; Heilbronn, Lawrence; Miller, Jack; Wilson, John W.

    2002-08-12

    The space radiation environment poses risks to astronaut health from a diverse set of sources, ranging from low-energy protons and electrons to highly-charged, high-energy atomic nuclei and their associated fragmentation products, including neutrons. The low-energy protons and electrons are the source of most of the radiation dose to Shuttle and ISS crews, while the more energetic particles that comprise the Galactic Cosmic Radiation (protons, He, and heavier nuclei up to Fe) will be the dominant source for crews on long-duration missions outside the earth's magnetic field. Because of this diversity of sources, a broad ground-based experimental effort is required to validate the transport and shielding calculations used to predict doses and dose-equivalents under various mission scenarios. The experimental program of the LBNL group, described here, focuses principally on measurements of charged particle and neutron production in high-energy heavy-ion fragmentation. Other aspects of the program include measurements of the shielding provided by candidate spacesuit materials against low-energy protons (particularly relevant to extra-vehicular activities in low-earth orbit), and the depth-dose relations in tissue for higher-energy protons. The heavy-ion experiments are performed at the Brookhaven National Laboratory's Alternating Gradient Synchrotron and the Heavy-Ion Medical Accelerator in Chiba in Japan. Proton experiments are performed at the Lawrence Berkeley National Laboratory's 88'' Cyclotron with a 55 MeV beam, and at the Loma Linda University Proton Facility with 100 to 250 MeV beam energies. The experimental results are an important component of the overall shielding program, as they allow for simple, well-controlled tests of the models developed to handle the more complex radiation environment in space.

  4. Physical sputtering code for fusion applications

    SciTech Connect

    Smith, D.L.; Brooks, J.N.; Post, D.E.

    1981-10-01

    A computer code, DSPUT, has been developed to compute the physical sputtering yields for various plasma particles incident on candidate fusion-reactor first-wall materials. The code, which incorporates the energy and angular-dependence of the sputtering yield, treats both high- and low-Z incident particles bombarding high- and low-Z wall materials. The physical sputtering yield is expressed in terms of the atomic and mass numbers of the incident and target atoms, the surface binding energy of the wall materials, and the incident angle and energy of the particle. An auxiliary code has been written to provide sputtering yields for a Maxwellian-averaged incident particle flux. The code DSPUT has been used as part of a Monte Carlo code for analyzing plasma-wall interactions.
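
    A toy sketch of how such a yield routine might be organized follows, using a simplified Bohdansky-type threshold dependence and a cosine-power angular factor. The functional form and all constants are illustrative assumptions, not the expressions used in DSPUT:

```python
import numpy as np

# Toy sketch of a physical-sputtering-yield routine organized the way the
# abstract describes (yield as a function of incident energy and angle, with
# a threshold set by the surface binding of the wall material). The
# Bohdansky-type form and the constants below are illustrative assumptions.

def sputter_yield(E, theta, E_th=20.0, Q=0.1, f=1.7):
    """Yield for incident energy E [eV] and incidence angle theta [rad].

    E_th : threshold energy [eV]        (assumed)
    Q    : yield amplitude factor       (assumed)
    f    : angular-dependence exponent  (assumed)
    """
    E = np.asarray(E, dtype=float)
    y_normal = np.where(
        E > E_th,
        Q * (1.0 - (E_th / E) ** (2.0 / 3.0)) * (1.0 - E_th / E) ** 2,
        0.0,
    )
    return y_normal / np.cos(theta) ** f      # near-normal angular enhancement

energies = np.array([10.0, 50.0, 200.0, 1000.0])        # eV
print(sputter_yield(energies, theta=0.0))
print(sputter_yield(energies, theta=np.radians(45.0)))
```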

  5. Modeling of Ionization Physics with the PIC Code OSIRIS

    SciTech Connect

    Deng, S.; Tsung, F.; Lee, S.; Lu, W.; Mori, W.B.; Katsouleas, T.; Muggli, P.; Blue, B.E.; Clayton, C.E.; O'Connell, C.; Dodd, E.; Decker, F.J.; Huang, C.; Hogan, M.J.; Hemker, R.; Iverson, R.H.; Joshi, C.; Ren, C.; Raimondi, P.; Wang, S.; Walz, D.; /Southern California U. /UCLA /SLAC

    2005-09-27

    When considering intense particle or laser beams propagating in dense plasma or gas, ionization plays an important role. Impact ionization and tunnel ionization may create new plasma electrons, altering the physics of wakefield accelerators, causing blue shifts in laser spectra, creating and modifying instabilities, etc. Here we describe the addition of an impact ionization package into the 3-D, object-oriented, fully parallel PIC code OSIRIS. We apply the simulation tool to simulate the parameters of the upcoming E164 Plasma Wakefield Accelerator experiment at the Stanford Linear Accelerator Center (SLAC). We find that impact ionization is dominated by the plasma electrons moving in the wake rather than the 30 GeV drive beam electrons. Impact ionization leads to a significant number of trapped electrons accelerated from rest in the wake.

  6. FPGA acceleration of rigid-molecule docking codes

    PubMed Central

    Sukhwani, B.; Herbordt, M.C.

    2011-01-01

    Modelling the interactions of biological molecules, or docking, is critical both to understanding basic life processes and to designing new drugs. The field programmable gate array (FPGA) based acceleration of a recently developed, complex, production docking code is described. The authors found that it is necessary to extend their previous three-dimensional (3D) correlation structure in several ways, most significantly to support simultaneous computation of several correlation functions. The result for small-molecule docking is a 100-fold speed-up of a section of the code that represents over 95% of the original run-time. An additional 2% is accelerated through a previously described method, yielding a total acceleration of 36× over a single core and 10× over a quad-core. This approach is found to be an ideal complement to graphics processing unit (GPU) based docking, which excels in the protein–protein domain. PMID:21857870
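
    The 3D correlation at the heart of grid-based rigid docking (the computation moved onto the FPGA) can be sketched on the CPU with FFTs; the grids and scoring below are a toy illustration, not the production docking code:

```python
import numpy as np

# Toy FFT-based 3-D correlation of the kind used in grid-based rigid docking:
# score every translation of a small "ligand" grid against a "receptor" grid.
# This is a CPU illustration of the correlation that the FPGA work accelerates.

rng = np.random.default_rng(1)
N = 32
receptor = rng.random((N, N, N))            # receptor shape/energy grid (toy data)
ligand = np.zeros((N, N, N))
ligand[:5, :5, :5] = rng.random((5, 5, 5))  # small ligand grid, zero-padded

# Correlation theorem: corr(R, L) = IFFT( FFT(R) * conj(FFT(L)) )
scores = np.real(np.fft.ifftn(np.fft.fftn(receptor) * np.conj(np.fft.fftn(ligand))))

best = np.unravel_index(np.argmax(scores), scores.shape)
print("best translation (voxels):", best, "score:", scores[best])
```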

  7. TOPICS IN THE PHYSICS OF PARTICLE ACCELERATORS

    SciTech Connect

    Sessler, A.M.

    1984-07-01

    High energy physics, perhaps more than any other branch of science, is driven by technology. It is not the development of theory, or consideration of what measurements to make, which are the driving elements in our science. Rather it is the development of new technology which is the pacing item. Thus it is the development of new techniques, new computers, and new materials which allows one to develop new detectors and new particle-handling devices. It is the latter, the accelerators, which are at the heart of the science. Without particle accelerators there would be, essentially, no high energy physics. In fact, the advances in high energy physics can be directly tied to the advances in particle accelerators. Looking terribly briefly, and restricting oneself to recent history, the Bevatron made possible the discovery of the anti-proton and many of the resonances, on the AGS was found the {mu}-neutrino, the J-particle and time reversal non-invariance, on Spear was found the {psi}-particle, and, within the last year the Z{sub 0} and W{sup {+-}} were seen on the CERN SPS p-{bar p} collider. Of course one could, and should, go on in much more detail with this survey, but I think there is no need. It is clear that as better acceleration techniques were developed more and more powerful machines were built which, as a result, allowed high energy physics to advance. What are these techniques? They are very sophisticated and ever-developing. The science is very extensive and many individuals devote their whole lives to accelerator physics. As high-energy experimental physicists, your professional lives will be dominated by the performance of 'the machine'; i.e., the accelerator. Primarily you will be frustrated by the fact that it doesn't perform better. Why not? In these lectures, six in all, you should receive some appreciation of accelerator physics. We cannot, nor do we attempt, to make you into accelerator physicists, but we do hope to give you some insight into the subject.

  8. Design of a physical format coding system

    NASA Astrophysics Data System (ADS)

    Hu, Beibei; Pei, Jing; Zhang, Qicheng; Liu, Hailong; Tang, Yi

    2008-12-01

    A novel design for a physical format coding system (PFCS) is presented, based on the multi-level read-only memory disc (ML ROM), in order to address the low efficiency and long turnaround of disc testing during system development. The PFCS is composed of the units 'Encode', 'Add Noise', 'Decode', 'Error Rate', and 'Information'. It is developed with MFC in VC++ 6.0 and can visually simulate the data-processing procedure for ML ROM. The system can also be used for developing other optical disc storage systems or similar channel coding systems.
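
    The unit structure described above (encode, add noise, decode, compute the error rate) maps naturally onto a small simulation loop. The 4-level symbols and the additive Gaussian noise below are illustrative stand-ins for the ML ROM read channel, not the PFCS implementation:

```python
import numpy as np

# Toy end-to-end simulation mirroring the unit structure described above:
# Encode -> Add Noise -> Decode -> Error Rate.

rng = np.random.default_rng(2)
levels = np.array([0.0, 1.0, 2.0, 3.0])      # 4 reflectivity levels (2 bits/symbol)

def encode(bits):
    """Pack pairs of bits into 4-level symbols."""
    return levels[bits[0::2] * 2 + bits[1::2]]

def add_noise(symbols, sigma):
    return symbols + rng.normal(0.0, sigma, size=symbols.shape)

def decode(signal):
    """Hard-decision decoding: pick the nearest level, unpack to bits."""
    idx = np.argmin(np.abs(signal[:, None] - levels[None, :]), axis=1)
    bits = np.empty(2 * idx.size, dtype=int)
    bits[0::2], bits[1::2] = idx // 2, idx % 2
    return bits

bits = rng.integers(0, 2, size=10000)
received = add_noise(encode(bits), sigma=0.25)
print("bit error rate:", np.mean(decode(received) != bits))
```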

  9. HotSpot Health Physics Codes

    SciTech Connect

    Homann, S. G.

    2013-04-18

    The HotSpot Health Physics Codes were created to provide emergency response personnel and emergency planners with a fast, field-portable set of software tools for evaluating incidents involving radioactive material. The software is also used for safety analysis of facilities handling nuclear material. HotSpot provides a fast and usually conservative means for estimating the radiation effects associated with the short-term (less than 24 hours) atmospheric release of radioactive materials.

  10. HotSpot Health Physics Codes

    2010-03-02

    The HotSpot Health Physics Codes were created to provide emergency response personnel and emergency planners with a fast, field-portable set of software tools for evaluating incidents involving radioactive material. The software is also used for safety analysis of facilities handling nuclear material. HotSpot provides a fast and usually conservative means for estimating the radiation effects associated with the short-term (less than 24 hours) atmospheric release of radioactive materials.

  11. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.

  12. HOTSPOT Health Physics codes for the PC

    SciTech Connect

    Homann, S.G.

    1994-03-01

    The HOTSPOT Health Physics codes were created to provide Health Physics personnel with a fast, field-portable calculation tool for evaluating accidents involving radioactive materials. HOTSPOT codes are a first-order approximation of the radiation effects associated with the atmospheric release of radioactive materials. HOTSPOT programs are reasonably accurate for a timely initial assessment. More importantly, HOTSPOT codes produce a consistent output for the same input assumptions and minimize the probability of errors associated with reading a graph incorrectly or scaling a universal nomogram during an emergency. The HOTSPOT codes are designed for short-term (less than 24 hours) release durations. Users requiring radiological release consequences for release scenarios over a longer time period, e.g., annual windrose data, are directed to such long-term models as CAP88-PC (Parks, 1992). Users requiring more sophisticated modeling capabilities, e.g., complex terrain; multi-location real-time wind field data; etc., are directed to such capabilities as the Department of Energy's ARAC computer codes (Sullivan, 1993). Four general programs -- Plume, Explosion, Fire, and Resuspension -- calculate a downwind assessment following the release of radioactive material resulting from a continuous or puff release, explosive release, fuel fire, or an area contamination event. Other programs deal with the release of plutonium, uranium, and tritium to expedite an initial assessment of accidents involving nuclear weapons. Additional programs estimate the dose commitment from the inhalation of any one of the radionuclides listed in the database of radionuclides; calibrate a radiation survey instrument for ground-survey measurements; and screen plutonium uptake in the lung (see FIDLER Calibration and LUNG Screening sections).
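
    HotSpot's short-term dispersion estimates are based on the Gaussian plume model; a minimal ground-level sketch is shown below. The Briggs open-country class-D dispersion coefficients used here are rough textbook values, not HotSpot's internal parameterizations:

```python
import numpy as np

# Minimal Gaussian plume sketch of the kind of short-term dispersion estimate
# HotSpot provides. Class-D coefficients are illustrative textbook values.

def sigma_y(x):                       # horizontal dispersion [m], class D (assumed)
    return 0.08 * x / np.sqrt(1.0 + 0.0001 * x)

def sigma_z(x):                       # vertical dispersion [m], class D (assumed)
    return 0.06 * x / np.sqrt(1.0 + 0.0015 * x)

def ground_concentration(Q, u, H, x, y=0.0):
    """Ground-level concentration [Bq/m^3] at downwind distance x [m].

    Q : release rate [Bq/s], u : wind speed [m/s], H : effective release height [m].
    Includes the standard ground-reflection term.
    """
    sy, sz = sigma_y(x), sigma_z(x)
    return (Q / (np.pi * u * sy * sz)
            * np.exp(-y**2 / (2.0 * sy**2))
            * np.exp(-H**2 / (2.0 * sz**2)))

for x in (100.0, 500.0, 1000.0, 5000.0):
    c = ground_concentration(Q=1.0e9, u=3.0, H=10.0, x=x)
    print(f"x = {x:6.0f} m : C = {c:.3e} Bq/m^3")
```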

  13. An integrated radiation physics computer code system.

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Harris, D. W.

    1972-01-01

    An integrated computer code system for the semi-automatic and rapid analysis of experimental and analytic problems in gamma photon and fast neutron radiation physics is presented. Such problems as the design of optimum radiation shields and radioisotope power source configurations may be studied. The system codes allow for the unfolding of complex neutron and gamma photon experimental spectra. Monte Carlo and analytic techniques are used for the theoretical prediction of radiation transport. The system includes a multichannel pulse-height analyzer scintillation and semiconductor spectrometer coupled to an on-line digital computer with appropriate peripheral equipment. The system is geometry generalized as well as self-contained with respect to material nuclear cross sections and the determination of the spectrometer response functions. Input data may be either analytic or experimental.

  14. Mapa-an object oriented code with a graphical user interface for accelerator design and analysis

    SciTech Connect

    Shasharina, Svetlana G.; Cary, John R.

    1997-02-01

    We developed a code for accelerator modeling which allows users to create and analyze accelerators through a graphical user interface (GUI). The GUI can read an accelerator from files or create it by adding, removing and changing elements. It also creates 4D orbits and lifetime plots. The code includes a set of accelerator element classes and C++ utility and GUI libraries. Due to the GUI, the code is easy to use and expand.

  15. The Mystery Behind the Code: Differentiated Instruction with Quick Response Codes in Secondary Physical Education

    ERIC Educational Resources Information Center

    Adkins, Megan; Wajciechowski, Misti R.; Scantling, Ed

    2013-01-01

    Quick response codes, better known as QR codes, are small barcodes scanned to receive information about a specific topic. This article explains QR code technology and the utility of QR codes in the delivery of physical education instruction. Consideration is given to how QR codes can be used to accommodate learners of varying ability levels as…

  16. Particle-in-cell/accelerator code for space-charge dominated beam simulation

    2012-05-08

    Warp is a multidimensional discrete-particle beam simulation program designed to be applicable where the beam space-charge is non-negligible or dominant. It is being developed in a collaboration among LLNL, LBNL and the University of Maryland. It was originally designed and optimized for heavy ion fusion accelerator physics studies, but has received use in a broader range of applications, including for example laser wakefield accelerators, e-cloud studies in high energy accelerators, particle traps and other areas. At present it incorporates 3-D, axisymmetric (r,z), planar (x-z) and transverse slice (x,y) descriptions, with both electrostatic and electro-magnetic fields, and a beam envelope model. The code is built atop the Python interpreter language.

  17. Particle-in-cell/accelerator code for space-charge dominated beam simulation

    SciTech Connect

    2012-05-08

    Warp is a multidimensional discrete-particle beam simulation program designed to be applicable where the beam space-charge is non-negligible or dominant. It is being developed in a collaboration among LLNL, LBNL and the University of Maryland. It was originally designed and optimized for heavy ion fusion accelerator physics studies, but has received use in a broader range of applications, including for example laser wakefield accelerators, e-cloud studies in high energy accelerators, particle traps and other areas. At present it incorporates 3-D, axisymmetric (r,z), planar (x-z) and transverse slice (x,y) descriptions, with both electrostatic and electro-magnetic fields, and a beam envelope model. The code is built atop the Python interpreter language.

  18. Pulsed power accelerator for material physics experiments

    NASA Astrophysics Data System (ADS)

    Reisman, D. B.; Stoltzfus, B. S.; Stygar, W. A.; Austin, K. N.; Waisman, E. M.; Hickman, R. J.; Davis, J.-P.; Haill, T. A.; Knudson, M. D.; Seagle, C. T.; Brown, J. L.; Goerz, D. A.; Spielman, R. B.; Goldlust, J. A.; Cravey, W. R.

    2015-09-01

    We have developed the design of Thor: a pulsed power accelerator that delivers a precisely shaped current pulse with a peak value as high as 7 MA to a strip-line load. The peak magnetic pressure achieved within a 1-cm-wide load is as high as 100 GPa. Thor is powered by as many as 288 decoupled and transit-time isolated bricks. Each brick consists of a single switch and two capacitors connected electrically in series. The bricks can be individually triggered to achieve a high degree of current pulse tailoring. Because the accelerator is impedance matched throughout, capacitor energy is delivered to the strip-line load with an efficiency as high as 50%. We used an iterative finite element method (FEM), circuit, and magnetohydrodynamic simulations to develop an optimized accelerator design. When powered by 96 bricks, Thor delivers as much as 4.1 MA to a load, and achieves peak magnetic pressures as high as 65 GPa. When powered by 288 bricks, Thor delivers as much as 6.9 MA to a load, and achieves magnetic pressures as high as 170 GPa. We have developed an algebraic calculational procedure that uses the single brick basis function to determine the brick-triggering sequence necessary to generate a highly tailored current pulse time history for shockless loading of samples. Thor will drive a wide variety of magnetically driven shockless ramp compression, shockless flyer plate, shock-ramp, equation of state, material strength, phase transition, and other advanced material physics experiments.

  19. Tevatron accelerator physics and operation highlights

    SciTech Connect

    Valishev, A.; /Fermilab

    2011-03-01

    The performance of the Tevatron collider demonstrated continuous growth over the course of Run II, with the peak luminosity reaching 4 × 10³² cm⁻² s⁻¹ and the weekly integration rate exceeding 70 pb⁻¹. This report presents a review of the most important advances that contributed to this performance improvement, including beam dynamics modeling, precision optics measurements and stability control, and implementation of collimation during the low-beta squeeze. Algorithms employed for optimization of the luminosity integration are presented and the lessons learned from high-luminosity operation are discussed. Studies of novel accelerator physics concepts at the Tevatron are described, such as collimation techniques using a crystal collimator and a hollow electron beam, and compensation of beam-beam effects.

  20. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    SciTech Connect

    Smith, L.M.; Hochstedler, R.D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
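
    The most portable of the optimizations listed above, replacing linear searches with binary ones, can be illustrated with the short Python sketch below. The energy grid, function names, and test values are hypothetical, not taken from ITS; the point is only that the two lookups return identical bin indices while the binary version scales as O(log n) per lookup.

        import bisect

        def find_bin_linear(grid, e):
            # original-style linear scan: O(n) per lookup
            for i in range(len(grid) - 1):
                if grid[i] <= e < grid[i + 1]:
                    return i
            return len(grid) - 2

        def find_bin_binary(grid, e):
            # binary search replacement: O(log n) per lookup, identical result
            i = bisect.bisect_right(grid, e) - 1
            return min(max(i, 0), len(grid) - 2)

        grid = [0.01 * 1.2 ** k for k in range(200)]   # hypothetical energy grid (MeV)
        assert all(find_bin_linear(grid, e) == find_bin_binary(grid, e)
                   for e in (0.05, 1.0, 7.3, 40.0))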

  1. The physics of the FLUKA code: recent developments

    NASA Astrophysics Data System (ADS)

    Sala, P. R.; Fluka Collaboration

    FLUKA is a Monte Carlo code able to simulate the interaction and transport of hadrons, heavy ions, and electromagnetic particles from a few keV (or thermal energies for neutrons) up to cosmic ray energies in any material. It has proven capabilities in accelerator design and shielding, ADS studies and experiments, dosimetry and hadrontherapy, space radiation, and cosmic ray shower studies in the atmosphere. The highest priority in the design and development of the code has always been the implementation and improvement of sound and modern physical models. A summary of the FLUKA physical models is given, while recent developments are described in detail, among them extensions of the intermediate-energy hadronic interaction generator, improvements in the equilibrium stage of hadronic interactions, refinements in photon cross sections and interaction models, and analytical on-line evolution of radio-activation and remnant dose. In particular, new developments in the nucleus-nucleus interaction models are discussed. Comparisons with experimental data and examples of applications of relevance for space radiation are also provided.

  2. Advanced Computing Tools and Models for Accelerator Physics

    SciTech Connect

    Ryne, Robert; Ryne, Robert D.

    2008-06-11

    This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large scale computing in accelerator physics.

  3. Guide to accelerator physics program SYNCH: VAX version 1987. 2

    SciTech Connect

    Parsa, Z.; Courant, E.

    1987-01-01

    This guide is written to accommodate users of Accelerator Physics Data Base BNLDAG::DUAO:(PARSA1). It describes the contents of the on line Accelerator Physics data base DUAO:(PARSA1.SYNCH). SYNCH is a computer program used for the design and analysis of synchrotrons, storage rings and beamlines.

  4. Physical activities to enhance an understanding of acceleration

    NASA Astrophysics Data System (ADS)

    Lee, S. A.

    2006-03-01

    On the basis of their everyday experiences, students have developed an understanding of many of the concepts of mechanics by the time they take their first physics course. However, an accurate understanding of acceleration remains elusive. Many students have difficulties distinguishing between velocity and acceleration. In this report, a set of physical activities to highlight the differences between acceleration and velocity are described. These activities involve running and walking on sand (such as an outdoor volleyball court).

  5. SimTrack: A compact c++ code for particle orbit and spin tracking in accelerators

    SciTech Connect

    Luo, Yun

    2015-08-29

    SimTrack is a compact c++ code of 6-d symplectic element-by-element particle tracking in accelerators originally designed for head-on beam–beam compensation simulation studies in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. It provides a 6-d symplectic orbit tracking with the 4th order symplectic integration for magnet elements and the 6-d symplectic synchro-beam map for beam–beam interaction. Since its inception in 2009, SimTrack has been intensively used for dynamic aperture calculations with beam–beam interaction for RHIC. Recently, proton spin tracking and electron energy loss due to synchrotron radiation were added. In this article, I will present the code architecture, physics models, and some selected examples of its applications to RHIC and a future electron-ion collider design eRHIC.
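
    The 4th order symplectic integration mentioned above is commonly built by composing second-order leapfrog steps with the Yoshida weights. The Python sketch below demonstrates that composition on a simple pendulum Hamiltonian; it is a generic illustration of the technique, not SimTrack's implementation, and the step size and number of steps are arbitrary.

        import numpy as np

        # Yoshida coefficients: compose three weighted 2nd-order leapfrog steps
        W1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
        W0 = 1.0 - 2.0 * W1

        def leapfrog(x, p, dt, force):
            # one 2nd-order drift-kick-drift step
            x += 0.5 * dt * p
            p += dt * force(x)
            x += 0.5 * dt * p
            return x, p

        def yoshida4(x, p, dt, force):
            # 4th-order symplectic step from three leapfrog substeps
            for w in (W1, W0, W1):
                x, p = leapfrog(x, p, w * dt, force)
            return x, p

        # example: pendulum H = p^2/2 - cos(x); the energy error stays bounded
        force = lambda x: -np.sin(x)
        x, p, dt = 1.0, 0.0, 0.1
        E0 = 0.5 * p**2 - np.cos(x)
        for _ in range(20000):
            x, p = yoshida4(x, p, dt, force)
        print("relative energy drift:", abs((0.5 * p**2 - np.cos(x)) - E0) / abs(E0))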

  6. SimTrack: A compact c++ code for particle orbit and spin tracking in accelerators

    DOE PAGES

    Luo, Yun

    2015-08-29

    SimTrack is a compact c++ code of 6-d symplectic element-by-element particle tracking in accelerators originally designed for head-on beam–beam compensation simulation studies in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. It provides a 6-d symplectic orbit tracking with the 4th order symplectic integration for magnet elements and the 6-d symplectic synchro-beam map for beam–beam interaction. Since its inception in 2009, SimTrack has been intensively used for dynamic aperture calculations with beam–beam interaction for RHIC. Recently, proton spin tracking and electron energy loss due to synchrotron radiation were added. In this article, I will present the code architecture, physics models, and some selected examples of its applications to RHIC and a future electron-ion collider design eRHIC.

  7. SimTrack: A compact c++ code for particle orbit and spin tracking in accelerators

    NASA Astrophysics Data System (ADS)

    Luo, Yun

    2015-11-01

    SimTrack is a compact c++ code of 6-d symplectic element-by-element particle tracking in accelerators originally designed for head-on beam-beam compensation simulation studies in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. It provides a 6-d symplectic orbit tracking with the 4th order symplectic integration for magnet elements and the 6-d symplectic synchro-beam map for beam-beam interaction. Since its inception in 2009, SimTrack has been intensively used for dynamic aperture calculations with beam-beam interaction for RHIC. Recently, proton spin tracking and electron energy loss due to synchrotron radiation were added. In this paper, I will present the code architecture, physics models, and some selected examples of its applications to RHIC and a future electron-ion collider design eRHIC.

  8. The Influence of Accelerator Science on Physics Research

    NASA Astrophysics Data System (ADS)

    Haussecker, Enzo F.; Chao, Alexander W.

    2011-06-01

    We evaluate accelerator science in the context of its contributions to the physics community. We address the problem of quantifying these contributions and present a scheme for a numerical evaluation of them. We show by using a statistical sample of important developments in modern physics that accelerator science has influenced 28% of post-1938 physicists and also 28% of post-1938 physics research. We also examine how the influence of accelerator science has evolved over time, and show that on average it has contributed to physics Nobel Prize-winning research every 2.9 years.

  9. GPUPEGAS: A NEW GPU-ACCELERATED HYDRODYNAMIC CODE FOR NUMERICAL SIMULATIONS OF INTERACTING GALAXIES

    SciTech Connect

    Kulikov, Igor

    2014-09-01

    In this paper, a new scalable hydrodynamic code, GPUPEGAS (GPU-accelerated Performance Gas Astrophysical Simulation), for the simulation of interacting galaxies is proposed. The details of a parallel numerical method co-design are described. A speed-up of 55 times was obtained within a single GPU accelerator. The use of 60 GPU accelerators resulted in 96% parallel efficiency. A collisionless hydrodynamic approach has been used for modeling of stars and dark matter. The scalability of the GPUPEGAS code is shown.

  10. Fifty years of accelerator based physics at Chalk River

    SciTech Connect

    McKay, John W.

    1999-04-26

    The Chalk River Laboratories of Atomic Energy of Canada Ltd. was a major centre for accelerator-based physics for the last fifty years. As early as 1946, nuclear structure studies were started on Cockcroft-Walton accelerators. A series of accelerators followed, including the world's first tandem, and the combined MP Tandem and Superconducting Cyclotron (TASCC) facility that was opened in 1986. The nuclear physics program was shut down in 1996. This paper will describe some of the highlights of the accelerators and the research of the laboratory.

  11. Optimizing Nuclear Physics Codes on the XT5

    SciTech Connect

    Hartman-Baker, Rebecca J; Nam, Hai Ah

    2011-01-01

    Scientists studying the structure and behavior of the atomic nucleus require immense high-performance computing resources to gain scientific insights. Several nuclear physics codes are capable of scaling to more than 100,000 cores on Oak Ridge National Laboratory's petaflop Cray XT5 system, Jaguar. In this paper, we present our work on optimizing codes in the nuclear physics domain.

  12. Neutrino physics with accelerator driven subcritical reactors

    NASA Astrophysics Data System (ADS)

    Ciuffoli, Emilio; Evslin, Jarah; Zhao, Fengyi

    2016-01-01

    Accelerator driven system (ADS) subcritical nuclear reactors are under development around the world. They will be intense sources of free 30-55 MeV muon antineutrinos (ν̄_μ) from μ+ decay at rest. These ADS reactor neutrinos can provide a robust test of the LSND anomaly and a precise measurement of the leptonic CP-violating phase δ, including sign(cos(δ)). The first phase of many ADS programs includes the construction of a low energy, high intensity proton or deuteron accelerator, which can yield competitive bounds on sterile neutrinos.

  13. Investigation of Beam-RF Interactions in Twisted Waveguide Accelerating Structures Using Beam Tracking Codes

    SciTech Connect

    Holmes, Jeffrey A; Zhang, Yan; Kang, Yoon W; Galambos, John D; Hassan, Mohamed H; Wilson, Joshua L

    2009-01-01

    Investigations of the RF properties of certain twisted waveguide structures show that they support favorable accelerating fields. This makes them potential candidates for accelerating cavities. Using the particle tracking code ORBIT, we examine the beam-RF interaction in the twisted cavity structures to understand their beam transport and acceleration properties. The results show the distinctive properties of these new structures for particle transport and acceleration, which have not been previously analyzed.

  14. Fluid Physics Under a Stochastic Acceleration Field

    NASA Technical Reports Server (NTRS)

    Vinals, Jorge

    2001-01-01

    The research summarized in this report has involved a combined theoretical and computational study of fluid flow that results from the random acceleration environment present onboard space orbiters, also known as g-jitter. We have focused on a statistical description of the observed g-jitter, on the flows that such an acceleration field can induce in a number of experimental configurations of interest, and on extending previously developed methodology to boundary layer flows. Narrow band noise has been shown to describe many of the features of acceleration data collected during space missions. The scale of baroclinically induced flows when the driving acceleration is random is not given by the Rayleigh number. Spatially uniform g-jitter induces additional hydrodynamic forces among suspended particles in incompressible fluids. Stochastic modulation of the control parameter shifts the location of the onset of an oscillatory instability. Random vibration of solid boundaries leads to separation of boundary layers. Steady streaming ahead of a modulated solid-melt interface enhances solute transport, and modifies the stability boundaries of a planar front.

  15. Linear Collider Accelerator Physics Issues Regarding Alignment

    SciTech Connect

    Seeman, J.T.; /SLAC

    2005-08-12

    The next generation of linear colliders will require more stringent alignment tolerances than those for the SLC with regard to the accelerating structures, quadrupoles, and beam position monitors. New techniques must be developed to achieve these tolerances. A combination of mechanical-electrical and beam-based methods will likely be needed.

  16. Fluid Physics in a Fluctuating Acceleration Environment

    NASA Technical Reports Server (NTRS)

    Thomson, J. Ross; Drolet, Francois; Vinals, Jorge

    1996-01-01

    We summarize several aspects of an ongoing investigation of the effects that stochastic residual accelerations (g-jitter) onboard spacecraft can have on experiments conducted in a microgravity environment. The residual acceleration field is modeled as a narrow band noise, characterized by three independent parameters: intensity ⟨g²⟩, dominant angular frequency Ω, and characteristic correlation time τ. Realistic values for these parameters are obtained from an analysis of acceleration data corresponding to the SL-J mission, as recorded by the SAMS instruments. We then use the model to address the random motion of a solid particle suspended in an incompressible fluid subjected to such random accelerations. As an extension, the effect of jitter on coarsening of a solid-liquid mixture is briefly discussed, and corrections to diffusion controlled coarsening evaluated. We conclude that jitter will not be significant in the experiment 'Coarsening of solid-liquid mixtures' to be conducted in microgravity. Finally, modifications to the location of onset of instability in systems driven by a random force are discussed by extending the standard reduction to the center manifold to the stochastic case. Results pertaining to time-modulated oscillatory convection are briefly discussed.

  17. Efficient modeling of plasma wakefield acceleration in quasi-non-linear-regimes with the hybrid code Architect

    NASA Astrophysics Data System (ADS)

    Marocchino, A.; Massimo, F.; Rossi, A. R.; Chiadroni, E.; Ferrario, M.

    2016-09-01

    In this paper we present a hybrid approach aiming to assess feasible plasma wakefield acceleration working points with reduced computational resources. The growing interest in plasma wakefield acceleration, and especially the need to control the quality of the accelerated bunch with increasing precision, demands more accurate and faster simulations. Particle-in-cell codes are the state-of-the-art technique for simulating the underlying physics; however, the run time represents their major drawback. Architect is a hybrid code that treats the bunch kinetically and the background electron plasma as a fluid, initialising bunches in vacuum so as to account for the transition from vacuum to plasma. Architect solves the Maxwell equations directly on a Yee lattice. Such an approach allows us to drastically reduce the run time without loss of generality or accuracy up to the weakly nonlinear regime.
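
    For readers unfamiliar with the Yee lattice mentioned above, the following Python sketch advances the 1-D vacuum Maxwell equations on staggered E and B grids. It is a generic FDTD illustration in normalized units, not Architect's scheme; the grid size, time step, and field seeding are arbitrary choices made for the example.

        import numpy as np

        # 1-D Yee/FDTD vacuum update in normalized units (c = 1)
        nx, nsteps, dx = 400, 300, 1.0
        dt = dx                        # Courant-limit time step in 1-D
        ey = np.zeros(nx)              # Ey at integer grid points
        bz = np.zeros(nx - 1)          # Bz staggered at half-integer points

        ey[nx // 2] = 1.0              # seed a field spike at the centre
        for _ in range(nsteps):
            bz -= dt * (ey[1:] - ey[:-1]) / dx          # Faraday: dBz/dt = -dEy/dx
            ey[1:-1] -= dt * (bz[1:] - bz[:-1]) / dx    # Ampere:  dEy/dt = -dBz/dx
        print("peak field after propagation:", np.abs(ey).max())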

  18. Accelerating Innovation: How Nuclear Physics Benefits Us All

    DOE R&D Accomplishments Database

    2011-01-01

    Innovation has been accelerated by nuclear physics in the areas of improving our health; making the world safer; electricity, environment, archaeology; better computers; contributions to industry; and training the next generation of innovators.

  19. Physics of Laser-driven plasma-based acceleration

    SciTech Connect

    Esarey, Eric; Schroeder, Carl B.

    2003-06-30

    The physics of plasma-based accelerators driven by short-pulse lasers is reviewed. This includes the laser wake-field accelerator, the plasma beat wave accelerator, the self-modulated laser wake-field accelerator, and plasma waves driven by multiple laser pulses. The properties of linear and nonlinear plasma waves are discussed, as well as electron acceleration in plasma waves. Methods for injecting and trapping plasma electrons in plasma waves are also discussed. Limits to the electron energy gain are summarized, including laser pulse diffraction, electron dephasing, laser pulse energy depletion, as well as beam loading limitations. The basic physics of laser pulse evolution in underdense plasmas is also reviewed. This includes the propagation, self-focusing, and guiding of laser pulses in uniform plasmas and plasmas with preformed density channels. Instabilities relevant to intense short-pulse laser-plasma interactions, such as Raman, self-modulation, and hose instabilities, are discussed. Recent experimental results are summarized.

  20. Accelerator physics analysis with an integrated toolkit

    SciTech Connect

    Holt, J.A.; Michelotti, L.; Satogata, T.

    1992-08-01

    Work is in progress on an integrated software toolkit for linear and nonlinear accelerator design, analysis, and simulation. As a first application, the "beamline" and "MXYZPTLK" (differential algebra) class libraries were used with an X Windows graphics library to build a user-friendly, interactive phase space tracker which, additionally, finds periodic orbits. This program was used to analyze a theoretical lattice which contains octupoles and decapoles, to find the 20th order stable and unstable periodic orbits, and to explore the local phase space structure.

  1. Convergence Acceleration and Documentation of CFD Codes for Turbomachinery Applications

    NASA Technical Reports Server (NTRS)

    Marquart, Jed E.

    2005-01-01

    The development and analysis of turbomachinery components for industrial and aerospace applications has been greatly enhanced in recent years through the advent of computational fluid dynamics (CFD) codes and techniques. Although the use of this technology has greatly reduced the time required to perform analysis and design, there still remains much room for improvement in the process. In particular, there is a steep learning curve associated with most turbomachinery CFD codes, and the computation times need to be reduced in order to facilitate their integration into standard work processes. Two turbomachinery codes have recently been developed by Dr. Daniel Dorney (MSFC) and Dr. Douglas Sondak (Boston University). These codes are entitled Aardvark (for 2-D and quasi 3-D simulations) and Phantom (for 3-D simulations). The codes utilize the General Equation Set (GES), structured grid methodology, and overset O- and H-grids. The codes have been used with success by Drs. Dorney and Sondak, as well as others within the turbomachinery community, to analyze engine components and other geometries. One of the primary objectives of this study was to establish a set of parametric input values which will enhance convergence rates for steady state simulations, as well as reduce the runtime required for unsteady cases. The goal is to reduce the turnaround time for CFD simulations, thus permitting more design parametrics to be run within a given time period. In addition, other code enhancements to reduce runtimes were investigated and implemented. The other primary goal of the study was to develop enhanced user's manuals for Aardvark and Phantom. These manuals are intended to answer most questions for new users, as well as provide valuable detailed information for the experienced user. The existence of detailed user's manuals will enable new users to become proficient with the codes, as well as reducing the dependency of new users on the code authors. In order to achieve the

  2. SEP acceleration in CME driven shocks using a hybrid code

    SciTech Connect

    Gargaté, L.; Fonseca, R. A.; Silva, L. O.

    2014-09-01

    We perform hybrid simulations of a super-Alfvénic quasi-parallel shock, driven by a coronal mass ejection (CME), propagating in the outer coronal/solar wind at distances of between 3 to 6 solar radii. The hybrid treatment of the problem enables the study of the shock propagation on the ion timescale, preserving ion kinetics and allowing for a self-consistent treatment of the shock propagation and particle acceleration. The CME plasma drags the embedded magnetic field lines stretching from the sun, and propagates out into interplanetary space at a greater velocity than the in situ solar wind, driving the shock, and producing very energetic particles. Our results show that electromagnetic Alfvén waves are generated at the shock front. The waves propagate upstream of the shock and are produced by the counter-streaming ions of the solar wind plasma being reflected at the shock. A significant fraction of the particles are accelerated in two distinct phases: first, particles drift from the shock and are accelerated in the upstream region, and second, particles arriving at the shock get trapped and are accelerated at the shock front. A fraction of the particles diffused back to the shock, which is consistent with the Fermi acceleration mechanism.

  3. Particle acceleration, transport and turbulence in cosmic and heliospheric physics

    NASA Technical Reports Server (NTRS)

    Matthaeus, W.

    1992-01-01

    In this progress report, the long term goals, recent scientific progress, and organizational activities are described. The scientific focus of this annual report is in three areas: first, the physics of particle acceleration and transport, including heliospheric modulation and transport, shock acceleration and galactic propagation and reacceleration of cosmic rays; second, the development of theories of the interaction of turbulence and large scale plasma and magnetic field structures, as in winds and shocks; third, the elucidation of the nature of magnetohydrodynamic turbulence processes and the role such turbulence processes might play in heliospheric, galactic, cosmic ray physics, and other space physics applications.

  4. Status report on the 'Merging' of the Electron-Cloud Code POSINST with the 3-D Accelerator PIC CODE WARP

    SciTech Connect

    Vay, J.-L.; Furman, M.A.; Azevedo, A.W.; Cohen, R.H.; Friedman, A.; Grote, D.P.; Stoltz, P.H.

    2004-04-19

    We have integrated the electron-cloud code POSINST [1] with WARP [2]--a 3-D parallel Particle-In-Cell accelerator code developed for Heavy Ion Inertial Fusion--so that the two can interoperate. Both codes are run in the same process, communicate through a Python interpreter (already used in WARP), and share certain key arrays (so far, particle positions and velocities). Currently, POSINST provides primary and secondary sources of electrons, beam bunch kicks, a particle mover, and diagnostics. WARP provides the field solvers and diagnostics. Secondary emission routines are provided by the Tech-X package CMEE.
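
    The coupling pattern described above (two codes running in one process and exchanging work through a Python driver that shares particle arrays) can be sketched schematically as below. The module and function names are purely illustrative stand-ins, not the actual POSINST or WARP interfaces, and the "physics" in each stand-in is deliberately trivial.

        import numpy as np

        # Shared state: particle positions/velocities visible to both "codes".
        particles = {"x": np.zeros((1000, 3)), "v": np.zeros((1000, 3))}

        def cloud_source_and_push(state, dt):
            # stand-in for the electron-cloud side: kick and move electrons
            state["v"] += dt * np.array([0.0, 0.0, -1.0])   # fake beam-bunch kick
            state["x"] += dt * state["v"]

        def field_solve(state):
            # stand-in for the accelerator PIC side: deposit charge on a grid
            rho, _ = np.histogramdd(state["x"], bins=8)
            return rho

        dt = 1.0e-9
        for step in range(10):
            cloud_source_and_push(particles, dt)   # "POSINST-like" work on shared arrays
            rho = field_solve(particles)           # "WARP-like" work on the same arrays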

  5. Fluid Physics in a Fluctuating Acceleration Environment

    NASA Technical Reports Server (NTRS)

    Drolet, Francois; Vinals, Jorge

    1999-01-01

    Our program of research aims at developing a stochastic description of the residual acceleration field onboard spacecraft (g-jitter) to describe in quantitative detail its effect on fluid motion. Our main premise is that such a statistical description is necessary in those cases in which the characteristic time scales of the process under investigation are long compared with the correlation time of g-jitter. Although a clear separation between time scales makes this approach feasible, there remain several difficulties of practical nature: (i), g-jitter time series are not statistically stationary but rather show definite dependences on factors such as active or rest crew periods; (ii), it is very difficult to extract reliably the low frequency range of the power spectrum of the acceleration field. This range controls the magnitude of diffusive processes; and (iii), models used to date are Gaussian, but there is evidence that large amplitude disturbances occur much more frequently than a Gaussian distribution would predict. The lack of stationarity does not constitute a severe limitation in practice, since the intensity of the stochastic components changes very slowly during space missions (perhaps over times of the order of hours). A separate analysis of large amplitude disturbances has not been undertaken yet, but it does not seem difficult a priori to devise models that may describe this range better than a Gaussian distribution. The effect of low frequency components, on the other hand, is more difficult to ascertain, partly due to the difficulty associated with measuring them, and partly because they may be indistinguishable from slowly changing averages. This latter effect is further complicated by the lack of statistical stationarity of the time series. Recent work has focused on the effect of stochastic modulation on the onset of oscillatory instabilities as an example of resonant interaction between the driving acceleration and normal modes of the system

  6. Statistical physics, optimization and source coding

    NASA Astrophysics Data System (ADS)

    Zechhina, Riccardo

    2005-06-01

    The combinatorial problem of satisfying a given set of constraints that depend on N discrete variables is a fundamental one in optimization and coding theory. Even for instances of randomly generated problems, the question ``does there exist an assignment to the variables that satisfies all constraints?'' may become extraordinarily difficult to solve in some range of parameters where a glass phase sets in. We shall provide a brief review of the recent advances in the statistical mechanics approach to these satisfiability problems and show how the analytic results have helped to design a new class of message-passing algorithms -- the survey propagation (SP) algorithms -- that can efficiently solve some combinatorial problems considered intractable. As an application, we discuss how the packing properties of clusters of solutions in randomly generated satisfiability problems can be exploited in the design of simple lossy data compression algorithms.

  7. (Advanced accelerator physics featuring the problems of small rings)

    SciTech Connect

    Olsen, D.K.

    1989-10-16

    The traveler attended the CERN Accelerator School and Uppsala University short course on Advanced Accelerator Physics held on the University campus, Uppsala, Sweden, from September 18-29, 1989. The course, attended by 81 people, was well conceived, well presented, and informative. The course was organized around, and specialized in, the problems of small rings. The traveler also visited the CELSIUS ring facility of Uppsala University and the CRYRING ring facility of the Manne Siegbahn Institute in Stockholm, Sweden.

  8. A tracking code for injection and acceleration studies in synchrotrons

    SciTech Connect

    Lessner, E.; Symon, K.

    1996-11-01

    CAPTURE-SPC is a Monte-Carlo-based tracking program that simulates the injection and acceleration processes in proton synchrotrons. The time evolution of a distribution of charged particles is implemented by a symplectic, second-order-accurate integration algorithm. The recurrence relations follow a time-stepping leap-frog method. The time step can be varied optionally to reduce computer time. Space-charge forces are calculated by binning the phase-projected particle distribution. The statistical fluctuations introduced by the binning process are reduced by presmoothing the data with the cloud-in-cell method and by filtering. Both the bin size and the amount of filtering can be varied during the acceleration cycle so that the bunch fine structure is retained while short-wavelength noise is attenuated. The initial coordinates of each macroparticle, together with its time of injection, are retained throughout the calculations. This information is useful in determining low-loss injection schemes.
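
    The cloud-in-cell presmoothing mentioned above spreads each macroparticle linearly over its two nearest bins, which suppresses binning noise relative to nearest-grid-point deposition. The following Python sketch shows the weighting; the grid size and particle distribution are arbitrary choices for the example and are not taken from CAPTURE-SPC.

        import numpy as np

        def cloud_in_cell(phases, nbins, lo, hi):
            # Linear (cloud-in-cell) deposition of particle phases onto a line-density grid.
            # Each macroparticle is shared between its two nearest bin centres with
            # weights proportional to proximity.
            h = (hi - lo) / nbins
            s = (np.asarray(phases) - lo) / h - 0.5        # position in bin-centre units
            i = np.floor(s).astype(int)
            w = s - i                                      # weight for the right-hand bin
            grid = np.zeros(nbins)
            np.add.at(grid, np.clip(i, 0, nbins - 1), 1.0 - w)
            np.add.at(grid, np.clip(i + 1, 0, nbins - 1), w)
            return grid / h                                # particles per unit phase

        rng = np.random.default_rng(0)
        density = cloud_in_cell(rng.normal(np.pi, 0.5, 100000), 128, 0.0, 2.0 * np.pi)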

  9. An introduction to the physics of high energy accelerators

    SciTech Connect

    Edwards, D.A.; Syphers, J.J.

    1993-01-01

    This book is an outgrowth of a course given by the authors at various universities and particle accelerator schools. It starts from the basic physics principles governing particle motion inside an accelerator, and leads to a full description of the complicated phenomena and analytical tools encountered in the design and operation of a working accelerator. The book covers acceleration and longitudinal beam dynamics, transverse motion and nonlinear perturbations, intensity dependent effects, emittance preservation methods and synchrotron radiation. These subjects encompass the core concerns of a high energy synchrotron. The authors apparently do not assume the reader has much previous knowledge about accelerator physics. Hence, they take great care to introduce the physical phenomena encountered and the concepts used to describe them. The mathematical formulae and derivations are deliberately kept at a level suitable for beginners. After mastering this course, any interested reader will not find it difficult to follow subjects of more current interests. Useful homework problems are provided at the end of each chapter. Many of the problems are based on actual activities associated with the design and operation of existing accelerators.

  10. DANTSYS: A diffusion accelerated neutral particle transport code system

    SciTech Connect

    Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Marr, D.R.; O'Dell, R.D.; Walters, W.F.

    1995-06-01

    The DANTSYS code package includes the following transport codes: ONEDANT, TWODANT, TWODANT/GQ, TWOHEX, and THREEDANT. The DANTSYS code package is a modular computer program package designed to solve the time-independent, multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post processing (or edit) functions into distinct code modules: the Input Module, one or more Solver Modules, and the Edit Module, respectively. The Input and Edit Modules are very general in nature and are common to all the Solver Modules. The ONEDANT Solver Module contains a one-dimensional (slab, cylinder, and sphere), time-independent transport equation solver using the standard diamond-differencing method for space/angle discretization. Also included in the package are Solver Modules named TWODANT, TWODANT/GQ, THREEDANT, and TWOHEX. The TWODANT Solver Module solves the time-independent two-dimensional transport equation using the diamond-differencing method for space/angle discretization. The authors have also introduced an adaptive weighted diamond differencing (AWDD) method for the spatial and angular discretization into TWODANT as an option. The TWOHEX Solver Module solves the time-independent two-dimensional transport equation on an equilateral triangle spatial mesh. The THREEDANT Solver Module solves the time-independent, three-dimensional transport equation for XYZ and RZθ symmetries using both diamond differencing with set-to-zero fixup and the AWDD method. The TWODANT/GQ Solver Module solves the 2-D transport equation in XY and RZ symmetries using a spatial mesh of arbitrary quadrilaterals. The spatial differencing method is based upon the diamond differencing method with set-to-zero fixup, with changes to accommodate the generalized spatial meshing.

  11. Proceedings of the conference on computer codes and the linear accelerator community

    SciTech Connect

    Cooper, R.K.

    1990-07-01

    The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned.

  12. Plasma wakefield acceleration studies using the quasi-static code WAKE

    SciTech Connect

    Jain, Neeraj; Palastro, John; Antonsen, T. M.; Mori, Warren B.; An, Weiming

    2015-02-15

    The quasi-static code WAKE [P. Mora and T. Antonsen, Phys. Plasmas 4, 217 (1997)] is upgraded to model the propagation of an ultra-relativistic charged particle beam through a warm background plasma in plasma wakefield acceleration. The upgraded code is benchmarked against the full particle-in-cell code OSIRIS [Hemker et al., Phys. Rev. Spec. Top. Accel. Beams 3, 061301 (2000)] and the quasi-static code QuickPIC [Huang et al., J. Comput. Phys. 217, 658 (2006)]. The effect of non-zero plasma temperature on the peak accelerating electric field is studied for a two bunch electron beam driver with parameters corresponding to the plasma wakefield acceleration experiments at Facilities for Accelerator Science and Experimental Test Beams. It is shown that plasma temperature does not affect the energy gain and spread of the accelerated particles despite suppressing the peak accelerating electric field. The role of plasma temperature in improving the numerical convergence of the electric field with the grid resolution is discussed.

  13. Establishing confidence in complex physics codes: Art or science?

    SciTech Connect

    Trucano, T.

    1997-12-31

    The ALEGRA shock wave physics code, currently under development at Sandia National Laboratories and partially supported by the US Accelerated Strategic Computing Initiative (ASCI), is generic to a certain class of physics codes: large, multi-application, intended to support a broad user community on the latest generation of massively parallel supercomputers, and in a continual state of formal development. To say that the author has "confidence" in the results of ALEGRA is to say something different than that he believes that ALEGRA is "predictive." It is the purpose of this talk to illustrate the distinction between these two concepts. The author elects to perform this task in a somewhat historical manner. He will summarize certain older approaches to code validation. He views these methods as aiming to establish the predictive behavior of the code. These methods are distinguished by their emphasis on local information. He will conclude that these approaches are more art than science.

  14. Future Accelerator Challenges in Support of High-Energy Physics

    SciTech Connect

    Zisman, Michael S.; Zisman, M.S.

    2008-05-03

    Historically, progress in high-energy physics has largely been determined by the development of more capable particle accelerators. This trend continues today with the imminent commissioning of the Large Hadron Collider at CERN, and the worldwide development effort toward the International Linear Collider. Looking ahead, there are two scientific areas ripe for further exploration--the energy frontier and the precision frontier. To explore the energy frontier, two approaches toward multi-TeV beams are being studied: an electron-positron linear collider based on a novel two-beam powering system (CLIC), and a Muon Collider. Work on the precision frontier involves accelerators with very high intensity, including a Super-B Factory and a muon-based Neutrino Factory. Without question, one of the most promising approaches is the development of muon-beam accelerators. Such machines have very high scientific potential, and would substantially advance the state of the art in accelerator design. The challenges of the new generation of accelerators, and how these can be accommodated in the accelerator design, are described. To reap their scientific benefits, all of these frontier accelerators will require sophisticated instrumentation to characterize the beam and control it with unprecedented precision.

  15. Toward a physics design for NDCX-II, an ion accelerator for warm dense matter and HIF target physics studies

    NASA Astrophysics Data System (ADS)

    Friedman, A.; Barnard, J. J.; Briggs, R. J.; Davidson, R. C.; Dorf, M.; Grote, D. P.; Henestroza, E.; Lee, E. P.; Leitner, M. A.; Logan, B. G.; Sefkow, A. B.; Sharp, W. M.; Waldron, W. L.; Welch, D. R.; Yu, S. S.

    2009-07-01

    The Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL), a collaboration of LBNL, LLNL, and PPPL, has achieved 60-fold pulse compression of ion beams on the Neutralized Drift Compression eXperiment (NDCX) at LBNL. In NDCX, a ramped voltage pulse from an induction cell imparts a velocity "tilt" to the beam; the beam's tail then catches up with its head in a plasma environment that provides neutralization. The HIFS-VNL's mission is to carry out studies of warm dense matter (WDM) physics using ion beams as the energy source; an emerging thrust is basic target physics for heavy ion-driven inertial fusion energy (IFE). These goals require an improved platform, labeled NDCX-II. Development of NDCX-II at modest cost was recently enabled by the availability of induction cells and associated hardware from the decommissioned Advanced Test Accelerator (ATA) facility at LLNL. Our initial physics design concept accelerates a ~30 nC pulse of Li+ ions to ~3 MeV, then compresses it to ~1 ns while focusing it onto a mm-scale spot. It uses the ATA cells themselves (with waveforms shaped by passive circuits) to impart the final velocity tilt; smart pulsers provide small corrections. The ATA accelerated electrons; acceleration of non-relativistic ions involves more complex beam dynamics both transversely and longitudinally. We are using an interactive one-dimensional kinetic simulation model and multidimensional Warp-code simulations to develop the NDCX-II accelerator section. Both the LSP and Warp codes are being applied to the beam dynamics in the neutralized drift and final focus regions, and to the plasma injection process. The status of this effort is described.

  16. The use of electromagnetic particle-in-cell codes in accelerator applications

    SciTech Connect

    Eppley, K.

    1988-12-01

    The techniques developed for the numerical simulation of plasmas have numerous applications relevant to accelerators. The operation of many accelerator components involves transients, interactions between beams and rf fields, and internal plasma oscillations. These effects produce non-linear behavior which can be represented accurately by particle-in-cell (PIC) simulations. We will give a very brief overview of the algorithms used in PIC codes. We will examine the range of parameters over which they are useful. We will discuss the factors which determine whether a two- or three-dimensional simulation is most appropriate. PIC codes have been applied to a wide variety of diverse problems, spanning many of the systems in a linear accelerator. We will present a number of practical examples of the application of these codes to areas such as guns, bunchers, rf sources, beam transport, emittance growth and final focus. 8 refs., 8 figs., 2 tabs.
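
    As one concrete example of the particle-mover algorithms that such PIC codes share, the sketch below implements the standard Boris push (half electric kick, magnetic rotation, half electric kick) for a single particle. It is a textbook illustration, not code from any of the simulations discussed in the record; the field values and time step are arbitrary.

        import numpy as np

        def boris_push(v, E, B, q_over_m, dt):
            # One Boris step: half electric kick, magnetic rotation, half electric kick.
            # v, E, B are 3-vectors in SI units.
            v_minus = v + 0.5 * q_over_m * dt * E
            t = 0.5 * q_over_m * dt * B
            s = 2.0 * t / (1.0 + np.dot(t, t))
            v_prime = v_minus + np.cross(v_minus, t)
            v_plus = v_minus + np.cross(v_prime, s)
            return v_plus + 0.5 * q_over_m * dt * E

        # electron gyrating in a uniform Bz field: its speed should be conserved
        v = np.array([1.0e5, 0.0, 0.0])
        for _ in range(1000):
            v = boris_push(v, E=np.zeros(3), B=np.array([0.0, 0.0, 0.01]),
                           q_over_m=-1.76e11, dt=1.0e-12)
        print("speed:", np.linalg.norm(v))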

  17. Omega3P: A Parallel Finite-Element Eigenmode Analysis Code for Accelerator Cavities

    SciTech Connect

    Lee, Lie-Quan; Li, Zenghai; Ng, Cho; Ko, Kwok; /SLAC

    2009-03-04

    Omega3P is a parallel eigenmode calculation code for accelerator cavities in frequency domain analysis using finite-element methods. In this report, we will present detailed finite-element formulations and resulting eigenvalue problems for lossless cavities, cavities with lossy materials, cavities with imperfectly conducting surfaces, and cavities with waveguide coupling. We will discuss the parallel algorithms for solving those eigenvalue problems and demonstrate modeling of accelerator cavities through different examples.
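
    The finite-element discretization described above leads to a generalized eigenvalue problem of the form K x = ω² M x for the cavity modes. The Python sketch below solves a small problem of that form with SciPy; the matrices are random symmetric positive-definite stand-ins rather than an actual cavity model, and the sketch has no connection to Omega3P's parallel solvers.

        import numpy as np
        from scipy.linalg import eigh

        # Toy generalized eigenproblem K x = w^2 M x of the kind a finite-element
        # cavity discretization produces (matrices are random SPD stand-ins).
        rng = np.random.default_rng(1)
        A = rng.standard_normal((50, 50))
        K = A @ A.T + 50.0 * np.eye(50)     # "stiffness"-like SPD matrix
        B = rng.standard_normal((50, 50))
        M = B @ B.T + 50.0 * np.eye(50)     # "mass"-like SPD matrix

        w2, modes = eigh(K, M)              # generalized symmetric eigensolve
        print("lowest five eigenvalues:", w2[:5])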

  18. Physics of laser-driven plasma-based electron accelerators

    SciTech Connect

    Esarey, E.; Schroeder, C. B.; Leemans, W. P.

    2009-07-15

    Laser-driven plasma-based accelerators, which are capable of supporting fields in excess of 100 GV/m, are reviewed. This includes the laser wakefield accelerator, the plasma beat wave accelerator, the self-modulated laser wakefield accelerator, plasma waves driven by multiple laser pulses, and highly nonlinear regimes. The properties of linear and nonlinear plasma waves are discussed, as well as electron acceleration in plasma waves. Methods for injecting and trapping plasma electrons in plasma waves are also discussed. Limits to the electron energy gain are summarized, including laser pulse diffraction, electron dephasing, laser pulse energy depletion, and beam loading limitations. The basic physics of laser pulse evolution in underdense plasmas is also reviewed. This includes the propagation, self-focusing, and guiding of laser pulses in uniform plasmas and with preformed density channels. Instabilities relevant to intense short-pulse laser-plasma interactions, such as Raman, self-modulation, and hose instabilities, are discussed. Experiments demonstrating key physics, such as the production of high-quality electron bunches at energies of 0.1-1 GeV, are summarized.

  19. A new 3-D integral code for computation of accelerator magnets

    SciTech Connect

    Turner, L.R.; Kettunen, L.

    1991-01-01

    For computing accelerator magnets, integral codes have several advantages over finite element codes; far-field boundaries are treated automatically, and the computed fields in the bore region satisfy Maxwell's equations exactly. A new integral code employing edge elements rather than nodal elements has overcome the difficulties associated with earlier integral codes. By the use of field integrals (potential differences) as solution variables, the number of unknowns is reduced to one less than the number of nodes. Two examples, a hollow iron sphere and the dipole magnet of the Advanced Photon Source injector synchrotron, show the capability of the code. The CPU time requirements are comparable to those of three-dimensional (3-D) finite-element codes. Experiments show that in practice it can realize much of the potential CPU time saving that parallel processing makes possible. 8 refs., 4 figs., 1 tab.

  20. Theoretical Atomic Physics code development IV: LINES, A code for computing atomic line spectra

    SciTech Connect

    Abdallah, J. Jr.; Clark, R.E.H.

    1988-12-01

    A new computer program, LINES, has been developed for simulating atomic line emission and absorption spectra using the accurate fine structure energy levels and transition strengths calculated by the (CATS) Cowan Atomic Structure code. Population distributions for the ion stages are obtained in LINES by using the Local Thermodynamic Equilibrium (LTE) model. LINES is also useful for displaying the pertinent atomic data generated by CATS. This report describes the use of LINES. Both CATS and LINES are part of the Theoretical Atomic PhysicS (TAPS) code development effort at Los Alamos. 11 refs., 9 figs., 1 tab.

  1. Recent improvements of reactor physics codes in MHI

    NASA Astrophysics Data System (ADS)

    Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki

    2015-12-01

    This paper introduces recent improvements to reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by earlier safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross section representation model for the core simulator have been developed to cover a wider range of core conditions corresponding to severe accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide calculation applicability with good accuracy for any core condition as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.

  2. Recent improvements of reactor physics codes in MHI

    SciTech Connect

    Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki

    2015-12-31

    This paper introduces recent improvements to reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by earlier safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross section representation model for the core simulator have been developed to cover a wider range of core conditions corresponding to severe accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide calculation applicability with good accuracy for any core condition as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.

  3. Better physical activity classification using smartphone acceleration sensor.

    PubMed

    Arif, Muhammad; Bilal, Mohsin; Kattan, Ahmed; Ahamed, S Iqbal

    2014-09-01

    Obesity is becoming one of the serious problems for the health of the worldwide population. Social interactions on mobile phones and computers via the internet through social e-networks are one of the major causes of a lack of physical activity. For the health specialist, it is important to track the physical activities of obese or overweight patients to supervise weight loss control. In this study, the acceleration sensor present in the smartphone is used to monitor the physical activity of the user. Physical activities including Walking, Jogging, Sitting, Standing, Walking upstairs and Walking downstairs are classified. Time domain features are extracted from the acceleration data recorded by the smartphone during different physical activities. The time and space complexity of the whole framework is reduced by optimal feature subset selection and pruning of instances. Classification results for the six physical activities are reported in this paper. Using simple time domain features, 99% classification accuracy is achieved. Furthermore, attribute subset selection is used to remove redundant features and to minimize the time complexity of the algorithm. A subset of 30 features produced more than 98% classification accuracy for the six physical activities.
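
    A minimal illustration of extracting time-domain features from windowed 3-axis accelerometer data is sketched below in Python. The particular feature set (per-axis mean, standard deviation, minimum, maximum, plus signal magnitude area and magnitude statistics) and the window length are assumptions made for the example, not the feature set used in the cited study.

        import numpy as np

        def time_domain_features(window):
            # Simple per-axis time-domain features from one window of 3-axis samples.
            # window: (n_samples, 3) acceleration array.
            mag = np.linalg.norm(window, axis=1)
            feats = [window.mean(axis=0), window.std(axis=0),
                     window.min(axis=0), window.max(axis=0)]
            sma = np.abs(window).sum() / len(window)        # signal magnitude area
            return np.concatenate([np.concatenate(feats), [sma, mag.mean(), mag.std()]])

        # one 2.56 s window at 50 Hz of fake accelerometer samples
        features = time_domain_features(np.random.default_rng(0).normal(0.0, 1.0, (128, 3)))
        print(features.shape)   # (15,)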

  4. A Spectral Verification of the HELIOS-2 Lattice Physics Code

    SciTech Connect

    D. S. Crawford; B. D. Ganapol; D. W. Nigg

    2012-11-01

    Core modeling of the Advanced Test Reactor (ATR) at INL is currently undergoing a significant update through the Core Modeling Update Project [1]. The intent of the project is to bring ATR core modeling in line with today's standards of computational efficiency and verification and validation practices. The HELIOS-2 lattice physics code [2] is the lead code among several reactor physics codes dedicated to modernizing ATR core analysis. This presentation is concerned with an independent verification of the HELIOS-2 spectral representation, including the slowing down and thermalization algorithm and its data dependency. Here, we describe and demonstrate a recently developed, simple cross section generation algorithm based entirely on analytical multigroup parameters for both the slowing down and thermal spectrum. The new capability features fine group detail to assess the flux and multiplication factor dependencies on cross section data sets, using the fundamental infinite medium as an example.

  5. Physical Model for the Evolution of the Genetic Code

    NASA Astrophysics Data System (ADS)

    Yamashita, Tatsuro; Narikiyo, Osamu

    2011-12-01

    Using the shape space of codons and tRNAs, we give a physical description of the evolution of the genetic code on the basis of the codon capture and ambiguous intermediate scenarios in a consistent manner. In the lowest dimensional version of our description, a physical quantity, the codon level, is introduced. In terms of the codon levels, the two scenarios are classified into two different routes of the evolutionary process. In the case of the ambiguous intermediate scenario, we perform an evolutionary simulation implementing cost selection of amino acids and confirm a rapid transition of the code change. Such rapidness reduces the discomfort of the non-unique translation of the code at the intermediate state, which is the weakness of the scenario. In the case of the codon capture scenario, survival against mutations under a mutational pressure minimizing GC content in genomes is simulated, and it is demonstrated that cells which experience only neutral mutations survive.

  6. Simulation of Laser Wake Field Acceleration using a 2.5D PIC Code

    NASA Astrophysics Data System (ADS)

    An, W. M.; Hua, J. F.; Huang, W. H.; Tang, Ch. X.; Lin, Y. Z.

    2006-11-01

    A 2.5D PIC simulation code has been developed to study LWFA (laser wakefield acceleration). Electron self-injection and the generation of a mono-energetic electron beam in LWFA are briefly discussed on the basis of the simulations. This year's experiment at the SILEX-I laser facility is also introduced.

  7. Accelerator Physics Challenges for the NSLS-II Project

    SciTech Connect

    Krinsky,S.

    2009-05-04

    The NSLS-II is an ultra-bright synchrotron light source based upon a 3-GeV storage ring with a 30-cell (15 super-period) double-bend-achromat lattice with damping wigglers used to lower the emittance below 1 nm. In this paper, we discuss the accelerator physics challenges for the design including: optimization of dynamic aperture; estimation of Touschek lifetime; achievement of required orbit stability; and analysis of ring impedance and collective effects.

  8. Accelerator physics highlights in the 1997/98 SLC run

    SciTech Connect

    Assmann, R.W.; Bane, K.L.F.; Barkow, T.

    1998-03-01

    The authors report various accelerator physics studies and improvements from the 1997/98 run at the Stanford Linear Collider (SLC). In particular, the authors discuss damping-ring lattice diagnostics, changes to the linac set up, fast control for linac rf phase stability, new emittance tuning strategies, wakefield reduction, modifications of the final-focus optics, longitudinal bunch shaping, and a novel spot-size control at the interaction point (IP).

  9. Accelerator physics in ERL based polarized electron ion collider

    SciTech Connect

    Hao, Yue

    2015-05-03

    This talk will present the current accelerator physics challenges and solutions in designing ERL-based polarized electron-hadron colliders, and illustrate them with examples from eRHIC and LHeC designs. These challenges include multi-pass ERL design, highly HOM-damped SRF linacs, cost effective FFAG arcs, suppression of kink instability due to beam-beam effect, and control of ion accumulation and fast ion instabilities.

  10. Production Level CFD Code Acceleration for Hybrid Many-Core Architectures

    NASA Technical Reports Server (NTRS)

    Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.

    2012-01-01

    In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and the next generation machines, codes will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of the time.

  11. Code System for Reactor Physics and Fuel Cycle Simulation.

    1999-04-21

    Version 00 VSOP94 (Very Superior Old Programs) is a system of codes linked together for the simulation of reactor life histories. It comprises neutron cross section libraries and processing routines, repeated neutron spectrum evaluation, 2-D diffusion calculation based on neutron flux synthesis with depletion and shut-down features, in-core and out-of-pile fuel management, fuel cycle cost analysis, and thermal hydraulics (at present restricted to Pebble Bed HTRs). Various techniques have been employed to accelerate the iterative processes and to optimize the internal data transfer. The code system has been used extensively for comparison studies of reactors, their fuel cycles, and related detailed features. In addition to its use in research and development work for the High Temperature Reactor, the system has been applied successfully to Light Water and Heavy Water Reactors.

  12. Code System for Reactor Physics and Fuel Cycle Simulation.

    SciTech Connect

    TEUCHERT, E.

    1999-04-21

    Version 00 VSOP94 (Very Superior Old Programs) is a system of codes linked together for the simulation of reactor life histories. It comprises neutron cross section libraries and processing routines, repeated neutron spectrum evaluation, 2-D diffusion calculation based on neutron flux synthesis with depletion and shut-down features, in-core and out-of-pile fuel management, fuel cycle cost analysis, and thermal hydraulics (at present restricted to Pebble Bed HTRs). Various techniques have been employed to accelerate the iterative processes and to optimize the internal data transfer. The code system has been used extensively for comparison studies of reactors, their fuel cycles, and related detailed features. In addition to its use in research and development work for the High Temperature Reactor, the system has been applied successfully to Light Water and Heavy Water Reactors.

  13. Theoretical atomic physics code development at Los Alamos

    SciTech Connect

    Clark, R.E.H.; Abdallah, J. Jr.

    1989-01-01

    We have developed a set of computer codes for atomic physics calculations at Los Alamos. These codes can calculate a large variety of data with a minimum of effort on the part of the user. In particular, differential cross sections and electron impact coherence parameters can be readily obtained for arbitrary ions or atoms. Currently, the theory consists of non-relativistic Hartree-Fock structure calculations and non-relativistic distorted-wave approximation or first-order many-body theory collisional calculations. 12 refs., 2 figs., 5 tabs.

  14. COMPASS, the COMmunity Petascale Project for Accelerator Science and Simulation, a broad computational accelerator physics initiative

    SciTech Connect

    J.R. Cary; P. Spentzouris; J. Amundson; L. McInnes; M. Borland; B. Mustapha; B. Norris; P. Ostroumov; Y. Wang; W. Fischer; A. Fedotov; I. Ben-Zvi; R. Ryne; E. Esarey; C. Geddes; J. Qiang; E. Ng; S. Li; C. Ng; R. Lee; L. Merminga; H. Wang; D.L. Bruhwiler; D. Dechow; P. Mullowney; P. Messmer; C. Nieter; S. Ovtchinnikov; K. Paul; P. Stoltz; D. Wade-Stein; W.B. Mori; V. Decyk; C.K. Huang; W. Lu; M. Tzoufras; F. Tsung; M. Zhou; G.R. Werner; T. Antonsen; T. Katsouleas

    2007-06-01

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction.

  15. COMPASS, the COMmunity Petascale project for Accelerator Science and Simulation, a broad computational accelerator physics initiative

    SciTech Connect

    Cary, J.R.; Spentzouris, P.; Amundson, J.; McInnes, L.; Borland, M.; Mustapha, B.; Ostroumov, P.; Wang, Y.; Fischer, W.; Fedotov, A.; Ben-Zvi, I.; Ryne, R.; Esarey, E.; Geddes, C.; Qiang, J.; Ng, E.; Li, S.; Ng, C.; Lee, R.; Merminga, L.; Wang, H.; Bruhwiler, D.L.; Dechow, D.; Mullowney, P.; Messmer, P.; Nieter, C.; Ovtchinnikov, S.; Paul, K.; Stoltz, P.; Wade-Stein, D.; Mori, W.B.; Decyk, V.; Huang, C.K.; Lu, W.; Tzoufras, M.; Tsung, F.; Zhou, M.; Werner, G.R.; Antonsen, T.; Katsouleas, T.; Morris, B.

    2007-07-16

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction.

  16. COMPASS, the COMmunity Petascale Project for Accelerator Science And Simulation, a Broad Computational Accelerator Physics Initiative

    SciTech Connect

    Cary, J.R.; Spentzouris, P.; Amundson, J.; McInnes, L.; Borland, M.; Mustapha, B.; Norris, B.; Ostroumov, P.; Wang, Y.; Fischer, W.; Fedotov, A.; Ben-Zvi, I.; Ryne, R.; Esarey, E.; Geddes, C.; Qiang, J.; Ng, E.; Li, S.; Ng, C.; Lee, R.; Merminga, L.; /Jefferson Lab /Tech-X, Boulder /UCLA /Colorado U. /Maryland U. /Southern California U.

    2007-11-09

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction.

  17. A Hilbert-Vlasov code for the study of high-frequency plasma beatwave accelerator

    SciTech Connect

    Ghizzo, A.; Bertrand, P.; Begue, M.L.; Johnston, T.W.; Shoucri, M.

    1996-04-01

    High-frequency beatwave simulations relevant to the University of California at Los Angeles (UCLA) experiment with a relativistic Eulerian hybrid Vlasov code are presented. These Hilbert-Vlasov simulations revealed a rich variety of phenomena associated with the fast-particle dynamics induced by the beatwave experiment for a high ratio of driver frequency to plasma frequency, ω_pump/ω_p ≈ 33. The present model allows one to extend detailed modeling to frequency ratios greater than the current practical maximum of 10 or so for Vlasov or particle-in-cell (PIC) codes, by replacing the Maxwell equations with mode equations for the electromagnetic Vlasov code. Numerical results, including beat-frequency chirping (i.e., pump frequency linearly decreasing with time), show that the amplitude limit due to relativistic detuning can be raised, with accelerated particles reaching ultrarelativistic energies at a high acceleration gradient of more than 25 GeV/m.

  18. Enhanced Verification Test Suite for Physics Simulation Codes

    SciTech Connect

    Kamm, J R; Brock, J S; Brandon, S T; Cotrell, D L; Johnson, B; Knupp, P; Rider, W; Trucano, T; Weirs, V G

    2008-10-10

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) Hydrodynamics; (b) Transport processes; and (c) Dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code is evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary--but not sufficient--step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of greater

  19. A Comparison Between GATE and MCNPX Monte Carlo Codes in Simulation of Medical Linear Accelerator.

    PubMed

    Sadoughi, Hamid-Reza; Nasseri, Shahrokh; Momennezhad, Mahdi; Sadeghi, Hamid-Reza; Bahreyni-Toosi, Mohammad-Hossein

    2014-01-01

    Radiotherapy dose calculations can be evaluated by Monte Carlo (MC) simulations with acceptable accuracy for dose prediction in complicated treatment plans. In this work, the Standard, Livermore and Penelope electromagnetic (EM) physics packages of GEANT4 Application for Tomographic Emission (GATE) 6.1 were compared against Monte Carlo N-Particle eXtended (MCNPX) 2.6 in the simulation of a 6 MV photon linac. To do this, the same geometry was used for the two codes. The reference values of percentage depth dose (PDD) and beam profiles were obtained using a 6 MV Elekta Compact linear accelerator, a Scanditronix water phantom and diode detectors. No significant deviations were found in the PDD, dose profile, energy spectrum, radial mean energy or photon radial distribution calculated by the Standard and Livermore EM models and MCNPX. Nevertheless, the Penelope model showed an extreme difference. Statistical uncertainty in all the simulations was <1%, namely 0.51%, 0.27%, 0.27% and 0.29% for PDDs of a 10 cm × 10 cm field size for the MCNPX, Standard, Livermore and Penelope models, respectively. Differences between spectra in various regions, in radial mean energy and in photon radial distribution were due to different cross-section and stopping-power data and to the fact that MCNPX and the three EM models do not simulate the physics processes in the same way. For example, in the Standard model, the photoelectron direction was sampled from the Gavrila-Sauter distribution, whereas the photoelectron moves in the same direction as the incident photon in the photoelectric process of the Livermore and Penelope models. Using the same primary electron beam, the Standard and Livermore EM models of GATE and MCNPX showed similar output, but re-tuning of the primary electron beam is needed for the Penelope model.

  20. The solar physics FORWARD codes: Now with widgets!

    NASA Astrophysics Data System (ADS)

    Forland, Blake; Gibson, Sarah; Dove, James; Kucera, Therese

    2014-01-01

    We have developed a suite of forward-modeling IDL codes (FORWARD) to convert analytic models or simulation data cubes into coronal observables, allowing a direct comparison with observations. Observables such as extreme ultraviolet, soft X-ray, white light, and polarization images from the Coronal Multichannel Polarimeter (CoMP) can be reproduced. The observer's viewpoint is also incorporated in the FORWARD analysis and the codes can output the results in a variety of forms in order to easily create movies, Carrington maps, or simply observable information at a particular point in the plane of the sky. We present a newly developed front end to the FORWARD codes which utilizes IDL widgets to facilitate ease of use by the solar physics community. Our ultimate goal is to provide as useful a tool as possible for a broad range of scientific applications.

  1. Developing The Physics Design for NDCX-II, A Unique Pulse-Compressing Ion Accelerator

    SciTech Connect

    Friedman, A; Barnard, J J; Cohen, R H; Grote, D P; Lund, S M; Sharp, W M; Faltens, A; Henestroza, E; Jung, J; Kwan, J W; Lee, E P; Leitner, M A; Logan, B G; Vay, J-L; Waldron, W L; Davidson, R C; Dorf, M; Gilson, E P; Kaganovich, I

    2009-09-24

    The Heavy Ion Fusion Science Virtual National Laboratory (a collaboration of LBNL, LLNL, and PPPL) is using intense ion beams to heat thin foils to the 'warm dense matter' regime at ≲ 1 eV, and is developing capabilities for studying target physics relevant to ion-driven inertial fusion energy. The need for rapid target heating led to the development of plasma-neutralized pulse compression, with current amplification factors exceeding 50 now routine on the Neutralized Drift Compression Experiment (NDCX). Construction of an improved platform, NDCX-II, has begun at LBNL with planned completion in 2012. Using refurbished induction cells from the Advanced Test Accelerator at LLNL, NDCX-II will compress a ~500 ns pulse of Li+ ions to ~1 ns while accelerating it to 3-4 MeV over ~15 m. Strong space charge forces are incorporated into the machine design at a fundamental level. We are using analysis, an interactive 1D PIC code (ASP) with optimizing capabilities and centroid tracking, and multi-dimensional PIC simulations with the Warp code, to develop the NDCX-II accelerator. This paper describes the computational models employed, and the resulting physics design for the accelerator.

  2. DEVELOPING THE PHYSICS DESIGN FOR NDCX-II, A UNIQUE PULSE-COMPRESSING ION ACCELERATOR

    SciTech Connect

    Friedman, A.; Barnard, J. J.; Cohen, R. H.; Grote, D. P.; Lund, S. M.; Sharp, W. M.; Faltens, A.; Henestroza, E.; Jung, J-Y.; Kwan, J. W.; Lee, E. P.; Leitner, M. A.; Logan, B. G.; Vay, J.-L.; Waldron, W. L.; Davidson, R.C.; Dorf, M.; Gilson, E.P.; Kaganovich, I.

    2009-07-20

    The Heavy Ion Fusion Science Virtual National Laboratory (a collaboration of LBNL, LLNL, and PPPL) is using intense ion beams to heat thin foils to the "warm dense matter" regime at ≲ 1 eV, and is developing capabilities for studying target physics relevant to ion-driven inertial fusion energy. The need for rapid target heating led to the development of plasma-neutralized pulse compression, with current amplification factors exceeding 50 now routine on the Neutralized Drift Compression Experiment (NDCX). Construction of an improved platform, NDCX-II, has begun at LBNL with planned completion in 2012. Using refurbished induction cells from the Advanced Test Accelerator at LLNL, NDCX-II will compress a ~500 ns pulse of Li+ ions to ~1 ns while accelerating it to 3-4 MeV over ~15 m. Strong space charge forces are incorporated into the machine design at a fundamental level. We are using analysis, an interactive 1D PIC code (ASP) with optimizing capabilities and centroid tracking, and multi-dimensional PIC simulations with the Warp code, to develop the NDCX-II accelerator. This paper describes the computational models employed, and the resulting physics design for the accelerator.

  3. Benchmarking the codes VORPAL, OSIRIS, and QuickPIC with Laser Wakefield Acceleration Simulations

    SciTech Connect

    Paul, Kevin; Huang, C.; Bruhwiler, D.L.; Mori, W.B.; Tsung, F.S.; Cormier-Michel, E.; Geddes, C.G.R.; Cowan, B.; Cary, J.R.; Esarey, E.; Fonseca, R.A.; Martins, S.F.; Silva, L.O.

    2008-09-08

    Three-dimensional laser wakefield acceleration (LWFA) simulations have recently been performed to benchmark the commonly used particle-in-cell (PIC) codes VORPAL, OSIRIS, and QuickPIC. The simulations were run in parallel on over 100 processors, using parameters relevant to LWFA with ultra-short Ti-Sapphire laser pulses propagating in hydrogen gas. Both first-order and second-order particle shapes were employed. We present the results of this benchmarking exercise, and show that accelerating gradients from full PIC agree for all values of a0 and that full and reduced PIC agree well for values of a0 approaching 4.
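
    For readers outside the field: the dimensionless quantity a0 appearing in this and the following record is the normalized laser vector potential (laser strength parameter). Its standard definition and a commonly quoted practical estimate are reproduced below for reference; they are background facts, not material from the abstracts themselves.

      a_0 \;=\; \frac{e E_0}{m_e\,\omega_0\,c}
      \;\simeq\; 0.85\,\lambda_0[\mu\mathrm{m}]\,\sqrt{I_0\,[10^{18}\ \mathrm{W/cm^2}]},

    where E_0 is the peak laser electric field, ω_0 the laser frequency, λ_0 the wavelength, and I_0 the peak intensity; a_0 ≳ 1 marks the relativistic regime.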

  4. Benchmarking the codes VORPAL, OSIRIS, and QuickPIC with Laser Wakefield Acceleration Simulations

    SciTech Connect

    Paul, K.; Bruhwiler, D. L.; Cowan, B.; Cary, J. R.; Huang, C.; Mori, W. B.; Tsung, F. S.; Cormier-Michel, E.; Geddes, C. G. R.; Esarey, E.; Fonseca, R. A.; Martins, S. F.; Silva, L. O.

    2009-01-22

    Three-dimensional laser wakefield acceleration (LWFA) simulations have recently been performed to benchmark the commonly used particle-in-cell (PIC) codes VORPAL, OSIRIS, and QuickPIC. The simulations were run in parallel on over 100 processors, using parameters relevant to LWFA with ultra-short Ti-Sapphire laser pulses propagating in hydrogen gas. Both first-order and second-order particle shapes were employed. We present the results of this benchmarking exercise, and show that accelerating gradients from full PIC agree for all values of a0 and that full and reduced PIC agree well for values of a0 approaching 4.

  5. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    SciTech Connect

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-02-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096^3 effective resolution and 16 GPUs with 8192^3 effective resolution, respectively.

  6. SCREAMm - modified code SCREAM to simulate the acceleration of a pulsed beam through the superconducting linac

    SciTech Connect

    Eidelman, Yu.; Nagaitsev, S.; Solyak, N.; /Fermilab

    2011-07-01

    The code SCREAM - SuperConducting RElativistic particle Accelerator siMulation - was significantly modified and improved. Some misprints in the formulae used have been fixed and a more realistic expression for the vector-sum introduced. The realistic model of Lorentz-force detuning (LFD) is developed and will be implemented to the code. A friendly GUI allows various parameters of the simulated problem to be changed easily and quickly. Effective control of various output data is provided. A change of various parameters during the simulation process is controlled by plotting the corresponding graphs 'on the fly'. A large collection of various graphs can be used to illustrate the results.

  7. Accelerator-driven molten-salt blankets: Physics issues

    SciTech Connect

    Houts, M.G.; Beard, C.A.; Buksa, J.J.; Davidson, J.W.; Durkee, J.W.; Perry, R.T.; Poston, D.I.

    1994-10-01

    A number of nuclear physics issues concerning the Los Alamos molten-salt accelerator-driven plutonium converter are discussed. General descriptions of several concepts using internal and external moderation are presented. Burnup and salt processing requirement calculations are presented for four concepts, indicating that both the high power density externally moderated concept and an internally moderated concept achieve total plutonium burnups approaching 90% at salt processing rates of less than 2 m^3 per year. Beginning-of-life reactivity temperature coefficients and system kinetic response are also discussed. Future research should investigate the effect of changing blanket composition on operational and safety characteristics.

  8. Operational Radiation Protection in High-Energy Physics Accelerators

    SciTech Connect

    Rokni, S.H.; Fasso, A.; Liu, J.C.; /SLAC

    2012-04-03

    An overview of operational radiation protection (RP) policies and practices at high-energy electron and proton accelerators used for physics research is presented. The different radiation fields and hazards typical of these facilities are described, as well as access control and radiation control systems. The implementation of an operational RP programme is illustrated, covering area and personnel classification and monitoring, radiation surveys, radiological environmental protection, management of induced radioactivity, radiological work planning and control, management of radioactive materials and wastes, facility dismantling and decommissioning, instrumentation and training.

  9. Innovative Applications of Genetic Algorithms to Problems in Accelerator Physics

    SciTech Connect

    Hofler, Alicia; Terzic, Balsa; Kramer, Matthew; Zvezdin, Anton; Morozov, Vasiliy; Roblin, Yves; Lin, Fanglei; Jarvis, Colin

    2013-01-01

    The genetic algorithm (GA) is a relatively new technique that implements the principles nature uses in biological evolution in order to optimize a multidimensional nonlinear problem. The GA works especially well for problems with a large number of local extrema, where traditional methods (such as conjugate gradient, steepest descent, and others) fail or, at best, underperform. The field of accelerator physics, among others, abounds with problems which lend themselves to optimization via GAs. In this paper, we report on the successful application of GAs in several problems related to the existing CEBAF facility, the proposed MEIC at Jefferson Lab, and a radio frequency (RF) gun based injector. These encouraging results are a step forward in optimizing accelerator design and provide an impetus for the application of GAs to other problems in the field. To that end, we discuss the details of the GAs used, including a newly devised enhancement that leads to improved convergence to the optimum, and we make recommendations for future GA developments and accelerator applications.
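
    To make the selection/crossover/mutation loop concrete, the following is a minimal sketch of a real-coded genetic algorithm of the generic kind applied to such optimization problems. The objective function, bounds, operators and settings below are placeholders for illustration and are not those of the cited work.

      # Minimal real-coded GA: truncation selection, blend crossover, Gaussian mutation.
      import numpy as np

      def genetic_minimize(objective, bounds, pop_size=60, generations=200,
                           mutation_rate=0.1, seed=1):
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, dtype=float).T
          dim = len(bounds)
          pop = rng.uniform(lo, hi, size=(pop_size, dim))
          for _ in range(generations):
              fitness = np.array([objective(ind) for ind in pop])
              order = np.argsort(fitness)                    # lower fitness is better
              parents = pop[order[:pop_size // 2]]           # truncation selection
              children = []
              while len(parents) + len(children) < pop_size:
                  a, b = parents[rng.integers(len(parents), size=2)]
                  alpha = rng.random(dim)
                  child = alpha * a + (1.0 - alpha) * b      # blend (arithmetic) crossover
                  mask = rng.random(dim) < mutation_rate
                  child[mask] += rng.normal(scale=0.1 * (hi - lo))[mask]
                  children.append(np.clip(child, lo, hi))
              pop = np.vstack([parents, np.array(children)])
          fitness = np.array([objective(ind) for ind in pop])
          return pop[np.argmin(fitness)]

      # Toy usage: a multimodal test function standing in for an accelerator figure of merit.
      best = genetic_minimize(lambda x: float(np.sum(x**2 - 10.0*np.cos(2*np.pi*x) + 10.0)),
                              bounds=[(-5.12, 5.12)] * 4)
      print(best)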

  10. Underground Accelerators for Precise Nuclear Physics: LUNA and DIANA

    NASA Astrophysics Data System (ADS)

    Leitner, Daniela

    2011-05-01

    Current stellar model simulations are at a level of precision that uncertainties in the nuclear-reaction rates are becoming significant for theoretical predictions and for the analysis of observational signatures. To address several open questions in cosmology, astrophysics, and non-Standard-Model neutrino physics, new high precision measurements of direct-capture nuclear fusion cross sections will be essential. At these low energies, fusion cross sections decrease exponentially with energy and are expected to approach femtobarn levels or less. The experimental difficulties in determining the low-energy cross sections are caused by large background rates associated with cosmic ray-induced reactions, background from natural radioactivity in the laboratory environment, and the beam-induced background on target impurities. Natural background can be reduced by careful shielding of the target and detector environment, and beam-induced background can be reduced by active shielding techniques through event identification, but it is difficult to reduce the background component from cosmic ray muons. An underground location has the advantage that the cosmic ray-induced background is reduced by several orders of magnitude, allowing the measurements to be pushed to far lower energies than feasible above ground. This has been clearly demonstrated at LUNA by the successful studies of critical reactions in the pp-chains and first reaction studies in the CNO cycles. The DIANA project (Dakota Ion Accelerators for Nuclear Astrophysics) is a collaboration between the University of Notre Dame, Michigan State University, Colorado School of Mines, Regis University, University of North Carolina, Western Michigan University, and Lawrence Berkeley National Laboratory, to build a nuclear astrophysics accelerator facility deep underground. The DIANA accelerator facility is being designed to achieve large laboratory reaction rates by delivering two orders of magnitude higher ion beams to a

  11. The CINDER'90 transmutation code package for use in accelerator applications in combination with MCNPX

    SciTech Connect

    Gallmeier, Franz X.; Ferguson, Phillip D.; Lu, Wei; Iverson, Erik B.; Muhrer, Guenter; Holloway, Shannon T.; Kelsey, Charles; Pitcher, Eric; Wohlmuther, Michael; Micklich, Bradley J.

    2010-01-01

    CINDER'90, a nuclear inventory code originated at the Bettis Atomic Power Laboratory for reactor irradiation calculations and extended for use in accelerator-driven systems and high-energy applications at Los Alamos National Laboratory, has been released as a code package for distribution through the Radiation Safety Information Computational Center (RSICC). The code package and its updated data libraries come with several scripts that allow calculations of multi-cell problems in combination with the radiation transport code MCNPX. A script was developed that manages all the pre-processing steps, extracting the necessary information from MCNPX output or from one input file, and that runs the CINDER'90 code for a requested list of MCNPX cells and for a requested time history. A second script was developed that extracts the decay photon sources from CINDER'90 output for a requested list of cells and for a requested irradiation or decay time step and builds a source deck for subsequent MCNPX calculations. Since the package release, improvements to CINDER'90 are underway in algorithms, libraries, and interfaces to transport codes.

  12. GPU-accelerated 3D neutron diffusion code based on finite difference method

    SciTech Connect

    Xu, Q.; Yu, G.; Wang, K.

    2012-07-01

    The finite difference method, as a traditional numerical solution to the neutron diffusion equation, although considered simpler and more precise than coarse-mesh nodal methods, faces a bottleneck to wide application because of the huge memory and prohibitive computation time it requires. In recent years, the concept of general-purpose computation on GPUs has provided a powerful computational engine for scientific research. In this study, a GPU-accelerated multi-group 3D neutron diffusion code based on the finite difference method was developed. First, a clean-sheet neutron diffusion code (3DFD-CPU) was written in C++ on the CPU architecture, and later ported to GPUs under NVIDIA's CUDA platform (3DFD-GPU). The IAEA 3D PWR benchmark problem was calculated in the numerical test, where three different codes, including the original CPU-based sequential code, a HYPRE (High Performance Preconditioners)-based diffusion code and CITATION, were used as counterpoints to test the efficiency and accuracy of the GPU-based program. The results demonstrate both high efficiency and adequate accuracy of the GPU implementation for the neutron diffusion equation. A speedup factor of about 46 was obtained using NVIDIA's GeForce GTX470 GPU card against a 2.50 GHz Intel Quad Q9300 CPU processor. Compared with the HYPRE-based code running in parallel on an 8-core tower server, a speedup of about 2 could still be observed. More encouragingly, without any mathematical acceleration technology, the GPU implementation ran about 5 times faster than CITATION, which was sped up by using the SOR method and the Chebyshev extrapolation technique. (authors)
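
    To indicate the kind of kernel such a code accelerates, the following NumPy sketch performs Jacobi sweeps for a one-group diffusion equation, -D∇²φ + Σ_a φ = S, with a 7-point finite-difference stencil on a uniform 3-D grid. The grid size, cross sections and boundary treatment are illustrative assumptions, not the cited multi-group setup.

      # One-group, 3-D finite-difference diffusion: Jacobi iteration on a uniform mesh.
      import numpy as np

      def jacobi_sweeps(phi, src, D, sigma_a, h, n_sweeps=200):
          denom = sigma_a + 6.0 * D / h**2
          for _ in range(n_sweeps):
              nbr = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                     np.roll(phi, 1, 1) + np.roll(phi, -1, 1) +
                     np.roll(phi, 1, 2) + np.roll(phi, -1, 2))   # 6 nearest neighbours
              phi = (src + D * nbr / h**2) / denom
              phi[0, :, :] = phi[-1, :, :] = 0.0                  # phi = 0 on the boundary (Dirichlet)
              phi[:, 0, :] = phi[:, -1, :] = 0.0
              phi[:, :, 0] = phi[:, :, -1] = 0.0
          return phi

      n, h = 64, 1.0                                              # 64^3 mesh, 1 cm spacing (illustrative)
      phi = np.zeros((n, n, n))
      src = np.zeros((n, n, n)); src[n//2, n//2, n//2] = 1.0      # point-like fixed source
      phi = jacobi_sweeps(phi, src, D=1.2, sigma_a=0.05, h=h)

    Each sweep touches every mesh point independently, which is what makes this class of solver map naturally onto a GPU.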

  13. Status of Continuum Edge Gyrokinetic Code Physics Development

    SciTech Connect

    Xu, X Q; Xiong, Z; Dorr, M R; Hittinger, J A; Kerbel, G D; Nevins, W M; Cohen, B I; Cohen, R H

    2005-05-31

    We are developing an edge gyrokinetic continuum simulation code to study the boundary plasma over a region extending from inside the H-mode pedestal across the separatrix to the divertor plates. A 4-D (ψ, θ, ε, μ) version of this code is presently being implemented, en route to a full 5-D version. A set of gyrokinetic equations [1] are discretized on a computational grid which incorporates X-point divertor geometry. The present implementation is a Method of Lines approach where the phase-space derivatives are discretized with finite differences and implicit backwards differencing formulas are used to advance the system in time. A fourth-order upwinding algorithm is used for particle cross-field drifts, parallel streaming, and acceleration. Boundary conditions at conducting material surfaces are implemented on the plasma side of the sheath. The Poisson-like equation is solved using GMRES with a multi-grid preconditioner from HYPRE. A nonlinear Fokker-Planck collision operator from STELLA [2] in (v∥, v⊥) has been streamlined and integrated into the gyrokinetic package using the same implicit Newton-Krylov solver and interpolating F and dF/dt|_coll to/from (ε, μ) space. With our 4D code we compute the ion thermal flux, ion parallel velocity, self-consistent electric field, and geo-acoustic oscillations, which we compare with standard neoclassical theory for core plasma parameters; and we study the transition from collisional to collisionless end-loss. In the real X-point geometry, we find that the particles are trapped near the outside midplane and in the X-point regions due to the magnetic configuration. The sizes of banana orbits are comparable to the pedestal width and/or the SOL width for energetic trapped particles. The effect of the real X-point geometry and edge plasma conditions on standard neoclassical theory will be evaluated, including a comparison of our 4D code with other kinetic

  14. Physics and engineering studies on the MITICA accelerator: comparison among possible design solutions

    SciTech Connect

    Agostinetti, P.; Antoni, V.; Chitarin, G.; Pilan, N.; Marcuzzi, D.; Serianni, G.; Veltri, P.; Cavenago, M.

    2011-09-26

    Consorzio RFX in Padova is currently using a comprehensive set of numerical and analytical codes for the physics and engineering design of the SPIDER (Source for Production of Ion of Deuterium Extracted from RF plasma) and MITICA (Megavolt ITER Injector Concept Advancement) experiments, planned to be built at Consorzio RFX. This paper presents a set of studies on different possible geometries for the MITICA accelerator, with the objective of comparing different design concepts and choosing the most suitable one (or ones) to be further developed and possibly adopted in the experiment. Different design solutions have been discussed and compared, taking into account their advantages and drawbacks from both the physics and engineering points of view.

  15. Theoretical atomic physics code development I: CATS: Cowan Atomic Structure Code

    SciTech Connect

    Abdallah, J. Jr.; Clark, R.E.H.; Cowan, R.D.

    1988-12-01

    An adaptation of R.D. Cowan's Atomic Structure program, CATS, has been developed as part of the Theoretical Atomic Physics (TAPS) code development effort at Los Alamos. CATS has been designed to be easy to run and to produce data files that can interface with other programs easily. The CATS produced data files currently include wave functions, energy levels, oscillator strengths, plane-wave-Born electron-ion collision strengths, photoionization cross sections, and a variety of other quantities. This paper describes the use of CATS. 10 refs.

  16. Lattice physics capabilities of the SCALE code system using TRITON

    SciTech Connect

    DeHart, M. D.

    2006-07-01

    This paper describes ongoing calculations used to validate the TRITON depletion module in SCALE for light water reactor (LWR) fuel lattices. TRITON has been developed to provide improved resolution for lattice physics calculations of mixed-oxide fuel assemblies as programs to burn such fuel in the United States begin to come online. Results are provided for coupled TRITON/PARCS analyses of an LWR core, in which TRITON was employed to generate appropriately weighted few-group nodal cross-section sets for use in core-level calculations with PARCS. Additional results are provided for code-to-code comparisons between TRITON and a suite of other depletion packages in the modeling of a conceptual next-generation boiling water reactor fuel assembly design. Results indicate that the set of SCALE functional modules used within TRITON provides an accurate means for lattice physics calculations. Because the transport solution within TRITON provides a generalized-geometry capability, this capability is extensible to a wide variety of non-traditional and advanced fuel assembly designs. (authors)

  17. ASP2012: Fundamental Physics and Accelerator Sciences in Africa

    NASA Astrophysics Data System (ADS)

    Darve, Christine

    2012-02-01

    Much remains to be done to improve education and scientific research in Africa. Supported by the international scientific community, our initiative has been to contribute to fostering science in sub-Saharan Africa by establishing a biennial school on fundamental subatomic physics and its applications. The school is based on a close interplay between theoretical, experimental, and applied physics. The lectures are addressed to students and young researchers with at least four years of university education. The aim of the school is to develop the capacity to interpret and capitalize on the results of current and future physics experiments with particle accelerators, and thereby to spread education for innovation in related applications and technologies, such as medicine and information science. Following the worldwide success of the first school edition, which gathered 65 students for three weeks in Stellenbosch (South Africa) in August 2010, the second edition will be hosted in Ghana from July 15 to August 4, 2012. The school is a non-profit organization which provides partial or full financial support to 50 of the selected students, with priority given to sub-Saharan African students.

  18. Data Evaluation Acquired Talys 1.0 Code to Produce 111In from Various Accelerator-Based Reactions

    NASA Astrophysics Data System (ADS)

    Alipoor, Zahra; Gholamzadeh, Zohreh; Sadeghi, Mahdi; Seyyedi, Solaleh; Aref, Morteza

    The physical decay parameters of Indium-111 as a β-emitter radionuclide show potential for radiodiagnostic and radiotherapeutic purposes. Medical investigators have shown that 111In is an important radionuclide for locating and imaging certain tumors and for visualization of the lymphatic system, and thousands of labeling reactions have been suggested. The TALYS 1.0 code was used here to calculate excitation functions of the 112/114-118Sn+p, 110Cd+3He, 109Ag+3He, 111-114Cd+p, 110/111Cd+d and 109Ag+α reactions to produce 111In using low- and medium-energy accelerators. Calculations were performed up to 200 MeV. Appropriate target thicknesses have been assumed based on energy-loss calculations with the SRIM code. Theoretical integral yields for all the latter reactions were calculated. The TALYS 1.0 code predicts that the production of a few curies of 111In is feasible using a target of isotopically highly enriched 112Cd and a proton energy between 12 and 25 MeV, with a production rate of 248.97 MBq/(μA·h). Minimum impurities would be produced during proton irradiation of an enriched 111Cd target, yielding a production rate for 111In of 67.52 MBq/(μA·h).
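
    The quoted production rates are of the kind obtained from a thick-target yield integral that combines calculated excitation functions with stopping powers such as those from SRIM. A commonly used form of that integral is reproduced below as background; it is an assumption about the type of calculation involved, not a formula taken from the paper.

      A_{\mathrm{EOB}} \;=\; \frac{N_A\,H}{M}\,\frac{I}{z e}\,
      \left(1 - e^{-\lambda t_{\mathrm{irr}}}\right)
      \int_{E_{\mathrm{out}}}^{E_{\mathrm{in}}}
      \frac{\sigma(E)}{\mathrm{d}E/\mathrm{d}(\rho x)}\,\mathrm{d}E,

    where A_EOB is the activity at end of bombardment, H the isotopic abundance (enrichment) of the target nuclide, M its molar mass, I the beam current, z the projectile charge state, λ the decay constant of the product, t_irr the irradiation time, σ(E) the excitation function, and dE/d(ρx) the mass stopping power.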

  19. The Scanning Electron Microscope As An Accelerator For The Undergraduate Advanced Physics Laboratory

    SciTech Connect

    Peterson, Randolph S.; Berggren, Karl K.; Mondol, Mark

    2011-06-01

    Few universities or colleges have an accelerator for use with advanced physics laboratories, but many of these institutions have a scanning electron microscope (SEM) on site, often in the biology department. As an accelerator for the undergraduate, advanced physics laboratory, the SEM is an excellent substitute for an ion accelerator. Although there are no nuclear physics experiments that can be performed with a typical 30 kV SEM, there is an opportunity for experimental work on accelerator physics, atomic physics, electron-solid interactions, and the basics of modern e-beam lithography.

  20. Health Physics Code System for Evaluating Accidents Involving Radioactive Materials.

    SciTech Connect

    2014-10-01

    Version 03 The HOTSPOT Health Physics codes were created to provide Health Physics personnel with a fast, field-portable calculational tool for evaluating accidents involving radioactive materials. HOTSPOT codes provide a first-order approximation of the radiation effects associated with the atmospheric release of radioactive materials. The developer's website is: http://www.llnl.gov/nhi/hotspot/. Four general programs, PLUME, EXPLOSION, FIRE, and RESUSPENSION, calculate a downwind assessment following the release of radioactive material resulting from a continuous or puff release, explosive release, fuel fire, or an area contamination event. Additional programs deal specifically with the release of plutonium, uranium, and tritium to expedite an initial assessment of accidents involving nuclear weapons. The FIDLER program can calibrate radiation survey instruments for ground survey measurements and initial screening of personnel for possible plutonium uptake in the lung. The HOTSPOT codes are fast, portable, easy to use, and fully documented in electronic help files. HOTSPOT supports color high-resolution monitors and printers for concentration plots and contours. The codes have been extensively used by the DOE community since 1985. Tables and graphical output can be directed to the computer screen, printer, or a disk file. The graphical output consists of dose and ground contamination as a function of plume centerline downwind distance, and radiation dose and ground contamination contours. Users have the option of displaying scenario text on the plots. HOTSPOT 3.0.1 fixes three significant Windows 7 issues: the executable now installs properly under "Program Files/HotSpot 3.0"; the installation package is smaller, with the dependency on older Windows DLL files removed; and forms now properly scale based on DPI instead of font for users who change their screen resolution to something other than 100%, which is a more common practice in Windows 7.

  1. Health Physics Code System for Evaluating Accidents Involving Radioactive Materials.

    2014-10-01

    Version 03 The HOTSPOT Health Physics codes were created to provide Health Physics personnel with a fast, field-portable calculational tool for evaluating accidents involving radioactive materials. HOTSPOT codes provide a first-order approximation of the radiation effects associated with the atmospheric release of radioactive materials. The developer's website is: http://www.llnl.gov/nhi/hotspot/. Four general programs, PLUME, EXPLOSION, FIRE, and RESUSPENSION, calculate a downwind assessment following the release of radioactive material resulting from a continuous or puff release, explosive release, fuel fire, or an area contamination event. Additional programs deal specifically with the release of plutonium, uranium, and tritium to expedite an initial assessment of accidents involving nuclear weapons. The FIDLER program can calibrate radiation survey instruments for ground survey measurements and initial screening of personnel for possible plutonium uptake in the lung. The HOTSPOT codes are fast, portable, easy to use, and fully documented in electronic help files. HOTSPOT supports color high-resolution monitors and printers for concentration plots and contours. The codes have been extensively used by the DOE community since 1985. Tables and graphical output can be directed to the computer screen, printer, or a disk file. The graphical output consists of dose and ground contamination as a function of plume centerline downwind distance, and radiation dose and ground contamination contours. Users have the option of displaying scenario text on the plots. HOTSPOT 3.0.1 fixes three significant Windows 7 issues: the executable now installs properly under "Program Files/HotSpot 3.0"; the installation package is smaller, with the dependency on older Windows DLL files removed; and forms now properly scale based on DPI instead of font for users who change their screen resolution to something other than 100%, which is a more common practice in Windows 7.

  2. A GPU accelerated Barnes-Hut tree code for FLASH4

    NASA Astrophysics Data System (ADS)

    Lukat, Gunther; Banerjee, Robi

    2016-05-01

    We present a GPU-accelerated CUDA-C implementation of the Barnes-Hut (BH) tree code for calculating the gravitational potential on octree adaptive meshes. The tree code algorithm is implemented within the FLASH4 adaptive mesh refinement (AMR) code framework and is therefore fully MPI parallel. We describe the algorithm and present test results that demonstrate its accuracy and performance in comparison to the algorithms available in the current FLASH4 version. We use a Maclaurin spheroid to test the accuracy of our new implementation and use spherical, collapsing cloud cores with effective AMR to carry out performance tests, also in comparison with previous gravity solvers. Depending on the setup and the GPU/CPU ratio, we find a speedup for the gravity unit of at least a factor of 3 and up to 60 in comparison to the gravity solvers implemented in the FLASH4 code. We find an overall speedup factor for full simulations of at least a factor of 1.6 and up to a factor of 10.
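
    The essential idea of a Barnes-Hut tree walk is the opening-angle test: a tree node is treated as a single point mass whenever its size-to-distance ratio falls below a threshold θ, otherwise its children are descended. The compact Python sketch below shows only that acceptance test and the monopole approximation; the node construction, softening value and θ are illustrative assumptions and bear no relation to the GPU/AMR implementation described above.

      # Minimal Barnes-Hut acceptance test and monopole force (pure Python, 3-D).
      import numpy as np

      class Node:
          def __init__(self, mass, com, size, children=()):
              self.mass, self.com = mass, np.asarray(com, dtype=float)
              self.size, self.children = size, children

      def accel(node, pos, theta=0.5, eps=1e-2, G=1.0):
          """Gravitational acceleration at `pos` due to the mass contained in `node`."""
          d = node.com - pos
          r = np.sqrt(d @ d + eps**2)                  # softened distance
          if not node.children or node.size / r < theta:
              return G * node.mass * d / r**3          # node accepted: treat as point mass
          return sum(accel(c, pos, theta, eps, G) for c in node.children)

      # Toy usage: two leaf "particles" grouped under one parent cell.
      leaves = [Node(1.0, [0.0, 0.0, 0.0], 0.0), Node(1.0, [1.0, 0.0, 0.0], 0.0)]
      root = Node(2.0, [0.5, 0.0, 0.0], 1.0, children=leaves)
      print(accel(root, np.array([10.0, 0.0, 0.0])))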

  3. The r-Java 2.0 code: nuclear physics

    NASA Astrophysics Data System (ADS)

    Kostka, M.; Koning, N.; Shand, Z.; Ouyed, R.; Jaikumar, P.

    2014-08-01

    Aims: We present r-Java 2.0, a nucleosynthesis code for open use that performs r-process calculations, along with a suite of other analysis tools. Methods: Equipped with a straightforward graphical user interface, r-Java 2.0 is capable of simulating nuclear statistical equilibrium (NSE), calculating r-process abundances for a wide range of input parameters and astrophysical environments, computing the mass fragmentation from neutron-induced fission and studying individual nucleosynthesis processes. Results: In this paper we discuss enhancements to this version of r-Java, especially the ability to solve the full reaction network. The sophisticated fission methodology incorporated in r-Java 2.0 that includes three fission channels (beta-delayed, neutron-induced, and spontaneous fission), along with computation of the mass fragmentation, is compared to the upper limit on mass fission approximation. The effects of including beta-delayed neutron emission on r-process yield is studied. The role of Coulomb interactions in NSE abundances is shown to be significant, supporting previous findings. A comparative analysis was undertaken during the development of r-Java 2.0 whereby we reproduced the results found in the literature from three other r-process codes. This code is capable of simulating the physical environment of the high-entropy wind around a proto-neutron star, the ejecta from a neutron star merger, or the relativistic ejecta from a quark nova. Likewise the users of r-Java 2.0 are given the freedom to define a custom environment. This software provides a platform for comparing proposed r-process sites.

  4. Topics in radiation at accelerators: Radiation physics for personnel and environmental protection

    SciTech Connect

    Cossairt, J.D.

    1996-10-01

    In the first chapter, terminology, physical and radiological quantities, and units of measurement used to describe the properties of accelerator radiation fields are reviewed. The general considerations of primary radiation fields pertinent to accelerators are discussed. The primary radiation fields produced by electron beams are described qualitatively and quantitatively. In the same manner the primary radiation fields produced by proton and ion beams are described. Subsequent chapters describe: shielding of electrons and photons at accelerators; shielding of proton and ion accelerators; low energy prompt radiation phenomena; induced radioactivity at accelerators; topics in radiation protection instrumentation at accelerators; and accelerator radiation protection program elements.

  5. GeNN: a code generation framework for accelerated brain simulations.

    PubMed

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/. PMID:26740369

  6. GeNN: a code generation framework for accelerated brain simulations

    PubMed Central

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/. PMID:26740369

  7. Novel methods in the Particle-In-Cell accelerator Code-Framework Warp

    SciTech Connect

    Vay, J-L; Grote, D. P.; Cohen, R. H.; Friedman, A.

    2012-12-26

    The Particle-In-Cell (PIC) Code-Framework Warp is being developed by the Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL) to guide the development of accelerators that can deliver beams suitable for high-energy density experiments and implosion of inertial fusion capsules. It is also applied in various areas outside the Heavy Ion Fusion program to the study and design of existing and next-generation high-energy accelerators, including the study of electron cloud effects and laser wakefield acceleration for example. This study presents an overview of Warp's capabilities, summarizing recent original numerical methods that were developed by the HIFS-VNL (including PIC with adaptive mesh refinement, a large-timestep 'drift-Lorentz' mover for arbitrarily magnetized species, a relativistic Lorentz invariant leapfrog particle pusher, simulations in Lorentz-boosted frames, an electromagnetic solver with tunable numerical dispersion and efficient stride-based digital filtering), with special emphasis on the description of the mesh refinement capability. In addition, selected examples of the applications of the methods to the abovementioned fields are given.
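
    The particle push named in this record is a specialized relativistic, Lorentz-invariant leapfrog scheme. Purely for orientation, the sketch below shows the standard relativistic Boris leapfrog step that such pushers generalize; it is a textbook algorithm, not Warp's actual implementation, and the field values are placeholders.

      # Generic relativistic Boris step for u = gamma*v (SI units); illustration only.
      import numpy as np

      Q, M, C = -1.602e-19, 9.109e-31, 2.998e8        # electron charge, mass, speed of light

      def boris_push(u, E, B, dt):
          """Advance the momentum-like variable u by one time step dt in fields E, B."""
          u_minus = u + (Q * dt / (2 * M)) * E                 # first half electric kick
          gamma = np.sqrt(1.0 + (u_minus @ u_minus) / C**2)
          t = (Q * dt / (2 * M * gamma)) * B                   # rotation vector
          s = 2.0 * t / (1.0 + t @ t)
          u_prime = u_minus + np.cross(u_minus, t)
          u_plus = u_minus + np.cross(u_prime, s)              # magnetic rotation
          return u_plus + (Q * dt / (2 * M)) * E               # second half electric kick

      u = np.zeros(3)                                          # particle initially at rest
      for _ in range(100):                                     # gyrate in uniform E, B fields
          u = boris_push(u, E=np.array([0.0, 1e3, 0.0]),
                         B=np.array([0.0, 0.0, 0.1]), dt=1e-12)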

  8. Detailed numerical modeling of electron injection in the Laser Wakefield Accelerator: Particle Tracking Diagnostics in PIC codes

    NASA Astrophysics Data System (ADS)

    Fonseca, R. A.; Gargaté, L.; Martins, S. F.; Peano, F.; Vieira, J.; Silva, L. O.; Mori, W. B.

    2007-11-01

    The field of laser plasma acceleration has witnessed significant development over recent years, with experimental demonstrations of the production of quasi mono-energetic electron bunches, with charges of ~50 pC and energies of up to 1 GeV [1]. Fully relativistic PIC codes, such as OSIRIS [2], are the best tools for modeling these problems, but sophisticated visualization and data analysis routines [3] are required to extract physical meaning from the large volumes of data produced. We report on the new particle tracking diagnostics being added into the OSIRIS framework and their application to this problem, specifically targeting self-injection. Details on the tracking algorithm implementation and post-processing routines are given. Simulation results from laser wakefield accelerator scenarios will be presented, with detailed analysis of the self-injection of the electron bunches. [1] W.P. Leemans et al, Nature Phys. 2 696 (2006) [2] R. A. Fonseca et al., LNCS 2331, 342, (2002) [3] R. A. Fonseca, Proceedings of ISSS-7, (2005)

  9. ASTRAL Code for Problems of Astrophysics and High Energy Density Physics

    NASA Astrophysics Data System (ADS)

    Chizhkova, N. E.; Ionov, G. V.; Karlykhanov, N. G.; Simonenko, V. A.

    2006-08-01

    The paper gives a brief description of ASTRAL code package for astrophysics simulations, including features in the implementation of basic physical processes and two tests. A sketch of the object code structure is provided.

  10. Acceleration of the Geostatistical Software Library (GSLIB) by code optimization and hybrid parallel programming

    NASA Astrophysics Data System (ADS)

    Peredo, Oscar; Ortiz, Julián M.; Herrero, José R.

    2015-12-01

    The Geostatistical Software Library (GSLIB) has been used in the geostatistical community for more than thirty years. It was designed as a bundle of sequential Fortran codes, and today it is still in use by many practitioners and researchers. Despite its widespread use, few attempts have been reported to bring this package into the multi-core era. Using all CPU resources, GSLIB algorithms can handle large datasets and grids, where tasks are compute- and memory-intensive applications. In this work, a methodology is presented to accelerate GSLIB applications using code optimization and hybrid parallel processing, specifically for compute-intensive applications. Minimal code modifications are introduced to reduce, as much as possible, the elapsed execution time of the studied routines. If multi-core processing is available, the user can activate OpenMP directives to speed up the execution using all resources of the CPU. If multi-node processing is available, the execution is enhanced using MPI messages between the compute nodes. Four case studies are presented: experimental variogram calculation, kriging estimation, and sequential Gaussian and indicator simulation. For each application, three scenarios (small, large and extra large) are tested using a desktop environment with 4 CPU-cores and a multi-node server with 128 CPU-nodes. Elapsed times, speedup and efficiency results are shown.
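
    The sketch below illustrates, in Python rather than GSLIB's Fortran, the kind of embarrassingly parallel structure exploited above: an experimental semivariogram whose lag bins are farmed out to a process pool. The routine names and the toy data are hypothetical; the real package parallelizes its existing Fortran kernels with OpenMP and MPI.

    import numpy as np
    from multiprocessing import Pool

    # Experimental semivariogram with lag bins distributed over a process pool.
    # This is an illustrative sketch of the data-parallel structure only; it is
    # not GSLIB code, and the routine names are hypothetical.
    def semivariogram_bin(args):
        coords, values, lag_lo, lag_hi = args
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        i, j = np.where((d >= lag_lo) & (d < lag_hi))
        keep = i < j                                    # count each pair once
        i, j = i[keep], j[keep]
        if i.size == 0:
            return 0.5 * (lag_lo + lag_hi), float("nan")
        gamma = 0.5 * np.mean((values[i] - values[j]) ** 2)
        return 0.5 * (lag_lo + lag_hi), gamma

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        coords = rng.uniform(0.0, 100.0, size=(500, 2))
        values = rng.normal(size=500)
        edges = np.linspace(0.0, 50.0, 11)
        jobs = [(coords, values, lo, hi) for lo, hi in zip(edges[:-1], edges[1:])]
        with Pool() as pool:
            for h, g in pool.map(semivariogram_bin, jobs):
                print(f"lag {h:5.1f}: gamma = {g:.3f}")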

  11. Pelegant : a parallel accelerator simulation code for electron generation and tracking.

    SciTech Connect

    Wang, Y.; Borland, M. D.; Accelerator Systems Division

    2006-01-01

    elegant is a general-purpose code for electron accelerator simulation that has a worldwide user base. Recently, many of the time-intensive elements were parallelized using MPI. Development has used modest Linux clusters and the BlueGene/L supercomputer at Argonne National Laboratory. This has provided very good performance for some practical simulations, such as multiparticle tracking with synchrotron radiation and emittance blow-up in the vertical rf kick scheme. The effort began with development of a concept that allowed for gradual parallelization of the code, using the existing beamline-element classification table in elegant. This was crucial as it allowed parallelization without major changes in code structure and without major conflicts with the ongoing evolution of elegant. Because of rounding error and finite machine precision, validating a parallel program against a uniprocessor program with the requirement of bitwise identical results is notoriously difficult. We will report validating simulation results of parallel elegant against those of serial elegant by applying Kahan's algorithm to improve accuracy dramatically for both versions. The quality of random numbers in a parallel implementation is very important for some simulations. Some practical experience with generating parallel random numbers by offsetting the seed of each random sequence according to the processor ID will be reported.
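
    For reference, the sketch below shows generic Kahan compensated summation, the kind of technique the abstract refers to for reducing rounding error. It is a textbook sketch in Python, not code taken from elegant or Pelegant.

    # Kahan compensated summation: carries a correction term so that low-order
    # bits lost in each addition are fed back into the next one.
    def kahan_sum(values):
        total = 0.0
        c = 0.0                      # running compensation
        for v in values:
            y = v - c
            t = total + y
            c = (t - total) - y      # the part of y that was rounded away
            total = t
        return total

    # Example: many tiny terms added to a large one are lost by naive summation.
    vals = [1.0] + [1e-16] * 1_000_000
    print(sum(vals))        # 1.0 (the small terms vanish)
    print(kahan_sum(vals))  # ~1.0000000001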

  12. Doing accelerator physics using SDDS, UNIX, and EPICS

    SciTech Connect

    Borland, M.; Emery, L.; Sereno, N.

    1995-12-31

    The use of the SDDS (Self-Describing Data Sets) file protocol, together with the UNIX operating system and EPICS (Experimental Physics and Industrial Controls System), has proved powerful during the commissioning of the APS (Advanced Photon Source) accelerator complex. The SDDS file protocol has permitted a tool-oriented approach to developing applications, wherein generic programs are written that function as part of multiple applications. While EPICS-specific tools were written for data collection, automated experiment execution, closed-loop control, and so forth, data processing and display are done with the SDDS Toolkit. Experiments and data reduction are implemented as UNIX shell scripts that coordinate the execution of EPICS-specific tools and SDDS tools. Because of the power and generic nature of the individual tools and of the UNIX shell environment, automated experiments can be prepared and executed rapidly in response to unanticipated needs or new ideas. Examples are given of application of this methodology to beam motion characterization, beam-position-monitor offset measurements, and klystron characterization.

  13. Review of Basic Physics of Laser-Accelerated Charged-Particle Beams

    SciTech Connect

    Suk, H.; Hur, M. S.; Jang, H.; Kim, J.

    2007-07-11

    A laser-plasma wake wave can accelerate charged particles, especially electrons, with an enormously large acceleration gradient. The electrons in the plasma wake wave have complicated motions in the longitudinal and transverse directions. In this paper, the basic physics of the laser-accelerated electron beam is reviewed.

  14. Physics and numerics of the tensor code (incomplete preliminary documentation)

    SciTech Connect

    Burton, D.E.; Lettis, L.A. Jr.; Bryan, J.B.; Frary, N.R.

    1982-07-15

    The present TENSOR code is a descendant of a code originally conceived by Maenchen and Sack and later adapted by Cherry. Originally, the code was a two-dimensional Lagrangian explicit finite difference code which solved the equations of continuum mechanics. Since then, implicit and arbitrary Lagrange-Euler (ALE) algorithms have been added. The code has been used principally to solve problems involving the propagation of stress waves through earth materials, and considerable development of rock and soil constitutive relations has been done. The code has been applied extensively to the containment of underground nuclear tests, nuclear and high explosive surface and subsurface cratering, and energy and resource recovery. TENSOR is supported by a substantial array of ancillary routines. The initial conditions are set up by a generator code TENGEN. ZON is a multipurpose code which can be used for zoning, rezoning, overlaying, and linking from other codes. Linking from some codes is facilitated by another code RADTEN. TENPLT is a fixed time graphics code which provides a wide variety of plotting options and output devices, and which is capable of producing computer movies by postprocessing problem dumps. Time history graphics are provided by the TIMPLT code from temporal dumps produced during production runs. While TENSOR can be run as a stand-alone controllee, a special controller code TCON is available to better interface the code with the LLNL computer system during production jobs. In order to standardize compilation procedures and provide quality control, a special compiler code BC is used. A number of equation of state generators are available among them ROC and PMUGEN.

  15. Physics models in the toroidal transport code PROCTR

    SciTech Connect

    Howe, H.C.

    1990-08-01

    The physics models that are contained in the toroidal transport code PROCTR are described in detail. Time- and space-dependent models are included for the plasma hydrogenic-ion, helium, and impurity densities, the electron and ion temperatures, the toroidal rotation velocity, and the toroidal current profile. Time- and depth-dependent models for the trapped and mobile hydrogenic particle concentrations in the wall and a time-dependent point model for the number of particles in the limiter are also included. Time-dependent models for neutral particle transport, neutral beam deposition and thermalization, fusion heating, impurity radiation, pellet injection, and the radial electric potential are included and recalculated periodically as the time-dependent models evolve. The plasma solution is obtained either in simple flux coordinates, where the radial shift of each elliptical, toroidal flux surface is included to maintain an approximate pressure equilibrium, or in general three-dimensional torsatron coordinates represented by series of helical harmonics. The detailed coupling of the plasma, scrape-off layer, limiter, and wall models through the neutral transport model makes PROCTR especially suited for modeling of recycling and particle control in toroidal plasmas. The model may also be used in a steady-state profile analysis mode for studying energy and particle balances starting with measured plasma profiles.

  16. The Los Alamos suite of relativistic atomic physics codes

    SciTech Connect

    Fontes, C. J.; Zhang, H. L.; Jr, J. Abdallah; Clark, R. E. H.; Kilcrease, D. P.; Colgan, J.; Cunningham, R. T.; Hakel, P.; Magee, N. H.; Sherrill, M. E.

    2015-05-28

    The Los Alamos SuitE of Relativistic (LASER) atomic physics codes is a robust, mature platform that has been used to model highly charged ions in a variety of ways. The suite includes capabilities for calculating data related to fundamental atomic structure, as well as the processes of photoexcitation, electron-impact excitation and ionization, photoionization and autoionization within a consistent framework. These data can be of a basic nature, such as cross sections and collision strengths, which are useful in making predictions that can be compared with experiments to test fundamental theories of highly charged ions, such as quantum electrodynamics. The suite can also be used to generate detailed models of energy levels and rate coefficients, and to apply them in the collisional-radiative modeling of plasmas over a wide range of conditions. Such modeling is useful, for example, in the interpretation of spectra generated by a variety of plasmas. In this work, we provide a brief overview of the capabilities within the Los Alamos relativistic suite along with some examples of its application to the modeling of highly charged ions.

  17. The Los Alamos suite of relativistic atomic physics codes

    DOE PAGES

    Fontes, C. J.; Zhang, H. L.; Jr, J. Abdallah; Clark, R. E. H.; Kilcrease, D. P.; Colgan, J.; Cunningham, R. T.; Hakel, P.; Magee, N. H.; Sherrill, M. E.

    2015-05-28

    The Los Alamos SuitE of Relativistic (LASER) atomic physics codes is a robust, mature platform that has been used to model highly charged ions in a variety of ways. The suite includes capabilities for calculating data related to fundamental atomic structure, as well as the processes of photoexcitation, electron-impact excitation and ionization, photoionization and autoionization within a consistent framework. These data can be of a basic nature, such as cross sections and collision strengths, which are useful in making predictions that can be compared with experiments to test fundamental theories of highly charged ions, such as quantum electrodynamics. The suite can also be used to generate detailed models of energy levels and rate coefficients, and to apply them in the collisional-radiative modeling of plasmas over a wide range of conditions. Such modeling is useful, for example, in the interpretation of spectra generated by a variety of plasmas. In this work, we provide a brief overview of the capabilities within the Los Alamos relativistic suite along with some examples of its application to the modeling of highly charged ions.

  18. LCODE: A parallel quasistatic code for computationally heavy problems of plasma wakefield acceleration

    NASA Astrophysics Data System (ADS)

    Sosedkin, A. P.; Lotov, K. V.

    2016-09-01

    LCODE is a freely distributed quasistatic 2D3V code for simulating plasma wakefield acceleration, specialized mainly in resource-efficient studies of the long-term propagation of ultrarelativistic particle beams in plasmas. The beam is modeled with fully relativistic macro-particles in a simulation window copropagating at the speed of light; the plasma can be simulated with either a kinetic or a fluid model. Several techniques are used to obtain exceptional numerical stability and precision while maintaining high resource efficiency, enabling LCODE to simulate the evolution of long particle beams over long propagation distances even on a laptop. A recent upgrade enabled LCODE to perform the calculations in parallel. A pipeline of several LCODE processes communicating via MPI (Message-Passing Interface) is capable of executing multiple consecutive time steps of the simulation in a single pass. This approach can speed up the calculations by hundreds of times.
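
    The pipeline idea described above can be sketched with mpi4py as follows: rank r applies time step r to beam slices that stream through the simulation window, receiving each slice from rank r-1 and passing the updated slice to rank r+1, so several consecutive time steps are in flight at once. The trivial "advance" function and array sizes are placeholders, not LCODE internals.

    from mpi4py import MPI
    import numpy as np

    # Toy pipeline of MPI processes, each applying one time step to the
    # beam slices flowing through it.
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    def advance_one_step(beam_slice):
        return beam_slice * 0.999          # stand-in for the real beam physics

    n_slices = 8
    for s in range(n_slices):
        if rank == 0:
            beam_slice = np.ones(1024)     # a fresh slice enters the pipeline
        else:
            beam_slice = comm.recv(source=rank - 1, tag=s)
        beam_slice = advance_one_step(beam_slice)
        if rank < size - 1:
            comm.send(beam_slice, dest=rank + 1, tag=s)

    # Run with, e.g.: mpirun -n 4 python pipeline_sketch.py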

  19. Shielding Assessment of the MYRRHA Accelerator-Driven System Using the MCNP Code

    NASA Astrophysics Data System (ADS)

    Coeck, M.; Aoust, Th.; Vermeersch, F.; Abderrahim, A.

    The MYRRHA project includes the design and development of an accelerator-driven system (ADS) aimed at providing protons and neutrons for various R&D applications. With regard to the safety aspects, the assessment of the shielding and of the dose rates around the installation is an important task. In a first approach, standard semi-empirical equations and attenuation factors found in the literature were applied. A more detailed determination of the neutron flux around the reactor is made here by Monte Carlo simulation with the code MCNP4B. The results of the shielding assessment give an estimate of the neutron flux at several positions around the core vessel and along the beam tube. Dose rates will be determined by applying the ICRP 74 conversion factors.

  20. Genetic algorithms and their applications in accelerator physics

    SciTech Connect

    Hofler, Alicia S.

    2013-12-01

    Multi-objective optimization techniques are widely used across an extremely broad range of fields. Genetic optimization for multi-objective problems was introduced to the accelerator community relatively recently and quickly spread, becoming a fundamental tool for multi-dimensional optimization. This discussion introduces the basics of the technique and reviews applications to accelerator problems.
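
    A minimal sketch of the core idea, Pareto-dominance-based selection inside a genetic loop, is given below in Python. The two toy objectives and the mutation-only breeding step are our own simplifications; production optimizers such as NSGA-II add crossover, crowding, and constraint handling.

    import random

    # Minimal multi-objective genetic optimization based on Pareto dominance.
    def objectives(x):
        return (x ** 2, (x - 2.0) ** 2)        # two competing goals, both minimized

    def dominates(a, b):
        return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

    population = [random.uniform(-5.0, 5.0) for _ in range(40)]
    for _ in range(50):                                     # generations
        scored = [(x, objectives(x)) for x in population]
        front = [x for x, fx in scored
                 if not any(dominates(fy, fx) for _, fy in scored)]
        # Breed the next generation from the non-dominated front (mutation only).
        population = [random.choice(front) + random.gauss(0.0, 0.2) for _ in range(40)]

    print(sorted(front)[:5])    # survivors approach the Pareto-optimal set x in [0, 2]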

  1. Accelerator mass spectrometry: from nuclear physics to dating

    SciTech Connect

    Kutschera, W.

    1983-01-01

    Several applications of accelerator-based mass spectrometry are reviewed. Among these are the search for unknown species, the determination of cosmogenic radioisotopes in natural materials, and measurements of half-lives, especially those of significance to dating. Accelerator parameters and techniques of importance for these applications are also considered.

  2. Multiple-source models for electron beams of a medical linear accelerator using BEAMDP computer code

    PubMed Central

    Jabbari, Nasrollah; Barati, Amir Hoshang; Rahmatnezhad, Leili

    2012-01-01

    Aim The aim of this work was to develop multiple-source models for electron beams of the NEPTUN 10PC medical linear accelerator using the BEAMDP computer code. Background One of the most accurate techniques of radiotherapy dose calculation is the Monte Carlo (MC) simulation of radiation transport, which requires detailed information about the beam in the form of a phase-space file. The computing time required to simulate the beam data and obtain phase-space files from a clinical accelerator is significant. Calculation of dose distributions using multiple-source models is an alternative method to phase-space data as direct input to the dose calculation system. Materials and methods Monte Carlo simulation of the accelerator head was performed, in which a record of the particle phase space was kept with the details of each particle history. Multiple-source models were built from the phase-space files of Monte Carlo simulations. These simplified beam models were used to generate Monte Carlo dose calculations and to compare those calculations with phase-space data for electron beams. Results Comparison of the measured and calculated dose distributions using the phase-space files and multiple-source models for three electron beam energies showed that the measured and calculated values match each other well throughout the curves. Conclusion It was found that dose distributions calculated using both the multiple-source models and the phase-space data agree within 1.3%, demonstrating that the models can be used for dosimetry research purposes and dose calculations in radiotherapy. PMID:24377026

  3. "SMART": A Compact and Handy FORTRAN Code for the Physics of Stellar Atmospheres

    NASA Astrophysics Data System (ADS)

    Sapar, A.; Poolamäe, R.

    2003-01-01

    A new computer code, SMART (Spectra from Model Atmospheres by Radiative Transfer), for computing stellar spectra forming in plane-parallel atmospheres has been compiled by us and A. Aret. To guarantee wide compatibility of the code with the shell environment, we chose FORTRAN-77 as the programming language and tried to confine ourselves to the common part of its numerous versions under both WINDOWS and LINUX. SMART can be used for studies of several processes in stellar atmospheres. The current version of the programme is undergoing rapid changes due to our goal of elaborating a simple, handy and compact code. Instead of linearisation (a mathematical method of recurrent approximations), we propose to use physical evolutionary changes, in other words the relaxation of quantum state populations; the relaxation rates from LTE to NLTE have been studied using a small number of NLTE states. This computational scheme is essentially simpler and more compact than linearisation. The relaxation scheme makes it possible to use, instead of the Λ-iteration procedure, a physically changing emissivity (or source function) which incorporates the changing Menzel coefficients for the NLTE quantum state populations. However, light scattering on free electrons is, in terms of Feynman graphs, a real second-order quantum process and cannot be reduced to consecutive processes of absorption and emission, as in the case of radiative transfer in spectral lines. With duly chosen input parameters, the code SMART enables computing the radiative acceleration imparted to the matter of the stellar atmosphere in turbulence clumps. This also makes it possible to connect the model atmosphere in more detail with the problem of stellar wind triggering. Another problem, which has been incorporated into the computer code SMART, is the diffusion of chemical elements and their isotopes in the atmospheres of chemically peculiar (CP) stars due to the usual radiative acceleration and the essential additional acceleration generated by the light-induced drift. As

  4. On the physics of waves in the solar atmosphere: Wave heating and wind acceleration

    NASA Technical Reports Server (NTRS)

    Musielak, Z. E.

    1992-01-01

    In the area of solar physics, new calculations of the acoustic wave energy fluxes generated in the solar convective zone were performed. The originally developed theory was corrected by including a new frequency factor describing temporal variations of the turbulent energy spectrum. We have modified the original Stein code by including this new frequency factor, and tested the code extensively. Another possible source of the mechanical energy generated in the solar convective zone is the excitation of magnetic flux tube waves, which can carry energy along the tubes far away from the region. The problem of how efficiently those waves are generated in the Sun was recently solved. The propagation of nonlinear magnetic tube waves in the solar atmosphere was calculated, and mode coupling, shock formation, and heating of the local medium were studied. The wave trapping problems and the evaluation of critical frequencies for wave reflection in the solar atmosphere were also studied. It was shown that the role played by Alfven waves in wind acceleration and coronal hole heating is dominant. Presently, we are performing calculations of wave energy fluxes generated in late-type dwarf stars and studying physical processes responsible for the heating of stellar chromospheres and coronae. In the area of the physics of waves, a new analytical approach for studying linear Alfven waves in smoothly nonuniform media was recently developed. This approach is presently being extended to study the propagation of linear and nonlinear magnetohydrodynamic (MHD) waves in stratified, nonisothermal solar atmospheres. The Lighthill theory of sound generation was extended to nonisothermal media (with a special temperature distribution). Energy cascade by nonlinear MHD waves and possible chaos driven by these waves are presently considered.

  5. Modeling Laser Wake Field Acceleration with the Quasi-Static PIC Code QuickPIC

    SciTech Connect

    Vieira, J.; Antonsen, T. Jr.; Cooley, J.; Silva, L. O.

    2006-11-27

    We use the Quasi-static Particle-In-Cell code QuickPIC to model laser wake field acceleration in both uniform and parabolic plasma channels, with current state-of-the-art experimental laser and plasma parameters. QuickPIC uses the quasi-static approximation, which allows the separation of the plasma and laser evolution, as they respond on different time scales. The laser is evolved with a larger time step that correctly resolves distances of the order of the Rayleigh length, according to the ponderomotive guiding-center approximation, while the plasma response is calculated with a quasi-static field solver for each transverse 2D slice. We have performed simulations that show very good agreement between QuickPIC and three-dimensional simulations using the full PIC code OSIRIS. We have scanned laser intensities from those for which linear plasma waves are excited to those for which the plasma response is highly nonlinear. For these simulations, QuickPIC was 2-3 orders of magnitude faster than OSIRIS.

  6. Medical physics--particle accelerators--the beginning.

    PubMed

    Ganz, Jeremy C

    2014-01-01

    This chapter outlines the early development of particle accelerators with the redesign from linear accelerator to cyclotron by Ernest Lawrence with a view to reducing the size of the machines as the power increased. There are minibiographies of Ernest Lawrence and his brother John. The concept of artificial radiation is outlined and the early attempts at patient treatment are mentioned. The reasons for trying and abandoning neutron therapy are discussed, and the early use of protons is described.

  7. Evaluation of ‘OpenCL for FPGA’ for Data Acquisition and Acceleration in High Energy Physics

    NASA Astrophysics Data System (ADS)

    Sridharan, Srikanth

    2015-12-01

    The increasing data acquisition and processing needs of High Energy Physics experiments have made it ever more essential to use FPGAs to meet those needs. However, harnessing the capabilities of FPGAs has been hard for anyone but expert FPGA developers. The arrival of OpenCL, with the two major FPGA vendors supporting it, offers an easy software-based approach to taking advantage of FPGAs in applications such as High Energy Physics. OpenCL is a language for using heterogeneous architectures to accelerate applications. However, FPGAs are capable of far more than acceleration, so it is interesting to explore whether OpenCL can be used to take advantage of FPGAs for more generic applications. To answer these questions, especially in the context of High Energy Physics, two applications, a DAQ module and an acceleration workload, were tested for implementation with OpenCL on FPGAs. The challenges of using OpenCL for a DAQ application and their solutions, together with the performance of the OpenCL-based acceleration, are discussed. Many of the design elements needed to realize a DAQ system in OpenCL already exist, mostly as FPGA vendor extensions, but a small number of elements were found to be missing. For the acceleration of OpenCL applications, using FPGAs has become as easy as using GPUs. OpenCL has the potential for a massive gain in productivity and ease of use, enabling non-FPGA experts to design, debug and maintain the code. Also, FPGA power consumption is much lower than that of other implementations. This paper describes one of the first attempts to explore the use of OpenCL for applications outside the acceleration workloads.
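
    As a minimal illustration of the OpenCL offload model discussed above, the PyOpenCL sketch below builds a trivial kernel and runs it on whatever device is available; on an FPGA the same kernel source would instead be compiled ahead of time with the vendor's offline compiler. The kernel name, buffer names, and the trivial workload are hypothetical stand-ins.

    import numpy as np
    import pyopencl as cl

    # Host code: pick a device, build a one-line kernel, run it, read back.
    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    a = np.random.rand(1024).astype(np.float32)
    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    prg = cl.Program(ctx, """
    __kernel void scale(__global const float *a, __global float *out) {
        int gid = get_global_id(0);
        out[gid] = 2.0f * a[gid];      // placeholder for real DAQ/analysis work
    }
    """).build()

    prg.scale(queue, a.shape, None, a_buf, out_buf)
    result = np.empty_like(a)
    cl.enqueue_copy(queue, result, out_buf)
    assert np.allclose(result, 2.0 * a)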

  8. Assessment of the prevailing physics codes: LEOPARD, LASER, and EPRI-CELL

    SciTech Connect

    Lan, J.S.

    1981-01-01

    In order to analyze core performance and fuel management, it is necessary to verify reactor physics codes in great detail. This kind of work not only serves the purpose of understanding and controlling the characteristics of each code, but also ensures the reliability as codes continually change due to constant modifications and machine transfers. This paper will present the results of a comprehensive verification of three code packages - LEOPARD, LASER, and EPRI-CELL.

  9. Formation and Acceleration Physics on Plasma Injector 1

    NASA Astrophysics Data System (ADS)

    Howard, Stephen

    2012-10-01

    Plasma Injector 1 (PI-1) is a two-stage coaxial Marshall gun with conical accelerator electrodes, similar in shape to the MARAUDER device, with power input of the same topology as the RACE device. The goal of PI-1 research is to produce a self-confined compact toroid with high flux (200 mWb), high density (3x10^16 cm-3) and moderate initial temperature (100 eV) to be used as the target plasma in an MTF reactor. PI-1 is 5 meters long and 1.9 m in diameter at the expansion region, where a high-aspect-ratio (4.4) spheromak is formed with a minimum lambda of 9 m-1. The acceleration stage is 4 m long and tapers to an outer diameter of 40 cm. The capacitor banks store 0.5 MJ for formation and 1.13 MJ for acceleration. Power is delivered via 62 independently controlled switch modules. Several geometries for the formation bias field, inner electrodes and target chamber have been tested, and trends in accelerator efficiency and target lifetime have been observed. Thomson scattering and ion Doppler spectroscopy show significant heating (>100 eV) as the CT is compressed in the conical accelerator. B-dot probes show magnetic field structure consistent with Grad-Shafranov models and MHD simulations, and the CT axial length depends strongly on the lambda profile.

  10. A theory manual for multi-physics code coupling in LIME.

    SciTech Connect

    Belcourt, Noel; Bartlett, Roscoe Ainsworth; Pawlowski, Roger Patrick; Schmidt, Rodney Cannon; Hooper, Russell Warren

    2011-03-01

    The Lightweight Integrating Multi-physics Environment (LIME) is a software package for creating multi-physics simulation codes. Its primary application space is when computer codes are currently available to solve different parts of a multi-physics problem and now need to be coupled with other such codes. In this report we define a common domain language for discussing multi-physics coupling and describe the basic theory associated with multiphysics coupling algorithms that are to be supported in LIME. We provide an assessment of coupling techniques for both steady-state and time dependent coupled systems. Example couplings are also demonstrated.
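
    One of the basic coupling algorithms such a framework has to support is fixed-point (Picard) iteration, in which the single-physics solvers are called in turn and exchange interface data until the coupled residual stops changing. The toy "thermal" and "fluid" solvers below are stand-ins invented for illustration, not anything from LIME.

    # Sketch of fixed-point (Picard) coupling between two toy "codes": a thermal
    # solver and a fluid solver exchange an interface value until the coupled
    # residual is small.
    def thermal_solve(fluid_temp):
        return 0.5 * fluid_temp + 300.0        # wall temperature response

    def fluid_solve(wall_temp):
        return 0.8 * wall_temp + 20.0          # fluid temperature response

    wall_temp, fluid_temp = 300.0, 300.0
    for iteration in range(100):
        new_wall = thermal_solve(fluid_temp)
        new_fluid = fluid_solve(new_wall)
        residual = abs(new_wall - wall_temp) + abs(new_fluid - fluid_temp)
        wall_temp, fluid_temp = new_wall, new_fluid
        if residual < 1e-10:
            break                              # converged coupled solution

    print(iteration, wall_temp, fluid_temp)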

  11. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    SciTech Connect

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. Our authors present Synergia's design principles and its performance on HPC platforms.

  12. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. Our authors present Synergia's design principles and its performance on HPC platforms.

  13. Accelerator System Model (ASM) user manual with physics and engineering model documentation. ASM version 1.0

    SciTech Connect

    1993-07-01

    The Accelerator System Model (ASM) is a computer program developed to model proton radiofrequency accelerators and to carry out system-level trade studies. The ASM FORTRAN subroutines are incorporated into an intuitive graphical user interface which provides for the "construction" of the accelerator in a window on the computer screen. The interface is based on the Shell for Particle Accelerator Related Codes (SPARC) software technology written for the Macintosh operating system in the C programming language. This User Manual describes the operation and use of the ASM application within the SPARC interface. The Appendix provides a detailed description of the physics and engineering models used in ASM. ASM Version 1.0 is a joint project of G. H. Gillespie Associates, Inc. and the Accelerator Technology (AT) Division of the Los Alamos National Laboratory. Neither the ASM Version 1.0 software nor this ASM Documentation may be reproduced without the expressed written consent of both the Los Alamos National Laboratory and G. H. Gillespie Associates, Inc.

  14. International Linear Collider Accelerator Physics R&D

    SciTech Connect

    George D. Gollin; Michael Davidsaver; Michael J. Haney; Michael Kasten; Jason Chang; Perry Chodash; Will Dluger; Alex Lang; Yehan Liu

    2008-09-03

    ILC work at Illinois has concentrated primarily on technical issues relating to the design of the accelerator. Because many of the problems to be resolved require a working knowledge of classical mechanics and electrodynamics, most of our research projects lend themselves well to the participation of undergraduate research assistants. The undergraduates in the group are scientists, not technicians, and find solutions to problems that, for example, have stumped PhD-level staff elsewhere. The ILC Reference Design Report calls for 6.7 km circumference damping rings (which prepare the beams for focusing) using “conventional” stripline kickers driven by fast HV pulsers. Our primary goal was to determine the suitability of the 16 MeV electron beam in the AØ region at Fermilab for precision kicker studies.We found that the low beam energy and lack of redundancy in the beam position monitor system complicated the analysis of our data. In spite of these issues we concluded that the precision we could obtain was adequate to measure the performance and stability of a production module of an ILC kicker, namely 0.5%. We concluded that the kicker was stable to an accuracy of ~2.0% and that we could measure this precision to an accuracy of ~0.5%. As a result, a low energy beam like that at AØ could be used as a rapid-turnaround facility for testing ILC production kicker modules. The ILC timing precision for arrival of bunches at the collision point is required to be 0.1 picosecond or better. We studied the bunch-to-bunch timing accuracy of a “phase detector” installed in AØ in order to determine its suitability as an ILC bunch timing device. A phase detector is an RF structure excited by the passage of a bunch. Its signal is fed through a 1240 MHz high-Q resonant circuit and then down-mixed with the AØ 1300 MHz accelerator RF. We used a kind of autocorrelation technique to compare the phase detector signal with a reference signal obtained from the phase detector

  15. CEBAF: The Continuous Electron Beam Accelerator Facility and its Physics Program

    SciTech Connect

    Mougey, Jean

    1992-01-01

    With the 4 GeV Continuous Electron Beam Accelerator Facility presently under construction in Newport News, Virginia, a new domain of nuclear and subnuclear phenomena can be investigated, mainly through coincidence experiments. An overview of the characteristic features of the accelerator and associated experimental equipment is given. Some examples of the physics programs are briefly described.

  16. Acceleration of neutrons in a scheme of a tautochronous mathematical pendulum (physical principles)

    SciTech Connect

    Rivlin, Lev A

    2010-12-09

    We consider the physical principles of neutron acceleration through a multiple synchronous interaction with a gradient rf magnetic field in a scheme of a tautochronous mathematical pendulum. (laser applications and other aspects of quantum electronics)

  17. The GENGA code: gravitational encounters in N-body simulations with GPU acceleration

    SciTech Connect

    Grimm, Simon L.; Stadel, Joachim G.

    2014-11-20

    We describe an open source GPU implementation of a hybrid symplectic N-body integrator, GENGA (Gravitational ENcounters with Gpu Acceleration), designed to integrate planet and planetesimal dynamics in the late stage of planet formation and stability analyses of planetary systems. GENGA uses a hybrid symplectic integrator to handle close encounters with very good energy conservation, which is essential in long-term planetary system integration. We extended the second-order hybrid integration scheme to higher orders. The GENGA code supports three simulation modes: integration of up to 2048 massive bodies, integration with up to a million test particles, or parallel integration of a large number of individual planetary systems. We compare the results of GENGA to Mercury and pkdgrav2 in terms of energy conservation and performance and find that the energy conservation of GENGA is comparable to Mercury and around two orders of magnitude better than pkdgrav2. GENGA runs up to 30 times faster than Mercury and up to 8 times faster than pkdgrav2. GENGA is written in CUDA C and runs on all NVIDIA GPUs with a computing capability of at least 2.0.
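
    The basic building block of such symplectic planetary integrators is the kick-drift-kick (leapfrog) step, sketched below for a softened gravitational N-body system. This is a generic second-order scheme written for clarity, not GENGA's hybrid close-encounter treatment or its CUDA implementation; units and the softening parameter are arbitrary.

    import numpy as np

    # Minimal kick-drift-kick (leapfrog) step for a softened gravitational
    # N-body system; a generic sketch, not GENGA's scheme.
    def accelerations(pos, mass, G=1.0, softening=1e-9):
        dx = pos[None, :, :] - pos[:, None, :]               # pairwise separations
        r2 = np.sum(dx * dx, axis=-1) + softening**2
        inv_r3 = r2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)                        # no self-force
        return G * np.sum(dx * (mass[None, :, None] * inv_r3[:, :, None]), axis=1)

    def kdk_step(pos, vel, mass, dt):
        vel = vel + 0.5 * dt * accelerations(pos, mass)      # kick
        pos = pos + dt * vel                                  # drift
        vel = vel + 0.5 * dt * accelerations(pos, mass)      # kick
        return pos, vel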

  18. A portable platform for accelerated PIC codes and its application to GPUs using OpenACC

    NASA Astrophysics Data System (ADS)

    Hariri, F.; Tran, T. M.; Jocksch, A.; Lanti, E.; Progsch, J.; Messmer, P.; Brunner, S.; Gheller, C.; Villard, L.

    2016-10-01

    We present a portable platform, called PIC_ENGINE, for accelerating Particle-In-Cell (PIC) codes on heterogeneous many-core architectures such as Graphic Processing Units (GPUs). The aim of this development is efficient simulations on future exascale systems by allowing different parallelization strategies depending on the application problem and the specific architecture. To this end, this platform contains the basic steps of the PIC algorithm and has been designed as a test bed for different algorithmic options and data structures. Among the architectures that this engine can explore, particular attention is given here to systems equipped with GPUs. The study demonstrates that our portable PIC implementation based on the OpenACC programming model can achieve performance closely matching theoretical predictions. Using the Cray XC30 system, Piz Daint, at the Swiss National Supercomputing Centre (CSCS), we show that PIC_ENGINE running on an NVIDIA Kepler K20X GPU can outperform the one on an Intel Sandy bridge 8-core CPU by a factor of 3.4.

  19. Physics design of an accelerator for an accelerator-driven subcritical system

    NASA Astrophysics Data System (ADS)

    Li, Zhihui; Cheng, Peng; Geng, Huiping; Guo, Zhen; He, Yuan; Meng, Cai; Ouyang, Huafu; Pei, Shilun; Sun, Biao; Sun, Jilei; Tang, Jingyu; Yan, Fang; Yang, Yao; Zhang, Chuang; Yang, Zheng

    2013-08-01

    An accelerator-driven subcritical system (ADS) program was launched in China in 2011; it aims to design and build an ADS demonstration facility with a capability of more than 1000 MW thermal power, in multiple phases lasting about 20 years. The driver linac is specified to be 1.5 GeV in energy and 10 mA in current, operating in CW mode. To meet the extremely high reliability and availability requirements, the linac is designed with large installed margins and fault tolerance, including hot-spare injectors and a local compensation method for key-element failures. The accelerator complex consists of two parallel 10-MeV injectors, a joint medium-energy beam transport line, a main linac, and a high-energy beam transport line. Superconducting acceleration structures are employed throughout, except for the radio-frequency quadrupole accelerators (RFQs), which operate at room temperature. The general design considerations and the beam dynamics design of the driver linac complex are presented here.

  20. Muon simulation codes MUSIC and MUSUN for underground physics

    NASA Astrophysics Data System (ADS)

    Kudryavtsev, V. A.

    2009-03-01

    The paper describes two Monte Carlo codes dedicated to muon simulations: MUSIC (MUon SImulation Code) and MUSUN (MUon Simulations UNderground). MUSIC is a package for muon transport through matter. It is particularly useful for propagating muons through large thicknesses of rock or water, for instance from the surface down to an underground or underwater laboratory. MUSUN is designed to use the results of muon transport through rock/water to generate muons in or around an underground laboratory, taking into account their energy spectrum and angular distribution.
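
    To give a feel for the muon transport problem, the sketch below propagates a single muon through standard rock in the continuous-slowing-down approximation, using the usual dE/dX = a + b*E parameterization with rough textbook coefficients. A production code such as MUSIC treats the stochastic radiative losses, scattering, and geometry in far more detail; the constants and function name here are illustrative only.

    # Toy continuous-slowing-down propagation of a muon through standard rock.
    A_LOSS = 2.0e-3      # GeV per g/cm^2, ionization (rough textbook value)
    B_LOSS = 4.0e-6      # 1/(g/cm^2), bremsstrahlung + pair production + photonuclear
    RHO_ROCK = 2.65      # g/cm^3

    def muon_energy_after_rock(energy_gev, depth_m, step_gcm2=10.0):
        """Return the surviving muon energy (GeV) after depth_m of standard rock."""
        column = depth_m * 100.0 * RHO_ROCK        # column depth in g/cm^2
        e = energy_gev
        while column > 0.0 and e > 0.0:
            dx = min(step_gcm2, column)
            e -= (A_LOSS + B_LOSS * e) * dx
            column -= dx
        return max(e, 0.0)

    print(muon_energy_after_rock(1000.0, 1000.0))   # a 1 TeV muon after 1 km of rock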

  1. Accelerator Preparations for Muon Physics Experiments at Fermilab

    SciTech Connect

    Syphers, M.J.; /Fermilab

    2009-10-01

    The use of existing Fermilab facilities to provide beams for two muon experiments - the Muon to Electron Conversion Experiment (Mu2e) and the New g-2 Experiment - is under consideration. Plans are being pursued to perform these experiments following the completion of the Tevatron Collider Run II, utilizing the beam lines and storage rings used today for antiproton accumulation without considerable reconfiguration. Operating scenarios being investigated and anticipated accelerator improvements or reconfigurations will be presented.

  2. Seeing the Nature of the Accelerating Physics: It's a SNAP

    SciTech Connect

    Albert, J.; Aldering, G.; Allam, S.; Althouse, W.; Amanullah, R.; Annis, J.; Astier, P.; Aumeunier, M.; Bailey, S.; Baltay, C.; Barrelet, E.; Basa, S.; Bebek, C.; Bergstom, L.; Bernstein, G.; Bester, M.; Besuner, B.; Bigelow, B.; Blandford, R.; Bohlin, R.; Bonissent, A.; /Caltech /LBL, Berkeley /Fermilab /SLAC /Stockholm U. /Paris, IN2P3 /Marseille, CPPM /Marseille, Lab. Astrophys. /Yale U. /Pennsylvania U. /UC, Berkeley /Michigan U. /Baltimore, Space Telescope Sci. /Indiana U. /Caltech, JPL /Australian Natl. U., Canberra /American Astron. Society /Chicago U. /Cambridge U. /Saclay /Lyon, IPN

    2005-08-05

    For true insight into the nature of dark energy, measurements with the precision and accuracy of the Supernova/Acceleration Probe (SNAP) are required. Precursor or scaled-down experiments are unavoidably limited, even for distinguishing the cosmological constant. They can pave the way for, but should not delay, SNAP by developing calibration, refinement, and systematics control (and they will also provide important, exciting astrophysics).

  3. Synergia: a modern tool for accelerator physics simulation

    SciTech Connect

    Spentzouris, P.; Amundson, J.; /Fermilab

    2004-10-01

    High precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. Synergia is a high-fidelity parallel beam dynamics simulation package with fully three dimensional space-charge capabilities and a higher order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles.

  4. Physics of beam self-modulation in plasma wakefield accelerators

    SciTech Connect

    Lotov, K. V.

    2015-10-15

    The self-modulation instability is a key effect that makes it possible to use present-day proton beams as drivers for plasma wakefield acceleration. The development of the instability in uniform plasmas and in plasmas with a small density up-step is studied numerically, with a focus on the nonlinear stages of beam evolution. The step parameters providing the strongest established wakefield are found, and the mechanism of stable bunch-train formation is identified.

  5. Estimation of photoneutron yield in linear accelerator with different collimation systems by Geant4 and MCNPX simulation codes

    NASA Astrophysics Data System (ADS)

    Kim, Yoon Sang; Khazaei, Zeinab; Ko, Junho; Afarideh, Hossein; Ghergherehchi, Mitra

    2016-04-01

    At present, the bremsstrahlung photon beams produced by linear accelerators are the most commonly employed method of radiotherapy for tumor treatments. A photoneutron source based on three different energies (6, 10 and 15 MeV) of a linac electron beam was designed by means of Geant4 and Monte Carlo N-Particle eXtended (MCNPX) simulation codes. To obtain maximum neutron yield, two arrangements for the photo neutron convertor were studied: (a) without a collimator, and (b) placement of the convertor after the collimator. The maximum photon intensities in tungsten were 0.73, 1.24 and 2.07 photon/e at 6, 10 and 15 MeV, respectively. There was no considerable increase in the photon fluence spectra from 6 to 15 MeV at the optimum thickness between 0.8 mm and 2 mm of tungsten. The optimum dimensions of the collimator were determined to be a length of 140 mm with an aperture of 5 mm  ×  70 mm for iron in a slit shape. According to the neutron yield, the best thickness obtained for the studied materials was 30 mm. The number of neutrons generated in BeO achieved the maximum value at 6 MeV, unlike that in Be, where the highest number of neutrons was observed at 15 MeV. Statistical uncertainty in all simulations was less than 0.3% and 0.05% for MCNPX and the standard electromagnetic (EM) physics packages of Geant4, respectively. Differences among spectra in various regions are due to various cross-section and stopping power data and different simulations of the physics processes.

  7. Estimation of photoneutron yield in linear accelerator with different collimation systems by Geant4 and MCNPX simulation codes.

    PubMed

    Kim, Yoon Sang; Khazaei, Zeinab; Ko, Junho; Afarideh, Hossein; Ghergherehchi, Mitra

    2016-04-01

    At present, the bremsstrahlung photon beams produced by linear accelerators are the most commonly employed method of radiotherapy for tumor treatments. A photoneutron source based on three different energies (6, 10 and 15 MeV) of a linac electron beam was designed by means of Geant4 and Monte Carlo N-Particle eXtended (MCNPX) simulation codes. To obtain maximum neutron yield, two arrangements for the photo neutron convertor were studied: (a) without a collimator, and (b) placement of the convertor after the collimator. The maximum photon intensities in tungsten were 0.73, 1.24 and 2.07 photon/e at 6, 10 and 15 MeV, respectively. There was no considerable increase in the photon fluence spectra from 6 to 15 MeV at the optimum thickness between 0.8 mm and 2 mm of tungsten. The optimum dimensions of the collimator were determined to be a length of 140 mm with an aperture of 5 mm  ×  70 mm for iron in a slit shape. According to the neutron yield, the best thickness obtained for the studied materials was 30 mm. The number of neutrons generated in BeO achieved the maximum value at 6 MeV, unlike that in Be, where the highest number of neutrons was observed at 15 MeV. Statistical uncertainty in all simulations was less than 0.3% and 0.05% for MCNPX and the standard electromagnetic (EM) physics packages of Geant4, respectively. Differences among spectra in various regions are due to various cross-section and stopping power data and different simulations of the physics processes. PMID:26975304

  8. Inflationary Expansions Generated by a Physically Real Kinematic Acceleration

    NASA Astrophysics Data System (ADS)

    Savickas, David

    2010-02-01

    A repulsive cosmological acceleration is shown to exist that exhibits a behavior very similar to that found in both inflationary models at the time of origin of the universe, and also in the repulsive acceleration found in present-day cosmological observations. It is able to describe an inflationary model of a radiation universe in considerable numerical detail. It is based on a method that defines the Hubble parameter H, and consequently inertial systems themselves, directly in terms of the positions and velocities of mass particles in a universe. This makes it possible to describe a mass particle's motion relative to other particles in the universe, rather than relative to inertial systems. Because of this, the repulsive acceleration is a real kinematic effect existing in the present-day universe. This definition of H cannot include the use of photon positions or velocities because H determines the velocities of receding inertial systems of galaxies, and the velocity of a photon in a distant inertial system then depends on the definition of H itself. Therefore, at the time of its origin the magnitude of H in a radiation dominated universe would be solely determined by the behavior of the relatively few mass particles that it contained while allowing for a near balance with the gravitation of the Friedmann-Lemaître model. )

  9. Burnup calculations for KIPT accelerator driven subcritical facility using Monte Carlo computer codes-MCB and MCNPX.

    SciTech Connect

    Gohar, Y.; Zhong, Z.; Talamo, A.; Nuclear Engineering Division

    2009-06-09

    Argonne National Laboratory (ANL) of the USA and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an electron accelerator driven subcritical (ADS) facility, using the KIPT electron accelerator. The neutron source of the subcritical assembly is generated from the interaction of a 100 kW electron beam with a natural uranium target. The electron beam has a uniform spatial distribution and electron energy in the range of 100 to 200 MeV. The main functions of the subcritical assembly are the production of medical isotopes and the support of the Ukrainian nuclear power industry. Neutron physics experiments and material structure analyses are planned using this facility. With the 100 kW electron beam power, the total thermal power of the facility is ~375 kW, including a fission power of ~260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the reactivity during operation, which reduces the neutron flux level and consequently the facility performance. To preserve the neutron flux level during operation, fuel assemblies should be added after long operating periods to compensate for the lost reactivity. This process requires accurate prediction of the fuel burnup, the decay behavior of the fission products, and the reactivity introduced by adding fresh fuel assemblies. The recent developments of Monte Carlo computer codes, the high speed of computer processors, and parallel computation techniques have made it possible to perform detailed three-dimensional burnup simulations. A fully detailed three-dimensional geometrical model is used for the burnup simulations, with continuous-energy nuclear data libraries for the transport calculations and 63-multigroup or one-group cross-section libraries for the depletion calculations. The Monte Carlo computer codes MCNPX and MCB are utilized for this study. MCNPX transports the

  10. Accelerator physics of the Stanford Linear Collider and SLC accelerator experiments towards the Next Linear Collider

    SciTech Connect

    Seeman, J.T.

    1992-06-01

    The Stanford Linear Collider (SLC) was built to collide single bunches of electrons and positrons head-on at a single interaction point with single beam energies up to 55 GeV. The small beam sizes and high currents required for high luminosity operation have significantly pushed traditional beam quality limits. The Polarized Electron Source produces about 8 × 10^10 electrons in each of two bunches with up to 28% polarization. The Damping Rings provide coupled invariant emittances of 1.8 × 10^-5 r-m with 4.5 × 10^10 particles per bunch. The 57 GeV Linac has successfully accelerated over 3 × 10^10 particles with design invariant emittances of 3 × 10^-5 r-m. Both longitudinal and transverse wakefields strongly affect the trajectory and emittance corrections used for operations. The Arc systems routinely transport decoupled and betatron-matched beams. In the Final Focus, the beams are chromatically corrected and demagnified, producing spot sizes of 2 to 3 μm at the focal point. Spot sizes below 2 μm have been made during special tests. Instrumentation and feedback systems are well advanced, providing continuous beam monitoring and pulse-by-pulse control. A luminosity of 1.6 × 10^29 cm^-2 sec^-1 has been produced. Several experimental tests for a Next Linear Collider (NLC) are being planned or constructed using the SLC accelerator as a test facility. The Final Focus Test Beam will demagnify a flat 50 GeV electron beam to dimensions near 60 nm vertically and 900 nm horizontally. A potential Emittance Dynamics Test Area has the capability to test the acceleration and transport of very low emittance beams, the compression of bunch lengths to 50 μm, the acceleration and control of multiple bunches, and the properties of wakefields in the very short bunch length regime.

  11. Laser-based acceleration for nuclear physics experiments at ELI-NP

    NASA Astrophysics Data System (ADS)

    Tesileanu, O.; Asavei, Th.; Dancus, I.; Gales, S.; Negoita, F.; Turcu, I. C. E.; Ursescu, D.; Zamfir, N. V.

    2016-05-01

    As part of the Extreme Light pan-European research infrastructure, Extreme Light Infrastructure - Nuclear Physics (ELI-NP) in Romania will focus on topics in Nuclear Physics, fundamental Physics and applications, based on very intense photon beams. Laser-based acceleration of electrons, protons and heavy ions is a prerequisite for a multitude of laser-driven nuclear physics experiments already proposed by the international research community. A total of six outputs of the dual-amplification chain laser system, two of 100TW, two of 1PW and two of 10PW will be employed in 5 experimental areas, with the possibility to use long and short focal lengths, gas and solid targets, reaching the whole range of laser acceleration processes. We describe the main techniques and expectations regarding the acceleration of electrons, protons and heavy nuclei at ELI-NP, and some physics cases for which these techniques play an important role in the experiments.

  12. Physics at the Thomas Jefferson National Accelerator Facility

    SciTech Connect

    Lawrence Cardman

    2005-10-22

    The CEBAF accelerator at JLab is fulfilling its scientific mission to understand how hadrons are constructed from the quarks and gluons of QCD, to understand the QCD basis for the nucleon-nucleon force, and to explore the transition from the nucleon-meson to a QCD description. Its success is based on the firm foundation of experimental and theoretical techniques developed world-wide over the past few decades, on complementary data provided by essential lower-energy facilities, such as MAMI, and on the many insights provided by the scientists we are gathered here to honor.

  13. Atomic Structure Calculations from the Los Alamos Atomic Physics Codes

    DOE Data Explorer

    Cowan, R. D.

    The well-known Hartree-Fock method of R. D. Cowan, developed at Los Alamos National Laboratory, is used for the atomic structure calculations. Electron-impact excitation cross sections are calculated using either the distorted wave approximation (DWA) or first-order many-body theory (FOMBT). Electron-impact ionization cross sections can be calculated using the scaled hydrogenic method developed by Sampson and co-workers, the binary encounter method, or the distorted wave method. Photoionization cross sections and, where appropriate, autoionizations are also calculated. Original manuals for the atomic structure code, the collisional excitation code, and the ionization code are available from this website. Using the specialized interface, you will be able to define the ionization stage of an element and pick the initial and final configurations. You will be led through a series of web pages ending with a display of results in the form of cross sections, collision strengths or rate coefficients. Results are available in tabular and graphic form.

  14. Comparing Participants' Rating and Compendium Coding to Estimate Physical Activity Intensities

    ERIC Educational Resources Information Center

    Masse, Louise C.; Eason, Karen E.; Tortolero, Susan R.; Kelder, Steven H.

    2005-01-01

    This study assessed agreement between participants' rating (PMET) and compendium coding (CMET) of estimating physical activity intensity in a population of older minority women. As part of the Women on the Move study, 224 women completed a 7-day activity diary and wore an accelerometer for 7 days. All activities recorded were coded using PMET and…

  15. Accelerating Innovation: How Nuclear Physics Benefits Us All

    SciTech Connect

    Not Available

    2011-01-01

    From fighting cancer to assuring food is safe to protecting our borders, nuclear physics impacts the lives of people around the globe every day. In learning about the nucleus of the atom and the forces that govern it, scientists develop a depth of knowledge, techniques and remarkable research tools that can be used to develop a variety of often unexpected, practical applications. These applications include devices and technologies for medical diagnostics and therapy, energy production and exploration, safety and national security, and for the analysis of materials and environmental contaminants. This brochure by the Office of Nuclear Physics of the USDOE Office of Science discusses nuclear physics and ways in which its applications fuel our economic vitality, and make the world and our lives safer and healthier.

  16. Hadron physics at the new CW electron accelerators

    SciTech Connect

    Burkert, V.D.

    1990-01-01

    Major trends of the physics program related to the study of hadron structure and hadron spectroscopy at the new high current, high duty cycle electron machines are discussed. It is concluded that planned experiments at these machines may have an important impact on our understanding of the strong interaction by studying the internal structure and spectroscopy of the nucleon and lower mass hyperon states.

  17. Proceedings of the workshop on B physics at hadron accelerators

    SciTech Connect

    McBride, P.; Mishra, C.S.

    1993-12-31

    This report contains papers on the following topics: Measurement of Angle {alpha}; Measurement of Angle {beta}; Measurement of Angle {gamma}; Other B Physics; Theory of Heavy Flavors; Charged Particle Tracking and Vertexing; e and {gamma} Detection; Muon Detection; Hadron ID; Electronics, DAQ, and Computing; and Machine Detector Interface. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.

  18. 'Accelerators and Beams,' multimedia computer-based training in accelerator physics

    SciTech Connect

    Silbar, R. R.; Browman, A. A.; Mead, W. C.; Williams, R. A.

    1999-06-10

    We are developing a set of computer-based tutorials on accelerators and charged-particle beams under an SBIR grant from the DOE. These self-paced, interactive tutorials, available for Macintosh and Windows platforms, use multimedia techniques to enhance the user's rate of learning and length of retention of the material. They integrate interactive 'On-Screen Laboratories,' hypertext, line drawings, photographs, two- and three-dimensional animations, video, and sound. They target a broad audience, from undergraduates or technicians to professionals. Presently, three modules have been published (Vectors, Forces, and Motion), a fourth (Dipole Magnets) has been submitted for review, and three more exist in prototype form (Quadrupoles, Matrix Transport, and Properties of Charged-Particle Beams). Participants in the poster session will have the opportunity to try out these modules on a laptop computer.

  19. From electron maps to acceleration models in the physics of flares

    NASA Astrophysics Data System (ADS)

    Massone, Anna Maria

    Electron maps reconstructed from RHESSI visibilities represent a powerful source of information for constraining models of electron acceleration in solar plasma physics during flaring events. In this talk I will describe how and to what extent electron maps can be utilized to estimate local electron spectral indices, the evolution of the centroid position at different energies in electron space, and the compatibility of RHESSI observations with different theoretical models for the acceleration mechanisms.

  20. High energy physics advisory panel's composite subpanel for the assessment of the status of accelerator physics and technology

    SciTech Connect

    1996-05-01

    In November 1994, Dr. Martha Krebs, Director of the US Department of Energy (DOE) Office of Energy Research (OER), initiated a broad assessment of the current status and promise of the field of accelerator physics and technology with respect to five OER programs -- High Energy Physics, Nuclear Physics, Basic Energy Sciences, Fusion Energy, and Health and Environmental Research. Dr. Krebs asked the High Energy Physics Advisory Panel (HEPAP) to establish a composite subpanel with representation from the five OER advisory committees and with a balance of membership drawn broadly from both the accelerator community and from those scientific disciplines associated with the OER programs. The Subpanel was also charged to provide recommendations and guidance on appropriate future research and development needs, management issues, and funding requirements. The Subpanel finds that accelerator science and technology is a vital and intellectually exciting field. It has provided essential capabilities for the DOE/OER research programs with an enormous impact on the nation's scientific research, and it has significantly enhanced the nation's biomedical and industrial capabilities. Further progress in this field promises to open new possibilities for the scientific goals of the OER programs and to further benefit the nation. Sustained support of forefront accelerator research and development by the DOE's OER programs and the DOE's predecessor agencies has been responsible for much of this impact on research. This report documents these contributions to the DOE energy research mission and to the nation.

  1. MIG version 0.0 model interface guidelines: Rules to accelerate installation of numerical models into any compliant parent code

    SciTech Connect

    Brannon, R.M.; Wong, M.K.

    1996-08-01

    A set of model interface guidelines, called MIG, is presented as a means by which any compliant numerical material model can be rapidly installed into any parent code without having to modify the model subroutines. Here, "model" usually means a material model such as one that computes stress as a function of strain, though the term may be extended to any numerical operation. "Parent code" means a hydrocode, finite element code, etc. which uses the model and enforces, say, the fundamental laws of motion and thermodynamics. MIG requires the model developer (who creates the model package) to specify model needs in a standardized but flexible way. MIG includes a dictionary of technical terms that allows developers and parent code architects to share a common vocabulary when specifying field variables. For portability, database management is the responsibility of the parent code. Input/output occurs via structured calling arguments. As much model information as possible (such as the lists of required inputs, as well as lists of precharacterized material data and special needs) is supplied by the model developer in an ASCII text file. Every MIG-compliant model also has three required subroutines to check data, to request extra field variables, and to perform model physics. To date, the MIG scheme has proven flexible in beta installations of a simple yield model, plus a more complicated viscodamage yield model, three electromechanical models, and a complicated anisotropic microcrack constitutive model. The MIG yield model has been successfully installed using identical subroutines in three vectorized parent codes and one parallel C++ code, all predicting comparable results. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, thereby reducing the cost of installing and sharing models in diverse new codes.
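
    A minimal sketch of the three-subroutine contract described above, written in Python rather than the Fortran of real hydrocodes; the class and method names are illustrative only, not the actual MIG entry points.

        # Hypothetical sketch of a MIG-style model interface (names are illustrative,
        # not the actual MIG subroutine names; real MIG models are Fortran callable).

        class ElasticModel:
            """A trivial 'material model': stress = E * strain."""

            def check_data(self, params):
                # 1) Check/initialize user-supplied material data.
                if params.get("youngs_modulus", 0.0) <= 0.0:
                    raise ValueError("youngs_modulus must be positive")
                self.E = params["youngs_modulus"]

            def request_extra_variables(self):
                # 2) Tell the parent code which extra field variables to allocate.
                return ["plastic_strain"]

            def update_state(self, strain, state):
                # 3) Perform the model physics for one field point / time step.
                stress = self.E * strain
                return stress, state

        # The parent code owns the database and drives the calls:
        model = ElasticModel()
        model.check_data({"youngs_modulus": 200e9})   # steel-like modulus, Pa
        extras = model.request_extra_variables()
        stress, _ = model.update_state(1.0e-4, {name: 0.0 for name in extras})
        print(extras, stress)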

  2. RELATIONSHIPS BETWEEN GIS ENVIRONMENTAL FEATURES AND ADOLESCENT MALE PHYSICAL ACTIVITY: GIS CODING DIFFERENCES

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Background: It is not clear if relationships between GIS obtained environmental features and physical activity differ according to the method used to code GIS data. Methods: Physical activity levels of 210 Boy Scouts were measured by accelerometer. Numbers of parks, trails, gymnasia, bus stops, groc...

  3. Physics design and scaling of recirculating induction accelerators: from benchtop prototypes to drivers

    SciTech Connect

    Barnard, J.J.; Cable, M.D.; Callahan, D.A.

    1996-02-06

    Recirculating induction accelerators (recirculators) have been investigated as possible drivers for inertial fusion energy production because of their potential cost advantage over linear induction accelerators. Point designs were obtained and many of the critical physics and technology issues that would need to be addressed were detailed. A collaboration involving Lawrence Livermore National Laboratory and Lawrence Berkeley National Laboratory researchers is now developing a small prototype recirculator in order to demonstrate an understanding of nearly all of the critical beam dynamics issues that have been raised. We review the design equations for recirculators and demonstrate how, by keeping crucial dimensionless quantities constant, a small prototype recirculator was designed which will simulate the essential beam physics of a driver. We further show how important physical quantities such as the sensitivity to errors of optical elements (in both field strength and placement), insertion/extraction, vacuum requirements, and emittance growth, scale from small-prototype to driver-size accelerator.

  4. ACCELERATOR PHYSICS ISSUES FOR FUTURE ELECTRON ION COLLIDERS.

    SciTech Connect

    PEGGS,S.; BEN-ZVI,I.; KEWISCH,J.; MURPHY,J.

    2001-06-18

    Interest continues to grow in the physics of collisions between electrons and heavy ions, and between polarized electrons and polarized protons [1,2,3]. Table 1 compares the parameters of some machines under discussion. DESY has begun to explore the possibility of upgrading the existing HERA-p ring to store heavy ions, in order to collide them with electrons (or positrons) in the HERA-e ring, or from TESLA [4]. An upgrade to store polarized protons in the HERA-p ring is also under discussion [1]. BNL is considering adding polarized electrons to the RHIC repertoire, which already includes heavy and light ions, and polarized protons. The authors of this paper have made a first pass analysis of this ''eRHIC'' possibility [5]. MIT-BATES is also considering electron ion collider designs [6].

  5. James Clerk Maxwell Prize for Plasma Physics: The Physics of Magnetic Reconnection and Associated Particle Acceleration

    NASA Astrophysics Data System (ADS)

    Drake, James

    2010-11-01

    Solar and stellar flares, substorms in the Earth's magnetosphere, and disruptions in laboratory fusion experiments are driven by the explosive release of magnetic energy through the process of magnetic reconnection. During reconnection oppositely directed magnetic fields break and cross-connect. The resulting magnetic slingshots convert magnetic energy into high velocity flows, thermal energy and energetic particles. A major scientific challenge has been the multi-scale nature of the problem: a narrow boundary layer, "the dissipation region," breaks field lines and controls the release of energy in a macroscale system. Significant progress has been made on fundamental questions such as how magnetic energy is released so quickly and why the release occurs as an explosion. At the small spatial scales of the dissipation region the motion of electrons and ions decouples, the MHD description breaks down and whistler and kinetic Alfven dynamics drives reconnection. The dispersive property of these waves leads to fast reconnection, insensitive to system size and weakly dependent on dissipation, consistent with observations. The evidence for these waves during reconnection in the magnetosphere and the laboratory is compelling. The role of turbulence within the dissipation region in the form of "secondary islands" or as a source of anomalous resistivity continues to be explored. A large fraction of the magnetic energy released during reconnection appears in the form of energetic electrons and protons -- up to 50% or more during solar flares. The mechanism for energetic particle production during magnetic reconnection has remained a mystery. Models based on reconnection at a single large x-line are incapable of producing the large numbers of energetic electrons seen in observations. Scenarios based on particle acceleration in a multi-x-line environment are more promising. In such models a link between the energy gain of electrons and the magnetic energy released, a

  6. Utility subroutine package used by Applied Physics Division export codes. [LMFBR]

    SciTech Connect

    Adams, C.H.; Derstine, K.L.; Henryson, H. II; Hosteny, R.P.; Toppel, B.J.

    1983-04-01

    This report describes the current state of the utility subroutine package used with codes being developed by the staff of the Applied Physics Division. The package provides a variety of useful functions for BCD input processing, dynamic core-storage allocation and management, binary I/O and data manipulation. The routines were written to conform to coding standards which facilitate the exchange of programs between different computers.

  7. Pyroelectric Crystal Accelerator In The Department Of Physics And Nuclear Engineering At West Point

    NASA Astrophysics Data System (ADS)

    Gillich, Don; Shannon, Mike; Kovanen, Andrew; Anderson, Tom; Bright, Kevin; Edwards, Ronald; Danon, Yaron; Moretti, Brian; Musk, Jeffrey

    2011-06-01

    The Nuclear Science and Engineering Research Center (NSERC), a Defense Threat Reduction Agency (DTRA) office located at the United States Military Academy (USMA), sponsors and manages cadet and faculty research in support of DTRA objectives. The NSERC has created an experimental pyroelectric crystal accelerator program to enhance undergraduate education at USMA in the Department of Physics and Nuclear Engineering. This program provides cadets with hands-on experience in designing their own experiments using an inexpensive tabletop accelerator. This device uses pyroelectric crystals to ionize and accelerate gas ions to energies of ˜100 keV. Within the next year, cadets and faculty at USMA will use this device to create neutrons through the deuterium-deuterium (D-D) fusion process, effectively creating a compact, portable neutron generator. The double crystal pyroelectric accelerator will also be used by students to investigate neutron, x-ray, and ion spectroscopy.

  8. Articulated Multimedia Physics, Lesson 6, Uniformly Accelerated Motion of Bodies Starting From Rest.

    ERIC Educational Resources Information Center

    New York Inst. of Tech., Old Westbury.

    As the sixth lesson of the Articulated Multimedia Physics Course, instructional materials are presented in this study guide with relation to the uniformly accelerated motion of bodies starting from rest. The objective is to teach students how a complete set of equations of motion is derived and how to use them. Free falling bodies near the Earth's…

  9. Using a mobile phone acceleration sensor in physics experiments on free and damped harmonic oscillations

    NASA Astrophysics Data System (ADS)

    Carlos Castro-Palacio, Juan; Velázquez-Abad, Luisberis; Giménez, Marcos H.; Monsoriu, Juan A.

    2013-06-01

    We have used a mobile phone acceleration sensor, and the Accelerometer Monitor application for Android, to collect data in physics experiments on free and damped oscillations. Results for the period, frequency, spring constant, and damping constant agree very well with measurements obtained by other methods. These widely available sensors are likely to find increased use in instructional laboratories.

  10. Physical Interpretation of the Schott Energy of An Accelerating Point Charge and the Question of Whether a Uniformly Accelerating Charge Radiates

    ERIC Educational Resources Information Center

    Rowland, David R.

    2010-01-01

    A core topic in graduate courses in electrodynamics is the description of radiation from an accelerated charge and the associated radiation reaction. However, contemporary papers still express a diversity of views on the question of whether or not a uniformly accelerating charge radiates suggesting that a complete "physical" understanding of the…

  11. Osiris: A Modern, High-Performance, Coupled, Multi-Physics Code For Nuclear Reactor Core Analysis

    SciTech Connect

    Procassini, R J; Chand, K K; Clouse, C J; Ferencz, R M; Grandy, J M; Henshaw, W D; Kramer, K J; Parsons, I D

    2007-02-26

    To meet the simulation needs of the GNEP program, LLNL is leveraging a suite of high-performance codes to be used in the development of a multi-physics tool for modeling nuclear reactor cores. The Osiris code project, which began last summer, is employing modern computational science techniques in the development of the individual physics modules and the coupling framework. Initial development is focused on coupling thermal-hydraulics and neutral-particle transport, while later phases of the project will add thermal-structural mechanics and isotope depletion. Osiris will be applicable to the design of existing and future reactor systems through the use of first-principles, coupled physics models with fine-scale spatial resolution in three dimensions and fine-scale particle-energy resolution. Our intent is to replace an existing set of legacy, serial codes which require significant approximations and assumptions, with an integrated, coupled code that permits the design of a reactor core using a first-principles physics approach on a wide range of computing platforms, including the world's most powerful parallel computers. A key research activity of this effort deals with the efficient and scalable coupling of physics modules which utilize rather disparate mesh topologies. Our approach allows each code module to use a mesh topology and resolution that is optimal for the physics being solved, and employs a mesh-mapping and data-transfer module to effect the coupling. Additional research is planned in the area of scalable, parallel thermal-hydraulics, high-spatial-accuracy depletion and coupled-physics simulation using Monte Carlo transport.
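
    The coupling idea above, in which each physics module keeps its own optimal mesh and a mapping module transfers fields between them, can be illustrated with a deliberately simplified 1-D stand-in; the real Osiris coupling is 3-D and far more elaborate.

        import numpy as np

        # Much-simplified 1-D stand-in for a mesh-mapping/data-transfer step:
        # interpolate a field from a coarse "thermal-hydraulics" mesh onto a finer
        # "neutron transport" mesh. (Illustrative only; not Osiris code.)

        coarse_x = np.linspace(0.0, 1.0, 11)     # coarse mesh nodes
        fine_x   = np.linspace(0.0, 1.0, 101)    # fine mesh nodes

        coolant_temp = 550.0 + 50.0 * np.sin(np.pi * coarse_x)   # field on the coarse mesh

        # Transfer (map) the field onto the fine mesh by linear interpolation.
        temp_on_fine = np.interp(fine_x, coarse_x, coolant_temp)

        print(temp_on_fine[:5])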

  12. Physical Activity and Influenza-Coded Outpatient Visits, a Population-Based Cohort Study

    PubMed Central

    Siu, Eric; Campitelli, Michael A.; Kwong, Jeffrey C.

    2012-01-01

    Background Although the benefits of physical activity in preventing chronic medical conditions are well established, its impacts on infectious diseases, and seasonal influenza in particular, are less clearly defined. We examined the association between physical activity and influenza-coded outpatient visits, as a proxy for influenza infection. Methodology/Principal Findings We conducted a cohort study of Ontario respondents to Statistics Canada’s population health surveys over 12 influenza seasons. We assessed physical activity levels through survey responses, and influenza-coded physician office and emergency department visits through physician billing claims. We used logistic regression to estimate the risk of influenza-coded outpatient visits during influenza seasons. The cohort comprised 114,364 survey respondents who contributed 357,466 person-influenza seasons of observation. Compared to inactive individuals, moderately active (OR 0.83; 95% CI 0.74–0.94) and active (OR 0.87; 95% CI 0.77–0.98) individuals were less likely to experience an influenza-coded visit. Stratifying by age, the protective effect of physical activity remained significant for individuals <65 years (active OR 0.86; 95% CI 0.75–0.98, moderately active: OR 0.85; 95% CI 0.74–0.97) but not for individuals ≥65 years. The main limitations of this study were the use of influenza-coded outpatient visits rather than laboratory-confirmed influenza as the outcome measure, the reliance on self-report for assessing physical activity and various covariates, and the observational study design. Conclusion/Significance Moderate to high amounts of physical activity may be associated with reduced risk of influenza for individuals <65 years. Future research should use laboratory-confirmed influenza outcomes to confirm the association between physical activity and influenza. PMID:22737242

  13. The Accuracy of ICD Codes: Identifying Physical Abuse in 4 Children’s Hospitals

    PubMed Central

    Hooft, Anneka M.; Asnes, Andrea G.; Livingston, Nina; Deutsch, Stephanie; Cahill, Linda; Wood, Joanne N.; Leventhal, John M.

    2016-01-01

    Objective To assess the accuracy of International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) codes in identifying cases of child physical abuse in 4 children’s hospitals. Methods We included all children evaluated by a child abuse pediatrician (CAP) for suspicion of abuse at 4 children’s hospitals from January 1, 2007, to December 31, 2010. Subjects included both patients judged to have injuries from abuse and those judged to have injuries from accidents or to have medical problems. The ICD-9-CM codes entered in the hospital discharge database for each child were compared to the decisions made by the CAPs on the likelihood of abuse. Sensitivity and specificity were calculated. Medical records for discordant cases were abstracted and reviewed to assess factors contributing to coding discrepancies. Results Of 936 cases of suspected physical abuse, 65.8% occurred in children <1 year of age. CAPs rated 32.7% as abuse, 18.2% as unknown cause, and 49.1% as accident/medical cause. Sensitivity and specificity of ICD-9-CM codes for abuse were 73.5% (95% confidence interval 68.2, 78.4), and 92.4% (95% confidence interval 90.0, 94.0), respectively. Among hospitals, sensitivity ranged from 53.8% to 83.8% and specificity from 85.4% to 100%. Analysis of discordant cases revealed variations in coding practices and physicians’ notations among hospitals that contributed to differences in sensitivity and specificity of ICD-9-CM codes in child physical abuse. Conclusions Overall, the sensitivity and specificity of ICD-9-CM codes in identifying cases of child physical abuse were relatively low, suggesting both an under- and overcounting of abuse cases. PMID:26142071
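
    For readers less familiar with the sensitivity and specificity figures quoted above, a short sketch of how they follow from a 2x2 comparison of ICD coding against the CAP reference judgment; the counts below are invented to roughly match the reported rates and are not taken from the paper.

        # Illustrative only: sensitivity/specificity from a 2x2 table comparing
        # ICD codes against the clinician (CAP) reference judgment. Counts are
        # invented for the example, not the study's data.

        true_pos  = 225   # CAP said abuse, ICD abuse code present
        false_neg =  81   # CAP said abuse, no ICD abuse code
        true_neg  = 581   # CAP said not abuse, no ICD abuse code
        false_pos =  49   # CAP said not abuse, ICD abuse code present

        sensitivity = true_pos / (true_pos + false_neg)
        specificity = true_neg / (true_neg + false_pos)
        print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")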

  14. Hardware acceleration of PIC codes: tapping into the power of state of the art processing units

    NASA Astrophysics Data System (ADS)

    Fonseca, R. A.; Abreu, P.; Martins, S. F.; Silva, L. O.

    2008-11-01

    There are many astrophysical and laboratory scenarios where kinetic effects play an important role. Further understanding of these scenarios requires detailed numerical modeling using fully relativistic three-dimensional kinetic codes such as OSIRIS [1]. However, these codes are computationally heavy. Explicitly using available hardware resources such as SIMD units (Altivec/SSE3) [2], Cell processors or graphics processing units (GPUs) may allow us to significantly boost the performance of these codes. In most cases, the processing units are limited to single precision arithmetic, and require specific C/C++ code to be used. We present a comparison between double precision and single precision results, focusing both on performance and on the effects on the simulation in terms of algorithm properties. Details on a framework allowing the integration of hardware-optimized routines with existing high performance codes in languages other than C are given. Finally, initial results of high performance modules of the PIC algorithm using SIMD units and GPUs will also be presented. [1] R. A. Fonseca et al., LNCS 2331, 342, (2002) [2] K. J. Bowers et al., Phys Plasmas vol. 15 (5) pp. 055703 (2008)
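
    A toy illustration, under the assumption of a simple particle push, of why the single- versus double-precision comparison mentioned above matters in PIC-like codes; this is not OSIRIS code.

        import numpy as np

        # A particle far from the origin receives a small position update; in
        # float32 the update can be lost entirely, which is one reason PIC codes
        # often store positions relative to the local cell.

        x_old = 1000.0          # particle position, in cell widths
        dx    = 1.0e-5          # small push from one time step

        x64 = np.float64(x_old) + np.float64(dx)
        x32 = np.float32(x_old) + np.float32(dx)

        print("double precision moved by:", x64 - x_old)         # ~1e-5, as expected
        print("single precision moved by:", float(x32) - x_old)  # 0.0 -- update lost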

  15. Digitized forensics: retaining a link between physical and digital crime scene traces using QR-codes

    NASA Astrophysics Data System (ADS)

    Hildebrandt, Mario; Kiltz, Stefan; Dittmann, Jana

    2013-03-01

    The digitization of physical traces from crime scenes in forensic investigations in effect creates a digital chain-of-custody and entrains the challenge of creating a link between the two or more representations of the same trace. In order to be forensically sound, the two security aspects of integrity and authenticity especially need to be maintained at all times. Adherence to authenticity using technical means proves to be a particular challenge at the boundary between the physical object and its digital representations. In this article we propose a new method of linking physical objects with their digital counterparts using two-dimensional bar codes and additional meta-data accompanying the acquired data, for integration in the conventional documentation of the collection of items of evidence (the bagging and tagging process). Using the QR-code as an exemplary implementation of a bar code and a model of the forensic process, we also supply a means to integrate our suggested approach into forensically sound proceedings as described by Holder et al. [1]. We use the example of digital dactyloscopy as a forensic discipline where progress is currently being made by digitizing some of the processing steps. We show an exemplary demonstrator of the suggested approach using a smartphone as a mobile device for the verification of the physical trace, to extend the chain-of-custody from the physical to the digital domain. Our evaluation of the demonstrator addresses the readability and verification of its contents. We can read the bar code with various devices despite its limited size of 42 x 42 mm and the rather large amount of embedded data. Furthermore, the QR-code's error correction features help to recover the contents of damaged codes. Subsequently, our appended digital signature allows for detecting malicious manipulations of the embedded data.
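
    A hedged sketch of the general idea: serialize trace metadata, attach an authenticity tag, and render it as a QR code. An HMAC stands in here for the digital signature the authors describe, the field names and key are invented, and the third-party qrcode package (with Pillow) is assumed to be installed.

        import hashlib, hmac, json

        SECRET_KEY = b"lab-evidence-key"        # illustrative key, not from the paper

        meta = {
            "trace_id": "2013-0042",            # invented example fields
            "kind": "latent fingerprint",
            "acquired_by": "scanner-01",
            "sha256_of_scan": hashlib.sha256(b"...raw scan bytes...").hexdigest(),
        }
        payload = json.dumps(meta, sort_keys=True)

        # HMAC used only as a compact stand-in for the paper's digital signature.
        tag = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
        record = payload + "|" + tag
        print(len(record), "characters to embed")

        try:
            import qrcode                       # third-party package, assumed installed
            qrcode.make(record).save("trace_label.png")
        except ImportError:
            pass                                # rendering is optional for this sketch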

  16. 'Accelerators and Beams,' multimedia computer-based training in accelerator physics

    SciTech Connect

    Silbar, R.R.; Browman, A.A.; Mead, W.C.; Williams, R.A.

    1999-06-01

    We are developing a set of computer-based tutorials on accelerators and charged-particle beams under an SBIR grant from the DOE. These self-paced, interactive tutorials, available for Macintosh and Windows platforms, use multimedia techniques to enhance the user's rate of learning and length of retention of the material. They integrate interactive 'On-Screen Laboratories,' hypertext, line drawings, photographs, two- and three-dimensional animations, video, and sound. They target a broad audience, from undergraduates or technicians to professionals. Presently, three modules have been published (Vectors, Forces, and Motion), a fourth (Dipole Magnets) has been submitted for review, and three more exist in prototype form (Quadrupoles, Matrix Transport, and Properties of Charged-Particle Beams). Participants in the poster session will have the opportunity to try out these modules on a laptop computer. © 1999 American Institute of Physics.

  17. Accelerator-based techniques for the support of senior-level undergraduate physics laboratories

    NASA Astrophysics Data System (ADS)

    Williams, J. R.; Clark, J. C.; Isaacs-Smith, T.

    2001-07-01

    Approximately three years ago, Auburn University replaced its aging Dynamitron accelerator with a new 2 MV tandem machine (Pelletron) manufactured by the National Electrostatics Corporation (NEC). This new machine is maintained and operated for the University by Physics Department personnel, and the accelerator supports a wide variety of materials modification/analysis studies. Computer software is available that allows the NEC Pelletron to be operated from a remote location, and an Internet link has been established between the Accelerator Laboratory and the Upper-Level Undergraduate Teaching Laboratory in the Physics Department. Additional software supplied by Canberra Industries has also been used to create a second Internet link that allows live-time data acquisition in the Teaching Laboratory. Our senior-level undergraduates and first-year graduate students perform a number of experiments related to radiation detection and measurement as well as several standard accelerator-based experiments that have been added recently. These laboratory exercises will be described, and the procedures used to establish the Internet links between our Teaching Laboratory and the Accelerator Laboratory will be discussed.

  18. Two-dimensional spatiotemporal coding of linear acceleration in vestibular nuclei neurons

    NASA Technical Reports Server (NTRS)

    Angelaki, D. E.; Bush, G. A.; Perachio, A. A.

    1993-01-01

    Response properties of vertical (VC) and horizontal (HC) canal/otolith-convergent vestibular nuclei neurons were studied in decerebrate rats during stimulation with sinusoidal linear accelerations (0.2-1.4 Hz) along different directions in the head horizontal plane. A novel characteristic of the majority of tested neurons was the nonzero response often elicited during stimulation along the "null" direction (i.e., the direction perpendicular to the maximum sensitivity vector, Smax). The tuning ratio (Smin gain/Smax gain), a measure of the two-dimensional spatial sensitivity, depended on stimulus frequency. For most vestibular nuclei neurons, the tuning ratio was small at the lowest stimulus frequencies and progressively increased with frequency. Specifically, HC neurons were characterized by a flat Smax gain and an approximately 10-fold increase of Smin gain per frequency decade. Thus, these neurons encode linear acceleration when stimulated along their maximum sensitivity direction, and the rate of change of linear acceleration (jerk) when stimulated along their minimum sensitivity direction. While the Smax vectors were distributed throughout the horizontal plane, the Smin vectors were concentrated mainly ipsilaterally with respect to head acceleration and clustered around the naso-occipital head axis. The properties of VC neurons were distinctly different from those of HC cells. The majority of VC cells showed decreasing Smax gains and small, relatively flat, Smin gains as a function of frequency. The Smax vectors were distributed ipsilaterally relative to the induced (apparent) head tilt. In type I anterior or posterior VC neurons, Smax vectors were clustered around the projection of the respective ipsilateral canal plane onto the horizontal head plane. These distinct spatial and temporal properties of HC and VC neurons during linear acceleration are compatible with the spatiotemporal organization of the horizontal and the vertical/torsional ocular responses

  19. Accelerator physics and technology challenges of very high energy hadron colliders

    DOE PAGES

    Shiltsev, Vladimir D.

    2015-08-20

    High energy hadron colliders have been at the forefront of particle physics for more than three decades. At present, the international particle physics community is considering several options for a 100 TeV proton–proton collider as a possible post-LHC energy frontier facility. The method of colliding beams has not fully exhausted its potential but has slowed down considerably in its progress. This article briefly reviews the accelerator physics and technology challenges of the future very high energy colliders and outlines the areas of required research and development towards their technical and financial feasibility.

  20. Induction-accelerator heavy-ion fusion: Status and beam physics issues

    SciTech Connect

    Friedman, A.

    1996-01-26

    Inertial confinement fusion driven by beams of heavy ions is an attractive route to controlled fusion. In the U.S., induction accelerators are being developed as "drivers" for this process. This paper is divided into two main sections. In the first section, the concept of induction-accelerator driven heavy-ion fusion is briefly reviewed, and the U.S. program of experiments and theoretical investigations is described. In the second, a "taxonomy" of space-charge-dominated beam physics issues is presented, accompanied by a brief discussion of each area.

  1. Physical Activities Monitoring Using Wearable Acceleration Sensors Attached to the Body.

    PubMed

    Arif, Muhammad; Kattan, Ahmed

    2015-01-01

    Monitoring physical activities by using wireless sensors is helpful for identifying postural orientation and movements in the real-life environment. A simple and robust method based on time domain features to identify physical activities is proposed in this paper; it uses sensors placed on the subjects' wrist, chest and ankle. A feature set based on time domain characteristics of the acceleration signal recorded by acceleration sensors is proposed for the classification of twelve physical activities. Nine subjects performed twelve different types of physical activities, including sitting, standing, walking, running, cycling, Nordic walking, ascending stairs, descending stairs, vacuum cleaning, ironing clothes, jumping rope, and lying down (resting state). Their ages were 27.2 ± 3.3 years and their body mass index (BMI) was 25.11 ± 2.6 kg/m2. Classification results demonstrated high validity, showing precision (positive predictive value) and recall (sensitivity) of more than 95% for all physical activities. The overall classification accuracy for a combined feature set of three sensors is 98%. The proposed framework can be used to monitor the physical activities of a subject, which can be very useful for health professionals assessing the physical activity of healthy individuals as well as patients.
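
    A hedged sketch of extracting simple time-domain features from one window of tri-axial acceleration data; the feature choices and window length here are generic illustrations and need not match the paper's exact feature set.

        import numpy as np

        rng = np.random.default_rng(0)
        window = rng.normal(0.0, 1.0, size=(256, 3))   # 256 samples x (x, y, z), stand-in data

        def time_domain_features(w):
            mag = np.linalg.norm(w, axis=1)            # signal magnitude per sample
            return {
                "mean_x": w[:, 0].mean(), "mean_y": w[:, 1].mean(), "mean_z": w[:, 2].mean(),
                "std_x":  w[:, 0].std(),  "std_y":  w[:, 1].std(),  "std_z":  w[:, 2].std(),
                "mag_mean": mag.mean(),
                "mag_range": mag.max() - mag.min(),
                "xy_corr": np.corrcoef(w[:, 0], w[:, 1])[0, 1],
            }

        print(time_domain_features(window))

    In a pipeline like the one described, such feature vectors from the wrist, chest and ankle sensors would be concatenated and passed to the classifier.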

  2. Physical Activities Monitoring Using Wearable Acceleration Sensors Attached to the Body

    PubMed Central

    2015-01-01

    Monitoring physical activities by using wireless sensors is helpful for identifying postural orientation and movements in the real-life environment. A simple and robust method based on time domain features to identify physical activities is proposed in this paper; it uses sensors placed on the subjects’ wrist, chest and ankle. A feature set based on time domain characteristics of the acceleration signal recorded by acceleration sensors is proposed for the classification of twelve physical activities. Nine subjects performed twelve different types of physical activities, including sitting, standing, walking, running, cycling, Nordic walking, ascending stairs, descending stairs, vacuum cleaning, ironing clothes, jumping rope, and lying down (resting state). Their ages were 27.2 ± 3.3 years and their body mass index (BMI) was 25.11 ± 2.6 kg/m2. Classification results demonstrated high validity, showing precision (positive predictive value) and recall (sensitivity) of more than 95% for all physical activities. The overall classification accuracy for a combined feature set of three sensors is 98%. The proposed framework can be used to monitor the physical activities of a subject, which can be very useful for health professionals assessing the physical activity of healthy individuals as well as patients. PMID:26203909

  3. Use of color-coded sleeve shutters accelerates oscillograph channel selection

    NASA Technical Reports Server (NTRS)

    Bouchlas, T.; Bowden, F. W.

    1967-01-01

    Sleeve-type shutters mechanically adjust individual galvanometer light beams onto or away from selected channels on oscillograph papers. In complex test setups, the sleeve-type shutters are color coded to separately identify each oscillograph channel. This technique could be used on any equipment using tubular galvanometer light sources.

  4. Development of a GPU-Accelerated 3-D Full-Wave Code for Reflectometry Simulations

    NASA Astrophysics Data System (ADS)

    Reuther, K. S.; Kubota, S.; Feibush, E.; Johnson, I.

    2013-10-01

    1-D and 2-D full-wave codes used as synthetic diagnostics in microwave reflectometry are standard tools for understanding electron density fluctuations in fusion plasmas. The accuracy of the code depends on how well the wave properties along the ignored dimensions can be pre-specified or neglected. In a toroidal magnetic geometry, such assumptions are never strictly correct and ray tracing has shown that beam propagation is inherently a 3-D problem. Previously, we reported on the application of GPGPUs (General-Purpose computing on Graphics Processing Units) to a 2-D FDTD (Finite-Difference Time-Domain) code ported to utilize the parallel processing capabilities of the NVIDIA C870 and C1060. Here, we report on the development of an FDTD code for 3-D problems. Initial tests will use NVIDIA's M2070 GPU and concentrate on the launching and propagation of Gaussian beams in free space. If available, results using a plasma target will also be presented. Performance will be compared with previous generations of GPGPU cards as well as with NVIDIA's newest K20C GPU. Finally, the possibility of utilizing multiple GPGPU cards in a cluster environment or in a single node will also be discussed. Supported by U.S. DoE Grants DE-FG02-99-ER54527 and DE-AC02-09CH11466 and the DoE National Undergraduate Fusion Fellowship.
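
    For orientation, a minimal 1-D vacuum FDTD (Yee) update in normalized units is sketched below; it only shows the stencil that a 2-D/3-D GPU implementation parallelizes and is not the reflectometry code itself.

        import numpy as np

        # Minimal 1-D vacuum FDTD update with the Courant number set to 1.
        nx, nt = 400, 600
        ez = np.zeros(nx)        # electric field
        hy = np.zeros(nx - 1)    # magnetic field, staggered half a cell

        for n in range(nt):
            hy += ez[1:] - ez[:-1]                        # update H from curl of E
            ez[1:-1] += hy[1:] - hy[:-1]                  # update E from curl of H
            ez[50] += np.exp(-((n - 40) / 12.0) ** 2)     # soft Gaussian source

        print("peak field after propagation:", ez.max())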

  5. PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)

    NASA Astrophysics Data System (ADS)

    Vincenti, Henri

    2016-03-01

    The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce energy consumption related to data movement by using more and more cores on each compute node ("fat nodes") with a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process Multiple Instructions on Multiple Data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for multicore/manycore CPUs) to fully take advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high-performance skeleton PIC code PICSAR to achieve both good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba Python compiler.
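
    As a loose analogy for the vectorization discussed above (not PICSAR or FBPIC code), the sketch below contrasts a per-particle position push with a whole-array push; the array form is the one that maps naturally onto SIMD lanes or GPU threads.

        import numpy as np, time

        n, dt = 1_000_000, 0.1
        x = np.zeros(n)
        v = np.random.default_rng(1).normal(size=n)

        t0 = time.perf_counter()
        for i in range(n):                 # scalar, per-particle push
            x[i] += v[i] * dt
        t_loop = time.perf_counter() - t0

        x[:] = 0.0
        t0 = time.perf_counter()
        x += v * dt                        # whole-array (vectorizable) push
        t_vec = time.perf_counter() - t0

        print(f"loop: {t_loop:.3f} s   array: {t_vec:.4f} s")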

  6. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 5 2010-07-01 2010-07-01 false Safety color code for marking physical hazards. 1910.144 Section 1910.144 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR OCCUPATIONAL SAFETY AND HEALTH STANDARDS General Environmental...

  7. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 5 2014-07-01 2014-07-01 false Safety color code for marking physical hazards. 1910.144 Section 1910.144 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR OCCUPATIONAL SAFETY AND HEALTH STANDARDS General Environmental...

  8. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 5 2011-07-01 2011-07-01 false Safety color code for marking physical hazards. 1910.144 Section 1910.144 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR OCCUPATIONAL SAFETY AND HEALTH STANDARDS General Environmental...

  9. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 5 2013-07-01 2013-07-01 false Safety color code for marking physical hazards. 1910.144 Section 1910.144 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR OCCUPATIONAL SAFETY AND HEALTH STANDARDS General Environmental...

  10. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 5 2012-07-01 2012-07-01 false Safety color code for marking physical hazards. 1910.144 Section 1910.144 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR OCCUPATIONAL SAFETY AND HEALTH STANDARDS General Environmental...

  11. Modern Teaching Methods in Physics with the Aid of Original Computer Codes and Graphical Representations

    ERIC Educational Resources Information Center

    Ivanov, Anisoara; Neacsu, Andrei

    2011-01-01

    This study describes the possibility and advantages of utilizing simple computer codes to complement the teaching techniques for high school physics. The authors have begun working on a collection of open source programs which allow students to compare the results and graphics from classroom exercises with the correct solutions and furthermore to…

  12. Future accelerators (?)

    SciTech Connect

    John Womersley

    2003-08-21

    I describe the future accelerator facilities that are currently foreseen for electroweak scale physics, neutrino physics, and nuclear structure. I will explore the physics justification for these machines, and suggest how the case for future accelerators can be made.

  13. European Code against Cancer 4th Edition: Physical activity and cancer.

    PubMed

    Leitzmann, Michael; Powers, Hilary; Anderson, Annie S; Scoccianti, Chiara; Berrino, Franco; Boutron-Ruault, Marie-Christine; Cecchini, Michele; Espina, Carolina; Key, Timothy J; Norat, Teresa; Wiseman, Martin; Romieu, Isabelle

    2015-12-01

    Physical activity is a complex, multidimensional behavior, the precise measurement of which is challenging in free-living individuals. Nonetheless, representative survey data show that 35% of the European adult population is physically inactive. Inadequate levels of physical activity are disconcerting given substantial epidemiologic evidence showing that physical activity is associated with decreased risks of colon, endometrial, and breast cancers. For example, insufficient physical activity levels are thought to cause 9% of breast cancer cases and 10% of colon cancer cases in Europe. By comparison, the evidence for a beneficial effect of physical activity is less consistent for cancers of the lung, pancreas, ovary, prostate, kidney, and stomach. The biologic pathways underlying the association between physical activity and cancer risk are incompletely defined, but potential etiologic pathways include insulin resistance, growth factors, adipocytokines, steroid hormones, and immune function. In recent years, sedentary behavior has emerged as a potential independent determinant of cancer risk. In cancer survivors, physical activity has shown positive effects on body composition, physical fitness, quality of life, anxiety, and self-esteem. Physical activity may also carry benefits regarding cancer survival, but more evidence linking increased physical activity to prolonged cancer survival is needed. Future studies using new technologies - such as accelerometers and e-tools - will contribute to improved assessments of physical activity. Such advancements in physical activity measurement will help clarify the relationship between physical activity and cancer risk and survival. Taking the overall existing evidence into account, the fourth edition of the European Code against Cancer recommends that people be physically active in everyday life and limit the time spent sitting. PMID:26187327

  14. Physical activity recognition based on rotated acceleration data using quaternion in sedentary behavior: a preliminary study.

    PubMed

    Shin, Y E; Choi, W H; Shin, T M

    2014-01-01

    This paper suggests a physical activity assessment method based on quaternions. To reduce user inconvenience, we measured activity using a mobile device that is not placed in a fixed position. Recognition results were verified with various machine learning algorithms, such as a neural network (multilayer perceptron), a decision tree (J48), an SVM (support vector machine) and a naive Bayes classifier. All algorithms showed over 97% accuracy, including the decision tree (J48), which recognized the activity with 98.35% accuracy. As a result, the physical activity assessment method based on acceleration data rotated using quaternions can classify sedentary behavior more accurately without regard to the device's position and orientation. PMID:25571109
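
    A hedged sketch of the basic operation involved, rotating a measured acceleration vector by a unit quaternion via v' = q v q*; the quaternion here is a made-up device orientation, whereas in practice it would come from the device's orientation sensor, and this is not the paper's pipeline.

        import numpy as np

        def quat_multiply(a, b):
            # Hamilton product of quaternions (w, x, y, z).
            w1, x1, y1, z1 = a
            w2, x2, y2, z2 = b
            return np.array([
                w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2,
            ])

        def rotate(v, q):
            # v' = q (0, v) q*, returning the vector part.
            q = q / np.linalg.norm(q)
            q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
            return quat_multiply(quat_multiply(q, np.concatenate(([0.0], v))), q_conj)[1:]

        accel = np.array([0.1, 0.2, 9.7])                               # raw device-frame reading, m/s^2
        q_dev = np.array([np.cos(np.pi/8), 0.0, np.sin(np.pi/8), 0.0])  # 45 deg rotation about y (example)
        print(rotate(accel, q_dev))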

  15. Accelerator test of the coded aperture mask technique for gamma-ray astronomy

    NASA Technical Reports Server (NTRS)

    Jenkins, T. L.; Frye, G. M., Jr.; Owens, A.; Carter, J. N.; Ramsden, D.

    1982-01-01

    A prototype gamma-ray telescope employing the coded aperture mask technique has been constructed and its response to a point source of 20 MeV gamma-rays has been measured. The point spread function is approximately a Gaussian with a standard deviation of 12 arc minutes. This resolution is consistent with the cell size of the mask used and the spatial resolution of the detector. In the context of the present experiment, the error radius of the source position (90 percent confidence level) is 6.1 arc minutes.

  16. Multicore and Accelerator Development for a Leadership-Class Stellar Astrophysics Code

    SciTech Connect

    Messer, Bronson; Harris, James A; Parete-Koon, Suzanne T; Chertkow, Merek A

    2013-01-01

    We describe recent development work on the core-collapse supernova code CHIMERA. CHIMERA has consumed more than 100 million CPU-hours on Oak Ridge Leadership Computing Facility (OLCF) platforms in the past 3 years, ranking it among the most important applications at the OLCF. Most of the work described has been focused on exploiting the multicore nature of the current platform (Jaguar) via, e.g., multithreading using OpenMP. In addition, we have begun a major effort to marshal the computational power of GPUs with CHIMERA. The impending upgrade of Jaguar to Titan, a 20+ PF machine with an NVIDIA GPU on many nodes, makes this work essential.

  17. Research on acceleration method of reactor physics based on FPGA platforms

    SciTech Connect

    Li, C.; Yu, G.; Wang, K.

    2013-07-01

    The physics design of new-concept reactors, which feature complex structures, various materials and broad neutron energy spectra, has greatly increased the demands on calculation methods and the corresponding computing hardware. Along with widely used parallel algorithms, heterogeneous platform architectures have been introduced into numerical computation in reactor physics. Because of its natural parallelism, the CPU-FPGA architecture is often used to accelerate numerical computation. This paper studies the application and features of this kind of heterogeneous platform in numerical calculations of reactor physics through practical examples. A neutron diffusion module designed on the CPU-FPGA architecture achieves a speedup factor of 11.2, demonstrating that it is feasible to apply this kind of heterogeneous platform to reactor physics. (authors)

  18. The GENGA Code: Gravitational Encounters in N-body simulations with GPU Acceleration.

    NASA Astrophysics Data System (ADS)

    Grimm, Simon; Stadel, Joachim

    2013-07-01

    We present a GPU (Graphics Processing Unit) implementation of a hybrid symplectic N-body integrator based on the Mercury Code (Chambers 1999), which handles close encounters with very good energy conservation. It uses a combination of mixed-variable integration (Wisdom & Holman 1991) and a direct N-body Bulirsch-Stoer method. GENGA is written in CUDA C and runs on NVIDIA GPUs. The GENGA code supports three simulation modes: integration of up to 2048 massive bodies, integration with up to a million test particles, or parallel integration of a large number of individual planetary systems. To achieve the best performance, GENGA runs completely on the GPU, where it can take advantage of the very fast, but limited, memory that exists there. All operations are performed in parallel, including close encounter detection and the grouping of independent close encounter pairs. Compared to Mercury, GENGA runs up to 30 times faster. Two applications of GENGA are presented: first, the dynamics of planetesimals and the late stage of rocky planet formation due to planetesimal collisions; second, a dynamical stability analysis of an exoplanetary system with an additional hypothetical super-Earth, which shows that in some multiple planetary systems, additional super-Earths could exist without perturbing the dynamical stability of the other planets (Elser et al. 2013).
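
    As a stand-in for the idea of a symplectic base integrator, the sketch below implements a generic kick-drift-kick (leapfrog) step for a few gravitating bodies in G = 1 units; GENGA's actual scheme (mixed-variable Wisdom-Holman integration plus Bulirsch-Stoer close encounters) is considerably more involved.

        import numpy as np

        def accelerations(pos, mass):
            # Pairwise Newtonian accelerations, G = 1.
            acc = np.zeros_like(pos)
            for i in range(len(mass)):
                diff = pos - pos[i]
                dist3 = np.sum(diff**2, axis=1) ** 1.5
                dist3[i] = np.inf                       # skip self-interaction
                acc[i] = np.sum(mass[:, None] * diff / dist3[:, None], axis=0)
            return acc

        def kdk_step(pos, vel, mass, dt):
            vel = vel + 0.5 * dt * accelerations(pos, mass)   # kick
            pos = pos + dt * vel                              # drift
            vel = vel + 0.5 * dt * accelerations(pos, mass)   # kick
            return pos, vel

        mass = np.array([1.0, 3.0e-6])                        # star + Earth-like planet
        pos  = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
        vel  = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])   # circular orbit in G=1 units
        for _ in range(1000):
            pos, vel = kdk_step(pos, vel, mass, 0.01)
        print(pos[1])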

  19. The FLUKA Code: An Overview

    NASA Technical Reports Server (NTRS)

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.; Lantz, M.; Liotta, M.; Mairani, A.; Mostacci, A.; Muraro, S.; Ottolenghi, A.; Pelliccioni, M.; Pinsky, L.; Ranft, J.; Roesler, S.; Sala, P. R.; Scannicchio, D.; Trovati, S.; Villari, R.; Wilson, T.

    2006-01-01

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including but not only, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code physics models with a particular emphasis to the hadronic and nuclear sector.

  20. The FLUKA Code: an Overview

    SciTech Connect

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M.V.; Lantz, M.; Liotta, M.; Mairani, A.; Mostacci, A.; Muraro, S.; Ottolenghi, A.; Pelliccioni, M.; Pinsky, L.; Ranft, J.; Roesler, S.; Sala, P.R.; /Milan U. /INFN, Milan /Pavia U. /INFN, Pavia /CERN /Siegen U. /Houston U. /SLAC /Frascati /NASA, Houston /ENEA, Frascati

    2005-11-09

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including but not only, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code physics models with a particular emphasis to the hadronic and nuclear sector.

  1. Monte Carlo Simulation of Siemens ONCOR Linear Accelerator with BEAMnrc and DOSXYZnrc Code

    PubMed Central

    Jabbari, Keyvan; Anvar, Hossein Saberi; Tavakoli, Mohammad Bagher; Amouheidari, Alireza

    2013-01-01

    The Monte Carlo method is the most accurate method for simulation of radiation therapy equipment. Linear accelerators (linacs) are currently the most widely used machines in radiation therapy centers. In this work, Monte Carlo modeling of the Siemens ONCOR linear accelerator was performed for 6 MV and 18 MV beams. The results of the simulation were validated by measurements in water with an ionization chamber and by extended dose range (EDR2) film in solid water. The linac's X-ray output is very sensitive to the properties of the primary electron beam. A square field size of 10 cm × 10 cm produced by the jaws was compared with ionization chamber and film measurements. Head simulation was performed with BEAMnrc and dose calculation with DOSXYZnrc for the film measurements, and the 3ddose file produced by DOSXYZnrc was analyzed using a homemade MATLAB program. At 6 MV, the agreement between the dose calculated by Monte Carlo modeling and direct measurement was within 1%, even in the build-up region. At 18 MV, the agreement was within 1%, except in the build-up region. In the build-up region, the difference was 1% at 6 MV and 2% at 18 MV. The mean difference between measurements and Monte Carlo simulation is very small for both ONCOR X-ray energies. The results are highly accurate and can be used for many applications, such as patient dose calculation in treatment planning and in studies that model this linac with small field sizes, as in the intensity-modulated radiation therapy technique. PMID:24672765
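
    An illustrative sketch of the kind of point-wise comparison behind the percent differences quoted above, using invented depth-dose arrays (not the ONCOR data) normalized to their maxima.

        import numpy as np

        depth     = np.linspace(0.0, 20.0, 41)                     # cm
        measured  = np.exp(-0.045 * depth) * (1 - np.exp(-1.8 * depth))   # synthetic curve
        simulated = measured * (1 + 0.008 * np.sin(depth))         # small synthetic deviation

        measured  /= measured.max()
        simulated /= simulated.max()

        # Point-wise difference expressed as a percentage of the dose maximum.
        pct_diff = 100.0 * (simulated - measured) / measured.max()
        print(f"max |difference| = {np.abs(pct_diff).max():.2f} % of D_max")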

  2. Physics Based Model for Cryogenic Chilldown and Loading. Part IV: Code Structure

    NASA Technical Reports Server (NTRS)

    Luchinsky, D. G.; Smelyanskiy, V. N.; Brown, B.

    2014-01-01

    This is the fourth report in a series of technical reports that describe the application of the separated two-phase flow model to the cryogenic loading operation. In this report we present the structure of the code. The code consists of five major modules: (1) geometry module; (2) solver; (3) material properties; (4) correlations; and finally (5) stability control module. The two key modules - solver and correlations - are further divided into a number of submodules. Most of the physics and knowledge databases related to the properties of cryogenic two-phase flow are included in the cryogenic correlations module. The functional form of those correlations is not well established and is a subject of extensive research. Multiple parametric forms for various correlations are currently available. Some of them are included in the correlations module, as will be described in detail in a separate technical report. Here we describe the overall structure of the code and focus on the details of the solver and stability control modules.
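
    A hedged sketch of the five-module layout described above, with invented class names, stubbed physics and placeholder numbers; the actual code is organized differently and is far larger.

        class Geometry:
            def __init__(self, pipe_length_m, diameter_m):
                self.length, self.diameter = pipe_length_m, diameter_m

        class MaterialProperties:
            def density(self, fluid, temperature_K):
                return {"LH2": 70.8, "LOX": 1141.0}.get(fluid, 1000.0)  # placeholder values, kg/m^3

        class Correlations:
            """Knowledge base: closure relations for two-phase flow (stubbed)."""
            def wall_friction(self, reynolds):
                return 0.316 * reynolds ** -0.25        # Blasius correlation, illustrative

        class StabilityControl:
            def limit_time_step(self, dt, dt_max=1.0e-3):
                return min(dt, dt_max)

        class Solver:
            def __init__(self, geometry, props, correlations, stability):
                self.geo, self.props, self.corr, self.stab = geometry, props, correlations, stability
            def advance(self, state, dt):
                dt = self.stab.limit_time_step(dt)
                # ... the separated two-phase flow update would go here ...
                return state

        solver = Solver(Geometry(30.0, 0.1), MaterialProperties(), Correlations(), StabilityControl())
        print(solver.advance({"void_fraction": 0.0}, 5.0e-3))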

  3. [Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].

    PubMed

    Furuta, Takuya; Sato, Tatsuhiko

    2015-01-01

    Time-consuming Monte Carlo dose calculations have become feasible owing to the development of computer technology. However, the recent gains are largely due to the emergence of multi-core high-performance computers, so parallel computing has become a key to achieving good performance from software programs. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions together with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.
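
    The distributed-memory pattern mentioned above can be sketched with mpi4py (a third-party package, assumed installed): each rank runs an independent batch of toy "histories" and the partial tallies are summed on rank 0. This is not PHITS source code, and the shared-memory OpenMP mode has no direct Python analogue here.

        from mpi4py import MPI          # third-party package, assumed installed
        import random

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        histories_per_rank = 100_000
        random.seed(1234 + rank)        # independent random stream per rank

        local_tally = 0.0
        for _ in range(histories_per_rank):
            # stand-in "history": score 1 if a sampled path length exceeds a threshold
            if random.expovariate(1.0) > 2.0:
                local_tally += 1.0

        total = comm.reduce(local_tally, op=MPI.SUM, root=0)
        if rank == 0:
            prob = total / (histories_per_rank * size)
            print(f"estimated probability = {prob:.4f} from {size} rank(s)")

    Run, for example, with mpirun -n 4 python script.py; each rank contributes its own batch of histories, mirroring how independent histories are farmed out in distributed-memory Monte Carlo transport.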

  4. Development of a new lattice physics code robin for PWR application

    SciTech Connect

    Zhang, S.; Chen, G.

    2013-07-01

    This paper presents a description of the methodologies and preliminary verification results of a new lattice physics code, ROBIN, being developed for PWR application at Shanghai NuStar Nuclear Power Technology Co., Ltd. The methods used in ROBIN to fulfill the various tasks of lattice physics analysis integrate well-established and very recent techniques. Established methods such as equivalence theory for resonance treatment and the method of characteristics for neutron transport calculation are adopted, as they are applied in many of today's production-level LWR lattice codes, and very useful new methods such as the enhanced neutron current method for Dancoff correction in large and complicated geometries and the log-linear rate constant power depletion method for Gd-bearing fuel are implemented as well. A small sample of verification results is provided to illustrate the type of accuracy achievable using ROBIN. It is demonstrated that ROBIN is capable of satisfying most of the needs for PWR lattice analysis and has the potential to become a production-quality code in the future. (authors)

  5. User's guide and physics manual for the SCATPlus circuit code

    SciTech Connect

    Yapuncich, M.L.; Deninger, W.J.; Gribble, R.F.

    1994-05-09

    ScatPlus is a user-friendly circuit code and an expandable library of circuit models for electrical components and devices; it can be used to predict transient behavior in electric circuits. The heart of ScatPlus is the transient circuit solver SCAT, written in 1986 by R.F. Gribble. This manual includes system requirements, the physics manual, the ScatPlus component library, a tutorial, the ScatPlus screen, menus and toolbar, and procedures.

  6. Construction of large-scale simulation codes using ALPAL (A Livermore Physics Applications Language)

    SciTech Connect

    Cook, G.

    1990-10-01

    A Livermore Physics Applications Language (ALPAL) is a new computer tool that is designed to leverage the abilities and creativity of computational scientists. Some of the ways that ALPAL provides this leverage are: first, it eliminates many sources of errors; second, it permits building code modules with far greater speed than is otherwise possible; third, it provides a means of specifying almost any numerical algorithm; and fourth, it is a language that is close to a journal-style presentation of physics models and numerical methods for solving them. 13 refs., 9 figs.

  7. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    SciTech Connect

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-07-01

    This report describes the MCV (Monte Carlo - Vectorized) Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, output edit specification, etc. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results. MCV has been designed to be very computationally efficient [Brown, 1981, Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber-205 and Cray X-MP. MCV was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties. While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations. It has the
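
    The variance reduction techniques mentioned at the end of the abstract (splitting and rouletting on particle weight) follow a standard pattern in Monte Carlo transport. The sketch below is a generic, hedged illustration of that pattern in Python, not RACER/MCV source; the weight-window bounds are arbitrary example values.

```python
# Illustrative weight-window style roulette/splitting, as used generically in
# Monte Carlo transport codes; this is not RACER/MCV source, just the standard idea.
import random

def roulette_or_split(weight, w_low=0.25, w_high=2.0, w_survive=1.0, rng=random):
    """Return the list of particle weights to continue tracking."""
    if weight < w_low:
        # Russian roulette: kill with probability 1 - weight/w_survive,
        # otherwise continue with the survival weight (keeps the mean unbiased).
        if rng.random() < weight / w_survive:
            return [w_survive]
        return []
    if weight > w_high:
        # Splitting: replace one heavy particle by n lighter copies.
        n = int(weight // w_high) + 1
        return [weight / n] * n
    return [weight]

print(roulette_or_split(0.1))   # usually [], occasionally [1.0]
print(roulette_or_split(5.0))   # three copies of weight 5/3
```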

  8. QUICKPIC: A highly efficient particle-in-cell code for modeling wakefield acceleration in plasmas

    SciTech Connect

    Huang, C. . E-mail: huangck@ee.ucla.edu; Decyk, V.K.; Ren, C.; Zhou, M.; Lu, W.; Mori, W.B.; Cooley, J.H.; Antonsen, T.M.; Katsouleas, T.

    2006-09-20

    A highly efficient, fully parallelized, fully relativistic, three-dimensional particle-in-cell model for simulating plasma and laser wakefield acceleration is described. The model is based on the quasi-static or frozen field approximation, which reduces a fully three-dimensional electromagnetic field solve and particle push to a two-dimensional field solve and particle push. This is done by calculating the plasma wake assuming that the drive beam and/or laser does not evolve during the time it takes for it to pass a plasma particle. The complete electromagnetic fields of the plasma wake and its associated index of refraction are then used to evolve the drive beam and/or laser using very large time steps. This algorithm reduces the computational time by 2-3 orders of magnitude. Comparison between the new algorithm and conventional fully explicit models (OSIRIS) is presented. The agreement is excellent for problems of interest. Direction for future work is also presented.
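
    For orientation, the quasi-static (frozen-driver) idea that underpins this class of codes is usually written in co-moving coordinates; the following is a generic sketch of that approximation, not the QuickPIC equations verbatim.

```latex
% Co-moving coordinates and the quasi-static ordering (generic sketch).
\[
  s = z, \qquad \xi = ct - z, \qquad
  \left|\frac{\partial}{\partial s}\right| \ll \left|\frac{\partial}{\partial \xi}\right|
  \quad \text{for plasma quantities.}
\]
```

    Because the driver is taken as frozen while the plasma responds, the wake can be computed slice by slice in the co-moving coordinate (a two-dimensional transverse solve per slice), and the driver is then advanced in s with time steps set by its slow envelope evolution rather than by the plasma period.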

  9. Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator

    SciTech Connect

    Chitarin, G.; Agostinetti, P.; Gallo, A.; Marconato, N.; Serianni, G.; Nakano, H.; Takeiri, Y.; Tsumori, K.

    2011-09-26

    For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.

  10. Unobtrusive heart rate estimation during physical exercise using photoplethysmographic and acceleration data.

    PubMed

    Mullan, Patrick; Kanzler, Christoph M; Lorch, Benedikt; Schroeder, Lea; Winkler, Ludwig; Laich, Larissa; Riedel, Frederik; Richer, Robert; Luckner, Christoph; Leutheuser, Heike; Eskofier, Bjoern M; Pasluosta, Cristian

    2015-08-01

    Photoplethysmography (PPG) is a non-invasive, inexpensive and unobtrusive method for heart rate monitoring during physical exercise. Motion artifacts during exercise challenge heart rate estimation from wrist-type PPG signals. This paper presents a methodology to overcome these limitations by incorporating acceleration information. The proposed algorithm consisted of four stages: (1) a wavelet-based denoising, (2) an acceleration-based denoising, (3) a frequency-based approach to estimate the heart rate, followed by (4) a postprocessing step. Experiments with different movement types, such as running and rehabilitation exercises, were used for algorithm design and development. Evaluation of our heart rate estimation showed that a mean absolute error of 1.96 bpm (beats per minute), with a standard deviation of 2.86 bpm and a correlation of 0.98, was achieved with our method. These findings suggest that the proposed methodology is robust to motion artifacts and is therefore applicable for heart rate monitoring during sports and rehabilitation. PMID:26737687
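
    A minimal sketch of stages (3)-(4), i.e., frequency-domain heart-rate estimation that discards spectral peaks coinciding with the dominant acceleration frequency, is given below. It illustrates the general idea under simplified assumptions (single window, magnitude spectra, fixed guard band) and is not the authors' algorithm; the band limits and guard width are made-up parameters.

```python
# Illustrative frequency-domain HR estimation with a crude motion-peak rejection.
import numpy as np

def estimate_hr(ppg, acc_mag, fs, hr_band=(0.7, 3.5), guard=0.15):
    freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fs)
    ppg_spec = np.abs(np.fft.rfft(ppg - np.mean(ppg)))
    acc_spec = np.abs(np.fft.rfft(acc_mag - np.mean(acc_mag)))
    band = (freqs >= hr_band[0]) & (freqs <= hr_band[1])
    f_motion = freqs[band][np.argmax(acc_spec[band])]        # dominant motion frequency
    candidates = band & (np.abs(freqs - f_motion) > guard)   # reject the motion peak
    f_hr = freqs[candidates][np.argmax(ppg_spec[candidates])]
    return 60.0 * f_hr  # beats per minute

# Example: an 8 s window at 125 Hz with a 1.5 Hz pulse and a 2.4 Hz arm swing.
t = np.arange(0, 8, 1 / 125)
ppg = np.sin(2 * np.pi * 1.5 * t) + 0.8 * np.sin(2 * np.pi * 2.4 * t)
acc = np.sin(2 * np.pi * 2.4 * t)
print(round(estimate_hr(ppg, acc, 125)))  # about 90 bpm
```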

  11. Accelerated Evolution of Schistosome Genes Coding for Proteins Located at the Host–Parasite Interface

    PubMed Central

    Philippsen, Gisele S.; Wilson, R. Alan; DeMarco, Ricardo

    2015-01-01

    Study of proteins located at the host–parasite interface in schistosomes might provide clues about the mechanisms utilized by the parasite to escape the host immune system attack. Micro-exon gene (MEG) protein products and venom allergen-like (VAL) proteins have been shown to be present in schistosome secretions or associated with glands, which led to the hypothesis that they are important components in the molecular interaction of the parasite with the host. Phylogenetic and structural analysis of genes and their transcripts in these two classes shows that recent species-specific expansion of gene number for these families occurred separately in three different species of schistosomes. Enrichment of transposable elements in MEG and VAL genes in Schistosoma mansoni provides a credible mechanism for preferential expansion of gene numbers for these families. Analysis of the ratio of nonsynonymous to synonymous substitution rates (dN/dS) in the comparison between schistosome orthologs for the two classes of genes reveals significantly higher values when compared with a set of control genes coding for secreted proteins, and for proteins previously localized in the tegument. Additional analyses of paralog genes indicate that exposure of the protein to the definitive host immune system is a determining factor leading to the higher than usual dN/dS values in those genes. The observation that two genes encoding S. mansoni vaccine candidate proteins, known to be exposed at the parasite surface, also display similar evolutionary dynamics suggests a broad response of the parasite to evolutionary pressure imposed by the definitive host immune system. PMID:25567667
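
    For readers unfamiliar with the statistic, the dN/dS (omega) ratio compares nonsynonymous and synonymous substitution rates per site; values above 1 suggest positive (diversifying) selection and values well below 1 suggest purifying selection. The toy computation below uses invented counts purely to illustrate the bookkeeping; it is not data from the study.

```python
# Toy illustration of the dN/dS (omega) statistic; all counts are hypothetical.
def dn_ds(nonsyn_subs, nonsyn_sites, syn_subs, syn_sites):
    dN = nonsyn_subs / nonsyn_sites
    dS = syn_subs / syn_sites
    return dN / dS

# omega > 1 suggests positive (diversifying) selection, as argued above for the
# host-exposed proteins; omega << 1 indicates purifying selection.
print(round(dn_ds(30, 700, 10, 300), 2))   # 1.29, toy "exposed protein"
print(round(dn_ds(5, 700, 20, 300), 2))    # 0.11, toy "control gene"
```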

  12. Vine—A Numerical Code for Simulating Astrophysical Systems Using Particles. I. Description of the Physics and the Numerical Methods

    NASA Astrophysics Data System (ADS)

    Wetzstein, M.; Nelson, Andrew F.; Naab, T.; Burkert, A.

    2009-10-01

    We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary "Press" tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose "GRAPE" hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without
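
    The Leapfrog option mentioned above follows the standard kick-drift-kick pattern; the snippet below is a generic sketch of one such step (not the Vine Fortran implementation), applied to a single particle in a harmonic potential as a self-test.

```python
# Generic kick-drift-kick (leapfrog) step, illustrating the integrator family
# selectable in such particle codes; not taken from the Vine source.
import numpy as np

def leapfrog_step(pos, vel, accel_func, dt):
    """Advance positions and velocities by one step; accel_func(pos) -> accelerations."""
    vel_half = vel + 0.5 * dt * accel_func(pos)            # kick
    pos_new = pos + dt * vel_half                          # drift
    vel_new = vel_half + 0.5 * dt * accel_func(pos_new)    # kick
    return pos_new, vel_new

# Example: one particle in a harmonic potential a = -x (period 2*pi).
x, v = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0])
for _ in range(6283):                                      # ~one period with dt = 1e-3
    x, v = leapfrog_step(x, v, lambda p: -p, 1e-3)
print(x)  # close to the starting point [1, 0, 0]
```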

  13. VINE-A NUMERICAL CODE FOR SIMULATING ASTROPHYSICAL SYSTEMS USING PARTICLES. I. DESCRIPTION OF THE PHYSICS AND THE NUMERICAL METHODS

    SciTech Connect

    Wetzstein, M.; Nelson, Andrew F.; Naab, T.; Burkert, A.

    2009-10-01

    We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary 'Press' tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose 'GRAPE' hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without

  14. Towards Extreme Field Physics: Relativistic Optics and Particle Acceleration in the Transparent-Overdense Regime

    NASA Astrophysics Data System (ADS)

    Hegelich, B. Manuel

    2011-10-01

    A steady increase in on-target laser intensity, together with increasing pulse contrast, is leading to interactions of extreme laser fields with matter in new physics regimes, which in turn enable a host of applications. A first example is the realization of interactions in the transparent-overdense regime (TOR), which is reached by interacting a highly relativistic (a0 > 10), ultra-high-contrast laser pulse [1] with a solid density target, turning it transparent to the laser by the relativistic mass increase of the electrons. Thus, the interaction becomes volumetric, increasing the energy coupling from laser to plasma and facilitating a range of effects, including relativistic optics and pulse shaping, mono-energetic electron acceleration [3], highly efficient ion acceleration in the break-out afterburner regime [4], and the generation of relativistic and forward-directed surface harmonics. Experiments at the LANL 130 TW Trident laser facility successfully reached the TOR, and show relativistic pulse shaping beyond the Fourier limit, the acceleration of mono-energetic ~40 MeV electron bunches from solid targets, forward-directed coherent relativistic high harmonic generation >1 keV, and Break-Out Afterburner (BOA) ion acceleration of carbon to >1 GeV and protons to >100 MeV. Carbon ions were accelerated with a conversion efficiency of >10% for ions >20 MeV, and monoenergetic carbon ions with an energy spread of <20% have been accelerated at up to ~500 MeV, demonstrating 3 out of 4 key requirements for ion fast ignition. The results shown now approach or exceed the limits set by many applications, from ICF diagnostics through ion fast ignition to medical physics. Furthermore, TOR targets traverse a wide range of HEDP parameter space during the interaction, ranging from WDM conditions (e.g. brown dwarfs) to energy densities of ~10^11 J/cm^3 at peak, then dropping back to the underdense but extremely hot parameter range of gamma-ray bursts. Whereas today this regime can

  15. Dosimetric Characteristics of 6 MV Modified Beams by Physical Wedges of a Siemens Linear Accelerator.

    PubMed

    Zabihzadeh, Mansour; Birgani, Mohammad Javad Tahmasebi; Hoseini-Ghahfarokhi, Mojtaba; Arvandi, Sholeh; Hoseini, Seyed Mohammad; Fadaei, Mahbube

    2016-01-01

    Physical wedges can still be used as missing-tissue compensators or filters to alter the shape of isodose curves in a target volume and reach an optimal radiotherapy plan without creating a hotspot. The aim of this study was to investigate the dosimetric properties of physical wedge filters, such as off-axis photon fluence, photon spectrum, output factor and half value layer. The photon beam quality of a 6 MV Siemens Primus modified by 15° and 45° physical wedges was studied with the BEAMnrc Monte Carlo (MC) code. The calculated percent depth dose and dose profile curves for the open and wedged photon beams were in good agreement with the measurements. Increasing the wedge angle increased the beam hardening, and this effect was more pronounced in the heel region. Using such an accurate MC model to determine wedge factors, and implementing it as a calculation algorithm in future treatment planning systems, is recommended. PMID:27221838
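
    The wedge factors referred to above are conventionally defined as the ratio of the dose with the wedge in place to the open-field dose for the same monitor units at a reference depth; the snippet below just illustrates that bookkeeping with invented numbers, not measured or simulated data from this study.

```python
# Sketch of the conventional wedge-factor definition; values are made up.
def wedge_factor(dose_wedged, dose_open):
    """Ratio of wedged to open-field dose at the reference depth, same monitor units."""
    return dose_wedged / dose_open

wf_15 = wedge_factor(dose_wedged=0.81, dose_open=1.00)   # hypothetical 15-degree wedge
wf_45 = wedge_factor(dose_wedged=0.52, dose_open=1.00)   # hypothetical 45-degree wedge
print(f"WF(15 deg) = {wf_15:.2f}, WF(45 deg) = {wf_45:.2f}")
```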

  16. Topics in radiation at accelerators: Radiation physics for personnel and environmental protection

    SciTech Connect

    Cossairt, J.D.

    1993-11-01

    This report discusses the following topics: Composition of Accelerator Radiation Fields; Shielding of Electrons and Photons at Accelerators; Shielding of Hadrons at Accelerators; Low Energy Prompt Radiation Phenomena; Induced Radioactivity at Accelerators; Topics in Radiation Protection Instrumentation at Accelerators; and Accelerator Radiation Protection Program Elements.

  17. Computation of Thermodynamic Equilibria Pertinent to Nuclear Materials in Multi-Physics Codes

    NASA Astrophysics Data System (ADS)

    Piro, Markus Hans Alexander

    Nuclear energy plays a vital role in supporting electrical needs and fulfilling commitments to reduce greenhouse gas emissions. Research is a continuing necessity to improve the predictive capabilities of fuel behaviour in order to reduce costs and to meet increasingly stringent safety requirements by the regulator. Moreover, a renewed interest in nuclear energy has given rise to a "nuclear renaissance" and the necessity to design the next generation of reactors. In support of this goal, significant research efforts have been dedicated to the advancement of numerical modelling and computational tools in simulating various physical and chemical phenomena associated with nuclear fuel behaviour. This undertaking in effect is collecting the experience and observations of a past generation of nuclear engineers and scientists in a meaningful way for future design purposes. There is an increasing desire to integrate thermodynamic computations directly into multi-physics nuclear fuel performance and safety codes. A new equilibrium thermodynamic solver is being developed with this matter as a primary objective. This solver is intended to provide thermodynamic material properties and boundary conditions for continuum transport calculations. There are several concerns with the use of existing commercial thermodynamic codes: computational performance; limited capabilities in handling large multi-component systems of interest to the nuclear industry; convenient incorporation into other codes with quality assurance considerations; and, licensing entanglements associated with code distribution. The development of this software in this research is aimed at addressing all of these concerns. The approach taken in this work exploits fundamental principles of equilibrium thermodynamics to simplify the numerical optimization equations. In brief, the chemical potentials of all species and phases in the system are constrained by estimates of the chemical potentials of the system
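
    The core computation such a solver performs is constrained Gibbs energy minimization. A generic statement of the problem (standard form, for orientation; not the specific formulation developed in this work) is:

```latex
% Generic constrained Gibbs energy minimization (standard textbook form).
\[
  \min_{n_i \ge 0} \; G = \sum_i n_i \mu_i ,
  \qquad
  \mu_i = \mu_i^{\circ}(T,p) + RT \ln a_i ,
\]
\[
  \text{subject to the element balances} \quad
  \sum_i a_{ji}\, n_i = b_j \quad \text{for each element } j,
\]
```

    where the n_i are species mole numbers, a_i activities, a_ji the number of atoms of element j in species i, and b_j the total moles of element j in the system.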

  18. High-Fidelity Lattice Physics Capabilities of the SCALE Code System Using TRITON

    SciTech Connect

    DeHart, Mark D

    2007-01-01

    Increasing complexity in reactor designs suggests a need to reexamine the methods applied in spent-fuel characterization. The ability to accurately predict the nuclide composition of depleted reactor fuel is important in a wide variety of applications. These applications include, but are not limited to, the design, licensing, and operation of commercial/research reactors and spent-fuel transport/storage systems. New complex design projects such as space reactors and Generation IV power reactors also require calculational methods that provide accurate prediction of the isotopic inventory. New high-fidelity physics methods will be required to better understand the physics associated with both evolutionary and revolutionary reactor concepts as they depart from traditional and well-understood light-water reactor designs. The TRITON sequence of the SCALE code system provides a powerful, robust, and rigorous approach for reactor physics analysis. This paper provides a detailed description of TRITON in terms of its key components used in reactor calculations.

  19. Making FLASH an Open Code for the Academic High-Energy Density Physics Community

    NASA Astrophysics Data System (ADS)

    Lamb, D. Q.; Couch, S. M.; Dubey, A.; Gopal, S.; Graziani, C.; Lee, D.; Weide, K.; Xia, G.

    2010-11-01

    High-energy density physics (HEDP) is an active and growing field of research. DOE has recently decided to make FLASH a code for the academic HEDP community. FLASH is a modular and extensible compressible spatially adaptive hydrodynamics code that incorporates capabilities for a broad range of physical processes, performs well on a wide range of existing advanced computer architectures, and has a broad user base. A rigorous software maintenance process allows the code to operate simultaneously in production and development modes. We summarize the work we are doing to add HEDP capabilities to FLASH. We are adding (1) Spitzer conductivity, (2) super time-stepping to handle the disparity between diffusion and advection time scales, and (3) a description of electrons, ions, and radiation (in the diffusion approximation) by 3 temperatures (3T) to both the hydrodynamics and the MHD solvers. We are also adding (4) ray tracing, (5) laser energy deposition, and (6) a multi-species equation of state incorporating ionization to the hydrodynamics solver; and (7) Hall MHD, and (8) the Biermann battery term to the MHD solver.

  20. Benchmarking of epithermal methods in the lattice-physics code EPRI-CELL

    NASA Astrophysics Data System (ADS)

    Williams, M. L.; Wright, R. Q.; Barhen, J.; Rothenstein, W.; Toney, B.

    The epithermal cross section shielding methods used in the lattice physics code EPRI-CELL (E-C) were extensively studied to identify their major approximations and to examine the sensitivity of computed results to these approximations. Several improvements in the original methodology resulted. These include: treatment of the external moderator source with intermediate resonance (IR) theory, development of a new Dancoff factor expression to account for clad interactions, development of a new method for treating resonance interference, and application of a generalized least-squares method to compute best-estimate values for the Bell factor and group-dependent IR parameters. The modified E-C code with its new ENDF/B-V cross section library is tested on several numerical benchmark problems.

  1. CTH: A three-dimensional, large deformation, shock wave physics code

    SciTech Connect

    McGlaun, J.M.; Zeigler, F.J.; Thompson, S.L.

    1987-01-01

    CTH is a code system under development at Sandia National Laboratories to model multidimensional, multi-material, large deformation, strong shock physics. One-dimensional, two-dimensional and three-dimensional Eulerian capabilities have been implemented first. Highly accurate analytic and tabular equations of state with solid, liquid, vapor, gas-liquid mixed phase and solid-liquid mixed phase capabilities can be used. The architecture of CTH was designed to accommodate other numerical approaches such as Lagrangian or ALE methods. CTH was carefully structured to run fast on a CRAY XMP. It is highly vectorized and multitasked. We briefly discuss the models used in CTH, techniques used to multitask the code and example calculations.

  2. CTH: A three-dimensional, large deformation, shock wave physics code

    NASA Astrophysics Data System (ADS)

    McGlaun, J. M.; Zeigler, F. J.; Thompson, S. L.

    1987-06-01

    CTH is a code system under development at Sandia National Laboratories to model multidimensional, multi-material, large deformation, strong shock physics. One-dimensional, two-dimensional and three-dimensional Eulerian capabilities have been implemented first. Highly accurate analytic and tabular equations of state with solid, liquid, vapor, gas-liquid mixed phase and solid-liquid mixed phase capabilities can be used. The architecture of CTH was designed to accommodate other numerical approaches such as Lagrangian or ALE methods. CTH was carefully structured to run fast on a CRAY XMP. It is highly vectorized and multitasked. We briefly discuss the models used in CTH, techniques used to multitask the code and example calculations.

  3. Physical property comparison of 11 soft denture lining materials as a function of accelerated aging.

    PubMed

    Dootz, E R; Koran, A; Craig, R G

    1993-01-01

    Soft denture-lining materials are an important treatment option for patients who have chronic soreness associated with dental prostheses. Three distinctly different types of materials are generally used. These are plasticized polymers or copolymers, silicones, or polyphosphazene fluoroelastomer. The acceptance of these materials by patients and dentists is variable. The objective of this study is to compare the tensile strength, percent elongation, hardness, tear strength, and tear energy of eight plasticized polymers or copolymers, two silicones, and one polyphosphazene fluoroelastomer. Tests were run at 24 hours after specimen preparation and repeated after 900 hours of accelerated aging in a Weather-Ometer device. The data indicated a wide range of physical properties for soft denture-lining materials and showed that accelerated aging dramatically affected the physical and mechanical properties of many of the elastomers. No soft denture liner proved to be superior to all others. The data obtained should provide clinicians with useful information for selecting soft denture lining materials for patients.

  4. Using the FLUKA Monte Carlo Code to Simulate the Interactions of Ionizing Radiation with Matter to Assist and Aid Our Understanding of Ground Based Accelerator Testing, Space Hardware Design, and Secondary Space Radiation Environments

    NASA Technical Reports Server (NTRS)

    Reddell, Brandon

    2015-01-01

    Designing hardware to operate in the space radiation environment is a very difficult and costly activity. Ground-based particle accelerators can be used to test for exposure to the radiation environment, one species at a time; however, the actual space environment cannot be duplicated because of the range of energies and the isotropic nature of space radiation. The FLUKA Monte Carlo code is an integrated physics package based at CERN that has been under development for the last 40+ years and includes the most up-to-date fundamental physics theory and particle physics data. This work presents an overview of FLUKA and how it has been used in conjunction with ground-based radiation testing for NASA to improve our understanding of secondary particle environments resulting from the interaction of space radiation with matter.

  5. Tsallis entropy and complexity theory in the understanding of physics of precursory accelerating seismicity.

    NASA Astrophysics Data System (ADS)

    Vallianatos, Filippos; Chatzopoulos, George

    2014-05-01

    Strong observational indications support the hypothesis that many large earthquakes are preceded by accelerating seismic release rates, which are described by a power-law time-to-failure relation. In the present work, a unified theoretical framework is discussed based on the ideas of non-extensive statistical physics along with fundamental principles of physics such as energy conservation in a faulted crustal volume undergoing stress loading. We derive the time-to-failure power law for: a) the cumulative number of earthquakes, b) the cumulative Benioff strain and c) the cumulative energy released in a fault system that obeys a hierarchical distribution law extracted from Tsallis entropy. Considering the analytic conditions near the time of failure, we derive from first principles the time-to-failure power law and show that a common critical exponent m(q) exists, which is a function of the non-extensive entropic parameter q. We conclude that the cumulative precursory parameters are functions of the energy supplied to the system and the size of the precursory volume. In addition, the q-exponential distribution that describes the fault system is a crucial factor in the appearance of power-law acceleration in the seismicity. Our results, based on Tsallis entropy and energy conservation, give a new view on the empirical laws derived by other researchers. Examples and applications of this technique to observations of accelerating seismicity will also be presented and discussed. This work was implemented through the project IMPACT-ARC in the framework of action "ARCHIMEDES III-Support of Research Teams at TEI of Crete" (MIS380353) of the Operational Program "Education and Lifelong Learning" and is co-financed by the European Union (European Social Fund) and Greek national funds.
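
    For orientation, two standard expressions sit behind this discussion: the Tsallis q-exponential and the power-law time-to-failure form for a cumulative precursory quantity. They are sketched below in generic form; the precise dependence m(q) is derived in the work itself.

```latex
% Standard forms only; exponents and the q-dependence of m come from the paper.
\[
  e_q(x) = \bigl[\,1 + (1-q)\,x\,\bigr]_{+}^{\frac{1}{1-q}}
  \qquad \text{(Tsallis $q$-exponential, recovering } e^{x} \text{ as } q \to 1\text{)},
\]
\[
  \Omega(t) = A + B\,(t_f - t)^{m}, \qquad 0 < m < 1,
\]
```

    where Omega is a cumulative precursory quantity (event count, Benioff strain, or released energy), t_f the failure time, and m = m(q) the common critical exponent referred to in the abstract.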

  6. Physics guide to CEPXS: A multigroup coupled electron-photon cross-section generating code

    SciTech Connect

    Lorence, L.J. Jr.; Morel, J.E.; Valdez, G.D.; Los Alamos National Lab., NM; Applied Methods, Inc., Albuquerque, NM )

    1989-10-01

    CEPXS is a multigroup-Legendre cross-section generating code. The multigroup-Legendre cross sections produced by CEPXS enable coupled electron-photon transport calculations to be performed with the one-dimensional discrete ordinates code, ONEDANT. We recommend that the 1989 version of ONEDANT that contains linear-discontinuous spatial differencing and S2 synthetic acceleration be used for such calculations. CEPXS/ONEDANT effectively solves the Boltzmann-CSD transport equation for electrons and the Boltzmann transport equation for photons over the energy range from 100 MeV to 1.0 keV. The continuous slowing-down approximation is used for those electron interactions that result in small-energy losses. The extended transport correction is applied to the forward-peaked elastic scattering cross section for electrons. A standard multigroup-Legendre treatment is used for the other coupled electron-photon cross sections. CEPXS extracts electron cross-section information from the DATAPAC data set and photon cross-section information from Biggs-Lighthill data. The model that is used for ionization/relaxation in CEPXS is essentially the same as that employed in ITS. 43 refs., 8 figs.

  7. Physical-Layer Network Coding for VPN in TDM-PON

    NASA Astrophysics Data System (ADS)

    Wang, Qike; Tse, Kam-Hon; Chen, Lian-Kuan; Liew, Soung-Chang

    2012-12-01

    We experimentally demonstrate a novel optical physical-layer network coding (PNC) scheme over a time-division multiplexing (TDM) passive optical network (PON). Full-duplex error-free communication between optical network units (ONUs) at 2.5 Gb/s is shown for all-optical virtual private network (VPN) applications. Compared to the conventional half-duplex communication set-up, our scheme can increase the capacity by 100% with a power penalty smaller than 3 dB. Synchronization of the two ONUs is not required for the proposed VPN scheme.
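
    The underlying network-coding idea can be stated at the bit level: the relay (here, the combining point in the PON) forwards only the XOR of the two simultaneously transmitted messages, and each end recovers the other's data by XORing with what it sent. The toy sketch below illustrates that logic only; the paper's contribution is the optical, signal-level realization of it.

```python
# Toy bit-level illustration of two-way physical-layer network coding.
def pnc_relay(bit_a, bit_b):
    # The relay broadcasts only the XOR of the two simultaneously received bits.
    return bit_a ^ bit_b

def recover(own_bit, relayed_bit):
    # Each ONU recovers the other's bit by XORing with what it sent itself.
    return own_bit ^ relayed_bit

a, b = 1, 0
x = pnc_relay(a, b)
print(recover(a, x), recover(b, x))   # prints "0 1": each side gets the other's bit
```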

  8. A 2 MV Van de Graaff accelerator as a tool for planetary and impact physics research.

    PubMed

    Mocker, Anna; Bugiel, Sebastian; Auer, Siegfried; Baust, Günter; Colette, Andrew; Drake, Keith; Fiege, Katherina; Grün, Eberhard; Heckmann, Frieder; Helfert, Stefan; Hillier, Jonathan; Kempf, Sascha; Matt, Günter; Mellert, Tobias; Munsat, Tobin; Otto, Katharina; Postberg, Frank; Röser, Hans-Peter; Shu, Anthony; Sternovsky, Zoltán; Srama, Ralf

    2011-09-01

    Investigating the dynamical and physical properties of cosmic dust can reveal a great deal of information about both the dust and its many sources. Over recent years, several spacecraft (e.g., Cassini, Stardust, Galileo, and Ulysses) have successfully characterised interstellar, interplanetary, and circumplanetary dust using a variety of techniques, including in situ analyses and sample return. Charge, mass, and velocity measurements of the dust are performed either directly (induced charge signals) or indirectly (mass and velocity from impact ionisation signals or crater morphology) and constrain the dynamical parameters of the dust grains. Dust compositional information may be obtained via either time-of-flight mass spectrometry of the impact plasma or direct sample return. The accurate and reliable interpretation of collected spacecraft data requires a comprehensive programme of terrestrial instrument calibration. This process involves accelerating suitable solar system analogue dust particles to hypervelocity speeds in the laboratory, an activity performed at the Max Planck Institut für Kernphysik in Heidelberg, Germany. Here, a 2 MV Van de Graaff accelerator electrostatically accelerates charged micron and submicron-sized dust particles to speeds up to 80 km s(-1). Recent advances in dust production and processing have allowed solar system analogue dust particles (silicates and other minerals) to be coated with a thin conductive shell, enabling them to be charged and accelerated. Refinements and upgrades to the beam line instrumentation and electronics now allow for the reliable selection of particles at velocities of 1-80 km s(-1) and with diameters of between 0.05 μm and 5 μm. This ability to select particles for subsequent impact studies based on their charges, masses, or velocities is provided by a particle selection unit (PSU). The PSU contains a field programmable gate array, capable of monitoring in real time the particles' speeds and charges, and

  9. A 2 MV Van de Graaff accelerator as a tool for planetary and impact physics research.

    PubMed

    Mocker, Anna; Bugiel, Sebastian; Auer, Siegfried; Baust, Günter; Colette, Andrew; Drake, Keith; Fiege, Katherina; Grün, Eberhard; Heckmann, Frieder; Helfert, Stefan; Hillier, Jonathan; Kempf, Sascha; Matt, Günter; Mellert, Tobias; Munsat, Tobin; Otto, Katharina; Postberg, Frank; Röser, Hans-Peter; Shu, Anthony; Sternovsky, Zoltán; Srama, Ralf

    2011-09-01

    Investigating the dynamical and physical properties of cosmic dust can reveal a great deal of information about both the dust and its many sources. Over recent years, several spacecraft (e.g., Cassini, Stardust, Galileo, and Ulysses) have successfully characterised interstellar, interplanetary, and circumplanetary dust using a variety of techniques, including in situ analyses and sample return. Charge, mass, and velocity measurements of the dust are performed either directly (induced charge signals) or indirectly (mass and velocity from impact ionisation signals or crater morphology) and constrain the dynamical parameters of the dust grains. Dust compositional information may be obtained via either time-of-flight mass spectrometry of the impact plasma or direct sample return. The accurate and reliable interpretation of collected spacecraft data requires a comprehensive programme of terrestrial instrument calibration. This process involves accelerating suitable solar system analogue dust particles to hypervelocity speeds in the laboratory, an activity performed at the Max Planck Institut für Kernphysik in Heidelberg, Germany. Here, a 2 MV Van de Graaff accelerator electrostatically accelerates charged micron and submicron-sized dust particles to speeds up to 80 km s(-1). Recent advances in dust production and processing have allowed solar system analogue dust particles (silicates and other minerals) to be coated with a thin conductive shell, enabling them to be charged and accelerated. Refinements and upgrades to the beam line instrumentation and electronics now allow for the reliable selection of particles at velocities of 1-80 km s(-1) and with diameters of between 0.05 μm and 5 μm. This ability to select particles for subsequent impact studies based on their charges, masses, or velocities is provided by a particle selection unit (PSU). The PSU contains a field programmable gate array, capable of monitoring in real time the particles' speeds and charges, and

  10. A 2 MV Van de Graaff accelerator as a tool for planetary and impact physics research

    SciTech Connect

    Mocker, Anna; Bugiel, Sebastian; Srama, Ralf; Auer, Siegfried; Baust, Guenter; Matt, Guenter; Otto, Katharina; Colette, Andrew; Drake, Keith; Kempf, Sascha; Munsat, Tobin; Shu, Anthony; Sternovsky, Zoltan; Fiege, Katherina; Postberg, Frank; Gruen, Eberhard; Heckmann, Frieder; Helfert, Stefan; Hillier, Jonathan; Mellert, Tobias; and others

    2011-09-15

    Investigating the dynamical and physical properties of cosmic dust can reveal a great deal of information about both the dust and its many sources. Over recent years, several spacecraft (e.g., Cassini, Stardust, Galileo, and Ulysses) have successfully characterised interstellar, interplanetary, and circumplanetary dust using a variety of techniques, including in situ analyses and sample return. Charge, mass, and velocity measurements of the dust are performed either directly (induced charge signals) or indirectly (mass and velocity from impact ionisation signals or crater morphology) and constrain the dynamical parameters of the dust grains. Dust compositional information may be obtained via either time-of-flight mass spectrometry of the impact plasma or direct sample return. The accurate and reliable interpretation of collected spacecraft data requires a comprehensive programme of terrestrial instrument calibration. This process involves accelerating suitable solar system analogue dust particles to hypervelocity speeds in the laboratory, an activity performed at the Max Planck Institut fuer Kernphysik in Heidelberg, Germany. Here, a 2 MV Van de Graaff accelerator electrostatically accelerates charged micron and submicron-sized dust particles to speeds up to 80 km s{sup -1}. Recent advances in dust production and processing have allowed solar system analogue dust particles (silicates and other minerals) to be coated with a thin conductive shell, enabling them to be charged and accelerated. Refinements and upgrades to the beam line instrumentation and electronics now allow for the reliable selection of particles at velocities of 1-80 km s{sup -1} and with diameters of between 0.05 {mu}m and 5 {mu}m. This ability to select particles for subsequent impact studies based on their charges, masses, or velocities is provided by a particle selection unit (PSU). The PSU contains a field programmable gate array, capable of monitoring in real time the particles' speeds and

  11. Technical Challenges and Scientific Payoffs of Muon Beam Accelerators for Particle Physics

    SciTech Connect

    Zisman, Michael S.

    2007-09-25

    Historically, progress in particle physics has largely been determined by development of more capable particle accelerators. This trend continues today with the recent advent of high-luminosity electron-positron colliders at KEK and SLAC operating as "B factories," the imminent commissioning of the Large Hadron Collider at CERN, and the worldwide development effort toward the International Linear Collider. Looking to the future, one of the most promising approaches is the development of muon-beam accelerators. Such machines have very high scientific potential, and would substantially advance the state-of-the-art in accelerator design. A 20-50 GeV muon storage ring could serve as a copious source of well-characterized electron neutrinos or antineutrinos (a Neutrino Factory), providing beams aimed at detectors located 3000-7500 km from the ring. Such long baseline experiments are expected to be able to observe and characterize the phenomenon of charge-conjugation-parity (CP) violation in the lepton sector, and thus provide an answer to one of the most fundamental questions in science, namely, why the matter-dominated universe in which we reside exists at all. By accelerating muons to even higher energies of several TeV, we can envision a Muon Collider. In contrast with composite particles like protons, muons are point particles. This means that the full collision energy is available to create new particles. A Muon Collider has roughly ten times the energy reach of a proton collider at the same collision energy, and has a much smaller footprint. Indeed, an energy frontier Muon Collider could fit on the site of an existing laboratory, such as Fermilab or BNL. The challenges of muon-beam accelerators are related to the facts that i) muons are produced as a tertiary beam, with very large 6D phase space, and ii) muons are unstable, with a lifetime at rest of only 2 microseconds. How these challenges are accommodated in the accelerator design will be described. Both a Neutrino Factory and a Muon

  12. Additions and Improvements to the FLASH Code for Simulating High Energy Density Physics Experiments

    NASA Astrophysics Data System (ADS)

    Lamb, D. Q.; Daley, C.; Dubey, A.; Fatenejad, M.; Flocke, N.; Graziani, C.; Lee, D.; Tzeferacos, P.; Weide, K.

    2015-11-01

    FLASH is an open source, finite-volume Eulerian, spatially adaptive radiation hydrodynamics and magnetohydrodynamics code that incorporates capabilities for a broad range of physical processes, performs well on a wide range of computer architectures, and has a broad user base. Extensive capabilities have been added to FLASH to make it an open toolset for the academic high energy density physics (HEDP) community. We summarize these capabilities, with particular emphasis on recent additions and improvements. These include advancements in the optical ray tracing laser package, with methods such as bi-cubic 2D and tri-cubic 3D interpolation of electron number density, adaptive stepping and 2nd-, 3rd-, and 4th-order Runge-Kutta integration methods. Moreover, we showcase the simulated magnetic field diagnostic capabilities of the code, including induction coils, Faraday rotation, and proton radiography. We also describe several collaborations with the National Laboratories and the academic community in which FLASH has been used to simulate HEDP experiments. This work was supported in part at the University of Chicago by the DOE NNSA ASC through the Argonne Institute for Computing in Science under field work proposal 57789; and the NSF under grant PHY-0903997.

  13. Physics design of a 100 keV acceleration grid system for the diagnostic neutral beam for international tokamak experimental reactor.

    PubMed

    Singh, M J; De Esch, H P L

    2010-01-01

    This paper describes the physics design of a 100 keV, 60 A H(-) accelerator for the diagnostic neutral beam (DNB) for the international tokamak experimental reactor (ITER). The accelerator is a three-grid system comprising 1280 apertures, grouped into 16 groups with 80 apertures per beam group. Several computer codes have been used to optimize the design, which follows the same philosophy as the ITER Design Description Document (DDD) 5.3 and the 1 MeV heating and current drive beam line [R. Hemsworth, H. Decamps, J. Graceffa, B. Schunke, M. Tanaka, M. Dremel, A. Tanga, H. P. L. De Esch, F. Geli, J. Milnes, T. Inoue, D. Marcuzzi, P. Sonato, and P. Zaccaria, Nucl. Fusion 49, 045006 (2009)]. The aperture shapes, intergrid distances, and the extractor voltage have been optimized to minimize the beamlet divergence. To suppress the acceleration of coextracted electrons, permanent magnets have been incorporated in the extraction grid, downstream of the cooling water channels. The electron power loads on the extractor and the grounded grids have been calculated assuming 1 coextracted electron per ion. The beamlet divergence is calculated to be 4 mrad. At present the design of the filter field for the RF-based ion sources for ITER is not fixed; therefore, a few configurations have been considered. Their effect on the transmission of the electrons and beams through the accelerator has been studied. The OPERA-3D code has been used to estimate the aperture offset steering constant of the grounded grid and the extraction grid, the space charge interaction between the beamlets, and the kerb design required to compensate for this interaction. All beamlets in the DNB must be focused to a single point in the duct, 20.665 m from the grounded grid, and the required geometrical aimings and aperture offsets have been calculated.

  14. Anterior cruciate ligament augmentation for rotational instability following primary reconstruction with an accelerated physical therapy protocol.

    PubMed

    Carey, Timothy; Oliver, David; Pniewski, Josh; Mueller, Terry; Bojescul, John

    2013-01-01

    The purpose of the present study is to report the results of anterior cruciate ligament (ACL) augmentation, in lieu of conventional revision ACL reconstruction, for patients having rotational instability despite an intact vertical graft. ACL augmentation surgery with a horizontal graft was performed to augment a healed vertical graft in five patients, and an accelerated rehabilitation protocol was instituted. Functional outcomes were assessed by the Lower Extremity Functional Scale (LEFS) and the Modified Cincinnati Rating System (MCRS). All patients completed physical therapy within 5 months and were able to return to full military duty without limitation. LEFS and MCRS scores were significantly improved. ACL augmentation with a horizontal graft provides an excellent alternative to ACL revision reconstruction for patients with an intact vertical graft, allowing an earlier return to duty for military service members.

  15. GPU-based acceleration of free energy calculations in solid state physics

    NASA Astrophysics Data System (ADS)

    Januszewski, Michał; Ptok, Andrzej; Crivelli, Dawid; Gardas, Bartłomiej

    2015-07-01

    Obtaining a thermodynamically accurate phase diagram through numerical calculations is a computationally expensive problem that is crucially important to understanding the complex phenomena of solid state physics, such as superconductivity. In this work we show how this type of analysis can be significantly accelerated through the use of modern GPUs. We illustrate this with a concrete example of free energy calculation in multi-band iron-based superconductors, known to exhibit a superconducting state with oscillating order parameter (OP). Our approach can also be used for classical BCS-type superconductors. With a customized algorithm and compiler tuning we are able to achieve a 19× speedup compared to the CPU (119× compared to a single CPU core), reducing calculation time from minutes to mere seconds, enabling the analysis of larger systems and the elimination of finite size effects.

  16. Beam Polarization at the ILC: the Physics Impact and the Accelerator Solutions

    SciTech Connect

    Aurand, B.; Bailey, I.; Bartels, C.; Brachmann, A.; Clarke, J.; Hartin, A.; Hauptman, J.; Helebrant, C.; Hesselbach, S.; Kafer, D.; List, J.; Lorenzon, W.; Marchesini, I.; Monig, Klaus; Moffeit, K.C.; Moortgat-Pick, G.; Riemann, S.; Schalicke, A.; Schuler, P.; Starovoitov, P.; Ushakov, A.; /DESY /DESY, Zeuthen /Bonn U. /SLAC

    2011-11-23

    In this contribution, accelerator solutions for polarized beams and their impact on physics measurements are discussed. The focus is on the physics requirements for precision polarimetry near the interaction point and their realization with polarized sources. Based on the ILC baseline programme as described in the Reference Design Report (RDR), recent developments are discussed and evaluated taking into account physics runs at beam energies between 100 GeV and 250 GeV, as well as calibration runs on the Z-pole and options such as the 1 TeV upgrade and GigaZ. The studies, talks and discussions presented at this conference demonstrated that beam polarization and its measurement are crucial for the physics success of any future linear collider. To achieve the required precision it is absolutely decisive to employ multiple devices for testing and controlling the systematic uncertainties of each polarimeter. The polarimetry methods for the ILC are complementary: with the upstream polarimeter the measurements are performed in a clean environment; they are fast and allow monitoring of time-dependent variations of the polarization. The polarimeter downstream of the IP will measure the disrupted beam, resulting in high background and much lower statistics, but it allows access to the depolarization at the IP. Cross-checks between the polarimeter results give redundancy and inter-calibration, which is essential for high-precision measurements. Current plans and issues for polarimeters and also energy spectrometers in the Beam Delivery System of the ILC are summarized in reference [28]. The ILC baseline design allows operation with polarized electrons and polarized positrons from the outset, provided that the spin rotation and the fast helicity reversal for positrons are implemented. Reversing the positron helicity significantly more slowly than for electrons is not recommended, as it would compromise the precision and hence the success of the ILC. Recently to use calibration data at the Z

  17. KRAM, A lattice physics code for modeling the detailed depletion of gadolinia isotopes in BWR lattice designs

    SciTech Connect

    Knott, D.; Baratta, A. )

    1990-01-01

    Lattice physics codes are used to deplete the burnable isotopes present in each lattice design, calculate the buildup of fission products, and generate the few-group cross-section data needed by the various nodal simulator codes. Normally, the detailed depletion of gadolinia isotopes is performed outside the lattice physics code in a one-dimensional environment using an onion-skin model, such as the method used in MICBURN. Results from the onion-skin depletion, in the form of effective microscopic absorption cross sections for the gadolinia, are then used by the lattice physics code during the lattice-depletion analysis. The reactivity of the lattice at any point in the cycle depends to a great extent on the amount of gadolinia present. In an attempt to improve the modeling of gadolinia depletion from fresh boiling water reactor (BWR) fuel designs, the Electric Power Research Institute (EPRI) lattice-physics code CPM-2 has been modified extensively. In this paper, the modified code KRAM is described, and results from various lattice-depletion analyses are discussed in comparison with results from standard CPM-2 and CASMO-2 analyses.

  18. Accelerator Technology and High Energy Physics Experiments, Photonics Applications and Web Engineering, Wilga, May 2012

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2012-05-01

    The paper is the second part (out of five) of the research survey of WILGA Symposium work, May 2012 Edition, concerned with accelerator technology and high energy physics experiments. It presents a digest of selected technical results presented by young researchers from different technical universities in this country during the XXXth Jubilee SPIE-IEEE Wilga 2012, May Edition, symposium on Photonics and Web Engineering. Topical tracks of the symposium embraced, among others, nanomaterials and nanotechnologies for photonics, sensory and nonlinear optical fibers, object-oriented design of hardware, photonic metrology, optoelectronics and photonics applications, photonics-electronics co-design, optoelectronic and electronic systems for astronomy and high energy physics experiments, and JET and pi-of-the-sky experiments development. The symposium is an annual summary of the development of numerous Ph.D. theses carried out in this country in the area of advanced electronic and photonic systems. It is also a great occasion for SPIE, IEEE, OSA and PSP students to meet together in a large group spanning the whole country, with guests from this part of Europe. A digest of Wilga references is presented [1-275].

  19. The physics of compensating calorimetry and the new CALOR89 code system

    SciTech Connect

    Gabriel, T.A.; Brau, J.E.; Bishop, B.L.

    1989-03-01

    Much of the understanding of the physics of calorimetry has come from the use of excellent radiation transport codes. A new understanding of compensating calorimetry was introduced four years ago following detailed studies with a new CALOR system. Now, the CALOR system has again been revised to reflect a better comprehension of high energy nuclear collisions by incorporating a modified high energy fragmentation model from FLUKA87. This revision will allow for the accurate analysis of calorimeters at energies of 100's of GeV. Presented in this paper is a discussion of compensating calorimetry, the new CALOR system, the revisions to HETC, and recently generated calorimeter related data on modes of energy deposition and secondary neutron production (E < 50 MeV) in infinite iron and uranium blocks. 38 refs., 5 figs., 5 tabs.

  20. Physical processes at work in sub-30 fs, PW laser pulse-driven plasma accelerators: Towards GeV electron acceleration experiments at CILEX facility

    NASA Astrophysics Data System (ADS)

    Beck, A.; Kalmykov, S. Y.; Davoine, X.; Lifschitz, A.; Shadwick, B. A.; Malka, V.; Specka, A.

    2014-03-01

    Optimal regimes and physical processes at work are identified for the first round of laser wakefield acceleration experiments proposed at a future CILEX facility. The Apollon-10P CILEX laser, delivering fully compressed, near-PW-power pulses of sub-25 fs duration, is well suited for driving electron density wakes in the blowout regime in cm-length gas targets. Early destruction of the pulse (partly due to energy depletion) prevents electrons from reaching dephasing, limiting the energy gain to about 3 GeV. However, the optimal operating regimes, found with reduced and full three-dimensional particle-in-cell simulations, show high energy efficiency, with about 10% of incident pulse energy transferred to 3 GeV electron bunches with sub-5% energy spread, half-nC charge, and absolutely no low-energy background. This optimal acceleration occurs in 2 cm long plasmas of electron density below 10^18 cm^-3. Due to their high charge and low phase space volume, these multi-GeV bunches are tailor-made for staged acceleration planned in the framework of the CILEX project. The hallmarks of the optimal regime are electron self-injection at the early stage of laser pulse propagation, stable self-guiding of the pulse through the entire acceleration process, and no need for an external plasma channel. With the initial focal spot closely matched for the nonlinear self-guiding, the laser pulse stabilizes transversely within two Rayleigh lengths, preventing subsequent evolution of the accelerating bucket. This dynamics prevents continuous self-injection of background electrons, preserving the low phase space volume of the bunch through the plasma. Near the end of propagation, an optical shock builds up in the pulse tail. This neither disrupts pulse propagation nor produces any noticeable low-energy background in the electron spectra, which is in striking contrast with most existing GeV-scale acceleration experiments.
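
    For orientation, the role of plasma density mentioned above follows from standard blowout-regime scalings, sketched here without numerical prefactors (which depend on the detailed modeling and are not taken from this paper):

```latex
% Standard plasma-wakefield scalings, quoted without prefactors.
\[
  \omega_p = \sqrt{\frac{n_e e^2}{\varepsilon_0 m_e}}, \qquad
  \lambda_p = \frac{2\pi c}{\omega_p}, \qquad
  L_{\mathrm{deph}} \sim \frac{\omega_0^{2}}{\omega_p^{2}}\,\lambda_p \;\propto\; n_e^{-3/2},
\]
```

    so lowering the electron density lengthens both the dephasing and depletion distances, which is consistent with the cm-scale, low-density optimum described above.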

  1. GeneFizz: A web tool to compare genetic (coding/non-coding) and physical (helix/coil) segmentations of DNA sequences. Gene discovery and evolutionary perspectives.

    PubMed

    Yeramian, Edouard; Jones, Louis

    2003-07-01

    The GeneFizz (http://pbga.pasteur.fr/GeneFizz) web tool permits the direct comparison between two types of segmentations for DNA sequences (possibly annotated): the coding/non-coding segmentation associated with genomic annotations (simple genes or exons in split genes) and the physics-based structural segmentation between helix and coil domains (as provided by the classical helix-coil model). There appears to be a varying degree of coincidence for different genomes between the two types of segmentations, from almost perfect to non-relevant. Following these two extremes, GeneFizz can be used for two purposes: ab initio physics-based identification of new genes (as recently shown for Plasmodium falciparum) or the exploration of possible evolutionary signals revealed by the discrepancies observed between the two types of information.

  2. DNA as a Binary Code: How the Physical Structure of Nucleotide Bases Carries Information

    ERIC Educational Resources Information Center

    McCallister, Gary

    2005-01-01

    The DNA triplet code also functions as a binary code. Because double-ring compounds cannot bind to double-ring compounds in the DNA code, the sequence of bases classified simply as purines or pyrimidines can encode for smaller groups of possible amino acids. This is an intuitive approach to teaching the DNA code. (Contains 6 figures.)
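
    A tiny illustration of the purine/pyrimidine "binary" reading described above (purine mapped to 1, pyrimidine to 0; the direction of the mapping is an arbitrary choice):

```python
# Map purines (A, G) to 1 and pyrimidines (C, T) to 0, as in the binary reading above.
PURINES = {"A", "G"}

def binary_read(seq):
    return "".join("1" if base in PURINES else "0" for base in seq.upper())

print(binary_read("ATGGCA"))   # -> "101101"
```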

  3. A nuclear physics program at the Rare Isotope Beams Accelerator Facility in Korea

    SciTech Connect

    Moon, Chang-Bum

    2014-04-15

    This paper outlines the new physics possibilities that fall within the field of nuclear structure and astrophysics based on experiments with radioactive ion beams at the future Rare Isotope Beams Accelerator facility in Korea. This ambitious multi-beam facility has both Isotope Separation On Line (ISOL) and fragmentation capability to produce rare isotope beams (RIBs) and will be capable of producing and accelerating beams of nuclides over a wide mass range with energies of a few to hundreds of MeV per nucleon. The large dynamic range of reaccelerated RIBs will allow optimization in each nuclear reaction case with respect to cross section and channel opening. The low-energy RIBs around the Coulomb barrier enable nuclear reactions such as elastic resonance scattering, one- or two-particle transfers, Coulomb multiple excitations, fusion-evaporation, and direct capture reactions for the study of very neutron-rich and proton-rich nuclides. In contrast, the high-energy RIBs produced by in-flight fragmentation with reaccelerated ions from the ISOL make it possible to explore the neutron drip lines in intermediate mass regions. The proposed studies aim at investigating exotic nuclei near and beyond the nucleon drip lines and at exploring how nuclear many-body systems change in such extreme regions by addressing the following topics: the evolution of shell structure in areas of extreme proton-to-neutron imbalance; the study of the weak interaction in exotic decay schemes such as beta-delayed two-neutron or two-proton emission; the change of isospin symmetry in isobaric mirror nuclei at the drip lines; two-proton or two-neutron radioactivity beyond the drip lines; the role of continuum states, including resonant states above the particle-decay threshold, in exotic nuclei; and the effects of nuclear reaction rates triggered by unbound proton-rich nuclei on nuclear astrophysical processes.

  4. Mount Aragats as a stable electron accelerator for atmospheric high-energy physics research

    NASA Astrophysics Data System (ADS)

    Chilingarian, Ashot; Hovsepyan, Gagik; Mnatsakanyan, Eduard

    2016-03-01

    Observation of the numerous thunderstorm ground enhancements (TGEs), i.e., enhanced fluxes of electrons, gamma rays, and neutrons detected by particle detectors located on the Earth's surface and related to the strong thunderstorms above it, helped to establish a new scientific topic—high-energy physics in the atmosphere. Relativistic runaway electron avalanches (RREAs) are believed to be a central engine initiating high-energy processes in thunderstorm atmospheres. RREAs observed on Mount Aragats in Armenia during the strongest thunderstorms and simultaneous measurements of TGE electron and gamma-ray energy spectra proved that RREAs are a robust and realistic mechanism for electron acceleration. TGE research facilitates investigations of the long-standing lightning initiation problem. For the last 5 years we have been experimenting with the "beams" of "electron accelerators" operating in the thunderclouds above the Aragats research station. Thunderstorms are very frequent above Aragats, peaking in May-June, and almost all of them are accompanied by enhanced particle fluxes. The station is located on a plateau at an altitude of 3200 m above sea level near a large lake. Numerous particle detectors and field meters are located in three experimental halls as well as outdoors; the facilities are operated all year round. All relevant information is being gathered, including data on particle fluxes, fields, lightning occurrences, and meteorological conditions. By the example of the huge thunderstorm that took place at Mount Aragats on August 28, 2015, we show that simultaneous detection of all the relevant data allowed us to reveal the temporal pattern of the storm development and to investigate the atmospheric discharges and particle fluxes.

  5. Summary Report of Working Group 3: High Energy Density Physics and Exotic Acceleration Schemes

    SciTech Connect

    Shvets, Gennady; Schoessow, Paul

    2006-11-27

    This report summarizes presented results and discussions in the Working Group 3 at the Twelfth Advanced Accelerator Concepts Workshop in 2006. Presentations on varied topics, such as laser proton acceleration, novel radiation sources, active medium accelerators, and many others, are reviewed, and the status and future directions of research in these areas are summarized.

  6. Operational Radiation Protection in High-Energy Physics Accelerators: Implementation of ALARA in Design and Operation of Accelerators

    SciTech Connect

    Fasso, A.; Rokni, S.; /SLAC

    2011-06-30

    It used to happen often to us accelerator radiation protection staff to be asked by a new radiation worker: "How much dose am I still allowed?" And we smiled, looking at the shocked reaction to our answer: "You are not allowed any dose." Nowadays, also thanks to improved training programs, this kind of question has become less frequent, but it is still not always easy to convince workers that staying below the exposure limits is not sufficient. After all, radiation is still the only harmful agent for which this is true: for all other risks in everyday life, from road speed limits to concentrations of hazardous chemicals in air and water, compliance with regulations is ensured by keeping below a certain value. It appears that a tendency is starting to develop to extend the radiation approach to other pollutants (1), but it will take some time before the new attitude makes its way into national legislation.

  7. On the physics of waves in the solar atmosphere: Wave heating and wind acceleration

    NASA Technical Reports Server (NTRS)

    Musielak, Z. E.

    1994-01-01

    This paper presents work performed on the generation and physics of acoustic waves in the solar atmosphere. The investigators have incorporated spatial and temporal turbulent energy spectra in a newly corrected version of the Lighthill-Stein theory of acoustic wave generation in order to calculate the acoustic wave energy fluxes generated in the solar convective zone. The investigators have also revised and improved the treatment of the generation of magnetic flux tube waves, which can carry energy along the tubes far away from the region of their origin, and have calculated the tube wave energy fluxes for the sun. They also examine the transfer of the wave energy originating in the solar convective zone to the outer atmospheric layers through computation of wave propagation and dissipation in the highly nonhomogeneous solar atmosphere. These waves may efficiently heat the solar atmosphere and the heating will be especially significant in the chromospheric network. It is also shown that the role played by Alfven waves in solar wind acceleration and coronal hole heating is dominant. The second part of the project concerned investigation of wave propagation in highly inhomogeneous stellar atmospheres using an approach based on an analytic tool developed by Musielak, Fontenla, and Moore. In addition, a new technique based on Dirac equations has been developed to investigate coupling between different MHD waves propagating in stratified stellar atmospheres.

  8. On the physics of waves in the solar atmosphere: Wave heating and wind acceleration

    NASA Technical Reports Server (NTRS)

    Musielak, Z. E.

    1993-01-01

    This paper presents work performed on the generation and physics of acoustic waves in the solar atmosphere. The investigators have incorporated spatial and temporal turbulent energy spectra in a newly corrected version of the Lighthill-Stein theory of acoustic wave generation in order to calculate the acoustic wave energy fluxes generated in the solar convective zone. The investigators have also revised and improved the treatment of the generation of magnetic flux tube waves, which can carry energy along the tubes far away from the region of their origin, and have calculated the tube energy fluxes for the sun. They also examine the transfer of the wave energy originating in the solar convective zone to the outer atmospheric layers through computation of wave propagation and dissipation in the highly nonhomogeneous solar atmosphere. These waves may efficiently heat the solar atmosphere and the heating will be especially significant in the chromospheric network. It is also shown that the role played by Alfven waves in solar wind acceleration and coronal hole heating is dominant. The second part of the project concerned investigation of wave propagation in highly inhomogeneous stellar atmospheres using an approach based on an analytic tool developed by Musielak, Fontenla, and Moore. In addition, a new technique based on Dirac equations has been developed to investigate coupling between different MHD waves propagating in stratified stellar atmospheres.

  9. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models at sufficient spatial and temporal resolution. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational time than those in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with more than thousands of processors have become available to the scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolution within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively with general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector- and scalar-type) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolution to simulate the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale. The performance measurement confirmed that both simulators exhibit excellent

  10. Towards a novel laser-driven method of exotic nuclei extraction-acceleration for fundamental physics and technology

    NASA Astrophysics Data System (ADS)

    Nishiuchi, M.; Sakaki, H.; Esirkepov, T. Zh.; Nishio, K.; Pikuz, T. A.; Faenov, A. Ya.; Skobelev, I. Yu.; Orlandi, R.; Pirozhkov, A. S.; Sagisaka, A.; Ogura, K.; Kanasaki, M.; Kiriyama, H.; Fukuda, Y.; Koura, H.; Kando, M.; Yamauchi, T.; Watanabe, Y.; Bulanov, S. V.; Kondo, K.; Imai, K.; Nagamiya, S.

    2016-04-01

    A combination of a petawatt laser and nuclear physics techniques can crucially facilitate the measurement of exotic nuclei properties. With numerical simulations and laser-driven experiments we show prospects for the Laser-driven Exotic Nuclei extraction-acceleration method proposed in [M. Nishiuchi et al., Phys, Plasmas 22, 033107 (2015)]: a femtosecond petawatt laser, irradiating a target bombarded by an external ion beam, extracts from the target and accelerates to few GeV highly charged short-lived heavy exotic nuclei created in the target via nuclear reactions.

  11. Status of MARS Code

    SciTech Connect

    N.V. Mokhov

    2003-04-09

    Status and recent developments of the MARS 14 Monte Carlo code system for simulation of hadronic and electromagnetic cascades in shielding, accelerator and detector components in the energy range from a fraction of an electronvolt up to 100 TeV are described. These include physics models in both the strong and electromagnetic interaction sectors, variance reduction techniques, residual dose, geometry, tracking, histogramming, the MAD-MARS Beam Line Builder, and the Graphical User Interface.

  12. An MCNPX accelerator beam source

    SciTech Connect

    Durkee, Joe W.; Elson, Jay S.; Jason, Andrew; Johns, Russell C.; Waters, Laurie S.

    2009-06-04

    MCNPX is a powerful Monte Carlo code that can be used to conduct sophisticated radiation-transport simulations involving complex physics and geometry. Although MCNPX possesses a wide assortment of standardized modeling tools, there are instances in which a user's needs can eclipse existing code capabilities. Fortunately, although it may not be widely known, MCNPX can accommodate many customization needs. In this article, we demonstrate source-customization capability for a new SOURCE subroutine as part of our development to enable simulations involving accelerator beams for active-interrogation studies. Simulation results for a muon beam are presented to illustrate the new accelerator-source capability.
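
    MCNPX's user-supplied SOURCE routine is written in Fortran, so the sketch below (Python, with invented parameter names and values) is only a language-agnostic illustration of the sampling such a custom accelerator-beam source performs: drawing a position, direction, and energy for each source particle from a beam-like distribution.

```python
import math
import random

def sample_beam_particle(sigma_x=0.2, sigma_y=0.2, z0=-10.0,
                         mean_energy=100.0, energy_spread=0.5):
    """Sample one source particle for a pencil-like beam directed along +z.

    sigma_x, sigma_y : transverse Gaussian spot size [cm] (illustrative values)
    z0               : starting plane [cm]
    mean_energy      : beam kinetic energy [MeV], with a Gaussian spread [MeV]
    Returns the position, direction cosines, and energy, i.e. the quantities a
    user-supplied source routine must hand back to the transport code.
    """
    x = random.gauss(0.0, sigma_x)
    y = random.gauss(0.0, sigma_y)
    # Small angular divergence (1 mrad rms) about the +z axis
    theta_x = random.gauss(0.0, 1e-3)
    theta_y = random.gauss(0.0, 1e-3)
    norm = math.sqrt(theta_x**2 + theta_y**2 + 1.0)
    u, v, w = theta_x / norm, theta_y / norm, 1.0 / norm
    energy = random.gauss(mean_energy, energy_spread)
    return (x, y, z0), (u, v, w), energy

pos, direction, energy = sample_beam_particle()
print(pos, direction, energy)
```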

  13. Modeling the physical structure of star-forming regions with LIME, a 3D radiative transfer code

    NASA Astrophysics Data System (ADS)

    Quénard, D.; Bottinelli, S.; Caux, E.

    2016-05-01

    The ability to predict line emission is crucial in order to make a comparison with observations. From LTE to full radiative transfer codes, the goal is always to derive as accurately as possible the physical properties of the source. Non-LTE calculations can be very time consuming but are needed in most cases, since many studied regions are far from LTE.

  14. J-PAS: The Javalambre-Physics of the Accelerating Universe Astrophysical Survey

    NASA Astrophysics Data System (ADS)

    Dupke, Renato A.; Benitez, Narciso; Moles, Mariano; Sodre, Laerte; J-PAS Collaboration

    2015-08-01

    The Javalambre-Physics of the Accelerating Universe Astrophysical Survey (J-PAS) is a narrow band, very wide field Cosmological Survey to be carried out from the Javalambre Astrophysical Observatory in Spain with a dedicated 2.5m telescope and a 4.7deg^2 camera with 1.2Gpix. Starting in 2016, J-PAS will observe 8600 deg^2 of the Northern Sky and measure 0.003(1+z) precision photometric redshifts for nearly 10^8 LRG and ELG galaxies plus several million QSOs, sampling an effective volume of ~14 Gpc^3 up to z = 1.3. J-PAS will also detect and measure the mass of more than a hundred thousand galaxy clusters, setting constraints on Dark Energy which rival those obtained from BAO measurements. The key to the J-PAS potential is its innovative approach: the combination of 54 filters of 145 Å width, placed 100 Å apart, and a multi-degree field of view (FOV), which makes it a powerful “redshift machine”, with the survey speed of a 4000 multiplexing low resolution spectrograph, but many times cheaper and much faster to build. Moreover, since the J-PAS camera is equivalent to a very large, 4.7deg^2 “IFU”, it will produce a time-resolved, 3D image of the Northern Sky with a very wide range of Astrophysical applications in Galaxy Evolution, the nearby Universe and the study of resolved stellar populations. J-PAS will have a lasting legacy value in many areas of Astrophysics, serving as a fundamental dataset for future Cosmological projects. Here, we present the overall description, status and scientific potential of the survey.

  15. J-PAS: The Javalambre-Physics of the Accelerating Universe Astrophysical Survey

    NASA Astrophysics Data System (ADS)

    Dupke, Renato A.; Benitez, Narciso; Moles, Mariano; Sodre, Laerte; Irwin, Jimmy; J-PAS Collaboration

    2016-01-01

    The Javalambre-Physics of the Accelerating Universe Astrophysical Survey (J-PAS) is a narrow band, very wide field Cosmological Survey to be carried out from the Javalambre Astrophysical Observatory in Spain with a dedicated 2.5m telescope and a 4.7deg^2 camera with 1.2Gpix. Starting in 2016, J-PAS will observe 8600 deg^2 of the Northern Sky and measure 0.003(1+z) precision photometric redshifts for nearly 10^8 LRG and ELG galaxies plus several million QSOs, sampling an effective volume of ~14 Gpc^3 up to z = 1.3. J-PAS will also detect and measure the mass of more than a hundred thousand galaxy clusters, setting constraints on Dark Energy which rival those obtained from BAO measurements. The key to the J-PAS potential is its innovative approach: the combination of 54 filters of 145 Å width, placed 100 Å apart, and a multi-degree field of view (FOV), which makes it a powerful "redshift machine", with the survey speed of a 4000 multiplexing low resolution spectrograph, but many times cheaper and much faster to build. Moreover, since the J-PAS camera is equivalent to a very large, 4.7deg^2 "IFU", it will produce a time-resolved, 3D image of the Northern Sky with a very wide range of Astrophysical applications in Galaxy Evolution, the nearby Universe and the study of resolved stellar populations. J-PAS will have a lasting legacy value in many areas of Astrophysics, serving as a fundamental dataset for future Cosmological projects. Here, we present the overall description, status and scientific potential of the survey.

  16. Can Accelerators Accelerate Learning?

    NASA Astrophysics Data System (ADS)

    Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.

    2009-03-01

    The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ) [1] makes it possible for students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools like the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily manageable by the students, who can perform simple hands-on activities, stimulating interest in physics and getting the students close to modern laboratory techniques.

  17. Can Accelerators Accelerate Learning?

    SciTech Connect

    Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.

    2009-03-10

    The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ) [1] makes it possible for students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools like the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily manageable by the students, who can perform simple hands-on activities, stimulating interest in physics and getting the students close to modern laboratory techniques.

  18. New VACUUM: towards an object oriented version of the code with additional physics capability

    NASA Astrophysics Data System (ADS)

    Chance, M. S.; Pletzer, A.; Okabayashi, M.; Chu, M. S.; Turnbull, A. D.; Glasser, A. H.

    2001-10-01

    The VACUUM code,^a which was initially created to provide the outer boundary conditions and diagnostics to the PEST and NOVA Fourier codes, has been substantially modified to be interfaced to a variety of other stability codes, including DCON, the finite element GATO code, as well as the nonlinear NIMROD and M3D codes. It now also includes the ability to model the feedback stabilization of external MHD modes in tokamaks, so that the effects of a thin resistive shell and the feedback circuitry, together with the associated sensor loops and feedback coils, are incorporated. To improve the interface to an increasing number of codes, structural changes addressing portability and memory management are under development: a Fortran 90 version of the code using dynamic memory allocation is in progress, and furthermore, VACUUM will be transformed from a standalone code using I/O files into one using a set of library calls where input and output data are communicated through "set" and "get" calls. The benefit of such an application programming interface (API) layout is to allow VACUUM to be embedded in large packages (e.g., TRANSP), scripting environments (e.g., Python, Matlab, IDL), or wrapped into C++ code to provide object-oriented features. ^aM.S. Chance, Phys. Plasmas 4, 2161 (1997).

  19. Benchmarking the SPHINX and CTH shock physics codes for three problems in ballistics

    SciTech Connect

    Wilson, L.T.; Hertel, E.; Schwalbe, L.; Wingate, C.

    1998-02-01

    The CTH Eulerian hydrocode, and the SPHINX smooth particle hydrodynamics (SPH) code were used to model a shock tube, two long rod penetrations into semi-infinite steel targets, and a long rod penetration into a spaced plate array. The results were then compared to experimental data. Both SPHINX and CTH modeled the one-dimensional shock tube problem well. Both codes did a reasonable job in modeling the outcome of the axisymmetric rod impact problem. Neither code correctly reproduced the depth of penetration in both experiments. In the 3-D problem, both codes reasonably replicated the penetration of the rod through the first plate. After this, however, the predictions of both codes began to diverge from the results seen in the experiment. In terms of computer resources, the run times are problem dependent, and are discussed in the text.

  20. J-PAS: The Javalambre Physics of the Accelerated Universe Astrophysical Survey

    NASA Astrophysics Data System (ADS)

    Cepa, J.; Benítez, N.; Dupke, R.; Moles, M.; Sodré, L.; Cenarro, A. J.; Marín-Franch, A.; Taylor, K.; Cristóbal, D.; Fernández-Soto, A.; Mendes de Oliveira, C.; Abramo, L. R.; Alcaniz, J. S.; Overzier, R.; Hernández-Monteagudo, A.; Alfaro, E. J.; Kanaan, A.; Carvano, M.; Reis, R. R. R.; J-PAS Team

    2016-10-01

    The Javalambre Physics of the Accelerated Universe Astrophysical Survey (J-PAS) is a narrow band, very wide field Cosmological Survey to be carried out from the Javalambre Observatory in Spain with a purpose-built, dedicated 2.5 m telescope and a 4.7 sq.deg. camera with 1.2 Gpix. Starting in late 2016, J-PAS will observe 8500 sq.deg. of Northern Sky and measure Δz ~ 0.003(1+z) photo-z for 9×10^7 LRG and ELG galaxies plus several million QSOs, sampling an effective volume of ~14 Gpc^3 up to z = 1.3 and becoming the first radial BAO experiment to reach Stage IV. J-PAS will detect 7×10^5 galaxy clusters and groups, setting constraints on Dark Energy which rival those obtained from its BAO measurements. Thanks to the superb characteristics of the site (seeing ~0.7 arcsec), J-PAS is expected to obtain a deep, sub-arcsec image of the Northern sky, which combined with its unique photo-z precision will produce one of the most powerful cosmological lensing surveys before the arrival of Euclid. J-PAS's unprecedented spectral time domain information will enable a self-contained SN survey that, without the need for external spectroscopic follow-up, will detect, classify and measure σ_z ~ 0.5 redshifts for ~4000 SNe Ia and ~900 core-collapse SNe. The key to the J-PAS potential is its innovative approach: a contiguous system of 54 filters with 145 Å width, placed 100 Å apart over a multi-degree FoV, is a powerful redshift machine, with the survey speed of a 4000 multiplexing low resolution spectrograph, but many times cheaper and much faster to build. The J-PAS camera is equivalent to a 4.7 sq.deg. IFU and it will produce a time-resolved, 3D image of the Northern Sky with a very wide range of Astrophysical applications in Galaxy Evolution, the nearby Universe and the study of resolved stellar populations.

  1. Conceptual designs of two petawatt-class pulsed-power accelerators for high-energy-density-physics experiments

    NASA Astrophysics Data System (ADS)

    Stygar, W. A.; Awe, T. J.; Bailey, J. E.; Bennett, N. L.; Breden, E. W.; Campbell, E. M.; Clark, R. E.; Cooper, R. A.; Cuneo, M. E.; Ennis, J. B.; Fehl, D. L.; Genoni, T. C.; Gomez, M. R.; Greiser, G. W.; Gruner, F. R.; Herrmann, M. C.; Hutsel, B. T.; Jennings, C. A.; Jobe, D. O.; Jones, B. M.; Jones, M. C.; Jones, P. A.; Knapp, P. F.; Lash, J. S.; LeChien, K. R.; Leckbee, J. J.; Leeper, R. J.; Lewis, S. A.; Long, F. W.; Lucero, D. J.; Madrid, E. A.; Martin, M. R.; Matzen, M. K.; Mazarakis, M. G.; McBride, R. D.; McKee, G. R.; Miller, C. L.; Moore, J. K.; Mostrom, C. B.; Mulville, T. D.; Peterson, K. J.; Porter, J. L.; Reisman, D. B.; Rochau, G. A.; Rochau, G. E.; Rose, D. V.; Rovang, D. C.; Savage, M. E.; Sceiford, M. E.; Schmit, P. F.; Schneider, R. F.; Schwarz, J.; Sefkow, A. B.; Sinars, D. B.; Slutz, S. A.; Spielman, R. B.; Stoltzfus, B. S.; Thoma, C.; Vesey, R. A.; Wakeland, P. E.; Welch, D. R.; Wisher, M. L.; Woodworth, J. R.

    2015-11-01

    We have developed conceptual designs of two petawatt-class pulsed-power accelerators: Z 300 and Z 800. The designs are based on an accelerator architecture that is founded on two concepts: single-stage electrical-pulse compression and impedance matching [Phys. Rev. ST Accel. Beams 10, 030401 (2007)]. The prime power source of each machine consists of 90 linear-transformer-driver (LTD) modules. Each module comprises LTD cavities connected electrically in series, each of which is powered by 5-GW LTD bricks connected electrically in parallel. (A brick comprises a single switch and two capacitors in series.) Six water-insulated radial-transmission-line impedance transformers transport the power generated by the modules to a six-level vacuum-insulator stack. The stack serves as the accelerator's water-vacuum interface. The stack is connected to six conical outer magnetically insulated vacuum transmission lines (MITLs), which are joined in parallel at a 10-cm radius by a triple-post-hole vacuum convolute. The convolute sums the electrical currents at the outputs of the six outer MITLs, and delivers the combined current to a single short inner MITL. The inner MITL transmits the combined current to the accelerator's physics-package load. Z 300 is 35 m in diameter and stores 48 MJ of electrical energy in its LTD capacitors. The accelerator generates 320 TW of electrical power at the output of the LTD system, and delivers 48 MA in 154 ns to a magnetized-liner inertial-fusion (MagLIF) target [Phys. Plasmas 17, 056303 (2010)]. The peak electrical power at the MagLIF target is 870 TW, which is the highest power throughout the accelerator. Power amplification is accomplished by the centrally located vacuum section, which serves as an intermediate inductive-energy-storage device. The principal goal of Z 300 is to achieve thermonuclear ignition; i.e., a fusion yield that exceeds the energy transmitted by the accelerator to the liner. 2D magnetohydrodynamic (MHD) simulations

  2. User's guide for TWODANT: a code package for two-dimensional, diffusion-accelerated, neutral-particle transport. Revision 1

    SciTech Connect

    Alcouffe, R E; Brinkley, F W; Marr, D R; O'Dell, R D

    1984-10-01

    TWODANT solves the two-dimensional multigroup transport equation in x-y, r-z, and r-theta geometries. Both regular and adjoint, inhomogeneous (fixed source) and homogeneous (k-effective and eigenvalue search) problems subject to vacuum, reflective, periodic, white, or inhomogeneous boundary flux conditions are solved. General anisotropic scattering is allowed and anisotropic inhomogeneous sources are permitted. TWODANT numerically solves the two-dimensional multigroup form of the neutral-particle, steady-state Boltzmann transport equation. The discrete-ordinates form of approximation is used for treating the angular variation of the particle distribution and the diamond-difference scheme is used for space-angle discretization. Negative fluxes are eliminated by a local set-to-zero-and-correct algorithm. A standard inner (within-group) iteration, outer (energy-group-dependent source) iteration technique is used. Both inner and outer iterations are accelerated using the diffusion synthetic acceleration method. The diffusion solver uses the multigrid method and Chebyshev acceleration of the fission source.

  3. Physical Property Changes in Plutonium from Accelerated Aging using Pu-238 Enrichment

    SciTech Connect

    Chung, B W; Choi, B W; Saw, C K; Thompson, S R; Woods, C H; Hopkins, D J; Ebbinghaus, B B

    2006-12-20

    We present changes in volume, immersion density, and tensile properties observed in accelerated-aged plutonium alloys. Accelerated alloys (or spiked alloys) are plutonium alloys enriched with approximately 7.5 weight percent of the faster-decaying 238Pu to accelerate the aging process to approximately 17 times the rate of unaged weapons-grade plutonium. After sixty equivalent years of aging of the spiked alloys, dilatometry shows the samples at 35 °C have swelled in volume by 0.15 to 0.17% and now exhibit a near-linear volume increase due to helium in-growth. Immersion density measurements of the spiked alloys show a decrease in density, corresponding to similar normalized volumetric changes (expansion) for the spiked alloys. Tensile tests show increasing yield and engineering ultimate strength as the spiked alloys age.

  4. A Treasure Trove of Physics from a Common Source-Automobile Acceleration Data

    NASA Astrophysics Data System (ADS)

    Graney, Christopher M.

    2005-11-01

    What is better than interesting, challenging physics with good data free for the taking to which everyone can relate? That's what is available to anyone who digs into the reams of automobile performance tests that have been available in popular magazines since the 1950s. Opportunities to do and teach interesting physics abound, as evidenced by the frequent appearance of "physics of cars" articles in The Physics Teacher.1-6

  5. The development and performance of a message-passing version of the PAGOSA shock-wave physics code

    SciTech Connect

    Gardner, D.R.; Vaughan, C.T.

    1997-10-01

    A message-passing version of the PAGOSA shock-wave physics code has been developed at Sandia National Laboratories for multiple-instruction, multiple-data stream (MIMD) computers. PAGOSA is an explicit, Eulerian code for modeling the three-dimensional, high-speed hydrodynamic flow of fluids and the dynamic deformation of solids under high rates of strain. It was originally developed at Los Alamos National Laboratory for the single-instruction, multiple-data (SIMD) Connection Machine parallel computers. The performance of Sandia's message-passing version of PAGOSA has been measured on two MIMD machines, the nCUBE 2 and the Intel Paragon XP/S. No special efforts were made to optimize the code for either machine. The measured scaled speedup (computational time for a single computational node divided by the computational time per node for fixed computational load) and grind time (computational time per cell per time step) show that the MIMD PAGOSA code scales linearly with the number of computational nodes used on a variety of problems, including the simulation of shaped-charge jets perforating an oil well casing. Scaled parallel efficiencies for MIMD PAGOSA are greater than 0.70 when the available memory per node is filled (or nearly filled) on hundreds to a thousand or more computational nodes on these two machines, indicating that the code scales very well. Thus good parallel performance can be achieved for complex and realistic applications when they are first implemented on MIMD parallel computers.
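
    The two performance metrics quoted here are simple ratios of measured times. As a reminder of how they are formed, the snippet below (Python; the numbers are purely illustrative, not measurements from the report) computes a grind time and a weak-scaling (scaled) parallel efficiency of the kind reported above.

```python
def grind_time(wall_time_s, n_cells, n_steps):
    """Grind time: computational time per cell per time step [s]."""
    return wall_time_s / (n_cells * n_steps)

def scaled_efficiency(t_single_node, t_per_node_scaled):
    """Weak-scaling (scaled) parallel efficiency: the time for one node's share of
    work run on a single node, divided by the time per node when every node carries
    that same load; the ideal value is 1."""
    return t_single_node / t_per_node_scaled

# Purely illustrative numbers: 1e6 cells per node, 1000 time steps
print(f"grind time: {grind_time(2000.0, 1_000_000, 1000) * 1e6:.1f} us/cell/step")
print(f"scaled efficiency: {scaled_efficiency(1800.0, 2400.0):.2f}")
```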

  6. The Use of Acceleration to Code for Animal Behaviours; A Case Study in Free-Ranging Eurasian Beavers Castor fiber

    PubMed Central

    Graf, Patricia M.; Wilson, Rory P.; Qasem, Lama; Hackländer, Klaus; Rosell, Frank

    2015-01-01

    Recent technological innovations have led to the development of miniature, accelerometer-containing electronic loggers which can be attached to free-living animals. Accelerometers provide information on both body posture and dynamism which can be used as descriptors to define behaviour. We deployed tri-axial accelerometer loggers on 12 free-ranging Eurasian beavers Castor fiber in the county of Telemark, Norway, and on four captive beavers (two Eurasian beavers and two North American beavers C. canadensis) to corroborate acceleration signals with observed behaviours. By using random forests for classifying behavioural patterns of beavers from accelerometry data, we were able to distinguish seven behaviours: standing, walking, swimming, feeding, grooming, diving and sleeping. We show how to apply the use of acceleration to determine behaviour, and emphasise the ease with which this non-invasive method can be implemented. Furthermore, we discuss the strengths and weaknesses of this, and the implementation of accelerometry on animals, illustrating limitations, suggestions and solutions. Ultimately, this approach may also serve as a template facilitating studies on other animals with similar locomotor modes and deliver new insights into hitherto unknown aspects of behavioural ecology. PMID:26317623
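
    The classification step described here takes per-burst feature vectors derived from the tri-axial acceleration signal and maps them to behaviour labels with a random forest. A minimal scikit-learn sketch of that workflow is below (Python; the synthetic features, class separations, and parameter values are invented stand-ins, not the study's data or settings).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
behaviours = ["standing", "walking", "swimming", "feeding",
              "grooming", "diving", "sleeping"]

# Synthetic stand-in for per-burst features (e.g. mean static acceleration per axis
# plus overall dynamic body acceleration); real studies derive these from the logger.
n_per_class = 200
X = np.vstack([rng.normal(loc=i, scale=0.8, size=(n_per_class, 4))
               for i in range(len(behaviours))])
y = np.repeat(behaviours, n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```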

  7. User's manual for ONEDANT: a code package for one-dimensional, diffusion-accelerated, neutral-particle transport

    SciTech Connect

    O'Dell, R.D.; Brinkley, F.W. Jr.; Marr, D.R.

    1982-02-01

    ONEDANT is designed for the CDC-7600, but the program has been implemented and run on the IBM-370/190 and CRAY-I computers. ONEDANT solves the one-dimensional multigroup transport equation in plane, cylindrical, spherical, and two-angle plane geometries. Both regular and adjoint, inhomogeneous and homogeneous (k_eff and eigenvalue search) problems subject to vacuum, reflective, periodic, white, albedo, or inhomogeneous boundary flux conditions are solved. General anisotropic scattering is allowed and anisotropic inhomogeneous sources are permitted. ONEDANT numerically solves the one-dimensional, multigroup form of the neutral-particle, steady-state form of the Boltzmann transport equation. The discrete-ordinates approximation is used for treating the angular variation of the particle distribution and the diamond-difference scheme is used for phase space discretization. Negative fluxes are eliminated by a local set-to-zero-and-correct algorithm. A standard inner (within-group) iteration, outer (energy-group-dependent source) iteration technique is used. Both inner and outer iterations are accelerated using the diffusion synthetic acceleration method. (WHK)
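
    The solution scheme summarized above (discrete ordinates in angle, diamond differencing in space, inner source iterations) can be illustrated with a deliberately small one-group slab analogue. The sketch below (Python; not ONEDANT itself, using plain unaccelerated source iteration rather than diffusion synthetic acceleration, with invented cross sections and mesh) performs diamond-difference sweeps until the scalar flux converges.

```python
import numpy as np

# One-group, 1D slab with isotropic scattering; all values invented for illustration
nx, width = 50, 10.0                  # number of cells, slab width [cm]
dx = width / nx
sigma_t, sigma_s = 1.0, 0.5           # total and scattering cross sections [1/cm]
q_ext = np.full(nx, 1.0)              # flat external isotropic source

# S8 Gauss-Legendre quadrature over mu in [-1, 1] (weights sum to 2)
mu, w = np.polynomial.legendre.leggauss(8)

phi = np.zeros(nx)                    # scalar flux
for sweep in range(200):              # unaccelerated inner (source) iteration
    q = 0.5 * (sigma_s * phi + q_ext)               # emission density per unit mu
    phi_new = np.zeros(nx)
    for m, wt in zip(mu, w):
        psi_edge = 0.0                # vacuum boundary on the incoming side
        cells = range(nx) if m > 0 else range(nx - 1, -1, -1)
        for i in cells:               # diamond-difference sweep along direction mu
            a = 2.0 * abs(m) / dx
            psi_c = (q[i] + a * psi_edge) / (sigma_t + a)   # cell-average angular flux
            psi_edge = 2.0 * psi_c - psi_edge               # outgoing edge value
            phi_new[i] += wt * psi_c
    converged = np.max(np.abs(phi_new - phi)) < 1e-8 * np.max(phi_new)
    phi = phi_new
    if converged:
        break

print(f"source iterations: {sweep + 1}, midplane scalar flux: {phi[nx // 2]:.4f}")
```

    Diffusion synthetic acceleration, as used in ONEDANT and TWODANT, replaces the slowly converging plain iteration above with a diffusion-based correction after each sweep; the sketch omits that step for brevity.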

  8. The Use of Acceleration to Code for Animal Behaviours; A Case Study in Free-Ranging Eurasian Beavers Castor fiber.

    PubMed

    Graf, Patricia M; Wilson, Rory P; Qasem, Lama; Hackländer, Klaus; Rosell, Frank

    2015-01-01

    Recent technological innovations have led to the development of miniature, accelerometer-containing electronic loggers which can be attached to free-living animals. Accelerometers provide information on both body posture and dynamism which can be used as descriptors to define behaviour. We deployed tri-axial accelerometer loggers on 12 free-ranging Eurasian beavers Castor fiber in the county of Telemark, Norway, and on four captive beavers (two Eurasian beavers and two North American beavers C. canadensis) to corroborate acceleration signals with observed behaviours. By using random forests for classifying behavioural patterns of beavers from accelerometry data, we were able to distinguish seven behaviours: standing, walking, swimming, feeding, grooming, diving and sleeping. We show how to apply the use of acceleration to determine behaviour, and emphasise the ease with which this non-invasive method can be implemented. Furthermore, we discuss the strengths and weaknesses of this, and the implementation of accelerometry on animals, illustrating limitations, suggestions and solutions. Ultimately, this approach may also serve as a template facilitating studies on other animals with similar locomotor modes and deliver new insights into hitherto unknown aspects of behavioural ecology. PMID:26317623

  9. The Levels of Compliance of Physical Education Teachers with Professional Ethics Codes

    ERIC Educational Resources Information Center

    Ozbek, Oguz

    This study was survey-type research aimed at determining the levels of compliance with professional ethics of physical education staff who work at high schools. Participants were 465 physical education teachers and 398 high school principals. In this study, the measure of "professional ethics of physical education teachers" developed by the…

  10. Physics of Phase Space Matching for Staging Plasma and Traditional Accelerator Components Using Longitudinally Tailored Plasma Profiles

    NASA Astrophysics Data System (ADS)

    Xu, X. L.; Hua, J. F.; Wu, Y. P.; Zhang, C. J.; Li, F.; Wan, Y.; Pai, C.-H.; Lu, W.; An, W.; Yu, P.; Hogan, M. J.; Joshi, C.; Mori, W. B.

    2016-03-01

    Phase space matching between two plasma-based accelerator (PBA) stages and between a PBA and a traditional accelerator component is a critical issue for emittance preservation. The drastic differences of the transverse focusing strengths as the beam propagates between stages and components may lead to a catastrophic emittance growth even when there is a small energy spread. We propose using the linear focusing forces from nonlinear wakes in longitudinally tailored plasma density profiles to control phase space matching between sections with negligible emittance growth. Several profiles are considered and theoretical analysis and particle-in-cell simulations show how these structures may work in four different scenarios. Good agreement between theory and simulation is obtained, and it is found that the adiabatic approximation misses important physics even for long profiles.
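
    One way to see why staging is delicate: the matched beta function inside a uniform ion column, β_m = sqrt(2γ) c/ω_p, is small (sub-centimetre at 10^17 cm^-3 for multi-GeV beams) and grows as the plasma density falls, while conventional beamline optics typically deliver much larger beta functions; this mismatch is what tailored density profiles address. The sketch below (Python; a textbook relation evaluated at illustrative beam energy and densities, not parameters from the Letter) evaluates β_m along a falling density ramp.

```python
import math

e = 1.602176634e-19      # elementary charge [C]
m_e = 9.1093837015e-31   # electron mass [kg]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]
c = 2.99792458e8         # speed of light [m/s]

def matched_beta(n_e_cm3, gamma):
    """Matched beta function in a uniform ion column: beta_m = sqrt(2*gamma)*c/omega_p."""
    omega_p = math.sqrt(n_e_cm3 * 1e6 * e**2 / (eps0 * m_e))
    return math.sqrt(2.0 * gamma) * c / omega_p

gamma = 10e9 * e / (m_e * c**2)      # Lorentz factor of a ~10 GeV electron beam (illustrative)
for n_e in (1e17, 1e16, 1e15):       # falling density ramp, as in tailored profiles
    print(f"n_e = {n_e:.0e} cm^-3 -> beta_matched ~ {matched_beta(n_e, gamma) * 100:.2f} cm")
```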

  11. Physics of Phase Space Matching for Staging Plasma and Traditional Accelerator Components Using Longitudinally Tailored Plasma Profiles.

    PubMed

    Xu, X L; Hua, J F; Wu, Y P; Zhang, C J; Li, F; Wan, Y; Pai, C-H; Lu, W; An, W; Yu, P; Hogan, M J; Joshi, C; Mori, W B

    2016-03-25

    Phase space matching between two plasma-based accelerator (PBA) stages and between a PBA and a traditional accelerator component is a critical issue for emittance preservation. The drastic differences of the transverse focusing strengths as the beam propagates between stages and components may lead to a catastrophic emittance growth even when there is a small energy spread. We propose using the linear focusing forces from nonlinear wakes in longitudinally tailored plasma density profiles to control phase space matching between sections with negligible emittance growth. Several profiles are considered and theoretical analysis and particle-in-cell simulations show how these structures may work in four different scenarios. Good agreement between theory and simulation is obtained, and it is found that the adiabatic approximation misses important physics even for long profiles.

  12. Physics of Phase Space Matching for Staging Plasma and Traditional Accelerator Components Using Longitudinally Tailored Plasma Profiles.

    PubMed

    Xu, X L; Hua, J F; Wu, Y P; Zhang, C J; Li, F; Wan, Y; Pai, C-H; Lu, W; An, W; Yu, P; Hogan, M J; Joshi, C; Mori, W B

    2016-03-25

    Phase space matching between two plasma-based accelerator (PBA) stages and between a PBA and a traditional accelerator component is a critical issue for emittance preservation. The drastic differences of the transverse focusing strengths as the beam propagates between stages and components may lead to a catastrophic emittance growth even when there is a small energy spread. We propose using the linear focusing forces from nonlinear wakes in longitudinally tailored plasma density profiles to control phase space matching between sections with negligible emittance growth. Several profiles are considered and theoretical analysis and particle-in-cell simulations show how these structures may work in four different scenarios. Good agreement between theory and simulation is obtained, and it is found that the adiabatic approximation misses important physics even for long profiles. PMID:27058082

  13. Conceptual design of a 10^13-W pulsed-power accelerator for megajoule-class dynamic-material-physics experiments

    NASA Astrophysics Data System (ADS)

    Stygar, W. A.; Reisman, D. B.; Stoltzfus, B. S.; Austin, K. N.; Ao, T.; Benage, J. F.; Breden, E. W.; Cooper, R. A.; Cuneo, M. E.; Davis, J.-P.; Ennis, J. B.; Gard, P. D.; Greiser, G. W.; Gruner, F. R.; Haill, T. A.; Hutsel, B. T.; Jones, P. A.; LeChien, K. R.; Leckbee, J. J.; Lewis, S. A.; Lucero, D. J.; McKee, G. R.; Moore, J. K.; Mulville, T. D.; Muron, D. J.; Root, S.; Savage, M. E.; Sceiford, M. E.; Spielman, R. B.; Waisman, E. M.; Wisher, M. L.

    2016-07-01

    We have developed a conceptual design of a next-generation pulsed-power accelerator that is optimized for megajoule-class dynamic-material-physics experiments. Sufficient electrical energy is delivered by the accelerator to a physics load to achieve—within centimeter-scale samples—material pressures as high as 1 TPa. The accelerator design is based on an architecture that is founded on three concepts: single-stage electrical-pulse compression, impedance matching, and transit-time-isolated drive circuits. The prime power source of the accelerator consists of 600 independent impedance-matched Marx generators. Each Marx comprises eight 5.8-GW bricks connected electrically in series, and generates a 100-ns 46-GW electrical-power pulse. A 450-ns-long water-insulated coaxial-transmission-line impedance transformer transports the power generated by each Marx to a system of twelve 2.5-m-radius water-insulated conical transmission lines. The conical lines are connected electrically in parallel at a 66-cm radius by a water-insulated 45-post sextuple-post-hole convolute. The convolute sums the electrical currents at the outputs of the conical lines, and delivers the combined current to a single solid-dielectric-insulated radial transmission line. The radial line in turn transmits the combined current to the load. Since much of the accelerator is water insulated, we refer to it as Neptune. Neptune is 40 m in diameter, stores 4.8 MJ of electrical energy in its Marx capacitors, and generates 28 TW of peak electrical power. Since the Marxes are transit-time isolated from each other for 900 ns, they can be triggered at different times to construct, over an interval as long as 1 μs, the specific load-current time history required for a given experiment. Neptune delivers 1 MJ and 20 MA in a 380-ns current pulse to an 18-mΩ load; hence Neptune is a megajoule-class 20-MA arbitrary waveform generator. Neptune will allow the international scientific community to conduct dynamic

  14. Studies of the chromatic properties and dynamic aperture of the BNL colliding-beam accelerator. [PATRICIA particle tracking code

    SciTech Connect

    Dell, G.F.

    1983-01-01

    The PATRICIA particle tracking program has been used to study chromatic effects in the Brookhaven CBA (Colliding Beam Accelerator). The short term behavior of particles in the CBA has been followed for particle histories of 300 turns. Contributions from magnet multipoles characteristic of superconducting magnets and closed orbit errors have been included in determining the dynamic aperture of the CBA for on- and off-momentum particles. The width of the third-integer stopband produced by the temperature dependence of magnetization-induced sextupoles in the CBA cable dipoles is evaluated for helium distribution systems having a periodicity of one and six. The stopband width at a tune of 68/3 is naturally zero for the system having a periodicity of six and is approximately 10^-4 for the system having a periodicity of one. Results from theory are compared with results obtained with PATRICIA; the results agree within a factor of slightly more than two.
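
    Short-term tracking of this kind can be mimicked with a toy one-turn map: a thin sextupole kick followed by a linear betatron rotation (essentially the Hénon map). The sketch below (Python; a generic illustration with an arbitrary tune and kick strength, not PATRICIA's lattice, multipole content, or algorithm) tracks particles for 300 turns and scans the initial amplitude as a crude dynamic-aperture probe.

```python
import math

def track(x0, px0, tune=0.3105, k2=1.0, turns=300, aperture=10.0):
    """Track (x, px) through a thin sextupole kick plus a linear one-turn rotation.
    Returns the number of turns survived (up to `turns`)."""
    mu = 2.0 * math.pi * tune
    c, s = math.cos(mu), math.sin(mu)
    x, px = x0, px0
    for n in range(turns):
        px += k2 * x * x                               # thin sextupole kick
        x, px = c * x + s * px, -s * x + c * px        # linear betatron rotation
        if abs(x) > aperture or abs(px) > aperture:
            return n + 1                               # particle lost on this turn
    return turns

# Scan initial amplitude as a crude 300-turn dynamic-aperture probe
for x0 in (0.05, 0.2, 0.4, 0.6, 0.8):
    print(f"x0 = {x0:.2f}: survived {track(x0, 0.0)} turns")
```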

  15. Effect of physical training in cool and hot environments on +Gz acceleration tolerance in women

    NASA Technical Reports Server (NTRS)

    Brock, P. J.; Sciaraffa, D.; Greenleaf, J. E.

    1982-01-01

    Acceleration tolerance, plasma volume, and maximal oxygen uptake were measured in 15 healthy women before and after submaximal isotonic exercise training periods in cool and hot environments. The women were divided on the basis of age, maximal oxygen uptake, and +Gz tolerance into three groups: a group that exercised in heat (40.6 C), a group that exercised at a lower temperature (18.7 C), and a sedentary control group that functioned in the cool environment. There was no significant change in the +Gz tolerance in any group after training, and terminal heart rates were similar within each group. It is concluded that induction of moderate acclimation responses without increases in sweat rate or resting plasma volume has no influence on +Gz acceleration tolerance in women.

  16. Plasma physics. Stochastic electron acceleration during spontaneous turbulent reconnection in a strong shock wave.

    PubMed

    Matsumoto, Y; Amano, T; Kato, T N; Hoshino, M

    2015-02-27

    Explosive phenomena such as supernova remnant shocks and solar flares have demonstrated evidence for the production of relativistic particles. Interest has therefore been renewed in collisionless shock waves and magnetic reconnection as a means to achieve such energies. Although ions can be energized during such phenomena, the relativistic energy of the electrons remains a puzzle for theory. We present supercomputer simulations showing that efficient electron energization can occur during turbulent magnetic reconnection arising from a strong collisionless shock. Upstream electrons undergo first-order Fermi acceleration by colliding with reconnection jets and magnetic islands, giving rise to a nonthermal relativistic population downstream. These results shed new light on magnetic reconnection as an agent of energy dissipation and particle acceleration in strong shock waves. PMID:25722406

  17. Conceptual designs of two petawatt-class pulsed-power accelerators for high-energy-density-physics experiments

    SciTech Connect

    Stygar, W. A.; Awe, T. J.; Bennett, N L; Breden, E. W.; Campbell, E. M.; Clark, R. E.; Cooper, R. A.; Cuneo, M. E.; Ennis, J. B.; Fehl, D. L.; Genoni, T. C.; Gomez, M. R.; Greiser, G. W.; Gruner, F. R.; Herrmann, M. C.; Hutsel, B. T.; Jennings, C. A.; Jobe, D. O.; Jones, B. M.; Jones, M. C.; Jones, P. A.; Knapp, P. F.; Lash, J. S.; LeChien, K. R.; Leckbee, J. J.; Leeper, R. J.; Lewis, S. A.; Long, F. W.; Lucero, D. J.; Madrid, E. A.; Martin, M. R.; Matzen, M. K.; Mazarakis, M. G.; McBride, R. D.; McKee, G. R.; Miller, C. L.; Moore, J. K.; Mostrom, C. B.; Mulville, T. D.; Peterson, K. J.; Porter, J. L.; Reisman, D. B.; Rochau, G. A.; Rochau, G. E.; Rose, D. V.; Savage, M. E.; Sceiford, M. E.; Schmit, P. F.; Schneider, R. F.; Schwarz, J.; Sefkow, A. B.; Sinars, D. B.; Slutz, S. A.; Spielman, R. B.; Stoltzfus, B. S.; Thoma, C.; Vesey, R. A.; Wakeland, P. E.; Welch, D. R.; Wisher, M. L.; Woodworth, J. R.; Bailey, J. E.; Rovang, D. C.

    2015-11-30

    Here, we have developed conceptual designs of two petawatt-class pulsed-power accelerators: Z 300 and Z 800. The designs are based on an accelerator architecture that is founded on two concepts: single-stage electrical-pulse compression and impedance matching [Phys. Rev. ST Accel. Beams 10, 030401 (2007)]. The prime power source of each machine consists of 90 linear-transformer-driver (LTD) modules. Each module comprises LTD cavities connected electrically in series, each of which is powered by 5-GW LTD bricks connected electrically in parallel. (A brick comprises a single switch and two capacitors in series.) Six water-insulated radial-transmission-line impedance transformers transport the power generated by the modules to a six-level vacuum-insulator stack. The stack serves as the accelerator’s water-vacuum interface. The stack is connected to six conical outer magnetically insulated vacuum transmission lines (MITLs), which are joined in parallel at a 10-cm radius by a triple-post-hole vacuum convolute. The convolute sums the electrical currents at the outputs of the six outer MITLs, and delivers the combined current to a single short inner MITL. The inner MITL transmits the combined current to the accelerator’s physics-package load. Z 300 is 35 m in diameter and stores 48 MJ of electrical energy in its LTD capacitors. The accelerator generates 320 TW of electrical power at the output of the LTD system, and delivers 48 MA in 154 ns to a magnetized-liner inertial-fusion (MagLIF) target [Phys. Plasmas 17, 056303 (2010)]. The peak electrical power at the MagLIF target is 870 TW, which is the highest power throughout the accelerator. Power amplification is accomplished by the centrally located vacuum section, which serves as an intermediate inductive-energy-storage device. The principal goal of Z 300 is to achieve thermonuclear ignition; i.e., a fusion yield that exceeds the energy transmitted by the accelerator to the liner. 2D magnetohydrodynamic (MHD

  18. Conceptual designs of two petawatt-class pulsed-power accelerators for high-energy-density-physics experiments

    DOE PAGES

    Stygar, W. A.; Awe, T. J.; Bennett, N L; Breden, E. W.; Campbell, E. M.; Clark, R. E.; Cooper, R. A.; Cuneo, M. E.; Ennis, J. B.; Fehl, D. L.; et al

    2015-11-30

    Here, we have developed conceptual designs of two petawatt-class pulsed-power accelerators: Z 300 and Z 800. The designs are based on an accelerator architecture that is founded on two concepts: single-stage electrical-pulse compression and impedance matching [Phys. Rev. ST Accel. Beams 10, 030401 (2007)]. The prime power source of each machine consists of 90 linear-transformer-driver (LTD) modules. Each module comprises LTD cavities connected electrically in series, each of which is powered by 5-GW LTD bricks connected electrically in parallel. (A brick comprises a single switch and two capacitors in series.) Six water-insulated radial-transmission-line impedance transformers transport the power generated by the modules to a six-level vacuum-insulator stack. The stack serves as the accelerator’s water-vacuum interface. The stack is connected to six conical outer magnetically insulated vacuum transmission lines (MITLs), which are joined in parallel at a 10-cm radius by a triple-post-hole vacuum convolute. The convolute sums the electrical currents at the outputs of the six outer MITLs, and delivers the combined current to a single short inner MITL. The inner MITL transmits the combined current to the accelerator’s physics-package load. Z 300 is 35 m in diameter and stores 48 MJ of electrical energy in its LTD capacitors. The accelerator generates 320 TW of electrical power at the output of the LTD system, and delivers 48 MA in 154 ns to a magnetized-liner inertial-fusion (MagLIF) target [Phys. Plasmas 17, 056303 (2010)]. The peak electrical power at the MagLIF target is 870 TW, which is the highest power throughout the accelerator. Power amplification is accomplished by the centrally located vacuum section, which serves as an intermediate inductive-energy-storage device. The principal goal of Z 300 is to achieve thermonuclear ignition; i.e., a fusion yield that exceeds the energy transmitted by the accelerator to the liner. 2D magnetohydrodynamic (MHD

  19. "Friluftsliv": A Contribution to Equity and Democracy in Swedish Physical Education? An Analysis of Codes in Swedish Physical Education Curricula

    ERIC Educational Resources Information Center

    Backman, Erik

    2011-01-01

    During the last decade, expanding research investigating the school subject Physical Education (PE) indicates a promotion of inequalities regarding which children benefit from PE teaching. Outdoor education and its Scandinavian equivalent "friluftsliv," is a part of the PE curriculum in many countries, and these practices have been claimed to have…

  20. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; Brandt, Steven R.; Ciznicki, Milosz; Kierzynka, Michal; Löffler, Frank; Schnetter, Erik; Tao, Jian

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.
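
    As a toy analogue of the kind of kernel such a framework generates from a high-level stencil specification, the snippet below (Python; hand-written, not Chemora output) applies a fourth-order central difference to a periodic 1D grid and checks it against the exact derivative.

```python
import numpy as np

def d1_fourth_order(f, dx):
    """Fourth-order central difference for df/dx on a periodic 1D grid:
    f'(x_i) ~ (-f[i+2] + 8 f[i+1] - 8 f[i-1] + f[i-2]) / (12 dx)."""
    return (-np.roll(f, -2) + 8.0 * np.roll(f, -1)
            - 8.0 * np.roll(f, 1) + np.roll(f, 2)) / (12.0 * dx)

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
dx = x[1] - x[0]
err = np.max(np.abs(d1_fourth_order(np.sin(x), dx) - np.cos(x)))
print(f"max error vs. exact derivative: {err:.2e}")   # should scale as dx^4
```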

  1. Hadron Physics at the Charm and Bottom Thresholds and Other Novel QCD Physics Topics at the NICA Accelerator Facility

    SciTech Connect

    Brodsky, Stanley J.; /SLAC

    2012-06-20

    The NICA collider project at the Joint Institute for Nuclear Research in Dubna will have the capability of colliding protons, polarized deuterons, and nuclei at an effective nucleon-nucleon center-of-mass energy in the range √s_NN = 4 to 11 GeV. I briefly survey a number of novel hadron physics processes which can be investigated at the NICA collider. The topics include the formation of exotic heavy quark resonances near the charm and bottom thresholds; intrinsic strangeness, charm, and bottom phenomena; hidden-color degrees of freedom in nuclei; color transparency; single-spin asymmetries; the RHIC baryon anomaly; and non-universal antishadowing.

  2. Opacity calculations for ICF target physics using the ABAKO/RAPCAL code

    NASA Astrophysics Data System (ADS)

    Mínguez, E.; Florido, R.; Rodriguez, R.; Gil, J. M.; Rubiano, J. G.; Mendoz, M. A.; Suárez, D.; Martel, P.

    2010-08-01

    In this work we present a set of atomic models (called ABAKO/RAPCAL) and its validation against experiments and other NLTE models. We consider that our code permits the diagnosis and determination of opacity data. A review of calculations and simulations for the validation of this set is presented. As an interesting product of these calculations, we can obtain accurate analytical formulas for Rosseland and Planck mean opacities. These formulas are useful as input data in hydrodynamic simulations of targets, where the computational cost is so high that in-line computation with sophisticated opacity codes is prohibitive. Analytical opacities for several Z-plasmas are presented in this work.
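
    The two mean opacities mentioned are frequency averages of the monochromatic opacity, weighted by the Planck function (Planck mean) or, harmonically, by the temperature derivative of the Planck function (Rosseland mean). The sketch below (Python; a generic numerical evaluation with an invented power-law κ_ν, not the ABAKO/RAPCAL analytical fits) shows how both means are formed from tabulated data.

```python
import numpy as np

# Physical constants (SI)
h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(nu, T):
    """Planck spectral radiance B_nu(T)."""
    x = h * nu / (kB * T)
    return 2.0 * h * nu**3 / c**2 / np.expm1(x)

def dplanck_dT(nu, T):
    """Temperature derivative of B_nu(T)."""
    x = h * nu / (kB * T)
    return 2.0 * h**2 * nu**4 / (c**2 * kB * T**2) * np.exp(x) / np.expm1(x)**2

def mean_opacities(kappa_nu, nu, T):
    """Planck and Rosseland means of a monochromatic opacity kappa_nu(nu)."""
    B, dB = planck(nu, T), dplanck_dT(nu, T)
    kappa_P = np.trapz(kappa_nu * B, nu) / np.trapz(B, nu)
    kappa_R = np.trapz(dB, nu) / np.trapz(dB / kappa_nu, nu)
    return kappa_P, kappa_R

T = 1.0e6                                     # temperature [K], illustrative
nu = np.linspace(1e15, 5e17, 20000)           # frequency grid [Hz]
kappa_nu = 1.0e3 * (nu / 1e16) ** -3          # invented power-law opacity [cm^2/g]
print(mean_opacities(kappa_nu, nu, T))
```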

  3. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    NASA Astrophysics Data System (ADS)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergi; Cela, José M.; Castejón, Francisco

    2015-09-01

    In this work, we will explore the feasibility of porting a particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a system-on-chip Samsung Exynos 5 with an integrated GPU. It is the first prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages.

  4. The High-Luminosity upgrade of the LHC: Physics and Technology Challenges for the Accelerator and the Experiments

    NASA Astrophysics Data System (ADS)

    Schmidt, Burkhard

    2016-04-01

    In the second phase of the LHC physics program, the accelerator will provide an additional integrated luminosity of about 2500/fb over 10 years of operation to the general purpose detectors ATLAS and CMS. This will substantially enlarge the mass reach in the search for new particles and will also greatly extend the potential to study the properties of the Higgs boson discovered at the LHC in 2012. In order to meet the experimental challenges of unprecedented pp luminosity, the experiments will need to address the aging of the present detectors and to improve the ability to isolate and precisely measure the products of the most interesting collisions. The lectures gave an overview of the physics motivation and described the conceptual designs and the expected performance of the upgrades of the four major experiments, ALICE, ATLAS, CMS and LHCb, along with the plans to develop the appropriate experimental techniques and a brief overview of the accelerator upgrade. Only some key points of the upgrade program of the four major experiments are discussed in this report; more information can be found in the references given at the end.

  5. Introduction to high-energy physics and the Stanford Linear Accelerator Center (SLAC)

    SciTech Connect

    Clearwater, S.

    1983-03-01

    The type of research done at SLAC is called High Energy Physics, or Particle Physics. This is basic research in the study of fundamental particles and their interactions. Basic research is research for the sake of learning something; practical applications cannot be predicted, and the understanding is an end in itself. Interactions describe how particles behave toward one another: for example, some particles attract one another, while others repel, and still others ignore each other. Interactions of elementary particles are studied to reveal the underlying structure of the universe.

  6. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    SciTech Connect

    Spentzouris, Panagiotis; Cary, John; Mcinnes, Lois Curfman; Mori, Warren; Ng, Cho; Ng, Esmond; Ryne, Robert; /LBL, Berkeley

    2008-07-01

    The design and performance optimization of particle accelerators is essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC1 Accelerator Science and Technology project, the SciDAC2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multi-physics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  7. Black hole physics. Black hole lightning due to particle acceleration at subhorizon scales.

    PubMed

    Aleksić, J; Ansoldi, S; Antonelli, L A; Antoranz, P; Babic, A; Bangale, P; Barrio, J A; Becerra González, J; Bednarek, W; Bernardini, E; Biasuzzi, B; Biland, A; Blanch, O; Bonnefoy, S; Bonnoli, G; Borracci, F; Bretz, T; Carmona, E; Carosi, A; Colin, P; Colombo, E; Contreras, J L; Cortina, J; Covino, S; Da Vela, P; Dazzi, F; De Angelis, A; De Caneva, G; De Lotto, B; de Oña Wilhelmi, E; Delgado Mendez, C; Dominis Prester, D; Dorner, D; Doro, M; Einecke, S; Eisenacher, D; Elsaesser, D; Fonseca, M V; Font, L; Frantzen, K; Fruck, C; Galindo, D; García López, R J; Garczarczyk, M; Garrido Terrats, D; Gaug, M; Godinović, N; González Muñoz, A; Gozzini, S R; Hadasch, D; Hanabata, Y; Hayashida, M; Herrera, J; Hildebrand, D; Hose, J; Hrupec, D; Idec, W; Kadenius, V; Kellermann, H; Kodani, K; Konno, Y; Krause, J; Kubo, H; Kushida, J; La Barbera, A; Lelas, D; Lewandowska, N; Lindfors, E; Lombardi, S; Longo, F; López, M; López-Coto, R; López-Oramas, A; Lorenz, E; Lozano, I; Makariev, M; Mallot, K; Maneva, G; Mankuzhiyil, N; Mannheim, K; Maraschi, L; Marcote, B; Mariotti, M; Martínez, M; Mazin, D; Menzel, U; Miranda, J M; Mirzoyan, R; Moralejo, A; Munar-Adrover, P; Nakajima, D; Niedzwiecki, A; Nilsson, K; Nishijima, K; Noda, K; Orito, R; Overkemping, A; Paiano, S; Palatiello, M; Paneque, D; Paoletti, R; Paredes, J M; Paredes-Fortuny, X; Persic, M; Poutanen, J; Prada Moroni, P G; Prandini, E; Puljak, I; Reinthal, R; Rhode, W; Ribó, M; Rico, J; Rodriguez Garcia, J; Rügamer, S; Saito, T; Saito, K; Satalecka, K; Scalzotto, V; Scapin, V; Schultz, C; Schweizer, T; Shore, S N; Sillanpää, A; Sitarek, J; Snidaric, I; Sobczynska, D; Spanier, F; Stamatescu, V; Stamerra, A; Steinbring, T; Storz, J; Strzys, M; Takalo, L; Takami, H; Tavecchio, F; Temnikov, P; Terzić, T; Tescaro, D; Teshima, M; Thaele, J; Tibolla, O; Torres, D F; Toyama, T; Treves, A; Uellenbeck, M; Vogler, P; Zanin, R; Kadler, M; Schulz, R; Ros, E; Bach, U; Krauß, F; Wilms, J

    2014-11-28

    Supermassive black holes with masses of millions to billions of solar masses are commonly found in the centers of galaxies. Astronomers seek to image jet formation using radio interferometry but still suffer from insufficient angular resolution. An alternative method to resolve small structures is to measure the time variability of their emission. Here we report on gamma-ray observations of the radio galaxy IC 310 obtained with the MAGIC (Major Atmospheric Gamma-ray Imaging Cherenkov) telescopes, revealing variability with doubling time scales faster than 4.8 min. Causality constrains the size of the emission region to be smaller than 20% of the gravitational radius of its central black hole. We suggest that the emission is associated with pulsar-like particle acceleration by the electric field across a magnetospheric gap at the base of the radio jet.
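
    The quoted 20% limit follows from a light-crossing-time argument. Taking a black-hole mass of order M \approx 3\times10^{8} M_\odot for IC 310 (an assumed value, not stated in this abstract), the gravitational radius and its light-crossing time are roughly

      r_g = \frac{GM}{c^2} \approx 4.4\times10^{11}\ \mathrm{m}, \qquad \frac{r_g}{c} \approx 25\ \mathrm{min},

    so a doubling time of 4.8 min confines the emitting region to about 4.8/25 \approx 0.2 of the gravitational radius, which is the constraint stated above.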

  8. Black hole physics. Black hole lightning due to particle acceleration at subhorizon scales.

    PubMed

    Aleksić, J; Ansoldi, S; Antonelli, L A; Antoranz, P; Babic, A; Bangale, P; Barrio, J A; Becerra González, J; Bednarek, W; Bernardini, E; Biasuzzi, B; Biland, A; Blanch, O; Bonnefoy, S; Bonnoli, G; Borracci, F; Bretz, T; Carmona, E; Carosi, A; Colin, P; Colombo, E; Contreras, J L; Cortina, J; Covino, S; Da Vela, P; Dazzi, F; De Angelis, A; De Caneva, G; De Lotto, B; de Oña Wilhelmi, E; Delgado Mendez, C; Dominis Prester, D; Dorner, D; Doro, M; Einecke, S; Eisenacher, D; Elsaesser, D; Fonseca, M V; Font, L; Frantzen, K; Fruck, C; Galindo, D; García López, R J; Garczarczyk, M; Garrido Terrats, D; Gaug, M; Godinović, N; González Muñoz, A; Gozzini, S R; Hadasch, D; Hanabata, Y; Hayashida, M; Herrera, J; Hildebrand, D; Hose, J; Hrupec, D; Idec, W; Kadenius, V; Kellermann, H; Kodani, K; Konno, Y; Krause, J; Kubo, H; Kushida, J; La Barbera, A; Lelas, D; Lewandowska, N; Lindfors, E; Lombardi, S; Longo, F; López, M; López-Coto, R; López-Oramas, A; Lorenz, E; Lozano, I; Makariev, M; Mallot, K; Maneva, G; Mankuzhiyil, N; Mannheim, K; Maraschi, L; Marcote, B; Mariotti, M; Martínez, M; Mazin, D; Menzel, U; Miranda, J M; Mirzoyan, R; Moralejo, A; Munar-Adrover, P; Nakajima, D; Niedzwiecki, A; Nilsson, K; Nishijima, K; Noda, K; Orito, R; Overkemping, A; Paiano, S; Palatiello, M; Paneque, D; Paoletti, R; Paredes, J M; Paredes-Fortuny, X; Persic, M; Poutanen, J; Prada Moroni, P G; Prandini, E; Puljak, I; Reinthal, R; Rhode, W; Ribó, M; Rico, J; Rodriguez Garcia, J; Rügamer, S; Saito, T; Saito, K; Satalecka, K; Scalzotto, V; Scapin, V; Schultz, C; Schweizer, T; Shore, S N; Sillanpää, A; Sitarek, J; Snidaric, I; Sobczynska, D; Spanier, F; Stamatescu, V; Stamerra, A; Steinbring, T; Storz, J; Strzys, M; Takalo, L; Takami, H; Tavecchio, F; Temnikov, P; Terzić, T; Tescaro, D; Teshima, M; Thaele, J; Tibolla, O; Torres, D F; Toyama, T; Treves, A; Uellenbeck, M; Vogler, P; Zanin, R; Kadler, M; Schulz, R; Ros, E; Bach, U; Krauß, F; Wilms, J

    2014-11-28

    Supermassive black holes with masses of millions to billions of solar masses are commonly found in the centers of galaxies. Astronomers seek to image jet formation using radio interferometry but still suffer from insufficient angular resolution. An alternative method to resolve small structures is to measure the time variability of their emission. Here we report on gamma-ray observations of the radio galaxy IC 310 obtained with the MAGIC (Major Atmospheric Gamma-ray Imaging Cherenkov) telescopes, revealing variability with doubling time scales faster than 4.8 min. Causality constrains the size of the emission region to be smaller than 20% of the gravitational radius of its central black hole. We suggest that the emission is associated with pulsar-like particle acceleration by the electric field across a magnetospheric gap at the base of the radio jet. PMID:25378461

  9. Development Status of the PEBBLES Code for Pebble Mechanics: Improved Physical Models and Speed-up

    SciTech Connect

    Joshua J. Cogliati; Abderrafi M. Ougouag

    2009-09-01

    PEBBLES is a code for simulating the motion of all the pebbles in a pebble bed reactor. Since pebble bed reactors are packed randomly and not precisely placed, the location of the fuel elements in the reactor is not deterministically known. Instead, when determining operating parameters, the motion of the pebbles can be simulated and stochastic locations can be found. The PEBBLES code can output information relevant for other simulations of pebble bed reactors, such as the positions of the pebbles in the reactor, the packing fraction change in an earthquake, and the velocity profiles created by recirculation. The goal for this level three milestone was to speed up the PEBBLES code through implementation on massively parallel computers. Work on this goal has resulted in speeding up the single-processor version and in the creation of a new parallel version of PEBBLES. Both the single-processor version and the parallel running capability of the PEBBLES code have improved since the start of the fiscal year. The hybrid MPI/OpenMP version of PEBBLES was created this year to run on the increasingly common cluster hardware profile that combines nodes with multiple processors sharing memory and a cluster of nodes networked together. The OpenMP portions use the Open Multi-Processing shared-memory parallel processing model to split the task across processors in a single node that shares memory. The Message Passing Interface (MPI) portion uses messages to communicate between different nodes over a network. The following are wall-clock speed-ups for simulating an NGNP-600 sized reactor. The single-processor version runs 1.5 times faster than the single-processor version at the beginning of the fiscal year. This speedup is primarily due to the improved static friction model described in the report. When running on 64 processors, the new MPI/OpenMP hybrid version has a wall-clock speed-up of 22 times compared to the current single-processor version. When using 88 processors, a
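
    As a concrete illustration of the hybrid model described above (not of the PEBBLES source itself; all names and values below are invented), the following C sketch distributes pebbles across MPI ranks, lets each rank update its share with an OpenMP parallel loop over the shared memory of its node, and then combines a global quantity with an MPI reduction. It can be built with an MPI compiler wrapper, e.g. mpicc -fopenmp.

      #include <mpi.h>
      #include <omp.h>
      #include <stdio.h>

      #define N_LOCAL 100000          /* pebbles owned by this rank (toy value) */

      int main(int argc, char **argv) {
          MPI_Init(&argc, &argv);                 /* inter-node level: MPI        */
          int rank, nprocs;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

          static double z[N_LOCAL];               /* pebble heights (toy data)    */
          for (int i = 0; i < N_LOCAL; ++i) z[i] = rank + (double)i / N_LOCAL;

          double local_sum = 0.0;
          /* intra-node level: OpenMP threads share this rank's memory */
          #pragma omp parallel for reduction(+:local_sum)
          for (int i = 0; i < N_LOCAL; ++i) {
              z[i] -= 0.001;                      /* trivial stand-in for a motion update */
              local_sum += z[i];
          }

          double global_sum = 0.0;                /* combine results across nodes */
          MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
          if (rank == 0)
              printf("mean pebble height = %f\n",
                     global_sum / ((double)N_LOCAL * nprocs));

          MPI_Finalize();
          return 0;
      }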

  10. Development Status of the PEBBLES Code for Pebble Mechanics: Improved Physical Models and Speed-up

    SciTech Connect

    Joshua J. Cogliati; Abderrafi M. Ougouag

    2009-12-01

    PEBBLES is a code for simulating the motion of all the pebbles in a pebble bed reactor. Since pebble bed reactors are packed randomly and not precisely placed, the location of the fuel elements in the reactor is not deterministically known. Instead, when determining operating parameters, the motion of the pebbles can be simulated and stochastic locations can be found. The PEBBLES code can output information relevant for other simulations of pebble bed reactors, such as the positions of the pebbles in the reactor, the packing fraction change in an earthquake, and the velocity profiles created by recirculation. The goal for this level three milestone was to speed up the PEBBLES code through implementation on massively parallel computers. Work on this goal has resulted in speeding up the single-processor version and in the creation of a new parallel version of PEBBLES. Both the single-processor version and the parallel running capability of the PEBBLES code have improved since the start of the fiscal year. The hybrid MPI/OpenMP version of PEBBLES was created this year to run on the increasingly common cluster hardware profile that combines nodes with multiple processors sharing memory and a cluster of nodes networked together. The OpenMP portions use the Open Multi-Processing shared-memory parallel processing model to split the task across processors in a single node that shares memory. The Message Passing Interface (MPI) portion uses messages to communicate between different nodes over a network. The following are wall-clock speed-ups for simulating an NGNP-600 sized reactor. The single-processor version runs 1.5 times faster than the single-processor version at the beginning of the fiscal year. This speedup is primarily due to the improved static friction model described in the report. When running on 64 processors, the new MPI/OpenMP hybrid version has a wall-clock speed-up of 22 times compared to the current single-processor version. When using 88 processors, a

  11. Monte Carlo simulation code in optimisation of the IntraOperative Radiation Therapy treatment with a mobile dedicated accelerator

    NASA Astrophysics Data System (ADS)

    Catalano, M.; Agosteo, S.; Moretti, R.; Andreoli, S.

    2007-06-01

    The principle of optimisation of the EURATOM 97/43 directive foresees that for all medical exposure of individuals for radiotherapeutic purposes, exposures of target volumes shall be individually planned, taking into account that doses to non-target volumes and tissues shall be as low as reasonably achievable and consistent with the intended radiotherapeutic purpose of the exposure. Treatment optimisation has to be carried out especially in non-conventional radiotherapeutic procedures, such as Intra-Operative Radiation Therapy (IORT) with a mobile dedicated LINear ACcelerator (LINAC), which does not make use of a Treatment Planning System. IORT is carried out with electron beams and refers to the application of radiation during a surgical intervention, after the removal of a neoplastic mass; it can also be used as a one-time, stand-alone treatment in initial cancer of small volume. IORT foresees a single session and a single beam only; therefore it is necessary to use protection systems (disks) temporarily positioned between the target volume and the underlying tissues, along the beam axis. A single high-Z shielding disk is used to stop the electrons of the beam at a certain depth and protect the tissues located below. Electron backscatter produces an enhancement in the dose above the disk, and this can be reduced if a second low-Z disk is placed above the first. Therefore two protection disks are used in clinical application. On the other hand, the dose enhancement at the interface of the high-Z disk and the target, due to backscattered radiation, can usefully be exploited to improve the uniformity of treatment of thicker target volumes. Furthermore, the dose above disks of different Z material has to be evaluated in order to study the optimal combination of shielding disks that both protects the underlying tissues and yields the most uniform dose distribution in target volumes of different thicknesses. The dose enhancement can be evaluated using the electron

  12. Accelerated Integrated Science Sequence (AISS): An Introductory Biology, Chemistry, and Physics Course

    ERIC Educational Resources Information Center

    Purvis-Roberts, Kathleen L.; Edwalds-Gilbert, Gretchen; Landsberg, Adam S.; Copp, Newton; Ulsh, Lisa; Drew, David E.

    2009-01-01

    A new interdisciplinary, introductory science course was offered for the first time during the 2007-2008 school year. The purpose of the course is to introduce students to the idea of working at the intersections of biology, chemistry, and physics and to recognize interconnections between the disciplines. Interdisciplinary laboratories are a key…

  13. Accelerating Translation of Physical Activity and Cancer Survivorship Research into Practice: Recommendations for a More Integrated and Collaborative Approach

    PubMed Central

    Phillips, Siobhan M.; Alfano, Catherine M.; Perna, Frank M.; Glasgow, Russell E.

    2015-01-01

    Physical activity has been deemed safe and effective in reducing many negative side effects of treatment for cancer survivors and promoting better overall health. However, most of this research has focused on highly controlled randomized trials and little of this research has been translated into care or policy for survivors. The purpose of the present paper is to present a research agenda for the field to accelerate the dissemination and implementation of empirically-supported physical activity interventions into care. We provide rationale for the role of basic, behavioral, clinical implementation and population scientists in moving this science forward and call for a more coordinated effort across different phases of research. In addition, we provide key strategies and examples for ongoing and future studies using the RE-AIM (Reach, Efficacy/Effectiveness, Adoption, Implementation and Maintenance) framework and pose recommendations for collaborations between researchers and stakeholders to enhance the integration of this research into policy and practice. Overall, we recommend that physical activity and cancer survivorship research employ additional study designs, include relevant stakeholders and be more collaborative, integrated, contextual, and representative in terms of both setting and participants. PMID:24599577

  14. Physical basis for the ofloxacin-induced acceleration of lysozyme aggregation and polymorphism in amyloid fibrils.

    PubMed

    Muthu, Shivani A; Mothi, Nivin; Shiriskar, Sonali M; Pissurlenkar, Raghuvir R S; Kumar, Anil; Ahmad, Basir

    2016-02-15

    Aggregation of globular proteins is an intractable problem that generally originates from partially folded structures. The partially folded structures first collapse non-specifically and then reorganize into amyloid-like fibrils via one or more oligomeric intermediates. The fibrils and their on/off-pathway intermediates may be toxic to cells and form toxic deposits in different human organs. To understand the origins of the aggregation diseases, it is vital to study in detail the conformational properties of the amyloidogenic partially folded structures of the protein. In this work, we examined the effects of ofloxacin, a synthetic fluoroquinolone compound, on the fibrillar aggregation of hen egg-white lysozyme. Using two aggregation conditions (4 M GuHCl at pH 7.0 and 37 °C; and pH 1.7 at 65 °C) and a number of biophysical techniques, we illustrate that ofloxacin accelerates fibril formation of lysozyme by binding to partially folded structures and modulating their secondary and tertiary structures and surface hydrophobicity. We also demonstrate that ofloxacin-induced fibrils show polymorphism in morphology, tinctorial properties and hydrophobic surface exposure. This study will assist in understanding the determinants of fibril formation, and it also indicates that caution should be exercised in the use of ofloxacin in patients susceptible to various aggregation diseases.

  15. Unravelling the hidden DNA structural/physical code provides novel insights on promoter location.

    PubMed

    Durán, Elisa; Djebali, Sarah; González, Santi; Flores, Oscar; Mercader, Josep Maria; Guigó, Roderic; Torrents, David; Soler-López, Montserrat; Orozco, Modesto

    2013-08-01

    Although protein recognition of DNA motifs in promoter regions has been traditionally considered a critical regulatory element in transcription, the location of promoters, and in particular of transcription start sites (TSSs), still remains a challenge. Here we perform a comprehensive analysis of putative core promoter sequences relative to non-annotated predicted TSSs along the human genome, which were defined by distinct DNA physical properties implemented in our ProStar computational algorithm. A representative sampling of predicted regions was subjected to extensive experimental validation and analyses. Interestingly, the vast majority proved to be transcriptionally active despite the lack of specific sequence motifs, indicating that physical signaling is indeed able to detect promoter activity beyond conventional TSS prediction methods. Furthermore, highly active regions displayed typical chromatin features associated with promoters of housekeeping genes. Our results make it possible to redefine promoter signatures and to analyze the diversity, evolutionary conservation and dynamic regulation of human core promoters at large scale. Moreover, the present study strongly supports the hypothesis of an ancient regulatory mechanism encoded by the intrinsic physical properties of the DNA that may contribute to the complexity of transcription regulation in the human genome. PMID:23761436

  16. Basic physical and chemical information needed for development of Monte Carlo codes

    SciTech Connect

    Inokuti, M.

    1993-08-01

    It is important to view track structure analysis as an application of a branch of theoretical physics (i.e., statistical physics and physical kinetics in the language of the Landau school). Monte Carlo methods and transport equation methods represent two major approaches. In either approach, it is of paramount importance to use as input the cross section data that best represent the elementary microscopic processes. Transport analysis based on unrealistic input data must be viewed with caution, because results can be misleading. Work toward establishing the cross section data, which demands a wide scope of knowledge and expertise, is being carried out through extensive international collaborations. In track structure analysis for radiation biology, the need for cross sections for the interactions of electrons with DNA and neighboring protein molecules seems to be especially urgent. Finally, it is important to interpret results of Monte Carlo calculations fully and adequately. To this end, workers should document input data as thoroughly as possible and report their results in detail in many ways. Workers in analytic transport theory are then likely to contribute to the interpretation of the results.

  17. Accelerator beam data commissioning equipment and procedures: Report of the TG-106 of the Therapy Physics Committee of the AAPM

    SciTech Connect

    Das, Indra J.; Cheng, C.-W.; Watts, Ronald J.; Ahnesjoe, Anders; Gibbons, John; Li, X. Allen; Lowenstein, Jessica; Mitra, Raj K.; Simon, William E.; Zhu, Timothy C.

    2008-09-15

    For commissioning a linear accelerator for clinical use, medical physicists are faced with many challenges including the need for precision, a variety of testing methods, data validation, the lack of standards, and time constraints. Since commissioning beam data are treated as a reference and ultimately used by treatment planning systems, it is vitally important that the collected data are of the highest quality to avoid dosimetric and patient treatment errors that may subsequently lead to a poor radiation outcome. Beam data commissioning should be performed with appropriate knowledge and proper tools and should be independent of the person collecting the data. To achieve this goal, Task Group 106 (TG-106) of the Therapy Physics Committee of the American Association of Physicists in Medicine was formed to review the practical aspects as well as the physics of linear accelerator commissioning. The report provides guidelines and recommendations on the proper selection of phantoms and detectors, setting up of a phantom for data acquisition (both scanning and non-scanning data), procedures for acquiring specific photon and electron beam parameters and methods to reduce measurement errors (<1%), beam data processing and detector size convolution for accurate profiles. The TG-106 also provides a brief discussion on the emerging trend in Monte Carlo simulation techniques in photon and electron beam commissioning. The procedures described in this report should assist a qualified medical physicist in either measuring a complete set of beam data, or in verifying a subset of data before initial use or for periodic quality assurance measurements. By combining practical experience with theoretical discussion, this document sets a new standard for beam data commissioning.

  18. The Development of Biomedical Applications of Nuclear Physics Detector Technology at the Thomas Jefferson National Accelerator Facility

    NASA Astrophysics Data System (ADS)

    Weisenberger, Andrew

    2003-10-01

    The Southeastern Universities Research Association (SURA) operates the Thomas Jefferson National Accelerator Facility (Jefferson Lab) for the United States Department of Energy. As a user facility for physicists worldwide, its primary mission is to conduct basic nuclear physics research of the atom's nucleus at the quark level. Within the Jefferson Lab Physics Division is the Jefferson Lab Detector Group, which was formed to support the design and construction of new detector systems during the construction phase of the major detector systems at Jefferson Lab and to act as technical consultants for the lab scientists and users. The Jefferson Lab Detector Group, headed by Dr. Stan Majewski, has technical capabilities in the development and use of radiation detection systems. These capabilities include expertise in nuclear particle detection through the use of gas detectors, scintillation and light guide techniques, standard and position-sensitive photomultiplier tubes (PSPMTs), fast analog readout electronics and data acquisition, and on-line image formation and analysis. In addition to providing nuclear particle detector support to the lab, the group has for several years (starting in 1996) applied these technologies to the development of novel high-resolution gamma-ray imaging systems for biomedical applications and x-ray imaging techniques. The Detector Group has developed detector systems for breast cancer detection, brain cancer therapy and small animal imaging to support biomedical research. An overview will be presented of how this small nuclear physics detector research group, by teaming with universities, medical facilities, industry and other national laboratories, applies technology originating from basic nuclear physics research to biomedical applications.

  19. Physics of the Dayside Magnetosphere: New Results From a Hybrid Kinetic Code

    NASA Technical Reports Server (NTRS)

    Siebeck, D. G.; Omidi, N.

    2007-01-01

    We use a global hybrid code kinetic model to demonstrate how kinetic processes at the bow shock and within the foreshock can dramatically modify the solar wind just before its interaction with the magnetosphere. During periods of steady radial interplanetary magnetic field (IMF) orientation, the foreshock fills with a diffuse population of suprathermal ions. The ions generate cavities marked by enhanced temperatures, depressed densities, and diminished magnetic field strengths that convect antisunward into the bow shock with the solar wind flow. Tangential discontinuities marked by inward-pointing electric fields and normals transverse to the Sun-Earth line generate hot flow anomalies marked by hot tenuous plasmas bounded by outward propagating shocks. When the motional electric field in the magnetosheath points inward towards the Earth, a solitary bow shock appears. For typical IMF orientations, the solitary shocks should appear at poorly sampled high latitudes, but for strongly northward or southward IMF orientations the solitary shocks should appear on the flanks of the magnetosphere. Although quasi-perpendicular, solitary shocks should be marked by turbulent magnetosheath flows, often directed towards the Sun-Earth line, and abrupt spike-like enhancements in the density and magnetic field strength at the shock. Finally, we show how flux transfer events generated between parallel subsolar reconnection lines are destroyed upon encountering the magnetopause at latitudes above the cusp.

  20. Physical and mechanical metallurgy of high purity Nb for accelerator cavities

    NASA Astrophysics Data System (ADS)

    Bieler, T. R.; Wright, N. T.; Pourboghrat, F.; Compton, C.; Hartwig, K. T.; Baars, D.; Zamiri, A.; Chandrasekaran, S.; Darbandi, P.; Jiang, H.; Skoug, E.; Balachandran, S.; Ice, G. E.; Liu, W.

    2010-03-01

    In the past decade, high Q values have been achieved in high purity Nb superconducting radio frequency (SRF) cavities. Fundamental understanding of the physical metallurgy of Nb that enables these achievements is beginning to reveal what challenges remain to establish reproducible and cost-effective production of high performance SRF cavities. Recent studies of dislocation substructure development and effects of recrystallization arising from welding and heat treatments and their correlations with cavity performance are considered. With better fundamental understanding of the effects of dislocation substructure evolution and recrystallization on electron and phonon conduction, as well as the interior and surface states, it will be possible to design optimal processing paths for cost-effective performance using approaches such as hydroforming, which minimizes or eliminates welds in a cavity.

  1. On the physics of waves in the solar atmosphere: Wave heating and wind acceleration

    NASA Technical Reports Server (NTRS)

    Musielak, Z. E.

    1994-01-01

    New calculations of the acoustic wave energy fluxes generated in the solar convective zone have been performed. The treatment of convective turbulence in the Sun and solar-like stars, in particular the precise nature of the turbulent power spectrum, has been recognized as one of the most important issues in the wave generation problem. Several different functional forms for spatial and temporal spectra have been considered in the literature, and the differences between the energy fluxes obtained for different forms often exceed two orders of magnitude. The basic criterion for choosing the appropriate spectrum was the maximal efficiency of the wave generation. We have used a different approach based on physical and empirical arguments as well as on some results from numerical simulation of turbulent convection.

  2. Community petascale project for accelerator science and simulation : Advancing computational science for future accelerators and accelerator technologies.

    SciTech Connect

    Spentzouris, P.; Cary, J.; McInnes, L. C.; Mori, W.; Ng, C.; Ng, E.; Ryne, R.

    2008-01-01

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R & D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  3. Community Petascale Project for Accelerator Science And Simulation: Advancing Computational Science for Future Accelerators And Accelerator Technologies

    SciTech Connect

    Spentzouris, Panagiotis; Cary, John; Mcinnes, Lois Curfman; Mori, Warren; Ng, Cho; Ng, Esmond; Ryne, Robert; /LBL, Berkeley

    2011-10-21

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  4. Antiproton annihilation physics in the Monte Carlo particle transport code SHIELD-HIT12A

    NASA Astrophysics Data System (ADS)

    Taasti, Vicki Trier; Knudsen, Helge; Holzscheiter, Michael H.; Sobolevsky, Nikolai; Thomsen, Bjarne; Bassler, Niels

    2015-03-01

    The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data. An experimental depth dose curve obtained by the AD-4/ACE collaboration was compared with an earlier version of SHIELD-HIT, but since then inelastic annihilation cross sections for antiprotons have been updated and a more detailed geometric model of the AD-4/ACE experiment was applied. Furthermore, the Fermi-Teller Z-law, which is implemented by default in SHIELD-HIT12A, has been shown not to be a good approximation for the capture probability of negative projectiles by nuclei. We investigate other theories that have been developed and that give better agreement with experimental findings. The consequence of these updates is tested by comparing simulated data with the antiproton depth dose curve in water. It is found that the implementation of these new capture probabilities results in an overestimation of the depth dose curve in the Bragg peak. This can be mitigated by scaling the antiproton collision cross sections, which restores the agreement, but some small deviations still remain. Best agreement is achieved by using the most recent antiproton collision cross sections and the Fermi-Teller Z-law, even though experimental data indicate that the Z-law does not adequately describe annihilation on compounds. We conclude that more experimental cross section data are needed in the lower energy range in order to resolve this contradiction, ideally combined with more rigorous models for annihilation on compounds.
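
    The Fermi-Teller Z-law referred to above states that the probability of a stopped negative particle being captured on a given atom of a compound scales with the nuclear charge Z. The small C sketch below (purely illustrative; the function name is invented) evaluates the resulting capture fractions for water.

      #include <stdio.h>

      /* Fermi-Teller Z-law: the capture probability on constituent i of a
         compound is taken proportional to n_i * Z_i, where n_i is the
         stoichiometric count and Z_i the nuclear charge.                  */
      static void z_law(const int *Z, const int *n, int nspecies, double *prob) {
          double norm = 0.0;
          for (int i = 0; i < nspecies; ++i) norm += (double)n[i] * Z[i];
          for (int i = 0; i < nspecies; ++i) prob[i] = n[i] * Z[i] / norm;
      }

      int main(void) {
          /* water, H2O: two hydrogens (Z = 1) and one oxygen (Z = 8) */
          int    Z[2] = {1, 8}, n[2] = {2, 1};
          double p[2];
          z_law(Z, n, 2, p);
          printf("capture on H: %.2f, capture on O: %.2f\n", p[0], p[1]);
          /* prints 0.20 and 0.80; measurements discussed in the abstract
             indicate that the simple Z-law overestimates capture on
             hydrogen in compounds, which is why alternatives are tested. */
          return 0;
      }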

  5. The Rosslyn Code: Can Physics Explain a 500-Year Old Melody Etched in the Walls of a Scottish Chapel?

    SciTech Connect

    Wilson, Chris

    2011-10-19

    For centuries, historians have puzzled over a series of 213 symbols carved into the stone of Scotland’s Rosslyn Chapel. (Disclaimer: You may recognize this chapel from The Da Vinci Code, but this is real and unrelated!) Several years ago, a composer and science enthusiast noticed that the symbols bore a striking similarity to Chladni patterns, the elegant images that form on a two-dimensional surface when it vibrates at certain frequencies. This man’s theory: A 500-year-old melody was inscribed in the chapel using the language of physics. But not everyone is convinced. Slate senior editor Chris Wilson travelled to Scotland to investigate the claims and listen to this mysterious melody, whatever it is. Come find out what he discovered, including images of the patterns and audio of the music they inspired.

  6. Quasi-optical converters for high-power gyrotrons: a brief review of physical models, numerical methods and computer codes

    NASA Astrophysics Data System (ADS)

    Sabchevski, S.; Zhelyazkov, I.; Benova, E.; Atanassov, V.; Dankov, P.; Thumm, M.; Arnold, A.; Jin, J.; Rzesnicki, T.

    2006-07-01

    Quasi-optical (QO) mode converters are used to transform electromagnetic waves of complex structure and polarization generated in gyrotron cavities into a linearly polarized, Gaussian-like beam suitable for transmission. The efficiency of this conversion, as well as the maintenance of a low level of diffraction losses, is crucial for the implementation of powerful gyrotrons as radiation sources for electron-cyclotron-resonance heating of fusion plasmas. The use of adequate physical models, efficient numerical schemes and up-to-date computer codes may provide the high accuracy necessary for the design and analysis of these devices. In this review, we briefly sketch the most commonly used QO converters, the mathematical basis on which they are treated, and the basic features of the numerical schemes used. We then discuss the applicability of several commercially available and free software packages, with their advantages and drawbacks, for solving QO-related problems.

  7. Accelerating efforts to prevent childhood obesity: spreading, scaling, and sustaining healthy eating and physical activity.

    PubMed

    Chang, Debbie I; Gertel-Rosenberg, Allison; Snyder, Kim

    2014-12-01

    During the past decade, progress has been made in addressing childhood obesity through policy and practice changes that encourage increased physical activity and access to healthy food. With the implementation of these strategies, an understanding of what works to prevent childhood obesity is beginning to emerge. The task now is to consider how best to spread, scale, and sustain promising childhood obesity prevention strategies. In this article we examine a project led by Nemours, a children's health system, to address childhood obesity. We describe Nemours's conceptual approach to spreading, scaling, and sustaining a childhood obesity prevention intervention. We review a component of a Nemours initiative in Delaware that focused on early care and education settings and its expansion to other states through the National Early Care and Education Learning Collaborative to prevent childhood obesity. We also discuss lessons learned. Focusing on the spreading, scaling, and sustaining of promising strategies has the potential to increase the reach and impact of efforts in obesity prevention and help ensure their impact on population health.

  8. J-PAS: The Javalambre-Physics of the Accelerated Universe Astrophysical Survey

    NASA Astrophysics Data System (ADS)

    Benítez, N.; Dupke, R.; Moles, M.; Sodré, L.; Cenarro, A. J.; Marín Franch, A.; Taylor, K.; Cristóbal, D.; Fernández-Soto, A.; Mendes de Oliveira, C.; Cepa-Nogué, J.; Abramo, L. R.; Alcaniz, J. S.; Overzier, R.; Hernández-Monteagudo, C.; Alfaro, E. J.; Kanaan, A.; Carvano, M.; Reis, R. R. R.; J-PAS Collaboration

    2015-05-01

    J-PAS is a Spanish-Brazilian 8500 deg^2 Cosmological Survey which will be carried out from the Javalambre Observatory with a purpose-built, dedicated 2.5 m telescope and a 4.7 deg^2 camera with 1.2 Gpix. Starting in 2015, J-PAS will use 59 filters to measure high-precision 0.003(1+z) photometric redshifts for 90M galaxies plus several million QSOs, about 50 times more than the largest current spectroscopic survey, sampling an effective volume of ~14 Gpc^3 up to z=1.3. J-PAS will not only be the first radial BAO experiment to reach Stage IV; it will also detect and measure the mass of 7 × 10^5 galaxy clusters and groups, setting constraints on Dark Energy that rival those obtained from BAO measurements. The combination of a set of 145 Å NB filters, placed 100 Å apart, and a multi-degree field of view is a powerful "redshift machine", equivalent to a 4000 multiplexing spectrograph, but many times cheaper to build. The J-PAS camera is equivalent to a very large, 4.7 deg^2 "IFU", which will produce a time-resolved, 3D image of the Northern Sky with a very wide range of scientific applications in Galaxy Evolution, Stellar Physics and the Solar System.

  9. Multi-processor developments in the United States for future high energy physics experiments and accelerators

    SciTech Connect

    Gaines, I.

    1988-03-01

    The use of multi-processors for analysis and high-level triggering in High Energy Physics experiments, pioneered by the early emulator systems, has reached maturity, in particular with the multiple microprocessor systems in use at Fermilab. It is widely acknowledged that such systems will fulfill the major portion of the computing needs of future large experiments. Recent developments at Fermilab's Advanced Computer Program will make such systems even more powerful, cost-effective, and easier to use than they are at present. The next generation of microprocessors, already available, will provide CPU power of about one VAX 780 equivalent/$300, while supporting most VMS FORTRAN extensions and large (>8MB) amounts of memory. Low-cost, high-density mass storage devices (based on video tape cartridge technology) will allow parallel I/O to remove potential I/O bottlenecks in systems of over 1000 VAX-equivalent processors. New interconnection schemes and system software will allow more flexible topologies and extremely high data bandwidth, especially for on-line systems. This talk will summarize the work at the Advanced Computer Program and the rest of the US in this field. 3 refs., 4 figs.

  10. Reactivity effects in VVER-1000 of the third unit of the Kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    SciTech Connect

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N.; Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.

    2010-12-15

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created using the UNK code for 3D diffusion computations of the first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density reactivity effects, as well as the reactivity coefficient with respect to the boric acid concentration in the reactor, were also computed. The computed results are compared with experiment.
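
    For readers unfamiliar with the two-group diffusion approximation used here, its standard steady-state form (written generically, with all fission neutrons born in the fast group; this notation is not taken from the papers) couples a fast flux \phi_1 and a thermal flux \phi_2 through down-scattering and fission:

      -\nabla\cdot D_1\nabla\phi_1 + (\Sigma_{a1} + \Sigma_{1\to 2})\,\phi_1 = \frac{1}{k_{\mathrm{eff}}}\,(\nu\Sigma_{f1}\,\phi_1 + \nu\Sigma_{f2}\,\phi_2),
      \qquad
      -\nabla\cdot D_2\nabla\phi_2 + \Sigma_{a2}\,\phi_2 = \Sigma_{1\to 2}\,\phi_1.

    Codes of this type solve the resulting eigenvalue problem for k_eff and the flux distribution, using few-group cross sections prepared by a lattice code, which is the role played here by UNK.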

  11. Reactivity effects in VVER-1000 of the third unit of the Kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    NASA Astrophysics Data System (ADS)

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N.; Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.

    2010-12-01

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created using the UNK code for 3D diffusion computations of the first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density reactivity effects, as well as the reactivity coefficient with respect to the boric acid concentration in the reactor, were also computed. The computed results are compared with experiment.

  12. Integration of the DRAGON5/DONJON5 codes in the SALOME platform for performing multi-physics calculations in nuclear engineering

    NASA Astrophysics Data System (ADS)

    Hébert, Alain

    2014-06-01

    We present the computer science techniques involved in the integration of the DRAGON5 and DONJON5 codes into the SALOME platform. This integration brings new capabilities in designing multi-physics computational schemes, with the possibility of coupling our reactor physics codes with thermal-hydraulics or thermo-mechanics codes from other organizations. A demonstration is presented where two code components are coupled using the YACS module of SALOME, based on the CORBA protocol. The first component is a full-core 3D steady-state neutronic calculation in a PWR performed using DONJON5. The second component implements a set of 1D thermal-hydraulics calculations, each performed over a single assembly.
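
    To make the coupling pattern concrete, the toy C sketch below (purely illustrative; it uses none of the SALOME, YACS, DRAGON5 or DONJON5 APIs, and all names and coefficients are invented) shows the kind of fixed-point iteration such a multi-physics scheme performs: a neutronics component produces a power field from fuel temperatures, a thermal-hydraulics component produces temperatures from power, and the two are iterated to consistency.

      #include <stdio.h>
      #include <math.h>

      #define NCHAN 4   /* toy number of assemblies / channels */

      /* stand-in for the neutronics solve: power falls as fuel temperature rises */
      static void neutronics(const double *T, double *P) {
          for (int i = 0; i < NCHAN; ++i) P[i] = 100.0 - 0.05 * (T[i] - 550.0);
      }

      /* stand-in for the 1D thermal-hydraulics solve: temperature follows power */
      static void thermal_hydraulics(const double *P, double *T) {
          for (int i = 0; i < NCHAN; ++i) T[i] = 550.0 + 0.8 * P[i];
      }

      int main(void) {
          double T[NCHAN], P[NCHAN];
          for (int i = 0; i < NCHAN; ++i) T[i] = 550.0;          /* initial guess    */

          for (int it = 0; it < 50; ++it) {                      /* Picard iteration */
              double Told[NCHAN], diff = 0.0;
              for (int i = 0; i < NCHAN; ++i) Told[i] = T[i];
              neutronics(T, P);                                  /* component 1      */
              thermal_hydraulics(P, T);                          /* component 2      */
              for (int i = 0; i < NCHAN; ++i) diff = fmax(diff, fabs(T[i] - Told[i]));
              printf("iteration %2d: max temperature change = %.6f K\n", it, diff);
              if (diff < 1e-6) break;                            /* converged        */
          }
          return 0;
      }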

  13. Wind-US Code Physical Modeling Improvements to Complement Hypersonic Testing and Evaluation

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.; Yoder, Dennis A.; Towne, Charles S.; Engblom, William A.; Bhagwandin, Vishal A.; Power, Greg D.; Lankford, Dennis W.; Nelson, Christopher C.

    2009-01-01

    This report gives an overview of physical modeling enhancements to the Wind-US flow solver that were made to improve the capabilities for simulation of hypersonic flows and the reliability of computations to complement hypersonic testing. The improvements include advanced turbulence models, a bypass transition model, a conjugate (or closely coupled to vehicle structure) conduction-convection heat transfer capability, and an upgraded high-speed combustion solver. A Mach 5 shock-wave boundary layer interaction problem is used to investigate the benefits of k-ε and k-ω based explicit algebraic stress turbulence models relative to linear two-equation models. The bypass transition model is validated using data from experiments for incompressible boundary layers and a Mach 7.9 cone flow. The conjugate heat transfer method is validated for a test case involving reacting H2-O2 rocket exhaust over cooled calorimeter panels. A dual-mode scramjet configuration is investigated using both a simplified 1-step kinetics mechanism and an 8-step mechanism. Additionally, variations in the turbulent Prandtl and Schmidt numbers are considered for this scramjet configuration.

  14. CTH: A three-dimensional large deformation, strong shock wave physics code

    NASA Astrophysics Data System (ADS)

    McGlaun, J. M.; Thompson, S. L.; Elrick, M. G.

    1988-09-01

    CTH is a software system under development at Sandia National Laboratories Albuquerque to model multidimensional, multi-material, large deformation, strong shock wave physics. One-dimensional, two-dimensional, and three-dimensional Eulerian meshes are currently available. CTH uses tabular or analytic equations of state that model solid, liquid, vapor, plasma, and mixed-phase materials. CTH can model elastic-plastic behavior, high explosives, fracture, and motion of fragments smaller than a computational cell. CTH was carefully structured to vectorize and multitask on the CRAY X-MP. Three-dimensional databases reside on the CRAY solid state disk with only five planes in core at once. The input/output to the solid state disk is overlapped with computations so there is no penalty for using the solid state disk. This allows very large problems to be run effectively. A sophisticated post-processor, CTHED, has been developed for interactive analysis using color graphics. This paper describes the architecture, database structure, models, and novel features of CTH. Special emphasis will be placed on the features that are novel to CTH or are not direct generalizations of two-dimensional models.

  15. CTH: A three-dimensional large deformation, strong shock wave physics code

    SciTech Connect

    McGlaun, J.M.; Thompson, S.L.; Elrick, M.G.

    1988-01-01

    CTH is a software system under development at Sandia National Laboratories Albuquerque to model multidimensional, multi-material, large deformation, strong shock wave physics. One-dimensional, two-dimensional, and three-dimensional Eulerian meshes are currently available. CTH uses tabular or analytic equations of state that model solid, liquid, vapor, plasma, and mixed-phase materials. CTH can model elastic-plastic behavior, high explosives, fracture, and motion of fragments smaller than a computational cell. CTH was carefully structured to vectorize and multitask on the CRAY X-MP. Three-dimensional databases reside on the CRAY solid state disk with only five planes in core at once. The input/output to the solid state disk is overlapped with computations so there is no penalty for using the solid state disk. This allows very large problems to be run effectively. A sophisticated post-processor, CTHED, has been developed for interactive analysis using color graphics. This paper describes the architecture, database structure, models, and novel features of CTH. Special emphasis will be placed on the features that are novel to CTH or are not direct generalizations of two-dimensional models. 8 refs., 1 fig.

  16. Comparison of the physical optics code with the GOIE method and the direct solution of Maxwell equations obtained by FDTD

    NASA Astrophysics Data System (ADS)

    Konoshonkin, Alexander V.; Kustova, Natalia V.; Borovoi, Anatoli G.; Ishimoto, Hiroshi; Masuda, Kazuhiko; Okamoto, Hajime

    2015-11-01

    A comparison of the physical optics code and GOIE method to solve the problem of light scattering by hexagonal ice crystals has been presented. It was found that in the case of diffraction on a hole in the perpendicular screen, both methods give the same diffraction scattering cross section for the diffraction angles up to 60 degrees. The polarization elements of the Mueller matrix in this case differ significantly even for the angles of 15-30 degrees. It is also shown that in the case of diffraction on the tilted screen, the difference between these methods may be significant. The comparison of the results with the exact solution obtained by FDTD has confirmed that the difference between these methods is not significant for the case of diffraction on the perpendicular screen, but it is slightly preferable to use the GOIE for the calculations. The good agreement with the exact solution confirms the possibility of using the method of physical optics to solve the problem of light scattering by particles with characteristic size greater than 10 microns.

  17. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    SciTech Connect

    Spentzouris, P.; Cary, J.; McInnes, L.C.; Mori, W.; Ng, C.; Ng, E.; Ryne, R.; /LBL, Berkeley

    2011-11-14

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization

  18. GPU/MIC Acceleration of the LHC High Level Trigger to Extend the Physics Reach at the LHC

    SciTech Connect

    Halyo, Valerie; Tully, Christopher

    2015-04-14

    The quest for rare new physics phenomena leads the PI [3] to propose evaluation of coprocessors based on Graphics Processing Units (GPUs) and the Intel Many Integrated Core (MIC) architecture for integration into the trigger system at the LHC. This will require development of a new massively parallel implementation of the well-known Combinatorial Track Finder, which uses the Kalman Filter to accelerate processing of data from the silicon pixel and microstrip detectors and reconstruct the trajectory of all charged particles down to momenta of 100 MeV. It is expected to run at least one order of magnitude faster than an equivalent algorithm on a quad-core CPU for extreme pileup scenarios of 100 interactions per bunch crossing. The new tracking algorithms will be developed and optimized separately on the GPU and Intel MIC and then evaluated against each other for performance and power efficiency. The results will be used to project the cost of the proposed hardware architectures for the HLT server farm, taking into account the long-term projections of the main vendors in the market (AMD, Intel, and NVIDIA) over the next 10 years. Extensive experience and familiarity of the PI with the LHC tracker and trigger requirements led to the development of a complementary tracking algorithm that is described in [arxiv: 1305.4855], [arxiv: 1309.6275] and preliminary results accepted by JINST.
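
    The Kalman filter at the core of such a track finder is, per measurement, just a predict/update pair that is cheap enough to run massively in parallel across track candidates. The C sketch below shows a generic one-dimensional Kalman update, only to illustrate the arithmetic repeated for every hit; it is not the CMS implementation, and the noise values are invented.

      #include <stdio.h>

      /* One predict/update cycle of a one-dimensional Kalman filter.
         x, P : state estimate and its variance (updated in place)
         q, r : process-noise and measurement-noise variances
         z    : new measurement                                          */
      static void kalman_step(double *x, double *P, double q, double r, double z) {
          double x_pred = *x;          /* predict: state unchanged ...   */
          double P_pred = *P + q;      /* ... but uncertainty grows      */
          double K = P_pred / (P_pred + r);        /* Kalman gain        */
          *x = x_pred + K * (z - x_pred);          /* blend with the hit */
          *P = (1.0 - K) * P_pred;
      }

      int main(void) {
          double x = 0.0, P = 1.0;                       /* initial guess     */
          double hits[5] = {0.9, 1.1, 1.0, 0.95, 1.05};  /* toy hit positions */
          for (int i = 0; i < 5; ++i) {
              kalman_step(&x, &P, 1e-4, 0.01, hits[i]);
              printf("after hit %d: x = %.4f, variance = %.5f\n", i, x, P);
          }
          return 0;
      }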

  19. Nonlinear Acceleration Methods for Even-Parity Neutron Transport

    SciTech Connect

    W. J. Martin; C. R. E. De Oliveira; H. Park

    2010-05-01

    Convergence acceleration methods for even-parity transport were developed that have the potential to speed up transport calculations and provide a natural avenue for an implicitly coupled multiphysics code. We investigated the acceleration properties of introducing a nonlinear quasi-diffusion-like tensor into linear and nonlinear solution schemes. Using the tensor-reduced matrix as a preconditioner for the conjugate gradient method proves highly efficient and effective. The results for the linear and nonlinear cases serve as the basis for further research into the application in a full three-dimensional spherical-harmonics even-parity transport code. Once moved into the nonlinear solution scheme, the convergence-accelerated transport method can be coupled implicitly and seamlessly into codes for other physics, providing an efficient, fully implicitly coupled multiphysics code with high-order transport.
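
    Using a reduced matrix as a preconditioner for conjugate gradients follows the standard preconditioned CG recursion. The C sketch below applies it to a small symmetric positive-definite system with a simple diagonal (Jacobi) preconditioner standing in for the quasi-diffusion-like tensor described above; it is a generic illustration of the method, not code from the transport package.

      #include <stdio.h>
      #include <math.h>

      #define N 4

      static void matvec(const double A[N][N], const double *x, double *y) {
          for (int i = 0; i < N; ++i) {
              y[i] = 0.0;
              for (int j = 0; j < N; ++j) y[i] += A[i][j] * x[j];
          }
      }

      static double dot(const double *a, const double *b) {
          double s = 0.0;
          for (int i = 0; i < N; ++i) s += a[i] * b[i];
          return s;
      }

      int main(void) {
          /* small symmetric positive-definite test system A x = b */
          double A[N][N] = {{4,1,0,0},{1,4,1,0},{0,1,4,1},{0,0,1,4}};
          double b[N] = {1, 2, 3, 4}, x[N] = {0, 0, 0, 0};
          double r[N], z[N], p[N], Ap[N];

          matvec(A, x, r);
          for (int i = 0; i < N; ++i) r[i] = b[i] - r[i];        /* r = b - A x  */
          for (int i = 0; i < N; ++i) z[i] = r[i] / A[i][i];     /* z = M^{-1} r */
          for (int i = 0; i < N; ++i) p[i] = z[i];
          double rz = dot(r, z);

          for (int it = 0; it < 100 && sqrt(dot(r, r)) > 1e-12; ++it) {
              matvec(A, p, Ap);
              double alpha = rz / dot(p, Ap);
              for (int i = 0; i < N; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
              for (int i = 0; i < N; ++i) z[i] = r[i] / A[i][i];  /* apply preconditioner */
              double rz_new = dot(r, z);
              for (int i = 0; i < N; ++i) p[i] = z[i] + (rz_new / rz) * p[i];
              rz = rz_new;
          }
          for (int i = 0; i < N; ++i) printf("x[%d] = %.6f\n", i, x[i]);
          return 0;
      }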

  20. A 0.18 μm CMOS transmit physical coding sublayer IC for 100G Ethernet

    NASA Astrophysics Data System (ADS)

    Weihua, Ruan; Qingsheng, Hu

    2016-03-01

    This paper presents a transmit physical coding sublayer (PCS) circuit for 100G Ethernet. Based on the 4 × 25 Gb/s architecture according to the IEEE P802.3ba and IEEE P802.3bmTM/D1.1 standards, this PCS circuit is designed using a semi-custom design method and consists of 4 modules: a 64B/66B encoder, a scrambler, multi-lane distribution and a 66:8 gearbox. By using a pipelined structure and several optimization techniques, the working speed of the circuit is increased significantly. Parallel scrambling combined with logic optimization also improves the performance. In addition, a phase-independent structure is employed in the design of the gearbox to ensure that it works stably and reliably at high frequency. This PCS circuit has been fabricated in 0.18 μm CMOS technology and the total area is 1.7 × 1.7 mm2. Measured results show that the circuit works properly at 100 Gb/s and the power consumption is about 284 mW with a 1.8 V supply. Project supported by the National Natural Science Foundation of China (No. 6504000129) and the National Basic Research Program of China (No. 6504000052).
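
    For context, the scrambler in the 64B/66B PCS is the self-synchronizing scrambler with polynomial 1 + x^39 + x^58 (IEEE 802.3 Clause 49): each transmitted bit is the payload bit XORed with the scrambled bits produced 39 and 58 bit-times earlier. The bit-serial C sketch below is a behavioural model of that operation only; it does not represent the parallelized hardware structure developed in the paper, and the initial register value and test data are arbitrary.

      #include <stdio.h>
      #include <stdint.h>

      /* Self-synchronizing scrambler of the 64B/66B PCS, G(x) = 1 + x^39 + x^58:
         each output bit is the input bit XORed with the scrambled bits produced
         39 and 58 bit-times earlier. The 58-bit history is kept in 'state';
         its initial value is arbitrary (all ones here).                        */
      static uint64_t state = 0x3FFFFFFFFFFFFFFULL;

      static int scramble_bit(int in) {
          int out = in ^ (int)((state >> 38) & 1) ^ (int)((state >> 57) & 1);
          state = ((state << 1) | (uint64_t)out) & 0x3FFFFFFFFFFFFFFULL;
          return out;
      }

      int main(void) {
          const uint8_t payload[8] = {0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88};
          printf("scrambled: ");
          for (int byte = 0; byte < 8; ++byte) {
              uint8_t s = 0;
              for (int bit = 0; bit < 8; ++bit)        /* least-significant bit first */
                  s |= (uint8_t)((scramble_bit((payload[byte] >> bit) & 1)) << bit);
              printf("%02x ", s);
          }
          printf("\n");
          return 0;
      }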

  1. Parallel Monte Carlo transport modeling in the context of a time-dependent, three-dimensional multi-physics code

    SciTech Connect

    Procassini, R.J.

    1997-12-31

    The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
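
    As a schematic of the spatial-parallelism idea (particles that cross a subdomain boundary are handed to the neighboring processor by message passing), a minimal mpi4py sketch; the periodic 1D domain and particle data are invented for illustration and have nothing to do with MONACO's internals. Run with at least 3 ranks, e.g. mpiexec -n 4.

      from mpi4py import MPI
      import random

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Each rank owns slab [rank, rank+1) of a periodic 1D domain of length `size`.
      particles = [rank + random.random() for _ in range(10)]
      right, left = (rank + 1) % size, (rank - 1) % size

      for step in range(5):
          # Random-walk the particles, wrapping around the periodic domain.
          particles = [(x + random.uniform(-0.2, 0.2)) % size for x in particles]

          # Sort particles by the rank that now owns them (only neighbors are reachable here).
          moved = {}
          for x in particles:
              moved.setdefault(int(x) % size, []).append(x)
          particles = moved.pop(rank, [])

          # Hand migrating particles to neighbors; the counts differ every step,
          # which is what makes the communication pattern non-deterministic.
          from_left = comm.sendrecv(moved.get(right, []), dest=right, source=left)
          from_right = comm.sendrecv(moved.get(left, []), dest=left, source=right)
          particles += from_left + from_right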

  2. EDITORIAL: Laser and plasma accelerators Laser and plasma accelerators

    NASA Astrophysics Data System (ADS)

    Bingham, Robert

    2009-02-01

    as photon deceleration and acceleration and is the result of a modulational instability. Simulations reported by Trines et al. using a photon-in-cell code or wave kinetic code agree extremely well with experimental observation. Ion acceleration is actively studied; for example, the papers by Robinson, Macchi, Marita and Tripathi all discuss different types of acceleration mechanisms, from direct laser acceleration to Coulombic explosion and double layers. Ion acceleration is an exciting development that may have great promise in oncology. The surprising application is in muon acceleration, demonstrated by Peano et al., who show that counterpropagating laser beams with variable frequencies drive a beat structure with variable phase velocity, leading to particle trapping and acceleration with possible application to a future muon collider and neutrino factory. Laser and plasma accelerators remain one of the exciting areas of plasma physics, with applications in many areas of science ranging from laser fusion and novel high-brightness radiation sources to particle physics and medicine. The guest editor would like to thank all authors and referees for their invaluable contributions to this special issue.

  3. Cardiac acceleration at the onset of exercise: a potential parameter for monitoring progress during physical training in sports and rehabilitation.

    PubMed

    Hettinga, Florentina J; Monden, Paul G; van Meeteren, Nico L U; Daanen, Hein A M

    2014-05-01

    There is a need for easy-to-use methods to assess training progress in sports and rehabilitation research. The present review investigated whether cardiac acceleration at the onset of physical exercise (HRonset) can be used as a monitoring variable. The digital databases of Scopus and PubMed were searched to retrieve studies investigating HRonset. In total 652 studies were retrieved. These articles were then classified as having emphasis on HRonset in a sports or rehabilitation setting, which resulted in 8 of 112 studies with a sports application and 6 of 68 studies with a rehabilitation application that met inclusion criteria. Two co-existing mechanisms underlie HRonset: feedforward (central command) and feedback (mechanoreflex, metaboreflex, baroreflex) control. A number of studies investigated HRonset during the first few seconds of exercise (HRonsetshort), in which central command and the mechanoreflex determine vagal withdrawal, the major mechanism by which heart rate (HR) increases. In subsequent sports and rehabilitation studies, interest focused on HRonset during dynamic exercise over a longer period of time (HRonsetlong). Central command, mechanoreflexes, baroreflexes, and possibly metaboreflexes contribute to HRonset during the first seconds and minutes of exercise, which in turn leads to further vagal withdrawal and an increase in sympathetic activity. HRonset has been described as the increase in HR compared with resting state (delta HR) or by exponential modeling, with measurement intervals ranging from 0-4 s up to 2 min. Delta HR was used to evaluate HRonsetshort over the first 4 s of exercise, as well as for analyzing HRonsetlong. In exponential modeling, the HR response to dynamic exercise is biphasic, consisting of fast (parasympathetic, 0-10 s) and slow (sympathetic, 1-4 min) components. Although available studies differed largely in measurement protocols, cross-sectional and longitudinal training studies showed that studies analyzing HRonset

  4. The relation between tilt table and acceleration-tolerance and their dependence on stature and physical fitness

    NASA Technical Reports Server (NTRS)

    Klein, K. E.; Backhausen, F.; Bruner, H.; Eichhorn, J.; Jovy, D.; Schotte, J.; Vogt, L.; Wegman, H. M.

    1980-01-01

    A group of 12 highly trained athletes and a group of 12 untrained students were subjected to passive changes of position on a tilt table and to positive accelerations in a centrifuge. During a 20 min tilt, including two additional respiratory maneuvers, the number of faints and the average cardiovascular responses did not differ significantly between the groups. During linear increase of acceleration, the average blackout level was almost identical in both groups. Statistically significant coefficients of product-moment correlation for various relations were obtained. The coefficient of multiple determination computed for the dependence of acceleration tolerance on heart-eye distance and systolic blood pressure at rest explains almost 50% of the variation in acceleration tolerance. The maximum oxygen uptake showed the expected significant correlation to the heart rate at rest, but not to the acceleration tolerance or to the cardiovascular responses to tilting.

  5. Review of multi-dimensional large-scale kinetic simulation and physics validation of ion acceleration in relativistic laser-matter interaction

    SciTech Connect

    Wu, Hui-Chun; Hegelich, B.M.; Fernandez, J.C.; Shah, R.C.; Palaniyappan, S.; Jung, D.; Yin, L; Albright, B.J.; Bowers, K.; Huang, C.; Kwan, T.J.

    2012-06-19

    Two new experimental technologies enabled realization of the Break-Out Afterburner (BOA): the high-quality Trident laser and free-standing nanometer-scale carbon targets. VPIC is a powerful tool for fundamental research of relativistic laser-matter interaction. Predictions from VPIC have been validated, including the novel BOA and solitary ion acceleration mechanisms. VPIC is a fully explicit Particle-In-Cell (PIC) code: it models the plasma as billions of macro-particles moving on a computational mesh. The VPIC particle advance (which typically dominates the computation) has been optimized extensively for many different supercomputers. Laser-driven ions lead toward realization of promising applications, including ion-based fast ignition, active interrogation, and hadron therapy.

  6. Simulation Code Development and Its Applications

    NASA Astrophysics Data System (ADS)

    Li, Zenghai

    2015-10-01

    Under the support of the U.S. DOE SciDAC program, SLAC has been developing a suite of 3D parallel finite-element codes aimed at high-accuracy, high-fidelity electromagnetic and beam physics simulations for the design and optimization of next-generation particle accelerators. Running on the latest supercomputers, these codes have advanced the state of the art in applied mathematics and computer science at the petascale, enabling the integrated modeling of electromagnetics, self-consistent Particle-In-Cell (PIC) particle dynamics, and thermal, mechanical, and multi-physics effects. This paper presents the latest development and application of ACE3P to a wide range of accelerator projects.

  7. BOOK REVIEW Cracking the Einstein Code: Relativity and the Birth of Black Hole Physics With an Afterword by Roy Kerr

    NASA Astrophysics Data System (ADS)

    Carr, Bernard

    2011-02-01

    General relativity is arguably the most beautiful scientific theory ever conceived but its status within mainstream physics has vacillated since it was proposed in 1915. It began auspiciously with the successful explanation of the precession of Mercury and the dramatic confirmation of light-bending in the 1919 solar eclipse expedition, which turned Einstein into an overnight celebrity. Though little noticed at the time, there was also Karl Schwarzschild's discovery of the spherically symmetric solution in 1916 (later used to predict the existence of black holes) and Alexander Friedmann's discovery of the cosmological solution in 1922 (later confirmed by the discovery of the cosmic expansion). Then for 40 years the theory was more or less forgotten, partly because most physicists were turning their attention to the even more radical developments of quantum theory but also because the equations were too complicated to solve except in situations involving special symmetries or very weak gravitational fields (where general relativity is very similar to Newtonian theory). Furthermore, it was not clear that strong gravitational fields would ever arise in the real universe and, even if they did, it seemed unlikely that Einstein's equations could then be solved. So research in relativity became a quiet backwater as mainstream physics swept forward in other directions. Even Einstein lost interest, turning his attention to the search for a unified field theory. This book tells the remarkable story of how the tide changed in 1963, when the 28-year-old New Zealand mathematician Roy Kerr discovered an exact solution of Einstein's equations which represents a rotating black hole, thereby cracking the code of the title. The paper was just a few pages long, it being left for others to fill in the extensive beautiful mathematics which underlay the result, but it ushered in a golden age of relativity and is now one of the most cited works in physics. Coincidentally, Kerr

  8. Computational studies and optimization of wakefield accelerators

    SciTech Connect

    Tsung, Frank S.; Bruhwiler, David L.; Cary, John R.; Esarey, Eric H.; Mori, Warren B.; Vay, Jean-Luc; Martins, Samuel F.; Katsouleas, Tom; Cormier-Michel, Estelle; Fawley, William M.; Huang, Chengkun; Wang, Xiadong; Cowan, Ben; Decyk, Victor K.; Fonseca, Ricardo A.; Lu, Wei; Messmer, Peter; Mullowney, Paul; Nakamura, Kei; Paul, Kevin; Plateau, Guillaume R.; Schroeder, Carl B.; Silva, Luis O.; Toth, Csaba; Geddes, C.G.R.; Tzoufras, Michael; Antonsen, Tom; Vieira, Jorge; Leemans, Wim P.

    2008-06-16

    Laser- and particle beam-driven plasma wakefield accelerators produce accelerating fields thousands of times higher than radio-frequency accelerators, offering compactness and ultrafast bunches to extend the frontiers of high energy physics and to enable laboratory-scale radiation sources. Large-scale kinetic simulations provide essential understanding of accelerator physics to advance beam performance and stability and show and predict the physics behind recent demonstration of narrow energy spread bunches. Benchmarking between codes is establishing validity of the models used and, by testing new reduced models, is extending the reach of simulations to cover upcoming meter-scale multi-GeV experiments. This includes new models that exploit Lorentz boosted simulation frames to speed calculations. Simulations of experiments showed that recently demonstrated plasma gradient injection of electrons can be used as an injector to increase beam quality by orders of magnitude. Simulations are now also modeling accelerator stages of tens of GeV, staging of modules, and new positron sources to design next-generation experiments and to use in applications in high energy physics and light sources.

  9. Induction technology optimization code

    SciTech Connect

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-08-21

    A code has been developed to evaluate the relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport, and pulsed-power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top-level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials, and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. The Induction Technology Optimization Study (ITOS) was undertaken to examine viable combinations of a linear induction accelerator and a relativistic klystron (RK) for high-power microwave production. It is proposed that microwaves from the RK will power a high-gradient accelerator structure for linear collider development. Previous work indicates that the RK will require a nominal 3-MeV, 3-kA electron beam with a 100-ns flat top. The proposed accelerator-RK combination will be a high-average-power system capable of sustained microwave output at a 300-Hz pulse repetition frequency. The ITOS code models many combinations of injector, accelerator, and pulse-power designs that will supply an RK with the beam parameters described above.
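
    Purely to illustrate the kind of two-parameter design scan described above (total cost plotted against the injector-voltage rise time and the core aspect ratio), a toy Python sketch with an invented cost model; none of the cost factors below come from ITOS.

      import numpy as np

      def system_cost(rise_time_ns, aspect_ratio):
          """Toy stand-in for the ITOS cost model: faster rise times demand more pulsed-power
          hardware, while the core aspect ratio trades ferrite volume against cell count."""
          pulsed_power = 50.0 + 400.0 / rise_time_ns           # arbitrary cost units
          ferrite = 20.0 * aspect_ratio + 30.0 / aspect_ratio  # penalize extreme aspect ratios
          return pulsed_power + ferrite

      rise_times = np.linspace(10.0, 100.0, 10)   # ns
      aspects = np.linspace(0.5, 3.0, 11)
      costs = np.array([[system_cost(t, a) for a in aspects] for t in rise_times])
      i, j = np.unravel_index(costs.argmin(), costs.shape)
      print(f"toy optimum: rise time {rise_times[i]:.0f} ns, aspect ratio {aspects[j]:.2f}")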

  10. Simulations of a meter-long plasma wakefield accelerator

    SciTech Connect

    Lee, S.; Katsouleas, T.; Hemker, R.; Mori, W. B.

    2000-06-01

    Full-scale particle-in-cell simulations of a meter-long plasma wakefield accelerator (PWFA) are presented in two dimensions. The results support the design of a current PWFA experiment in the nonlinear blowout regime where analytic solutions are intractable. A relativistic electron bunch excites a plasma wake that accelerates trailing particles at rates of several hundred MeV/m. A comparison is made of various simulation codes, and a parallel object-oriented code OSIRIS is used to model a full meter of acceleration. Excellent agreement is obtained between the simulations and analytic expressions for the transverse betatron oscillations of the beam. The simulations are used to develop scaling laws for designing future multi-GeV accelerator experiments. (c) 2000 The American Physical Society.
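
    As a worked numerical example of the transverse betatron oscillations mentioned above, a small Python calculation using the standard blowout-regime relation k_beta = k_p / sqrt(2*gamma); the plasma density and beam energy below are illustrative values, not necessarily those of the experiment.

      import math

      # Physical constants (SI)
      e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8

      n_e = 2e14 * 1e6          # illustrative plasma density: 2e14 cm^-3, converted to m^-3
      gamma = 30e9 / 0.511e6    # illustrative beam energy: 30 GeV electrons

      omega_p = math.sqrt(n_e * e**2 / (eps0 * m_e))   # plasma frequency
      k_p = omega_p / c                                # plasma wavenumber
      k_beta = k_p / math.sqrt(2 * gamma)              # betatron wavenumber in the blowout regime
      lambda_beta = 2 * math.pi / k_beta

      print(f"plasma wavelength   {2 * math.pi / k_p * 1e3:.2f} mm")
      print(f"betatron wavelength {lambda_beta:.2f} m")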

  11. Policy Challenges in the Fight against Childhood Obesity: Low Adherence in San Diego Area Schools to the California Education Code Regulating Physical Education

    PubMed Central

    Consiglieri, G.; Leon-Chi, L.; Newfield, R. S.

    2013-01-01

    Objective. Assess the adherence to the Physical Education (PE) requirements per the California Education Code in San Diego area schools. Methods. Surveys were administered anonymously to children and adolescents capable of physical activity, visiting a specialty clinic at Rady Children's Hospital San Diego. The main questions asked were gender, grade, PE classes per week, and time spent doing PE. Results. 324 surveys were completed; 36 from charter-school students, who are not required to abide by the state code, were excluded. We report on 288 students (59% females), mostly Hispanic (43%) or Caucasian (34%). In grades 1–6, 66.7% reported under the 200 min per 10 school days required by the PE code. Only 20.7% had daily PE. The average number of PE days/week was 2.6. In grades 7–12, 42.2% reported under the 400 min per 10 school days required. Daily PE was noted in 47.8%. The average number of PE days/week was 3.4. Almost 17% had no PE, most notably in the final two grades of high school (45.7%). Conclusions. There is low adherence to the California Physical Education mandate in the San Diego area, contributing to poor fitness and obesity. Lack of adequate PE is most evident in grades 1–6 and grades 11-12. Better resources, awareness, and enforcement are crucial. PMID:23762537

  13. The direction of acceleration

    NASA Astrophysics Data System (ADS)

    Wilhelm, Thomas; Burde, Jan-Philipp; Lück, Stephan

    2015-11-01

    Acceleration is a physical quantity that is difficult to understand, and hence its complexity is often erroneously simplified. Many students think of acceleration as equivalent to velocity, a ∼ v. For others, acceleration is a scalar quantity which describes the change in speed, Δ|v| or Δ|v|/Δt (as opposed to the change in velocity). The main difficulty with the concept of acceleration therefore lies in developing a correct understanding of its direction. The free iOS app AccelVisu supports students in acquiring a correct conception of acceleration by showing acceleration arrows directly at moving objects.

  14. Simulating ion beam extraction from a single aperture triode acceleration column: A comparison of the beam transport codes IGUN and PBGUNS with test stand data

    SciTech Connect

    Patel, A.; Wills, J. S. C.; Diamond, W. T.

    2008-04-15

    Ion beam extraction from two different ion sources with single aperture triode extraction columns was simulated with the particle beam transport codes PBGUNS and IGUN. For each ion source, the simulation results are compared to experimental data generated on well-equipped test stands. Both codes reproduced the qualitative behavior of the extracted ion beams to incremental and scaled changes to the extraction electrode geometry observed on the test stands. Numerical values of optimum beam currents and beam emittance generated by the simulations also agree well with test stand data.

  15. Cross-sectional association of the number of neighborhood facilities assessed using postal code with objectively measured physical activity: the Saku cohort study.

    PubMed

    Yasunaga, Akitomo; Murakami, Haruka; Morita, Akemi; Deura, Kijyo; Aiba, Naomi; Watanabe, Shaw; Miyachi, Motohiko

    2016-01-01

    Objectives The aim of this study was to examine the association between the number of neighborhood facilities that were assessed according to postal code and objectively measured physical activity by using an accelerometer in community-dwelling Japanese people.Methods The participants included 1,274 Japanese people aged 30-84 years from the Saku cohort study. As neighborhood facilities related to physical activity, we extracted information regarding train stations, supermarkets/convenience stores, postal offices/banks, hospitals/clinics, public offices/community centers, cultural facilities/public children's houses, parks, and sports facilities by using each participant's postal code from the online version of the iTownPages directory published by Nippon Telegraph and Telephone Corporation (NTT) and the official homepage of the Saku City Government Office. We measured each participant's physical activity level using an accelerometer, and calculated the average daily step count and the average weekly period of moderate-to-vigorous intensity (≥3 metabolic equivalents of tasks [METs]) physical activity. The association between two selected physical activity-related variables and the numbers of eight types of neighborhood facilities were analyzed by multivariate logistic regression analysis for people aged 30-64 years and for those aged over 65 years.Results On multivariate logistic regression analysis, meeting the 23 METs h/week of moderate-to-vigorous intensity physical activity was significantly and positively associated with the number of supermarkets/convenience stores in the neighborhood in both age groups. In addition, meeting the desired daily step count outlined in the Japanese National Health Promotion guidelines was positively related to the number of postal offices/banks for people aged over 65 years.Conclusion The results of this study suggest that a sufficient number of neighborhood facilities (i.e., stores, banks, and postal offices) is closely

  17. Accelerate Implementation of the WHO Global Code of Practice on International Recruitment of Health Personnel: Experiences From the South East Asia Region

    PubMed Central

    Tangcharoensathien, Viroj; Travis, Phyllida

    2016-01-01

    Strengthening the health workforce and universal health coverage (UHC) are among the key targets in the health-related Sustainable Development Goals (SDGs) to be committed to by the United Nations (UN) Member States in September 2015. The health workforce, the backbone of health systems, contributes to functioning delivery systems. Equitable distribution of functioning services is indispensable to achieve one of the UHC goals of equitable access. This commentary argues that the World Health Organization (WHO) Global Code of Practice on International Recruitment of Health Personnel is relevant to the countries in the South East Asia Region (SEAR), as there is a significant outflow of health workers from several countries and a significant inflow in a few, increased demand for health workers in high- and middle-income countries, and slow progress in addressing the "push factors." Awareness and implementation of the Code were low in the first report in 2012 but significantly improved in the second report in 2015. An inter-country workshop in 2015 convened by WHO SEAR to review progress in implementation of the Code was an opportunity for countries to share lessons on policy implementation, on retention of health workers, on scaling up health professional education, and on managing in- and out-migration. The meeting noted that capturing out-migration of health personnel, which is notoriously difficult for source countries, is possible where there is active recruitment management through government-to-government (G to G) contracts or licensing of recruiters and a mandatory reporting requirement for them. According to the 2015 second report on the Code, the size and profile of the outflow of health workers from SEAR source countries is being captured and is now also increasingly being shared by destination-country professional councils. This is critical information to foster policy action and implementation of the Code in the Region. PMID:26673648

  18. Estimation of (41)Ar production in 0.1-1.0-GeV proton accelerator vaults using FLUKA Monte Carlo code.

    PubMed

    Biju, K; Sunil, C; Sarkar, P K

    2013-12-01

    FLUKA Monte Carlo simulations were carried out to estimate the (41)Ar concentration inside accelerator vaults of various sizes when proton beams of energy 0.1-1.0 GeV are incident on thick copper and lead targets. Generally, the (41)Ar concentration is estimated using an empirical formula suggested in NCRP 144, which assumes that the activation is caused by thermal neutrons alone. It is found that while the analytical and Monte Carlo techniques give similar results for the thermal neutron fluence inside the vault, the (41)Ar concentration is under-predicted by the empirical formula. It is also found that thermal neutrons contribute ∼41 % of the total (41)Ar production, while 56 % is caused by neutrons between 0.025 eV and 1 eV. A modified factor is suggested for use in the empirical expression to estimate the (41)Ar activity in 0.1-1.0-GeV proton accelerator enclosures.
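
    To make the thermal-neutron activation estimate concrete, a hedged back-of-the-envelope Python sketch of the kind of calculation implied by the thermal-only empirical approach; the cross section, half-life, and air composition are standard reference values quoted approximately from memory, and the fluence rate, ventilation rate, and irradiation time are invented inputs.

      import math

      # Approximate reference data (check against a nuclear data library before real use).
      sigma_th = 0.66e-28        # 40Ar(n,gamma)41Ar thermal cross section, ~0.66 b, in m^2
      half_life = 109.6 * 60.0   # 41Ar half-life, ~109.6 min, in s
      lam = math.log(2) / half_life

      # Argon content of air: ~0.93 % by volume, essentially all 40Ar.
      n_air = 2.5e25             # air molecules per m^3 at room temperature
      n_ar40 = 0.0093 * n_air

      # Invented vault parameters, for illustration only.
      phi_th = 1.0e5 * 1.0e4     # thermal fluence rate: 1e5 n cm^-2 s^-1, converted to m^-2 s^-1
      vent = 2.0 / 3600.0        # 2 air changes per hour, in s^-1
      t_irr = 4.0 * 3600.0       # 4 h of beam

      production = phi_th * sigma_th * n_ar40          # 41Ar atoms produced per m^3 per s
      removal = lam + vent                             # loss by decay plus ventilation
      n_41 = production / removal * (1 - math.exp(-removal * t_irr))
      activity = lam * n_41                            # Bq per m^3 of vault air
      print(f"41Ar activity concentration ~ {activity:.1f} Bq/m^3")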

  19. Smartphones as Experimental Tools: Different Methods to Determine the Gravitational Acceleration in Classroom Physics by Using Everyday Devices

    ERIC Educational Resources Information Center

    Kuhn, Jochen; Vogt, Patrik

    2013-01-01

    New media technology is becoming more and more important for our daily life as well as for teaching physics. Within the scope of our N.E.T. research project we develop experiments using New Media Experimental Tools (N.E.T.) in physics education and study their influence on students' learning abilities. We want to present the possibilities e.g. of…

  20. Reliability of Pre-Service Physical Education Teachers' Coding of Teaching Videos Using Studiocode[R] Analysis Software

    ERIC Educational Resources Information Center

    Prusak, Keven; Dye, Brigham; Graham, Charles; Graser, Susan

    2010-01-01

    This study examines the coding reliability and accuracy of pre-service teachers in a teaching methods class using digital video (DV)-based teaching episodes and Studiocode analysis software. Student self-analysis of DV footage may offer a high tech solution to common shortfalls of traditional systematic observation and reflection practices by…

  1. Modeling Ion Acceleration Using LSP

    NASA Astrophysics Data System (ADS)

    McMahon, Matthew

    This thesis presents the development of simulations modeling ion acceleration using the particle-in-cell code LSP. A new technique was developed to model the Target Normal Sheath Acceleration (TNSA) mechanism. Multiple simulations are performed, each optimized for a certain part of the TNSA process with appropriate information being passed from one to the next. The technique allows for tradeoffs between accuracy and speed. Physical length and timescales are met when necessary and different physical models are employed as needed. This TNSA modeling technique is used to perform a study on the effect front-surface structures have on the resulting ion acceleration. The front-surface structures tested have been shown to either modify the electron kinetic energy spectrum by increasing the maximum energy obtained or by increasing the overall coupling of laser energy to electron energy. Both of these types of front-surface structures are tested for their potential benefits for the accelerated ions. It is shown that optimizing the coupling of laser energy to electron energy is more important than producing extremely energetic electrons in the case of the TNSA ions. Simulations modeling the interaction of an intense laser with very thin (<100 nm thick) liquid crystal targets, modeled for the first time, are presented. Modeling this interaction is difficult and the effect of different simulation design choices is explored in depth. In particular, it is shown that the initial electron temperature used in the simulation has a significant effect on the resulting ion acceleration and light transmitted through the target. This behavior is explored through numerous 1D simulations.

  2. Plasma-based accelerator structures

    SciTech Connect

    Schroeder, Carl B.

    1999-12-01

    Plasma-based accelerators have the ability to sustain extremely large accelerating gradients, with possible high-energy physics applications. This dissertation further develops the theory of plasma-based accelerators by addressing three topics: the performance of a hollow plasma channel as an accelerating structure, the generation of ultrashort electron bunches, and the propagation of laser pulses in underdense plasmas.

  3. Accelerator Technology Division

    NASA Astrophysics Data System (ADS)

    1992-04-01

    In fiscal year (FY) 1991, the Accelerator Technology (AT) division continued fulfilling its mission to pursue accelerator science and technology and to develop new accelerator concepts for application to research, defense, energy, industry, and other areas of national interest. This report discusses the following programs: The Ground Test Accelerator Program; APLE Free-Electron Laser Program; Accelerator Transmutation of Waste; JAERI, OMEGA Project, and Intense Neutron Source for Materials Testing; Advanced Free-Electron Laser Initiative; Superconducting Super Collider; The High-Power Microwave Program; (Phi) Factory Collaboration; Neutral Particle Beam Power System Highlights; Accelerator Physics and Special Projects; Magnetic Optics and Beam Diagnostics; Accelerator Design and Engineering; Radio-Frequency Technology; Free-Electron Laser Technology; Accelerator Controls and Automation; Very High-Power Microwave Sources and Effects; and GTA Installation, Commissioning, and Operations.

  4. Preliminary assessment of existing experimental data for validation of reactor physics codes and data for NGNP design and analysis.

    SciTech Connect

    Terry, W. K.; Jewell, J. K.; Briggs, J. B.; Taiwo, T. A.; Park, W.S.; Khalil, H. S.

    2005-10-25

    The Next Generation Nuclear Plant (NGNP), a demonstration reactor and hydrogen production facility proposed for construction at the INEEL, is expected to be a high-temperature gas-cooled reactor (HTGR). Computer codes used in design and safety analysis for the NGNP must be benchmarked against experimental data. The INEEL and ANL have examined information about several past and present experimental and prototypical facilities based on HTGR concepts to assess the potential of these facilities for use in this benchmarking effort. Both reactors and critical facilities applicable to pebble-bed and prismatic block-type cores have been considered. Four facilities--HTR-PROTEUS, HTR-10, ASTRA, and AVR--appear to have the greatest potential for use in benchmarking codes for pebble-bed reactors. Similarly, for the prismatic block-type reactor design, two experiments have been ranked as having the highest priority--HTTR and VHTRC.

  5. Optimization and Parallelization of the Thermal-Hydraulic Sub-channel Code CTF for High-Fidelity Multi-physics Applications

    SciTech Connect

    Salko, Robert K; Schmidt, Rodney; Avramova, Maria N

    2014-01-01

    This paper describes major improvements to the computational infrastructure of the CTF sub-channel code so that full-core, sub-channel-resolved simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy (DOE) Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high-fidelity multi-physics simulation tools for nuclear energy design and analysis. A set of serial code optimizations--including fixing computational inefficiencies, optimizing the numerical approach, and making smarter data storage choices--is first described and shown to reduce both execution time and memory usage by about a factor of ten. Next, a Single Program Multiple Data (SPMD) parallelization strategy targeting distributed-memory Multiple Instruction Multiple Data (MIMD) platforms and utilizing domain decomposition is presented. In this approach, data communication between processors is accomplished by inserting standard MPI calls at strategic points in the code. The domain decomposition approach implemented assigns one MPI process to each fuel assembly, with each domain being represented by its own CTF input file. The creation of CTF input files, both for serial and parallel runs, is also fully automated through use of a pre-processor utility that takes a greatly reduced set of user input compared with the traditional CTF input file. To run CTF in parallel, two additional libraries are currently needed: MPI, for inter-processor message passing, and the Portable, Extensible Toolkit for Scientific Computation (PETSc), which is leveraged to solve the global pressure matrix in parallel. Results presented include a set of testing and verification calculations and performance tests assessing parallel scaling characteristics up to a full-core, sub-channel-resolved model of Watts Bar Unit 1 under hot full-power conditions (193 17x17
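
    A minimal schematic (not CTF code) of the one-MPI-rank-per-assembly decomposition described above, using mpi4py; the "assembly solve" is a contrived fixed-point placeholder, and the convergence test uses a simple allreduce in place of the parallel PETSc pressure solve.

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()            # one MPI process per fuel assembly, as in the CTF scheme

      # Placeholder local state for "this rank's assembly" (e.g., channel temperatures in K).
      local = np.full(100, 300.0 + rank)
      target = 565.0                    # invented fixed point standing in for the local solve

      for outer in range(200):
          # Stand-in for the local sub-channel sweep on this assembly.
          new = 0.5 * local + 0.5 * target
          residual = float(np.max(np.abs(new - local)))
          local = new

          # Global convergence requires every assembly to be converged, so take the worst
          # residual over all ranks (the real code solves a global pressure matrix here).
          worst = comm.allreduce(residual, op=MPI.MAX)
          if worst < 1e-6:
              break

      if rank == 0:
          print(f"converged after {outer + 1} outer iterations (worst residual {worst:.2e})")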

  6. Linear Accelerators

    SciTech Connect

    Sidorin, Anatoly

    2010-01-05

    In linear accelerators the particles are accelerated by either electrostatic fields or oscillating Radio Frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction, and RF accelerators. An overview of the different types of accelerators is given. The stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.

  7. Towards a heavy-ion transport capability in the MARS15 Code

    SciTech Connect

    Mokhov, N. V.; Gudima, K. K.; Mashnik, S. G.; Rakhno, I. L.; Striganov, S.

    2004-04-01

    In order to meet the challenges of new accelerator and space projects and to further improve the modelling of radiation effects in microscopic objects, heavy-ion interaction and transport physics have recently been incorporated into the MARS15 Monte Carlo code. A brief description of the new modules is given, together with comparisons to experimental data. The MARS Monte Carlo code is widely used in numerous accelerator, detector, shielding, and cosmic ray applications. The needs of the Relativistic Heavy-Ion Collider, the Large Hadron Collider, the Rare Isotope Accelerator, and NASA projects have recently motivated adding heavy-ion interaction and transport physics to the MARS15 code. The key modules of the new implementation are described below along with their comparisons to experimental data.

  8. AN INTEGRAL REACTOR PHYSICS EXPERIMENT TO INFER ACTINIDE CAPTURE CROSS-SECTIONS FROM THORIUM TO CALIFORNIUM WITH ACCELERATOR MASS SPECTROMETRY

    SciTech Connect

    G. Youinou; M. Salvatores; M. Paul; R. Pardo; G. Palmiotti; F. Kondev; G. Imel

    2010-04-01

    The principle of the proposed experiment is to irradiate very pure actinide samples in the Advanced Test Reactor (ATR) at INL and, after a given time, determine the amount of the different transmutation products. The determination of the nuclide densities before and after neutron irradiation will allow inference of effective neutron capture cross-sections. This approach has been used in the past and the novelty of this experiment is that the atom densities of the different transmutation products will be determined using the Accelerator Mass Spectroscopy (AMS) technique at the ATLAS facility located at ANL. It is currently planned to irradiate the following isotopes: 232Th, 235U, 236U, 238U, 237Np, 238Pu, 239Pu, 240Pu, 241Pu, 242Pu, 241Am, 243Am and 248Cm.

  9. Physics design for the ATA (Advanced Test Accelerator) tapered wiggler 10.6 μm FEL (Free-Electron Laser) amplifier experiment

    SciTech Connect

    Fawley, W.M.

    1985-05-09

    The design and construction of a high-gain, tapered wiggler 10.6 μm Free Electron Laser (FEL) amplifier to operate with the 50 MeV e-beam is underway. This report discusses the FEL simulation and the physics motivations behind the tapered wiggler design and the initial experimental diagnostics.

  10. Accelerators Beyond The Tevatron?

    SciTech Connect

    Lach, Joseph; /Fermilab

    2010-07-01

    Following the successful operation of the Fermilab superconducting accelerator three new higher energy accelerators were planned. They were the UNK in the Soviet Union, the LHC in Europe, and the SSC in the United States. All were expected to start producing physics about 1995. They did not. Why?

  11. Accelerators Beyond The Tevatron?

    SciTech Connect

    Lach, Joseph

    2010-07-29

    Following the successful operation of the Fermilab superconducting accelerator three new higher energy accelerators were planned. They were the UNK in the Soviet Union, the LHC in Europe, and the SSC in the United States. All were expected to start producing physics about 1995. They did not. Why?

  12. The Intercomparison of 3D Radiation Codes (I3RC): Showcasing Mathematical and Computational Physics in a Critical Atmospheric Application

    NASA Astrophysics Data System (ADS)

    Davis, A. B.; Cahalan, R. F.

    2001-05-01

    The Intercomparison of 3D Radiation Codes (I3RC) is an on-going initiative involving an international group of over 30 researchers engaged in the numerical modeling of three-dimensional radiative transfer as applied to clouds. Because of their strong variability and extreme opacity, clouds are indeed a major source of uncertainty in the Earth's local radiation budget (at GCM grid scales). Also 3D effects (at satellite pixel scales) invalidate the standard plane-parallel assumption made in the routine of cloud-property remote sensing at NASA and NOAA. Accordingly, the test-cases used in I3RC are based on inputs and outputs which relate to cloud effects in atmospheric heating rates and in real-world remote sensing geometries. The main objectives of I3RC are to (1) enable participants to improve their models, (2) publish results as a community, (3) archive source code, and (4) educate. We will survey the status of I3RC and its plans for the near future with a special emphasis on the mathematical models and computational approaches. We will also describe some of the prime applications of I3RC's efforts in climate models, cloud-resolving models, and remote-sensing observations of clouds, or that of the surface in their presence. In all these application areas, computational efficiency is the main concern and not accuracy. One of I3RC's main goals is to document the performance of as wide a variety as possible of three-dimensional radiative transfer models for a small but representative number of "cases." However, it is dominated by modelers working at the level of linear transport theory (i.e., they solve the radiative transfer equation) and an overwhelming majority of these participants use slow-but-robust Monte Carlo techniques. This means that only a small portion of the efficiency vs. accuracy vs. flexibility domain is currently populated by I3RC participants. To balance this natural clustering the present authors have organized a systematic outreach towards
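
    By way of illustration of the Monte Carlo approach most I3RC participants use, a deliberately minimal 1D slab photon Monte Carlo in Python (exponential free paths, isotropic scattering, single-scattering albedo); the 3D cloud cases in I3RC are far richer, and the parameters below are arbitrary.

      import random, math

      def slab_monte_carlo(tau_total=5.0, albedo=0.99, n_photons=20000):
          """Estimate reflectance/transmittance of a homogeneous scattering slab of optical
          depth tau_total with isotropic scattering (a toy 1D analogue of 3D cloud transfer)."""
          reflected = transmitted = 0
          for _ in range(n_photons):
              tau, mu = 0.0, 1.0                                    # start at top, heading straight down
              while True:
                  tau += mu * -math.log(1.0 - random.random())      # sample an exponential free path
                  if tau < 0.0:
                      reflected += 1
                      break
                  if tau > tau_total:
                      transmitted += 1
                      break
                  if random.random() > albedo:                      # absorbed
                      break
                  mu = random.uniform(-1.0, 1.0)                    # isotropic scattering direction
          return reflected / n_photons, transmitted / n_photons

      R, T = slab_monte_carlo()
      print(f"reflectance ~ {R:.3f}, transmittance ~ {T:.3f}")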

  13. Some calculator programs for particle physics. [LEGENDRE, ASSOCIATED LEGENDRE, CONFIDENCE, TWO BODY, ELLIPSE, DALITZ RECTANGULAR, and DALITZ TRIANGULAR codes]

    SciTech Connect

    Wohl, C.G.

    1982-01-01

    Seven calculator programs that do simple chores that arise in elementary particle physics are given. LEGENDRE evaluates the Legendre polynomial series Σ a_n P_n(x) at a series of values of x. ASSOCIATED LEGENDRE evaluates the first-associated Legendre polynomial series Σ b_n P_n^1(x) at a series of values of x. CONFIDENCE calculates confidence levels for χ², Gaussian, or Poisson probability distributions. TWO BODY calculates the c.m. energy, the initial- and final-state c.m. momenta, and the extreme values of t and u for a 2-body reaction. ELLIPSE calculates coordinates of points for drawing an ellipse plot showing the kinematics of a 2-body reaction or decay. DALITZ RECTANGULAR calculates coordinates of points on the boundary of a rectangular Dalitz plot. DALITZ TRIANGULAR calculates coordinates of points on the boundary of a triangular Dalitz plot. There are short versions of CONFIDENCE (EVEN N and POISSON) that calculate confidence levels for the even-degree-of-freedom χ² and Poisson cases, and there is a short version of TWO BODY (CM) that calculates just the c.m. energy and initial-state momentum. The programs are written for the HP-97 calculator. (WHK)
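
    For reference, a modern equivalent of two of these chores in Python: the LEGENDRE series evaluation via NumPy and a χ² confidence level via SciPy; the coefficients and test values below are arbitrary.

      import numpy as np
      from numpy.polynomial import legendre
      from scipy.stats import chi2

      # Evaluate sum_n a_n P_n(x) at several x values, as the HP-97 LEGENDRE program did.
      a = [1.0, 0.5, -0.25, 0.1]                  # arbitrary coefficients a_0..a_3
      x = np.linspace(-1.0, 1.0, 5)
      print(legendre.legval(x, a))

      # Confidence level for chi-squared (the CONFIDENCE program's job):
      print(chi2.sf(12.0, df=10))                 # P(chi^2 >= 12.0) for 10 degrees of freedom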

  14. Physics.

    ERIC Educational Resources Information Center

    Bromley, D. Allan

    1980-01-01

    The author presents the argument that the past few years, in terms of new discoveries, insights, and questions raised, have been among the most productive in the history of physics. Selected for discussion are some of the most important new developments in physics research. (Author/SA)

  15. Physics Notes.

    ERIC Educational Resources Information Center

    School Science Review, 1980

    1980-01-01

    Presents nine physics notes for British secondary school teachers. Some of these notes are: (1) speed of sound in a steel rod; (2) physics extracts-part four (1978); and (3) a graphical approach to acceleration. (HM)

  16. Ion Induction Accelerators

    NASA Astrophysics Data System (ADS)

    Barnard, John J.; Horioka, Kazuhiko

    The description of beams in RF and induction accelerators shares many common features. Likewise, there is considerable commonality between electron induction accelerators (see Chap. 7) and ion induction accelerators. However, in contrast to electron induction accelerators, there are fewer ion induction accelerators that have been operated as application-driven user facilities. Ion induction accelerators are envisioned for applications (see Chap. 10) such as Heavy Ion Fusion (HIF), High Energy Density Physics (HEDP), and spallation neutron sources. Most ion induction accelerators constructed to date have been limited-scale facilities built for feasibility studies for HIF and HEDP, where a large number of ions is required on target in short pulses. Because ions are typically non-relativistic or weakly relativistic in much of the machine, space-charge effects can be of crucial importance. This contrasts with the situation in electron machines, which are usually strongly relativistic, leading to weaker transverse space-charge effects and simplified longitudinal dynamics. Similarly, the bunch structure of ion induction accelerators relative to RF machines results in significant differences in the longitudinal physics.

  17. Overview of accelerators in medicine

    SciTech Connect

    Lennox, A.J.

    1993-06-01

    Accelerators used for medicine include synchrotrons, cyclotrons, betatrons, microtrons, and electron, proton, and light ion linacs. Some accelerators which were formerly found only at physics laboratories are now being considered for use in hospital-based treatment and diagnostic facilities. This paper presents typical operating parameters for medical accelerators and gives specific examples of clinical applications for each type of accelerator, with emphasis on recent developments in the field.

  18. Load management strategy for Particle-In-Cell simulations in high energy particle acceleration

    NASA Astrophysics Data System (ADS)

    Beck, A.; Frederiksen, J. T.; Dérouillat, J.

    2016-09-01

    In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performances. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.

  19. FAA Smoke Transport Code

    SciTech Connect

    Domino, Stefan; Luketa-Hanlin, Anay; Gallegos, Carlos

    2006-10-27

    FAA Smoke Transport Code, a physics-based Computational Fluid Dynamics tool, which couples heat, mass, and momentum transfer, has been developed to provide information on smoke transport in cargo compartments with various geometries and flight conditions. The software package contains a graphical user interface for specification of geometry and boundary conditions, analysis module for solving the governing equations, and a post-processing tool. The current code was produced by making substantial improvements and additions to a code obtained from a university. The original code was able to compute steady, uniform, isothermal turbulent pressurization. In addition, a preprocessor and postprocessor were added to arrive at the current software package.

  20. Rare Isotope Accelerators

    NASA Astrophysics Data System (ADS)

    Savard, Guy

    2002-04-01

    The next frontier for low-energy nuclear physics involves experimentation with accelerated beams of short-lived radioactive isotopes. A new facility, the Rare Isotope Accelerator (RIA), is proposed to produce large amounts of these rare isotopes and post-accelerate them to energies relevant for studies in nuclear physics, astrophysics, and the study of fundamental interactions at low energy. The basic science motivation for this facility will be introduced. The general facility layout, from the 400 kW heavy-ion superconducting linac used for production of the required isotopes to the novel production and extraction schemes and the highly efficient post-accelerator, will be presented. Special emphasis will be put on a number of technical breakthroughs and recent R&D results that enable this new facility.

  1. Physics

    NASA Astrophysics Data System (ADS)

    Campbell, Norman Robert

    2013-03-01

    Preface; Introduction; Part I. The Propositions of Science: 1. The subject matter of science; 2. The nature of laws; 3. The nature of laws (contd); 4. The discovery and proof of laws; 5. The explanation of laws; 6. Theories; 7. Chance and probability; 8. The meaning of science; 9. Science and philosophy; Part II. Measurement: 10. Fundamental measurement; 11. Physical number; 12. Fractional and negative magnitudes; 13. Numerical laws and derived magnitudes; 14. Units and dimensions; 15. The uses of dimensions; 16. Errors of measurement; methodical errors; 17. Errors of measurement; errors of consistency and the adjustment of observations; 18. Mathematical physics; Appendix; Index.

  2. Nuclear Physics Issues in Space Radiation Risk Assessment-The FLUKA Monte Carlo Transport Code Used for Space Radiation Measurement and Protection

    SciTech Connect

    Lee, K. T.

    2007-02-12

    The long-term human exploration goals that NASA has embraced require an understanding of the primary radiation and the secondary particle production under a variety of environmental conditions. In order to perform accurate transport simulations for the incident particles found in the space environment, accurate nucleus-nucleus inelastic event generators are needed, and NASA is funding their development. For the first time, NASA is including the radiation problem in the design of the next manned exploration vehicle. The NASA-funded FLUER-S (FLUKA Executing Under ROOT-Space) project has several goals beyond the improvement of the internal nuclear physics simulations. These include making FLUKA more user-friendly. Several tools have been developed to simplify the use of FLUKA without compromising its accuracy or versatility. Among these tools are a general source input, the ability to do distributed computing, simplification of geometry input, geometry and event visualization, and standard FLUKA scoring output analysis using a ROOT GUI. In addition to describing these tools, we will show how they have been used for space radiation environment data analysis in MARIE, IVCPDS, and EVCPDS. Similar analyses can be performed for future radiation measurement detectors before they are deployed in order to optimize their design. These tools can also be used in the design of nuclear-based power systems on manned exploration vehicles and planetary surfaces. In addition to these space applications, the simulations are being used to support accelerator-based experiments like the cross-section measurements being performed at HIMAC and at NSRL at BNL.

  3. GPU-optimized Code for Long-term Simulations of Beam-beam Effects in Colliders

    SciTech Connect

    Roblin, Yves; Morozov, Vasiliy; Terzic, Balsa; Aturban, Mohamed A.; Ranjan, D.; Zubair, Mohammed

    2013-06-01

    We report on the development of a new code for long-term simulation of beam-beam effects in particle colliders. The underlying physical model relies on matrix-based, arbitrary-order symplectic particle tracking for beam transport and the Bassetti-Erskine approximation for the beam-beam interaction. The computations are accelerated through a parallel implementation on a hybrid GPU/CPU platform. With the new code, previously computationally prohibitive long-term simulations become tractable. We use the new code to model the proposed medium-energy electron-ion collider (MEIC) at Jefferson Lab.
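
    To give a flavor of the underlying model, a minimal Python sketch of linear (matrix) turn-by-turn tracking with a thin beam-beam kick at the interaction point; for simplicity the kick below is the round-Gaussian-beam limit rather than the full Bassetti-Erskine formula used for elliptical beams, and all beam parameters are illustrative inventions.

      import numpy as np

      r_e = 2.818e-15            # classical electron radius [m]
      N_opp = 1.0e10             # particles in the opposing bunch (illustrative)
      gamma = 1.0e4              # Lorentz factor of the tracked beam (illustrative)
      sigma = 100e-6             # round opposing-beam rms size [m]

      # One-turn linear map for (x, x'): a rotation by the betatron phase advance,
      # standing in for the matrix-based symplectic transport of the real code.
      mu, beta = 2 * np.pi * 0.31, 1.0
      M = np.array([[np.cos(mu), beta * np.sin(mu)],
                    [-np.sin(mu) / beta, np.cos(mu)]])

      def beam_beam_kick(x, xp):
          """Thin-lens kick from a round Gaussian opposing bunch (1D projection)."""
          r2 = x * x + 1e-30
          dxp = -(2 * N_opp * r_e / gamma) * x / r2 * (1 - np.exp(-r2 / (2 * sigma**2)))
          return x, xp + dxp

      # Track one particle for many turns: linear arc, then beam-beam kick.
      x, xp = 1.0e-4, 0.0
      for turn in range(10000):
          x, xp = M @ np.array([x, xp])
          x, xp = beam_beam_kick(x, xp)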

  4. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    SciTech Connect

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
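
    For orientation, a toy sketch of the fission-matrix idea: tally a small matrix whose (i, j) entry is the expected number of fission neutrons produced in region i per fission neutron born in region j, then use its dominant eigenvector as the accelerated source guess. The 3x3 matrix below is invented; the real method estimates it from Monte Carlo tallies.

      import numpy as np

      # Invented fission matrix for a 3-region toy problem: F[i, j] = expected fission
      # neutrons born in region i per fission neutron starting in region j.
      F = np.array([[0.60, 0.20, 0.02],
                    [0.20, 0.55, 0.20],
                    [0.02, 0.20, 0.60]])

      # Power iteration on F converges to the fundamental-mode source shape and k-eff,
      # which can then be fed back as the starting source of the next Monte Carlo batch.
      source = np.ones(3) / 3.0
      for _ in range(100):
          new = F @ source
          k_eff = new.sum() / source.sum()
          source = new / new.sum()

      print(f"k_eff ~ {k_eff:.4f}, source shape {np.round(source, 3)}")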

  5. Large electrostatic accelerators

    SciTech Connect

    Jones, C.M.

    1984-01-01

    The increasing importance of energetic heavy ion beams in the study of atomic physics, nuclear physics, and materials science has partially or wholly motivated the construction of a new generation of large electrostatic accelerators designed to operate at terminal potentials of 20 MV or above. In this paper, the author briefly discusses the status of these new accelerators and also discusses several recent technological advances which may be expected to further improve their performance. The paper is divided into four parts: (1) a discussion of the motivation for the construction of large electrostatic accelerators, (2) a description and discussion of several large electrostatic accelerators which have been recently completed or are under construction, (3) a description of several recent innovations which may be expected to improve the performance of large electrostatic accelerators in the future, and (4) a description of an innovative new large electrostatic accelerator whose construction is scheduled to begin next year. Due to time and space constraints, discussion is restricted to consideration of only tandem accelerators.

  6. Beam Breakup Effects in Dielectric Based Accelerators

    SciTech Connect

    Schoessow, P.; Kanareykin, A.; Jing, C.; Kustov, A.; Altmark, A.; Power, J. G.; Gai, W.

    2009-01-22

    The dynamics of the beam in structure-based wakefield accelerators leads to beam stability issues not ordinarily found in other machines. In particular, the high current drive beam in an efficient wakefield accelerator loses a large fraction of its energy in the decelerator structure, resulting in physical emittance growth, increased energy spread, and the possibility of head-tail instability for an off axis beam, all of which can lead to severe reduction of beam intensity. Beam breakup (BBU) effects resulting from parasitic wakefields provide a potentially serious limitation to the performance of dielectric structure based wakefield accelerators as well. We report on experimental and numerical investigation of BBU and its mitigation. The experimental program focuses on BBU measurements at the AWA facility in a number of high gradient and high transformer ratio wakefield devices. New pickup-based beam diagnostics will provide methods for studying parasitic wakefields that are currently unavailable. The numerical part of this research is based on a particle-Green's function beam breakup code we are developing that allows rapid, efficient simulation of beam breakup effects in advanced linear accelerators. The goal of this work is to be able to compare the results of detailed experimental measurements with the accurate numerical results and to design an external FODO channel for the control of the beam in the presence of strong transverse wakefields.

  7. Diagnostics for induction accelerators

    SciTech Connect

    Fessenden, T.J.

    1996-04-01

    The induction accelerator was conceived by N. C. Christofilos and first realized as the Astron accelerator that operated at LLNL from the early 1960s to the end of 1975. This accelerator generated electron beams at energies near 6 MeV with typical currents of 600 Amperes in 400 ns pulses. The Advanced Test Accelerator (ATA) built at Livermore's Site 300 produced 10,000 Ampere beams with pulse widths of 70 ns at energies approaching 50 MeV. Several other electron and ion induction accelerators have been fabricated at LLNL and LBNL. This paper reviews the principal diagnostics developed through efforts by scientists at both laboratories for measuring the current, position, energy, and emittance of beams generated by these high current, short pulse accelerators. Many of these diagnostics are closely related to those developed for other accelerators. However, the very fast and intense current pulses often require special diagnostic techniques and considerations. The physics and design of the more unique diagnostics developed for electron induction accelerators are presented and discussed in detail.

  8. Accelerator Technology Division annual report, FY 1989

    SciTech Connect

    Not Available

    1990-06-01

    This paper discusses: accelerator physics and special projects; experiments and injectors; magnetic optics and beam diagnostics; accelerator design and engineering; radio-frequency technology; accelerator theory and simulation; free-electron laser technology; accelerator controls and automation; and high power microwave sources and effects.

  9. Advances and future needs in particle production and transport code developments

    SciTech Connect

    Mokhov, N.V.; /Fermilab

    2009-12-01

    The next generation of accelerators and ever expanding needs of existing accelerators demand new developments and additions to Monte-Carlo codes, with an emphasis on enhanced modeling of elementary particle and heavy-ion interactions and transport. Challenges arise from extremely high beam energies and beam power, increasing complexity of accelerators and experimental setups, as well as design, engineering and performance constraints. All these put unprecedented requirements on the accuracy of particle production predictions, the capability and reliability of the codes used in planning new accelerator facilities and experiments, the design of machine, target and collimation systems, detectors and radiation shielding and minimization of their impact on environment. Recent advances in widely-used general-purpose all-particle codes are described for the most critical modules such as particle production event generators, elementary particle and heavy ion transport in an energy range which spans up to 17 decades, nuclide inventory and macroscopic impact on materials, and dealing with complex geometry of accelerator and detector structures. Future requirements for developing physics models and Monte-Carlo codes are discussed.

  10. Computer assisted accelerator tuning

    SciTech Connect

    Boyd, J.K.

    1993-04-14

    The challenge of tuning an induction accelerator in real time has been addressed with the new TUNE GUIDE code. The code initializes a beam at a particular position using a tracer-particle representation of the phase space. The particles are transported, using a matrix formulation, element by element along the beamline, assuming that the field of a solenoid or steering element is constant over its length. The other allowed elements are gaps and drift sections. A great deal of effort has been spent programming TUNE GUIDE to operate under the IBM PC Windows 3.1 system. This system features an intuitive, menu-driven interface, which provides the ability to rapidly change beamline component parameter values. Consequently, various accelerator setups can be explored and new values determined in real time while the accelerator is operating. In addition, the code can vary a component parameter over a range and then plot the resulting beam properties, such as radius or centroid position, at a downstream position. Element parameter editing is also included, along with an online, hypertext-oriented help package.
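    To make the matrix formulation concrete (a minimal sketch, not the TUNE GUIDE source), transporting a tracer particle through drifts and a thin focusing element amounts to multiplying 2x2 transfer matrices; the element lengths and focal length below are hypothetical.

      import numpy as np

      def drift(length):
          """Transfer matrix of a field-free drift of given length [m]."""
          return np.array([[1.0, length],
                           [0.0, 1.0]])

      def thin_lens(f):
          """Thin-lens approximation of a focusing element with focal length f [m]."""
          return np.array([[1.0, 0.0],
                           [-1.0 / f, 1.0]])

      # Hypothetical beamline: drift - focusing element - drift.
      beamline = [drift(0.5), thin_lens(0.8), drift(0.5)]

      # Tracer particle: transverse position x [m] and angle x' [rad] at injection.
      state = np.array([2.0e-3, 1.0e-3])
      for element in beamline:
          state = element @ state

      print("downstream position x  [mm]:", 1e3 * state[0])
      print("downstream angle    x' [mrad]:", 1e3 * state[1])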

  11. TRACK : the new beam dynamics code.

    SciTech Connect

    Aseev, V. N.; Ostroumov, P. N.; Lessner, E. S.; Mustapha, B.; Physics

    2005-01-01

    The new ray-tracing code TRACK originally developed to fulfill the special requirements of the RIA accelerator systems is a general beam dynamics code. It is currently being used for the design and simulation of future proton and heavy-ion linacs at several Labs. This paper presents a general description of the code TRACK emphasizing its main new features and recent updates.

  12. Accelerator radioisotope production simulations

    SciTech Connect

    Waters, L.S.; Wilson, W.B.

    1996-12-31

    We have identified 96 radionuclides now being used or under consideration for use in medical applications. Previously, we calculated the production of {sup 99}Mo from enriched and depleted uranium targets at the 800-MeV energy used in the LAMPF accelerator at Los Alamos. We now consider the production of isotopes using lower energy beams, which may become available as a result of new high-intensity spallation target accelerators now being planned. The production of four radionuclides ({sup 7}Be, {sup 67}Cu, {sup 99}Mo, and {sup 195m}Pt) in a simplified proton accelerator target design is being examined. The LAHET, MCNP, and CINDER90 codes were used to model the target, transport a beam of protons and secondary produced particles through the system, and compute the nuclide production from spallation and low-energy neutron interactions. Beam energies of 200 and 400 MeV were used, and several targets were considered for each nuclide.

  13. Laser Plasma Accelerators

    NASA Astrophysics Data System (ADS)

    Malka, Victor

    The continuing development of powerful laser systems has made it possible to extend the interaction of laser beams with matter far into the relativistic domain and to demonstrate new approaches for producing energetic particle beams. The extremely large electric fields, with amplitudes exceeding the TV/m level, that are produced in a plasma medium are of direct relevance to particle acceleration. Since this longitudinal electric field is 10,000 times larger than those produced in conventional radio-frequency cavities, plasma accelerators appear to be very promising for the development of compact accelerators. The remarkable progress in the understanding of laser-plasma interaction physics allows excellent control of electron injection and acceleration. Thanks to these recent achievements, laser plasma accelerators today deliver high quality beams of energetic radiation and particles. These beams have a number of interesting properties, such as shortness, brightness, and spatial quality, and could lend themselves to applications in many fields, including medicine, radio-biology, chemistry, physics and materials science, security (material inspection), and of course accelerator science.
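    For orientation (a standard estimate, not a number specific to the chapter above), the accelerating field available from a plasma wave is commonly bounded by the cold, nonrelativistic wave-breaking field, which depends only on the plasma density n_0:

      E_0 = \frac{m_e c\,\omega_p}{e}, \qquad
      \omega_p = \sqrt{\frac{n_0 e^2}{\varepsilon_0 m_e}}, \qquad
      E_0[\mathrm{V/m}] \simeq 96\,\sqrt{n_0[\mathrm{cm^{-3}}]} .

    For n_0 = 10^18 cm^-3 this gives roughly 100 GV/m, which is the origin of the several-orders-of-magnitude advantage over radio-frequency cavities quoted above.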

  14. Physics.

    PubMed

    Bromley, D A

    1980-07-01

    From massive quarks deep in the hearts of atomic nuclei to the catastrophic collapse of giant stars in the farthest reaches of the universe, from the partial realization of Einstein's dream of a unified theory of the forces of nature to the most practical applications in technology, medicine, and throughout contemporary society, physics continues to have a profound impact on man's view of the universe and on the quality of life. The author argues that the past few years, in terms of new discoveries, new insights, and new questions, have been among the most productive in the history of the field, and puts into context his selection of some of the most important new developments in this fundamental science.

  15. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews uplink coding. The purpose and goals of the briefing are to (1) show a plan for using uplink coding and describe its benefits, (2) define possible solutions and their applicability to different types of uplink, including emergency uplink, (3) concur on the conclusions so that a plan to use the proposed uplink system can be pursued, (4) identify the need for the development of appropriate technology and its infusion in the DSN, and (5) gain advocacy to implement uplink coding in flight projects. Action Item EMB04-1-14: Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).

  16. Wake field acceleration experiments

    SciTech Connect

    Simpson, J.D.

    1988-01-01

    Where and how will wake field acceleration devices find use, other than, possibly, in accelerators for high energy physics? I don't know that this can be responsibly answered at this time. What I can do is describe some recent results from an ongoing experimental program at Argonne which support the idea that wake field techniques and devices are potentially important for future accelerators. Perhaps this will spawn expanded interest and even new ideas for the use of this new technology. The Argonne program, and in particular the Advanced Accelerator Test Facility (AATF), has been reported in several fairly recent papers and reports. But because this is a substantially new audience for the subject, I will include a brief review of the program and the facility before describing experiments. 10 refs., 7 figs.

  17. Doubled Color Codes

    NASA Astrophysics Data System (ADS)

    Bravyi, Sergey

    Combining protection from noise and computational universality is one of the biggest challenges in fault-tolerant quantum computing. Topological stabilizer codes such as the 2D surface code can tolerate a high level of noise, but implementing logical gates, especially non-Clifford ones, requires a prohibitively large overhead due to the need for state distillation. In this talk I will describe a new family of 2D quantum error correcting codes that enable a transversal implementation of all logical gates required for universal quantum computing. Transversal logical gates (TLG) are encoded operations that can be realized by applying some single-qubit rotation to each physical qubit. TLG are highly desirable since they introduce no overhead and do not spread errors. It has been known that a quantum code can have only a finite number of TLGs, which rules out computational universality. Our scheme circumvents this no-go result by combining TLGs of two different quantum codes using the gauge-fixing method pioneered by Paetznick and Reichardt. The first code, closely related to the 2D color code, enables a transversal implementation of all single-qubit Clifford gates such as the Hadamard gate and the π / 2 phase shift. The second code, which we call a doubled color code, provides a transversal T-gate, where T is the π / 4 phase shift. The Clifford+T gate set is known to be computationally universal. The two codes can be laid out on the honeycomb lattice with two qubits per site such that the code conversion requires parity measurements for six-qubit Pauli operators supported on faces of the lattice. I will also describe numerical simulations of logical Clifford+T circuits encoded by the distance-3 doubled color code. Based on joint work with Andrew Cross.
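    As a small numerical illustration of the transversal-gate notion used above (a generic sketch, not the doubled color code itself), a transversal single-qubit gate on an n-qubit block is simply the tensor product of the same one-qubit rotation on every physical qubit, so it cannot spread an error from one qubit to another; the 3-qubit block below is hypothetical.

      import numpy as np
      from functools import reduce

      # Single-qubit gates (illustrative): Hadamard and the T = pi/4 phase shift.
      H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
      T = np.diag([1.0, np.exp(1j * np.pi / 4)])

      def transversal(gate, n_qubits):
          """Tensor product of the same one-qubit gate on each physical qubit."""
          return reduce(np.kron, [gate] * n_qubits)

      # A transversal T on a (hypothetical) 3-qubit block acts qubit by qubit.
      U = transversal(T, 3)
      print(U.shape)   # (8, 8)

      # Qubit-wise action means a single-qubit error stays a single-qubit error:
      X0 = reduce(np.kron, [np.array([[0, 1], [1, 0]]), np.eye(2), np.eye(2)])
      propagated = U @ X0 @ U.conj().T
      print(np.count_nonzero(np.round(propagated, 12)))  # still supported on qubit 0 only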

  18. Current trends in non-accelerator particle physics: 1, Neutrino mass and oscillation. 2, High energy neutrino astrophysics. 3, Detection of dark matter. 4, Search for strange quark matter. 5, Magnetic monopole searches

    SciTech Connect

    He, Yudong

    1995-07-01

    This report is a compilation of papers reflecting current trends in non-accelerator particle physics, corresponding to talks that its author was invited to present at the Workshop on Tibet Cosmic Ray Experiment and Related Physics Topics held in Beijing, China, April 4-13, 1995. The papers are entitled 'Neutrino Mass and Oscillation', 'High Energy Neutrino Astrophysics', 'Detection of Dark Matter', 'Search for Strange Quark Matter', and 'Magnetic Monopole Searches'. The report is introduced by a survey of the field and a brief description of each of the author's papers.

  19. Symposium report on frontier applications of accelerators

    SciTech Connect

    Parsa, Z.

    1993-09-28

    This report contains viewgraph material on the following topics: Electron-Positron Linear Colliders; Unconventional Colliders; Prospects for UVFEL; Accelerator Based Intense Spallation; Neutron Sources; and B Physics at Hadron Accelerators with RHIC as an Example.

  20. Plasma Wakefield Acceleration and FACET - Facilities for Accelerator Science and Experimental Test Beams at SLAC

    ScienceCinema

    Andrei Seryi

    2016-07-12

    Plasma wakefield acceleration is one of the most promising approaches to advancing accelerator technology. This approach offers a potential 1,000-fold or more increase in acceleration over a given distance, compared to existing accelerators. FACET, enabled by Recovery Act funds, will study plasma acceleration using short, intense pulses of electrons and positrons. In this lecture, the physics of plasma acceleration and features of FACET will be presented.

  1. The neutrino electron accelerator

    SciTech Connect

    Shukla, P.K.; Stenflo, L.; Bingham, R.; Bethe, H.A.; Dawson, J.M.; Mendonca, J.T.

    1998-01-01

    It is shown that a wake of electron plasma oscillations can be created by the nonlinear ponderomotive force of an intense neutrino flux. The electrons trapped in the plasma wakefield will be accelerated to high energies. Such processes may be important in supernovas and pulsars. {copyright} {ital 1998 American Institute of Physics.}

  2. Energy saver prototype accelerating resonator

    SciTech Connect

    Kerns, Q.; May, M.; Miller, H.W.; Reid, J.; Turkot, F.; Webber, R.; Wildman, D.

    1981-06-01

    A fixed frequency rf accelerating resonator has been built and tested for the Fermilab Energy Saver. The design parameters and prototype resonator test results are given. The resonator features a high-permeability nickel alloy resistor, which damps unwanted modes, and corona rolls designed with the aid of the computer code SUPERFISH. In bench measurements, the prototype resonator has achieved peak accelerating voltages of 500 kV for a 1% duty cycle and cw operation at 360 kV. 4 refs.

  3. The Brookhaven National Laboratory Accelerator Test Facility

    SciTech Connect

    Batchelor, K.

    1992-09-01

    The Brookhaven National Laboratory Accelerator Test Facility comprises a 50 MeV traveling wave electron linear accelerator utilizing a high gradient, photo-excited, radiofrequency electron gun as an injector and an experimental area for study of new acceleration methods or advanced radiation sources using free electron lasers. Early operation of the linear accelerator system, including calculated and measured beam parameters, is presented together with the experimental program for accelerator physics and free electron laser studies.

  4. The Brookhaven National Laboratory Accelerator Test Facility

    SciTech Connect

    Batchelor, K.

    1992-01-01

    The Brookhaven National Laboratory Accelerator Test Facility comprises a 50 MeV traveling wave electron linear accelerator utilizing a high gradient, photo-excited, radiofrequency electron gun as an injector and an experimental area for study of new acceleration methods or advanced radiation sources using free electron lasers. Early operation of the linear accelerator system, including calculated and measured beam parameters, is presented together with the experimental program for accelerator physics and free electron laser studies.

  5. Introductory Physics Experiments Using the Wiimote

    NASA Astrophysics Data System (ADS)

    Somers, William; Rooney, Frank; Ochoa, Romulo

    2009-03-01

    The Wii, a video game console, is a very popular device, with millions of units sold worldwide over the past two years. Although computationally it is not a powerful machine, to a physics educator its most important components can be its controllers. The Wiimote (or remote) controller contains three accelerometers, an infrared detector, and Bluetooth connectivity at a relatively low price. Thanks to available open source code, any PC with Bluetooth capability can detect the information sent out by the Wiimote. We have designed several experiments for introductory physics courses that make use of the accelerometers and Bluetooth connectivity. We have adapted the Wiimote to measure the variable acceleration in simple harmonic motion, the centripetal and tangential accelerations in circular motion, and the accelerations generated when students lift weights. We present the results of our experiments and compare them with those obtained when using motion and/or force sensors.
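    As a minimal companion to the simple-harmonic-motion experiment described above (illustrative only; the amplitude, frequency, and sampling rate are made up, and no real Wiimote/Bluetooth interface is shown), the acceleration the accelerometer should report along the oscillation axis can be predicted and compared with the logged data:

      import numpy as np

      # Hypothetical SHM parameters for a mass on a spring holding the Wiimote.
      amplitude = 0.10          # m
      frequency = 1.2           # Hz
      omega = 2 * np.pi * frequency

      # Expected acceleration along the oscillation axis, a(t) = -A w^2 sin(w t),
      # sampled at an assumed 100 Hz accelerometer rate for 5 seconds.
      t = np.arange(0.0, 5.0, 0.01)
      a_expected = -amplitude * omega**2 * np.sin(omega * t)

      print("peak |a| predicted [m/s^2]:", amplitude * omega**2)
      # A logged Wiimote trace could be loaded here and overlaid on a_expected.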

  6. Assess the key physics that underpins high-hydro coupling-efficiency in NDCX-II experiments and high-gain heavy ion direct drive target designs using proven hydro codes like HYDRA

    SciTech Connect

    Barnard, J. J.; Hay, M. J.; Logan, B. G.; Ng, S. F.; Perkins, L. J.; Veitzer, S.; Yu, S. S.

    2010-07-01

    also used for 1D (planar) and 2D (r,z) simulations of potential experiments. We have also explored whether similar physics could be studied using an energy ramp (i.e., a velocity tilt) rather than two separate pulses. We have shown that an optimum occurs in the macropulse duration (with fixed velocity tilt) that maximizes the shock strength. In the area of IFE target design we have continued to explore direct drive targets composed of deuterium-tritium fuel and ablator layers. We have extended our previous target designs at 0.44 MJ drive energy, gain 50 (50 MeV foot, 500 MeV main pulse, Rb ion, which requires a large number of beams due to a high beam space charge constraint) to a power plant scale 3.7 MJ drive energy, gain {approx}150 (220 MeV foot, 2.2 GeV main pulse, Hg ion) that eases requirements on the accelerator. We have studied the effects of two important design choices on ICF target performance. We have shown that increasing the number of foot pulses may reduce the target's in-flight adiabat and consequently improve its compressibility and fusion yield. As in the case of laser drive, the first three shocks are the most important to the target's performance, with additional shocks contributing only marginally to compression and burn. We have also demonstrated that ion range lengthening during the main pulse can further reduce the target adiabat and improve the efficiency with which beam energy is coupled into the target. (Ion range lengthening using two different kinetic energies for the foot and main pulse has previously proven effective in the design of high gain targets).

  7. New directions in linear accelerators

    SciTech Connect

    Jameson, R.A.

    1984-01-01

    Current work on linear particle accelerators is placed in historical and physics contexts, and applications driving the state of the art are discussed. Future needs and the ways they may force development are outlined in terms of exciting R and D challenges presented to today's accelerator designers. 23 references, 7 figures.

  8. Proceedings of the 1987 IEEE particle accelerator conference: Volume 2

    SciTech Connect

    Lindstrom, E.R.; Taylor, L.S.

    1987-01-01

    This report contains papers from the IEEE particle accelerator conference. This second volume of three covers the following main topics: Instrumentation and control, accelerators for medium energies and nuclear physics, high current accelerators, and beam dynamics. (LSP)

  9. Sharing code.

    PubMed

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.

  10. Accelerated testing of space batteries

    NASA Technical Reports Server (NTRS)

    Mccallum, J.; Thomas, R. E.; Waite, J. H.

    1973-01-01

    An accelerated life test program for space batteries is presented that fully satisfies empirical, statistical, and physical criteria for validity. The program includes thermal and other nonmechanical stress analyses as well as mechanical stress, strain, and rate of strain measurements.

  11. Wakefield accelerators

    SciTech Connect

    Simpson, J.D.

    1990-01-01

    The search for new methods to accelerate particle beams to high energy using high gradients has resulted in a number of candidate schemes. One of these, wakefield acceleration, has been the subject of considerable R&D in recent years. This effort has resulted in successful proof-of-principle experiments and in increased understanding of many of the practical aspects of the technique. Some wakefield basics plus the status of existing and proposed experimental work are discussed, along with speculations on the future of wakefield acceleration. 10 refs., 6 figs.

  12. LINEAR ACCELERATOR

    DOEpatents

    Colgate, S.A.

    1958-05-27

    An improvement is presented in linear accelerators for charged particles with respect to the stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and angularly introducing the beam of particles into the field. The result of the foregoing is to achieve a beam which spirals about the axis of the acceleration path. The combination of the electric fields and the angular motion of the particles cooperate to provide a stable and focused particle beam.

  13. ION ACCELERATOR

    DOEpatents

    Bell, J.S.

    1959-09-15

    An arrangement for the drift tubes in a linear accelerator is described whereby each drift tube acts to shield the particles from the influence of the accelerating field and focuses the particles passing through the tube. In one embodiment the drift tube is split longitudinally into quadrants supported along the axis of the accelerator by webs from a yoke, the quadrants, webs, and yoke being of magnetic material. A magnetic focusing action is produced by energizing a winding on each web to set up a magnetic field between adjacent quadrants. In the other embodiment the quadrants are electrically insulated from each other and have opposite polarity voltages on adjacent quadrants to provide an electric focusing field for the particles, with the quadrants spaced sufficiently close to shield the particles within the tube from the accelerating electric field.

  14. Acceleration switch

    DOEpatents

    Abbin, J.P. Jr.; Devaney, H.F.; Hake, L.W.

    1979-08-29

    The disclosure relates to an improved integrating acceleration switch of the type having a mass suspended within a fluid filled chamber, with the motion of the mass initially opposed by a spring and subsequently not so opposed.

  15. Acceleration switch

    DOEpatents

    Abbin, Jr., Joseph P.; Devaney, Howard F.; Hake, Lewis W.

    1982-08-17

    The disclosure relates to an improved integrating acceleration switch of the type having a mass suspended within a fluid filled chamber, with the motion of the mass initially opposed by a spring and subsequently not so opposed.

  16. Accelerators for America's Future

    NASA Astrophysics Data System (ADS)

    Bai, Mei

    2016-03-01

    The particle accelerator, a powerful tool to energize beams of charged particles to a desired speed and energy, has been the workhorse for investigating the fundamental structure of matter and the fundamental laws of nature. The best known examples are the 2-mile long Stanford Linear Accelerator at SLAC, the high energy proton and anti-proton collider Tevatron at Fermilab, and the Large Hadron Collider currently under operation at CERN. During less than a century of development of accelerator science and technology that led to a dazzling list of discoveries, particle accelerators have also found various applications beyond particle and nuclear physics research, and have become an indispensable part of the economy. Today, one can find a particle accelerator at almost every corner of our lives, ranging from the x-ray machine at airport security to radiation diagnostics and therapy in hospitals. This presentation will give a brief introduction to the applications of this powerful tool in fundamental research as well as in industry. Challenges in accelerator science and technology will also be briefly presented.

  17. LINEAR ACCELERATOR

    DOEpatents

    Christofilos, N.C.; Polk, I.J.

    1959-02-17

    Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.

  18. High field gradient particle accelerator

    DOEpatents

    Nation, J.A.; Greenwald, S.

    1989-05-30

    A high electric field gradient electron accelerator utilizing short duration, microwave radiation, and capable of operating at high field gradients for high energy physics applications or at reduced electric field gradients for high average current intermediate energy accelerator applications is disclosed. Particles are accelerated in a smooth bore, periodic undulating waveguide, wherein the period is so selected that the particles slip an integral number of cycles of the r.f. wave every period of the structure. This phase step of the particles produces substantially continuous acceleration in a traveling wave without transverse magnetic or other guide means for the particle. 10 figs.

  19. High field gradient particle accelerator

    DOEpatents

    Nation, John A.; Greenwald, Shlomo

    1989-01-01

    A high electric field gradient electron accelerator utilizing short duration, microwave radiation, and capable of operating at high field gradients for high energy physics applications or at reduced electric field gradients for high average current intermediate energy accelerator applications. Particles are accelerated in a smooth bore, periodic undulating waveguide, wherein the period is so selected that the particles slip an integral number of cycles of the r.f. wave every period of the structure. This phase step of the particles produces substantially continuous acceleration in a traveling wave without transverse magnetic or other guide means for the particle.

  20. Microwave inverse Cerenkov accelerator

    NASA Astrophysics Data System (ADS)

    Zhang, T. B.; Marshall, T. C.; LaPointe, M. A.; Hirshfield, J. L.

    1997-03-01

    A Microwave Inverse Cerenkov Accelerator (MICA) is currently under construction at the Yale Beam Physics Laboratory. The accelerating structure in MICA consists of an axisymmetric dielectrically lined waveguide. For the injection of 6 MeV microbunches from a 2.856 GHz RF gun, and subsequent acceleration by the TM01 fields, particle simulation studies predict that an acceleration gradient of 6.3 MV/m can be achieved with a traveling-wave power of 15 MW applied to the structure. Synchronous injection into a narrow phase window is shown to allow trapping of all injected particles. The RF fields of the accelerating structure are shown to provide radial focusing, so that longitudinal and transverse emittance growth during acceleration is small, and that no external magnetic fields are required for focusing. For 0.16 nC, 5 psec microbunches, the normalized emittance of the accelerated beam is predicted to be less than 5πmm-mrad. Experiments on sample alumina tubes have been conducted that verify the theoretical dispersion relation for the TM01 mode over a two-to-one range in frequency. No excitation of axisymmetric or non-axisymmetric competing waveguide modes was observed. High power tests showed that tangential electric fields at the inner surface of an uncoated sample of alumina pipe could be sustained up to at least 8.4 MV/m without breakdown. These considerations suggest that a MICA test accelerator can be built to examine these predictions using an available RF power source, 6 MeV RF gun and associated beam line.

  1. Numerical Verification of the Power Transfer and Wakefield Coupling in the Clic Two-Beam Accelerator

    SciTech Connect

    Candel, Arno; Li, Z.; Ng, C.; Rawat, V.; Schussman, G.; Ko, K.; Syratchev, I.; Grudiev, A.; Wuensch, W.; /CERN

    2011-08-19

    The Compact Linear Collider (CLIC) provides a path to a multi-TeV accelerator to explore the energy frontier of High Energy Physics. Its two-beam accelerator (TBA) concept envisions complex 3D structures, which must be modeled to high accuracy so that simulation results can be directly used to prepare CAD drawings for machining. The required simulations include not only the fundamental mode properties of the accelerating structures but also the Power Extraction and Transfer Structure (PETS), as well as the coupling between the two systems. Time-domain simulations will be performed to understand pulse formation, wakefield damping, fundamental power transfer and wakefield coupling in these structures. Applying SLAC's parallel finite element code suite, these large-scale problems will be solved on some of the largest supercomputers available. The results will help to identify potential issues and provide new insights on the design, leading to further improvements on the novel two-beam accelerator scheme.

  2. Some introductory formalizations on the affine Hilbert spaces model of the origin of life. I. On quantum mechanical measurement and the origin of the genetic code: a general physical framework theory.

    PubMed

    Balázs, András

    2006-08-01

    A physical (affine Hilbert spaces) frame is developed for the discussion of the interdependence of the problem of the origin (symbolic assignment) of the genetic code and a possible endophysical (a kind of "internal") quantum measurement in an explicit way, following the general considerations of Balázs (Balázs, A., 2003. BioSystems 70, 43-54; Balázs, A., 2004a. BioSystems 73, 1-11). Using the Everett (a dynamic) interpretation of quantum mechanics, both the individual code assignment and the concatenated linear symbolism are discussed. It is concluded that there arises a skewed quantal probability field, with a natural dynamic non-linearity in codon assignment within the physical model adopted (essentially corresponding to a much discussed biochemical frame of self-catalyzed binding (charging) of tRNA-like proto-RNAs (ribozymes) with amino acids). This dynamic specific molecular complex assumption of individual code assignment, and the divergence of the code in relation to symbol concatenation, are discussed: our frame supports the former and interprets the latter as single-type codon (triplet), also unambiguous and extended assignment, selection in molecular evolution, corresponding to convergence towards the fixed point of the internal dynamics of measurement, either in a protein- or RNA-world. In this respect, the general physical consequence is the introduction of a fourth rank semidiagonal energy tensor (see also Part II) ruling the internal dynamics as a non-linear, in principle second-order, one. It is inferred, as a summary, that if the problem under discussion could be expressed by the concepts of the Copenhagen interpretation of quantum mechanics in some yet not quite specified way, the matter would be particularly interesting with respect to both the origin of life and quantum mechanics, as a dynamically supported natural measurement-theoretical split between matter ("hardware") and (internal) symbolism ("software") aspects of living matter.

  3. Some introductory formalizations on the affine Hilbert spaces model of the origin of life. I. On quantum mechanical measurement and the origin of the genetic code: a general physical framework theory.

    PubMed

    Balázs, András

    2006-08-01

    A physical (affine Hilbert spaces) frame is developed for the discussion of the interdependence of the problem of the origin (symbolic assignment) of the genetic code and a possible endophysical (a kind of "internal") quantum measurement in an explicit way, following the general considerations of Balázs (Balázs, A., 2003. BioSystems 70, 43-54; Balázs, A., 2004a. BioSystems 73, 1-11). Using the Everett (a dynamic) interpretation of quantum mechanics, both the individual code assignment and the concatenated linear symbolism are discussed. It is concluded that there arises a skewed quantal probability field, with a natural dynamic non-linearity in codon assignment within the physical model adopted (essentially corresponding to a much discussed biochemical frame of self-catalyzed binding (charging) of tRNA-like proto-RNAs (ribozymes) with amino acids). This dynamic specific molecular complex assumption of individual code assignment, and the divergence of the code in relation to symbol concatenation, are discussed: our frame supports the former and interprets the latter as single-type codon (triplet), also unambiguous and extended assignment, selection in molecular evolution, corresponding to convergence towards the fixed point of the internal dynamics of measurement, either in a protein- or RNA-world. In this respect, the general physical consequence is the introduction of a fourth rank semidiagonal energy tensor (see also Part II) ruling the internal dynamics as a non-linear, in principle second-order, one. It is inferred, as a summary, that if the problem under discussion could be expressed by the concepts of the Copenhagen interpretation of quantum mechanics in some yet not quite specified way, the matter would be particularly interesting with respect to both the origin of life and quantum mechanics, as a dynamically supported natural measurement-theoretical split between matter ("hardware") and (internal) symbolism ("software") aspects of living matter.

  4. Acceleration schedules for a recirculating heavy-ion accelerator

    SciTech Connect

    Sharp, W.M.; Grote, D.P.

    2002-05-01

    Recent advances in solid-state switches have made it feasible to design programmable, high-repetition-rate pulsers for induction accelerators. These switches could lower the cost of recirculating induction accelerators, such as the ''small recirculator'' at Lawrence Livermore National Laboratory (LLNL), by substantially reducing the number of induction modules. Numerical work is reported here to determine what effects the use of fewer pulsers at higher voltage would have on the beam quality of the LLNL small recirculator. Lattices with different numbers of pulsers are examined using the fluid/envelope code CIRCE, and several schedules for acceleration and compression are compared for each configuration. For selected schedules, the phase-space dynamics is also studied using the particle-in-cell code WARP3d.

  5. The Accelerator Markup Language and the Universal Accelerator Parser

    SciTech Connect

    Sagan, D.; Forster, M.; Bates, D.A.; Wolski, A.; Schmidt, F.; Walker, N.J.; Larrieu, T.; Roblin, Y.; Pelaia, T.; Tenenbaum, P.; Woodley, M.; Reiche, S.; /UCLA

    2006-10-06

    A major obstacle to collaboration on accelerator projects has been the sharing of lattice description files between modeling codes. To address this problem, a lattice description format called Accelerator Markup Language (AML) has been created. AML is based upon the standard eXtensible Markup Language (XML) format; this provides the flexibility for AML to be easily extended to satisfy changing requirements. In conjunction with AML, a software library, called the Universal Accelerator Parser (UAP), is being developed to speed the integration of AML into any program. The UAP is structured to make it relatively straightforward (by giving appropriate specifications) to read and write lattice files in any format. This will allow programs that use the UAP code to read a variety of different file formats. Additionally, this will greatly simplify conversion of files from one format to another. Currently, besides AML, the UAP supports the MAD lattice format.
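    To illustrate the general flavor of an XML-based lattice description (the element and attribute names below are hypothetical, not the actual AML schema, and this is not the UAP library), a few lines of Python suffice to walk such a file:

      import xml.etree.ElementTree as ET

      # Hypothetical XML lattice fragment in the spirit of an AML-like format.
      lattice_xml = """
      <laboratory>
        <machine name="demo_ring">
          <element name="D1" type="drift"      length="0.50"/>
          <element name="Q1" type="quadrupole" length="0.20" k1="1.2"/>
          <element name="D2" type="drift"      length="0.50"/>
        </machine>
      </laboratory>
      """

      root = ET.fromstring(lattice_xml.strip())
      total_length = 0.0
      for elem in root.iter("element"):
          total_length += float(elem.get("length", 0.0))
          print(elem.get("name"), elem.get("type"), elem.get("length"))
      print("total length [m]:", total_length)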

  6. Magnetically accelerated foils for shock wave experiments

    NASA Astrophysics Data System (ADS)

    Neff, S.; Ford, J.; Wright, S.; Martinez, D.; Plechaty, C.; Presura, R.

    2009-08-01

    Many astrophysical phenomena involve the interaction of a shock wave with an inhomogeneous background medium. Using scaled experiments with inhomogeneous foam targets makes it possible to study relevant physics in the laboratory to better understand the mechanisms of shock compression and to benchmark astrophysical simulation codes. First experiments on Zebra at the Nevada Terawatt Facility (NTF) have demonstrated flyer acceleration to sufficiently high velocities (up to 5 km/s) and that laser shadowgraphy can image sound fronts in transparent targets. Based on this, we designed an optimized setup to improve the flyer parameters (higher speed and mass) to create shock waves in transparent media. Once x-ray backlighting with the Leopard laser at NTF is operational, we will switch to foam targets with parameters relevant for laboratory astrophysics.

  7. Beamlets from stochastic acceleration.

    PubMed

    Perri, Silvia; Carbone, Vincenzo

    2008-09-01

    We investigate the dynamics of a realization of the stochastic Fermi acceleration mechanism. The model consists of test particles moving between two oscillating magnetic clouds and differs from the usual Fermi-Ulam model in two ways. (i) Particles can penetrate inside clouds before being reflected. (ii) Particles can radiate a fraction of their energy during the process. Since the Fermi mechanism is at work, particles are stochastically accelerated, even in the presence of the radiated energy. Furthermore, due to a kind of resonance between particles and oscillating clouds, the probability density function of particles is strongly modified, thus generating beams of accelerated particles rather than a translation of the whole distribution function to higher energy. This simple mechanism could account for the presence of beamlets in some space plasma physics situations.
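    A minimal sketch of stochastic Fermi acceleration in the Fermi-Ulam spirit (not the authors' model, which adds cloud penetration and radiative losses; all numbers are illustrative): a particle bouncing between a fixed and an oscillating wall gains or loses speed at each collision depending on the wall's instantaneous velocity.

      import numpy as np

      rng = np.random.default_rng(0)

      # Simplified Fermi-Ulam map: at each wall collision the particle speed
      # changes by twice the (random-phase) wall velocity; illustrative units.
      u_wall = 0.05          # wall velocity amplitude
      n_particles = 5000
      n_collisions = 2000

      v = np.full(n_particles, 1.0)
      for _ in range(n_collisions):
          phase = rng.uniform(0.0, 2.0 * np.pi, n_particles)
          v = np.abs(v + 2.0 * u_wall * np.sin(phase))   # head-on vs. overtaking collisions

      print("initial speed: 1.0")
      print("mean speed after collisions:", v.mean())
      print("rms spread:", v.std())   # stochastic (second-order Fermi) heating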

  8. Hardware Accelerated Simulated Radiography

    SciTech Connect

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S; Frank, R

    2005-04-12

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32 bit floating point texture capabilities to obtain validated solutions to the radiative transport equation for X-rays. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedra that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester. We show that the hardware accelerated solution is faster than the current technique used by scientists.
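    The absorption-only regime mentioned above reduces, per detector ray, to a discretized Beer-Lambert attenuation integral; a minimal sketch follows (illustrative opacities and path lengths, not the hardware-accelerated projection algorithm itself):

      import numpy as np

      def attenuate(i0, mu, ds):
          """Absorption-only radiative transport along one ray:
          I = I0 * exp(-sum(mu_i * ds_i)) over the cells crossed by the ray."""
          return i0 * np.exp(-np.sum(mu * ds))

      # Hypothetical ray crossing four cells of a hexahedral mesh.
      mu = np.array([0.2, 1.5, 0.8, 0.1])   # opacity per cell [1/cm]
      ds = np.array([0.5, 0.3, 0.4, 0.6])   # path length in each cell [cm]

      print("transmitted intensity fraction:", attenuate(1.0, mu, ds))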

  9. Elementary particle physics

    NASA Technical Reports Server (NTRS)

    Perkins, D. H.

    1986-01-01

    Elementary particle physics is discussed. Status of the Standard Model of electroweak and strong interactions; phenomena beyond the Standard Model; new accelerator projects; and possible contributions from non-accelerator experiments are examined.

  10. Pulsed Plasma Accelerator Modeling

    NASA Technical Reports Server (NTRS)

    Goodman, M.; Kazeminezhad, F.; Owens, T.

    2009-01-01

    This report presents the main results of the modeling task of the PPA project. The objective of this task is to make major progress towards developing a new computational tool with new capabilities for simulating cylindrically symmetric 2.5-dimensional (2.5-D) PPAs. This tool may be used for designing, optimizing, and understanding the operation of PPAs and other pulsed power devices. The foundation for this task is the 2-D, cylindrically symmetric, magnetohydrodynamic (MHD) code PCAPPS (Princeton Code for Advanced Plasma Propulsion Simulation). PCAPPS was originally developed by Sankaran (2001, 2005) to model Lithium Lorentz Force Accelerators (LLFAs), which are electrode-based devices typically operated in continuous mode. The present task extends PCAPPS by adding an applied magnetic field to the model and by implementing a first-principles, self-consistent algorithm to couple the plasma to the power circuit that drives the plasma dynamics.

  11. Figuring the Acceleration of the Simple Pendulum

    ERIC Educational Resources Information Center

    Lieberherr, Martin

    2011-01-01

    The centripetal acceleration has been known since Huygens' (1659) and Newton's (1684) time. The physics to calculate the acceleration of a simple pendulum has been around for more than 300 years, and a fairly complete treatise has been given by C. Schwarz in this journal. But sentences like "the acceleration is always directed towards the…
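    For reference (a standard result consistent with the treatment cited above, with theta_0 the release angle of a pendulum of length L swinging from rest), the tangential and centripetal components combine into the total acceleration magnitude:

      a_t = g\sin\theta, \qquad
      a_c = \frac{v^2}{L} = 2g\left(\cos\theta - \cos\theta_0\right), \qquad
      a = \sqrt{a_t^2 + a_c^2} = g\sqrt{\sin^2\theta + 4\left(\cos\theta - \cos\theta_0\right)^2}.

    The centripetal term follows from energy conservation between the release angle and the instantaneous angle, which is why it vanishes at release and dominates at the bottom of the swing.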

  12. W.K.H. Panofsky Prize in Experimental Particle Physics: The design, construction and performance of the B Factory accelerator facilities, PEP-II and KEKB

    NASA Astrophysics Data System (ADS)

    Dorfan, Jonathan

    2016-03-01

    The discovery and elucidation of CP violation in the B-meson system presented daunting challenges for the accelerator and detector facilities. This talk discusses how these challenges were met and overcome in the electron-positron colliding-beam accelerator facilities PEP-II (at SLAC) and KEKB (at KEK). The key challenge was to produce unprecedentedly large numbers of B-mesons in a geometry that provided high-statistics, low-background samples of decays to CP eigenstates. This was realized with asymmetric collisions at the Υ(4S) at peak luminosities in excess of 3 × 10^33 /sq. cm/sec. Specialized optics were developed to generate efficient, low background, multi-bunch collisions in an energy-asymmetric collision geometry. Novel technologies for the RF, vacuum and feedback systems permitted the storage of multi-amp, multi-bunch beams of electrons and positrons, thereby generating high peak luminosities. Accelerator uptimes greater than 95 percent, combined with high-intensity injection systems, ensured large integrated luminosity. Both facilities rapidly attained their design specifications and ultimately far exceeded the projected performance expectations for both peak and integrated luminosity.

  13. Accelerators for Intensity Frontier Research

    SciTech Connect

    Derwent, Paul; /Fermilab

    2012-05-11

    In 2008, the Particle Physics Project Prioritization Panel identified three frontiers for research in high energy physics, the Energy Frontier, the Intensity Frontier, and the Cosmic Frontier. In this paper, I will describe how Fermilab is configuring and upgrading the accelerator complex, prior to the development of Project X, in support of the Intensity Frontier.

  14. Particle acceleration

    NASA Technical Reports Server (NTRS)

    Vlahos, L.; Machado, M. E.; Ramaty, R.; Murphy, R. J.; Alissandrakis, C.; Bai, T.; Batchelor, D.; Benz, A. O.; Chupp, E.; Ellison, D.

    1986-01-01

    Data are compiled from the Solar Maximum Mission and Hinotori satellites, particle detectors in several satellites, ground based instruments, and balloon flights in order to answer fundamental questions relating to: (1) the requirements for the coronal magnetic field structure in the vicinity of the energization source; (2) the height (above the photosphere) of the energization source; (3) the time of energization; (4) the transition between coronal heating and flares; (5) evidence for purely thermal, purely nonthermal and hybrid type flares; (6) the time characteristics of the energization source; (7) whether every flare accelerates protons; (8) the location of the interaction site of the ions and relativistic electrons; (9) the energy spectra for ions and relativistic electrons; (10) the relationship between particles at the Sun and in interplanetary space; (11) evidence for more than one acceleration mechanism; (12) whether there is a single mechanism that will accelerate particles to all energies and also heat the plasma; and (13) how fast the existing mechanisms accelerate electrons up to several MeV and ions to 1 GeV.

  15. Accelerated Achievement

    ERIC Educational Resources Information Center

    Ford, William J.

    2010-01-01

    This article focuses on the accelerated associate degree program at Ivy Tech Community College (Indiana) in which low-income students will receive an associate degree in one year. The three-year pilot program is funded by a $2.3 million grant from the Lumina Foundation for Education in Indianapolis and a $270,000 grant from the Indiana Commission…

  16. ACCELERATION INTEGRATOR

    DOEpatents

    Pope, K.E.

    1958-01-01

    This patent relates to an improved acceleration integrator and more particularly to apparatus of this nature which is gyrostabilized. The device may be used to sense the attainment by an airborne vehicle of a predetermined velocity or distance along a given vector path. In its broad aspects, the acceleration integrator utilizes a magnetized element rotatably driven by a synchronous motor and having a cylindrical flux gap, and a restrained eddy-current drag cap disposed to move into the gap. The angular velocity imparted to the rotatable cap shaft is transmitted in a positive manner to the magnetized element through a servo feedback loop. The resultant angular velocity of the cap is proportional to the acceleration of the housing; in this manner, means may be used to measure the velocity and operate switches at a pre-set magnitude. To make the above-described device sensitive to acceleration in only one direction, the magnetized element forms the spinning inertia element of a free gyroscope, and the outer housing functions as a gimbal of the gyroscope.

  17. Plasma accelerator

    DOEpatents

    Wang, Zhehui; Barnes, Cris W.

    2002-01-01

    There has been invented an apparatus for acceleration of a plasma having coaxially positioned, constant diameter, cylindrical electrodes which are modified to converge (for a positive polarity inner electrode and a negatively charged outer electrode) at the plasma output end of the annulus between the electrodes to achieve improved particle flux per unit of power.

  18. Relativistic modeling capabilities in PERSEUS extended MHD simulation code for HED plasmas

    NASA Astrophysics Data System (ADS)

    Hamlin, Nathaniel D.; Seyler, Charles E.

    2014-12-01

    We discuss the incorporation of relativistic modeling capabilities into the PERSEUS extended MHD simulation code for high-energy-density (HED) plasmas, and present the latest hybrid X-pinch simulation results. The use of fully relativistic equations enables the model to remain self-consistent in simulations of such relativistic phenomena as X-pinches and laser-plasma interactions. By suitable formulation of the relativistic generalized Ohm's law as an evolution equation, we have reduced the recovery of primitive variables, a major technical challenge in relativistic codes, to a straightforward algebraic computation. Our code recovers expected results in the non-relativistic limit, and reveals new physics in the modeling of electron beam acceleration following an X-pinch. Through the use of a relaxation scheme, relativistic PERSEUS is able to handle nine orders of magnitude in density variation, making it the first fluid code, to our knowledge, that can simulate relativistic HED plasmas.

  19. Relativistic modeling capabilities in PERSEUS extended MHD simulation code for HED plasmas

    SciTech Connect

    Hamlin, Nathaniel D.; Seyler, Charles E.

    2014-12-15

    We discuss the incorporation of relativistic modeling capabilities into the PERSEUS extended MHD simulation code for high-energy-density (HED) plasmas, and present the latest hybrid X-pinch simulation results. The use of fully relativistic equations enables the model to remain self-consistent in simulations of such relativistic phenomena as X-pinches and laser-plasma interactions. By suitable formulation of the relativistic generalized Ohm’s law as an evolution equation, we have reduced the recovery of primitive variables, a major technical challenge in relativistic codes, to a straightforward algebraic computation. Our code recovers expected results in the non-relativistic limit, and reveals new physics in the modeling of electron beam acceleration following an X-pinch. Through the use of a relaxation scheme, relativistic PERSEUS is able to handle nine orders of magnitude in density variation, making it the first fluid code, to our knowledge, that can simulate relativistic HED plasmas.

  20. EM Structure Based and Vacuum Acceleration

    SciTech Connect

    Colby, E.R.; /SLAC

    2005-09-27

    The importance of particle acceleration may be judged from the number of applications which require some sort of accelerated beam. In addition to accelerator-based high energy physics research, non-academic applications include medical imaging and treatment, structural biology by x-ray diffraction, pulse radiography, cargo inspection, material processing, food and medical instrument sterilization, and so on. Many of these applications are already well served by existing technologies and will profit only marginally from developments in accelerator technology. Other applications are poorly served, such as structural biology, which is conducted at synchrotron radiation facilities, and medical treatment using proton accelerators, the machines for which are rare because they are complex and costly. Developments in very compact, high brightness and high gradient accelerators will change how accelerators are used for such applications, and potentially enable new ones. Physical and technical issues governing structure-based and vacuum acceleration of charged particles are reviewed, with emphasis on practical aspects.

  1. Speech coding

    SciTech Connect

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings and, since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. Hence, from a transmission point of view, digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques, and that is often used interchangeably with speech coding, is the term voice coding. This term is more generic in the sense that the
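    As one concrete, widely known example of waveform-style speech coding (shown purely for illustration; it is not claimed to be the scheme discussed in the record above), mu-law companding compresses the dynamic range of each sample before uniform quantization:

      import numpy as np

      def mu_law_encode(x, mu=255.0):
          """Compand a signal in [-1, 1] with the mu-law characteristic."""
          return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

      def mu_law_decode(y, mu=255.0):
          """Invert the mu-law characteristic back to a signal in [-1, 1]."""
          return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

      # Quantize a companded sine to 8 bits and reconstruct it (illustrative signal).
      t = np.linspace(0.0, 1.0, 8000)
      x = 0.3 * np.sin(2 * np.pi * 200 * t)
      levels = 256
      y = np.round(mu_law_encode(x) * (levels / 2 - 1)) / (levels / 2 - 1)
      x_hat = mu_law_decode(y)
      print("max reconstruction error:", np.max(np.abs(x - x_hat)))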

  2. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: The structural analysis of protein sequences based on the quasi-amino acids code

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Tang, Xu-Qing; Xu, Zhen-Yuan

    2009-01-01

    Proteomics is the study of proteins and their interactions in a cell. With the successful completion of the Human Genome Project comes the post-genome era, in which proteomics technology is emerging. This paper studies the protein molecule from the algebraic point of view. The algebraic system (Σ, +, *) is introduced, where Σ is the set of 64 codons. According to the characteristics of (Σ, +, *), a novel quasi-amino acids code classification method is introduced and the corresponding algebraic operation table over the set ZU of the 16 kinds of quasi-amino acids is established. The internal relation among the quasi-amino acids is revealed. The results show that there exist some very close correlations between the properties of the quasi-amino acids and the codons. All these correlation relationships may play an important part in establishing the logical relationship between codons and the quasi-amino acids during the course of the origination of life. According to Ma F et al (2003 J. Anhui Agricultural University 30 439), the corresponding relation and the excellent properties of the amino acids code are very difficult to observe. The present paper shows that (ZU, ⊕, ⊗) is a field. Furthermore, the operational results display that the codon tga has a different property from the other stop codons. In fact, in the mitochondrial genetic code of human and ox, tga codes for tryptophan and is not a stop codon as in other genetic codes; this is the case reported by Chen W C et al (2002 Acta Biophysica Sinica 18(1) 87). The present theory avoids some inexplicable features of the 20-amino-acid code; in other words, it addresses the problem that 'the 64 codon assignments of mRNA to amino acids is probably completely wrong' proposed by Yang (2006 Progress in Modern Biomedicine 6 3).

  3. Hypoxia in the St. Lawrence Estuary: How a Coding Error Led to the Belief that “Physics Controls Spatial Patterns”

    PubMed Central

    2015-01-01

    Two fundamental sign errors were found in a computer code used for studying the oxygen minimum zone (OMZ) and hypoxia in the Estuary and Gulf of St. Lawrence. These errors invalidate the conclusions drawn from the model, and call into question a proposed mechanism for generating OMZ that challenges classical understanding. The study in question is being cited frequently, leading the discipline in the wrong direction. PMID:26397371

  4. Overview of CSR codes

    NASA Astrophysics Data System (ADS)

    Bassi, G.; Agoh, T.; Dohlus, M.; Giannessi, L.; Hajima, R.; Kabel, A.; Limberg, T.; Quattromini, M.

    2006-02-01

    Coherent synchrotron radiation (CSR) effects play an important role in accelerator physics. CSR effects can be negative (emittance growth in bunch compressors and microbunching instability) or positive (production of CSR in a controlled way). Moreover, CSR is of interest in other fields such as astrophysics. Only a few simple models have been solved analytically. This motivates the development of numerical procedures. In this review article we overview different numerical methods to study CSR effects.

  5. Measurement of Coriolis Acceleration with a Smartphone

    ERIC Educational Resources Information Center

    Shaku, Asif; Kraft, Jakob

    2016-01-01

    Undergraduate physics laboratories seldom have experiments that measure the Coriolis acceleration. This has traditionally been the case owing to the inherent complexities of making such measurements. Articles on the experimental determination of the Coriolis acceleration are few and far between in the physics literature. However, because modern…
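    As a quick numerical companion (illustrative values only, not the smartphone procedure from the article), the Coriolis acceleration seen in a rotating frame is a_Cor = -2 Ω × v:

      import numpy as np

      def coriolis_acceleration(omega, velocity):
          """Coriolis acceleration in a frame rotating with angular velocity omega:
          a = -2 * omega x v (SI units)."""
          return -2.0 * np.cross(omega, velocity)

      # Hypothetical example: a turntable spinning once every 4 s (about z),
      # with an object moving radially at 0.5 m/s (along x) in the rotating frame.
      omega = np.array([0.0, 0.0, 2.0 * np.pi / 4.0])   # rad/s
      v = np.array([0.5, 0.0, 0.0])                     # m/s

      a = coriolis_acceleration(omega, v)
      print("Coriolis acceleration [m/s^2]:", a)
      print("magnitude:", np.linalg.norm(a), "(expected 2*omega*v =",
            2 * np.linalg.norm(omega) * np.linalg.norm(v), ")")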

  6. Centripetal Acceleration: Often Forgotten or Misinterpreted

    ERIC Educational Resources Information Center

    Singh, Chandralekha

    2009-01-01

    Acceleration is a fundamental concept in physics which is taught in mechanics at all levels. Here, we discuss some challenges in teaching this concept effectively when the path along which the object is moving has a curvature and centripetal acceleration is present. We discuss examples illustrating that both physics teachers and students have…

  7. The FLUKA code for space applications: recent developments

    NASA Technical Reports Server (NTRS)

    Andersen, V.; Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.; Lee, K.; Ottolenghi, A.; Pelliccioni, M.; Pinsky, L. S.; Ranft, J.; Roesler, S.; Sala, P. R.; Wilson, T. L.; Townsend, L. W. (Principal Investigator)

    2004-01-01

    The FLUKA Monte Carlo transport code is widely used for fundamental research, radioprotection and dosimetry, hybrid nuclear energy systems and cosmic ray calculations. The validity of its physical models has been benchmarked against a variety of experimental data over a wide range of energies, ranging from accelerator data to cosmic ray showers in the Earth's atmosphere. The code is presently undergoing several developments in order to better fit the needs of space applications. The generation of particle spectra according to up-to-date cosmic ray data, as well as the effects of solar and geomagnetic modulation, have been implemented and already successfully applied to a variety of problems. The implementation of suitable models for heavy ion nuclear interactions has reached an operational stage. At medium/high energy FLUKA uses the DPMJET model. The major task of incorporating heavy ion interactions from a few GeV/n down to the threshold for inelastic collisions is also progressing, and promising results have been obtained using a modified version of the RQMD-2.4 code. This interim solution is now fully operational, while waiting for the development of new models based on the FLUKA hadron-nucleus interaction code, a newly developed QMD code, and the implementation of the Boltzmann master equation theory for low energy ion interactions. © 2004 COSPAR. Published by Elsevier Ltd. All rights reserved.

  8. Transient simulation of ram accelerator flowfields

    NASA Astrophysics Data System (ADS)

    Drabczuk, Randall P.; Rolader, G.; Dash, S.; Sinha, N.; York, B.

    1993-01-01

    This paper describes the development of an advanced computational fluid dynamic (CFD) simulation capability in support of the USAF Armament Directorate ram accelerator research initiative. The state-of-the-art CRAFT computer code has been specialized for high fidelity, transient ram accelerator simulations via inclusion of generalized dynamic gridding, solution adaptive grid clustering, and high pressure thermo-chemistry. Selected ram accelerator simulations are presented that serve to exhibit the CRAFT code capabilities and identify some of the principal research/design issues.

  9. Transient simulation of ram accelerator flowfields

    NASA Astrophysics Data System (ADS)

    Sinha, N.; York, B. J.; Dash, S. M.; Drabczuk, R.; Rolader, G. E.

    1992-10-01

    This paper describes the development of an advanced computational fluid dynamic (CFD) simulation capability in support of the U.S. Air Force Armament Directorate's ram accelerator research initiative. The state-of-the-art CRAFT computer code has been specialized for high fidelity, transient ram accelerator simulations via inclusion of generalized dynamic gridding, solution adaptive grid clustering, high pressure thermochemistry, etc. Selected ram accelerator simulations are presented which serve to exhibit the CRAFT code's capabilities and identify some of the principal research/design issues.

  10. Compact accelerator

    DOEpatents

    Caporaso, George J.; Sampayan, Stephen E.; Kirbie, Hugh C.

    2007-02-06

    A compact linear accelerator having at least one strip-shaped Blumlein module which guides a propagating wavefront between first and second ends and controls the output pulse at the second end. Each Blumlein module has first, second, and third planar conductor strips, with a first dielectric strip between the first and second conductor strips, and a second dielectric strip between the second and third conductor strips. Additionally, the compact linear accelerator includes a high voltage power supply connected to charge the second conductor strip to a high potential, and a switch for switching the high potential in the second conductor strip to at least one of the first and third conductor strips so as to initiate a propagating reverse polarity wavefront(s) in the corresponding dielectric strip(s).
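
    As background for the numbers involved (idealized transmission-line relations, not design data from the patent), a matched Blumlein line delivers roughly the full charging voltage for a pulse length set by the two-way transit time of one dielectric strip; a rough sketch with assumed dimensions:

      import math

      # Idealized Blumlein estimate (textbook relations with assumed numbers,
      # not the patent's design data).
      eps_r    = 2.2        # relative permittivity of the dielectric strips (assumed)
      length   = 0.25       # m, electrical length of the strip line (assumed)
      V_charge = 100e3      # V, charging voltage on the middle conductor (assumed)
      c        = 2.998e8    # m/s

      t_pulse = 2 * length * math.sqrt(eps_r) / c     # two-way transit time
      print(f"output pulse ~{V_charge/1e3:.0f} kV for ~{t_pulse*1e9:.1f} ns into a matched load")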

  11. FAA Smoke Transport Code

    2006-10-27

    FAA Smoke Transport Code, a physics-based Computational Fluid Dynamics tool which couples heat, mass, and momentum transfer, has been developed to provide information on smoke transport in cargo compartments with various geometries and flight conditions. The software package contains a graphical user interface for specification of geometry and boundary conditions, an analysis module for solving the governing equations, and a post-processing tool. The current code was produced by making substantial improvements and additions to a code obtained from a university. The original code was able to compute steady, uniform, isothermal turbulent pressurization. In addition, a preprocessor and postprocessor were added to arrive at the current software package.

  12. High Energy Particle Transport Code System.

    2003-12-17

    Version 00 NMTC/JAM is an upgraded version of the code CCC-694/NMTC-JAERI97, which was developed in 1982 at JAERI and is based on the CCC-161/NMTC code system. NMTC/JAM simulates high energy nuclear reactions and nucleon-meson transport processes. The applicable energy range of NMTC/JAM was extended in principle up to 200 GeV for nucleons and mesons by introducing the high energy nuclear reaction code Jet-AA Microscopic (JAM) for the intra-nuclear cascade part. For the evaporation and fission process, a new model, GEM, can be used to describe light nucleus production from the excited residual nucleus. Following the extension of the applicable energy range, the nucleon-nucleus non-elastic, elastic and differential elastic cross section data were upgraded. In addition, particle transport in a magnetic field was implemented for beam transport calculations. Some new tally functions were added, and the format of input and output data is more user friendly. These new calculation functions and utilities provide a tool to carry out reliable neutronics studies of a large scale target system with complex geometry more accurately and easily than with the previous model. It implements an intranuclear cascade model taking account of in-medium nuclear effects and a preequilibrium calculation model based on the exciton one. For treating the nucleon transport process, the nucleon-nucleus cross sections are revised to those derived from the systematics of Pearlstein. Moreover, the level density parameter derived by Ignatyuk is included as a new option for particle evaporation calculations. A geometry package based on Combinatorial Geometry with a multi-array system and the importance sampling technique is implemented in the code. A tally function is also employed for obtaining such physical quantities as neutron energy spectra, heat deposition and nuclide yield without editing a history file. The code can simulate both the primary spallation reaction and the…

  13. BICEP's acceleration

    SciTech Connect

    Contaldi, Carlo R.

    2014-10-01

    The recent Bicep2 [1] detection of what is claimed to be primordial B-modes opens up the possibility of constraining not only the energy scale of inflation but also the detailed acceleration history that occurred during inflation. In turn this can be used to determine the shape of the inflaton potential V(φ) for the first time, if a single scalar inflaton is assumed to be driving the acceleration. We carry out a Monte Carlo exploration of inflationary trajectories given the current data. Using this method we obtain a posterior distribution of possible acceleration profiles ε(N) as a function of e-fold N and derived posterior distributions of the primordial power spectrum P(k) and potential V(φ). We find that the Bicep2 result, in combination with Planck measurements of total intensity Cosmic Microwave Background (CMB) anisotropies, induces a significant feature in the scalar primordial spectrum at scales k ∼ 10^-3 Mpc^-1. This is in agreement with a previous detection of a suppression in the scalar power [2].
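
    For orientation (standard single-field slow-roll relations, not the paper's reconstruction pipeline), the tensor-to-scalar ratio and scalar tilt follow from the acceleration profile via r ≈ 16ε and n_s - 1 ≈ -2ε - d ln ε/dN; a minimal sketch with an assumed toy ε(N):

      import numpy as np

      # Standard slow-roll estimates from an assumed epsilon(N) profile.
      # Illustrative only; the paper reconstructs epsilon(N) from CMB data.
      N   = np.linspace(0, 60, 601)                 # e-folds after horizon exit of the largest scales
      eps = 0.002 * np.exp(N / 25.0)                # assumed toy acceleration profile

      dlneps_dN = np.gradient(np.log(eps), N)
      r   = 16.0 * eps                              # tensor-to-scalar ratio
      n_s = 1.0 - 2.0 * eps - dlneps_dN             # scalar spectral index

      print(f"at N=0 (largest scales): r = {r[0]:.3f}, n_s = {n_s[0]:.3f}")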

  14. Beam acceleration through proton radio frequency quadrupole accelerator in BARC

    NASA Astrophysics Data System (ADS)

    Bhagwat, P. V.; Krishnagopal, S.; Mathew, J. V.; Singh, S. K.; Jain, P.; Rao, S. V. L. S.; Pande, M.; Kumar, R.; Roychowdhury, P.; Kelwani, H.; Rama Rao, B. V.; Gupta, S. K.; Agarwal, A.; Kukreti, B. M.; Singh, P.

    2016-05-01

    A 3 MeV proton Radio Frequency Quadrupole (RFQ) accelerator has been designed at the Bhabha Atomic Research Centre, Mumbai, India, for the Low Energy High Intensity Proton Accelerator (LEHIPA) programme. The 352 MHz RFQ is built in 4 segments, and in the first phase two segments of the LEHIPA RFQ were commissioned, accelerating a 50 keV, 1 mA pulsed proton beam from the ion source to an energy of 1.24 MeV. The successful operation of the RFQ gave confidence in the physics understanding and technology development that have been achieved, and indicates that the road forward can now be traversed rather more quickly.
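
    For a sense of scale (simple relativistic kinematics applied to the numbers quoted above, not additional results from the paper), the 1.24 MeV protons are still far from relativistic and the 1 mA pulsed beam carries roughly a kilowatt of peak power:

      import math

      # Kinematics and peak power for the quoted LEHIPA first-phase beam.
      T_MeV   = 1.24            # kinetic energy after the first two RFQ segments
      m_p_MeV = 938.272         # proton rest energy
      I_peak  = 1e-3            # A, pulsed beam current

      gamma = 1.0 + T_MeV / m_p_MeV
      beta  = math.sqrt(1.0 - 1.0 / gamma**2)
      print(f"beta = {beta:.4f} (v ~ {beta*2.998e5:.0f} km/s), "
            f"peak beam power = {T_MeV*1e6*I_peak/1e3:.2f} kW")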

  15. Maximal acceleration and radiative processes

    NASA Astrophysics Data System (ADS)

    Papini, Giorgio

    2015-08-01

    We derive the radiation characteristics of an accelerated, charged particle in a model due to Caianiello in which the proper acceleration of a particle of mass m has the upper limit 𝒜_m = 2mc³/ℏ. We find two power laws, one applicable to lower accelerations, the other more suitable for accelerations closer to 𝒜_m and to the related physical singularity in the Ricci scalar. Geometrical constraints and power spectra are also discussed. By comparing the power laws due to the maximal acceleration (MA) with that for particles in gravitational fields, we find that the model of Caianiello allows, in principle, the use of charged particles as tools to distinguish inertial from gravitational fields locally.
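
    Evaluating the quoted limit with standard constants (simple arithmetic, not part of the paper) shows how enormous 𝒜_m is even for the lightest charged particle:

      # Maximal proper acceleration A_m = 2 m c^3 / hbar, evaluated for an electron.
      hbar = 1.0546e-34     # J s
      c    = 2.998e8        # m/s
      m_e  = 9.109e-31      # kg

      A_m = 2.0 * m_e * c**3 / hbar
      print(f"A_m(electron) ~ {A_m:.2e} m/s^2")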

  16. MCNP code

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States which is available in the public domain. The present code is the direct descendant of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids.

  17. QR Codes

    ERIC Educational Resources Information Center

    Lai, Hsin-Chih; Chang, Chun-Yen; Li, Wen-Shiane; Fan, Yu-Lin; Wu, Ying-Tien

    2013-01-01

    This study presents an m-learning method that incorporates Integrated Quick Response (QR) codes. This learning method not only achieves the objectives of outdoor education, but it also increases applications of Cognitive Theory of Multimedia Learning (CTML) (Mayer, 2001) in m-learning for practical use in a diverse range of outdoor locations. When…

  18. The NIMROD Code

    NASA Astrophysics Data System (ADS)

    Schnack, D. D.; Glasser, A. H.

    1996-11-01

    NIMROD is a new code system that is being developed for the analysis of modern fusion experiments. It is being designed from the beginning to make the maximum use of massively parallel computer architectures and computer graphics. The NIMROD physics kernel solves the three-dimensional, time-dependent two-fluid equations with neo-classical effects in toroidal geometry of arbitrary poloidal cross section. The NIMROD system also includes a pre-processor, a grid generator, and a post-processor. User interaction with NIMROD is facilitated by a modern graphical user interface (GUI). The NIMROD project is using Quality Function Deployment (QFD) team management techniques to minimize re-engineering and reduce code development time. This paper gives an overview of the NIMROD project. Operation of the GUI is demonstrated, and the first results from the physics kernel are given.

  19. LFSC - Linac Feedback Simulation Code

    SciTech Connect

    Ivanov, Valentin; /Fermilab

    2008-05-01

    The computer program LFSC (Linac Feedback Simulation Code) is a numerical tool for simulation of beam-based feedback in high performance linacs. The code LFSC is based on an earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successfully used in simulations for the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback on timescales corresponding to 5-100 Hz and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab on the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. A set of input files includes the lattice description (XSIF format) and plain text files with numerical parameters, wake fields, ground motion data, etc. The Matlab environment provides a flexible system for graphical output.
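
    As a generic illustration of pulse-to-pulse beam-based feedback (a toy model, not LFSC's actual algorithms), a proportional corrector that measures the orbit each pulse and applies a fraction of the opposite kick suppresses slow drifts down to a residual set by the gain and the BPM noise; a minimal sketch:

      import numpy as np

      # Toy pulse-to-pulse orbit feedback; not LFSC's algorithm, just the generic idea.
      rng = np.random.default_rng(0)
      gain, n_pulses = 0.3, 200
      drift_per_pulse, bpm_noise = 0.02, 0.01      # mm, assumed ground-motion drift and BPM noise

      offset, corrector = 0.0, 0.0
      history = []
      for _ in range(n_pulses):
          offset += drift_per_pulse                # slow uncorrected drift
          measured = offset + corrector + rng.normal(0.0, bpm_noise)
          corrector -= gain * measured             # proportional correction applied next pulse
          history.append(offset + corrector)

      print(f"rms residual orbit over last 100 pulses: {np.std(history[-100:]):.3f} mm")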

  20. Recent US target-physics-related research in heavy-ion inertial fusion: simulations for tamped targets and for disk experiments in accelerator test facilities

    SciTech Connect

    Mark, J.W.K.

    1982-03-22

    Calculations suggest that experiments relating to disk heating, as well as beam deposition, focusing and transport, can be performed within the context of current design proposals for accelerator test facilities. Since the test facilities have lower ion kinetic energy and beam pulse power than reactor drivers, we achieve high beam intensities at the focal spot by using a short focal distance and properly designed beam optics. In this regard, the low beam emittance of the suggested multi-beam designs is very useful. Possibly even higher focal-spot brightness could be obtained by plasma lenses, which apply external fields to the beam after it is stripped to a higher charge state by passing through a plasma cell. Preliminary results suggest that intensities of approximately 10^13 - 10^14 W/cm^2 are achievable. Given these intensities, deposition experiments with heating of disks to greater than a million degrees Kelvin (100 eV) are expected.
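
    As a sanity check on the quoted range (simple arithmetic with assumed beam parameters, not figures from the report), the focal-spot intensity is just the beam power divided by the spot area:

      import math

      # Focal-spot intensity estimate, I = P / (pi r^2), with assumed illustrative numbers.
      beam_power_W  = 1e11        # 100 GW of beam pulse power on target (assumed)
      spot_radius_m = 0.5e-3      # 0.5 mm focal-spot radius (assumed)

      intensity = beam_power_W / (math.pi * spot_radius_m**2)
      print(f"I ~ {intensity/1e4:.2e} W/cm^2")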

  1. MANTRA: An Integral Reactor Physics Experiment to Infer Actinide Capture Cross-sections from Thorium to Californium with Accelerator Mass Spectrometry

    SciTech Connect

    G. Youinou; C. McGrath; G. Imel; M. Paul; R. Pardo; F. Kondev; M. Salvatores; G. Palmiotti

    2011-08-01

    The principle of the proposed experiment is to irradiate very pure actinide samples in the Advanced Test Reactor at INL and, after a given time, determine the amount of the different transmutation products. The determination of the nuclide densities before and after neutron irradiation will allow inference of effective neutron capture cross-sections. This approach has been used in the past and the novelty of this experiment is that the atom densities of the different transmutation products will be determined using the Accelerator Mass Spectrometry technique at the ATLAS facility located at ANL. It is currently planned to irradiate the following isotopes: 232Th, 235U, 236U, 238U, 237Np, 238Pu, 239Pu, 240Pu, 241Pu, 242Pu, 241Am, 243Am, 244Cm and 248Cm.
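
    To illustrate the inference step in its simplest form (a thin-sample, constant-flux approximation, not the actual MANTRA analysis), the product-to-parent atom ratio measured by AMS is roughly σ_c·φ·t for a single capture channel, so an effective capture cross-section follows directly from the measured ratio; a minimal sketch with assumed numbers:

      # Simplest-case inference of an effective capture cross-section from atom ratios.
      # Assumes a thin sample, constant flux, and negligible burn-out of the product;
      # the real MANTRA analysis is far more detailed.
      ratio_product_to_parent = 2.4e-4        # measured by AMS (assumed value)
      flux  = 2.0e14                          # n/cm^2/s, average neutron flux (assumed)
      t_irr = 1.0e7                           # s, irradiation time, roughly four months (assumed)

      sigma_eff_cm2 = ratio_product_to_parent / (flux * t_irr)
      print(f"effective capture cross-section ~ {sigma_eff_cm2/1e-24:.2f} barn")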

  2. Characterization of the Physical Stability of a Lyophilized IgG1 mAb After Accelerated Shipping-like Stress

    PubMed Central

    Telikepalli, Srivalli; Kumru, Ozan S.; Kim, Jae Hyun; Joshi, Sangeeta B.; O'Berry, Kristin B.; Blake-Haskins, Angela W.; Perkins, Melissa D.; Middaugh, C. Russell; Volkin, David B.

    2014-01-01

    Upon exposure to shaking stress, an IgG1 mAb formulation in both liquid and lyophilized state formed subvisible particles. Since freeze-drying is expected to minimize protein physical instability under these conditions, the extent and nature of aggregate formation in the lyophilized preparation was examined using a variety of particle characterization techniques. The effect of formulation variables such as residual moisture content, reconstitution rate, and reconstitution medium were examined. Upon reconstitution of shake-stressed lyophilized mAb, differences in protein particle size and number were observed by Microflow Digital Imaging (MFI), with the reconstitution medium having the largest impact. Shake-stress had minor effects on the structure of protein within the particles as shown by SDS-PAGE and FTIR analysis. The lyophilized mAb was shake-stressed to different extents and stored for 3 months at different temperatures. Both extent of cake collapse and storage temperature affected the physical stability of the shake-stressed lyophilized mAb upon subsequent storage. These findings demonstrate that physical degradation upon shaking of a lyophilized IgG1 mAb formulation includes not only cake breakage, but also results in an increase in subvisible particles and turbidity upon reconstitution. The shaking-induced cake breakage of the lyophilized IgG1 mAb formulation also resulted in decreased physical stability upon storage. PMID:25522000

  3. Magnetically accelerated foils for shock wave experiments

    NASA Astrophysics Data System (ADS)

    Neff, Stephan; Ford, Jessica; Martinez, David; Plechaty, Christopher; Wright, Sandra; Presura, Radu

    2008-04-01

    The interaction of shock waves with inhomogeneous media is important in many astrophysical problems, e.g. the role of shock compression in star formation. Using scaled experiments with inhomogeneous foam targets makes it possible to study the relevant physics in the laboratory, to better understand the mechanisms of shock compression, and to benchmark astrophysical simulation codes. Experiments with flyer-generated shock waves have been performed on the Z machine at Sandia. The Zebra accelerator at the Nevada Terawatt Facility (NTF) allows for complementary experiments with a high repetition rate. First experiments on Zebra demonstrated flyer acceleration to sufficiently high velocities (around 2 km/s) and showed that laser shadowgraphy can image sound fronts in transparent targets. Based on this, we designed an optimized setup to improve the flyer parameters (higher speed and mass) to create shock waves in transparent media. Once x-ray backlighting with the Leopard laser at NTF is operational, we will switch to foam targets with parameters relevant for laboratory astrophysics.

  4. Toward GPGPU accelerated human electromechanical cardiac simulations

    PubMed Central

    Vigueras, Guillermo; Roy, Ishani; Cookson, Andrew; Lee, Jack; Smith, Nicolas; Nordsletten, David

    2014-01-01

    In this paper, we look at the acceleration of weakly coupled electromechanics using the graphics processing unit (GPU). Specifically, we port to the GPU a number of components of Heart, a CPU-based finite element code developed for simulating multi-physics problems. On the basis of a criterion of computational cost, we implemented on the GPU the ODE and PDE solution steps for the electrophysiology problem and the Jacobian and residual evaluation for the mechanics problem. Performance of the GPU implementation is then compared with single core CPU (SC) execution as well as multi-core CPU (MC) computations with equivalent theoretical performance. Results show that for a human scale left ventricle mesh, GPU acceleration of the electrophysiology problem provided speedups of 164× compared with SC and 5.5× compared with MC for the solution of the ODE model. Speedup of up to 72× compared with SC and 2.6× compared with MC was also observed for the PDE solve. Using the same human geometry, the GPU implementation of the mechanics residual/Jacobian computation provided speedups of up to 44× compared with SC and 2.0× compared with MC. © 2013 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons, Ltd. PMID:24115492

  5. Advanced concepts for acceleration

    SciTech Connect

    Keefe, D.

    1986-07-01

    Selected examples of advanced accelerator concepts are reviewed. Plasma accelerators such as the plasma beat-wave accelerator, the plasma wakefield accelerator, and the plasma grating accelerator are discussed, particularly as examples of concepts for accelerating relativistic electrons or positrons. Also covered are the pulsed electron-beam, pulsed laser accelerator, inverse Cherenkov accelerator, inverse free-electron laser, switched radial-line accelerators, and the two-beam accelerator. Advanced concepts for ion acceleration discussed include the electron ring accelerator, excitation of waves on intense electron beams, and two-wave combinations. (LEW)

  6. Accelerators and the Accelerator Community

    SciTech Connect

    Malamud, Ernest; Sessler, Andrew

    2008-06-01

    In this paper, standing back, looking from afar, and adopting a historical perspective, the field of accelerator science is examined. How it grew, what forces made it what it is, where it is now, and what it is likely to be in the future are the subjects explored. Clearly, a great deal of personal opinion is invoked in this process.

  7. A beamline systems model for Accelerator-Driven Transmutation Technology (ADTT) facilities

    NASA Astrophysics Data System (ADS)

    Todd, Alan M. M.; Paulson, C. C.; Peacock, M. A.; Reusch, M. F.

    1995-09-01

    A beamline systems code that is being developed for Accelerator-Driven Transmutation Technology (ADTT) facility trade studies is described. The overall program is a joint Grumman, G. H. Gillespie Associates (GHGA) and Los Alamos National Laboratory effort. The GHGA Accelerator Systems Model (ASM) has been adopted as the framework on which this effort is based. Relevant accelerator and beam transport models from earlier Grumman systems codes are being adapted to this framework. Preliminary physics and engineering models for each ADTT beamline component have been constructed. Examples noted include a Bridge Coupled Drift Tube Linac (BCDTL) and the accelerator thermal system. A decision has been made to confine the ASM framework principally to beamline modeling, while detailed target/blanket, balance-of-plant and facility costing analysis will be performed externally. An interfacing external balance-of-plant and facility costing model, which will permit the performance of iterative facility trade studies, is under separate development. An ABC (Accelerator Based Conversion) example is used to highlight the present models and capabilities.

  8. A beamline systems model for Accelerator-Driven Transmutation Technology (ADTT) facilities

    SciTech Connect

    Todd, Alan M. M.; Paulson, C. C.; Peacock, M. A.; Reusch, M. F.

    1995-09-15

    A beamline systems code that is being developed for Accelerator-Driven Transmutation Technology (ADTT) facility trade studies is described. The overall program is a joint Grumman, G. H. Gillespie Associates (GHGA) and Los Alamos National Laboratory effort. The GHGA Accelerator Systems Model (ASM) has been adopted as the framework on which this effort is based. Relevant accelerator and beam transport models from earlier Grumman systems codes are being adapted to this framework. Preliminary physics and engineering models for each ADTT beamline component have been constructed. Examples noted include a Bridge Coupled Drift Tube Linac (BCDTL) and the accelerator thermal system. A decision has been made to confine the ASM framework principally to beamline modeling, while detailed target/blanket, balance-of-plant and facility costing analysis will be performed externally. An interfacing external balance-of-plant and facility costing model, which will permit the performance of iterative facility trade studies, is under separate development. An ABC (Accelerator Based Conversion) example is used to highlight the present models and capabilities.

  9. OpenMP for Accelerators

    SciTech Connect

    Beyer, J C; Stotzer, E J; Hart, A; de Supinski, B R

    2011-03-15

    OpenMP [13] is the dominant programming model for shared-memory parallelism in C, C++ and Fortran due to its easy-to-use directive-based style, portability and broad support by compiler vendors. Similar characteristics are needed for a programming model for devices such as GPUs and DSPs that are gaining popularity to accelerate compute-intensive application regions. This paper presents extensions to OpenMP that provide that programming model. Our results demonstrate that a high-level programming model can provide accelerated performance comparable to hand-coded implementations in CUDA.

  10. Laser Absorption and Particle Acceleration at the Critical Surface

    NASA Astrophysics Data System (ADS)

    May, J.; Tonge, J.; Mori, W. B.; Fiuza, F.; Fonseca, R.; Silva, L. O.

    2014-10-01

    Using high intensity lasers (I >= 5×10^19 W/cm^2) to accelerate particles at the critical surface offers the potential to deliver high fluence particle beams into dense matter. Potential applications include Fast Ignition Inertial Confinement Fusion, Radiation Pressure Acceleration, and probing high-density matter for basic plasma research. In order to tailor the beam characteristics of laser conversion efficiency, energy spectrum, beam divergence, and accelerated species (ions or electrons) to the given application, and of course to interpret the results of experiments, it is key to have an understanding of the underlying absorption and acceleration mechanisms. Much theoretical and simulation work has been done on this regime in recent years, and although it has become clear that mechanisms often invoked at lower intensities (i.e. JxB and Brunel heating) are less important or unimportant in these systems, debate still exists as to exactly what mechanisms play the dominant role in laboratory-relevant scenarios. We present recent results of simulations with the Particle-in-Cell code OSIRIS which shed light on these issues. The authors acknowledge the support of the DOE Fusion Science Center for Extreme States of Matter and Fast Ignition Physics under DOE Contract No. FC02-04ER54789 and DOE contracts DE-NA0001833 and DE-SC-0008316, and NSF grant ACI-13398893.

  11. Electron Acceleration by Transient Ion Foreshock Phenomena

    NASA Astrophysics Data System (ADS)

    Wilson, L. B., III; Turner, D. L.

    2015-12-01

    Particle acceleration is a topic of considerable interest in space, laboratory, and astrophysical plasmas, as it is a physical process fundamental to all areas of physics. Recent THEMIS [e.g., Turner et al., 2014] and Wind [e.g., Wilson et al., 2013] observations have found evidence for strong particle acceleration at macro- and meso-scale structures and/or pulsations called transient ion foreshock phenomena (TIFP). Ion acceleration has been extensively studied, but electron acceleration has received less attention. Electron acceleration can arise from fundamentally different processes than those affecting ions due to differences in their gyroradii. Electron acceleration is ubiquitous, occurring in the solar corona (e.g., solar flares), in magnetic reconnection, at shocks, in astrophysical plasmas, etc. We present new results analyzing the dependence of electron acceleration on the properties of TIFP observed by the THEMIS spacecraft.

  12. Visions for the future of particle accelerators

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2013-10-01

    The ambitions of accelerator-based science, technology and applications far exceed present accelerator possibilities. Accelerator science and technology is one of the key enablers of developments in particle physics, photon physics and also applications in medicine and industry. The paper presents a digest of the research results and visions for the future in the domain of accelerator science and technology in Europe, shown during the final, fourth annual meeting of EuCARD - the European Coordination of Accelerator Research and Development. The conference concerned the building of research infrastructure, including advanced photonic and electronic systems for servicing large high energy physics experiments. A few basic groups of such systems are debated, including: measurement-control networks of large geometrical extent, multichannel systems for the acquisition of large amounts of metrological data, and precision photonic networks for the distribution of reference time, frequency and phase. The main subject is, however, the vision for the future of particle accelerators and next generation light sources.

  13. Application of local area networks to accelerator control systems at the Stanford Linear Accelerator

    SciTech Connect

    Fox, J.D.; Linstadt, E.; Melen, R.

    1983-03-01

    The history and current status of SLAC's SDLC networks for distributed accelerator control systems are discussed. These local area networks have been used for instrumentation and control of the linear accelerator. Network topologies, protocols, physical links, and logical interconnections are discussed for specific applications in distributed data acquisition and control systems, computer networks and accelerator operations.

  14. Final Progress Report - Heavy Ion Accelerator Theory and Simulation

    SciTech Connect

    Haber, Irving

    2009-10-31

    The use of a beam of heavy ions to heat a target for the study of warm dense matter physics, high energy density physics, and ultimately to ignite an inertial fusion pellet, requires the achievement of beam intensities somewhat greater than have traditionally been obtained using conventional accelerator technology. The research program described here has substantially contributed to understanding the basic nonlinear intense-beam physics that is central to the attainment of the requisite intensities. Since it is very difficult to reverse intensity dilution, avoiding excessive dilution over the entire beam lifetime is necessary for achieving the required beam intensities on target. The central emphasis in this research has therefore been on understanding the nonlinear mechanisms that are responsible for intensity dilution and which generally occur when intense space-charge-dominated beams are not in detailed equilibrium with the external forces used to confine them. This is an important area of study because such lack of detailed equilibrium can be an unavoidable consequence of the beam manipulations such as acceleration, bunching, and focusing necessary to attain sufficient intensity on target. The primary tool employed in this effort has been the use of simulation, particularly the WARP code, in concert with experiment, to identify the nonlinear dynamical characteristics that are important in practical high intensity accelerators. This research has gradually made a transition from the study of idealized systems and comparisons with theory, to the study of the fundamental scaling of intensity dilution in intense beams, and more recently to explicit identification of the mechanisms relevant to actual experiments. This work consists of two categories: work in direct support of beam physics directly applicable to NDCX, and a larger effort to further the general understanding of space-charge-dominated beam physics.

  15. Applications of Ion Induction Accelerators

    NASA Astrophysics Data System (ADS)

    Barnard, John J.; Briggs*, Richard J.

    As discussed in Chap. 9, the physics of ion induction accelerators has many commonalities with the physics of electron induction accelerators. However, there are important differences, arising because of the different missions of ion machines relative to electron machines and also because the velocity of the ions is usually non-relativistic in these applications. The basic architectures and layouts reflect these differences. In Chaps. 6, 7, and 8 a number of examples of electron accelerators and their applications were given, including machines that have already been constructed. In this chapter, we give several examples of potential uses for ion induction accelerators. Although, as of this writing, none of these applications has come to fruition, in the case of heavy ion fusion (HIF), small scale experiments have been carried out and a sizable effort has been made in laying the groundwork for such an accelerator. A second application, using ion beams for the study of High Energy Density Physics (HEDP) or Warm Dense Matter (WDM) physics, will soon be realized, and the requirements for this machine will be discussed in detail. Also, a concept for a spallation neutron source is discussed in lesser detail.

  16. Recent results and future challenges for large scale Particle-In-Cell simulations of plasma-based accelerator concepts

    SciTech Connect

    Huang, C.; An, W.; Decyk, V.K.; Lu, W.; Mori, W.B.; Tsung, F.S.; Tzoufras, M.; Morshed, S.; Antomsen, T.; Feng, B.; Katsouleas, T; Fonseca, R.A.; Martins, S.F.; Vieira, J.; Silva, L.O.; Geddes, C.G.R.; Cormier-Michel, E; Vay, J.-L.; Esarey, E.; Leemans, W.P.; Bruhwiler, D.L.; Cowan, B.; Cary, J.R.; Paul, K.

    2009-05-01

    The concepts and designs of plasma-based advanced accelerators for high energy physics and photon science are modeled in the SciDAC COMPASS project with a suite of Particle-In-Cell codes and simulation techniques including the full electromagnetic model, the envelope model, the boosted-frame approach and the quasi-static model. In this paper, we report the progress of the development of these models and techniques and present recent results achieved with large-scale parallel PIC simulations. The simulation needs for modeling plasma-based advanced accelerators at the energy frontier are discussed and a path towards this goal is outlined.
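
    For a sense of the scales these codes must resolve (textbook plasma relations with an assumed density, not results from the paper), the plasma wavelength and the cold, non-relativistic wave-breaking field for a typical plasma-accelerator density can be estimated directly:

      import math

      # Plasma frequency, wavelength and wave-breaking field for an assumed density.
      e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8
      n_e = 1e24                     # electrons/m^3 (10^18 cm^-3, assumed)

      omega_p  = math.sqrt(n_e * e**2 / (eps0 * m_e))
      lambda_p = 2 * math.pi * c / omega_p
      E_wb     = m_e * c * omega_p / e          # cold non-relativistic wave-breaking field
      print(f"lambda_p ~ {lambda_p*1e6:.1f} um, wave-breaking field ~ {E_wb/1e9:.0f} GV/m")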

  17. Transversal Clifford gates on folded surface codes

    NASA Astrophysics Data System (ADS)

    Moussa, Jonathan E.

    2016-10-01

    Surface and color codes are two forms of topological quantum error correction in two spatial dimensions with complementary properties. Surface codes have lower-depth error detection circuits and well-developed decoders to interpret and correct errors, while color codes have transversal Clifford gates and better code efficiency in the number of physical qubits needed to achieve a given code distance. A formal equivalence exists between color codes and folded surface codes, but it does not guarantee the transferability of any of these favorable properties. However, the equivalence does imply the existence of constant-depth circuit implementations of logical Clifford gates on folded surface codes. We achieve and improve this result by constructing two families of folded surface codes with transversal Clifford gates. This construction is presented generally for qudits of any dimension. The specific application of these codes to universal quantum computation based on qubit fusion is also discussed.

  18. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720 AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semi-empirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor…

  19. Commissioning the GTA accelerator

    SciTech Connect

    Sander, O.R.; Atkins, W.H.; Bolme, G.O.; Bowling, S.; Brown, S.; Cole, R.; Gilpatrick, J.D.; Garnett, R.; Guy, F.W.; Ingalls, W.B.; Johnson, K.F.; Kerstiens, D.; Little, C.; Lohsen, R.A.; Lloyd, S.; Lysenko, W.P.; Mottershead, C.T.; Neuschaefer, G.; Power, J.; Rusthoi, D.P.; Sandoval, D.P. Stevens, R.R. Jr.; Vaughn, G.; Wadlinger, E.A.; Yuan, V.; Connolly, R.; Weiss, R.; Saadatmand, K.

    1992-09-01

    The Ground Test Accelerator (GTA) is supported by the Strategic Defense Command as part of their Neutral Particle Beam (NPB) program. Neutral particles have the advantage that in space they are unaffected by the earth's magnetic field and travel in straight lines unless they enter the earth's atmosphere and become charged by stripping. Heavy particles are difficult to stop and can probe the interior of space vehicles; hence, NPB can function as a discriminator between warheads and decoys. We are using GTA to resolve the physics and engineering issues related to accelerating, focusing, and steering a high-brightness, high-current H- beam and then neutralizing it. Our immediate goal is to produce a 24-MeV, 50 mA device with a 2% duty factor.
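
    For a sense of scale (simple arithmetic on the goal parameters quoted above, not additional data from the paper), the peak and average beam power of a 24 MeV, 50 mA, 2% duty factor H- beam follow directly:

      # Beam power for the quoted GTA goal parameters.
      energy_eV = 24e6      # 24 MeV per particle
      current_A = 0.05      # 50 mA peak current
      duty      = 0.02      # 2% duty factor

      P_peak = energy_eV * current_A          # W, since energy per elementary charge in eV equals volts
      P_avg  = P_peak * duty
      print(f"peak beam power = {P_peak/1e6:.1f} MW, average beam power = {P_avg/1e3:.0f} kW")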

  20. Recent Activities at Tokai Tandem Accelerator

    NASA Astrophysics Data System (ADS)

    Ishii, Tetsuro

    2010-05-01

    Recent activities at the JAEA-Tokai tandem accelerator facility are presented. The terminal voltage of the tandem accelerator reached 19.1 MV after replacing the acceleration tubes. A multi-charged positive-ion injector was installed in the terminal of the tandem accelerator, supplying high-current noble-gas ions. A superconducting cavity for low-velocity ions was developed. Radioactive nuclear beams of 8,9Li and fission products, produced by the tandem accelerator and separated by the ISOL, were supplied for experiments. Recent results of nuclear physics experiments are reported.
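
    As a reminder of how a tandem reaches high energies (the standard two-stage relation, with an assumed post-stripper charge state), a singly charged negative ion gains e·V_T on the way to the 19.1 MV terminal, is stripped to charge q+, and gains a further q·e·V_T on the way out:

      # Final kinetic energy from a tandem accelerator: E = (1 + q) * e * V_T.
      V_terminal_MV = 19.1     # terminal voltage quoted in the abstract
      q_after_strip = 10       # assumed charge state after the terminal stripper

      E_MeV = (1 + q_after_strip) * V_terminal_MV
      print(f"final energy ~ {E_MeV:.0f} MeV for a {q_after_strip}+ ion")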