Sample records for accelerator code group

  1. The Particle Accelerator Simulation Code PyORBIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M

    2015-01-01

    The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel capabilities, simulation capabilities, and future development of the code are discussed. PyORBIT is a new implementation and extension of the algorithms of the original ORBIT code, which was developed for the Spallation Neutron Source accelerator at Oak Ridge National Laboratory. PyORBIT has a two-level structure: the upper level uses the Python programming language to control the flow of intensive calculations performed by the lower-level code implemented in C++. The parallel capabilities are based on MPI communications. PyORBIT is an open-source code accessible to the public through the Google Open Source Projects Hosting service.
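    The two-level structure described above can be sketched in miniature. The sketch below is a hypothetical pure-Python stand-in: the element kernels play the role of PyORBIT's compiled C++ lower level, and `track_bunch` plays the role of the Python upper level (the names and the real PyORBIT API differ).

```python
# Two-level driver sketch: Python composes the lattice and controls the run;
# in PyORBIT the per-particle loops below live in compiled C++.

def drift(length):
    # transfer map of a field-free drift: x -> x + L * x'
    def kernel(bunch):
        for p in bunch:
            p["x"] += length * p["xp"]
    return kernel

def quad_kick(strength):
    # thin-lens quadrupole kick (deliberately simplified model)
    def kernel(bunch):
        for p in bunch:
            p["xp"] -= strength * p["x"]
    return kernel

def track_bunch(bunch, lattice, turns=1):
    # upper-level control flow: apply each element's kernel in order
    for _ in range(turns):
        for element in lattice:
            element(bunch)
    return bunch

lattice = [drift(1.0), quad_kick(0.5), drift(1.0)]
bunch = [{"x": 1e-3, "xp": 0.0}]
track_bunch(bunch, lattice)
```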

  2. Utilizing GPUs to Accelerate Turbomachinery CFD Codes

    NASA Technical Reports Server (NTRS)

    MacCalla, Weylin; Kulkarni, Sameer

    2016-01-01

    GPU computing has established itself as a way to accelerate parallel codes in the high performance computing world. This work focuses on speeding up APNASA, a legacy CFD code used at NASA Glenn Research Center, while also drawing conclusions about the nature of GPU computing and the requirements to make GPGPU worthwhile on legacy codes. Rewriting and restructuring of the source code was avoided to limit the introduction of new bugs. The code was profiled and investigated for parallelization potential, then OpenACC directives were used to indicate parallel parts of the code. The use of OpenACC directives was not able to reduce the runtime of APNASA on either the NVIDIA Tesla discrete graphics card, or the AMD accelerated processing unit. Additionally, it was found that in order to justify the use of GPGPU, the amount of parallel work being done within a kernel would have to greatly exceed the work being done by any one portion of the APNASA code. It was determined that in order for an application like APNASA to be accelerated on the GPU, it should not be modular in nature, and the parallel portions of the code must contain a large portion of the code's computation time.

  3. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, L.M.; Hochstedler, R.D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
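    The first of those techniques, replacing linear searches with binary versions, is easy to illustrate. The sketch below is generic (the actual ITS cross-section tables and FORTRAN routines are not reproduced here): both routines locate the interval of a sorted grid containing a value, but the binary version does it in O(log n) rather than O(n).

```python
import bisect

# Hypothetical sorted energy grid (MeV); any sorted table works the same way.
energy_grid = [0.01, 0.1, 0.5, 1.0, 5.0, 10.0, 50.0]

def locate_linear(grid, e):
    # O(n) scan for the interval [grid[i], grid[i+1]) containing e
    for i in range(len(grid) - 1):
        if grid[i] <= e < grid[i + 1]:
            return i
    raise ValueError("energy outside grid")

def locate_binary(grid, e):
    # O(log n) lookup returning the same interval index
    i = bisect.bisect_right(grid, e) - 1
    if i < 0 or i >= len(grid) - 1:
        raise ValueError("energy outside grid")
    return i
```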

  4. FPGA acceleration of rigid-molecule docking codes

    PubMed Central

    Sukhwani, B.; Herbordt, M.C.

    2011-01-01

    Modelling the interactions of biological molecules, or docking, is critical both to understanding basic life processes and to designing new drugs. The field programmable gate array (FPGA) based acceleration of a recently developed, complex, production docking code is described. The authors found that it is necessary to extend their previous three-dimensional (3D) correlation structure in several ways, most significantly to support simultaneous computation of several correlation functions. The result for small-molecule docking is a 100-fold speed-up of a section of the code that represents over 95% of the original run-time. An additional 2% is accelerated through a previously described method, yielding a total acceleration of 36× over a single core and 10× over a quad-core. This approach is found to be an ideal complement to graphics processing unit (GPU) based docking, which excels in the protein–protein domain. PMID:21857870

  5. COLAcode: COmoving Lagrangian Acceleration code

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin V.

    2016-02-01

    COLAcode is a serial particle-mesh-based N-body code illustrating the COLA (COmoving Lagrangian Acceleration) method; it solves for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). It differs from standard N-body codes by trading accuracy at small scales to gain computational speed without sacrificing accuracy at large scales. This is useful for generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing; such catalogs are needed to perform detailed error analysis for ongoing and future surveys of LSS.
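    The particle-mesh machinery underlying such codes starts with charge/mass assignment onto the mesh. A generic cloud-in-cell (CIC) deposit step is sketched below (illustrative only, not COLAcode's actual implementation): each particle's mass is split linearly between its two nearest cells on a periodic 1D grid.

```python
import math

def cic_deposit(positions, masses, n_cells):
    # Cloud-in-cell assignment on a periodic 1D mesh with unit cell spacing:
    # mass m at position x is shared between cells floor(x) and floor(x)+1
    # with linear weights, so total deposited mass is conserved.
    density = [0.0] * n_cells
    for x, m in zip(positions, masses):
        i = math.floor(x)
        frac = x - i
        density[i % n_cells] += (1.0 - frac) * m
        density[(i + 1) % n_cells] += frac * m
    return density

rho = cic_deposit([0.25, 3.5, 7.9], [1.0, 2.0, 0.5], 8)
```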

  6. Production Level CFD Code Acceleration for Hybrid Many-Core Architectures

    NASA Technical Reports Server (NTRS)

    Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.

    2012-01-01

    In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and the next generation machines, codes will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of the time.

  7. LEGO: A modular accelerator design code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Y.; Donald, M.; Irwin, J.

    1997-08-01

    An object-oriented accelerator design code has been designed and implemented in a simple and modular fashion. It contains all major features of its predecessors, TRACY and DESPOT. All physics of single-particle dynamics is implemented based on the Hamiltonian in the local frame of the component. Components can be moved arbitrarily in three-dimensional space. Several symplectic integrators are used to approximate the integration of the Hamiltonian. A differential algebra class is introduced to extract a Taylor map up to arbitrary order. Analysis of optics is done in the same way for both the linear and nonlinear cases. Currently, the code is used to design and simulate the lattices of PEP-II. It will also be used for commissioning.
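    The role of a symplectic integrator is easiest to see on a toy Hamiltonian. Below is a minimal second-order drift-kick-drift (leapfrog) step for H = p²/2 + q²/2, a generic sketch rather than LEGO's element maps: unlike a non-symplectic scheme, the energy error stays bounded over many steps instead of drifting.

```python
def leapfrog_step(q, p, dt):
    # drift-kick-drift: symplectic, second-order update for H = p^2/2 + q^2/2
    q += 0.5 * dt * p          # half drift
    p -= dt * q                # kick (force = -dH/dq = -q)
    q += 0.5 * dt * p          # half drift
    return q, p

def energy(q, p):
    return 0.5 * (p * p + q * q)

q, p = 1.0, 0.0
e0 = energy(q, p)
drift_max = 0.0
for _ in range(10000):        # integrate to t = 100 with dt = 0.01
    q, p = leapfrog_step(q, p, 0.01)
    drift_max = max(drift_max, abs(energy(q, p) - e0))
```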

  8. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1992-01-01

    Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.

  9. Study of an External Neutron Source for an Accelerator-Driven System using the PHITS Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugawara, Takanori; Iwasaki, Tomohiko; Chiba, Takashi

    A code system for the Accelerator Driven System (ADS) has been under development for analyzing the dynamic behavior of a subcritical core coupled with an accelerator. This code system, named DSE (Dynamics calculation code system for a Subcritical system with an External neutron source), consists of an accelerator part and a reactor part. The accelerator part employs a database, calculated using PHITS, for investigating accelerator-related effects such as changes in beam energy, beam diameter, void generation, and target level. This analysis method may introduce some errors into dynamics calculations, since the neutron source data derived from the database carry errors from the fitting or interpolation procedures. In this study, the effects of various events are investigated to confirm that the database-based method is appropriate.

  10. Code comparison for accelerator design and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsa, Z.

    1988-01-01

    We present a comparison between results obtained from standard accelerator physics codes used for the design and analysis of synchrotrons and storage rings, with the programs SYNCH, MAD, HARMON, PATRICIA, PATPET, BETA, DIMAD, MARYLIE and RACE-TRACK. In our analysis we have considered five lattices of various sizes with large and small bend angles, including the AGS Booster (10° bend), RHIC (2.24° bend), SXLS, XLS (XUV ring with 45° bend) and X-RAY rings. The differences in the integration methods used and in the treatment of fringe fields in these codes can lead to different results. The inclusion of nonlinear (e.g., dipole) terms may be necessary in these calculations, especially for a small ring. 12 refs., 6 figs., 10 tabs.

  11. Further Studies of the NRL Collective Particle Accelerator VIA Numerical Modeling with the MAGIC Code.

    DTIC Science & Technology

    1984-08-01

    Further studies of the NRL collective particle accelerator via numerical modeling with the MAGIC code. Robert J. Barker. Final report for the period 1 April 1984 - 30 September 1984. Performing organization report number MRC/WDC-R...

  12. Transform coding for hardware-accelerated volume rendering.

    PubMed

    Fout, Nathaniel; Ma, Kwan-Liu

    2007-01-01

    Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by off-line compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.
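    The consolidation step described above, folding per-coefficient dequantization into the precomputed inverse-transform basis, can be sketched for a 1D 4-point transform (illustrative only; the paper's scheme is block-based and operates on 3D volume data). Because both operations are linear, scaling the basis columns offline gives the same reconstruction with one pass instead of two.

```python
import math

N = 4
# Inverse DCT-II basis: sample n = sum_k c_k * coeff[k] * cos(pi*(n+0.5)*k/N)
basis = [[(math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
          * math.cos(math.pi * (n + 0.5) * k / N)
          for k in range(N)] for n in range(N)]

quant_scale = [8.0, 4.0, 4.0, 2.0]   # hypothetical per-coefficient step sizes

def decode_two_pass(q):
    # reference path: dequantize first, then apply the inverse transform
    coeff = [qk * sk for qk, sk in zip(q, quant_scale)]
    return [sum(basis[n][k] * coeff[k] for k in range(N)) for n in range(N)]

# consolidated path: bake the scales into the basis once, offline
scaled_basis = [[basis[n][k] * quant_scale[k] for k in range(N)]
                for n in range(N)]

def decode_one_pass(q):
    return [sum(scaled_basis[n][k] * q[k] for k in range(N)) for n in range(N)]
```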

  13. Nonlinear, nonbinary cyclic group codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1992-01-01

    New cyclic group codes of length 2^m - 1 over (m - j)-bit symbols are introduced. These codes can be systematically encoded and decoded algebraically. The code rates are very close to Reed-Solomon (RS) codes and are much better than Bose-Chaudhuri-Hocquenghem (BCH) codes (a former alternative). The binary (m - j)-tuples are identified with a subgroup of the binary m-tuples which represents the field GF(2^m). Encoding is systematic and involves a two-stage procedure consisting of the usual linear feedback register (using the division or check polynomial) and a small table lookup. For low rates, a second shift-register encoding operation may be invoked. Decoding uses the RS error-correcting procedures for the m-tuple codes for m = 4, 5, and 6.
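    The linear-feedback-register stage is ordinary systematic cyclic encoding by polynomial division. A binary (7, 4) example over GF(2) is sketched below, far simpler than the paper's codes over (m - j)-bit symbols, but the same mechanism: the message occupies the high bit positions and the division remainder fills the parity positions.

```python
GEN = 0b1011  # generator polynomial x^3 + x + 1 of the (7, 4) cyclic code

def poly_mod(a, g):
    # remainder of polynomial division over GF(2); integer bits are coefficients
    while a.bit_length() >= g.bit_length():
        a ^= g << (a.bit_length() - g.bit_length())
    return a

def encode(msg):
    # systematic encoding: shift the message past the parity positions, then
    # append the remainder (what a linear feedback register would compute)
    shifted = msg << (GEN.bit_length() - 1)
    return shifted ^ poly_mod(shifted, GEN)
```

    Every resulting codeword is divisible by the generator, which is exactly the property the decoder exploits.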

  14. MAPA: an interactive accelerator design code with GUI

    NASA Astrophysics Data System (ADS)

    Bruhwiler, David L.; Cary, John R.; Shasharina, Svetlana G.

    1999-06-01

    The MAPA code is an interactive accelerator modeling and design tool with an X/Motif GUI. MAPA has been developed in C++ and makes full use of object-oriented features. We present an overview of its features and describe how users can independently extend the capabilities of the entire application, including the GUI. For example, a user can define a new model for a focusing or accelerating element. If the appropriate form is followed, and the new element is "registered" with a single line in the specified file, then the GUI will fully support this user-defined element type after it has been compiled and then linked to the existing application. In particular, the GUI will bring up windows for modifying any relevant parameters of the new element type. At present, one can use the GUI for phase space tracking, finding fixed points and generating line plots for the Twiss parameters, the dispersion and the accelerator geometry. The user can define new types of simulations which the GUI will automatically support by providing a menu option to execute the simulation and subsequently rendering line plots of the resulting data.

  15. GAPD: a GPU-accelerated atom-based polychromatic diffraction simulation code.

    PubMed

    E, J C; Wang, L; Chen, S; Zhang, Y Y; Luo, S N

    2018-03-01

    GAPD, a graphics-processing-unit (GPU)-accelerated atom-based polychromatic diffraction simulation code for direct, kinematics-based, simulations of X-ray/electron diffraction of large-scale atomic systems with mono-/polychromatic beams and arbitrary plane detector geometries, is presented. This code implements GPU parallel computation via both real- and reciprocal-space decompositions. With GAPD, direct simulations are performed of the reciprocal lattice node of ultralarge systems (∼5 billion atoms) and diffraction patterns of single-crystal and polycrystalline configurations with mono- and polychromatic X-ray beams (including synchrotron undulator sources), and validation, benchmark and application cases are presented.

  16. Deploying electromagnetic particle-in-cell (EM-PIC) codes on Xeon Phi accelerators boards

    NASA Astrophysics Data System (ADS)

    Fonseca, Ricardo

    2014-10-01

    The complexity of the phenomena involved in several relevant plasma physics scenarios, where highly nonlinear and kinetic processes dominate, makes purely theoretical descriptions impossible. Further understanding of these scenarios requires detailed numerical modeling, but fully relativistic particle-in-cell codes such as OSIRIS are computationally intensive. The quest towards exaflop computer systems has led to the development of HPC systems based on add-on accelerator cards, such as GPGPUs and, more recently, the Xeon Phi accelerators that power the current number 1 system in the world. These cards, also referred to as the Intel Many Integrated Core (MIC) architecture, offer peak theoretical performance of >1 TFlop/s for general purpose calculations in a single board, and are receiving significant attention as an attractive alternative to CPUs for plasma modeling. In this work we report on our efforts towards the deployment of an EM-PIC code on a Xeon Phi architecture system. We focus on the parallelization and vectorization strategies followed, and present a detailed evaluation of code performance in comparison with the CPU code.

  17. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analyses of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and on bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, so the overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
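    The mechanism behind those savings can be sketched with a two-grid correction cycle for the 1D Poisson problem, a toy version of the idea (far simpler than Proteus's compressible-flow solver): smoothing kills high-frequency error on the fine grid, and the coarse grid cheaply removes the smooth error that relaxation alone barely touches.

```python
import math

def residual(u, f, h):
    # r = f - A u for A u = (-u[i-1] + 2 u[i] - u[i+1]) / h^2, zero boundaries
    n = len(u) - 2
    r = [0.0] * (n + 2)
    for i in range(1, n + 1):
        r[i] = f[i] - (2.0 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    return r

def gauss_seidel(u, f, h, sweeps):
    n = len(u) - 2
    for _ in range(sweeps):
        for i in range(1, n + 1):
            u[i] = 0.5 * (h * h * f[i] + u[i - 1] + u[i + 1])

def norm(r):
    return math.sqrt(sum(x * x for x in r))

def two_grid_cycle(u, f, h):
    n = len(u) - 2
    m = (n - 1) // 2
    gauss_seidel(u, f, h, 3)                   # pre-smooth
    r = residual(u, f, h)
    rc = [0.0] * (m + 2)                       # restrict by full weighting
    for j in range(1, m + 1):
        rc[j] = 0.25 * (r[2 * j - 1] + 2.0 * r[2 * j] + r[2 * j + 1])
    ec = [0.0] * (m + 2)                       # coarse-grid error equation
    gauss_seidel(ec, rc, 2.0 * h, 500)         # cheap, nearly exact coarse solve
    for j in range(1, m + 1):                  # prolong and correct
        u[2 * j] += ec[j]
    for j in range(0, m + 1):
        u[2 * j + 1] += 0.5 * (ec[j] + ec[j + 1])
    gauss_seidel(u, f, h, 3)                   # post-smooth

n, h = 63, 1.0 / 64.0
f = [1.0] * (n + 2)
u_mg = [0.0] * (n + 2)
u_gs = [0.0] * (n + 2)
for _ in range(3):
    two_grid_cycle(u_mg, f, h)                 # 3 cycles = 18 smoothing sweeps
gauss_seidel(u_gs, f, h, 18)                   # same smoothing work, no coarse grid
```

    With identical smoothing effort, the two-grid residual is orders of magnitude smaller than plain relaxation, which is the acceleration the report quantifies for Proteus.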

  18. Particle-in-cell/accelerator code for space-charge dominated beam simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-05-08

    Warp is a multidimensional discrete-particle beam simulation program designed to be applicable where the beam space-charge is non-negligible or dominant. It is being developed in a collaboration among LLNL, LBNL and the University of Maryland. It was originally designed and optimized for heave ion fusion accelerator physics studies, but has received use in a broader range of applications, including for example laser wakefield accelerators, e-cloud studies in high enery accelerators, particle traps and other areas. At present it incorporates 3-D, axisymmetric (r,z) planar (x-z) and transverse slice (x,y) descriptions, with both electrostatic and electro-magnetic fields, and a beam envelope model.more » The code is guilt atop the Python interpreter language.« less

  19. Status report on the 'Merging' of the Electron-Cloud Code POSINST with the 3-D Accelerator PIC CODE WARP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vay, J.-L.; Furman, M.A.; Azevedo, A.W.

    2004-04-19

    We have integrated the electron-cloud code POSINST [1] with WARP [2]--a 3-D parallel Particle-In-Cell accelerator code developed for Heavy Ion Inertial Fusion--so that the two can interoperate. Both codes are run in the same process, communicate through a Python interpreter (already used in WARP), and share certain key arrays (so far, particle positions and velocities). Currently, POSINST provides primary and secondary sources of electrons, beam bunch kicks, a particle mover, and diagnostics. WARP provides the field solvers and diagnostics. Secondary emission routines are provided by the Tech-X package CMEE.

  20. GeNN: a code generation framework for accelerated brain simulations

    NASA Astrophysics Data System (ADS)

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.
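    The code-generation idea can be sketched in miniature: a declarative model description is turned into source text, compiled, and executed. The sketch below emits Python for a hypothetical leaky-integrator update rather than the CUDA kernels GeNN generates, and the model format (`name`, `params`, `update`) is invented for illustration.

```python
# Hypothetical model description; GeNN's actual model format differs.
neuron_model = {
    "name": "lif",
    "params": {"tau": 20.0, "v_rest": -65.0},
    "update": "v += dt * ((v_rest - v) / tau + i_in)",
}

def generate_update_fn(model):
    # Emit source text for the state update, then compile it with exec();
    # GeNN emits and compiles CUDA kernels at this step instead.
    src_lines = [f"def update_{model['name']}(v, i_in, dt):"]
    for name, val in model["params"].items():
        src_lines.append(f"    {name} = {val!r}")   # bake parameters in
    src_lines.append(f"    {model['update']}")
    src_lines.append("    return v")
    namespace = {}
    exec("\n".join(src_lines), namespace)
    return namespace[f"update_{model['name']}"]

update = generate_update_fn(neuron_model)
```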

  3. Reliability enhancement of Navier-Stokes codes through convergence acceleration

    NASA Technical Reports Server (NTRS)

    Merkle, Charles L.; Dulikravich, George S.

    1995-01-01

    Methods for enhancing the reliability of Navier-Stokes computer codes through improved convergence characteristics are presented. Improving these characteristics decreases the likelihood of code unreliability and of user interventions in a design environment. The problem referred to as 'stiffness' in the governing equations for propulsion-related flowfields is investigated, particularly with regard to common sources of equation stiffness that lead to convergence degradation of CFD algorithms. Von Neumann stability theory is employed as a tool to study the convergence difficulties involved. Based on the stability results, improved algorithms are devised to ensure efficient convergence in different situations. A number of test cases are considered to confirm the correlation between stability theory and numerical convergence. Examples of turbulent and reacting flow are presented, and a generalized form of the preconditioning matrix is derived to handle these problems, i.e., problems involving additional differential equations for the transport of turbulent kinetic energy, dissipation rate and chemical species. Algorithms for unsteady computations are considered. The extension of the preconditioning techniques and algorithms derived for Navier-Stokes computations to three-dimensional flow problems is discussed. New methods to accelerate the convergence of iterative schemes for the numerical integration of systems of partial differential equations are developed, with special emphasis on the acceleration of convergence on highly clustered grids.

  4. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    NASA Astrophysics Data System (ADS)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics and is well suited to GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling, named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on the fly and minimizes the time-to-solution by balancing the measured times of multiple functions. Results of performance measurements with realistic particle distributions, performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X (representative GPUs of the Fermi, Kepler, and Maxwell generations), show that the hierarchical time step achieves a speedup by a factor of around 3-5 compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.
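    The hierarchical (block) time-step idea amounts to rounding each particle's required step down to a power-of-two fraction of the largest step, so particles on coarse levels skip most force evaluations. A minimal sketch with illustrative values (the step requirements below are invented, and real codes derive them from accelerations):

```python
import math

def block_level(dt_required, dt_max):
    # smallest k with dt_max / 2**k <= dt_required: the particle's step level
    if dt_required >= dt_max:
        return 0
    return math.ceil(math.log2(dt_max / dt_required))

dt_max = 1.0
requested = [0.9, 0.5, 0.26, 0.11, 0.011]   # per-particle stability limits
levels = [block_level(dt, dt_max) for dt in requested]

# Cost over one interval dt_max, counting one force evaluation per substep:
shared_cost = len(requested) * 2 ** max(levels)  # everyone on the finest step
block_cost = sum(2 ** k for k in levels)         # each particle on its own level
```

    Only the one particle with the tightest requirement pays for the finest level; the shared scheme charges that cost to every particle, which is where the factor-of-several speedup comes from.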

  5. Status and future of the 3D MAFIA group of codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ebeling, F.; Klatt, R.; Krawzcyk, F.

    1988-12-01

    The group of fully three-dimensional computer codes for solving Maxwell's equations for a wide range of applications, MAFIA, is already well established. Extensive comparisons with measurements have demonstrated the accuracy of the computations. A large number of components have been designed for accelerators, such as kicker magnets, non-cylindrical cavities, ferrite-loaded cavities, vacuum chambers with slots and transitions, etc. The latest additions to the system include a new static solver that can calculate 3D magneto- and electrostatic fields, and a self-consistent version of the 2D-BCI that solves the field equations and the equations of motion in parallel. Work on new eddy current modules has started, which will allow treatment of laminated and/or solid iron cores excited by low frequency currents. Based on our experience with the present releases 1 and 2, we have started a complete revision of the whole user interface and data structure, which will make the codes even more user-friendly and flexible.

  6. Comparisons of time explicit hybrid kinetic-fluid code Architect for Plasma Wakefield Acceleration with a full PIC code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Massimo, F.; Atzeni, S.

    Architect, a time explicit hybrid code designed to perform quick simulations for electron driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma dynamics is necessary. Particle in Cell (PIC) codes represent the state-of-the-art technique to investigate the underlying physics and possible experimental scenarios; however, PIC codes demand heavy computational resources. The Architect code substantially reduces the need for computational resources by using a hybrid approach: relativistic electron bunches are treated kinetically, as in a PIC code, and the background plasma as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and fluid equations. In this paper both the underlying algorithms and a comparison with a fully three dimensional particle in cell code are reported. The comparison highlights the good agreement between the two models up to the weakly non-linear regimes. In highly non-linear regimes the two models only disagree in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.

  7. LACEwING: A New Moving Group Analysis Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riedel, Adric R.; Blunt, Sarah C.; Faherty, Jacqueline K.

    We present a new nearby young moving group (NYMG) kinematic membership analysis code, LocAting Constituent mEmbers In Nearby Groups (LACEwING), a new Catalog of Suspected Nearby Young Stars, a new list of bona fide members of moving groups, and a kinematic traceback code. LACEwING is a convergence-style algorithm with carefully vetted membership statistics based on a large numerical simulation of the Solar Neighborhood. Given spatial and kinematic information on stars, LACEwING calculates membership probabilities in 13 NYMGs and three open clusters within 100 pc. In addition to describing the inputs, methods, and products of the code, we provide comparisons of LACEwING to other popular kinematic moving group membership identification codes. As a proof of concept, we use LACEwING to reconsider the membership of 930 stellar systems in the Solar Neighborhood (within 100 pc) that have reported measurable lithium equivalent widths. We quantify the evidence in support of a population of young stars not attached to any NYMGs, which is a possible sign of new as-yet-undiscovered groups or of a field population of young stars.

  8. Characteristics of four SPE groups with different origins and acceleration processes

    NASA Astrophysics Data System (ADS)

    Kim, R.-S.; Cho, K.-S.; Lee, J.; Bong, S.-C.; Joshi, A. D.; Park, Y.-D.

    2015-09-01

    Solar proton events (SPEs) can be categorized into four groups based on their associations with flares or CMEs, inferred from onset timings as well as acceleration patterns in multienergy observations. In this study, we have investigated whether there are typical characteristics of the associated events and acceleration sites in each group, using 42 SPEs from 1997 to 2012. We find the following: (i) if the proton acceleration starts from a lower energy, an SPE has a higher chance of being a strong event (> 5000 particle flux units, pfu) even if its associated flare and/or CME is not especially strong. The only difference between the SPEs associated with flares and those associated with CMEs is the location of the acceleration site. (ii) For the former (Group A), the sites are very low (˜ 1 Rs) and close to the western limb, while the latter (Group C) have relatively higher (mean = 6.05 Rs) and wider acceleration sites. (iii) When the proton acceleration starts from a higher energy (Group B), an SPE tends to be a relatively weak event (< 1000 pfu), although its associated CME is relatively stronger than in the previous groups. (iv) The SPEs characterized by simultaneous acceleration across the whole energy range within 10 min (Group D) tend to show the weakest proton flux (mean = 327 pfu) in spite of strong associated eruptions. Based on these results, we suggest that the different characteristics of SPEs are mainly due to different conditions of magnetic connectivity and particle density, which change with longitude and height as well as with their origin.

  9. Accelerating Mathematics Achievement Using Heterogeneous Grouping

    ERIC Educational Resources Information Center

    Burris, Carol Corbett; Heubert, Jay P.; Levin, Henry M.

    2006-01-01

    This longitudinal study examined the effects of providing an accelerated mathematics curriculum in heterogeneously grouped middle school classes in a diverse suburban school district. A quasi-experimental cohort design was used to evaluate subsequent completion of advanced high school math courses as well as academic achievement. Results showed…

  10. ON THE PROBLEM OF PARTICLE GROUPINGS IN A TRAVELING WAVE LINEAR ACCELERATOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhileyko, G.I.

    1957-01-01

    A linear accelerator with traveling waves may be used for the production of especially short electron pulses, although in many cases the grouping capacity of the accelerator is not sufficient. The case in which grouping of the electrons takes place in the accelerator itself is derived theoretically. (With 3 illustrations and 1 Slavic reference.) (TCO)

  11. Accelerator Physics Working Group Summary

    NASA Astrophysics Data System (ADS)

    Li, D.; Uesugi, T.; Wildner, E.

    2010-03-01

    The Accelerator Physics Working Group addressed the worldwide R&D activities performed in support of future neutrino facilities. These studies cover R&D activities for Super Beam, Beta Beam and muon-based Neutrino Factory facilities. Beta Beam activities reported the important progress made, together with the research activity planned for the coming years. Discussion sessions were also organized jointly with other working groups in order to define common ground for the optimization of a future neutrino facility. Lessons learned from already operating neutrino facilities provide key information for the design of any future neutrino facility, and were also discussed in this meeting. Radiation damage, remote handling for equipment maintenance and exchange, and primary proton beam stability and monitoring were among the important subjects presented and discussed. Status reports for each of the facility subsystems were presented: proton drivers, targets, capture systems, and muon cooling and acceleration systems. The preferred scenario for each type of possible future facility was presented, together with the challenges and remaining issues. The baseline specification for the muon-based Neutrino Factory was reviewed and updated where required. This report will emphasize new results and ideas and discuss possible changes in the baseline scenarios of the facilities. A list of possible future steps is proposed that should be followed up at NuFact10.

  12. 49 CFR 173.52 - Classification codes and compatibility groups of explosives.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    § 173.52 Classification codes and compatibility groups of explosives. (a) The classification code... consists of the division number followed by the compatibility group letter. Compatibility group letters are...

  13. Supplementing Accelerated Reading with Classwide Interdependent Group-Oriented Contingencies

    ERIC Educational Resources Information Center

    Pappas, Danielle N.; Skinner, Christopher H.; Skinner, Amy L.

    2010-01-01

    An across-groups (classrooms), multiple-baseline design was used to investigate the effects of an interdependent group-oriented contingency on the Accelerated Reader (AR) performance of fourth-grade students. A total of 32 students in three classes participated. Before the study began, an independent group-oriented reward program was being applied…

  14. Equivalent-Groups versus Single-Group Equating Designs for the Accelerated CAT-ASVAB (Computerized Adaptive Test-Armed Services Vocational Aptitude Battery) Project.

    DTIC Science & Technology

    1987-01-01

    Equivalent-Groups Versus Single-Group Equating Designs for the Accelerated CAT-ASVAB Project. Peter H. Stoloff, Center for Naval Analyses (a division of Hudson Institute). Subject terms: ACAP (Accelerated CAT-ASVAB Program), aptitude tests, ASVAB (Armed Services Vocational Aptitude Battery), CAT (Computerized Adaptive Test).

  15. A redshift survey of IRAS galaxies. V - The acceleration on the Local Group

    NASA Technical Reports Server (NTRS)

    Strauss, Michael A.; Yahil, Amos; Davis, Marc; Huchra, John P.; Fisher, Karl

    1992-01-01

    The acceleration on the Local Group is calculated based on a full-sky redshift survey of 5288 galaxies detected by IRAS. A formalism is developed to compute the distribution function of the IRAS acceleration for a given power spectrum of initial perturbations. The computed acceleration on the Local Group points 18-28 deg from the direction of the Local Group peculiar velocity vector. The data suggest that the CMB dipole is indeed due to the motion of the Local Group, that this motion is gravitationally induced, and that the distribution of IRAS galaxies on large scales is related to that of dark matter by a simple linear biasing model.
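
    The estimator underlying such surveys can be sketched in a minimal form (the general linear-theory dipole sum, not the paper's full formalism): the peculiar acceleration on the Local Group is proportional to the weighted sum g ∝ Σ wᵢ rᵢ / |rᵢ|³ over survey galaxies, where the weights wᵢ correct for the survey selection function (taken as 1 here).

```python
import numpy as np

# Hedged sketch of the gravitational dipole sum over galaxies around the
# origin (the Local Group): each galaxy contributes an inverse-square pull
# along its unit direction, g ∝ sum_i w_i * r_i / |r_i|^3.

def dipole_acceleration(positions, weights):
    r = np.linalg.norm(positions, axis=1, keepdims=True)
    return (weights[:, None] * positions / r**3).sum(axis=0)
```

A single galaxy at distance 2 along x pulls with magnitude 1/4 along x, and two equal galaxies on opposite sides cancel, which is the sense in which the survey's measured dipole direction constrains the origin of the Local Group's motion.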

  16. Annual Coded Wire Tag Program; Missing Production Groups, 1996 Annual Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pastor, Stephen M.

    1997-01-01

    In 1989 the Bonneville Power Administration (BPA) began funding the evaluation of production groups of juvenile anadromous fish not being coded-wire tagged for other programs. These groups were the "Missing Production Groups". Production fish released by the U.S. Fish and Wildlife Service (USFWS) without representative coded-wire tags during the 1980's are indicated as blank spaces on the survival graphs in this report. The objectives of the "Missing Production Groups" program are: (1) to estimate the total survival of each production group, (2) to estimate the contribution of each production group to various fisheries, and (3) to prepare an annual report for all USFWS hatcheries in the Columbia River basin. Coded-wire tag recovery information will be used to evaluate the relative success of individual brood stocks. This information can also be used by salmon harvest managers to develop plans to allow the harvest of excess hatchery fish while protecting threatened, endangered, or other stocks of concern. In order to meet these objectives, a minimum of one marked group of fish is necessary for each production release. The level of marking varies according to location, species, and age at release. In general, 50,000 fish are marked with a coded-wire tag (CWT) to represent each production release group at hatcheries below John Day Dam. More than 100,000 fish per group are usually marked at hatcheries above John Day Dam. All fish release information, including marked/unmarked ratios, is reported to the Pacific States Marine Fisheries Commission (PSMFC). Fish recovered in the various fisheries or at the hatcheries are sampled to recover coded-wire tags. This recovery information is also reported to PSMFC.

  17. The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grebe, A.; Leveling, A.; Lu, T.

    The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay gamma-quanta by the residuals in the activated structures and scoring the prompt doses of these gamma-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and showed a good agreement. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.

  18. The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose

    NASA Astrophysics Data System (ADS)

    Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.

    2018-01-01

    The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay γ-quanta by the residuals in the activated structures and scoring the prompt doses of these γ-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and against experimental data from the CERF facility at CERN, and FermiCORD showed reasonable agreement with these. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.

  19. GPU acceleration of the Locally Selfconsistent Multiple Scattering code for first principles calculation of the ground state and statistical physics of materials

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; Rennich, Steven; Rogers, James H.

    2017-02-01

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that only uses accelerator memory. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5PFlop/s and a speedup of 8.6 compared to the CPU only code.
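
    The block-matrix-inversion idea can be sketched with the general Schur-complement identity (a standard linear algebra result, not the LSMS GPU kernel itself): a multiple-scattering solver only needs the local diagonal block of the inverse, and (M⁻¹)₁₁ = (A − B D⁻¹ C)⁻¹ lets that block be formed without inverting all of M at once.

```python
import numpy as np

# Hedged sketch: extract the leading k x k block of M^-1 via the Schur
# complement, M = [[A, B], [C, D]], so (M^-1)_11 = inv(A - B inv(D) C).
# This is the structure that maps well onto accelerator (GPU) memory.

def leading_block_of_inverse(M, k):
    A, B = M[:k, :k], M[:k, k:]
    C, D = M[k:, :k], M[k:, k:]
    return np.linalg.inv(A - B @ np.linalg.solve(D, C))
```

Because only A-sized factorizations and solves against D are needed, the working set per step is a block rather than the full matrix, which is what allows an implementation to stay entirely in accelerator memory.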

  20. Summary Report of Working Group 2: Computation

    NASA Astrophysics Data System (ADS)

    Stoltz, P. H.; Tsung, R. S.

    2009-01-01

    The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high-gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite-difference and finite-element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time by orders of magnitude, and (ii) new hardware architectures, like graphics processing units and Cell processors, that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite-element particle-in-cell (PIC) code, order-of-magnitude speedups, and details of porting the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one-trillion-particle simulations, a sustained performance of 0.3 petaflops, and an eight-times speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV laser wakefield accelerator stages including external injection, and new developments in electromagnetic simulations of electron guns using finite-difference and finite-element approaches.

  1. Summary Report of Working Group 2: Computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoltz, P. H.; Tsung, R. S.

    2009-01-22

    The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high-gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite-difference and finite-element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time by orders of magnitude, and (ii) new hardware architectures, like graphics processing units and Cell processors, that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite-element particle-in-cell (PIC) code, order-of-magnitude speedups, and details of porting the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one-trillion-particle simulations, a sustained performance of 0.3 petaflops, and an eight-times speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV laser wakefield accelerator stages including external injection, and new developments in electromagnetic simulations of electron guns using finite-difference and finite-element approaches.

  2. GPU acceleration of the Locally Selfconsistent Multiple Scattering code for first principles calculation of the ground state and statistical physics of materials

    DOE PAGES

    Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; ...

    2016-07-12

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn–Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. In this paper, we present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that only uses accelerator memory. Finally, using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5PFlop/s and a speedup of 8.6 compared to the CPU only code.

  3. SimTrack: A compact c++ code for particle orbit and spin tracking in accelerators

    DOE PAGES

    Luo, Yun

    2015-08-29

    SimTrack is a compact C++ code for 6-d symplectic element-by-element particle tracking in accelerators, originally designed for head-on beam–beam compensation simulation studies in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. It provides 6-d symplectic orbit tracking with 4th-order symplectic integration for magnet elements and the 6-d symplectic synchro-beam map for beam–beam interaction. Since its inception in 2009, SimTrack has been used intensively for dynamic aperture calculations with beam–beam interaction for RHIC. Recently, proton spin tracking and electron energy loss due to synchrotron radiation were added. In this article, I present the code architecture, physics models, and some selected examples of its applications to RHIC and a future electron-ion collider design, eRHIC.
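
    Element-by-element symplectic tracking can be illustrated with a minimal thin-lens toy in one transverse plane (an assumed sketch, not SimTrack's maps or API): a drift and a thin quadrupole kick are each exactly symplectic in (x, x'), and composing them around a stable FODO cell keeps the motion bounded turn after turn.

```python
import numpy as np

# Toy element-by-element symplectic tracking in (x, x') (hedged sketch):
# each element is an exact symplectic map, so their composition preserves
# phase-space area and the tracked amplitude stays bounded for a stable cell.

def drift(state, L):
    x, xp = state
    return np.array([x + L * xp, xp])

def thin_quad(state, k):
    x, xp = state
    return np.array([x, xp - k * x])  # thin-lens kick, integrated strength k

def fodo_turn(state, L=1.0, k=0.5):
    # focusing quad - drift - defocusing quad - drift
    state = thin_quad(state, +k)
    state = drift(state, L)
    state = thin_quad(state, -k)
    state = drift(state, L)
    return state

state = np.array([1e-3, 0.0])
for _ in range(1000):
    state = fodo_turn(state)
```

The one-turn map has unit Jacobian determinant (area preservation), which is the property that makes long-term tracking such as dynamic aperture studies meaningful.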

  4. Development of Safety Analysis Code System of Beam Transport and Core for Accelerator Driven System

    NASA Astrophysics Data System (ADS)

    Aizawa, Naoto; Iwasaki, Tomohiko

    2014-06-01

    A safety analysis code system of beam transport and core for the accelerator-driven system (ADS) has been developed for the analysis of beam transients such as changes in the shape and position of the incident beam. The code system consists of a beam transport analysis part and a core analysis part. TRACE 3-D is employed in the beam transport analysis part, and the shape and incident position of the beam at the target are calculated. In the core analysis part, the neutronics, thermo-hydraulics and cladding failure analyses are performed using the ADS dynamic calculation code ADSE, on the basis of the external source database calculated by PHITS and the cross-section database calculated by SRAC, together with programs for thermoelastic and creep cladding-failure analysis. Using the code system, beam transient analyses were performed for the ADS proposed by the Japan Atomic Energy Agency. As a result, the cladding temperature rises rapidly and plastic deformation occurs within several seconds. In addition, the cladding is evaluated to fail by creep within a hundred seconds. These results show that beam transients can cause cladding failure.

  5. Focus Group Research on the Implications of Adopting the Unified English Braille Code

    ERIC Educational Resources Information Center

    Wetzel, Robin; Knowlton, Marie

    2006-01-01

    Five focus groups explored concerns about adopting the Unified English Braille Code. The consensus was that while the proposed changes to the literary braille code would be minor, those to the mathematics braille code would be much more extensive. The participants emphasized that "any code that reduces the number of individuals who can access…

  6. Next-generation acceleration and code optimization for light transport in turbid media using GPUs

    PubMed Central

    Alerstam, Erik; Lo, William Chun Yip; Han, Tianyi David; Rose, Jonathan; Andersson-Engels, Stefan; Lilge, Lothar

    2010-01-01

    A highly optimized Monte Carlo (MC) code package for simulating light transport is developed on the latest graphics processing unit (GPU) built for general-purpose computing from NVIDIA - the Fermi GPU. In biomedical optics, the MC method is the gold-standard approach for simulating light transport in biological tissue, both due to its accuracy and its flexibility in modelling realistic, heterogeneous tissue geometry in 3-D. However, the widespread use of MC simulations in inverse problems, such as treatment planning for photodynamic therapy (PDT), is limited by their long computation time. Despite its parallel nature, optimizing MC code on the GPU has been shown to be a challenge, particularly when the sharing of simulation result matrices among many parallel threads demands the frequent use of atomic instructions to access the slow GPU global memory. This paper proposes an optimization scheme that utilizes the fast shared memory to resolve the performance bottleneck caused by atomic access, and discusses numerous other optimization techniques needed to harness the full potential of the GPU. Using these techniques, a widely accepted MC code package in biophotonics, called MCML, was successfully accelerated on a Fermi GPU by approximately 600x compared to a state-of-the-art Intel Core i7 CPU. A skin model consisting of 7 layers was used as the standard simulation geometry. To demonstrate the possibility of GPU cluster computing, the same GPU code was executed on four GPUs, showing a linear improvement in performance with an increasing number of GPUs. The GPU-based MCML code package, named GPU-MCML, is compatible with a wide range of graphics cards and is released as open-source software in two versions: an optimized version tuned for high performance and a simplified version for beginners (http://code.google.com/p/gpumcml). PMID:21258498

  7. Electron acceleration in the Solar corona - 3D PiC code simulations of guide field reconnection

    NASA Astrophysics Data System (ADS)

    Alejandro Munoz Sepulveda, Patricio

    2017-04-01

    The efficient electron acceleration in the solar corona, detected by means of hard X-ray emission, is still not well understood. Magnetic reconnection through current sheets is one of the proposed production mechanisms of non-thermal electrons in solar flares. Previous works in this direction were based mostly on test-particle calculations or 2D fully kinetic Particle-in-Cell (PiC) simulations. We have now studied the consequences of self-generated current-aligned instabilities on the electron acceleration mechanisms in 3D magnetic reconnection. For this purpose, we carried out 3D PiC numerical simulations of force-free reconnecting current sheets, appropriate for the description of solar coronal plasmas. We find efficient electron energization, evidenced by the formation of a non-thermal power-law tail with a hard spectral index smaller than -2 in the electron energy distribution function. We discuss and compare the influence of the parallel electric field versus the curvature and gradient drifts in the guiding-center approximation on the overall acceleration, and their dependence on different plasma parameters.

  8. [Human tolerance to Coriolis acceleration during exertion of different muscle groups].

    PubMed

    Aĭzikov, G S; Emel'ianov, M D; Ovechkin, V G

    1975-01-01

    The effect of an arbitrary loading of different muscle groups (shoulder, back, legs) and motor acts on the tolerance to Coriolis accelerations was investigated in 140 experiments in which 40 test subjects participated. The accelerations were cumulated and simulated by the Bryanov scheme. Muscle tension was accompanied by a less expressed vestibulo-vegetative reaction and shortening of the recovery period after the development of motion sickness symptoms. The greatest changes were observed during the performance of complex motor acts and tension of shoulder muscles. Possible mechanisms of these effects are discussed.

  9. 3D unstructured-mesh radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morel, J.

    1997-12-31

    Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations, and is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation, including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular-flux equation. DANTE also offers three angular discretization options: Sn (discrete-ordinates), Pn (spherical harmonics), and SPn (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard Sn discretization in angle in conjunction with trilinear-discontinuous spatial differencing and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.
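
    Why source iteration needs acceleration can be seen in a stripped-down toy (a hedged sketch, not any of these codes): in an infinite homogeneous medium the within-group iteration reduces to the scalar fixed point φ ← q + c·φ, whose error contracts by the scattering ratio c each sweep. As c → 1 (highly scattering problems) plain source iteration stalls, which is exactly what diffusion-synthetic acceleration (DSA) remedies.

```python
# Toy within-group source iteration (infinite-medium limit): iterate
# phi <- q + c * phi, converging to phi* = q / (1 - c). The error shrinks
# by a factor c per iteration, so convergence degrades as c -> 1.

def source_iteration(c, q, tol=1e-10, max_it=100000):
    phi, n = 0.0, 0
    while abs(q + c * phi - phi) > tol and n < max_it:
        phi = q + c * phi
        n += 1
    return phi, n

phi_lo, n_lo = source_iteration(0.5, 1.0)   # weak scattering: fast
phi_hi, n_hi = source_iteration(0.99, 1.0)  # strong scattering: slow
```

DSA replaces many of these slow sweeps by solving a cheap diffusion problem for the iteration error, restoring rapid convergence even for scattering ratios near unity.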

  10. Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hause, Benjamin; Parker, Scott

    2012-10-01

    We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with the GPU accelerator compiler directives. We have implemented the GPU acceleration on a Core i7 gaming PC with an NVIDIA GTX 580 GPU. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. Optimization strategies and comparisons between DIRAC and the gaming PC will be presented. We will also discuss progress on optimizing the comprehensive three-dimensional general-geometry GEM code.

  11. Accelerating Convolutional Sparse Coding for Curvilinear Structures Segmentation by Refining SCIRD-TS Filter Banks.

    PubMed

    Annunziata, Roberto; Trucco, Emanuele

    2016-11-01

    Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation, as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the number of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and therefore it is different from recent methods improving the optimisation itself. Our warm-start strategy uses carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures, which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method significantly reduces the time taken to learn convolutional filter banks (by up to 82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation when used as input to a random forest classifier.

  12. Particle acceleration and transport at a 2D CME-driven shock using the HAFv3 and PATH Code

    NASA Astrophysics Data System (ADS)

    Li, G.; Ao, X.; Fry, C. D.; Verkhoglyadova, O. P.; Zank, G. P.

    2012-12-01

    We study particle acceleration at a 2D CME-driven shock and the subsequent transport in the inner heliosphere (up to 2 AU) by coupling the kinematic Hakamada-Akasofu-Fry version 3 (HAFv3) solar wind model (Hakamada and Akasofu, 1982; Fry et al., 2003) with the Particle Acceleration and Transport in the Heliosphere (PATH) model (Zank et al., 2000; Li et al., 2003, 2005; Verkhoglyadova et al., 2009). HAFv3 provides the evolution of a two-dimensional shock geometry and other plasma parameters, which are fed into the PATH model to investigate the effect of a varying shock geometry on particle acceleration and transport. The transport module of the PATH model is parallelized and utilizes state-of-the-art GPU computation to achieve a rapid physics-based numerical description of interplanetary energetic particles. Together with a fast execution of the HAFv3 model, the coupled code makes it possible to nowcast/forecast the interplanetary radiation environment.

  13. Grouping in Short-Term Memory: Do Oscillators Code the Positions of Items?

    ERIC Educational Resources Information Center

    Ng, Honey L. H.; Maybery, Murray T.

    2005-01-01

    According to several current models of short-term memory, items are retained in order by associating them with positional codes. The models differ as to whether temporal oscillators provide those codes. The authors examined errors in recall of sequences comprising 2 groups of 4 consonants. A critical manipulation was the precise timing of items…

  14. Reference manual for the POISSON/SUPERFISH Group of Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-01-01

    The POISSON/SUPERFISH Group codes were set up to solve two separate problems: the design of magnets and the design of rf cavities in a two-dimensional geometry. The first stage of either problem is to describe the layout of the magnet or cavity in a way that can be used as input to solve the generalized Poisson equation for magnets or the Helmholtz equations for cavities. The computer codes require that the problems be discretized by replacing the differentials (dx, dy) by finite differences (ΔX, ΔY). Instead of defining the function everywhere in a plane, the function is defined only at a finite number of points on a mesh in the plane.
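
    The discretization step described above can be sketched in a minimal form (the textbook 5-point scheme with Jacobi relaxation; the actual POISSON code uses an irregular triangular mesh and more sophisticated solvers): replace the Laplacian in ∇²V = −ρ with finite differences on a uniform mesh, so each interior mesh point is repeatedly set to the average of its four neighbours plus the local source term.

```python
import numpy as np

# Hedged sketch of a finite-difference Poisson solve on a uniform mesh:
# (V_E + V_W + V_N + V_S - 4V) / h^2 = -rho  =>  V = (neighbours + h^2 rho) / 4,
# relaxed by Jacobi iteration with V = 0 held on the boundary.

def solve_poisson(rho, h, n_iter=5000):
    V = np.zeros_like(rho)
    for _ in range(n_iter):
        V[1:-1, 1:-1] = 0.25 * (V[2:, 1:-1] + V[:-2, 1:-1] +
                                V[1:-1, 2:] + V[1:-1, :-2] +
                                h**2 * rho[1:-1, 1:-1])
    return V

n, h = 33, 1.0 / 32
rho = np.zeros((n, n))
rho[n // 2, n // 2] = 1.0 / h**2  # unit charge concentrated at the central node
V = solve_poisson(rho, h)
```

The resulting potential peaks at the charge and decays smoothly to the grounded boundary, which is the qualitative behaviour the mesh-based field solve must reproduce for magnet and cavity design.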

  15. Computational Accelerator Physics. Proceedings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bisognano, J.J.; Mondelli, A.A.

    1997-04-01

    The sixty-two papers appearing in this volume were presented at CAP96, the Computational Accelerator Physics Conference held in Williamsburg, Virginia from September 24-27, 1996. Science Applications International Corporation (SAIC) and the Thomas Jefferson National Accelerator Facility (Jefferson Lab) jointly hosted CAP96, with financial support from the U.S. Department of Energy's Office of Energy Research and the Office of Naval Research. Topics ranged from descriptions of specific codes to advanced computing techniques and numerical methods. Update talks were presented on nearly all of the accelerator community's major electromagnetic and particle tracking codes. Thirty of the papers are abstracted for the Energy Science and Technology database. (AIP)

  16. [Research advances in genomic GYP coding for MNS blood group antigens].

    PubMed

    Liu, Chang-Li; Zhao, Wei-Jun

    2012-02-01

    The MNS blood group system includes more than 40 antigens, of which M, N, S and s are the most significant. The antigenic determinants of the M and N antigens lie at the top of GPA on the surface of red blood cells, while those of the S and s antigens lie at the top of GPB. The GYPA gene coding for GPA and the GYPB gene coding for GPB are located on the long arm of chromosome 4 and display 95% sequence homology; both lie close to the GYPE gene, which does not express a product. Together these three genes form the "GYPA-GYPB-GYPE" structure called the GYP genome. This review focuses on the molecular basis of the GYP genome and on its variation in the expression of the diverse MNS blood group antigens. The molecular basis of the Miltenberger hybrid glycophorin polymorphism is specifically expounded.

  17. [Differentiation of coding quality in orthopaedics by special, illustration-oriented case group analysis in the G-DRG System 2005].

    PubMed

    Schütz, U; Reichel, H; Dreinhöfer, K

    2007-01-01

    We introduce a grouping system for clinical practice that separates DRG coding into specific orthopaedic groups based on anatomic regions, operative procedures, therapeutic interventions and morbidity-equivalent diagnosis groups. This makes a differentiated, goal-oriented analysis of internal DRG data possible. The group-specific difference in coding quality between primary coding by the orthopaedic surgeon and final coding by the medical controlling department is analysed. In a consecutive series of 1600 patients, parallel documentation and group-specific comparison of the relevant DRG parameters were carried out for every case after primary and final coding. In the group-specific share of additional case-mix coding, the group "spine surgery" dominated, closely followed by the groups "arthroplasty" and "surgery due to infection, tumours, diabetes". Altogether, additional cost-weight-relevant coding was needed most frequently in the latter group (84%), followed by the group "spine surgery" (65%). In DRGs representing conservative orthopaedic treatment, documented procedures had almost no influence on the cost weight. The introduced system of case-group analysis in internal DRG documentation can detect specific problems in primary coding and cost-weight-relevant changes in the case mix. As an instrument for internal process control in the orthopaedic field, it can serve as a communicative interface between an economically oriented classification of hospital performance and specific problem-solving by the medical staff involved in department management.

  18. Investigating the adiabatic beam grouping at the NICA accelerator complex

    NASA Astrophysics Data System (ADS)

    Brovko, O. I.; Butenko, A. V.; Grebentsov, A. Yu.; Eliseev, A. V.; Meshkov, I. N.; Svetov, A. L.; Sidorin, A. O.; Slepnev, V. M.

    2016-12-01

    The NICA complex comprises the Booster and Nuclotron synchrotrons, which accelerate particle beams to the required energy, and the Collider, in which particle collisions are studied. The experimental heavy-ion program deals with ions up to Au79+; the light-ion program deals with polarized deuterons and protons. Grouping of a coasting ion beam is required in many parts of the complex. Beam grouping can, however, increase the longitudinal emittance and particle losses. To avoid these negative effects, various regimes of adiabatic grouping have been simulated and dedicated experiments with a deuteron beam have been conducted at the Nuclotron. As a result, we are able to construct and optimize the beam-grouping equipment, which provides a capture efficiency near 100% while either retaining or varying the harmonic multiplicity of the RF system.

  19. Status of MAPA (Modular Accelerator Physics Analysis) and the Tech-X Object-Oriented Accelerator Library

    NASA Astrophysics Data System (ADS)

    Cary, J. R.; Shasharina, S.; Bruhwiler, D. L.

    1998-04-01

    The MAPA code is a fully interactive accelerator modeling and design tool consisting of a GUI and two object-oriented C++ libraries: a general library suitable for treatment of any dynamical system, and an accelerator library including many element types plus an accelerator class. The accelerator library inherits directly from the system library, which uses hash tables to store any relevant parameters or strings. The GUI can access these hash tables in a general way, allowing the user to invoke a window displaying all relevant parameters for a particular element type or for the accelerator class, with the option to change those parameters. The system library can advance an arbitrary number of dynamical variables through an arbitrary mapping. The accelerator class inherits this capability and overloads the relevant functions to advance the phase space variables of a charged particle through a string of elements. Among other things, the GUI makes phase space plots and finds fixed points of the map. We discuss the object hierarchy of the two libraries and use of the code.
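    The inheritance pattern described above — a general dynamical-system library whose advance capability is inherited and overloaded by an accelerator class — can be sketched as follows. Python is used here for brevity and all names are invented; the real MAPA libraries are C++ and store parameters in hash tables (a plain dict plays that role below).

```python
class DynamicalSystem:
    """General system: advances dynamical variables through a mapping.
    Parameters live in a dict (standing in for the C++ hash tables)."""
    def __init__(self, **params):
        self.params = dict(params)

    def advance(self, state):
        raise NotImplementedError  # overloaded by subclasses

class Drift(DynamicalSystem):
    """Toy accelerator element: a drift of length L acting on the
    transverse phase-space variables (x, xp)."""
    def advance(self, state):
        x, xp = state
        return (x + self.params["L"] * xp, xp)

class Accelerator(DynamicalSystem):
    """Accelerator class: a string of elements. Advancing the phase
    space means applying each element's map in turn."""
    def __init__(self, elements):
        super().__init__()
        self.elements = list(elements)

    def advance(self, state):
        for element in self.elements:
            state = element.advance(state)
        return state
```

    A GUI layer like MAPA's can then introspect `params` generically to display and edit any element's settings without knowing its type.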

  20. Automatic detection of lameness in gestating group-housed sows using positioning and acceleration measurements.

    PubMed

    Traulsen, I; Breitenberger, S; Auer, W; Stamer, E; Müller, K; Krieter, J

    2016-06-01

    Lameness is an important issue in group-housed sows, and automatic detection systems are a beneficial diagnostic tool to support management. The aim of the present study was to evaluate data from a positioning system, including acceleration measurements, for detecting lameness in group-housed sows. Data were acquired at the Futterkamp research farm from May 2012 until April 2013. In the gestation unit, 212 group-housed sows were equipped with an ear sensor sampling position and acceleration once per second for each sow. Three activity indices were calculated per sow and day: the path length walked by the sow during the day (Path), the number of squares (25×25 cm) visited during the day (Square) and the variance of the acceleration measurements during the day (Acc). In addition, data on lameness treatments of the sows and a weekly lameness score were used as reference systems. To determine the influence of a lameness event, all indices were analysed in a linear random regression model. Test day, parity class and day before treatment had a significant influence on all activity indices (P<0.05). In healthy sows, the Path and Square indices increased with increasing parity, whereas the variance slightly decreased. The Path and Square indices showed a decreasing trend in the 14-day period before a lameness treatment and, to a smaller extent, before a lameness score of 2 (severe lameness). For the acceleration index (Acc), there was no obvious difference between the lame and non-lame periods. In conclusion, positioning and acceleration measurements with ear sensors can be used to describe the activity pattern of sows. However, improvements in sampling rate and analysis techniques are needed for practical application as an automatic lameness detection system.
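    The three daily indices have simple definitions, which the sketch below computes from per-second position samples (metres) and acceleration samples. The function name, units and the plain population-variance definition are illustrative assumptions, not the study's actual pipeline.

```python
import math

CELL = 0.25  # grid cell edge in metres (the 25 cm x 25 cm squares)

def activity_indices(track, acc):
    """Return (Path, Square, Acc) for one sow-day.

    Path   - total distance walked, summed over consecutive positions
    Square - number of distinct 25x25 cm grid cells visited
    Acc    - variance of the acceleration samples
    """
    path = sum(math.dist(track[i], track[i + 1])
               for i in range(len(track) - 1))
    squares = len({(int(x // CELL), int(y // CELL)) for x, y in track})
    mean = sum(acc) / len(acc)
    var = sum((a - mean) ** 2 for a in acc) / len(acc)
    return path, squares, var
```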

  1. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, Panagiotis (Fermilab); Cary, John

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single-physics-process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  2. I-Ching, dyadic groups of binary numbers and the geno-logic coding in living bodies.

    PubMed

    Hu, Zhengbing; Petoukhov, Sergey V; Petukhova, Elena S

    2017-12-01

    The ancient Chinese book I-Ching was written a few thousand years ago. It introduces the system of symbols Yin and Yang (equivalents of 0 and 1) and had a powerful impact on the culture, medicine and science of ancient China and several other countries. From the modern standpoint, the I-Ching declares the importance of dyadic groups of binary numbers for nature. The system of the I-Ching is represented by tables of dyadic groups of 4 bigrams, 8 trigrams and 64 hexagrams, which were declared fundamental archetypes of nature. The ancient Chinese did not know about the genetic code of protein amino acid sequences, but this code is organized in accordance with the I-Ching: in particular, the genetic code is constructed on DNA molecules using 4 nitrogenous bases, 16 doublets, and 64 triplets. The article also describes the usage of dyadic groups as a foundation of the bio-mathematical doctrine of the geno-logic code, which exists in parallel with the known genetic code of amino acids but serves a different goal: to code the inherited algorithmic processes using logical holography and the spectral logic of systems of genetic Boolean functions. Some relations of this doctrine with the I-Ching are discussed. In addition, the ratios of musical harmony that can be revealed in the parameters of DNA structure are also represented in the I-Ching.
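    The numeric correspondence the authors point to — 4 bases ↔ 4 bigrams, 64 triplets ↔ 64 hexagrams — follows directly from dyadic grouping and can be made concrete in a few lines. The particular base-to-bigram assignment below is an arbitrary illustration, not the paper's:

```python
from itertools import product

# Assumed, purely illustrative assignment of a 2-bit "bigram" to
# each DNA base; any bijection gives the same counting result.
BIGRAM = {"C": "00", "A": "01", "T": "10", "G": "11"}

# All 64 triplets (codons) over the 4 bases ...
codons = ["".join(p) for p in product("CATG", repeat=3)]

# ... each paired one-to-one with a 6-bit string, i.e. a hexagram.
hexagrams = {c: "".join(BIGRAM[b] for b in c) for c in codons}
```

    The dictionary pairs every codon with a distinct 6-bit string, which is exactly the dyadic-group structure of the 64 hexagrams.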

  3. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courau, T.; Plagne, L.; Ponicot, A.

    2012-07-01

    When dealing with nuclear reactor calculation schemes, three-dimensional (3D) transport-based reference solutions are essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best-estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies of less than 25 pcm for k_eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)

  4. Beam breakup in an advanced linear induction accelerator

    DOE PAGES

    Ekdahl, Carl August; Coleman, Joshua Eugene; McCuistian, Brian Trent

    2016-07-01

    Two linear induction accelerators (LIAs) have been in operation for a number of years at the Los Alamos Dual Axis Radiographic Hydrodynamic Test (DARHT) facility, and a new multipulse LIA is being developed. We have computationally investigated the beam breakup (BBU) instability in this advanced LIA. In particular, we have explored the consequences of the choice of beam injector energy and the grouping of LIA cells. We find that within the limited range of options presently under consideration for the LIA architecture, there is little adverse effect on BBU growth. The computational tool that we used for this investigation was the beam dynamics code LAMDA (linear accelerator model for DARHT). To confirm that LAMDA was appropriate for this task, we first validated it through comparisons with the experimental BBU data acquired on the DARHT accelerators.

  5. The Socioaffective Impact of Acceleration and Ability Grouping: Recommendations for Best Practice

    ERIC Educational Resources Information Center

    Neihart, Maureen

    2007-01-01

    Although the academic gains associated with acceleration and peer ability grouping are well documented, resistance to their use for gifted students continues because of concerns that such practices will cause social or emotional harm to students. Results from the broad research indicate that grade skipping, early school entrance, and early…

  6. Beam-dynamics codes used at DARHT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Jr., Carl August

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production, the Tricomp Trak orbit-tracking code and the LSP particle-in-cell (PIC) code; for beam transport and acceleration, the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code; for coasting-beam transport to target, the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.

  7. Dynamic Monte Carlo simulations of radiatively accelerated GRB fireballs

    NASA Astrophysics Data System (ADS)

    Chhotray, Atul; Lazzati, Davide

    2018-05-01

    We present a novel Dynamic Monte Carlo code (DynaMo code) that self-consistently simulates the Compton-scattering-driven dynamic evolution of a plasma. We use the DynaMo code to investigate the time-dependent expansion and acceleration of dissipationless gamma-ray burst fireballs by varying their initial opacities and baryonic content. We study the opacity and energy-density evolution of an initially optically thick, radiation-dominated fireball across its entire phase space, in particular during the R_ph < R_sat regime. Our results reveal new phases of fireball evolution: a transition phase with a radial extent of several orders of magnitude, in which the fireball transitions from Γ ∝ R to Γ ∝ R⁰; a post-photospheric acceleration phase, in which fireballs accelerate beyond the photosphere; and a Thomson-dominated acceleration phase, characterized by slow acceleration of optically thick, matter-dominated fireballs due to Thomson scattering. We quantify the new phases by providing analytical expressions for the Lorentz-factor evolution, which will be useful for deriving jet parameters.

  8. Dissemination and support of ARGUS for accelerator applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The ARGUS code is a three-dimensional code system for simulating interactions between charged particles, electric and magnetic fields, and complex structures. It is a system of modules that share common utilities for grid and structure input, data handling, memory management, diagnostics, and other specialized functions. The code includes the fields due to the space charge and current density of the particles to achieve a self-consistent treatment of the particle dynamics. The physics modules in ARGUS include three-dimensional field solvers for electrostatics and electromagnetics, a three-dimensional electromagnetic frequency-domain module, a full particle-in-cell (PIC) simulation module, and a steady-state PIC model. These are described in the Appendix to this report. This project has the primary mission of developing the capabilities of ARGUS in accelerator modeling for release to the accelerator design community. Five major activities are being pursued in parallel during the first year of the project: to improve the code and/or add new modules that provide capabilities needed for accelerator design; to produce a User's Guide that documents the use of the code for all users; to release the code and the User's Guide to accelerator laboratories for their own use, and to obtain feedback from them; to build an interactive user interface for setting up ARGUS calculations; and to explore the use of ARGUS on high-power workstation platforms.

  9. Reimbursement Policies for Carotid Duplex Ultrasound that are Based on International Classification of Diseases Codes May Discourage Testing in High-Yield Groups.

    PubMed

    Go, Michael R; Masterson, Loren; Veerman, Brent; Satiani, Bhagwan

    2016-02-01

    To curb increasing volumes of diagnostic imaging and costs, reimbursement for carotid duplex ultrasound (CDU) is dependent on "appropriate" indications as documented by International Classification of Diseases (ICD) codes entered by ordering physicians. Historically, asymptomatic indications for CDU yield lower rates of abnormal results than symptomatic indications, and consensus documents agree that most asymptomatic indications for CDU are inappropriate. In our vascular laboratory, we perceived an increased rate of incorrect or inappropriate ICD codes. We therefore sought to determine whether ICD codes were useful in predicting the frequency of abnormal CDU. We hypothesized that asymptomatic or nonspecific ICD codes would yield a lower rate of abnormal CDU than symptomatic codes, validating efforts to limit reimbursement in asymptomatic, low-yield groups. We reviewed all outpatient CDU done in 2011 at our institution. ICD codes were recorded, and each medical record was then reviewed by a vascular surgeon to determine whether the assigned ICD code appropriately reflected the clinical scenario. CDU findings categorized as abnormal (>50% stenosis) or normal (<50% stenosis) were recorded. Each individual ICD code, as well as group 1 (asymptomatic), group 2 (nonhemispheric symptoms), group 3 (hemispheric symptoms), group 4 (preoperative cardiovascular examination), and group 5 (nonspecific) ICD codes, was analyzed for correlation with CDU results. Nine hundred ninety-four patients had 74 primary ICD codes listed as indications for CDU. Of the assigned ICD codes, 17.4% were deemed inaccurate. Overall, 14.8% of CDU were abnormal. Of the 13 highest-frequency ICD codes, only 433.10, an asymptomatic code, was associated with abnormal CDU. Four symptomatic codes were associated with normal CDU; none of the other high-frequency codes was associated with the CDU result.
Patients in group 1 (asymptomatic) were significantly more likely to have an abnormal CDU compared to each of the other groups (P

  10. Convergence Acceleration and Documentation of CFD Codes for Turbomachinery Applications

    NASA Technical Reports Server (NTRS)

    Marquart, Jed E.

    2005-01-01

    The development and analysis of turbomachinery components for industrial and aerospace applications has been greatly enhanced in recent years through the advent of computational fluid dynamics (CFD) codes and techniques. Although the use of this technology has greatly reduced the time required to perform analysis and design, there still remains much room for improvement in the process. In particular, there is a steep learning curve associated with most turbomachinery CFD codes, and the computation times need to be reduced in order to facilitate their integration into standard work processes. Two turbomachinery codes have recently been developed by Dr. Daniel Dorney (MSFC) and Dr. Douglas Sondak (Boston University). These codes are entitled Aardvark (for 2-D and quasi-3-D simulations) and Phantom (for 3-D simulations). The codes utilize the General Equation Set (GES), structured grid methodology, and overset O- and H-grids. The codes have been used with success by Drs. Dorney and Sondak, as well as others within the turbomachinery community, to analyze engine components and other geometries. One of the primary objectives of this study was to establish a set of parametric input values which will enhance convergence rates for steady-state simulations, as well as reduce the runtime required for unsteady cases. The goal is to reduce the turnaround time for CFD simulations, thus permitting more design parametrics to be run within a given time period. In addition, other code enhancements to reduce runtimes were investigated and implemented. The other primary goal of the study was to develop enhanced user's manuals for Aardvark and Phantom. These manuals are intended to answer most questions for new users, as well as provide valuable detailed information for the experienced user. The existence of detailed user's manuals will enable new users to become proficient with the codes, as well as reducing the dependency of new users on the code authors. In order to achieve the…

  11. Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators

    PubMed Central

    Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew

    2014-01-01

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case-study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved more than speedup above the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs of at least faster than the sequential implementation and faster than a parallelized OpenMP implementation. An implementation of OpenMP on Intel MIC coprocessor provided speedups of with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in…

  12. A novel QC-LDPC code based on the finite field multiplicative group for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen

    2013-09-01

    A novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite field multiplicative group, which offers easier construction, more flexible adjustment of code length and code rate, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code achieves better error-correction performance over the additive white Gaussian noise (AWGN) channel with iterative sum-product algorithm (SPA) decoding. At a bit error rate (BER) of 10^-6, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB more than that of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. It is therefore more suitable for optical communication systems.
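    The general recipe behind such constructions — an exponent matrix derived from a multiplicative group, expanded into circulant permutation matrices — can be sketched as follows. This is a generic toy construction with made-up parameters, not the paper's QC-LDPC(5334,4962) code.

```python
import numpy as np

def circulant(size, shift):
    """size x size circulant permutation matrix: the identity with
    its columns cyclically shifted by `shift`."""
    return np.roll(np.eye(size, dtype=int), shift % size, axis=1)

def qc_ldpc_H(rows, cols, p, a, b):
    """Toy QC-LDPC parity-check matrix. Exponents e(i, j) = a^i * b^j
    are taken in the multiplicative group mod a prime p; each exponent
    becomes the shift of a p x p circulant permutation block. The
    result is a (rows, cols)-regular quasi-cyclic matrix."""
    blocks = [[circulant(p, pow(a, i, p) * pow(b, j, p) % p)
               for j in range(cols)]
              for i in range(rows)]
    return np.block(blocks)
```

    Because every block is a permutation matrix, each row of H has weight `cols` and each column weight `rows`, giving the regular structure that makes QC-LDPC encoding and decoding hardware-friendly.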

  13. Accelerator-based validation of shielding codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeitlin, Cary; Heilbronn, Lawrence; Miller, Jack

    2002-08-12

    The space radiation environment poses risks to astronaut health from a diverse set of sources, ranging from low-energy protons and electrons to highly charged, high-energy atomic nuclei and their associated fragmentation products, including neutrons. The low-energy protons and electrons are the source of most of the radiation dose to Shuttle and ISS crews, while the more energetic particles that comprise the Galactic Cosmic Radiation (protons, He, and heavier nuclei up to Fe) will be the dominant source for crews on long-duration missions outside the earth's magnetic field. Because of this diversity of sources, a broad ground-based experimental effort is required to validate the transport and shielding calculations used to predict doses and dose-equivalents under various mission scenarios. The experimental program of the LBNL group, described here, focuses principally on measurements of charged-particle and neutron production in high-energy heavy-ion fragmentation. Other aspects of the program include measurements of the shielding provided by candidate spacesuit materials against low-energy protons (particularly relevant to extravehicular activities in low-earth orbit), and the depth-dose relations in tissue for higher-energy protons. The heavy-ion experiments are performed at Brookhaven National Laboratory's Alternating Gradient Synchrotron and at the Heavy-Ion Medical Accelerator in Chiba, Japan. Proton experiments are performed at Lawrence Berkeley National Laboratory's 88-Inch Cyclotron with a 55 MeV beam, and at the Loma Linda University proton facility with 100 to 250 MeV beam energies. The experimental results are an important component of the overall shielding program, as they allow for simple, well-controlled tests of the models developed to handle the more complex radiation environment in space.

  14. Topological color codes on Union Jack lattices: a stable implementation of the whole Clifford group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katzgraber, Helmut G. (Theoretische Physik, ETH Zurich); Bombin, H.

    We study the error threshold of topological color codes on Union Jack lattices, which allow for the implementation of the whole Clifford group of quantum gates. After mapping the error-correction process onto a statistical-mechanical random three-body Ising model on a Union Jack lattice, we compute its phase diagram in the temperature-disorder plane using Monte Carlo simulations. Surprisingly, topological color codes on Union Jack lattices have an error stability similar to that of color codes on triangular lattices, as well as of the Kitaev toric code. The enhanced computational capabilities of topological color codes on Union Jack lattices with respect to triangular lattices and the toric code, combined with the inherent robustness of this implementation, show good prospects for future stable quantum computer implementations.

  15. EDITORIAL: Laser and plasma accelerators

    NASA Astrophysics Data System (ADS)

    Bingham, Robert

    2009-02-01

    This special issue illustrates the rapid advancement and diverse applications of laser and plasma accelerators. Plasma is an attractive medium for particle acceleration because of the high electric field it can sustain, and studies of acceleration processes remain one of the most important areas of research in both laboratory and astrophysical plasmas. The rapid advance in laser and accelerator technology has led to the development of terawatt and petawatt laser systems with ultra-high intensities and short sub-picosecond pulses, which are used to generate wakefields in plasma. Recent successes include the demonstration by several groups in 2004 of quasi-monoenergetic electron beams driven by wakefields in the bubble regime, the GeV energy barrier being reached in 2006, and the energy doubling of the SLAC high-energy electron beam from 42 to 85 GeV. The electron beams generated by laser-plasma-driven wakefields have good spatial quality, with energies ranging from MeV to GeV. A unique feature is that they are ultra-short bunches; simulations show that they can be as short as a few femtoseconds with low energy spread, making these beams ideal for a variety of applications, ranging from novel high-brightness radiation sources for medicine and materials science to ultrafast time-resolved radiobiology or chemistry. Laser-driven ion acceleration experiments have also made significant advances over the last few years, with applications in laser fusion, nuclear physics and medicine. Attention is focused on the possibility of producing quasi-monoenergetic ions with energies ranging from hundreds of MeV to GeV per nucleon. New acceleration mechanisms are being studied, including ion acceleration from ultra-thin foils and direct laser acceleration. 
The application of wakefields or beat waves in other areas of science, such as astrophysics and particle physics, is beginning to take off, for example in the study of cosmic accelerators…

  16. [Quality management and strategic consequences of assessing documentation and coding under the German Diagnostic Related Groups system].

    PubMed

    Schnabel, M; Mann, D; Efe, T; Schrappe, M; V Garrel, T; Gotzen, L; Schaeg, M

    2004-10-01

    The introduction of the German Diagnostic Related Groups (D-DRG) system requires redesigning administrative patient management strategies. Wrong coding leads to inaccurate grouping and endangers the reimbursement of treatment costs. This situation emphasizes the roles of documentation and coding as factors of economical success. The aims of this study were to assess the quantity and quality of initial documentation and coding (ICD-10 and OPS-301) and find operative strategies to improve efficiency and strategic means to ensure optimal documentation and coding quality. In a prospective study, documentation and coding quality were evaluated in a standardized way by weekly assessment. Clinical data from 1385 inpatients were processed for initial correctness and quality of documentation and coding. Principal diagnoses were found to be accurate in 82.7% of cases, inexact in 7.1%, and wrong in 10.1%. Effects on financial returns occurred in 16%. Based on these findings, an optimized, interdisciplinary, and multiprofessional workflow on medical documentation, coding, and data control was developed. Workflow incorporating regular assessment of documentation and coding quality is required by the DRG system to ensure efficient accounting of hospital services. Interdisciplinary and multiprofessional cooperation is recognized to be an important factor in establishing an efficient workflow in medical documentation and coding.

  17. Shielding calculations for industrial 5/7.5 MeV electron accelerators using the MCNP Monte Carlo Code

    NASA Astrophysics Data System (ADS)

    Peri, Eyal; Orion, Itzhak

    2017-09-01

    High-energy X-rays from accelerators are used to irradiate food in order to prevent the growth and development of unwanted biological organisms and thereby extend the shelf life of the products. The X-rays are produced by accelerating electrons to 5 MeV and stopping them in a heavy, high-atomic-number (high-Z) target. Since 2004, the FDA has approved the use of 7.5 MeV, providing higher production rates at lower treatment costs. In this study we calculated all the essential data needed for a straightforward concrete shielding design of typical food-accelerator rooms. The following evaluations were done using the MCNP Monte Carlo code system: (1) angular dependence (0-180°) of the photon dose rate for 5 MeV and 7.5 MeV electron beams bombarding iron, aluminum, gold, tantalum, and tungsten targets; (2) angular dependence (0-180°) of the simulated bremsstrahlung spectral distribution for gold, tantalum, and tungsten bombarded by 5 MeV and 7.5 MeV electron beams; (3) concrete attenuation calculations at several photon emission angles for 5 MeV and 7.5 MeV electron beams bombarding a tantalum target. Based on the simulations, we calculated the expected increase in dose rate for facilities intending to increase the energy from 5 MeV to 7.5 MeV, and the concrete thickness that would need to be added to keep the existing dose rate unchanged.
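
The last step described above, converting a dose-rate increase into added wall thickness, follows from simple broad-beam exponential attenuation. A minimal sketch, assuming a hypothetical tenth-value layer (TVL); the value below is a placeholder, not a result from this study:

```python
import math

# Broad-beam attenuation in concrete: D(x) = D0 * 10**(-x / TVL), where TVL
# is the tenth-value layer at the relevant photon energy. The TVL below is a
# HYPOTHETICAL placeholder, not a value computed in the paper.
TVL_CM = 45.0  # hypothetical tenth-value layer of ordinary concrete, in cm

def added_thickness(dose_increase_factor, tvl_cm=TVL_CM):
    """Extra concrete (cm) that restores the original transmitted dose rate
    after the source-side dose rate rises by dose_increase_factor."""
    return tvl_cm * math.log10(dose_increase_factor)

# If raising the electron energy from 5 to 7.5 MeV doubled the dose rate at
# the wall, roughly 0.3 TVL of extra concrete would compensate:
extra_cm = added_thickness(2.0)
```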

  18. Timing group delay and differential code bias corrections for BeiDou positioning

    NASA Astrophysics Data System (ADS)

    Guo, Fei; Zhang, Xiaohong; Wang, Jinling

    2015-05-01

    This article first clarifies the relationship between the timing group delay (TGD) and differential code bias (DCB) parameters for BDS, and demonstrates the equivalence of the TGD and DCB correction models both in theory and in practice. The TGD/DCB correction models are extended to various scenarios for BDS positioning, and the models are evaluated with real triple-frequency datasets. To test the effectiveness of the broadcast TGDs in the navigation message and the DCBs provided by the Multi-GNSS Experiment (MGEX), both standard point positioning (SPP) and precise point positioning (PPP) tests are carried out on BDS signals with different schemes. Furthermore, the influence of differential code biases on BDS positioning estimates such as coordinates, receiver clock biases, tropospheric delays and carrier phase ambiguities is investigated comprehensively. Comparative analysis shows that unmodeled differential code biases degrade the performance of BDS SPP by a factor of two or more, whereas the PPP estimates are subject to varying degrees of influence. For SPP, the accuracy of dual-frequency combinations is slightly worse than that of single-frequency solutions, and they are much more sensitive to the differential code biases, particularly the B2B3 combination. For PPP, uncorrected differential code biases are mostly absorbed into the receiver clock bias and carrier phase ambiguities, resulting in a much longer convergence time. Even though the influence of the differential code biases is mitigated over time and comparable positioning accuracy can be achieved after convergence, the differential code biases should be handled properly, since this is vital for PPP convergence and integer ambiguity resolution.
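
The single-frequency corrections discussed above can be sketched in a few lines. As commonly given in the literature, BDS broadcast TGDs are referenced to the B3 frequency, so B3 needs no correction while B1I and B2I pseudoranges are corrected with TGD1 and TGD2; the numeric TGD value below is illustrative only:

```python
C = 299792458.0  # speed of light, m/s

# Single-frequency TGD corrections as commonly given in the literature:
# B3 is the reference frequency and needs no correction.
def correct_b1(pseudorange_m, tgd1_s):
    return pseudorange_m - C * tgd1_s

def correct_b2(pseudorange_m, tgd2_s):
    return pseudorange_m - C * tgd2_s

# Illustrative numbers only: a few nanoseconds of uncorrected TGD maps to a
# metre-level range error, which is what degrades SPP as noted above.
tgd1_s = 5.0e-9                  # illustrative broadcast TGD1, seconds
range_error_m = C * tgd1_s       # roughly 1.5 m
```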

  19. GPU-Accelerated Large-Scale Electronic Structure Theory on Titan with a First-Principles All-Electron Code

    NASA Astrophysics Data System (ADS)

    Huhn, William Paul; Lange, Björn; Yu, Victor; Blum, Volker; Lee, Seyong; Yoon, Mina

    Density-functional theory has been well established as the dominant quantum-mechanical computational method in the materials community. Large accurate simulations become very challenging on small to mid-scale computers and require high-performance compute platforms to succeed. GPU acceleration is one promising approach. In this talk, we present a first implementation of all-electron density-functional theory in the FHI-aims code for massively parallel GPU-based platforms. Special attention is paid to the update of the density and to the integration of the Hamiltonian and overlap matrices, realized in a domain decomposition scheme on non-uniform grids. The initial implementation scales well across nodes on ORNL's Titan Cray XK7 supercomputer (8 to 64 nodes, 16 MPI ranks/node) and shows an overall speed up in runtime due to utilization of the K20X Tesla GPUs on each Titan node of 1.4x, with the charge density update showing a speed up of 2x. Further acceleration opportunities will be discussed. Work supported by the LDRD Program of ORNL managed by UT-Battle, LLC, for the U.S. DOE and by the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.

  20. Applying a graphics user interface to group technology classification and coding at the Boeing Aerospace Company

    NASA Astrophysics Data System (ADS)

    Ness, P. H.; Jacobson, H.

    1984-10-01

    The thrust of 'group technology' is toward the exploitation of similarities in component design and manufacturing process plans to achieve assembly-line flow cost efficiencies for small-batch production. The systematic method devised for the identification of similarities in component geometry and processing steps is a coding and classification scheme implemented on interactive CAD/CAM systems. Significant increases in computer processing power have made this coding and classification scheme practical, allowing rapid searches and retrievals on the basis of a 30-digit code together with user-friendly computer graphics.
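
The retrieval idea behind a fixed-width group-technology code can be sketched as follows; the field layout, part names, and code values here are hypothetical, not Boeing's actual 30-digit scheme:

```python
# Each digit field of the code encodes one design or process attribute;
# similar parts are retrieved by matching fields. HYPOTHETICAL layout.
FIELDS = {"shape_class": (0, 2), "material": (2, 4), "size_class": (4, 6)}

PARTS = {
    "bracket-A": "011203" + "0" * 24,
    "bracket-B": "011207" + "0" * 24,
    "shaft-C":   "230407" + "0" * 24,
}

def field(code, name):
    lo, hi = FIELDS[name]
    return code[lo:hi]

def retrieve(**wanted):
    """Parts whose 30-digit code matches every requested field value."""
    return sorted(name for name, code in PARTS.items()
                  if all(field(code, f) == v for f, v in wanted.items()))

# Both brackets share shape class "01" and material "12":
family = retrieve(shape_class="01", material="12")
```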

  1. Introduction of the ASGARD code (Automated Selection and Grouping of events in AIA Regional Data)

    NASA Astrophysics Data System (ADS)

    Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv K.; Fayock, Brian

    2017-08-01

    We have developed the ASGARD code to automatically detect and group brightenings ("events") in AIA data. The event selection and grouping can be optimized to the respective dataset with a multitude of control parameters. The code was initially written for IRIS data, but has since been optimized for AIA. However, the underlying algorithm is not limited to either and could be used for other data as well.Results from datasets in various AIA channels show that brightenings are reliably detected and that coherent coronal structures can be isolated by using the obtained information about the start, peak, and end times of events. We are presently working on a follow-up algorithm to automatically determine the heating and cooling timescales of coronal structures. This will be done by correlating the information from different AIA channels with different temperature responses. We will present the code and preliminary results.
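
The detect-and-group step can be illustrated generically: threshold a 2D intensity frame, then merge connected bright pixels into one event. This is a sketch of the idea only, not the actual ASGARD algorithm or its control parameters:

```python
from collections import deque

# Generic sketch: pixels above a threshold are "brightenings", and
# 4-connected bright pixels are grouped into one event (flood fill).
def group_events(frame, threshold):
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    events = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] > threshold and not seen[r][c]:
                group, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    group.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                events.append(group)
    return events

frame = [[0, 9, 9, 0],
         [0, 9, 0, 0],
         [0, 0, 0, 8]]
events = group_events(frame, threshold=5)   # two separate events
```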

  2. Intraoperative radiation therapy using mobile electron linear accelerators: report of AAPM Radiation Therapy Committee Task Group No. 72.

    PubMed

    Beddar, A Sam; Biggs, Peter J; Chang, Sha; Ezzell, Gary A; Faddegon, Bruce A; Hensley, Frank W; Mills, Michael D

    2006-05-01

    Intraoperative radiation therapy (IORT) has been customarily performed either in a shielded operating suite located in the operating room (OR) or in a shielded treatment room located within the Department of Radiation Oncology. In both cases, this cancer treatment modality uses stationary linear accelerators. With the development of new technology, mobile linear accelerators have recently become available for IORT. Mobility offers flexibility in treatment location and is leading to a renewed interest in IORT. These mobile accelerator units, which can be transported on the day of use to almost any location within a hospital setting, are assembled in a nondedicated environment and used to deliver IORT. Numerous aspects of the design of these new units differ from those of conventional linear accelerators. The scope of this Task Group (TG-72) will focus on items that particularly apply to mobile IORT electron systems. More specifically, the charges to this Task Group are to (i) identify the key differences between stationary and mobile electron linear accelerators used for IORT, (ii) describe and recommend the implementation of an IORT program within the OR environment, (iii) present and discuss radiation protection issues and consequences of working within a nondedicated radiotherapy environment, (iv) describe and recommend the acceptance and machine commissioning of items that are specific to mobile electron linear accelerators, and (v) design and recommend an efficient quality assurance program for mobile systems.

  3. The revised burn diagram and its effect on diagnosis-related group coding.

    PubMed

    Turner, D G; Berger, N; Weiland, A P; Jordan, M H

    1996-01-01

    Diagnosis-related group (DRG) codes for burn injuries are defined by thresholds of the percentage of total body surface area and depth of burns, and by whether surgery, debridement, or grafting or both occurred. This prospective study was designed to determine whether periodic revisions of the burn diagram resulted in more accurate assignment of the International Classification of Diseases and DRG codes. The admission burn diagrams were revised after admission and after each surgical procedure. All areas grafted (deep second- and third-degree burns) were diagrammed as "third-degree," following the current convention that both are biologically the same and require grafting. The multiple diagrams from 82 charts were analyzed to determine the disparities in the percentage of total body surface area burn and the percentage of body surface area third-degree burn. The revised diagrams differed from the admission diagrams in 96.5% of the cases. In 77% of the cases, the revised diagram correctly depicted the percentage of body surface area third-degree burn as confirmed intraoperatively. In 7.3% of the cases, diagram revision changed the DRG code. Documenting wound evolution in this manner allows more accurate assignment of the International Classification of Diseases and DRG codes, assuring optimal reimbursement under the prospective payment system.
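
The threshold logic that lets a diagram revision change the assigned code can be sketched as a toy grouper. The cutoff values and group labels below are hypothetical, not the actual DRG definitions:

```python
# HYPOTHETICAL threshold-based grouping in the spirit of burn DRGs: the
# group depends on %TBSA burned, %TBSA third-degree, and grafting status.
def burn_group(tbsa_pct, third_degree_pct, grafted):
    if tbsa_pct >= 20 and third_degree_pct >= 10:
        return "extensive-burn-with-graft" if grafted else "extensive-burn"
    return "non-extensive-burn-with-graft" if grafted else "non-extensive-burn"

# Revising the diagram after surgery can move a patient across a threshold
# and thereby change the assigned group:
admission = burn_group(25, 8, grafted=True)   # below the third-degree cutoff
revised   = burn_group(25, 12, grafted=True)  # crosses it after revision
```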

  4. Predictive coding accelerates word recognition and learning in the early stages of language development.

    PubMed

    Ylinen, Sari; Bosseler, Alexis; Junttila, Katja; Huotilainen, Minna

    2017-11-01

    The ability to predict future events in the environment and learn from them is a fundamental component of adaptive behavior across species. Here we propose that inferring predictions facilitates speech processing and word learning in the early stages of language development. Twelve- and 24-month olds' electrophysiological brain responses to heard syllables are faster and more robust when the preceding word context predicts the ending of a familiar word. For unfamiliar, novel word forms, however, word-expectancy violation generates a prediction error response, the strength of which significantly correlates with children's vocabulary scores at 12 months. These results suggest that predictive coding may accelerate word recognition and support early learning of novel words, including not only the learning of heard word forms but also their mapping to meanings. Prediction error may mediate learning via attention, since infants' attention allocation to the entire learning situation in natural environments could account for the link between prediction error and the understanding of word meanings. On the whole, the present results on predictive coding support the view that principles of brain function reported across domains in humans and non-human animals apply to language and its development in the infant brain. A video abstract of this article can be viewed at: http://hy.fi/unitube/video/e1cbb495-41d8-462e-8660-0864a1abd02c. © 2016 John Wiley & Sons Ltd.

  5. Estimation of dose delivered to accelerator devices from stripping of 18.5 MeV/n 238U ions using the FLUKA code

    NASA Astrophysics Data System (ADS)

    Oranj, Leila Mokhtari; Lee, Hee-Seock; Leitner, Mario Santana

    2017-12-01

    In Korea, a heavy-ion accelerator facility (RAON) has been designed for the production of rare isotopes. The 90° bending section of this accelerator includes a 1.3-μm carbon stripper followed by two dipole magnets and other devices. The incident beam consists of 18.5 MeV/n 238U33+,34+ ions passing through the carbon stripper at the beginning of the section. The two dipoles are tuned to transport 238U ions with the specific charge states 77+, 78+, 79+, 80+ and 81+; ions in other charge states are deflected at the bends and cause beam losses. These beam losses are a concern for the beam-line devices. The absorbed dose in the devices and the prompt dose in the tunnel were calculated using the FLUKA code in order to estimate the radiation damage to the devices located in the 90° bending section and for radiation protection purposes. A novel method to transport the multi-charged 238U ion beam was applied in the FLUKA code, using the charge distribution of 238U ions after the stripper obtained from the LISE++ code. The calculated results showed that the absorbed dose in the devices is influenced by the geometrical arrangement. The maximum dose was observed at the coils of the first, second, fourth and fifth quadrupoles placed after the first dipole magnet. The integrated doses for 30 years of operation with 9.5 pμA of 238U ions were about 2 MGy for those quadrupoles. In conclusion, protection of the devices, particularly the quadrupoles, would be necessary to reduce the damage to them. Moreover, the results showed that the prompt radiation penetrated within the first 60-120 cm of concrete.
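
The beam-loss bookkeeping described above can be sketched in a few lines: after the stripper the beam is spread over many charge states, but the dipoles transport only 77+ through 81+. The charge-state fractions below are illustrative, not the LISE++ distribution used in the paper:

```python
# ILLUSTRATIVE post-stripper charge-state fractions for the U-238 beam.
charge_fractions = {75: 0.04, 76: 0.09, 77: 0.15, 78: 0.19, 79: 0.20,
                    80: 0.16, 81: 0.10, 82: 0.05, 83: 0.02}

ACCEPTED = range(77, 82)  # charge states the dipoles are tuned to transport

transported = sum(f for q, f in charge_fractions.items() if q in ACCEPTED)
lost = sum(charge_fractions.values()) - transported  # deposited in devices
```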

  6. Radiation Protection Studies for Medical Particle Accelerators using Fluka Monte Carlo Code.

    PubMed

    Infantino, Angelo; Cicoria, Gianfranco; Lucconi, Giulia; Pancaldi, Davide; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano; Marengo, Mario

    2017-04-01

    Radiation protection (RP) in the use of medical cyclotrons involves many aspects both in the routine use and for the decommissioning of a site. Guidelines for site planning and installation, as well as for RP assessment, are given in international documents; however, the latter typically offer analytic methods of calculation of shielding and materials activation, in approximate or idealised geometry set-ups. The availability of Monte Carlo (MC) codes with accurate up-to-date libraries for transport and interaction of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of modern computers, makes the systematic use of simulations with realistic geometries possible, yielding equipment and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to RP at the same time. In this work, the well-known FLUKA MC code was used to simulate different aspects of RP in the use of biomedical accelerators, particularly for the production of medical radioisotopes. In the context of the Young Professionals Award, held at the IRPA 14 conference, only a part of the complete work is presented. In particular, the simulation of the GE PETtrace cyclotron (16.5 MeV) installed at S. Orsola-Malpighi University Hospital evaluated the effective dose distribution around the equipment; the effective number of neutrons produced per incident proton and their spectral distribution; the activation of the structure of the cyclotron and the vault walls; the activation of the ambient air, in particular the production of 41Ar. The simulations were validated, in terms of physical and transport parameters to be used at the energy range of interest, through an extensive measurement campaign of the neutron environmental dose equivalent using a rem-counter and TLD dosemeters. The validated model was then used in the design and the licensing request of a new Positron Emission Tomography facility. © The Author 2016

  7. GPU Optimizations for a Production Molecular Docking Code*

    PubMed Central

    Landaverde, Raphael; Herbordt, Martin C.

    2015-01-01

    Modeling molecular docking is critical to both understanding life processes and designing new drugs. In previous work we created the first published GPU-accelerated docking code (PIPER), which achieved a roughly 5× speed-up over a contemporaneous 4-core CPU. Advances in GPU architecture and in the CPU code, however, have since reduced this relative performance by a factor of 10. In this paper we describe the upgrade of GPU PIPER. This required an entire rewrite, including algorithm changes and moving most remaining non-accelerated CPU code onto the GPU. The result is a 7× improvement in GPU performance and a 3.3× speedup over the CPU-only code. We find that this difference in time is almost entirely due to the difference in run times of the 3D FFT library functions on the CPU (MKL) and GPU (cuFFT), respectively. The GPU code has been integrated into the ClusPro docking server, which has over 4000 active users. PMID:26594667
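
The reason the 3D FFT dominates the run time is that FFT-based docking evaluates all grid correlations at once via the identity corr(R, L) = IFFT(conj(FFT(R)) · FFT(L)). A 1D toy check of that identity against a direct sum (production codes do the same in 3D with MKL or cuFFT):

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) DFT; the inverse includes the 1/n normalization."""
    n, s = len(x), (1 if inverse else -1)
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def correlate_fft(r, l):
    """Circular cross-correlation via the Fourier correlation theorem."""
    fr, fl = dft(r), dft(l)
    prod = [a.conjugate() * b for a, b in zip(fr, fl)]
    return [v.real for v in dft(prod, inverse=True)]

def correlate_direct(r, l):
    """Same quantity by direct summation: corr[s] = sum_k r[k] * l[k+s]."""
    n = len(r)
    return [sum(r[k] * l[(k + shift) % n] for k in range(n))
            for shift in range(n)]

r = [1.0, 2.0, 0.0, -1.0]
l = [0.5, 0.0, 1.0, 3.0]
```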

  8. GPU Optimizations for a Production Molecular Docking Code.

    PubMed

    Landaverde, Raphael; Herbordt, Martin C

    2014-09-01

    Modeling molecular docking is critical to both understanding life processes and designing new drugs. In previous work we created the first published GPU-accelerated docking code (PIPER), which achieved a roughly 5× speed-up over a contemporaneous 4-core CPU. Advances in GPU architecture and in the CPU code, however, have since reduced this relative performance by a factor of 10. In this paper we describe the upgrade of GPU PIPER. This required an entire rewrite, including algorithm changes and moving most remaining non-accelerated CPU code onto the GPU. The result is a 7× improvement in GPU performance and a 3.3× speedup over the CPU-only code. We find that this difference in time is almost entirely due to the difference in run times of the 3D FFT library functions on the CPU (MKL) and GPU (cuFFT), respectively. The GPU code has been integrated into the ClusPro docking server, which has over 4000 active users.

  9. [Code of ethics for nurses and territory hospital group].

    PubMed

    Danan, Jane-Laure; Giraud-Rochon, François

    2017-09-01

    The publication of the decree relating to the code of ethics for nurses means that the State is producing a text for all nursing professionals, whatever their sector or their mode of practice. However, faced with the standardisation of nursing procedures, the production of a new standard by a government is not a neutral issue. On the one hand, it could constitute a reinforcement of the professional credibility of this corporation; on the other this text becomes enforceable on all nurses and employers. Within a territory hospital group, this reflection must form part of nursing and managerial practices and the relationships with the hospital administration. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  10. New features in the design code Tlie

    NASA Astrophysics Data System (ADS)

    van Zeijts, Johannes

    1993-12-01

    We present features recently installed in the arbitrary-order accelerator design code Tlie. The code uses the MAD input language and implements programmable extensions modeled after the C language that make it a powerful tool in a wide range of applications: from basic beamline design to high-precision, high-order design and even control-room applications. The basic quantities important in accelerator design are easily accessible from inside the control language. Entities like parameters in elements (strength, current), transfer maps (either in Taylor series or in Lie algebraic form), lines, and beams (either as sets of particles or as distributions) are among the types of variables available. These variables can be set, used as arguments in subroutines, or simply typed out. The code is easily extensible with new datatypes.
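
The idea of transfer maps as first-class, composable objects can be sketched at first order, where each element is a 2×2 matrix acting on the phase-space vector (x, x') and the map of a line is the product of its element maps. A minimal illustration, not Tlie's API:

```python
def matmul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def drift(length):
    return [[1.0, length], [0.0, 1.0]]

def thin_quad(focal_length):
    return [[1.0, 0.0], [-1.0 / focal_length, 1.0]]

def line_map(elements):
    """Elements listed in traversal order; total map is M_n ... M_2 M_1."""
    total = [[1.0, 0.0], [0.0, 1.0]]
    for m in elements:
        total = matmul(m, total)
    return total

# drift(f) -> thin lens(f) -> drift(f) yields [[0, f], [-1/f, 0]]:
f = 2.0
m = line_map([drift(f), thin_quad(f), drift(f)])
```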

  11. Two-dimensional spatiotemporal coding of linear acceleration in vestibular nuclei neurons

    NASA Technical Reports Server (NTRS)

    Angelaki, D. E.; Bush, G. A.; Perachio, A. A.

    1993-01-01

    Response properties of vertical (VC) and horizontal (HC) canal/otolith-convergent vestibular nuclei neurons were studied in decerebrate rats during stimulation with sinusoidal linear accelerations (0.2-1.4 Hz) along different directions in the head horizontal plane. A novel characteristic of the majority of tested neurons was the nonzero response often elicited during stimulation along the "null" direction (i.e., the direction perpendicular to the maximum sensitivity vector, Smax). The tuning ratio (Smin gain/Smax gain), a measure of the two-dimensional spatial sensitivity, depended on stimulus frequency. For most vestibular nuclei neurons, the tuning ratio was small at the lowest stimulus frequencies and progressively increased with frequency. Specifically, HC neurons were characterized by a flat Smax gain and an approximately 10-fold increase of Smin gain per frequency decade. Thus, these neurons encode linear acceleration when stimulated along their maximum sensitivity direction, and the rate of change of linear acceleration (jerk) when stimulated along their minimum sensitivity direction. While the Smax vectors were distributed throughout the horizontal plane, the Smin vectors were concentrated mainly ipsilaterally with respect to head acceleration and clustered around the naso-occipital head axis. The properties of VC neurons were distinctly different from those of HC cells. The majority of VC cells showed decreasing Smax gains and small, relatively flat, Smin gains as a function of frequency. The Smax vectors were distributed ipsilaterally relative to the induced (apparent) head tilt. In type I anterior or posterior VC neurons, Smax vectors were clustered around the projection of the respective ipsilateral canal plane onto the horizontal head plane. These distinct spatial and temporal properties of HC and VC neurons during linear acceleration are compatible with the spatiotemporal organization of the horizontal and the vertical/torsional ocular responses.
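
The tuning ratio above can be read as the shape of an elliptical direction-tuning curve: a neuron with maximum-direction gain Smax and a nonzero null-direction gain Smin responds along direction θ with gain sqrt((Smax·cosθ)² + (Smin·sinθ)²), so Smin/Smax measures the departure from pure cosine (one-dimensional) tuning. A minimal sketch under that reading, with illustrative numbers rather than fitted data:

```python
import math

def gain(theta, s_max, s_min):
    """Elliptical direction tuning: gain along stimulus direction theta."""
    return math.hypot(s_max * math.cos(theta), s_min * math.sin(theta))

s_max, s_min = 1.0, 0.2                   # tuning ratio Smin/Smax = 0.2
peak = gain(0.0, s_max, s_min)            # along the preferred direction
null = gain(math.pi / 2, s_max, s_min)    # along the "null" direction
```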

  12. The FLUKA Code: An Overview

    NASA Technical Reports Server (NTRS)

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.; hide

    2006-01-01

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including, but not limited to, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code's physics models, with particular emphasis on the hadronic and nuclear sector.

  13. A portable platform for accelerated PIC codes and its application to GPUs using OpenACC

    NASA Astrophysics Data System (ADS)

    Hariri, F.; Tran, T. M.; Jocksch, A.; Lanti, E.; Progsch, J.; Messmer, P.; Brunner, S.; Gheller, C.; Villard, L.

    2016-10-01

    We present a portable platform, called PIC_ENGINE, for accelerating Particle-In-Cell (PIC) codes on heterogeneous many-core architectures such as Graphics Processing Units (GPUs). The aim of this development is efficient simulations on future exascale systems by allowing different parallelization strategies depending on the application problem and the specific architecture. To this end, this platform contains the basic steps of the PIC algorithm and has been designed as a test bed for different algorithmic options and data structures. Among the architectures that this engine can explore, particular attention is given here to systems equipped with GPUs. The study demonstrates that our portable PIC implementation based on the OpenACC programming model can achieve performance closely matching theoretical predictions. Using the Cray XC30 system, Piz Daint, at the Swiss National Supercomputing Centre (CSCS), we show that PIC_ENGINE running on an NVIDIA Kepler K20X GPU can outperform the one on an Intel Sandy Bridge 8-core CPU by a factor of 3.4.
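
The "basic steps of the PIC algorithm" the platform encapsulates can be sketched in 1D: scatter particle charge to a grid, solve for the field (omitted here), gather forces, and push particles. A minimal sketch of two of those steps, not the PIC_ENGINE API:

```python
def deposit_charge(positions, n_cells, box_length, q=1.0):
    """Nearest-grid-point charge deposition onto a periodic grid."""
    dx = box_length / n_cells
    rho = [0.0] * n_cells
    for x in positions:
        rho[int(x / dx) % n_cells] += q / dx
    return rho

def push(positions, velocities, dt, box_length):
    """Free-streaming particle push with periodic boundaries
    (the force gather and field solve are omitted in this sketch)."""
    return [(x + v * dt) % box_length for x, v in zip(positions, velocities)]

box, cells = 1.0, 4
xs = [0.1, 0.3, 0.6, 0.9]
vs = [1.0, -1.0, 1.0, 1.0]
rho = deposit_charge(xs, cells, box)   # integral of rho equals total charge
xs = push(xs, vs, 0.25, box)
```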

  14. GPU Acceleration of the Locally Selfconsistent Multiple Scattering Code for First Principles Calculation of the Ground State and Statistical Physics of Materials

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first-principles density functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5PFlop/s and a speedup of 8.6 compared to the CPU-only code. This work has been sponsored by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Material Sciences and Engineering Division and by the Office of Advanced Scientific Computing. This work used resources of the Oak Ridge Leadership Computing Facility, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  15. Perceiving Group Behavior: Sensitive Ensemble Coding Mechanisms for Biological Motion of Human Crowds

    ERIC Educational Resources Information Center

    Sweeny, Timothy D.; Haroz, Steve; Whitney, David

    2013-01-01

    Many species, including humans, display group behavior. Thus, perceiving crowds may be important for social interaction and survival. Here, we provide the first evidence that humans use ensemble-coding mechanisms to perceive the behavior of a crowd of people with surprisingly high sensitivity. Observers estimated the headings of briefly presented…

  16. Construction method of QC-LDPC codes based on multiplicative group of finite field in optical communication

    NASA Astrophysics Data System (ADS)

    Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui

    2016-09-01

    In order to meet the needs of high-speed optical communication systems, a construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity-check matrix of a code constructed by this method has no cycles of length 4, which ensures that the resulting code has good distance properties. Simulation results show that at a bit error rate (BER) of 10^-6, in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3780, 3540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB, respectively, compared with the RS(255, 239) code in ITU-T G.975 and the LDPC(32640, 30592) code in ITU-T G.975.1. In addition, the NCG of the proposed QC-LDPC(3780, 3540) code is 0.2 dB and 0.4 dB higher, respectively, than those of the SG-QC-LDPC(3780, 3540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3780, 3540) code based on two arbitrary sets of a finite field. Thus, the proposed QC-LDPC(3780, 3540) code can be well applied in optical communication systems.
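
The general QC-LDPC construction step can be sketched as follows: an exponent (base) matrix is expanded by replacing each entry p with the p-fold cyclic shift of a z×z identity matrix (a circulant permutation matrix). The toy base matrix below is for illustration only, not the exponent matrix that yields the QC-LDPC(3780, 3540) code:

```python
def cpm(shift, z):
    """z x z circulant permutation matrix: identity cyclically shifted."""
    return [[1 if c == (r + shift) % z else 0 for c in range(z)]
            for r in range(z)]

def expand(base, z):
    """Expand an exponent matrix into a binary parity-check matrix."""
    rows = []
    for brow in base:
        blocks = [cpm(p, z) for p in brow]
        for r in range(z):
            rows.append([b[r][c] for b in blocks for c in range(z)])
    return rows

base = [[0, 1, 2],
        [0, 2, 4]]      # toy exponent matrix (entries taken mod z)
H = expand(base, z=5)   # 10 x 15 binary parity-check matrix
```

Row and column weights of H equal the base-matrix dimensions (3 and 2 here), the regular-code structure that the cycle-length condition is then imposed on.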

  17. ORBIT: A Code for Collective Beam Dynamics in High-Intensity Rings

    NASA Astrophysics Data System (ADS)

    Holmes, J. A.; Danilov, V.; Galambos, J.; Shishlo, A.; Cousineau, S.; Chou, W.; Michelotti, L.; Ostiguy, J.-F.; Wei, J.

    2002-12-01

    We are developing a computer code, ORBIT, specifically for beam dynamics calculations in high-intensity rings. Our approach allows detailed simulation of realistic accelerator problems. ORBIT is a particle-in-cell tracking code that transports bunches of interacting particles through a series of nodes representing elements, effects, or diagnostics that occur in the accelerator lattice. At present, ORBIT contains detailed models for strip-foil injection, including painting and foil scattering; rf focusing and acceleration; transport through various magnetic elements; longitudinal and transverse impedances; longitudinal, transverse, and three-dimensional space charge forces; collimation and limiting apertures; and the calculation of many useful diagnostic quantities. ORBIT is an object-oriented code, written in C++ and utilizing a scripting interface for the convenience of the user. Ongoing improvements include the addition of a library of accelerator maps, BEAMLINE/MXYZPTLK; the introduction of a treatment of magnet errors and fringe fields; the conversion of the scripting interface to the standard scripting language, Python; and the parallelization of the computations using MPI. The ORBIT code is an open source, powerful, and convenient tool for studying beam dynamics in high-intensity rings.
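
ORBIT's structure, bunches transported through a series of nodes, maps naturally onto a pipeline of objects sharing a common track() interface. A minimal sketch of that pattern with hypothetical classes, not ORBIT's actual C++/Python API:

```python
# Each node transforms the bunch in place; a lattice is just a node list.
class Drift:
    def __init__(self, length):
        self.length = length
    def track(self, bunch):
        for p in bunch:                     # p = [x, xp]
            p[0] += self.length * p[1]

class Aperture:
    """Limiting aperture: particles outside the half-width are lost."""
    def __init__(self, half_width):
        self.half_width = half_width
    def track(self, bunch):
        bunch[:] = [p for p in bunch if abs(p[0]) <= self.half_width]

lattice = [Drift(1.0), Aperture(0.5), Drift(1.0)]
bunch = [[0.0, 0.1], [0.0, 0.8]]            # the second particle is lost
for node in lattice:
    node.track(bunch)
```

Adding a new effect (space charge, rf, a diagnostic) means adding one more class with a track() method, which is the extensibility the abstract describes.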

  18. Shielding analyses for repetitive high energy pulsed power accelerators

    NASA Astrophysics Data System (ADS)

    Jow, H. N.; Rao, D. V.

    Sandia National Laboratories (SNL) designs, tests and operates a variety of accelerators that generate large amounts of high-energy bremsstrahlung radiation over an extended time. Typically, groups of similar accelerators are housed in a large building that is inaccessible to the general public. To facilitate independent operation of each accelerator, test cells are constructed around each accelerator to shield the radiation workers occupying surrounding test cells and work areas. These test cells, about 9 ft high, are constructed of high-density concrete block walls that provide direct radiation shielding. Above the target areas (radiation sources), lead or steel plates are used to minimize skyshine radiation. Space, accessibility and cost considerations impose certain restrictions on the design of these test cells. The SNL Health Physics division is tasked with evaluating the adequacy of each test cell design and comparing the resultant dose rates with the design criteria stated in DOE Order 5480.11. In response, SNL Health Physics has undertaken an intensive effort to assess existing radiation shielding codes and compare their predictions against measured dose rates. This paper provides a summary of that effort and its results.

  19. GPU accelerated manifold correction method for spinning compact binaries

    NASA Astrophysics Data System (ADS)

    Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying

    2018-04-01

    A graphics processing unit (GPU) acceleration of the manifold correction algorithm, based on the compute unified device architecture (CUDA), is designed to simulate the dynamical evolution of the post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and efficiency of parallel computation on the GPU have been confirmed by various numerical experiments. The numerical comparisons show that the accuracy of the manifold correction method executed on the GPU agrees well with that of the same codes executed on the central processing unit (CPU) alone. The acceleration achieved when the codes are implemented on the GPU can be increased enormously through the use of shared memory and register optimization techniques without additional hardware costs: the speedup is nearly 13 times compared with the codes executed on the CPU for a phase-space scan (including 314 × 314 orbits). In addition, the GPU-accelerated manifold correction method is used to numerically study how the dynamics are affected by the spin-induced quadrupole-monopole interaction for black hole binary systems.
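
The idea behind manifold correction can be illustrated on a toy system: after each ordinary integration step, scale the state back onto the surface of constant energy. The sketch below does this for a unit harmonic oscillator, H = (v² + x²)/2, to show the principle only; it is not the PN spinning-binary scheme of the paper:

```python
import math

def euler_step(x, v, dt):
    """Explicit Euler step; energy grows by a factor (1 + dt**2) per step."""
    return x + v * dt, v - x * dt

def manifold_correct(x, v, h0):
    """Scale (x, v) so the energy returns exactly to h0."""
    s = math.sqrt(2.0 * h0 / (x * x + v * v))
    return x * s, v * s

x, v, dt = 0.0, 1.0, 0.01
h0 = 0.5 * (x * x + v * v)
for _ in range(1000):
    x, v = euler_step(x, v, dt)
    x, v = manifold_correct(x, v, h0)    # energy error stays at round-off

# For comparison, the same integration without correction:
xe, ve = 0.0, 1.0
for _ in range(1000):
    xe, ve = euler_step(xe, ve, dt)
e_plain = 0.5 * (xe * xe + ve * ve)      # has grown by roughly 10%
```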

  20. 38 CFR 9.14 - Accelerated Benefits.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accelerated Benefits. 9...' GROUP LIFE INSURANCE AND VETERANS' GROUP LIFE INSURANCE § 9.14 Accelerated Benefits. (a) What is an Accelerated Benefit? An Accelerated Benefit is a payment of a portion of your Servicemembers' Group Life...

  1. 38 CFR 9.14 - Accelerated Benefits.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accelerated Benefits. 9...' GROUP LIFE INSURANCE AND VETERANS' GROUP LIFE INSURANCE § 9.14 Accelerated Benefits. (a) What is an Accelerated Benefit? An Accelerated Benefit is a payment of a portion of your Servicemembers' Group Life...

  2. 38 CFR 9.14 - Accelerated Benefits.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accelerated Benefits. 9...' GROUP LIFE INSURANCE AND VETERANS' GROUP LIFE INSURANCE § 9.14 Accelerated Benefits. (a) What is an Accelerated Benefit? An Accelerated Benefit is a payment of a portion of your Servicemembers' Group Life...

  3. 38 CFR 9.14 - Accelerated Benefits.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accelerated Benefits. 9...' GROUP LIFE INSURANCE AND VETERANS' GROUP LIFE INSURANCE § 9.14 Accelerated Benefits. (a) What is an Accelerated Benefit? An Accelerated Benefit is a payment of a portion of your Servicemembers' Group Life...

  4. 38 CFR 9.14 - Accelerated Benefits.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accelerated Benefits. 9...' GROUP LIFE INSURANCE AND VETERANS' GROUP LIFE INSURANCE § 9.14 Accelerated Benefits. (a) What is an Accelerated Benefit? An Accelerated Benefit is a payment of a portion of your Servicemembers' Group Life...

  5. A Novel c-VEP BCI Paradigm for Increasing the Number of Stimulus Targets Based on Grouping Modulation With Different Codes.

    PubMed

    Wei, Qingguo; Liu, Yonghui; Gao, Xiaorong; Wang, Yijun; Yang, Chen; Lu, Zongwu; Gong, Huayuan

    2018-06-01

    In existing brain-computer interfaces (BCIs) based on code-modulated visual evoked potentials (c-VEP), a method with which to increase the number of targets without increasing code length has not yet been established. In this paper, a novel c-VEP BCI paradigm, namely, grouping modulation with different codes that have good autocorrelation and cross-correlation properties, is presented to increase the number of targets and the information transfer rate (ITR). All stimulus targets are divided into several groups, and each group of targets is modulated by a distinct pseudorandom binary code and its circularly shifted codes. Canonical correlation analysis is applied to each group to yield a spatial filter, and templates for all targets in a group are constructed from the spatially filtered signals. Template matching is applied to each group, and the attended target is recognized by finding the maximal correlation coefficient across all groups. Based on this paradigm, a BCI with a total of 48 targets divided into three groups was implemented; 12 and 10 subjects participated in an offline experiment and a simulated online experiment, respectively. Data analysis of the offline experiment showed that the paradigm can greatly increase the number of targets from 16 to 48 at the cost of a slight compromise in accuracy (95.49% vs. 92.85%). Results of the simulated online experiment suggested that although the average accuracy across subjects for all three groups of targets was lower than that for a single group of targets (91.67% vs. 94.9%), the average ITR of the former was substantially higher than that of the latter (181 bits/min vs. 135.6 bits/min) due to the large increase in the number of targets. The proposed paradigm significantly improves the performance of the c-VEP BCI and thereby facilitates practical applications such as high-speed spelling.
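    The group-wise template matching described can be sketched compactly. The code below substitutes a random ±1 sequence for a true m-sequence, simulates the EEG trial as template plus noise, and omits the canonical-correlation spatial filtering; all names and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
code_len, n_targets, shift = 63, 8, 4   # one group; shift step between targets

# One pseudorandom binary (+/-1) code per group; targets within the group
# are modulated by circular shifts of that code (m-sequences in the paper,
# a plain random code here).
group_code = rng.integers(0, 2, code_len) * 2.0 - 1.0
templates = np.array([np.roll(group_code, t * shift) for t in range(n_targets)])

# Simulated single-trial response: attended target's template plus noise.
attended = 5
trial = templates[attended] + 0.5 * rng.normal(size=code_len)

# Template matching: the attended target maximizes the correlation.
corrs = [np.corrcoef(trial, tmpl)[0, 1] for tmpl in templates]
decoded = int(np.argmax(corrs))
```

    Good autocorrelation properties of the code keep the cross-correlations between shifted templates low, which is what makes the argmax reliable.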

  6. Advanced Accelerators for Medical Applications

    NASA Astrophysics Data System (ADS)

    Uesaka, Mitsuru; Koyama, Kazuyoshi

    We review advanced accelerators for medical applications with respect to the following key technologies: (i) higher RF electron linear accelerator (hereafter “linac”); (ii) optimization of alignment for the proton linac, cyclotron and synchrotron; (iii) superconducting magnet; (iv) laser technology. Advanced accelerators for medical applications are categorized into two groups. The first group consists of compact medical linacs with high RF, cyclotrons and synchrotrons downsized by optimization of alignment and superconducting magnets. The second group comprises laser-based acceleration systems aimed at medical applications in the future. Laser plasma electron/ion accelerating systems for cancer therapy and laser dielectric accelerating systems for radiation biology are mentioned. Since the second group has important potential for a compact system, the current status of the established energy and intensity and of the required stability is given.

  8. Corrigendum to “Accelerated materials evaluation for nuclear applications” [J. Nucl. Mater. 488 (2017) 46–62]

    DOE PAGES

    Griffiths, Malcolm; Walters, L.; Greenwood, L. R.; ...

    2017-09-21

    The original article addresses the opportunities and complexities of using materials test reactors with high neutron fluxes to perform accelerated studies of material aging in power reactors operating at lower neutron fluxes and with different neutron flux spectra. Radiation damage and gas production in different reactors have been compared using the code, SPECTER. This code provides a common standard from which to compare neutron damage data generated by different research groups using a variety of reactors. This Corrigendum identifies a few typographical errors. Tables 2 and 3 are included in revised form.

  9. Final Report. An Integrated Partnership to Create and Lead the Solar Codes and Standards Working Group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenthal, Andrew

    The DOE grant, “An Integrated Partnership to Create and Lead the Solar Codes and Standards Working Group,” to New Mexico State University created the Solar America Board for Codes and Standards (Solar ABCs). From 2007 to 2013, with funding from this grant, Solar ABCs identified current issues, established a dialogue among key stakeholders, and catalyzed appropriate activities to support the development of codes and standards that facilitated the installation of high quality, safe photovoltaic systems. Solar ABCs brought the following resources to the PV stakeholder community: Formal coordination in the planning or revision of interrelated codes and standards, removing “stovepipes” that have only roofing experts working on roofing codes, PV experts on PV codes, fire enforcement experts working on fire codes, etc.; A conduit through which all interested stakeholders were able to see the steps being taken in the development or modification of codes and standards and participate directly in the processes; A central clearing house for new documents, standards, proposed standards, analytical studies, and recommendations of best practices available to the PV community; A forum of experts that invites and welcomes all interested parties into the process of performing studies, evaluating results, and building consensus on standards and code-related topics that affect all aspects of the market; and A biennial gap analysis to formally survey the PV community to identify needs that are unmet and inhibiting the market and necessary technical developments.

  10. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, P.; /Fermilab; Cary, J.

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The Com

  11. Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hause, Benjamin; Parker, Scott; Chen, Yang

    2013-10-01

    We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with the OpenACC compiler directives and Fortran CUDA. Mixed implementation of both OpenACC and CUDA is demonstrated. CUDA is required for optimizing the particle deposition algorithm. We have implemented the GPU acceleration on a third-generation Core i7 gaming PC with two NVIDIA GTX 680 GPUs. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. We also see enormous speedups (10 or more) on the Titan supercomputer at Oak Ridge with Kepler K20 GPUs. Results show speed-ups comparable or better than that of OpenMP models utilizing multiple cores. The use of hybrid OpenACC, CUDA Fortran, and MPI models across many nodes will also be discussed. Optimization strategies will be presented. We will discuss progress on optimizing the comprehensive three-dimensional general geometry GEM code.

  12. Improved Convergence Rate of Multi-Group Scattering Moment Tallies for Monte Carlo Neutron Transport Codes

    NASA Astrophysics Data System (ADS)

    Nelson, Adam

    Multi-group scattering moment matrices are critical to the solution of the multi-group form of the neutron transport equation, as they are responsible for describing the change in direction and energy of neutrons. These matrices, however, are difficult to calculate correctly from the measured nuclear data with both deterministic and stochastic methods. Calculating these parameters with deterministic methods requires a set of assumptions which do not hold true in all conditions. These quantities can be calculated accurately with stochastic methods; however, doing so is computationally expensive due to the poor efficiency of tallying scattering moment matrices. This work presents an improved method of obtaining multi-group scattering moment matrices from a Monte Carlo neutron transport code. This improved method of tallying the scattering moment matrices is based on recognizing that all of the outgoing particle information is known a priori and can be exploited to increase the tallying efficiency (and therefore reduce the uncertainty) of the stochastically integrated tallies. In this scheme, the complete outgoing probability distribution is tallied, supplying every element of the scattering moment matrices with its share of data. In addition to reducing the uncertainty, this method allows for the use of a track-length estimation process, potentially offering even further improvement to the tallying efficiency. Unfortunately, to produce the needed distributions, the probability functions themselves must undergo an integration over the outgoing energy and scattering angle dimensions. This integration is too costly to perform during the Monte Carlo simulation itself and therefore must be performed in advance by a pre-processing code. The new method increases the information obtained from tally events and therefore has a significantly higher efficiency than the currently used techniques. The improved method has been implemented in a code system

  13. UCLA Final Technical Report for the "Community Petascale Project for Accelerator Science and Simulation”.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mori, Warren

    The UCLA Plasma Simulation Group is a major partner of the “Community Petascale Project for Accelerator Science and Simulation”. This is the final technical report. We include an overall summary, a list of publications, progress for the most recent year, and individual progress reports for each year. We have made tremendous progress during the three years. SciDAC funds have contributed to the development of a large number of skeleton codes that illustrate how to write PIC codes with a hierarchy of parallelism. These codes cover 2D and 3D as well as electrostatic solvers (which are used in beam dynamics codes and quasi-static codes) and electromagnetic solvers (which are used in plasma based accelerator codes). We also used these ideas to develop a GPU enabled version of OSIRIS. SciDAC funds also contributed to the development of strategies to eliminate the Numerical Cerenkov Instability (NCI), which is an issue when carrying out laser wakefield accelerator (LWFA) simulations in a boosted frame and when quantifying the emittance and energy spread of self-injected electron beams. This work included the development of a new code called UPIC-EMMA, which is an FFT-based electromagnetic PIC code, and of new hybrid algorithms in OSIRIS. A new hybrid (PIC in r-z and gridless in φ) algorithm was implemented into OSIRIS. In this algorithm the fields and current are expanded into azimuthal harmonics and the complex amplitude for each harmonic is calculated separately. The contributions from each harmonic are summed and then used to push the particles. This algorithm permits modeling plasma based acceleration with some 3D effects but with the computational load of a 2D r-z PIC code. We developed a rigorously charge conserving current deposit for this algorithm. Very recently, we made progress in combining the speed up from the quasi-3D algorithm with that from the Lorentz boosted frame. SciDAC funds also contributed to the improvement and speed up of the quasi

  14. The effect of cost construction based on either DRG or ICD-9 codes or risk group stratification on the resulting cost-effectiveness ratios.

    PubMed

    Chumney, Elinor C G; Biddle, Andrea K; Simpson, Kit N; Weinberger, Morris; Magruder, Kathryn M; Zelman, William N

    2004-01-01

    As cost-effectiveness analyses (CEAs) are increasingly used to inform policy decisions, there is a need for more information on how different cost determination methods affect cost estimates and the degree to which the resulting cost-effectiveness ratios (CERs) may be affected. The lack of specificity of diagnosis-related groups (DRGs) could mean that they are ill-suited for costing applications in CEAs. Yet, the implications of using International Classification of Diseases-9th edition (ICD-9) codes or a form of disease-specific risk group stratification instead of DRGs have yet to be clearly documented. To demonstrate the implications of different disease coding mechanisms on costs and the magnitude of error that could be introduced in head-to-head comparisons of resulting CERs, we based our analyses on a previously published Markov model for HIV/AIDS therapies. We used the Healthcare Cost and Utilisation Project Nationwide Inpatient Sample (HCUP-NIS) data release 6, which contains all-payer data on hospital inpatient stays from selected states. We added costs for the mean number of hospitalisations, derived from analyses based on either DRG or ICD-9 codes or risk group stratification cost weights, to the standard outpatient and prescription drug costs to yield an estimate of total charges for each AIDS-defining illness (ADI). Finally, we estimated the Markov model three times with the appropriate ADI cost weights to obtain CERs specific to the use of either DRG or ICD-9 codes or risk group. Contrary to expectations, we found that the choice of coding/grouping assumptions, whether by DRG codes, ICD-9 codes or risk group, resulted in very similar CER estimates for highly active antiretroviral therapy. The large variations in the specific ADI cost weights across the three different coding approaches were especially interesting. However, because no one approach produced consistently higher estimates than the others, the Markov model's weighted
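    Once the ADI cost weights from a given coding scheme are fixed, the CER comparison reduces to simple incremental arithmetic. A sketch with purely hypothetical numbers (not values from the study) is:

```python
# Incremental cost-effectiveness ratio (ICER): extra cost per extra unit
# of health effect, the quantity compared across coding schemes.
def icer(cost_new, eff_new, cost_old, eff_old):
    return (cost_new - cost_old) / (eff_new - eff_old)

# Hypothetical (illustrative) totals: costs in dollars, effects in QALYs.
baseline = (45000.0, 4.2)
haart_costed_by_drg = (61000.0, 5.8)
haart_costed_by_icd9 = (60500.0, 5.8)

icer_drg = icer(*haart_costed_by_drg, *baseline)
icer_icd9 = icer(*haart_costed_by_icd9, *baseline)
```

    Because the coding scheme shifts only the cost numerator while the effectiveness denominator stays fixed, modest differences in cost weights translate into proportionally modest differences in the CER, consistent with the similarity the study reports.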

  15. Acceleration of Semiempirical QM/MM Methods through Message Passage Interface (MPI), Hybrid MPI/Open Multiprocessing, and Self-Consistent Field Accelerator Implementations.

    PubMed

    Ojeda-May, Pedro; Nam, Kwangho

    2017-08-08

    The strategy and implementation of scalable and efficient semiempirical (SE) QM/MM methods in CHARMM are described. The serial version of the code was first profiled to identify routines that required parallelization. Afterward, the code was parallelized and accelerated with three approaches. The first approach was the parallelization of the entire QM/MM routines, including the Fock matrix diagonalization routines, using the CHARMM message passing interface (MPI) machinery. In the second approach, two different self-consistent field (SCF) energy convergence accelerators were implemented using density and Fock matrices as targets for their extrapolations in the SCF procedure. In the third approach, the entire QM/MM and MM energy routines were accelerated by implementing the hybrid MPI/open multiprocessing (OpenMP) model, in which both the task- and loop-level parallelization strategies were adopted to balance loads between different OpenMP threads. The present implementation was tested on two solvated enzyme systems (including <100 QM atoms) and an SN2 symmetric reaction in water. The MPI version outperformed existing SE QM methods in CHARMM, which include the SCC-DFTB and SQUANTUM methods, by at least 4-fold. The use of SCF convergence accelerators further accelerated the code by ∼12-35% depending on the size of the QM region and the number of CPU cores used. Although the MPI version displayed good scalability, the performance was diminished for large numbers of MPI processes due to the overhead associated with MPI communications between nodes. This issue was partially overcome by the hybrid MPI/OpenMP approach, which displayed better scalability for a larger number of CPU cores (up to 64 CPUs in the tested systems).
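    The SCF convergence accelerators mentioned extrapolate density and Fock matrices between iterations. The scalar analogue of that idea is Aitken/Steffensen extrapolation of a fixed-point iteration, sketched below on a toy map (g = cos stands in for the SCF update; none of this is CHARMM code):

```python
import math

def scf_like_iteration(g, x0, tol=1e-10, max_iter=500):
    # Plain fixed-point ("SCF-like") iteration: x_{n+1} = g(x_n).
    x = x0
    for n in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

def accelerated_iteration(g, x0, tol=1e-10, max_iter=500):
    # Steffensen/Aitken extrapolation: use two trial updates to jump
    # toward the fixed point, the role matrix extrapolation plays in SCF.
    x = x0
    for n in range(1, max_iter + 1):
        x1, x2 = g(x), g(g(x))
        denom = x2 - 2.0 * x1 + x
        if denom == 0.0:
            return x2, n
        x_new = x - (x1 - x) ** 2 / denom
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

g = math.cos
plain_root, plain_iters = scf_like_iteration(g, 1.0)
accel_root, accel_iters = accelerated_iteration(g, 1.0)
```

    The extrapolated iteration converges in a handful of steps where the plain iteration needs dozens, mirroring the 12-35% speed-up the accelerators give the full matrix problem.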

  16. CFD Code Survey for Thrust Chamber Application

    NASA Technical Reports Server (NTRS)

    Gross, Klaus W.

    1990-01-01

    In the quest to find analytical reference codes, responses from a questionnaire are presented that portray the current computational fluid dynamics (CFD) program status and capability at various organizations for characterizing liquid rocket thrust chamber flow fields. Sample cases are identified to examine the ability, operational conditions, and accuracy of the codes. To select the programs best suited for accelerated improvement, evaluation criteria are proposed.

  17. Direct Laser Acceleration in Laser Wakefield Accelerators

    NASA Astrophysics Data System (ADS)

    Shaw, J. L.; Froula, D. H.; Marsh, K. A.; Joshi, C.; Lemos, N.

    2017-10-01

    The direct laser acceleration (DLA) of electrons in a laser wakefield accelerator (LWFA) has been investigated. We show that when there is a significant overlap between the drive laser and the trapped electrons in a LWFA cavity, the accelerating electrons can gain energy from the DLA mechanism in addition to LWFA. The properties of the electron beams produced in a LWFA, where the electrons are injected by ionization injection, have been investigated using particle-in-cell (PIC) code simulations. Particle tracking was used to demonstrate the presence of DLA in LWFA. Further PIC simulations comparing LWFA with and without DLA show that the presence of DLA can lead to electron beams that have maximum energies that exceed the estimates given by the theory for the ideal blowout regime. The magnitude of the contribution of DLA to the energy gained by the electron was found to be on the order of the LWFA contribution. The presence of DLA in a LWFA can also lead to enhanced betatron oscillation amplitudes and increased divergence in the direction of the laser polarization. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  18. Empirical evidence for site coefficients in building code provisions

    USGS Publications Warehouse

    Borcherdt, R.D.

    2002-01-01

    Site-response coefficients, Fa and Fv, used in U.S. building code provisions are based on empirical data for motions up to 0.1 g. For larger motions they are based on theoretical and laboratory results. The Northridge earthquake of 17 January 1994 provided a significant new set of empirical data up to 0.5 g. These data together with recent site characterizations based on shear-wave velocity measurements provide empirical estimates of the site coefficients at base accelerations up to 0.5 g for Site Classes C and D. These empirical estimates of Fa and Fv, as well as their decrease with increasing base acceleration level, are consistent at the 95 percent confidence level with those in present building code provisions, with the exception of estimates for Fa at levels of 0.1 and 0.2 g, which are less than the lower confidence bound by amounts up to 13 percent. The site-coefficient estimates are consistent at the 95 percent confidence level with those of several other investigators for base accelerations greater than 0.3 g. These consistencies and present code procedures indicate that changes in the site coefficients are not warranted. Empirical results for base accelerations greater than 0.2 g confirm the need for both a short- and a mid- or long-period site coefficient to characterize site response for purposes of estimating site-specific design spectra.

  19. GPU-accelerated atmospheric chemical kinetics in the ECHAM/MESSy (EMAC) Earth system model (version 2.52)

    NASA Astrophysics Data System (ADS)

    Alvanos, Michail; Christoudias, Theodoros

    2017-10-01

    This paper presents an application of GPU accelerators in Earth system modeling. We focus on atmospheric chemical kinetics, one of the most computationally intensive tasks in climate-chemistry model simulations. We developed a software package that automatically generates CUDA kernels to numerically integrate atmospheric chemical kinetics in the global climate model ECHAM/MESSy Atmospheric Chemistry (EMAC), used to study climate change and air quality scenarios. A source-to-source compiler outputs a CUDA-compatible kernel by parsing the FORTRAN code generated by the Kinetic PreProcessor (KPP) general analysis tool. All Rosenbrock methods that are available in the KPP numerical library are supported. Performance evaluation, using Fermi and Pascal CUDA-enabled GPU accelerators, shows achieved speed-ups of 4.5× and 20.4×, respectively, of the kernel execution time. A node-to-node real-world production performance comparison shows a 1.75× speed-up over the non-accelerated application using the KPP three-stage Rosenbrock solver. We provide a detailed description of the code optimizations used to improve the performance, including memory optimizations, control code simplification, and reduction of idle time. The accuracy and correctness of the accelerated implementation are evaluated by comparing its output to that of the CPU-only code of the application; the median relative difference is found to be less than 0.000000001%. The approach followed, including the computational workload division, and the developed GPU solver code can potentially be used as the basis for hardware acceleration of numerous geoscientific models that rely on KPP for atmospheric chemical kinetics applications.
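    The Rosenbrock methods that KPP generates solvers for are linearly implicit: each step solves a linear system with the Jacobian rather than iterating a nonlinear solve, which is what makes them robust on stiff chemistry. A minimal one-stage sketch on a toy stiff system (not KPP's three-stage solver, and in Python rather than generated FORTRAN/CUDA) is:

```python
import numpy as np

def rosenbrock1_step(f, jac, y, dt):
    # One-stage Rosenbrock (linearly implicit Euler):
    #   (I - dt*J) k = dt * f(y);   y_new = y + k
    n = y.size
    k = np.linalg.solve(np.eye(n) - dt * jac(y), dt * f(y))
    return y + k

# Toy stiff kinetics: a fast species relaxing toward a slowly decaying one.
#   y0' = -1000 * (y0 - y1),   y1' = -y1
def f(y):
    return np.array([-1000.0 * (y[0] - y[1]), -y[1]])

def jac(y):
    return np.array([[-1000.0, 1000.0], [0.0, -1.0]])

y = np.array([1.0, 1.0])
dt = 0.01             # 10x the fast time scale; explicit Euler would blow up
for _ in range(100):  # integrate to t = 1
    y = rosenbrock1_step(f, jac, y, dt)
```

    The single Jacobian solve per step is also the reason these kernels map well to GPUs: the per-cell linear algebra has a fixed, data-independent structure.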

  20. Ensemble coding of face identity is not independent of the coding of individual identity.

    PubMed

    Neumann, Markus F; Ng, Ryan; Rhodes, Gillian; Palermo, Romina

    2018-06-01

    Information about a group of similar objects can be summarized into a compressed code, known as ensemble coding. Ensemble coding of simple stimuli (e.g., groups of circles) can occur in the absence of detailed exemplar coding, suggesting dissociable processes. Here, we investigate whether a dissociation would still be apparent when coding facial identity, where individual exemplar information is much more important. We examined whether ensemble coding can occur when exemplar coding is difficult, as a result of large sets or short viewing times, or whether the two types of coding are positively associated. We found a positive association, whereby both ensemble and exemplar coding were reduced for larger groups and shorter viewing times. There was no evidence for ensemble coding in the absence of exemplar coding. At longer presentation times, there was an unexpected dissociation, where exemplar coding increased yet ensemble coding decreased, suggesting that robust information about face identity might suppress ensemble coding. Thus, for face identity, we did not find the classic dissociation, of access to ensemble information in the absence of detailed exemplar information, that has been used to support claims of distinct mechanisms for ensemble and exemplar coding.

  1. TIME-DEPENDENT MULTI-GROUP MULTI-DIMENSIONAL RELATIVISTIC RADIATIVE TRANSFER CODE BASED ON SPHERICAL HARMONIC DISCRETE ORDINATE METHOD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tominaga, Nozomu; Shibata, Sanshiro; Blinnikov, Sergei I., E-mail: tominaga@konan-u.ac.jp, E-mail: sshibata@post.kek.jp, E-mail: Sergei.Blinnikov@itep.ru

    We develop a time-dependent, multi-group, multi-dimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids that are involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM), which evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering in the publicly available SHDOM code. Our code adopts a mixed-frame approach; the source function is evaluated in the comoving frame, whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated using various test problems and comparisons with the results from a relativistic Monte Carlo code. These validations confirm that the code correctly calculates the intensity and its evolution in the computational domain. The code enables us to obtain an Eddington tensor that relates the first and third moments of intensity (energy density and radiation pressure) and is frequently used as a closure relation in radiation hydrodynamics calculations.
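    The mixed-frame approach rests on Lorentz-transforming radiation quantities between the laboratory and comoving frames. A minimal sketch of the standard frequency and intensity transforms (the function names are illustrative, not from the SHDOM code) is:

```python
import math

def comoving_frequency(nu_lab, beta, mu_lab):
    # Relativistic Doppler shift into the comoving (fluid) frame:
    #   nu_0 = gamma * nu * (1 - beta * mu)
    # beta = v/c along the flow axis, mu = cosine of the propagation angle.
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * nu_lab * (1.0 - beta * mu_lab)

def comoving_intensity(I_lab, nu_lab, beta, mu_lab):
    # I_nu / nu^3 is Lorentz invariant, so I_0 = I * (nu_0 / nu)^3.
    nu0 = comoving_frequency(nu_lab, beta, mu_lab)
    return I_lab * (nu0 / nu_lab) ** 3
```

    Evaluating the source function at the comoving frequency while transporting the laboratory-frame intensity is exactly the bookkeeping the mixed-frame scheme must get right at every angle-frequency bin.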

  2. AMBER: a PIC slice code for DARHT

    NASA Astrophysics Data System (ADS)

    Vay, Jean-Luc; Fawley, William

    1999-11-01

    The accelerator for the second axis of the Dual Axis Radiographic Hydrodynamic Test (DARHT) facility will produce a 4-kA, 20-MeV, 2-μs output electron beam with a design goal of less than 1000 π mm-mrad normalized transverse emittance and less than 0.5-mm beam centroid motion. In order to study the beam dynamics throughout the accelerator, we have developed a slice Particle-In-Cell code named AMBER, in which the beam is modeled as a time-steady flow, subject to self, as well as external, electrostatic and magnetostatic fields. The code follows the evolution of a slice of the beam as it propagates through the DARHT accelerator lattice, modeled as an assembly of pipes, solenoids and gaps. In particular, we have paid careful attention to non-paraxial phenomena that can contribute to nonlinear forces and possible emittance growth. We will present the model and the numerical techniques implemented, as well as some test cases and some preliminary results obtained when studying emittance growth during the beam propagation.

  3. Annual Coded Wire Tag Program; Oregon Missing Production Groups, 1997 Annual Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Mark A.; Mallette, Christine; Murray, William M.

    1998-03-01

    This annual report is in fulfillment of contract obligations with Bonneville Power Administration which is the funding source for the Oregon Department of Fish and Wildlife's Annual Coded Wire Tag Program - Oregon Missing Production Groups Project. Tule stock fall chinook were caught primarily in British Columbia and Washington ocean, and Oregon freshwater fisheries. Up-river bright stock fall chinook contributed primarily to Alaska and British Columbia ocean commercial, and Columbia River gillnet and other freshwater fisheries. Contribution of Rogue stock fall chinook released in the lower Columbia River occurred primarily in Oregon ocean commercial and Columbia River gillnet fisheries. Willamette stock spring chinook contributed primarily to Alaska and British Columbia ocean commercial, Oregon freshwater sport and Columbia River gillnet fisheries. Willamette stock spring chinook released by CEDC contributed to similar ocean fisheries, but had much higher catch in gillnet fisheries than the same stocks released in the Willamette system. Up-river stocks of spring chinook contributed almost exclusively to Columbia River sport fisheries and other freshwater recovery areas. The up-river stocks of Columbia River summer steelhead contributed primarily to the Columbia River gillnet and other freshwater fisheries. Coho ocean fisheries from Washington to California were closed or very limited from 1994 through 1997 (1991 through 1994 broods). This has resulted in a greater average percent of catch for other fishery areas. Coho stocks released by ODFW below Bonneville Dam contributed mainly to Oregon and Washington ocean, Columbia gillnet and other freshwater fisheries. Coho stocks released in the Klaskanine River and Youngs Bay area had similar ocean catch, but much higher contribution to gillnet fisheries than the other coho releases. Coho stocks released above Bonneville Dam had similar contribution to ocean fisheries as other coho releases. However, they

  4. Chaotic dynamics in accelerator physics

    NASA Astrophysics Data System (ADS)

    Cary, J. R.

    1992-11-01

    Substantial progress was made in several areas of accelerator dynamics. We completed a design of an FEL wiggler with adiabatic trapping and detrapping sections, both to develop an understanding of longitudinal adiabatic dynamics and to create efficiency enhancements for recirculating free-electron lasers. We developed a computer code for analyzing the critical KAM tori that bound the dynamic aperture in circular machines. Studies of modes that arise from the interaction of coasting beams with a narrow-spectrum impedance have begun. During this research, educational and research ties with the accelerator community at large were strengthened.

  5. Neptune: An astrophysical smooth particle hydrodynamics code for massively parallel computer architectures

    NASA Astrophysics Data System (ADS)

    Sandalski, Stou

    Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU-accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named Neptune after the Roman god of water. It is written in OpenMP-parallelized C++ and OpenCL and includes octree-based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.

  6. Computer modeling of test particle acceleration at oblique shocks

    NASA Technical Reports Server (NTRS)

    Decker, Robert B.

    1988-01-01

    This evaluation of the basic techniques and illustrative results of numerical codes suitable for modeling charged-particle acceleration at oblique, fast-mode collisionless shocks emphasizes the treatment of ions as test particles, with particle dynamics calculated by numerical integration along exact phase-space orbits. Attention is given to the acceleration of particles at planar, infinitesimally thin shocks, as well as to plasma simulations in which low-energy ions are injected and accelerated at quasi-perpendicular shocks with internal structure.
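
    The orbit-integration approach described above can be sketched with a standard Boris pusher, which advances a test particle along its exact phase-space orbit in prescribed electromagnetic fields. The field model, units, and parameters below are illustrative assumptions, not taken from the cited code.

```python
import numpy as np

def boris_push(x, v, q_m, E, B, dt, steps):
    """Advance one test particle with the standard Boris scheme,
    numerically integrating its orbit in prescribed fields E(x), B(x).
    (Sketch only; fields and parameters are assumptions, not from the
    code evaluated above.)"""
    traj = [x.copy()]
    for _ in range(steps):
        # half electric kick, magnetic rotation, half electric kick
        v_minus = v + 0.5 * q_m * E(x) * dt
        t = 0.5 * q_m * B(x) * dt
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v = v_minus + np.cross(v_prime, s) + 0.5 * q_m * E(x) * dt
        x = x + v * dt
        traj.append(x.copy())
    return np.array(traj), v

# Uniform B along z and no E: the particle gyrates, and the Boris
# rotation conserves the speed |v| to machine precision.
B_field = lambda x: np.array([0.0, 0.0, 1.0])
E_field = lambda x: np.zeros(3)
traj, v = boris_push(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                     1.0, E_field, B_field, dt=0.01, steps=1000)
```

    For a shock problem one would replace the uniform field with an oblique, planar shock profile and follow an ensemble of such test particles through it.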

  7. A beamline systems model for Accelerator-Driven Transmutation Technology (ADTT) facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Todd, A.M.M.; Paulson, C.C.; Peacock, M.A.

    1995-10-01

    A beamline systems code being developed for Accelerator-Driven Transmutation Technology (ADTT) facility trade studies is described. The overall program is a joint Grumman, G. H. Gillespie Associates (GHGA), and Los Alamos National Laboratory effort. The GHGA Accelerator Systems Model (ASM) has been adopted as the framework on which this effort is based. Relevant accelerator and beam transport models from earlier Grumman systems codes are being adapted to this framework. Preliminary physics and engineering models for each ADTT beamline component have been constructed; examples include a Bridge Coupled Drift Tube Linac (BCDTL) and the accelerator thermal system. A decision has been made to confine the ASM framework principally to beamline modeling, while detailed target/blanket, balance-of-plant, and facility costing analyses will be performed externally. An interfacing external balance-of-plant and facility costing model, which will permit iterative facility trade studies, is under separate development. An ABC (Accelerator Based Conversion) example is used to highlight the present models and capabilities.

  9. MID-INFRARED EVIDENCE FOR ACCELERATED EVOLUTION IN COMPACT GROUP GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Lisa May; Johnson, Kelsey E.; Gallagher, Sarah C.

    2010-11-15

    Compact galaxy groups are at the extremes of the group environment, with high number densities and low velocity dispersions that likely affect member galaxy evolution. To explore the impact of this environment in detail, we examine the distribution in mid-infrared (MIR) 3.6-8.0 {mu}m color space of 42 galaxies from 12 Hickson compact groups (HCGs) in comparison with several control samples, including the LVL+SINGS galaxies, interacting galaxies, and galaxies from the Coma Cluster. We find that the HCG galaxies are strongly bimodal, with statistically significant evidence for a gap in their distribution. In contrast, none of the other samples show such a marked gap, and only galaxies in the Coma infall region have a distribution that is statistically consistent with the HCGs in this parameter space. To further investigate the cause of the HCG gap, we compare the galaxy morphologies of the HCG and LVL+SINGS galaxies, and also probe the specific star formation rate (SSFR) of the HCG galaxies. While galaxy morphology in HCG galaxies is strongly linked to position within MIR color space, the more fundamental property appears to be the SSFR, the star formation rate normalized by stellar mass. We conclude that the unusual MIR color distribution of HCG galaxies is a direct product of their environment, which is most similar to that of the Coma infall region. In both cases, galaxy densities are high, but gas has not been fully processed or stripped. We speculate that the compact group environment fosters accelerated evolution of galaxies from star-forming and neutral gas-rich to quiescent and neutral gas-poor, leaving few members in the MIR gap at any time.

  10. A novel construction method of QC-LDPC codes based on the subgroup of the finite field multiplicative group for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Zhou, Guang-xiang; Gao, Wen-chun; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-01-01

    To meet the requirements of the continuing development of optical transmission systems, a novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on a subgroup of the finite-field multiplicative group is proposed. This construction effectively avoids girth-4 cycles and offers advantages such as simpler construction, easier implementation, lower encoding/decoding complexity, better girth properties, and more flexible adjustment of code length and code rate. Simulation results show that the error-correction performance of the QC-LDPC(3780,3540) code with a code rate of 93.7% constructed by the proposed method is excellent: at a bit error rate (BER) of 10^-7, its net coding gain is 0.3 dB, 0.55 dB, 1.4 dB, and 1.98 dB higher, respectively, than those of the QC-LDPC(5334,4962) code constructed by the method based on inverse-element characteristics of the finite-field multiplicative group, the SCG-LDPC(3969,3720) code constructed by the systematically constructed Gallager (SCG) random construction method, the LDPC(32640,30592) code in ITU-T G.975.1, and the classic RS(255,239) code widely used in optical transmission systems under ITU-T G.975. The constructed QC-LDPC(3780,3540) code is therefore more suitable for optical transmission systems.
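
    The circulant-expansion idea behind QC-LDPC codes can be illustrated as follows. The exponent (shift) matrix here is a toy parameterization loosely inspired by the multiplicative-subgroup idea, not the paper's exact construction; the girth-4 check is the standard condition that no 2x2 submatrix of shifts has equal differences modulo the circulant size.

```python
import numpy as np

def circulant(shift, L):
    """L x L circulant permutation matrix: identity cyclically shifted."""
    return np.roll(np.eye(L, dtype=int), shift, axis=1)

def qc_ldpc_parity(exponents, L):
    """Expand a base matrix of shifts into the QC-LDPC parity-check H."""
    rows = [np.hstack([circulant(s, L) for s in row]) for row in exponents]
    return np.vstack(rows)

def girth4_free(exponents, L):
    """True if no length-4 cycle exists: for every pair of rows, the
    column-wise shift differences modulo L must all be distinct."""
    E = np.asarray(exponents)
    m, n = E.shape
    for i in range(m):
        for k in range(i + 1, m):
            d = (E[i] - E[k]) % L
            if len(set(d)) < n:   # repeated difference => girth-4 cycle
                return False
    return True

# Toy base matrix E[i][j] = (2**i * 3**j) mod L with L = 7; the choice of
# generators and size is illustrative only.
L_ = 7
E = [[(2**i * 3**j) % L_ for j in range(4)] for i in range(2)]
H = qc_ldpc_parity(E, L_)
```

    Each block row of H contributes one circulant per block column, so row and column weights follow directly from the base-matrix dimensions, which is what makes code length and rate easy to adjust.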

  11. Adapting hierarchical bidirectional inter prediction on a GPU-based platform for 2D and 3D H.264 video coding

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sánchez, Rafael; Martínez, José Luis; Cock, Jan De; Fernández-Escribano, Gerardo; Pieters, Bart; Sánchez, José L.; Claver, José M.; de Walle, Rik Van

    2013-12-01

    The H.264/AVC video coding standard introduces improved tools to increase compression efficiency, and the multi-view extension of H.264/AVC, called H.264/MVC, adopts many of them. Among the new features, variable block-size motion estimation is one that contributes to high coding efficiency. The standard also defines a different prediction structure that includes hierarchical bidirectional pictures, outperforming traditional Group of Pictures patterns in both scenarios: single-view and multi-view. However, these video coding techniques have high computational complexity. Several techniques aimed at accelerating the inter prediction process have been proposed in the literature over the last few years, but none focus on bidirectional or hierarchical prediction. In this article, with the emergence of many-core processors and accelerators, a step forward is taken towards an implementation of the H.264/AVC and H.264/MVC inter prediction algorithms on a graphics processing unit. The results show a negligible rate-distortion drop with a time reduction of up to 98% for the complete H.264/AVC encoder.

  12. Error-correction coding for digital communications

    NASA Astrophysics Data System (ADS)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.

  13. Accelerated GPU based SPECT Monte Carlo simulations.

    PubMed

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-07

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three isotopes were considered: (99m)Tc, (111)In, and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator, and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom, and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. In addition, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational

  14. Synergia: an accelerator modeling tool with 3-D space charge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amundson, James F.; Spentzouris, P. (Fermilab)

    2004-07-01

    High precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. We describe Synergia, a high-fidelity parallel beam dynamics simulation package with fully three dimensional space-charge capabilities and a higher order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles. We also perform code benchmarks comparing to semi-analytic results and other codes. Finally, we present initial results on particle tune spread, beam halo creation, and emittance growth in the Fermilab booster accelerator.

  15. Efficient modeling of laser-plasma accelerator staging experiments using INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.

    2017-03-01

    The computational framework INF&RNO (INtegrated Fluid & paRticle simulatioN cOde) allows for fast and accurate modeling, in 2D cylindrical geometry, of several aspects of laser-plasma accelerator physics. In this paper, we present some of the new features of the code, including the quasistatic Particle-In-Cell (PIC)/fluid modality and the use of different computational grids and time steps for the laser envelope and the plasma wake. These and other features allow for a speedup of several orders of magnitude compared to standard full 3D PIC simulations while retaining physical fidelity. INF&RNO is used to support the experimental activity at the BELLA Center, and we present an example of applying the code to the laser-plasma accelerator staging experiment.

  16. Design of Linear Accelerator (LINAC) tanks for proton therapy via Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellano, T.; De Palma, L.; Laneve, D.

    2015-07-01

    A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) has been written. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The code's main aim is to obtain useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure assisted by this approach seems very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)

  17. Accelerating Climate Simulations Through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

    Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPUs) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for the connection, we identified two challenges: (1) an identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors and two IBM QS22 Cell blades, connected with InfiniBand), allowing compute-intensive functions to be seamlessly offloaded to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.

  18. Multiple trellis coded modulation

    NASA Technical Reports Server (NTRS)

    Simon, Marvin K. (Inventor); Divsalar, Dariush (Inventor)

    1990-01-01

    A technique for designing trellis codes to minimize bit error performance over a fading channel. The invention provides a design criterion for such codes that differs significantly from that used for additive white Gaussian noise channels. The method of multiple trellis coded modulation of the present invention comprises the steps of: (a) coding b bits of input data into s intermediate outputs; (b) grouping said s intermediate outputs into k groups of s_i intermediate outputs each, where the sum of all s_i equals s and k is at least 2; (c) mapping each of said k groups of intermediate outputs into one of a plurality of symbols in accordance with a plurality of modulation schemes, one for each group, such that the first group is mapped in accordance with a first modulation scheme and the second group is mapped in accordance with a second modulation scheme; and (d) outputting each of said symbols to provide k output symbols for each b bits of input data.
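
    Steps (b)-(d) above can be sketched directly: split the s coded bits into k groups and map each group with its own modulation scheme. The trellis encoder itself is omitted here, and the natural-binary PSK labeling is an illustrative assumption, not the patented design.

```python
import numpy as np

def psk_map(bits, M):
    """Map one bit-group to one M-PSK symbol on the unit circle
    (natural binary labeling; assumed for illustration)."""
    idx = int("".join(str(b) for b in bits), 2)
    return np.exp(2j * np.pi * idx / M)

def multiple_tcm_symbols(coded_bits, group_sizes, orders):
    """Split s coded bits into k groups of s_i bits and map each group
    with its own modulation order, yielding k output symbols."""
    symbols, pos = [], 0
    for size, M in zip(group_sizes, orders):
        symbols.append(psk_map(coded_bits[pos:pos + size], M))
        pos += size
    return symbols

# s = 5 intermediate outputs split into k = 2 groups:
# s_1 = 2 bits -> QPSK symbol, s_2 = 3 bits -> 8-PSK symbol
syms = multiple_tcm_symbols([1, 0, 1, 1, 0], [2, 3], [4, 8])
```

    Here each b-bit input block yields k = 2 channel symbols, which is the defining feature of *multiple* trellis coded modulation.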

  19. Codes, Code-Switching, and Context: Style and Footing in Peer Group Bilingual Play

    ERIC Educational Resources Information Center

    Kyratzis, Amy; Tang, Ya-Ting; Koymen, S. Bahar

    2009-01-01

    According to Bernstein (A sociolinguistic approach to socialization; with some reference to educability, Basil Blackwell Ltd., 1972), middle-class parents transmit an elaborated code to their children that relies on verbal means, rather than paralinguistic devices or shared assumptions, to express meanings. Bernstein's ideas were used to argue…

  20. Enhanced quasi-static particle-in-cell simulation of electron cloud instabilities in circular accelerators

    NASA Astrophysics Data System (ADS)

    Feng, Bing

    Electron cloud instabilities have been observed in many circular accelerators around the world, raising concerns for future accelerators and possible upgrades. In this thesis, electron cloud instabilities are studied with the quasi-static particle-in-cell (PIC) code QuickPIC. Modeling in three dimensions the long-timescale propagation of a beam through electron clouds in circular accelerators requires faster and more efficient simulation codes. Thousands of processors are readily available for parallel computation; however, it is not straightforward to increase the effective speed of the simulation simply by running the same problem size on an increasing number of processors, because there is a limit to the domain size in the decomposition of the two-dimensional part of the code. A pipelining algorithm applied to the fully parallelized particle-in-cell code QuickPIC is implemented to overcome this limit. The pipelining algorithm uses multiple groups of processors and optimizes the job allocation among them. With this novel algorithm, it is possible to use on the order of 10^2 processors, and to expand the scale and speed of the simulation with QuickPIC by a similar factor. In addition to the efficiency improvement from pipelining, the fidelity of QuickPIC is enhanced by adding two physics models: the beam space-charge effect and the dispersion effect. Simulation of two specific circular machines is performed with the enhanced QuickPIC. First, the proposed upgrade to the Fermilab Main Injector is studied, with an eye toward guiding the design of the upgrade and validating the code. Moderate emittance growth is observed for an upgrade that increases the bunch population five-fold, but the simulation also shows that increasing the beam energy from 8 GeV to 20 GeV or above can effectively limit the emittance growth. The enhanced QuickPIC is then used to simulate the electron cloud effect on the electron beam in the Cornell Energy Recovery Linac
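
    The pipelining idea, multiple processor groups working on successive steps with a short fill-in latency, can be illustrated with a toy timetable. This is a schematic scheduling model only; QuickPIC's actual algorithm is considerably more involved.

```python
def pipeline_schedule(n_steps, n_groups):
    """Toy timetable for a pipelined computation: at each tick, processor
    group g works on step (tick - g), so after a fill-in latency of
    n_groups ticks every group is busy simultaneously. (Schematic model,
    not QuickPIC's actual pipelining algorithm.)"""
    ticks = []
    for tick in range(n_steps + n_groups - 1):
        busy = {g: tick - g for g in range(n_groups)
                if 0 <= tick - g < n_steps}
        ticks.append(busy)
    return ticks

# 3 groups pipelining 5 steps: 5 + 3 - 1 = 7 ticks total, versus
# 15 group-steps if each group had to wait for the previous one.
sched = pipeline_schedule(5, 3)
```

    The near-linear gain for large step counts is what lets the number of usable processors, and hence the simulation scale, grow by roughly the number of pipeline groups.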

  1. Discrete Cosine Transform Image Coding With Sliding Block Codes

    NASA Astrophysics Data System (ADS)

    Divakaran, Ajay; Pearlman, William A.

    1989-11-01

    A transform trellis coding scheme for images is presented. A two-dimensional discrete cosine transform is applied to the image, followed by a search on a trellis-structured code. This code is a sliding block code that utilizes a constrained-size reproduction alphabet. The transform coding divides the image into blocks. The non-stationarity of the image is counteracted by grouping these blocks into clusters through a clustering algorithm and then encoding the clusters separately. Mandela-ordered sequences are formed from each cluster, i.e., identically indexed coefficients from each block are grouped together to form one-dimensional sequences. A separate search ensues on each of these Mandela-ordered sequences. Padding sequences are used to improve the trellis search fidelity; they absorb the error caused by the building up of the trellis to full size. The simulations were carried out on a 256x256 image ('LENA'). The results are comparable to those of existing schemes, and the visual quality of the image is enhanced considerably by the padding and clustering.
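
    The Mandela ordering described above, gathering the identically indexed transform coefficient from every block into one 1-D sequence, can be sketched as follows. The block size and test image are arbitrary choices, and the clustering and trellis-search stages are omitted.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def mandela_sequences(image, bs=8):
    """Block 2-D DCT followed by Mandela ordering: the (u, v) coefficient
    of every block is gathered into one 1-D sequence per (u, v)."""
    C = dct_matrix(bs)
    h, w = image.shape
    coeffs = [
        C @ image[r:r + bs, c:c + bs] @ C.T
        for r in range(0, h, bs) for c in range(0, w, bs)
    ]
    stack = np.stack(coeffs)                       # (n_blocks, bs, bs)
    return {(u, v): stack[:, u, v] for u in range(bs) for v in range(bs)}

img = np.random.default_rng(0).random((32, 32))    # stand-in for 'LENA'
seqs = mandela_sequences(img)   # 64 sequences, one per coefficient index
```

    Each resulting sequence collects coefficients of similar statistics (all DC terms together, all like-indexed AC terms together), which is what makes a separate trellis search per sequence effective.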

  2. Extraordinary Tools for Extraordinary Science: The Impact of SciDAC on Accelerator Science & Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryne, Robert D.

    2006-08-10

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook''. Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.

  3. Vibration acceleration promotes bone formation in rodent models

    PubMed Central

    Uchida, Ryohei; Nakata, Ken; Kawano, Fuminori; Yonetani, Yasukazu; Ogasawara, Issei; Nakai, Naoya; Mae, Tatsuo; Matsuo, Tomohiko; Tachibana, Yuta; Yokoi, Hiroyuki; Yoshikawa, Hideki

    2017-01-01

    All living tissues and cells on Earth are subject to gravitational acceleration, but no reports have verified whether the mode of acceleration influences bone formation and healing. This study therefore compared the effects of two acceleration modes, vibration and constant (centrifugal) acceleration, on bone formation and healing in the trunk, using a BMP-2-induced ectopic bone formation (EBF) mouse model and a rib fracture healing (RFH) rat model, and additionally sought to clarify the difference between these two models in the mechanism by which acceleration affects bone formation. Three groups (low- and high-magnitude vibration and control-VA groups) were evaluated in the vibration acceleration study, and two groups (centrifuge acceleration and control-CA groups) were used in the constant acceleration study. In each model, the intervention was applied for ten minutes per day, from three days after surgery, for eleven days (EBF model) or nine days (RFH model). All animals were sacrificed the day after the intervention ended. In the EBF model, ectopic bone was evaluated by macroscopic and histological observation, wet weight, radiography, and microfocus computed tomography (micro-CT). In the RFH model, whole fracture-repaired ribs were excised with removal of soft tissue and evaluated radiologically and histologically. Ectopic bones in the low-magnitude group (EBF model) had significantly greater wet weight and were significantly larger (macroscopically and radiographically) than those in the other two groups, whereas the size and wet weight of ectopic bones in the centrifuge acceleration group showed no significant difference compared with those in the control-CA group. All ectopic bones showed calcified trabeculae and mature bone marrow. Micro-CT showed that bone volume (BV) in the low-magnitude group of the EBF model was significantly higher than in the other two groups (3.1±1.2 mm^3 vs. 1.8±1.2 mm^3 in the high-magnitude group and 1.3±0.9 mm^3 in the control-VA group), but BV in the

  5. Phase II evaluation of clinical coding schemes: completeness, taxonomy, mapping, definitions, and clarity. CPRI Work Group on Codes and Structures.

    PubMed

    Campbell, J R; Carpenter, P; Sneiderman, C; Cohn, S; Chute, C G; Warren, J

    1997-01-01

    To compare three potential sources of controlled clinical terminology (READ codes version 3.1, SNOMED International, and Unified Medical Language System (UMLS) version 1.6) relative to attributes of completeness, clinical taxonomy, administrative mapping, term definitions and clarity (duplicate coding rate). The authors assembled 1929 source concept records from a variety of clinical information taken from four medical centers across the United States. The source data included medical as well as ample nursing terminology. The source records were coded in each scheme by an investigator and checked by the coding scheme owner. The codings were then scored by an independent panel of clinicians for acceptability. Codes were checked for definitions provided with the scheme. Codes for a random sample of source records were analyzed by an investigator for "parent" and "child" codes within the scheme. Parent and child pairs were scored by an independent panel of medical informatics specialists for clinical acceptability. Administrative and billing code mapping from the published scheme were reviewed for all coded records and analyzed by independent reviewers for accuracy. The investigator for each scheme exhaustively searched a sample of coded records for duplications. SNOMED was judged to be significantly more complete in coding the source material than the other schemes (SNOMED* 70%; READ 57%; UMLS 50%; *p < .00001). SNOMED also had a richer clinical taxonomy judged by the number of acceptable first-degree relatives per coded concept (SNOMED* 4.56, UMLS 3.17; READ 2.14, *p < .005). Only the UMLS provided any definitions; these were found for 49% of records which had a coding assignment. 
READ and UMLS had better administrative mappings (composite score: READ* 40.6%; UMLS* 36.1%; SNOMED 20.7%, *p < .00001), and SNOMED had substantially more duplications of coding assignments (duplication rate: READ 0%; UMLS 4.2%; SNOMED* 13.9%, *p < .004) associated with a loss of clarity

  6. Corkscrew Motion of an Electron Beam due to Coherent Variations in Accelerating Potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Carl August

    2016-09-13

    Corkscrew motion results from the interaction of fluctuations of beam electron energy with accidental magnetic dipoles caused by misalignment of the beam transport solenoids. Corkscrew is a serious concern for high-current linear induction accelerators (LIA). A simple scaling law for corkscrew amplitude derived from a theory based on a constant-energy beam coasting through a uniform magnetic field has often been used to assess LIA vulnerability to this effect. We use a beam dynamics code to verify that this scaling also holds for an accelerated beam in a non-uniform magnetic field, as in a real accelerator. Results of simulations with this code are strikingly similar to measurements on one of the LIAs at Los Alamos National Laboratory.

  7. Modeling radiation belt dynamics using a 3-D layer method code

    NASA Astrophysics Data System (ADS)

    Wang, C.; Ma, Q.; Tao, X.; Zhang, Y.; Teng, S.; Albert, J. M.; Chan, A. A.; Li, W.; Ni, B.; Lu, Q.; Wang, S.

    2017-08-01

    A new 3-D diffusion code using a recently published layer method has been developed to analyze radiation belt electron dynamics. The code guarantees the positivity of the solution even when mixed diffusion terms are included. Unlike most of the previous codes, our 3-D code is developed directly in equatorial pitch angle (α0), momentum (p), and L shell coordinates; this eliminates the need to transform back and forth between (α0,p) coordinates and adiabatic invariant coordinates. Using (α0,p,L) is also convenient for direct comparison with satellite data. The new code has been validated by various numerical tests, and we apply the 3-D code to model the rapid electron flux enhancement following the geomagnetic storm on 17 March 2013, which is one of the Geospace Environment Modeling Focus Group challenge events. An event-specific global chorus wave model, an AL-dependent statistical plasmaspheric hiss wave model, and a recently published radial diffusion coefficient formula from Time History of Events and Macroscale Interactions during Substorms (THEMIS) statistics are used. The simulation results show good agreement with satellite observations, in general, supporting the scenario that the rapid enhancement of radiation belt electron flux for this event results from an increased level of the seed population by radial diffusion, with subsequent acceleration by chorus waves. Our results prove that the layer method can be readily used to model global radiation belt dynamics in three dimensions.
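The positivity property highlighted in the abstract can be illustrated in a much simpler setting. The sketch below (plain NumPy, assuming nothing about the authors' layer-method implementation) shows a conservative explicit update for 1-D diffusion that can never go negative under the usual time-step limit; it is the mixed (cross) diffusion terms of the full 3-D (α0, p, L) problem that break this guarantee for naive schemes and motivate the layer method.

```python
import numpy as np

def diffuse(f, D, dx, dt, steps):
    # Conservative explicit update for df/dt = d/dx(D df/dx) on a periodic
    # grid: with dt <= dx^2/(2D) each new value is a convex combination of
    # old values, so the solution stays non-negative; the flux form also
    # conserves the total content exactly.
    for _ in range(steps):
        flux = D * (np.roll(f, -1) - f) / dx          # flux at interface i+1/2
        f = f + (dt / dx) * (flux - np.roll(flux, 1)) # divergence of flux
    return f

x = np.linspace(0.0, 1.0, 100, endpoint=False)
dx = x[1] - x[0]
f0 = np.zeros_like(x)
f0[50] = 1.0                                          # sharp initial peak
D = 1.0
dt = 0.4 * dx**2 / (2.0 * D)                          # safely below the limit
f1 = diffuse(f0, D, dx, dt, 500)
print(f1.min() >= 0.0, abs(f1.sum() - f0.sum()) < 1e-9)
```

With a cross term such as Dαp present, the analogous update is no longer a convex combination of old values, which is why a scheme with a built-in positivity guarantee is needed.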

  8. Reactivity effects in VVER-1000 of the third unit of the Kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N., E-mail: zizin@adis.vver.kiae.ru

    2010-12-15

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.

  9. Reactivity effects in VVER-1000 of the third unit of the Kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    NASA Astrophysics Data System (ADS)

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N.; Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.

    2010-12-01

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.

  10. Dissemination and support of ARGUS for accelerator applications. Technical progress report, April 24, 1991--January 20, 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The ARGUS code is a three-dimensional code system for simulating interactions between charged particles, electric and magnetic fields, and complex structures. It is a system of modules that share common utilities for grid and structure input, data handling, memory management, diagnostics, and other specialized functions. The code includes the fields due to the space charge and current density of the particles to achieve a self-consistent treatment of the particle dynamics. The physics modules in ARGUS include three-dimensional field solvers for electrostatics and electromagnetics, a three-dimensional electromagnetic frequency-domain module, a full particle-in-cell (PIC) simulation module, and a steady-state PIC model. These are described in the Appendix to this report. This project has a primary mission of developing the capabilities of ARGUS in accelerator modeling for release to the accelerator design community. Five major activities are being pursued in parallel during the first year of the project: improving the code and adding new modules that provide capabilities needed for accelerator design; producing a User's Guide that documents the use of the code for all users; releasing the code and the User's Guide to accelerator laboratories for their own use, and obtaining feedback from them; building an interactive user interface for setting up ARGUS calculations; and exploring the use of ARGUS on high-power workstation platforms.

  11. Accurate and efficient spin integration for particle accelerators

    DOE PAGES

    Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; ...

    2015-02-01

    Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code GPUSPINTRACK. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
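The quaternion approach mentioned in the abstract can be sketched in a few lines. The example below is a hedged illustration of the general technique (it is not the GPUSPINTRACK code): each lattice element rotates the spin by some angle about some axis, and representing each rotation as a unit quaternion lets many small rotations be composed cheaply and renormalized to suppress numerical drift.

```python
import numpy as np

def rot_quat(theta, axis):
    """Unit quaternion (w, x, y, z) for rotation by theta about axis."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    return np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * axis))

def qmul(a, b):
    """Hamilton product: composes rotation b followed by rotation a."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(q, s):
    """Apply unit quaternion q to spin vector s: q * (0, s) * conj(q)."""
    qs = np.concatenate(([0.0], s))
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])   # conjugate = inverse for unit q
    return qmul(qmul(q, qs), qc)[1:]

# Compose ninety 1-degree rotations about z, then apply the net rotation once.
q = np.array([1.0, 0.0, 0.0, 0.0])               # identity
for _ in range(90):
    q = qmul(rot_quat(np.radians(1.0), [0, 0, 1]), q)
    q /= np.linalg.norm(q)                        # renormalize: stays a rotation
spin = rotate(q, np.array([1.0, 0.0, 0.0]))
print(np.round(spin, 6))                          # ~ [0, 1, 0]: a net 90-degree turn
```

Composing quaternions costs 16 multiplies per element versus 27 for 3x3 matrices, and renormalizing a quaternion is trivial compared with re-orthogonalizing a matrix, which is one reason this representation suits long tracking runs.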

  12. Transport calculations and accelerator experiments needed for radiation risk assessment in space.

    PubMed

    Sihver, Lembit

    2008-01-01

    The major uncertainties in space radiation risk estimates for humans are associated with the poor knowledge of the biological effects of low- and high-LET radiation, with a smaller contribution coming from the characterization of the space radiation field and its primary interactions with the shielding and the human body. However, to decrease the uncertainties on the biological effects and increase the accuracy of the risk coefficients for charged-particle radiation, the initial charged-particle spectra from the Galactic Cosmic Rays (GCRs) and the Solar Particle Events (SPEs), and the radiation transport through the shielding material of the space vehicle and the human body, must be better estimated. Since it is practically impossible to measure all primary and secondary particles from all possible position-projectile-target-energy combinations needed for a correct risk assessment in space, accurate particle and heavy-ion transport codes must be used. These codes are also needed when estimating the risk of radiation-induced failures in advanced microelectronics, such as single-event effects, and the efficiency of different shielding materials. It is therefore important that the models and transport codes be carefully benchmarked and validated to make sure they fulfill preset accuracy criteria, e.g., the ability to predict particle fluence, dose, and energy distributions within a certain accuracy. When validating the accuracy of the transport codes, both space- and ground-based accelerator experiments are needed. The efficiency of passive shielding and protection of electronic devices should also be tested in accelerator experiments and compared to simulations using different transport codes. In this paper, different multipurpose particle and heavy-ion transport codes are presented, different concepts of shielding and protection are discussed, and future accelerator experiments needed for testing and validating codes and shielding materials are described.

  13. Modeling multi-GeV class laser-plasma accelerators with INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, Carlo; Schroeder, Carl; Bulanov, Stepan; Geddes, Cameron; Esarey, Eric; Leemans, Wim

    2016-10-01

    Laser plasma accelerators (LPAs) can produce accelerating gradients on the order of tens to hundreds of GV/m, making them attractive as compact particle accelerators for radiation production or as drivers for future high-energy colliders. Understanding and optimizing the performance of LPAs requires detailed numerical modeling of the nonlinear laser-plasma interaction. We present simulation results, obtained with the computationally efficient PIC/fluid code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde), concerning present (multi-GeV stages) and future (10 GeV stages) LPA experiments performed with the BELLA PW laser system at LBNL. In particular, we illustrate the issues related to the guiding of a high-intensity, short-pulse laser when a realistic description of both the laser driver and the background plasma is adopted. Work supported by the U.S. Department of Energy under contract No. DE-AC02-05CH11231.

  14. pycola: N-body COLA method code

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin; Eisenstein, Daniel J.; Wandelt, Benjamin D.; Zaldarriaga, Matias

    2015-09-01

    pycola is a multithreaded Python/Cython N-body code implementing the Comoving Lagrangian Acceleration (COLA) method in the temporal and spatial domains, which trades accuracy at small scales to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating the large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing. The COLA method achieves its speed by calculating the large-scale dynamics exactly using Lagrangian Perturbation Theory (LPT) while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos.

  15. Use of color-coded sleeve shutters accelerates oscillograph channel selection

    NASA Technical Reports Server (NTRS)

    Bouchlas, T.; Bowden, F. W.

    1967-01-01

    Sleeve-type shutters mechanically adjust individual galvanometer light beams onto or away from selected channels on oscillograph papers. In complex test setups, the sleeve-type shutters are color coded to separately identify each oscillograph channel. This technique could be used on any equipment using tubular galvanometer light sources.

  16. Teaching Qualitative Research: Experiential Learning in Group-Based Interviews and Coding Assignments

    ERIC Educational Resources Information Center

    DeLyser, Dydia; Potter, Amy E.

    2013-01-01

    This article describes experiential-learning approaches to conveying the work and rewards involved in qualitative research. Seminar students interviewed one another, transcribed or took notes on those interviews, shared those materials to create a set of empirical materials for coding, developed coding schemes, and coded the materials using those…

  17. The ZPIC educational code suite

    NASA Astrophysics Data System (ADS)

    Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.

    2017-10-01

    Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, such as fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter-space exploration. We also invite contributions to this repository of test problems, which will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.

  18. Laser-driven dielectric electron accelerator for radiobiology researches

    NASA Astrophysics Data System (ADS)

    Koyama, Kazuyoshi; Matsumura, Yosuke; Uesaka, Mitsuru; Yoshida, Mitsuhiro; Natsui, Takuya; Aimierding, Aimidula

    2013-05-01

    In order to estimate the health risk associated with a low dose of radiation, the fundamental process of radiation effects in a living cell must be understood. It is desired that an electron bunch or photon pulse precisely strike a cell nucleus and DNA. The required electron energy and bunch charge are several tens of keV to 1 MeV and 0.1 fC to 1 fC, respectively. A beam size smaller than a micron is better for precise observation. Since the laser-driven dielectric electron accelerator seems well suited to a compact micro-beam source, a phase-modulation-masked-type laser-driven dielectric accelerator was studied. Although a preliminary analysis concluded that the grating period and electron speed must satisfy the matching condition LG/λ = v/c, a deformation of the wavefront in a pillar of the grating relaxed the matching condition and enabled slow electrons to be accelerated. Simulation results using the free FDTD code Meep showed that a low-energy electron of 20 keV experienced an accelerating field of 20 MV/m and gradually felt a higher field as its speed increased; the ultrarelativistic electron finally felt a field of 600 MV/m. The Meep code also showed that the accelerator length needed to reach an energy of 1 MeV was 3.8 mm, and the required laser power and energy were 11 GW and 350 mJ, respectively. Restrictions on the laser were eased by adopting sequential laser pulses. If the accelerator is illuminated by N sequential pulses, the pulse power, pulse width, and pulse energy are reduced to 1/N, 1/N, and 1/N², respectively. The required laser power per pulse is estimated to be 2.2 GW when ten pairs of sequential laser pulses are irradiated.
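The pulse-splitting scaling quoted above can be checked with a line of arithmetic. The snippet below is purely illustrative (generic numbers, not the paper's values): splitting the drive into N sequential pulses reduces the per-pulse power and width to 1/N each, so the per-pulse energy falls as their product, 1/N².

```python
# Toy check of the 1/N, 1/N, 1/N^2 scaling for N sequential pulses
# (illustrative sketch, not the authors' analysis or numbers).

def per_pulse(p0, tau0, n):
    """Per-pulse power, width, and energy when a single pulse of power p0
    and width tau0 is replaced by n sequential pulses."""
    p, tau = p0 / n, tau0 / n
    return p, tau, p * tau          # energy per pulse = (p0 * tau0) / n**2

p, tau, e = per_pulse(10.0, 2.0, 4)  # generic units, chosen for clarity
print(p, tau, e)                     # 2.5 0.5 1.25, and (10*2)/4**2 = 1.25
```

The quadratic drop in per-pulse energy is what eases the laser requirements most, since pulse energy is typically the binding constraint.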

  19. Seismic site coefficients and acceleration design response spectra based on conditions in South Carolina : final report.

    DOT National Transportation Integrated Search

    2014-11-15

    The simplified procedure in design codes for determining earthquake response spectra involves : estimating site coefficients to adjust available rock accelerations to site accelerations. Several : investigators have noted concerns with the site coeff...

  20. Extraordinary tools for extraordinary science: the impact of SciDAC on accelerator science and technology

    NASA Astrophysics Data System (ADS)

    Ryne, Robert D.

    2006-09-01

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, "Facilities for the Future of Science: A Twenty-Year Outlook." Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represents a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.

  1. Zebra: An advanced PWR lattice code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, L.; Wu, H.; Zheng, Y.

    2012-07-01

    This paper presents an overview of ZEBRA, an advanced PWR lattice code developed at the NECP laboratory at Xi'an Jiaotong University. The multi-group cross-section library is generated from the ENDF/B-VII library by NJOY, and the 361-group SHEM structure is employed. The resonance calculation module is developed based on the sub-group method. The transport solver is the Auto-MOC code, a self-developed code based on the Method of Characteristics and the customization of AutoCAD software. The whole code is well organized in a modular software structure. Numerical results obtained during validation demonstrate that the code has good precision and high efficiency. (authors)

  2. Empirical evidence for acceleration-dependent amplification factors

    USGS Publications Warehouse

    Borcherdt, R.D.

    2002-01-01

    Site-specific amplification factors, Fa and Fv, used in current U.S. building codes decrease with increasing base acceleration level as implied by the Loma Prieta earthquake at 0.1g and extrapolated using numerical models and laboratory results. The Northridge earthquake recordings of 17 January 1994 and subsequent geotechnical data permit empirical estimates of amplification at base acceleration levels up to 0.5g. Distance measures and normalization procedures used to infer amplification ratios from soil-rock pairs in predetermined azimuth-distance bins significantly influence the dependence of amplification estimates on base acceleration. Factors inferred using a hypocentral distance norm do not show a statistically significant dependence on base acceleration. Factors inferred using norms implied by the attenuation functions of Abrahamson and Silva show a statistically significant decrease with increasing base acceleration. The decrease is statistically more significant for stiff clay and sandy soil (site class D) sites than for stiffer sites underlain by gravelly soils and soft rock (site class C). The decrease in amplification with increasing base acceleration is more pronounced for the short-period amplification factor, Fa, than for the midperiod factor, Fv.

  3. Spectral-element Seismic Wave Propagation on CUDA/OpenCL Hardware Accelerators

    NASA Astrophysics Data System (ADS)

    Peter, D. B.; Videau, B.; Pouget, K.; Komatitsch, D.

    2015-12-01

    Seismic wave propagation codes are essential tools to investigate a variety of wave phenomena in the Earth. Furthermore, they can now be used for seismic full-waveform inversions in regional- and global-scale adjoint tomography. Although these seismic wave propagation solvers are crucial ingredients for improving the resolution of tomographic images to answer important questions about the nature of Earth's internal processes and subsurface structure, their practical application is often limited by high computational costs. They thus need high-performance computing (HPC) facilities to improve the current state of knowledge. At present, numerous large HPC systems embed many-core architectures such as graphics processing units (GPUs) to enhance numerical performance. Such hardware accelerators can be programmed using either the CUDA programming environment or the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted for additional hardware accelerators, such as AMD graphics cards, ARM-based processors, and Intel Xeon Phi coprocessors. For seismic wave propagation simulations using the open-source spectral-element code package SPECFEM3D_GLOBE, we incorporated an automatic source-to-source code generation tool (BOAST) which allows us to use meta-programming of all computational kernels for forward and adjoint runs. Using our BOAST kernels, we generate optimized source code for both the CUDA and OpenCL languages within the source code package. Seismic wave simulations are thus now able to fully utilize CUDA and OpenCL hardware accelerators. We show benchmarks of forward seismic wave propagation simulations using SPECFEM3D_GLOBE on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usages.
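The source-to-source idea behind a tool like BOAST can be sketched very simply. The toy generator below is an illustration of the general technique only (it is not the BOAST API): a single abstract kernel body is rendered into both CUDA and OpenCL source, so one kernel definition can target either class of hardware accelerator.

```python
# Toy source-to-source kernel generator (illustrative sketch, not BOAST):
# the same saxpy body is wrapped in CUDA or OpenCL boilerplate on demand.

BODY = "if (i < n) y[i] = a * x[i] + y[i];"

def render(target):
    """Emit the saxpy kernel source for the requested target language."""
    if target == "cuda":
        return ("__global__ void saxpy(int n, float a, "
                "const float *x, float *y) {\n"
                "  int i = blockIdx.x * blockDim.x + threadIdx.x;\n"
                f"  {BODY}\n}}")
    if target == "opencl":
        return ("__kernel void saxpy(int n, float a, "
                "__global const float *x, __global float *y) {\n"
                "  int i = get_global_id(0);\n"
                f"  {BODY}\n}}")
    raise ValueError(f"unknown target: {target}")

cuda_src = render("cuda")
ocl_src = render("opencl")
print("__global__" in cuda_src, "__kernel" in ocl_src)
```

Only the launch boilerplate (qualifiers and thread-index expression) differs between targets; the computational body is written once, which is what makes meta-programming all kernels of a large code such as SPECFEM3D_GLOBE tractable.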

  4. UFO: A THREE-DIMENSIONAL NEUTRON DIFFUSION CODE FOR THE IBM 704

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auerbach, E.H.; Jewett, J.P.; Ketchum, M.A.

    A description of UFO, a code for the solution of the few-group neutron diffusion equation in three-dimensional Cartesian coordinates on the IBM 704, is given. An accelerated Liebmann flux iteration scheme is used, and optimum parameters can be calculated by the code whenever they are required. The theory and operation of the program are discussed. (auth)
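An accelerated Liebmann iteration is what is now usually called successive over-relaxation (SOR). The sketch below (a modern NumPy illustration, not the IBM 704 code) applies it to a model 2-D Poisson problem; the "optimum parameter" the abstract mentions corresponds to the relaxation factor omega, which for this model grid has a known closed form.

```python
import numpy as np

def sor(phi, source, h, omega, sweeps):
    """Accelerated Liebmann (SOR) sweeps for -laplacian(phi) = source,
    five-point stencil, Dirichlet boundaries held at the array edges."""
    phi = phi.copy()
    for _ in range(sweeps):
        for i in range(1, phi.shape[0] - 1):
            for j in range(1, phi.shape[1] - 1):
                # Gauss-Seidel value from already-updated neighbors...
                gs = 0.25 * (phi[i+1, j] + phi[i-1, j] +
                             phi[i, j+1] + phi[i, j-1] +
                             h * h * source[i, j])
                # ...pushed further by the over-relaxation factor omega.
                phi[i, j] += omega * (gs - phi[i, j])
    return phi

n = 17
h = 1.0 / (n - 1)
omega = 2.0 / (1.0 + np.sin(np.pi * h))   # optimum omega for this model grid
phi = sor(np.zeros((n, n)), np.ones((n, n)), h, omega, 200)
print(phi[n // 2, n // 2] > 0)
```

With omega = 1 this reduces to the plain Liebmann (Gauss-Seidel) method; the optimal omega cuts the iteration count from O(n²) to O(n), which is exactly why computing it automatically was worth building into an early machine code.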

  5. Accelerant-related burns and drug abuse: Challenging combination.

    PubMed

    Leung, Leslie T F; Papp, Anthony

    2018-05-01

    Accelerants are flammable substances that may cause explosion when added to existing fires. The relationships between drug abuse and accelerant-related burns are not well elucidated in the literature. Of these burns, a portion is related to drug manufacturing, which has been shown to be associated with increased burn complications. 1) To evaluate the demographics and clinical outcomes of accelerant-related burns in a Provincial Burn Centre. 2) To compare the clinical outcomes with a control group of non-accelerant related burns. 3) To analyze a subgroup of patients with a history of drug abuse and drug manufacturing. Retrospective case control study. Patient data associated with accelerant-related burns from 2009 to 2014 were obtained from the British Columbia Burn Registry. These patients were compared with a control group of non-accelerant related burns. Clinical outcomes that were evaluated include inhalational injury, ICU length of stay, ventilator support, surgeries needed, and burn complications. The chi-square test was used to evaluate categorical data and Student's t-test was used to evaluate mean quantitative data, with the p value set at 0.05. A logistic regression model was used to evaluate factors affecting burn complications. Accelerant-related burns represented 28.2% of all burn admissions (N=532) from 2009 to 2014. The accelerant group had a higher percentage of patients with a history of drug abuse and was associated with higher TBSA burns, ventilator support, ICU stay, and pneumonia rates compared to the non-accelerant group. Within the accelerant group, there was no difference in clinical outcomes between people with or without a history of drug abuse. Four cases were associated with methamphetamine manufacturing, all of which underwent ICU stay and ventilator support. Accelerant-related burns cause a significant burden to the burn center. A significant proportion of these patients have a history of drug abuse. Copyright © 2017 Elsevier Ltd and ISBI. All rights reserved.

  6. A preliminary design of the collinear dielectric wakefield accelerator

    NASA Astrophysics Data System (ADS)

    Zholents, A.; Gai, W.; Doran, S.; Lindberg, R.; Power, J. G.; Strelnikov, N.; Sun, Y.; Trakhtenberg, E.; Vasserman, I.; Jing, C.; Kanareykin, A.; Li, Y.; Gao, Q.; Shchegolkov, D. Y.; Simakov, E. I.

    2016-09-01

    A preliminary design of the multi-meter long collinear dielectric wakefield accelerator that achieves a highly efficient transfer of the drive bunch energy to the wakefields and to the witness bunch is considered. It is made from 0.5 m long accelerator modules containing a vacuum chamber with dielectric-lined walls, a quadrupole wiggler, an rf coupler, and a BPM assembly. The single bunch breakup instability is a major limiting factor for accelerator efficiency, and BNS damping is applied to obtain stable multi-meter long propagation of a drive bunch. Numerical simulations using a 6D particle tracking computer code are performed, and tolerances to various errors are defined.

  7. A language of health in action: Read Codes, classifications and groupings.

    PubMed Central

    Stuart-Buttle, C. D.; Read, J. D.; Sanderson, H. F.; Sutton, Y. M.

    1996-01-01

    A cornerstone of the Information Management and Technology Strategy of the National Health Service's (NHS) Executive is fully operational, person-based clinical information systems, from which flow all of the data needed for direct and indirect care of patients by healthcare providers, and local and national management of the NHS. The currency of these data flows is, firstly, Read-coded clinical terms; secondly, the classifications, the International Classification of Diseases and Related Health Problems, 10th Revision (ICD-10) and The Office of Population Censuses and Surveys Classification of Surgical Operations and Procedures, 4th Revision (OPCS-4); and thirdly, Healthcare Resource Groups and Health Benefit Groups, all of which together are called the "language of health", an essential element of the electronic clinical record. This paper briefly describes the three main constituents of the language, and how, together with person-based, fully operational clinical information systems, it enables more effective and efficient healthcare delivery. It also describes how the remaining projects of the IM&T Strategy complete the key components necessary to provide the systems that will enable the flow of person-based data, collected once at the point of care and shared amongst all legitimate users via the electronic patient record. PMID:8947631

  8. A multicenter collaborative approach to reducing pediatric codes outside the ICU.

    PubMed

    Hayes, Leslie W; Dobyns, Emily L; DiGiovine, Bruno; Brown, Ann-Marie; Jacobson, Sharon; Randall, Kelly H; Wathen, Beth; Richard, Heather; Schwab, Carolyn; Duncan, Kathy D; Thrasher, Jodi; Logsdon, Tina R; Hall, Matthew; Markovitz, Barry

    2012-03-01

    The Child Health Corporation of America formed a multicenter collaborative to decrease the rate of pediatric codes outside the ICU by 50%, double the days between these events, and improve the patient safety culture scores by 5 percentage points. A multidisciplinary pediatric advisory panel developed a comprehensive change package of process improvement strategies and measures for tracking progress. Learning sessions, conference calls, and data submission facilitated collaborative group learning and implementation. Twenty Child Health Corporation of America hospitals participated in this 12-month improvement project. Each hospital identified at least 1 noncritical care target unit in which to implement selected elements of the change package. Strategies to improve prevention, detection, and correction of the deteriorating patient ranged from relatively simple, foundational changes to more complex, advanced changes. Each hospital selected a broad range of change package elements for implementation using rapid-cycle methodologies. The primary outcome measure was reduction in codes per 1000 patient days. Secondary outcomes were days between codes and change in patient safety culture scores. Code rate for the collaborative did not decrease significantly (3% decrease). Twelve hospitals reported additional data after the collaborative and saw significant improvement in code rates (24% decrease). Patient safety culture scores improved by 4.5% to 8.5%. A complex process, such as patient deterioration, requires sufficient time and effort to achieve improved outcomes and create a deeply embedded culture of patient safety. The collaborative model can accelerate improvements achieved by individual institutions.

  9. 2,2'-Biphenols via protecting group-free thermal or microwave-accelerated Suzuki-Miyaura coupling in water.

    PubMed

    Schmidt, Bernd; Riemer, Martin; Karras, Manfred

    2013-09-06

    User-friendly protocols for the protecting group-free synthesis of 2,2'-biphenols via Suzuki-Miyaura coupling of o-halophenols and o-boronophenol are presented. The reactions proceed in water in the presence of simple additives such as K2CO3, KOH, KF, or TBAF and with commercially available Pd/C as precatalyst. Expensive or laboriously synthesized ligands or other additives are not required. In the case of bromophenols, efficient rate acceleration and short reaction times were accomplished by microwave irradiation.

  10. Using Kokkos for Performant Cross-Platform Acceleration of Liquid Rocket Simulations

    DTIC Science & Technology

    2017-05-08

    Briefing charts (05 April 2017 - 08 May 2017), ERC Incorporated / AFRL-West: Using Kokkos for Performant Cross-Platform Acceleration of Liquid Rocket Simulations; includes a SPACE simulation of a rotating detonation engine (courtesy of Dr. Christopher Lietz). Distribution A: approved for public release.

  11. The effects of resisted sprint training on acceleration performance and kinematics in soccer, rugby union, and Australian football players.

    PubMed

    Spinks, Christopher D; Murphy, Aron J; Spinks, Warwick L; Lockie, Robert G

    2007-02-01

    Acceleration is a significant feature of game-deciding situations in the various codes of football. However, little is known about the acceleration characteristics of football players, the effects of acceleration training, or the effectiveness of different training modalities. This study examined the effects of resisted sprint (RS) training (weighted sled towing) on acceleration performance (0-15 m), leg power (countermovement jump [CMJ], 5-bound test [5BT], and 50-cm drop jump [50DJ]), gait (foot contact time, stride length, stride frequency, step length, and flight time), and joint (shoulder, elbow, hip, and knee) kinematics in men (N = 30) currently playing soccer, rugby union, or Australian football. Gait and kinematic measurements were derived from the first and second strides of an acceleration effort. Participants were randomly assigned to 1 of 3 treatment conditions: (a) an 8-week sprint training program of two 1-h sessions x wk(-1) plus RS training (RS group, n = 10), (b) an 8-week nonresisted sprint training program of two 1-h sessions x wk(-1) (NRS group, n = 10), or (c) control (n = 10). The results indicated that an 8-week RS training program (a) significantly improves acceleration and leg power (CMJ and 5BT) performance but is no more effective than an 8-week NRS training program, (b) significantly improves reactive strength (50DJ), and (c) has minimal impact on gait and upper- and lower-body kinematics during acceleration performance compared to an 8-week NRS training program. These findings suggest that RS training will not adversely affect acceleration kinematics and gait. Although apparently no more effective than NRS training, this training modality provides an overload stimulus to acceleration mechanics and recruitment of the hip and knee extensors, resulting in greater application of horizontal power.

  12. The gene coding for small ribosomal subunit RNA in the basidiomycete Ustilago maydis contains a group I intron.

    PubMed Central

    De Wachter, R; Neefs, J M; Goris, A; Van de Peer, Y

    1992-01-01

    The nucleotide sequence of the gene coding for small ribosomal subunit RNA in the basidiomycete Ustilago maydis was determined. It revealed the presence of a group I intron with a length of 411 nucleotides. This is the third occurrence of such an intron discovered in a small subunit rRNA gene encoded by a eukaryotic nuclear genome. The other two occurrences are in Pneumocystis carinii, a fungus of uncertain taxonomic status, and Ankistrodesmus stipitatus, a green alga. The nucleotides of the conserved core structure of 101 group I intron sequences present in different genes and genome types were aligned and their evolutionary relatedness was examined. This revealed a cluster including all group I introns hitherto found in eukaryotic nuclear genes coding for small and large subunit rRNAs. A secondary structure model was designed for the area of the Ustilago maydis small ribosomal subunit RNA precursor where the intron is situated. It shows that the internal guide sequence pairing with the intron boundaries fits between two helices of the small subunit rRNA, and that minimal rearrangement of base pairs suffices to achieve the definitive secondary structure of the 18S rRNA upon splicing. PMID:1561081

  13. A general multiblock Euler code for propulsion integration. Volume 3: User guide for the Euler code

    NASA Technical Reports Server (NTRS)

    Chen, H. C.; Su, T. Y.; Kao, T. J.

    1991-01-01

    This manual explains the procedures for using the general multiblock Euler (GMBE) code developed under NASA contract NAS1-18703. The code was developed for the aerodynamic analysis of geometrically complex configurations in either free air or wind tunnel environments (vol. 1). The complete flow field is divided into a number of topologically simple blocks within each of which surface fitted grids and efficient flow solution algorithms can easily be constructed. The multiblock field grid is generated with the BCON procedure described in volume 2. The GMBE utilizes a finite volume formulation with an explicit time stepping scheme to solve the Euler equations. A multiblock version of the multigrid method was developed to accelerate the convergence of the calculations. This user guide provides information on the GMBE code, including input data preparations with sample input files and a sample Unix script for program execution in the UNICOS environment.

  14. [Variation of CAG repeats in coding region of ATXN2 gene in different ethnic groups].

    PubMed

    Chen, Xiao-Chen; Sun, Hao; Mi, Dong-Qing; Huang, Xiao-Qin; Lin, Ke-Qin; Yi, Wen; Yu, Liang; Shi, Lei; Shi, Li; Yang, Zhao-Qing; Chu, Jia-You

    2011-04-01

    To investigate CAG repeat variation in the ATXN2 gene coding region in six ethnic groups that live in comparatively different environments, to evaluate whether these variations are under positive selection, and to identify the factors driving selection effects, 291 unrelated healthy individuals were collected from six ethnic groups and their STR genotyping was performed. The frequencies of alleles and genotypes were counted, and thereby Slatkin's linearized Fst values were calculated. The UPGMA tree against this gene was constructed. The MDS analysis among these groups was carried out as well. The results from the linearized Fst values indicated that there were significant evolutionary differences of the STR in the ATXN2 gene between the Hui and Yi groups, but not among the other 4 groups. Further analysis was performed by combining our data with published data obtained from other groups. These results indicated that there were significant differences between Japanese and other groups including Hui, Hani, Yunnan Mongolian, and Inner Mongolian. Both Hui and Mongolian from Inner Mongolia were significantly different from Han. In conclusion, the six ethnic groups had their own distinct distributions of allelic frequencies of the ATXN2 STR, and the potential cause of frequency changes in rare alleles could be the consequence of positive selection.

  15. A Peer Helpers Code of Behavior.

    ERIC Educational Resources Information Center

    de Rosenroll, David A.

    This document presents a guide for developing a peer helpers code of behavior. The first section discusses issues relevant to the trainers. These issues include whether to give a model directly to the group or whether to engender "ownership" of the code by the group; timing of introduction of the code; and addressing the issue of…

  16. Particle acceleration at a reconnecting magnetic separator

    NASA Astrophysics Data System (ADS)

    Threlfall, J.; Neukirch, T.; Parnell, C. E.; Eradat Oskoui, S.

    2015-02-01

    Context. While the exact acceleration mechanism of energetic particles during solar flares is (as yet) unknown, magnetic reconnection plays a key role both in the release of stored magnetic energy of the solar corona and the magnetic restructuring during a flare. Recent work has shown that special field lines, called separators, are common sites of reconnection in 3D numerical experiments. To date, 3D separator reconnection sites have received little attention as particle accelerators. Aims: We investigate the effectiveness of separator reconnection as a particle acceleration mechanism for electrons and protons. Methods: We study the particle acceleration using a relativistic guiding-centre particle code in a time-dependent kinematic model of magnetic reconnection at a separator. Results: The effect upon particle behaviour of initial position, pitch angle, and initial kinetic energy are examined in detail, both for specific (single) particle examples and for large distributions of initial conditions. The separator reconnection model contains several free parameters, and we study the effect of changing these parameters upon particle acceleration, in particular in view of the final particle energy ranges that agree with observed energy spectra.
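
    The article's relativistic guiding-centre code is not reproduced here; as a minimal illustration of test-particle integration in prescribed fields, a non-relativistic Boris pusher (a standard building block of test-particle codes) is sketched below. The field configuration and all parameters are invented for the example.

```python
import numpy as np

# Minimal non-relativistic Boris pusher: half electric kick, exact magnetic
# rotation, half electric kick, then a position update. Illustrative only.
def boris_push(x, v, E, B, qm, dt, steps):
    for _ in range(steps):
        v_minus = v + 0.5 * qm * E * dt           # first half electric kick
        t = 0.5 * qm * B * dt                     # rotation vector
        s = 2 * t / (1 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)   # rotation about B
        v = v_plus + 0.5 * qm * E * dt            # second half electric kick
        x = x + v * dt
    return x, v

x0 = np.array([1.0, 0.0, 0.0])
v0 = np.array([0.0, 1.0, 0.0])
B = np.array([0.0, 0.0, 1.0])                     # uniform field along z
x, v = boris_push(x0, v0, np.zeros(3), B, qm=1.0, dt=0.01, steps=1000)

# With E = 0 the Boris rotation conserves kinetic energy to rounding error.
print(round(float(np.dot(v, v)), 10))             # -> 1.0
```

    The same update, with a relativistic momentum factor and time-dependent reconnection fields, is what a code of the kind described above iterates for large ensembles of initial conditions.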

  17. Improving coding accuracy in an academic practice.

    PubMed

    Nguyen, Dana; O'Mara, Heather; Powell, Robert

    2017-01-01

    Practice management has become an increasingly important component of graduate medical education. This applies to every practice environment: private, academic, and military. One of the most critical aspects of practice management is documentation and coding for physician services, as they directly affect the financial success of any practice. Our quality improvement project aimed to implement a new and innovative method for teaching billing and coding in a longitudinal fashion in a family medicine residency. We hypothesized that implementation of a new teaching strategy would increase coding accuracy rates among residents and faculty. Design: single group, pretest-posttest. Setting: military family medicine residency clinic. Study populations: 7 faculty physicians and 18 resident physicians participated as learners in the project. Educational intervention: monthly structured coding learning sessions in the academic curriculum that involved learner-presented cases, small group case review, and large group discussion. Outcome measures: overall coding accuracy (compliance) percentage and coding accuracy per year group for the subjects who were able to participate longitudinally. Statistical tests used: average coding accuracy for the population; paired t test to assess improvement between 2 intervention periods, both aggregate and by year group. Overall coding accuracy rates remained stable over the course of time regardless of the modality of the educational intervention. A paired t test was conducted to compare coding accuracy rates at baseline (mean (M) = 26.4%, SD = 10%) to accuracy rates after all educational interventions were complete (M = 26.8%, SD = 12%); t(24) = -0.127, P = .90. Didactic teaching and small group discussion sessions did not improve overall coding accuracy in a residency practice. Future interventions could focus on educating providers at the individual level.

  18. High Energy Density Physics and Exotic Acceleration Schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cowan, T.; /General Atomics, San Diego; Colby, E.

    2005-09-27

    The High Energy Density and Exotic Acceleration working group took as our goal to reach beyond the community of plasma accelerator research with its applications to high energy physics, to promote exchange with other disciplines which are challenged by related and demanding beam physics issues. The scope of the group was to cover particle acceleration and beam transport that, unlike other groups at AAC, are not mediated by plasmas or by electromagnetic structures. At this Workshop, we saw an impressive advancement from years past in the area of Vacuum Acceleration, for example with the LEAP experiment at Stanford. And we saw an influx of exciting new beam physics topics involving particle propagation inside of solid-density plasmas or at extremely high charge density, particularly in the areas of laser acceleration of ions, and extreme beams for fusion energy research, including Heavy-ion Inertial Fusion beam physics. One example of the importance and extreme nature of beam physics in HED research is the requirement in the Fast Ignitor scheme of inertial fusion to heat a compressed DT fusion pellet to keV temperatures by injection of laser-driven electron or ion beams of giga-Amp current. Even in modest experiments presently being performed on the laser-acceleration of ions from solids, mega-amp currents of MeV electrons must be transported through solid foils, requiring almost complete return current neutralization, and giving rise to a wide variety of beam-plasma instabilities. As keynote talks our group promoted Ion Acceleration (plenary talk by A. MacKinnon), which historically has grown out of inertial fusion research, and HIF Accelerator Research (invited talk by A. Friedman), which will require impressive advancements in space-charge-limited ion beam physics and in understanding the generation and transport of neutralized ion beams. A unifying aspect of High Energy Density applications was the physics of particle beams inside of solids.

  19. Non-White, No More: Effect Coding as an Alternative to Dummy Coding with Implications for Higher Education Researchers

    ERIC Educational Resources Information Center

    Mayhew, Matthew J.; Simonoff, Jeffrey S.

    2015-01-01

    The purpose of this article is to describe effect coding as an alternative quantitative practice for analyzing and interpreting categorical, race-based independent variables in higher education research. Unlike indicator (dummy) codes that imply that one group will be a reference group, effect codes use average responses as a means for…
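
    The contrast the abstract draws can be made concrete in a few lines. This is an illustrative sketch, not the article's analysis: the group labels and balanced toy data are invented, and the reference level is chosen arbitrarily.

```python
import numpy as np

# Toy categorical predictor with three groups (labels are illustrative).
obs = ["A", "B", "C", "A", "B", "C"]
levels = ["A", "B", "C"]              # the last level serves as the reference

def dummy_code(g):
    # Indicator (dummy) coding: one column per non-reference level;
    # the reference level "C" is coded as all zeros, so every coefficient
    # is a contrast against that single reference group.
    return [1.0 if g == lvl else 0.0 for lvl in levels[:-1]]

def effect_code(g):
    # Effect coding: same columns, but the reference level is coded -1 in
    # every column, so coefficients become deviations from the grand mean
    # rather than contrasts against one group.
    if g == levels[-1]:
        return [-1.0] * (len(levels) - 1)
    return [1.0 if g == lvl else 0.0 for lvl in levels[:-1]]

X_dummy = np.array([dummy_code(g) for g in obs])
X_effect = np.array([effect_code(g) for g in obs])

# With balanced data each effect-coded column sums to zero, which is what
# makes the regression intercept the unweighted grand mean.
print(X_effect.sum(axis=0))           # -> [0. 0.]
```

    The design matrices differ only in how the reference group is encoded, but that single change alters what the fitted intercept and slopes mean.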

  20. NORTICA—a new code for cyclotron analysis

    NASA Astrophysics Data System (ADS)

    Gorelov, D.; Johnson, D.; Marti, F.

    2001-12-01

    The new package NORTICA (Numerical ORbit Tracking In Cyclotrons with Analysis) of computer codes for beam dynamics simulations is under development at NSCL. The package was started as a replacement for the code MONSTER [1] developed in the laboratory in the past. The new codes are capable of beam dynamics simulations in both CCF (Coupled Cyclotron Facility) accelerators, the K500 and K1200 superconducting cyclotrons. The general purpose of this package is to assist in setting up and tuning the cyclotrons, taking into account the main field and extraction channel imperfections. The computer platform for the package is an Alpha Station with the UNIX operating system and an X-Windows graphic interface. A multiple programming language approach was used in order to combine the reliability of the numerical algorithms developed over a long period of time in the laboratory with the friendliness of a modern-style user interface. This paper describes the capabilities and features of the codes in their present state.

  1. Evaluation of the Intel Xeon Phi 7120 and NVIDIA K80 as accelerators for two-dimensional panel codes

    PubMed Central

    2017-01-01

    To optimize the geometry of airfoils for a specific application is an important engineering problem. In this context genetic algorithms have enjoyed some success as they are able to explore the search space without getting stuck in local optima. However, these algorithms require the computation of aerodynamic properties for a significant number of airfoil geometries. Consequently, for low-speed aerodynamics, panel methods are most often used as the inner solver. In this paper we evaluate the performance of such an optimization algorithm on modern accelerators (more specifically, the Intel Xeon Phi 7120 and the NVIDIA K80). For that purpose, we have implemented an optimized version of the algorithm on the CPU and Xeon Phi (based on OpenMP, vectorization, and the Intel MKL library) and on the GPU (based on CUDA and the MAGMA library). We present timing results for all codes and discuss the similarities and differences between the three implementations. Overall, we observe a speedup of approximately 2.5 for adding an Intel Xeon Phi 7120 to a dual socket workstation and a speedup between 3.4 and 3.8 for adding a NVIDIA K80 to a dual socket workstation. PMID:28582389

  2. Evaluation of the Intel Xeon Phi 7120 and NVIDIA K80 as accelerators for two-dimensional panel codes.

    PubMed

    Einkemmer, Lukas

    2017-01-01

    To optimize the geometry of airfoils for a specific application is an important engineering problem. In this context genetic algorithms have enjoyed some success as they are able to explore the search space without getting stuck in local optima. However, these algorithms require the computation of aerodynamic properties for a significant number of airfoil geometries. Consequently, for low-speed aerodynamics, panel methods are most often used as the inner solver. In this paper we evaluate the performance of such an optimization algorithm on modern accelerators (more specifically, the Intel Xeon Phi 7120 and the NVIDIA K80). For that purpose, we have implemented an optimized version of the algorithm on the CPU and Xeon Phi (based on OpenMP, vectorization, and the Intel MKL library) and on the GPU (based on CUDA and the MAGMA library). We present timing results for all codes and discuss the similarities and differences between the three implementations. Overall, we observe a speedup of approximately 2.5 for adding an Intel Xeon Phi 7120 to a dual socket workstation and a speedup between 3.4 and 3.8 for adding a NVIDIA K80 to a dual socket workstation.
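
    As a rough illustration of the optimization loop both records describe, here is a minimal genetic-algorithm skeleton. The inner panel-method solver, which is the expensive part offloaded to the accelerators in the paper, is replaced by a cheap stand-in fitness function; the population size, operators, and parameter bounds are all invented.

```python
import random

# Minimal genetic-algorithm skeleton of the kind described in the abstract.
# The stand-in fitness below replaces the aerodynamic panel solve.
random.seed(0)

N_GENES = 8             # e.g. control points of an airfoil parameterisation
POP, GENS = 32, 40

def fitness(genes):
    # Stand-in for an aerodynamic objective: reward genes close to 0.5.
    return -sum((g - 0.5) ** 2 for g in genes)

def crossover(a, b):
    cut = random.randrange(1, N_GENES)
    return a[:cut] + b[cut:]

def mutate(genes, rate=0.1, sigma=0.05):
    return [min(1.0, max(0.0, g + random.gauss(0.0, sigma)))
            if random.random() < rate else g
            for g in genes]

pop = [[random.random() for _ in range(N_GENES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 4]                       # truncation selection
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP - len(elite))]
    pop = elite + children

best = max(pop, key=fitness)
print(round(-fitness(best), 4))                   # squared distance from optimum
```

    In the paper's setting, each call to `fitness` is a full panel-method solve, which is why batching those evaluations onto a Xeon Phi or K80 pays off.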

  3. PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)

    NASA Astrophysics Data System (ADS)

    Vincenti, Henri

    2016-03-01

    The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce energy consumption related to data movement by using more and more cores on each compute node ("fat nodes") with a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process Multiple Instructions on Multiple Data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for multicore/manycore CPUs) to fully take advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high performance skeleton PIC code PICSAR to achieve both good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba Python compiler.
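
    The vectorization requirement described in the talk can be illustrated with a toy particle push written two ways: a scalar per-particle loop, which underuses SIMD units, and a whole-array update, which maps naturally onto SIMD registers or GPU threads. NumPy stands in for the actual back-ends; the field model and constants are invented.

```python
import numpy as np

# Toy 1D leapfrog push for n particles, written scalar and vectorized.
n, dt, qm = 10_000, 1e-3, 1.0
rng = np.random.default_rng(1)
x = rng.standard_normal(n)
v = rng.standard_normal(n)
E = 0.1 * np.ones(n)                   # toy field gathered at particle positions

# Scalar version: one particle per iteration (poor SIMD utilisation).
x_s, v_s = x.copy(), v.copy()
for i in range(n):
    v_s[i] += qm * E[i] * dt
    x_s[i] += v_s[i] * dt

# Vectorized version: the same update as whole-array operations.
v_v = v + qm * E * dt
x_v = x + v_v * dt

print(np.allclose(x_s, x_v) and np.allclose(v_s, v_v))   # -> True
```

    Real PIC pushes also gather fields and scatter currents, which is where the memory-locality concerns discussed in the talk come in; the arithmetic itself vectorizes exactly as above.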

  4. Designing a Dielectric Laser Accelerator on a Chip

    NASA Astrophysics Data System (ADS)

    Niedermayer, Uwe; Boine-Frankenheim, Oliver; Egenolf, Thilo

    2017-07-01

    Dielectric Laser Acceleration (DLA) achieves gradients of more than 1 GeV/m, which are among the highest in non-plasma accelerators. The long-term goal of the ACHIP collaboration is to provide relativistic (>1 MeV) electrons by means of a laser-driven microchip accelerator. Examples of "slightly resonant" dielectric structures showing gradients in the range of 70% of the incident laser field (1 GV/m) for electrons with beta = 0.32 and 200% for beta = 0.91 are presented. We demonstrate the bunching and acceleration of low energy electrons in dedicated ballistic buncher and velocity-matched grating structures. However, the design gradient of 500 MeV/m leads to rapid defocusing. Therefore we present a scheme to bunch the beam in stages, which not only reduces the energy spread but also the transverse defocusing. The designs are made with a dedicated homemade 6D particle tracking code.

  5. Load management strategy for Particle-In-Cell simulations in high energy particle acceleration

    NASA Astrophysics Data System (ADS)

    Beck, A.; Frederiksen, J. T.; Dérouillat, J.

    2016-09-01

    In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performances. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.
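
    The load-imbalance issue identified above can be sketched with a toy balancer: if per-patch cost is proportional to particle count, a greedy longest-processing-time heuristic assigns patches to ranks, heaviest first onto the currently lightest rank. This is a generic illustration of the problem, not the algorithm proposed in the paper; all names and loads are invented.

```python
import heapq

def balance(patch_loads, n_ranks):
    """Assign patch indices to ranks, heaviest patch first onto lightest rank."""
    heap = [(0, r, []) for r in range(n_ranks)]       # (total load, rank, patches)
    heapq.heapify(heap)
    for idx in sorted(range(len(patch_loads)), key=lambda i: -patch_loads[i]):
        load, r, patches = heapq.heappop(heap)        # lightest rank so far
        patches.append(idx)
        heapq.heappush(heap, (load + patch_loads[idx], r, patches))
    return sorted(heap)

loads = [900, 100, 100, 100, 500, 400, 200, 100]      # particles per patch
for total, rank, patches in balance(loads, 4):
    print(rank, total, patches)
```

    The single 900-particle patch still dominates one rank, which is why production PIC balancers must also split or migrate patches dynamically as particles bunch up during the run.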

  6. Compact torus accelerator as a driver for ICF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tobin, M.T.; Meier, W.R.; Morse, E.C.

    1986-01-01

    The authors have carried out further investigations of the technical issues associated with using a compact torus (CT) accelerator as a driver for inertial confinement fusion (ICF). In a CT accelerator, a magnetically confined, torus-shaped plasma is compressed, accelerated, and focused by two concentric electrodes. After its initial formation, the torus shape is maintained for lifetimes exceeding 1 ms by inherent poloidal and toroidal currents. Hartman suggests acceleration and focusing of such a plasma ring will not cause dissolution within certain constraints. In this study, we evaluated a point design based on an available capacitor bank energy of 9.2 MJ. This accelerator, which was modeled by a zero-dimensional code, produces a xenon plasma ring with a 0.73-cm radius, a velocity of 4.14 x 10^9 cm/s, and a mass of 4.42 µg. The energy of the plasma ring as it leaves the accelerator is 3.8 MJ, or 41% of the capacitor bank energy. Our studies confirm the feasibility of producing a plasma ring with the characteristics required to induce fusion in an ICF target with a gain greater than 50. The low cost and high efficiency of the CT accelerator are particularly attractive. Uncertainties concerning propagation, accelerator lifetime, and power supply must be resolved to establish the viability of the accelerator as an ICF driver.
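
    The quoted figures are internally consistent, as a quick kinetic-energy check shows:

```python
# Kinetic energy of the quoted plasma ring and its fraction of bank energy.
m = 4.42e-9           # kg  (4.42 micrograms)
v = 4.14e7            # m/s (4.14 x 10^9 cm/s)
E_k = 0.5 * m * v**2  # joules

print(round(E_k / 1e6, 2))         # -> 3.79  (MJ, matching the quoted 3.8 MJ)
print(round(E_k / 9.2e6 * 100))    # -> 41   (percent of the 9.2 MJ bank)
```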

  7. STUDIES OF A FREE ELECTRON LASER DRIVEN BY A LASER-PLASMA ACCELERATOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montgomery, A.; Schroeder, C.; Fawley, W.

    A free electron laser (FEL) uses an undulator, a set of alternating magnets producing a periodic magnetic field, to stimulate emission of coherent radiation from a relativistic electron beam. The Lasers, Optical Accelerator Systems Integrated Studies (LOASIS) group at Lawrence Berkeley National Laboratory (LBNL) will use an innovative laser-plasma wakefield accelerator to produce an electron beam to drive a proposed FEL. In order to optimize the FEL performance, the dependence on electron beam and undulator parameters must be understood. Numerical modeling of the FEL using the simulation code GINGER predicts the experimental results for given input parameters. Among the parameters studied were electron beam energy spread, emittance, and mismatch with the undulator focusing. Vacuum-chamber wakefields were also simulated to study their effect on FEL performance. Energy spread was found to be the most influential factor, with output FEL radiation power sharply decreasing for relative energy spreads greater than 0.33%. Vacuum chamber wakefields and beam mismatch had little effect on the simulated LOASIS FEL at the currents considered. This study concludes that continued improvement of the laser-plasma wakefield accelerator electron beam will allow the LOASIS FEL to operate in an optimal regime, producing high-quality XUV and x-ray pulses.

  8. Calculations of beam dynamics in Sandia linear electron accelerators, 1984

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poukey, J.W.; Coleman, P.D.

    1985-03-01

    A number of code and analytic studies were made during 1984 which pertain to the Sandia linear accelerators MABE and RADLAC. In this report the authors summarize the important results of the calculations. New results include a better understanding of gap-induced radial oscillations, leakage currents in a typical MABE gap, emittance growth in a beam passing through a series of gaps, some new diocotron results, and the latest diode simulations for both accelerators. 23 references, 30 figures, 1 table.

  9. Chromaticity calculations and code comparisons for x-ray lithography source XLS and SXLS rings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsa, Z.

    1988-06-16

    This note presents the chromaticity calculations and code comparison results for the XLS (x-ray lithography source; Chasman Green, XUV Cosy lattice) and SXLS (2-magnet, 4 T) lattices, obtained with the standard beam optics codes, including the programs SYNCH88.5, MAD6, PATRICIA88.4, PATPET88.2, DIMAD, BETA, and MARYLIE. This analysis is a part of our ongoing accelerator physics code studies. 4 figs., 10 tabs.

  10. Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian

    The THOR neutral particle transport code enables simulation of complex geometries for various problems, from reactor simulations to nuclear non-proliferation. It is undergoing a thorough V&V effort that requires computational efficiency. This has motivated various improvements, including angular parallelization, outer iteration acceleration, and development of peripheral tools. For guiding future improvements to the code's efficiency, better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL's Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this resulted in a communication model and a parallel portion model. The former's accuracy is bounded by the variability of communication on Falcon, while the latter has an error on the order of 1%.
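
    A performance model of the general shape described above (a parallel-portion term plus a communication term) can be sketched as follows. The coefficients here are invented placeholders, not THOR's measured values; the point is only how such a model exposes the crossover where communication becomes the bottleneck.

```python
# Toy parallel performance model: runtime(N) = compute work / N + comm(N).
def predicted_runtime(n_ranks, cells=1_000_000, t_cell=2e-6,
                      latency=1e-4, bytes_per_msg=8e4, bandwidth=1e9):
    compute = cells * t_cell / n_ranks                    # parallel portion model
    comm = n_ranks * latency + bytes_per_msg / bandwidth  # communication model
    return compute + comm

# Predicted runtime shrinks with rank count until the linear-in-N
# communication term takes over.
for n in (1, 8, 64, 512):
    print(n, round(predicted_runtime(n), 4))
```

    Fitting the two terms separately, as the abstract describes, lets each modification be judged by which term it moves.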

  11. [Complexity level simulation in the German diagnosis-related groups system: the financial effect of coding of comorbidity diagnostics in urology].

    PubMed

    Wenke, A; Gaber, A; Hertle, L; Roeder, N; Pühse, G

    2012-07-01

    Precise and complete coding of diagnoses and procedures is of value for optimizing revenues within the German diagnosis-related groups (G-DRG) system. The implementation of effective structures for coding is cost-intensive. The aim of this study was to prove whether higher costs can be refunded by complete acquisition of comorbidities and complications. Calculations were based on DRG data of the Department of Urology, University Hospital of Münster, Germany, covering all patients treated in 2009. The data were regrouped and subjected to a process of simulation (increase and decrease of patient clinical complexity levels, PCCL) with the help of recently developed software. In urology a strong dependency of quantity and quality of coding of secondary diagnoses on PCCL and subsequent profits was found. Departmental budgetary procedures can be optimized when coding is effective. The new simulation tool can be a valuable aid to improve profits available for distribution. Nevertheless, calculation of time use and financial needs by this procedure are subject to specific departmental terms and conditions. Completeness of coding of (secondary) diagnoses must be the ultimate administrative goal of patient case documentation in urology.

  12. Sheath field dynamics from time-dependent acceleration of laser-generated positrons

    NASA Astrophysics Data System (ADS)

    Kerr, Shaun; Fedosejevs, Robert; Link, Anthony; Williams, Jackson; Park, Jaebum; Chen, Hui

    2017-10-01

    Positrons produced in ultraintense laser-matter interactions are accelerated by the sheath fields established by fast electrons, typically resulting in quasi-monoenergetic beams. Experimental results from OMEGA EP show higher order features developing in the positron spectra when the laser energy exceeds one kilojoule. 2D PIC simulations using the LSP code were performed to give insight into these spectral features. They suggest that for high laser energies multiple, distinct phases of acceleration can occur due to time-dependent sheath field acceleration. The detailed dynamics of positron acceleration will be discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344, and funded by LDRD 17-ERD-010.

  13. MHD code using multi graphical processing units: SMAUG+

    NASA Astrophysics Data System (ADS)

    Gyenge, N.; Griffiths, M. K.; Erdélyi, R.

    2018-01-01

    This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques, and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with the Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slowdowns, depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.

  14. Reliability enhancement of Navier-Stokes codes through convergence enhancement

    NASA Technical Reports Server (NTRS)

    Choi, K.-Y.; Dulikravich, G. S.

    1993-01-01

    Reduction of the total computing time required by an iterative algorithm for solving the Navier-Stokes equations is an important aspect of making existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of the optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigridding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent, depending on the test case and grid used. Recently, we have developed and tested a new method, termed Sensitivity Based DMR or SBMR method, that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.
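
    The DMR/SBMR methods themselves are not reproduced here, but the underlying idea of extrapolating a converging iteration can be illustrated with the classical Aitken delta-squared (Steffensen) scheme, one of the simplest members of this family, applied to a scalar fixed-point problem.

```python
import math

def fixed_point(f, x0, n):
    # Plain fixed-point iteration x <- f(x).
    x = x0
    for _ in range(n):
        x = f(x)
    return x

def aitken(f, x0, n):
    # Steffensen iteration: each step uses two evaluations of f and an
    # Aitken delta-squared extrapolation toward the fixed point.
    x = x0
    for _ in range(n):
        x1, x2 = f(x), f(f(x))
        denom = x2 - 2 * x1 + x
        if denom == 0:                  # already converged to precision
            return x2
        x = x - (x1 - x) ** 2 / denom
    return x

root = 0.7390851332151607               # fixed point of cos(x)
plain = abs(fixed_point(math.cos, 1.0, 10) - root)
accel = abs(aitken(math.cos, 1.0, 10) - root)
print(accel < plain)                    # -> True
```

    DMR generalizes this spirit to systems: instead of one extrapolation weight for the whole state, each solution component gets its own optimal correction weight.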

  15. Reliability enhancement of Navier-Stokes codes through convergence enhancement

    NASA Astrophysics Data System (ADS)

    Choi, K.-Y.; Dulikravich, G. S.

    1993-11-01

    Reduction of total computing time required by an iterative algorithm for solving Navier-Stokes equations is an important aspect of making the existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to the Jameson's multigrid algorithm. The MRM uses same values of optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method based on our General Nonlinear Minimal Residual (GNLMR) method allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent depending on the test case and grid used. Recently, we have developed and tested a new method termed Sensitivity Based DMR or SBMR method that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.

  16. Inter-view prediction of intra mode decision for high-efficiency video coding-based multiview video coding

    NASA Astrophysics Data System (ADS)

    da Silva, Thaísa Leal; Agostini, Luciano Volcan; da Silva Cruz, Luis A.

    2014-05-01

    Intra prediction is a very important tool in current video coding standards. High-efficiency video coding (HEVC) intra prediction presents relevant gains in encoding efficiency when compared to previous standards, but with a very important increase in the computational complexity since 33 directional angular modes must be evaluated. Motivated by this high complexity, this article presents a complexity reduction algorithm developed to reduce the HEVC intra mode decision complexity targeting multiview videos. The proposed algorithm presents an efficient fast intra prediction compliant with singleview and multiview video encoding. This fast solution defines a reduced subset of intra directions according to the video texture and it exploits the relationship between prediction units (PUs) of neighbor depth levels of the coding tree. This fast intra coding procedure is used to develop an inter-view prediction method, which exploits the relationship between the intra mode directions of adjacent views to further accelerate the intra prediction process in multiview video encoding applications. When compared to HEVC simulcast, our method achieves a complexity reduction of up to 47.77%, at the cost of an average BD-PSNR loss of 0.08 dB.
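    The mode-subset reduction described above can be sketched in a few lines. This is a hedged illustration of the general idea, not the authors' exact decision rules: instead of evaluating all 33 HEVC angular modes, only modes near the winning modes of neighbouring PUs (spatial neighbours, or the co-located PU in an adjacent view) are tested. The function name and radius parameter are illustrative.

```python
def candidate_modes(neighbor_best_modes, radius=2):
    """Build a reduced angular-mode subset around the neighbours' winning modes."""
    subset = set()
    for m in neighbor_best_modes:
        for d in range(-radius, radius + 1):
            cand = m + d
            if 2 <= cand <= 34:      # HEVC angular intra modes are numbered 2..34
                subset.add(cand)
    return sorted(subset)

# e.g. the left PU chose mode 10 and the co-located PU in the adjacent view chose 26
modes = candidate_modes([10, 26])
```

Evaluating ~10 candidate directions instead of 33 is the kind of pruning that yields the reported complexity reductions, at the price of a small rate-distortion loss.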

  17. Ponderomotive Acceleration in Coronal Loops

    NASA Astrophysics Data System (ADS)

    Dahlburg, Russell B.; Laming, J. Martin; Taylor, Brian; Obenschain, Keith

    2017-08-01

    Ponderomotive acceleration has been asserted to be a cause of the First Ionization Potential (FIP) effect, the now well-known enhancement in abundance, by a factor of 3-4 over photospheric values, of elements in the solar corona with FIP less than about 10 eV. It is shown here by means of numerical simulations that ponderomotive acceleration occurs in solar coronal loops, with the appropriate magnitude and direction, as a ``byproduct'' of coronal heating. The numerical simulations are performed with the HYPERION code, which solves the fully compressible three-dimensional magnetohydrodynamic equations including nonlinear thermal conduction and optically thin radiation. Numerical simulations of coronal loops with axial magnetic fields from 0.005 to 0.02 teslas and lengths from 25000 km to 75000 km are presented. In the simulations the footpoints of the axial loop magnetic field are convected by random, large-scale motions. There is a continuous formation and dissipation of field-aligned current sheets which act to heat the loop. As a consequence of coronal magnetic reconnection, small-scale, high-speed jets form. The familiar vortex quadrupoles form at reconnection sites. Between the magnetic footpoints and the corona, the reconnection flow merges with the boundary flow. It is in this region that the ponderomotive acceleration occurs. Mirroring the character of the coronal reconnection, the ponderomotive acceleration is also found to be intermittent.

  18. Collaborative Research: Simulation of Beam-Electron Cloud Interactions in Circular Accelerators Using Plasma Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsouleas, Thomas; Decyk, Viktor

    Final Report for grant DE-FG02-06ER54888, "Simulation of Beam-Electron Cloud Interactions in Circular Accelerators Using Plasma Models", Viktor K. Decyk, University of California, Los Angeles, Los Angeles, CA 90095-1547. The primary goal of this collaborative proposal was to modify the code QuickPIC and apply it to study the long-time stability of beam propagation in the low-density electron clouds present in circular accelerators. The UCLA contribution to this collaborative proposal was in supporting the development of the pipelining scheme for the QuickPIC code, which extended the parallel scaling of this code by two orders of magnitude. The USC work was the PhD research of Ms. Bing Feng, lead author of reference 2 below, who performed the research at USC under the guidance of the PI Tom Katsouleas and in collaboration with Dr. Decyk. The QuickPIC code [1] is a multi-scale Particle-in-Cell (PIC) code. The outer 3D code contains a beam which propagates through a long region of plasma and evolves slowly. The plasma response to this beam is modeled by slices of a 2D plasma code. This plasma response is then fed back to the beam code, and the process repeats. The pipelining is based on the observation that once the beam has passed a 2D slice, the slice's response can be fed back to the beam immediately, without waiting for the beam to pass all the other slices. Thus independent blocks of 2D slices from different time steps can be running simultaneously. The major difficulty arose when particles at the edges needed to communicate with other blocks. Two versions of the pipelining scheme were developed, one for the full quasi-static code and the other for the basic quasi-static code used by this e-cloud proposal. Details of the pipelining scheme were published in [2]. The new version of QuickPIC was able to run with more than 1,000 processors, and was successfully applied in modeling e-clouds by our collaborators in this proposal [3-8]. Jean-Luc Vay at Lawrence
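    The payoff of the pipelining observation can be shown with a toy schedule model. This is an assumption-laden sketch, not derived from the QuickPIC source: slice s of beam step t may start once slice s-1 of the same step and slice s of the previous step have both finished, so slices from different time steps overlap in a wavefront instead of running strictly in sequence.

```python
def pipelined_makespan(n_steps, n_slices, slice_cost=1.0):
    """Finish time of the last slice under wavefront (pipelined) scheduling."""
    done = [[0.0] * n_slices for _ in range(n_steps)]
    for t in range(n_steps):
        for s in range(n_slices):
            ready = max(done[t][s - 1] if s else 0.0,   # previous slice, same step
                        done[t - 1][s] if t else 0.0)   # same slice, previous step
            done[t][s] = ready + slice_cost
    return done[-1][-1]

serial = 4 * 8 * 1.0                  # 4 steps x 8 slices, one after another
pipelined = pipelined_makespan(4, 8)  # wavefront: n_slices + n_steps - 1 units
```

With unit slice cost the pipelined makespan is n_slices + n_steps - 1 rather than n_slices * n_steps, which is why the scheme extends parallel scaling so dramatically (edge-particle communication aside).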

  19. Generation of low-emittance electron beams in electrostatic accelerators for FEL applications

    NASA Astrophysics Data System (ADS)

    Chen, Teng; Elias, Luis R.

    1995-02-01

    This paper reports results of transverse emittance studies and beam propagation in electrostatic accelerators for free electron laser applications. In particular, we discuss an emittance growth analysis of a low-current electron beam system consisting of a miniature thermionic electron gun and a National Electrostatics Corporation (NEC) accelerator tube. The emittance growth phenomenon is discussed in terms of thermal effects in the electron gun cathode and aberrations produced by field gradient changes occurring inside the electron gun and throughout the accelerator tube. A method of reducing aberrations using a magnetic solenoidal field is described. Analysis of electron beam emittance was done with the EGUN code. Beam propagation along the accelerator tube was studied using a cylindrically symmetric beam envelope equation that included beam self-fields and the external accelerator fields, which were derived from POISSON simulations.

  20. Particle Acceleration, Magnetic Field Generation in Relativistic Shocks

    NASA Technical Reports Server (NTRS)

    Nishikawa, Ken-Ichi; Hardee, P.; Hededal, C. B.; Richardson, G.; Sol, H.; Preece, R.; Fishman, G. J.

    2005-01-01

    Shock acceleration is a ubiquitous phenomenon in astrophysical plasmas. Plasma waves and their associated instabilities (e.g., the Buneman instability, the two-stream instability, and the Weibel instability) created in the shocks are responsible for particle (electron, positron, and ion) acceleration. Using a 3-D relativistic electromagnetic particle (REMP) code, we have investigated particle acceleration associated with a relativistic jet front propagating through an ambient plasma with and without initial magnetic fields. We find only small differences in the results between no ambient and weak ambient parallel magnetic fields. Simulations show that the Weibel instability created in the collisionless shock front accelerates particles perpendicular and parallel to the jet propagation direction. New simulations with an ambient perpendicular magnetic field show strong interaction between the relativistic jet and the magnetic fields. The magnetic fields are piled up by the jet and the jet electrons are bent, which creates currents and displacement currents. At the nonlinear stage, the magnetic fields are reversed by the current and reconnection may take place. Due to these dynamics, the jet and ambient electrons are strongly accelerated in both the parallel and perpendicular directions.

  1. Noninvasive acceleration measurements to characterize knee arthritis and chondromalacia.

    PubMed

    Reddy, N P; Rothschild, B M; Mandal, M; Gupta, V; Suryanarayanan, S

    1995-01-01

    Devising techniques and instrumentation for early detection of knee arthritis and chondromalacia presents a challenge in the domain of biomedical engineering. The purpose of the present investigation was to characterize normal knees and knees affected by osteoarthritis, rheumatoid arthritis, and chondromalacia using a set of noninvasive acceleration measurements. Ultraminiature accelerometers were placed on the skin over the patella in four groups of subjects, and acceleration measurements were obtained during leg rotation. Acceleration measurements were significantly different in the four groups of subjects in the time and frequency domains. Power spectral analysis revealed that the average power was significantly different for these groups over a 100-500 Hz range. Noninvasive acceleration measurements can characterize the normal, arthritis, and chondromalacia knees. However, a study on a larger group of subjects is indicated.
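    The study's key measurement, average spectral power in the 100-500 Hz band of an accelerometer trace, can be reproduced on synthetic data. This is an illustrative sketch, not the study's exact pipeline; the sampling rate and test signal are assumptions, and a plain FFT periodogram stands in for whatever spectral estimator the authors used.

```python
import numpy as np

np.random.seed(42)
fs = 2000.0                                 # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
# synthetic vibroarthrographic signal: a 250 Hz component buried in noise
signal = np.sin(2 * np.pi * 250 * t) + 0.1 * np.random.randn(t.size)

# one-sided periodogram (power spectral density estimate)
spectrum = np.abs(np.fft.rfft(signal)) ** 2 / (fs * t.size)
freqs = np.fft.rfftfreq(t.size, 1 / fs)

band = (freqs >= 100) & (freqs <= 500)      # the 100-500 Hz band from the abstract
band_power = spectrum[band].mean()
out_power = spectrum[~band].mean()
```

Comparing such band-averaged powers across subject groups is the kind of frequency-domain discrimination the abstract reports.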

  2. Potential loss of revenue due to errors in clinical coding during the implementation of the Malaysia diagnosis related group (MY-DRG®) Casemix system in a teaching hospital in Malaysia.

    PubMed

    Zafirah, S A; Nur, Amrizal Muhammad; Puteh, Sharifa Ezat Wan; Aljunid, Syed Mohamed

    2018-01-25

    The accuracy of clinical coding is crucial in the assignment of Diagnosis Related Group (DRG) codes, especially if the hospital is using a casemix system as a tool for resource allocation and efficiency monitoring. The aim of this study was to estimate the potential loss of income due to errors in clinical coding during the implementation of the Malaysia Diagnosis Related Group (MY-DRG®) Casemix System in a teaching hospital in Malaysia. Four hundred and sixty-four (464) coded medical records were selected, re-examined and re-coded by an independent senior coder (ISC), who re-examined and re-coded the codes originally entered by the hospital coders. The pre- and post-coding results were compared and, where there was any disagreement, the codes by the ISC were considered the accurate ones. The cases were then re-grouped using a MY-DRG® grouper to assess and compare the changes in the DRG and hospital tariff assignments, and the outcomes were verified by a casemix expert. Coding errors were found in 89.4% (415/464) of the selected patient medical records. Coding errors in secondary diagnoses were the most frequent, at 81.3% (377/464), followed by secondary procedures at 58.2% (270/464), principal procedures at 50.9% (236/464) and primary diagnoses at 49.8% (231/464). The coding errors resulted in the assignment of different MY-DRG® codes in 74.0% (307/415) of the cases, and in 52.1% (160/307) of these cases the assigned hospital tariff was lower. In total, the potential loss of income due to changes in the assignment of MY-DRG® codes was RM654,303.91. The quality of coding is a crucial aspect of implementing casemix systems. Intensive re-training and close monitoring of coder performance in the hospital should be performed to prevent potential loss of hospital income.
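    The revenue-impact calculation the study performs reduces to regrouping each record under the corrected codes and summing the tariff shortfall where the hospital under-coded. This is a minimal sketch with hypothetical DRG codes and tariffs, not MY-DRG® data.

```python
# hypothetical tariff table (RM per case)
tariff = {"DRG-A": 5000.0, "DRG-B": 3200.0, "DRG-C": 1800.0}

# each record: DRG as originally coded vs DRG after independent re-coding
records = [
    {"coded": "DRG-B", "recoded": "DRG-A"},  # under-coded: lost income
    {"coded": "DRG-A", "recoded": "DRG-A"},  # unchanged
    {"coded": "DRG-A", "recoded": "DRG-C"},  # over-coded: no loss counted
]

changed = [r for r in records if r["coded"] != r["recoded"]]
# potential loss: only cases whose corrected tariff exceeds the billed one
loss = sum(max(tariff[r["recoded"]] - tariff[r["coded"]], 0.0) for r in changed)
```

Scaled up to 464 records, this is the arithmetic behind the RM654,303.91 figure in the abstract.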

  3. Orientation to Language Code and Actions in Group Work

    ERIC Educational Resources Information Center

    Aline, David; Hosoda, Yuri

    2009-01-01

    This conversation analytic study reveals how learners themselves, as speakers and listeners, demonstrate their own orientation to language code and actions on a moment by moment basis during collaborative tasks in English as a foreign language classrooms. The excerpts presented in this article were drawn from 23 hours of audio- and video-recorded…

  4. Symplectic orbit and spin tracking code for all-electric storage rings

    NASA Astrophysics Data System (ADS)

    Talman, Richard M.; Talman, John D.

    2015-07-01

    Proposed methods for measuring the electric dipole moment (EDM) of the proton use an intense, polarized proton beam stored in an all-electric storage ring "trap." At the "magic" kinetic energy of 232.792 MeV, proton spins are "frozen," that is, always parallel to the instantaneous particle momentum. Energy deviation from the magic value causes in-plane precession of the spin relative to the momentum. Any nonzero EDM value will cause out-of-plane precession—measuring this precession is the basis for the EDM determination. A proposed implementation of this measurement shows that a proton EDM value of 10^-29 e·cm or greater will produce a statistically significant, measurable precession after multiply repeated runs, assuming small beam depolarization during 1000 s runs, with high enough precision to test models of the early universe developed to account for the present-day particle/antiparticle population imbalance. This paper describes an accelerator simulation code, eteapot, a new component of the Unified Accelerator Libraries (ual), to be used for long-term tracking of particle orbits and spins in electric bend accelerators, in order to simulate EDM storage ring experiments. Though qualitatively much like magnetic rings, the nonconstant particle velocity in electric rings gives them significantly different properties, especially in weak focusing rings. Like the earlier code teapot (for magnetic ring simulation), this code performs exact tracking in an idealized (approximate) lattice rather than the more conventional approach, which is approximate tracking in a more nearly exact lattice. The Bargmann-Michel-Telegdi (BMT) equation describing the evolution of spin vectors through idealized bend elements is also solved exactly—original to this paper. Furthermore, the idealization permits the code to be exactly symplectic (with no artificial "symplectification"). Any residual spurious damping or antidamping is sufficiently small to permit reliable tracking for the
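    Why symplecticity matters for million-turn tracking can be shown on a generic toy problem (this is not the eteapot algorithm, just an illustration of the underlying point): a symplectic leapfrog integrator keeps a harmonic oscillator's energy bounded over very many steps, while non-symplectic forward Euler exhibits exactly the spurious antidamping the abstract warns about.

```python
def leapfrog(x, p, dt, n):
    """Symplectic kick-drift-kick integration of x'' = -x."""
    for _ in range(n):
        p -= 0.5 * dt * x   # half kick
        x += dt * p         # drift
        p -= 0.5 * dt * x   # half kick
    return x, p

def euler(x, p, dt, n):
    """Non-symplectic forward Euler: energy grows by (1 + dt^2) per step."""
    for _ in range(n):
        x, p = x + dt * p, p - dt * x
    return x, p

E0 = 0.5                                   # energy of x=1, p=0
x1, p1 = leapfrog(1.0, 0.0, 0.01, 100_000)
x2, p2 = euler(1.0, 0.0, 0.01, 100_000)
E_leap = 0.5 * (x1 ** 2 + p1 ** 2)
E_euler = 0.5 * (x2 ** 2 + p2 ** 2)
```

Over 10^5 steps the leapfrog energy error stays at the 10^-5 level, while the Euler "beam" has blown up by orders of magnitude, which is why long-term EDM-ring tracking demands an exactly symplectic scheme.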

  5. Extrapolating Accelerated UV Weathering Data: Perspective From PVQAT Task Group 5 (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, D.; Annigoni, E.; Ballion, A.

    2015-02-01

    Task Group 5 (TG5) is concerned with an accelerated aging standard incorporating factors including ultraviolet radiation, temperature, and moisture. Separate experiments are being conducted in support of a test standard via the regional sub-groups in Asia, Europe, and the United States. The authors describe the objectives and timeline for the TG5 interlaboratory study being directed out of the USA. Qualitative preliminary data from the experiment are presented. To date, the encapsulation transmittance experiment has: replicated behaviors of fielded materials (including specimen location- and formulation additive-specific discoloration); demonstrated coupling between UV aging and temperature; demonstrated that degradation in EVA results from UV aging; and obtained good qualitative comparison between Xe and UVA-340 sources for EVA. To date, the encapsulation adhesion experiment (using the compressive shear test to quantify strength of attachment) has demonstrated that attachment strength can decrease drastically (>50%) with age; however, early results suggest significant factor (UV, T, RH) dependence. Much remains to be learned about adhesion.

  6. Social-emotional characteristics of gifted accelerated and non-accelerated students in the Netherlands.

    PubMed

    Hoogeveen, Lianne; van Hell, Janet G; Verhoeven, Ludo

    2012-12-01

    In the studies of acceleration conducted so far, a multidimensional perspective has largely been neglected: no attempt has been made to relate the social-emotional characteristics of accelerated versus non-accelerated students to environmental factors. In this study, the social-emotional characteristics of accelerated gifted students in the Netherlands were examined in relation to personal and environmental factors. Self-concept and social contacts of accelerated (n = 148) and non-accelerated (n = 55) gifted students, aged 4 to 27 (M = 11.22, SD = 4.27), were measured using a questionnaire and a diary, and the parents of these students evaluated their behavioural characteristics. Gender and birth order were studied as personal factors, and grade, classroom, teachers' gender, teaching experience, and the quality of parent-school contact as environmental factors. The results showed minimal differences in the social-emotional characteristics of accelerated and non-accelerated gifted students. The few differences we found favoured the accelerated students. We also found that multiple grade skipping does not have negative effects on social-emotional characteristics, and that long-term effects of acceleration tend to be positive. As regards the possible modulation of personal and environmental factors, we merely found an impact of such factors in the non-accelerated group. The results of this study strongly suggest that the social-emotional characteristics of accelerated and non-accelerated gifted students are largely similar. These results thus do not support worries expressed by teachers about the acceleration of gifted students. Our findings parallel the outcomes of earlier studies in the United States and Germany in that we observed that acceleration does not harm gifted students, not even in the case of multiple grade skipping.
On the contrary, there is a

  7. [Assessment of Coding in German Diagnosis Related Groups System in Otorhinolaryngology].

    PubMed

    Ellies, Maik; Anders, Berit; Seger, Wolfgang

    2018-05-14

    Prospective analysis of assessment reports in otorhinolaryngology for the period 01-03-2011 to 31-03-2017 by the Health Advisory Boards in Lower Saxony and Bremen, Germany in relation to coding in the G-DRG-System. The assessment reports were documented using a standardized database system developed on the basis of the electronic data exchange (DTA) by the Health Advisory Board in Lower Saxony. In addition, the documentation of the assessment reports according to the G-DRG system was used for assessment. Furthermore, the assessment of a case was evaluated once again on the basis of the present assessment documents and presented as an example in detail. During the period from 01-03-2011 to 31-03-2017, a total of 27,424 cases of inpatient assessments of DRGs according to the G-DRG system were collected in the field of otorhinolaryngology. In 7,259 cases, the DRG was changed, and in 20,175 cases, the suspicion of a DRG-relevant coding error was not justified in the review; thus, a DRG change rate of 26% of the assessments was identified over the time period investigated. There were different kinds of coding errors. In order to improve the coding quality in otorhinolaryngology, in addition to the special consideration of the presented "hit list" by the otorhinolaryngology departments, there should be more intensive cooperation between hospitals and the Health Advisory Boards of the federal states. © Georg Thieme Verlag KG Stuttgart · New York.

  8. Ion beam accelerator system

    NASA Technical Reports Server (NTRS)

    Aston, G. (Inventor)

    1981-01-01

    A system is described that combines geometrical and electrostatic focusing to provide high ion extraction efficiency and good focusing of an accelerated ion beam. The apparatus includes a pair of curved extraction grids with multiple pairs of aligned holes positioned to direct a group of beamlets along converging paths. The extraction grids are closely spaced and maintained at a moderate potential to efficiently extract beamlets of ions and allow them to combine into a single beam. An accelerator electrode device downstream from the extraction grids is at a much lower potential than the grids to accelerate the combined beam. The application of the system to ion implantation is mentioned.

  9. Ion beam accelerator system

    NASA Technical Reports Server (NTRS)

    Aston, Graeme (Inventor)

    1984-01-01

    A system is described that combines geometrical and electrostatic focusing to provide high ion extraction efficiency and good focusing of an accelerated ion beam. The apparatus includes a pair of curved extraction grids (16, 18) with multiple pairs of aligned holes positioned to direct a group of beamlets (20) along converging paths. The extraction grids are closely spaced and maintained at a moderate potential to efficiently extract beamlets of ions and allow them to combine into a single beam (14). An accelerator electrode device (22) downstream from the extraction grids, is at a much lower potential than the grids to accelerate the combined beam.

  10. Product information representation for feature conversion and implementation of group technology automated coding

    NASA Astrophysics Data System (ADS)

    Medland, A. J.; Zhu, Guowang; Gao, Jian; Sun, Jian

    1996-03-01

    Feature conversion, also called feature transformation and feature mapping, is defined as the process of converting features from one view of an object to another view of the object. In a relatively simple implementation, for each application the design features are automatically converted into features specific for that application. All modifications have to be made via the design features. This is the approach that has attracted most attention until now. In the ideal situation, however, conversions directly from application views to the design view, and to other applications views, are also possible. In this paper, some difficulties faced in feature conversion are discussed. A new representation scheme of feature-based parts models has been proposed for the purpose of one-way feature conversion. The parts models consist of five different levels of abstraction, extending from an assembly level and its attributes, single parts and their attributes, single features and their attributes, one containing the geometric reference element and finally one for detailed geometry. One implementation of feature conversion for rotational components within GT (Group Technology) has already been undertaken using an automated coding procedure operating on a design-feature database. This database has been generated by a feature-based design system, and the GT coding scheme used in this paper is a specific scheme created for a textile machine manufacturing plant. Such feature conversion techniques presented here are only in their early stages of development and further research is underway.

  11. Study on radiation production in the charge stripping section of the RISP linear accelerator

    NASA Astrophysics Data System (ADS)

    Oh, Joo-Hee; Oranj, Leila Mokhtari; Lee, Hee-Seock; Ko, Seung-Kook

    2015-02-01

    The linear accelerator of the Rare Isotope Science Project (RISP) accelerates 200 MeV/nucleon 238U ions in multiple charge states. Many kinds of radiation are generated while the primary beam is transported along the beam line. The stripping process, using a thin carbon foil, leads to complicated radiation environments at the 90-degree bending section. The charge distribution of 238U ions after the carbon charge stripper was calculated using the LISE++ program. The estimates of the radiation environments were carried out using the well-proven Monte Carlo codes PHITS and FLUKA. The tracks of 238U ions in various charge states were identified using the magnetic field subroutine of the PHITS code. The dose distribution caused by U beam losses along those tracks was obtained over the accelerator tunnel. A modified calculation was applied for tracking the multi-charge-state U beams, because PHITS and FLUKA are fundamentally designed to transport fully ionized ion beams. In this study, the beam loss pattern after the stripping section was observed, and the radiation production by heavy ions was studied. Finally, the performance of the PHITS and FLUKA codes for estimating the radiation production at the stripping section was validated by applying the modified method.
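    The reason multi-charge-state transport needs special treatment can be made concrete with a back-of-envelope rigidity calculation (assumed constants, illustrative only): each charge state q of the same 238U beam has a different magnetic rigidity B·rho = p/q, so each state follows a different track through the 90-degree bend.

```python
amu_c2 = 931.494          # rest energy per nucleon, MeV/u
A, T_per_u = 238, 200.0   # mass number; kinetic energy per nucleon, MeV/u

# relativistic momentum: per nucleon from E^2 = (T + m)^2 = p^2 + m^2, then total (MeV/c)
p_per_u = ((T_per_u + amu_c2) ** 2 - amu_c2 ** 2) ** 0.5
p_total = A * p_per_u

def rigidity(q):
    """Magnetic rigidity in T*m: B*rho = p/q, with p in MeV/c (divide by 299.792458)."""
    return p_total / (q * 299.792458)

# a few charge states surviving the carbon stripper (illustrative values)
rigidities = {q: rigidity(q) for q in (77, 78, 79, 80)}
```

Since lower charge states are stiffer (larger B·rho), they bend less in the same dipole field, which is exactly why the tracks of the different states had to be followed separately with the PHITS field subroutine.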

  12. Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics

    NASA Astrophysics Data System (ADS)

    Junghans, Christoph; Mniszewski, Susan; Voter, Arthur; Perez, Danny; Eidenbenz, Stephan

    2014-03-01

    We present an example of a new class of tools that we call application simulators, parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation (PDES). We demonstrate our approach with a TADSim application simulator that models the Temperature Accelerated Dynamics (TAD) method, which is an algorithmically complex member of the Accelerated Molecular Dynamics (AMD) family. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We further extend TADSim to model algorithm extensions to standard TAD, such as speculative spawning of the compute-bound stages of the algorithm, and predict performance improvements without having to implement such a method. Focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights into the TAD algorithm behavior and suggested extensions to the TAD method.
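    The core of an application simulator is to replace each expensive algorithm stage with a timed event and advance a virtual clock instead of doing the real computation. The sketch below is a generic, assumed miniature (a single-resource event queue, not TADSim or a full parallel discrete event engine), with stage names and costs that are purely illustrative.

```python
import heapq

def simulate(events):
    """events: list of (release_time, duration, name); run them in event order
    on one resource and return the makespan plus a finish-time log."""
    queue = list(events)
    heapq.heapify(queue)                   # order by release time
    clock, log = 0.0, []
    while queue:
        t, d, name = heapq.heappop(queue)
        clock = max(clock, t) + d          # wait for release, then execute
        log.append((name, clock))
    return clock, log

# TAD-like stages with assumed costs: an MD block, a transition check,
# and a saddle-point search released after the MD block would finish
makespan, log = simulate([(0.0, 5.0, "md_block"),
                          (0.0, 1.0, "detect_transition"),
                          (6.0, 2.0, "nudged_elastic_band")])
```

Sweeping the stage durations and release times in such a proxy is how parameter scans over "far more scenarios than would be possible with the actual simulation" become cheap.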

  13. Multilevel acceleration of scattering-source iterations with application to electron transport

    DOE PAGES

    Drumm, Clif; Fan, Wesley

    2017-08-18

    Acceleration/preconditioning strategies available in the SCEPTRE radiation transport code are described. A flexible transport synthetic acceleration (TSA) algorithm that uses a low-order discrete-ordinates (S_N) or spherical-harmonics (P_N) solve to accelerate convergence of a high-order S_N source-iteration (SI) solve is described. Convergence of the low-order solves can be further accelerated by applying off-the-shelf incomplete-factorization or algebraic-multigrid methods. Also available is an algorithm that uses a generalized minimum residual (GMRES) iterative method rather than SI for convergence, using a parallel sweep-based solver to build up a Krylov subspace. TSA has been applied as a preconditioner to accelerate the convergence of the GMRES iterations. The methods are applied to several problems involving electron transport and problems with artificial cross sections with large scattering ratios. These methods were compared and evaluated by considering material discontinuities and scattering anisotropy. The observed accelerations are highly problem-dependent, but speedup factors around 10 have been observed in typical applications.
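    Why unaccelerated source iteration struggles exactly when the scattering ratio is large can be seen in the simplest possible model. This is an assumed infinite-medium, one-group caricature, not SCEPTRE: each sweep contracts the error by the scattering ratio c, so c near 1 forces thousands of sweeps and motivates TSA/GMRES-style acceleration.

```python
def source_iterations(c, q=1.0, tol=1e-8, max_it=100_000):
    """Iterate phi <- c*phi + q (infinite-medium SI analogue); count sweeps."""
    phi_exact = q / (1.0 - c)     # fixed point of the iteration
    phi, n = 0.0, 0
    while abs(phi - phi_exact) > tol * phi_exact and n < max_it:
        phi = c * phi + q         # one "transport sweep" plus scattering update
        n += 1
    return n

n_easy = source_iterations(0.5)   # low scattering ratio: converges fast
n_hard = source_iterations(0.99)  # high scattering ratio: painfully slow
```

The error after n sweeps is c^n, so the iteration count scales like 1/(1 - c); acceleration schemes such as TSA attack precisely this near-unity-c regime.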

  14. Comparison of accelerated and conventional corneal collagen cross-linking for progressive keratoconus.

    PubMed

    Cınar, Yasin; Cingü, Abdullah Kürşat; Türkcü, Fatih Mehmet; Çınar, Tuba; Yüksel, Harun; Özkurt, Zeynep Gürsel; Çaça, Ihsan

    2014-09-01

    To compare outcomes of accelerated and conventional corneal cross-linking (CXL) for progressive keratoconus (KC), patients were divided into two groups, an accelerated CXL group and a conventional CXL group. The uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), refraction and keratometric values were measured preoperatively and postoperatively, and the data of the two groups were compared statistically. The mean UDVA and CDVA at six months postoperative were better than the preoperative values in both groups. While the change in UDVA and CDVA was statistically significant in the accelerated CXL group (p = 0.035 and p = 0.047, respectively), it did not reach statistical significance in the conventional CXL group (p = 0.184 and p = 0.113, respectively). The decreases in the mean corneal power (Km) and maximum keratometric value (Kmax) were statistically significant in both groups (p = 0.012 and 0.046, respectively, in the accelerated CXL group; p = 0.012 and 0.041, respectively, in the conventional CXL group). There was no statistically significant difference in visual and refractive results between the two groups (p > 0.05). The refractive and visual results of the accelerated and conventional CXL methods for the treatment of KC over this short follow-up period were similar; the accelerated CXL method is faster and provides higher patient throughput.

  15. New estimation method of neutron skyshine for a high-energy particle accelerator

    NASA Astrophysics Data System (ADS)

    Oh, Joo-Hee; Jung, Nam-Suk; Lee, Hee-Seock; Ko, Seung-Kook

    2016-09-01

    Skyshine is the dominant component of the prompt radiation at off-site locations. Several experimental studies have been done to estimate the neutron skyshine at a few accelerator facilities. In this work, the neutron transport from the source to off-site locations was simulated using the Monte Carlo codes FLUKA and PHITS. The transport paths were classified as skyshine, direct (transport), groundshine and multiple-shine to understand the contribution of each path and to develop a general evaluation method. The effect of each path was estimated in view of the dose at far locations. The neutron dose was calculated using the neutron energy spectra obtained from each detector, placed up to a maximum of 1 km from the accelerator. The highest altitude of the sky region in this simulation was set at 2 km from the floor of the accelerator facility. The initial model of this study was the 10 GeV electron accelerator PAL-XFEL. Different compositions and densities of air, soil and ordinary concrete were applied in this calculation, and their dependences were reviewed. The estimation method used in this study was compared with the well-known methods suggested by Rindi, Stevenson and Stapleton, and also with the simple code SHINE3. The results obtained using this method agreed well with those using Rindi's formula.

  16. Applications of High Intensity Proton Accelerators

    NASA Astrophysics Data System (ADS)

    Raja, Rajendran; Mishra, Shekhar

    2010-06-01

    Superconducting radiofrequency linac development at Fermilab / S. D. Holmes -- Rare muon decay experiments / Y. Kuno -- Rare kaon decays / D. Bryman -- Muon collider / R. B. Palmer -- Neutrino factories / S. Geer -- ADS and its potential / J.-P. Revol -- ADS history in the USA / R. L. Sheffield and E. J. Pitcher -- Accelerator driven transmutation of waste: high power accelerator for the European ADS demonstrator / J. L. Biarrotte and T. Junquera -- Myrrha, technology development for the realisation of ADS in EU: current status & prospects for realisation / R. Fernandez ... [et al.] -- High intensity proton beam production with cyclotrons / J. Grillenberger and M. Seidel -- FFAG for high intensity proton accelerator / Y. Mori -- Kaon yields for 2 to 8 GeV proton beams / K. K. Gudima, N. V. Mokhov and S. I. Striganov -- Pion yield studies for proton driver beams of 2-8 GeV kinetic energy for stopped muon and low-energy muon decay experiments / S. I. Striganov -- J-Parc accelerator status and future plans / H. Kobayashi -- Simulation and verification of DPA in materials / N. V. Mokhov, I. L. Rakhno and S. I. Striganov -- Performance and operational experience of the CNGS facility / E. Gschwendtner -- Particle physics enabled with super-conducting RF technology - summary of working group 1 / D. Jaffe and R. Tschirhart -- Proton beam requirements for a neutrino factory and muon collider / M. S. Zisman -- Proton bunching options / R. B. Palmer -- CW SRF H linac as a proton driver for muon colliders and neutrino factories / M. Popovic, C. M. Ankenbrandt and R. P. Johnson -- Rapid cycling synchrotron option for Project X / W. Chou -- Linac-based proton driver for a neutrino factory / R. Garoby ... [et al.] -- Pion production for neutrino factories and muon colliders / N. V. Mokhov ... [et al.] -- Proton bunch compression strategies / V. Lebedev -- Accelerator test facility for muon collider and neutrino factory R&D / V. Shiltsev -- The superconducting RF linac for muon

  17. Analysis of secondary particle behavior in multiaperture, multigrid accelerator for the ITER neutral beam injector.

    PubMed

    Mizuno, T; Taniguchi, M; Kashiwagi, M; Umeda, N; Tobari, H; Watanabe, K; Dairaku, M; Sakamoto, K; Inoue, T

    2010-02-01

Heat load on acceleration grids by secondary particles such as electrons, neutrals, and positive ions is a key issue for long-pulse acceleration of negative ion beams. The complicated behavior of the secondary particles in a multiaperture, multigrid (MAMuG) accelerator has been analyzed using an electrostatic accelerator Monte Carlo code. The analytical result is compared to the experimental one obtained in a long-pulse operation of a MeV accelerator, from which the second acceleration grid (A2G) was removed to simplify the structure. The analytical results show a relatively high heat load on the third acceleration grid (A3G), since stripped electrons are deposited mainly on A3G. This heat load on the A3G can be suppressed by installing the A2G. Thus, the capability of the MAMuG accelerator to suppress the heat load due to secondary particles with its intermediate grids is demonstrated.

  18. Symplectic orbit and spin tracking code for all-electric storage rings

    DOE PAGES

    Talman, Richard M.; Talman, John D.

    2015-07-22

Proposed methods for measuring the electric dipole moment (EDM) of the proton use an intense, polarized proton beam stored in an all-electric storage ring “trap.” At the “magic” kinetic energy of 232.792 MeV, proton spins are “frozen,” i.e., always parallel to the instantaneous particle momentum. Energy deviation from the magic value causes in-plane precession of the spin relative to the momentum. Any nonzero EDM value will cause out-of-plane precession; measuring this precession is the basis for the EDM determination. A proposed implementation of this measurement shows that a proton EDM value of 10^-29 e·cm or greater will produce a statistically significant, measurable precession after repeated runs, assuming small beam depolarization during 1000 s runs, with high enough precision to test models of the early universe developed to account for the present-day particle/antiparticle population imbalance. This paper describes an accelerator simulation code, eteapot, a new component of the Unified Accelerator Libraries (ual), to be used for long-term tracking of particle orbits and spins in electric bend accelerators, in order to simulate EDM storage ring experiments. Though qualitatively much like magnetic rings, the nonconstant particle velocity in electric rings gives them significantly different properties, especially in weak focusing rings. Like the earlier code teapot (for magnetic ring simulation), this code performs exact tracking in an idealized (approximate) lattice rather than the more conventional approach of approximate tracking in a more nearly exact lattice. The Bargmann-Michel-Telegdi (BMT) equation describing the evolution of spin vectors through idealized bend elements is also solved exactly, a result original to this paper. Furthermore, the idealization permits the code to be exactly symplectic (with no artificial “symplectification”). Any residual spurious damping or antidamping is sufficiently small to permit
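The "magic" kinetic energy quoted in the abstract follows from the frozen-spin condition for an all-electric ring, G = 1/(γ² − 1), where G is the proton's anomalous magnetic moment. A short numerical check (the constants below are standard values, not taken from the record):

```python
import math

# Frozen-spin ("magic") condition in an all-electric storage ring:
# spin stays parallel to momentum when G = 1 / (gamma^2 - 1).
G = 1.792847      # proton anomalous magnetic moment (standard value)
M_P = 938.27209   # proton rest energy, MeV (standard value)

gamma_magic = math.sqrt(1.0 + 1.0 / G)
kinetic_energy = (gamma_magic - 1.0) * M_P  # MeV

print(f"magic kinetic energy = {kinetic_energy:.3f} MeV")  # ~232.79 MeV
```

This reproduces the 232.792 MeV figure to within the rounding of the input constants.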

  19. Error Correcting Codes and Related Designs

    DTIC Science & Technology

    1990-09-30

Theory, IT-37 (1991), 1222-1224. 6. Codes and designs, existence and uniqueness, Discrete Math., to appear. 7. (with R. Brualdi and N. Cai), Orphan...structure of the first order Reed-Muller codes, Discrete Math., to appear. 8. (with J. H. Conway and N. J. A. Sloane), The binary self-dual codes of length up...18, 1988. 4. "Codes and Designs," Mathematics Colloquium, Technion, Haifa, Israel, March 6, 1989. 5. "On the Covering Radius of Codes," Discrete Math. Group

  20. Honoring Native American Code Talkers: The Road to the Code Talkers Recognition Act of 2008 (Public Law 110-420)

    ERIC Educational Resources Information Center

    Meadows, William C.

    2011-01-01

    Interest in North American Indian code talkers continues to increase. In addition to numerous works about the Navajo code talkers, several publications on other groups of Native American code talkers--including the Choctaw, Comanche, Hopi, Meskwaki, Canadian Cree--and about code talkers in general have appeared. This article chronicles recent…

1. Comparison of a 3D multi‐group SN particle transport code with Monte Carlo for intracavitary brachytherapy of the cervix uteri

    PubMed Central

    Wareing, Todd A.; Failla, Gregory; Horton, John L.; Eifel, Patricia J.; Mourtada, Firas

    2009-01-01

A patient dose distribution was calculated by a 3D multi‐group SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs‐137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multi‐group SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks-exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within ±3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by no more than ±1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs‐137 CT‐based patient geometry. Our data showed that a three‐group cross‐section set is adequate for Cs‐137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations. PACS number: 87.53.Jw

  2. Comparison of a 3-D multi-group SN particle transport code with Monte Carlo for intracavitary brachytherapy of the cervix uteri.

    PubMed

    Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas

    2009-12-03

A patient dose distribution was calculated by a 3D multi-group SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multi-group SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks-exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within ±3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by no more than ±1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations.
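The runtimes quoted above imply roughly a 40-fold speedup of Attila over MCNPX for this case; a quick arithmetic check using only the figures given in the record:

```python
# Runtimes quoted in the record, on the same 1.8 GHz AMD Opteron CPU.
mcnpx_minutes = 14.8 * 60.0   # 14.8 hours
attila_minutes = 22.2

speedup = mcnpx_minutes / attila_minutes
print(f"Attila speedup over MCNPX: {speedup:.1f}x")  # 40.0x
```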

  3. What One Hundred Years of Research Says about the Effects of Ability Grouping and Acceleration on K-12 Students' Academic Achievement: Findings of Two Second-Order Meta-Analyses

    ERIC Educational Resources Information Center

    Steenbergen-Hu, Saiying; Makel, Matthew C.; Olszewski-Kubilius, Paula

    2016-01-01

Two second-order meta-analyses synthesized approximately 100 years of research on the effects of ability grouping and acceleration on K-12 students' academic achievement. Outcomes of 13 ability grouping meta-analyses showed that students benefited from within-class grouping (0.19 ≤ g ≤ 0.30), cross-grade subject grouping (g = 0.26), and special…

  4. GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid

    NASA Astrophysics Data System (ADS)

    Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua

    2016-10-01

A GPU accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, cell-based adaptive mesh refinement (AMR) is fully implemented on the GPU for an unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize list operations, and null memory recycling is realized to improve the efficiency of memory utilization. It is found that results obtained by GPUs agree very well with the exact or experimental results in the literature. An acceleration ratio of 4 is obtained between the parallel code running on the old GPU GT9800 and the serial code running on an E3-1230 V2. With the optimization of configuring a larger L1 cache and adopting shared-memory-based atomic operations on the newer GPU C2050, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR processes have achieved a 2x speedup on the GT9800 and 18x on the Tesla C2050, which demonstrates that parallel running of the cell-based AMR method on the GPU is feasible and efficient. Our results also indicate that new developments in GPU architecture benefit fluid dynamics computing significantly.

  5. Embedded Streaming Deep Neural Networks Accelerator With Applications.

    PubMed

    Dundar, Aysegul; Jin, Jonghoon; Martini, Berin; Culurciello, Eugenio

    2017-07-01

Deep convolutional neural networks (DCNNs) have become a very powerful tool in visual perception. DCNNs have applications in autonomous robots, security systems, mobile phones, and automobiles, where high throughput of the feedforward evaluation phase and power efficiency are important. Because of this increased usage, many field-programmable gate array (FPGA)-based accelerators have been proposed. In this paper, we present an optimized streaming method for a DCNN hardware accelerator on an embedded platform. The streaming method acts as a compiler, transforming a high-level representation of DCNNs into operation codes to execute applications in a hardware accelerator. The proposed method utilizes the maximum computational resources available, based on a novel scheduled routing topology that combines data reuse and data concatenation. It is tested with a hardware accelerator implemented on the Xilinx Kintex-7 XC7K325T FPGA. The system fully explores weight-level and node-level parallelizations of DCNNs and achieves a peak performance of 247 G-ops while consuming less than 4 W of power. We test our system with applications on object classification and object detection in real-world scenarios. Our results indicate high performance efficiency, outperforming all other presented platforms while running these applications.

  6. Short-term memory coding in children with intellectual disabilities.

    PubMed

    Henry, Lucy

    2008-05-01

To examine visual and verbal coding strategies, I asked children with intellectual disabilities and peers matched for mental age (MA) and chronological age (CA) to perform picture memory span tasks with phonologically similar, visually similar, long, or nonsimilar named items. The CA group showed effects consistent with advanced verbal memory coding (phonological similarity and word length effects). Neither the intellectual disabilities group nor the MA group showed evidence of memory coding strategies. However, children in these groups with MAs above 6 years showed significant visual similarity and word length effects, broadly consistent with an intermediate stage of dual visual and verbal coding. These results suggest that developmental progressions in memory coding strategies are independent of intellectual disability status and consistent with MA.

  7. High spatial resolution measurements in a single stage ram accelerator

    NASA Technical Reports Server (NTRS)

    Hinkey, J. B.; Burnham, E. A.; Bruckner, A. P.

    1992-01-01

High spatial resolution experimental tube wall pressure measurements of ram accelerator gas dynamic phenomena are presented in this paper. The ram accelerator is a ramjet-in-tube device which operates in a manner similar to that of a conventional ramjet. The projectile resembles the centerbody of a ramjet and travels supersonically through a tube filled with a combustible gaseous mixture, with the tube acting as the outer cowling. Pressure data are recorded as the projectile passes by sensors mounted in the tube wall at various locations along the tube. Utilization of special, highly instrumented sections of tube has allowed the recording of gas dynamic phenomena with high resolution. High spatial resolution tube wall pressure data from the three regimes of propulsion studied to date (subdetonative, transdetonative, and superdetonative) in a single stage gas mixture are presented and reveal the three-dimensional character of the flow field induced by the projectile fins and by the canting of the projectile body relative to the tube wall. Also presented for comparison to the experimental data are calculations made with an inviscid, three-dimensional CFD code. The knowledge gained from these experiments and simulations is useful in understanding the underlying nature of ram accelerator propulsive regimes, as well as in assisting the validation of three-dimensional CFD codes which model unsteady, chemically reactive flows.

  8. The Los Alamos Laser Acceleration of Particles Workshop and beginning of the advanced accelerator concepts field

    NASA Astrophysics Data System (ADS)

    Joshi, C.

    2012-12-01

The first Advanced Acceleration of Particles (AAC) Workshop (actually named the Laser Acceleration of Particles Workshop) was held at Los Alamos in January 1982. The workshop lasted a week and divided all the acceleration techniques into four categories: near field, far field, media, and vacuum. Basic theorems of particle acceleration were postulated (and later proven) and specific experiments based on the four categories were formulated. This landmark workshop led to the formation of the advanced accelerator R&D program in the HEP office of the DOE that supports advanced accelerator research to this day. Two major new user facilities at Argonne and Brookhaven and several more directed experimental efforts were built to explore the advanced particle acceleration schemes. It is not an exaggeration to say that the intellectual breadth and excitement provided by the many groups who entered this new field provided the needed vitality to the then recently formed APS Division of Beams and the new online journal Physical Review Special Topics - Accelerators and Beams. On this 30th anniversary of the AAC Workshops, it is worthwhile to look back at the legacy of the first Workshop at Los Alamos and the fine groundwork it laid for the field of advanced accelerator concepts that continues to flourish to this day.

  9. Study of the transverse beam motion in the DARHT Phase II accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yu-Jiuan; Fawley, W M; Houck, T L

    1998-08-20

The accelerator for the second axis of the Dual Axis Radiographic Hydrodynamic Test (DARHT) facility will accelerate a 4-kA, 3-MeV, 2-µs long electron current pulse to 20 MeV. The energy variation of the beam within the flat-top portion of the current pulse is ±0.5%. The performance of the DARHT Phase II radiographic machine requires the transverse beam motion to be much less than the beam spot size, which is about 1.5 mm in diameter on the x-ray converter. In general, the leading causes of transverse beam motion in an accelerator are the beam breakup instability (BBU) and the corkscrew motion. We have modeled the transverse beam motion in the DARHT Phase II accelerator with various magnetic tunes and accelerator cell configurations by using the BREAKUP code. The predicted sensitivity of corkscrew motion and BBU growth to different tuning algorithms will be presented.

  10. Distribution of the background gas in the MITICA accelerator

    NASA Astrophysics Data System (ADS)

    Sartori, E.; Dal Bello, S.; Serianni, G.; Sonato, P.

    2013-02-01

MITICA is the ITER neutral beam test facility to be built in Padova for the generation of a 40 A D⁻ ion beam with a 16×5×16 array of 1280 beamlets accelerated to 1 MV. The background gas pressure distribution and the particle flows inside the MITICA accelerator are critical aspects for stripping losses, generation of secondary particles, and beam non-uniformities. To keep the stripping losses in the extraction and acceleration stages reasonably low, the source pressure should be 0.3 Pa or less. The gas flow in the MITICA accelerator is being studied using a 3D finite element code named Avocado. The gas-wall interaction model is based on the cosine law, and the whole vacuum system geometry is represented by a view factor matrix based on surface discretization and gas property definitions. Pressure distribution and mutual fluxes are then solved linearly. In this paper the result of a numerical simulation is presented, showing the steady-state pressure distribution inside the accelerator when gas enters the system at room temperature. The accelerator model is limited to a horizontal slice 400 mm high (1/4 of the accelerator height). The pressure profile at the solid walls and along the beamlet axis is obtained, allowing the evaluation and discussion of the background gas distribution and non-uniformity. The particle flux at the inlet and outlet boundaries (namely the grounded grid apertures and the lateral conductances, respectively) will also be discussed.
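The linear solve the abstract describes can be sketched as a free-molecular flux balance: each wall patch re-emits (by the cosine law) the flux it receives from the others, plus any local gas source, so the outgoing fluxes satisfy (I − V)φ = s, where V is the view-factor matrix. Below is a toy 4-patch example with made-up view factors; Avocado's actual geometry and API are not described in this record.

```python
import numpy as np

# Toy free-molecular flux balance on 4 wall patches.
# V[i, j]: fraction of flux leaving patch j that lands on patch i
# (row sums < 1 because some flux escapes through an opening).
V = np.array([
    [0.0, 0.3, 0.2, 0.2],
    [0.3, 0.0, 0.2, 0.2],
    [0.2, 0.2, 0.0, 0.3],
    [0.2, 0.2, 0.3, 0.0],
])
source = np.array([1.0, 0.0, 0.0, 0.0])  # gas injected at patch 0

# Steady state: phi = V @ phi + source  ->  (I - V) phi = source
phi = np.linalg.solve(np.eye(4) - V, source)
print(phi)  # outgoing flux per patch; largest at the injection patch
```

The wall pressure at each patch then follows from its net flux and the local gas temperature; the full code resolves the same balance over the discretized accelerator surfaces.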

  11. Acceleration modules in linear induction accelerators

    NASA Astrophysics Data System (ADS)

    Wang, Shao-Heng; Deng, Jian-Jun

    2014-05-01

The Linear Induction Accelerator (LIA) is a unique type of accelerator that is capable of accelerating kilo-ampere charged-particle currents to energies of tens of MeV. The recent development of LIAs operating in MHz burst mode and their successful application to a synchrotron have broadened the scope of LIA usage. Although the transformer model is widely used to explain the acceleration mechanism of LIAs, for many modern LIAs it is not appropriate to consider the induction electric field as the field which accelerates the charged particles. We have examined the transition of the magnetic cores' functions during the evolution of LIA acceleration modules, distinguished transformer-type and transmission-line-type LIA acceleration modules, and reconsidered several related issues on the basis of the transmission-line-type LIA acceleration module. This clarified understanding should help in the further development and design of LIA acceleration modules.

  12. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.

  13. Understanding large SEP events with the PATH code: Modeling of the 13 December 2006 SEP event

    NASA Astrophysics Data System (ADS)

    Verkhoglyadova, O. P.; Li, G.; Zank, G. P.; Hu, Q.; Cohen, C. M. S.; Mewaldt, R. A.; Mason, G. M.; Haggerty, D. K.; von Rosenvinge, T. T.; Looper, M. D.

    2010-12-01

The Particle Acceleration and Transport in the Heliosphere (PATH) numerical code was developed to understand solar energetic particle (SEP) events in the near-Earth environment. We discuss simulation results for the 13 December 2006 SEP event. The PATH code includes modeling of a background solar wind through which a CME-driven oblique shock propagates. The code incorporates a mixed population of both flare and shock-accelerated solar wind suprathermal particles. The shock parameters derived from ACE measurements at 1 AU and observational flare characteristics are used as input to the numerical model. We assume that the diffusive shock acceleration mechanism is responsible for particle energization. We model the subsequent transport of particles originating at the flare site and of particles escaping from the shock and propagating in the equatorial plane through the interplanetary medium. We derive spectra for protons, oxygen, and iron ions, together with their time-intensity profiles at 1 AU. Our modeling results show reasonable agreement with in situ measurements by ACE, STEREO, GOES, and SAMPEX for this event. We numerically estimate the Fe/O abundance ratio and discuss the physics underlying a mixed SEP event. We point out that the flare population is as important as shock geometry changes during shock propagation for modeling time-intensity profiles and spectra at 1 AU. The combined effects of seed population and shock geometry will be examined in the framework of an extended PATH code in future modeling efforts.
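In the test-particle limit, the diffusive shock acceleration mechanism assumed here predicts a power-law phase-space distribution f(p) ∝ p^(−q) whose index depends only on the shock compression ratio r, with q = 3r/(r − 1); a one-function illustration:

```python
def dsa_spectral_index(r):
    """Test-particle diffusive-shock-acceleration index q for f(p) ~ p**-q,
    given the shock compression ratio r (1 < r <= 4 for a hydrodynamic shock)."""
    return 3.0 * r / (r - 1.0)

# A strong shock (r = 4) gives the canonical q = 4; weaker shocks
# give steeper (softer) spectra.
print(dsa_spectral_index(4.0))  # 4.0
print(dsa_spectral_index(3.0))  # 4.5
```

The full PATH model goes well beyond this limit (oblique shock geometry, seed populations, and interplanetary transport), but the compression-ratio dependence above is the core of the assumed energization.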

  14. Forward and adjoint spectral-element simulations of seismic wave propagation using hardware accelerators

    NASA Astrophysics Data System (ADS)

    Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri

    2015-04-01

Improving the resolution of tomographic images is crucial to answering important questions about the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, in which seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy, and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities to further improve the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing, together with advances in multi-core central processing units (CPUs), can greatly accelerate scientific applications. There are mainly two choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL was adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and to generate optimized source code for both the CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usages.
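BOAST itself is a Ruby-based generator; purely to illustrate the single-source idea described above, here is a hypothetical Python sketch that emits the same kernel in either the CUDA or the OpenCL dialect (the template and names are invented for this sketch, not BOAST's API):

```python
# Single-source kernel meta-programming: one abstract kernel description,
# two concrete dialects. Only the dialect-specific tokens differ.
TEMPLATE = """{qualifier} void scale(
    {global_q} float *x, const float a, const int n)
{{
    int i = {index};
    if (i < n) x[i] = a * x[i];
}}"""

DIALECTS = {
    "cuda": {
        "qualifier": "__global__",
        "global_q": "",
        "index": "blockIdx.x * blockDim.x + threadIdx.x",
    },
    "opencl": {
        "qualifier": "__kernel",
        "global_q": "__global",
        "index": "get_global_id(0)",
    },
}

def generate(dialect):
    """Render the kernel template for the requested GPU dialect."""
    return TEMPLATE.format(**DIALECTS[dialect])

print(generate("cuda"))
print(generate("opencl"))
```

A real generator such as BOAST additionally applies dialect-specific optimizations (loop transformations, memory placement) before emitting the source, which is what makes auto-tuned kernels portable across NVIDIA and AMD hardware.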

  15. CodeSlinger: a case study in domain-driven interactive tool design for biomedical coding scheme exploration and use.

    PubMed

    Flowers, Natalie L

    2010-01-01

    CodeSlinger is a desktop application that was developed to aid medical professionals in the intertranslation, exploration, and use of biomedical coding schemes. The application was designed to provide a highly intuitive, easy-to-use interface that simplifies a complex business problem: a set of time-consuming, laborious tasks that were regularly performed by a group of medical professionals involving manually searching coding books, searching the Internet, and checking documentation references. A workplace observation session with a target user revealed the details of the current process and a clear understanding of the business goals of the target user group. These goals drove the design of the application's interface, which centers on searches for medical conditions and displays the codes found in the application's database that represent those conditions. The interface also allows the exploration of complex conceptual relationships across multiple coding schemes.

  16. [Class III surgical patients facilitated by accelerated osteogenic orthodontic treatment].

    PubMed

    Wu, Jia-qi; Xu, Li; Liang, Cheng; Zou, Wei; Bai, Yun-yang; Jiang, Jiu-hui

    2013-10-01

To evaluate the treatment time and the anterior and posterior tooth movement pattern while closing the extraction space in Class III surgical patients facilitated by accelerated osteogenic orthodontic treatment. There were 10 skeletal Class III patients in the accelerated osteogenic orthodontic (AOO) group and 10 patients in the control group. Upper first premolars were extracted in all patients. After leveling and alignment (T2), corticotomy was performed in the area of the maxillary anterior teeth to accelerate space closing. Study models of the upper dentition were taken before orthodontic treatment (T1) and after space closing (T3). All the casts were laser scanned, and the distances of the movement of incisors and molars were digitally measured. The distances of tooth movement in the two groups were recorded and analyzed. The difference in alignment time between the two groups was not statistically significant. The treatment time from T2 to T3 in the AOO group was shorter than that in the control group (by 9.1 ± 4.1 months), as was the treatment time from T1 to T3 (by 6.3 ± 4.8 months); the differences were significant (P < 0.01). The average distances of upper incisor movement (D1) in the AOO and control groups were (2.89 ± 1.48) and (3.10 ± 0.95) mm, respectively. The average distances of upper first molar movement (D2) in the AOO and control groups were (2.17 ± 1.13) and (2.45 ± 1.04) mm, respectively. No statistically significant difference was found between the two groups (P > 0.05). Accelerated osteogenic orthodontic treatment can accelerate space closing in Class III surgical patients and shorten the preoperative orthodontic time. There was no influence on the movement pattern of the anterior and posterior teeth during pre-surgical orthodontic treatment.

  17. Inductive and electrostatic acceleration in relativistic jet-plasma interactions.

    PubMed

    Ng, Johnny S T; Noble, Robert J

    2006-03-24

    We report on the observation of rapid particle acceleration in numerical simulations of relativistic jet-plasma interactions and discuss the underlying mechanisms. The dynamics of a charge-neutral, narrow, electron-positron jet propagating through an unmagnetized electron-ion plasma was investigated using a three-dimensional, electromagnetic, particle-in-cell computer code. The interaction excited magnetic filamentation as well as electrostatic plasma instabilities. In some cases, the longitudinal electric fields generated inductively and electrostatically reached the cold plasma-wave-breaking limit, and the longitudinal momentum of about half the positrons increased by 50% with a maximum gain exceeding a factor of 2 during the simulation period. Particle acceleration via these mechanisms occurred when the criteria for Weibel instability were satisfied.

  18. Activation assessment of the soil around the ESS accelerator tunnel

    NASA Astrophysics Data System (ADS)

    Rakhno, I. L.; Mokhov, N. V.; Tropin, I. S.; Ene, D.

    2018-06-01

Activation of the soil surrounding the ESS accelerator tunnel, calculated with the MARS15 code, is presented. A detailed composition of the soil, comprising about 30 chemical elements, is considered. Spatial distributions of the produced activity are provided in both the transverse and longitudinal directions. A realistic irradiation profile for the entire planned lifetime of the facility is used. The nuclear transmutation and decay of the produced radionuclides are calculated with the DeTra code, a built-in tool of the MARS15 code. Radionuclide production by low-energy neutrons is calculated using the ENDF/B-VII evaluated nuclear data library. In order to estimate the quality of this activation assessment, a comparison between calculated and measured activation of various foils in a similar radiation environment is presented.
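After the radionuclide inventory has been produced, the residual activity of each species follows simple exponential decay. DeTra solves full transmutation and decay chains; the sketch below covers only a single isolated nuclide, with illustrative numbers that are not taken from the record.

```python
import math

def residual_activity(a0, half_life, t):
    """Activity remaining after time t for one isolated radionuclide:
    A(t) = A0 * exp(-ln2 * t / T_half). a0 and the return value share
    units, as do half_life and t."""
    return a0 * math.exp(-math.log(2.0) * t / half_life)

# Illustrative only: 100 Bq initial activity, 5.27-year half-life (Co-60),
# decayed for exactly one half-life leaves half the activity.
print(residual_activity(100.0, 5.27, 5.27))  # ~50.0
```

Chained production and decay (parent feeding daughter) requires the Bateman equations, which is the part DeTra automates over the full MARS15 inventory.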

  19. Activation Assessment of the Soil Around the ESS Accelerator Tunnel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rakhno, I. L.; Mokhov, N. V.; Tropin, I. S.

Activation of the soil surrounding the ESS accelerator tunnel, calculated with the MARS15 code, is presented. A detailed composition of the soil, comprising about 30 different chemical elements, is considered. Spatial distributions of the produced activity are provided in both the transverse and longitudinal directions. A realistic irradiation profile for the entire planned lifetime of the facility is used. The nuclear transmutation and decay of the produced radionuclides are calculated with the DeTra code, a built-in tool of the MARS15 code. Radionuclide production by low-energy neutrons is calculated using the ENDF/B-VII evaluated nuclear data library. In order to estimate the quality of this activation assessment, a comparison between calculated and measured activation of various foils in a similar radiation environment is presented.

  20. Particle Acceleration, Magnetic Field Generation, and Emission in Relativistic Shocks

    NASA Technical Reports Server (NTRS)

    Nishikawa, Ken-IchiI.; Hededal, C.; Hardee, P.; Richardson, G.; Preece, R.; Sol, H.; Fishman, G.

    2004-01-01

Shock acceleration is a ubiquitous phenomenon in astrophysical plasmas. Plasma waves and their associated instabilities (e.g., the Buneman instability, the two-stream instability, and the Weibel instability) created in shocks are responsible for particle (electron, positron, and ion) acceleration. Using a 3-D relativistic electromagnetic particle-in-cell code, we have investigated particle acceleration associated with a relativistic jet front propagating through an ambient plasma with and without initial magnetic fields. We find only small differences in the results between no ambient magnetic field and a weak ambient parallel magnetic field. Simulations show that the Weibel instability created in the collisionless shock front accelerates particles perpendicular and parallel to the jet propagation direction. New simulations with an ambient perpendicular magnetic field show a strong interaction between the relativistic jet and the magnetic fields. The magnetic fields are piled up by the jet and the jet electrons are bent, which creates currents and displacement currents. At the nonlinear stage, the magnetic fields are reversed by the current and reconnection may take place. Owing to these dynamics, the jet and ambient electrons are strongly accelerated in both the parallel and perpendicular directions.

  1. Rate heterogeneity in six protein-coding genes from the holoparasite Balanophora (Balanophoraceae) and other taxa of Santalales

    PubMed Central

    Su, Huei-Jiun; Hu, Jer-Ming

    2012-01-01

    Background and Aims The holoparasitic flowering plant Balanophora displays extreme floral reduction and was previously found to have enormous rate acceleration in the nuclear 18S rDNA region. So far, it remains unclear whether non-ribosomal, protein-coding genes of Balanophora also evolve in an accelerated fashion and whether the genes with high substitution rates retain their functionality. To tackle these issues, six different genes were sequenced from two Balanophora species and their rate variation and expression patterns were examined. Methods Sequences including nuclear PI, euAP3, TM6, LFY and RPB2 and mitochondrial matR were determined from two Balanophora spp. and compared with selected hemiparasitic species of Santalales and autotrophic core eudicots. Gene expression was detected for the six protein-coding genes and the expression patterns of the three B-class genes (PI, AP3 and TM6) were further examined across different organs of B. laxiflora using RT-PCR. Key Results Balanophora mitochondrial matR is highly accelerated in both nonsynonymous (dN) and synonymous (dS) substitution rates, whereas the rate variation of the nuclear genes LFY, PI, euAP3, TM6 and RPB2 is less dramatic. Significant dS increases were detected in Balanophora PI, TM6 and RPB2, and dN accelerations in euAP3. All of the protein-coding genes are expressed in inflorescences, indicative of their functionality. PI is restrictively expressed in tepals, synandria and floral bracts, whereas AP3 and TM6 are widely expressed in both male and female inflorescences. Conclusions Despite the observation that rates of sequence evolution are generally higher in Balanophora than in hemiparasitic species of Santalales and autotrophic core eudicots, the five nuclear protein-coding genes are functional and are evolving at a much slower rate than 18S rDNA. The mechanism or mechanisms responsible for rapid sequence evolution and concomitant rate acceleration for 18S rDNA and matR are currently not well understood.

  2. Compiler-based code generation and autotuning for geometric multigrid on GPU-accelerated supercomputers

    DOE PAGES

    Basu, Protonu; Williams, Samuel; Van Straalen, Brian; ...

    2017-04-05

    GPUs, with their high bandwidths and computational capabilities, are an increasingly popular target for scientific computing. Unfortunately, to date, harnessing the power of the GPU has required use of a GPU-specific programming model like CUDA, OpenCL, or OpenACC. Thus, in order to deliver portability across CPU-based and GPU-accelerated supercomputers, programmers are forced to write and maintain two versions of their applications or frameworks. In this paper, we explore the use of a compiler-based autotuning framework based on CUDA-CHiLL to deliver not only portability, but also performance portability across CPU- and GPU-accelerated platforms for the geometric multigrid linear solvers found in many scientific applications. We also show that with autotuning we can attain near-Roofline (a performance bound for a computation and target architecture) performance across the key operations in the miniGMG benchmark for both CPU- and GPU-based architectures as well as for multiple stencil discretizations and smoothers. We show that our technology is readily interoperable with MPI, resulting in performance at scale equal to that obtained via a hand-optimized MPI+CUDA implementation.
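    The geometric multigrid solvers tuned in this work follow the standard V-cycle pattern of smoothing, restriction, coarse-grid correction, and prolongation; a minimal 1-D sketch for the Poisson equation (illustrative only, not the miniGMG or CUDA-CHiLL code) in Python:

```python
import numpy as np

def smooth(u, f, h, iters=3):
    """Weighted-Jacobi smoothing for -u'' = f with zero Dirichlet boundaries."""
    for _ in range(iters):
        u[1:-1] += 0.67 * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1] - 2 * u[1:-1])
    return u

def v_cycle(u, f, h):
    if len(u) <= 3:
        return smooth(u, f, h, iters=50)   # coarsest grid: relax to convergence
    u = smooth(u, f, h)                    # pre-smooth
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)  # residual
    rc = r[::2].copy()                     # restrict by full weighting
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)   # coarse-grid correction
    e = np.zeros_like(u)
    e[::2] = ec                            # prolong: copy coarse points...
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])  # ...and interpolate midpoints
    u += e
    return smooth(u, f, h)                 # post-smooth

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)        # exact solution: sin(pi * x)
u = np.zeros(n)
for _ in range(20):
    u = v_cycle(u, f, h)
```

    Each stage is a stencil sweep, which is exactly the kind of loop nest that autotuners like CUDA-CHiLL transform for either CPUs or GPUs.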

  4. Exploring and Improving Student Engagement in an Accelerated Undergraduate Nursing Program through a Mentoring Partnership: An Action Research Study.

    PubMed

    Bramble, Marguerite; Maxwell, Hazel; Einboden, Rochelle; Farington, Sally; Say, Richard; Beh, Chin Liang; Stankiewicz, Grace; Munro, Graham; Marembo, Esther; Rickard, Greg

    2018-05-30

    This Participatory Action Research (PAR) project aimed to engage students from an accelerated 'fast track' nursing program in a mentoring collaboration, using an interdisciplinary partnership intervention with a group of academics. Student participants represented the disciplines of nursing and paramedicine with a high proportion of culturally and linguistically diverse (CALD) students. Nine student mentors were recruited and paired with academics for a three-month 'mentorship partnership' intervention. Data from two pre-intervention workshops and a post-intervention workshop were coded in NVivo11 using thematic analysis. Drawing on social inclusion theory, a qualitative analysis explored an iteration of themes across each action cycle. Emergent themes were: 1) 'building relationships for active engagement', 2) 'voicing cultural and social hierarchies', and 3) 'enacting collegiate community'. The study offers insights into issues for contemporary accelerated course delivery with a diverse student population and highlights future strategies to foster effective student engagement.

  5. Design of an electromagnetic accelerator for turbulent hydrodynamic mix studies

    NASA Astrophysics Data System (ADS)

    Susoeff, A. R.; Hawke, R. S.; Morrison, J. J.; Dimonte, G.; Remington, B. A.

    1993-12-01

    An electromagnetic accelerator in the form of a linear electric motor (LEM) has been designed to achieve controlled acceleration profiles of a carriage containing hydrodynamically unstable fluids for the investigation of the development of turbulent mix. The Rayleigh-Taylor instability is investigated by accelerating two fluids of dissimilar density using the LEM to achieve a wide variety of acceleration and deceleration profiles. The acceleration profiles are achieved by independent control of rail and augmentation currents. A variety of acceleration-time profiles are possible, including: (1) constant, (2) impulsive and (3) shaped. The LEM and support structure are of robust design in order to withstand high loads with minimal deflection and to mitigate operational vibration. Vibration of the carriage during acceleration could create artifacts in the data which would interfere with the intended study of the Rayleigh-Taylor instability. The design allows clear access for diagnostic techniques such as laser-induced fluorescence radiography, shadowgraphs and particle imaging velocimetry. Electromagnetic modeling codes were used to optimize the rail and augmentation coil positions within the support structure framework. Results of contemporary studies of non-arcing sliding contact for solid armatures are used for the design of the driving armature and the dynamic electromagnetic braking system. A 0.6 MJ electrolytic capacitor bank is used for energy storage to drive the LEM. This report discusses a LEM design which will accelerate masses of up to 3 kg to a maximum of about 3000 g(sub o), where g(sub o) is the acceleration due to gravity.

  6. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    PubMed

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

    Over two decades, the Monte Carlo technique has become a gold standard in simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach to porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator allows reducing the computational time of MC simulation and obtaining a speed-up comparable to that of a GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.
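    The core of such MC codes is an embarrassingly parallel photon random walk, which is what makes both Xeon Phi and GPU offload effective; a minimal single-threaded sketch (weighted photons, isotropic scattering, semi-infinite medium, illustrative parameters only, not the authors' code) in Python:

```python
import math
import random

def propagate_photon(mu_a, mu_s, rng):
    """Random-walk one photon; return the weight escaping through z = 0.
    mu_a, mu_s: absorption and scattering coefficients (1/mm)."""
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    z, uz, w = 0.0, 1.0, 1.0              # depth, z-direction cosine, weight
    while w > 1e-3:
        s = -math.log(rng.random()) / mu_t   # free path, Beer-Lambert sampling
        z += uz * s
        if z < 0.0:
            return w                      # escaped: contributes to reflectance
        w *= albedo                       # absorb a fraction of the weight
        uz = 2.0 * rng.random() - 1.0     # isotropic scattering direction
    return 0.0                            # weight exhausted inside the medium

rng = random.Random(1)
n = 2000
refl = sum(propagate_photon(0.1, 10.0, rng) for _ in range(n)) / n
```

    Since photons are independent, distributing the loop over Xeon Phi threads (or GPU threads) requires only per-thread random streams and a final reduction.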

  7. Particle acceleration, magnetic field generation, and emission in relativistic pair jets

    NASA Technical Reports Server (NTRS)

    Nishikawa, K.-I.; Ramirez-Ruiz, E.; Hardee, P.; Hededal, C.; Kouveliotou, C.; Fishman, G. J.; Mizuno, Y.

    2005-01-01

    Shock acceleration is a ubiquitous phenomenon in astrophysical plasmas. Recent simulations show that the Weibel instability created by relativistic pair jets is responsible for particle (electron, positron, and ion) acceleration. Using a 3-D relativistic electromagnetic particle (REMP) code, we have investigated particle acceleration associated with a relativistic jet propagating through an ambient plasma with and without initial magnetic fields. The growth rates of the Weibel instability depend on the distribution of the pair jets. The Weibel instability created in the collisionless shock accelerates particles perpendicular and parallel to the jet propagation direction. This instability is also responsible for generating and amplifying highly nonuniform, small-scale magnetic fields, which contribute to the electron's transverse deflection behind the jet head. The jitter radiation from deflected electrons has different properties than synchrotron radiation, which is calculated for a uniform magnetic field. This jitter radiation may be important to understanding the complex time evolution and/or spectral structure in gamma-ray bursts, relativistic jets, and supernova remnants.

  8. Cryogenic distribution box for Fermi National Accelerator Laboratory

    NASA Astrophysics Data System (ADS)

    Svehla, M. R.; Bonnema, E. C.; Cunningham, E. K.

    2017-12-01

    Meyer Tool & Mfg., Inc. (Meyer Tool) of Oak Lawn, Illinois is manufacturing a cryogenic distribution box for Fermi National Accelerator Laboratory (FNAL). The distribution box will be used for the Muon-to-electron conversion (Mu2e) experiment. The box includes twenty-seven cryogenic valves, two heat exchangers, a thermal shield, and an internal nitrogen separator vessel, all contained within a six-foot-diameter ASME-coded vacuum vessel. This paper discusses the design and manufacturing processes that were implemented to meet the unique fabrication requirements of this distribution box. Design and manufacturing features discussed include: 1) Thermal strap design and fabrication, 2) Evolution of piping connections to heat exchangers, 3) Nitrogen phase separator design, 4) ASME code design of the vacuum vessel, and 5) Cryogenic valve installation.

  9. Direct measurement of the image displacement instability in a linear induction accelerator

    NASA Astrophysics Data System (ADS)

    Burris-Mog, T. J.; Ekdahl, C. A.; Moir, D. C.

    2017-06-01

    The image displacement instability (IDI) has been measured on the 20 MeV Axis I of the dual axis radiographic hydrodynamic test facility and compared to theory. A 0.23 kA electron beam was accelerated across 64 gaps in a low solenoid focusing field, and the position of the beam centroid was measured out to 34.3 meters downstream of the cathode. One beam dynamics code was used to model the IDI from first principles, while another code characterized the effects of the resistive wall instability and the beam break-up (BBU) instability. Although the BBU instability was not found to influence the IDI, it appears that the IDI influences the BBU. Because BBU theory does not fully account for the dependence of the coupling to cavity transverse magnetic modes on beam position, the effect of the IDI is missing from that theory. This becomes of particular concern to users of linear induction accelerators operating in or near low magnetic guide field tunes.

  10. GPU-accelerated phase-field simulation of dendritic solidification in a binary alloy

    NASA Astrophysics Data System (ADS)

    Yamanaka, Akinori; Aoki, Takayuki; Ogawa, Satoi; Takaki, Tomohiro

    2011-03-01

    The phase-field simulation for dendritic solidification of a binary alloy has been accelerated by using a graphics processing unit (GPU). To perform the phase-field simulation of the alloy solidification on the GPU, a program code was developed with the compute unified device architecture (CUDA). In this paper, the implementation technique of the phase-field model on the GPU is presented. We also evaluated the acceleration performance of the three-dimensional solidification simulation by using a single NVIDIA TESLA C1060 GPU and the developed program code. The results showed that the GPU calculation for a 576³ computational grid achieved a performance of 170 GFLOPS by utilizing the shared memory as a software-managed cache. Furthermore, the computation with the GPU was demonstrated to be 100 times faster than that with a single CPU core. From the obtained results, we confirmed the feasibility of realizing a real-time full three-dimensional phase-field simulation of microstructure evolution on a personal desktop computer.
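    Phase-field updates are local stencil operations, which is why they map so well onto GPU thread blocks with shared-memory caching; a minimal explicit Allen-Cahn update in 1-D (a generic phase-field model, not the authors' alloy solidification model) in Python:

```python
import numpy as np

def step(phi, dx, dt, eps=1.0, w=1.0):
    """One explicit Euler step of the Allen-Cahn phase-field equation
    d(phi)/dt = eps^2 * laplacian(phi) - w * phi * (phi^2 - 1),
    with periodic boundaries; phi = +/-1 are the two bulk phases."""
    lap = (np.roll(phi, 1) + np.roll(phi, -1) - 2.0 * phi) / dx**2
    return phi + dt * (eps**2 * lap - w * phi * (phi**2 - 1.0))

# A sharp interface between phi = -1 and phi = +1 relaxes to a diffuse profile.
x = np.linspace(-10.0, 10.0, 256)
phi = np.sign(x)
for _ in range(2000):
    phi = step(phi, dx=x[1] - x[0], dt=0.001)
```

    On a GPU, each output point reads only its immediate neighbors, so a thread block can stage its tile of `phi` in shared memory and reuse it, which is the software-managed-cache strategy the abstract describes.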

  11. Maturation profile of inferior olivary neurons expressing ionotropic glutamate receptors in rats: role in coding linear accelerations.

    PubMed

    Li, Chuan; Han, Lei; Ma, Chun-Wai; Lai, Suk-King; Lai, Chun-Hong; Shum, Daisy Kwok Yan; Chan, Ying-Shing

    2013-07-01

    Using sinusoidal oscillations of linear acceleration along both the horizontal and vertical planes to stimulate otolith organs in the inner ear, we charted the postnatal time at which responsive neurons in the rat inferior olive (IO) first showed Fos expression, an indicator of neuronal recruitment into the otolith circuit. Neurons in the subnucleus dorsomedial cell column (DMCC) were activated by vertical stimulation as early as P9 and by horizontal (interaural) stimulation as early as P11. By P13, neurons in the β subnucleus of the IO (IOβ) became responsive to horizontal stimulation along the interaural and antero-posterior directions. By P21, neurons in the rostral IOβ also became responsive to vertical stimulation, but those in the caudal IOβ remained responsive only to horizontal stimulation. Nearly all functionally activated neurons in the DMCC and IOβ were immunopositive for the NR1 subunit of the NMDA receptor and the GluR2/3 subunit of the AMPA receptor. In situ hybridization studies further indicated abundant mRNA signals of the glutamate receptor subunits by the end of the second postnatal week. This is reinforced by whole-cell patch-clamp data in which glutamate receptor-mediated miniature excitatory postsynaptic currents of rostral IOβ neurons showed a postnatal increase in amplitude, reaching the adult level by P14. Further, these neurons exhibited subthreshold oscillations in membrane potential from P14 onwards. Taken together, our results support the conclusion that ionotropic glutamate receptors in the IO enable postnatal coding of gravity-related information and that the rostral IOβ is the only IO subnucleus that encodes spatial orientations in 3-D.

  12. Quantum mechanics in noninertial reference frames: Relativistic accelerations and fictitious forces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klink, W.H., E-mail: william-klink@uiowa.edu; Wickramasekara, S., E-mail: wickrama@grinnell.edu

    2016-06-15

    One-particle systems in relativistically accelerating reference frames can be associated with a class of unitary representations of the group of arbitrary coordinate transformations, an extension of the Wigner–Bargmann definition of particles as the physical realization of unitary irreducible representations of the Poincaré group. Representations of the group of arbitrary coordinate transformations become necessary to define unitary operators implementing relativistic acceleration transformations in quantum theory because, unlike in the Galilean case, the relativistic acceleration transformations do not themselves form a group. The momentum operators that follow from these representations show how the fictitious forces in noninertial reference frames are generated in quantum theory.
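    The group-theoretic obstruction mentioned here can be made concrete with a standard fact: the composition of two non-collinear Lorentz boosts is not a pure boost but a boost combined with a rotation (the Wigner rotation), so boosts and accelerations alone do not close under composition. Schematically, for pure boosts B(v):

```latex
\[
  B(\mathbf{v}_1)\, B(\mathbf{v}_2) \;=\; B(\mathbf{v}_{12})\, R(\omega),
  \qquad R(\omega) \neq \mathbb{1} \ \ \text{unless } \mathbf{v}_1 \parallel \mathbf{v}_2 ,
\]
```

    where R(ω) is the Wigner rotation. In the Galilean case the analogous composition is again a pure boost, which is why Galilean accelerations do form a group.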

  14. Energetic properties' investigation of removing flattening filter at phantom surface: Monte Carlo study using BEAMnrc code, DOSXYZnrc code and BEAMDP code

    NASA Astrophysics Data System (ADS)

    Bencheikh, Mohamed; Maghnouj, Abdelmajid; Tajmouati, Jaouad

    2017-11-01

    The Monte Carlo calculation method is considered the most accurate method for dose calculation in radiotherapy and for beam characterization studies. In this study, the Varian Clinac 2100 medical linear accelerator was modelled with and without the flattening filter (FF). The objective was to determine the flattening filter's impact on the energy properties of particles at the phantom surface in terms of energy fluence, mean energy, and energy fluence distribution. The Monte Carlo codes used in this study were the BEAMnrc code for simulating the linac head, the DOSXYZnrc code for simulating the absorbed dose in a water phantom, and BEAMDP for extracting energy properties. The field size was 10 × 10 cm², the simulated photon beam energy was 6 MV and the SSD was 100 cm. The Monte Carlo geometry was validated by a gamma index acceptance rate of 99% in PDD and 98% in dose profiles; the gamma criteria were 3% for dose difference and 3 mm for distance to agreement. Without the FF, the energy properties changed as follows: the electron contribution increased by more than 300% in energy fluence, almost 14% in mean energy and 1900% in energy fluence distribution, while the photon contribution increased by 50% in energy fluence, almost 18% in mean energy and almost 35% in energy fluence distribution. Removing the flattening filter increases the electron contamination energy relative to the photon energy; our study can contribute to the development of flattening-filter-free configurations in future linacs.
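    The gamma index used here to validate the Monte Carlo geometry combines a dose-difference and a distance-to-agreement criterion; a minimal 1-D sketch of the global form with the same 3%/3 mm criteria (simplified exhaustive search, hypothetical Gaussian profiles) in Python:

```python
import numpy as np

def gamma_index(x, d_ref, d_eval, dd=0.03, dta=3.0):
    """1-D global gamma: for each reference point, minimize over evaluated
    points sqrt((dose diff / (dd * max dose))^2 + (distance / dta)^2);
    a point passes if gamma <= 1. dd is a fraction of the reference
    maximum; dta is in the units of x (here mm)."""
    d_max = d_ref.max()
    gammas = np.empty_like(d_ref)
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        dose_term = (d_eval - di) / (dd * d_max)
        dist_term = (x - xi) / dta
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gammas

# Hypothetical profiles on a 1 mm grid: evaluated dose shifted by 0.3 mm.
x = np.arange(0.0, 100.0, 1.0)
d_ref = np.exp(-((x - 50.0) / 15.0) ** 2)
d_eval = np.exp(-((x - 50.3) / 15.0) ** 2)
pass_rate = np.mean(gamma_index(x, d_ref, d_eval) <= 1.0)
```

    A small spatial shift passes easily under a 3 mm distance-to-agreement criterion, while a shift of several millimeters makes points in the steep-gradient region fail.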

  15. Recent advances in lossless coding techniques

    NASA Astrophysics Data System (ADS)

    Yovanof, Gregory S.

    Current lossless techniques are reviewed with reference to both sequential data files and still images. Two major groups of sequential algorithms, dictionary-based and statistical techniques, are discussed. In particular, attention is given to Lempel-Ziv coding, Huffman coding, and arithmetic coding. The subject of lossless compression of imagery is briefly discussed. Finally, examples of practical implementations of lossless algorithms and some simulation results are given.
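    Of the statistical techniques mentioned, Huffman coding is the easiest to sketch: build a binary tree by repeatedly merging the two least-frequent symbols, then read codes off the root-to-leaf paths. A minimal illustration in Python:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Return a prefix-free {symbol: bitstring} code built from symbol counts."""
    counts = Counter(text)
    if len(counts) == 1:                      # degenerate single-symbol input
        return {next(iter(counts)): "0"}
    # Heap of (count, tiebreak, tree); a tree is a symbol or a (left, right) pair.
    heap = [(n, i, sym) for i, (sym, n) in enumerate(counts.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        n1, _, t1 = heapq.heappop(heap)       # two least-frequent subtrees...
        n2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (n1 + n2, i, (t1, t2)))  # ...merge into one
        i += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("abracadabra")
encoded = "".join(codes[c] for c in "abracadabra")
```

    Because the codes are prefix-free, decoding is a single left-to-right scan; frequent symbols get short codes, which is where the compression comes from.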

  16. Analysis of Movement Acceleration of Down's Syndrome Teenagers Playing Computer Games.

    PubMed

    Carrogi-Vianna, Daniela; Lopes, Paulo Batista; Cymrot, Raquel; Hengles Almeida, Jefferson Jesus; Yazaki, Marcos Lomonaco; Blascovi-Assis, Silvana Maria

    2017-12-01

    This study aimed to evaluate movement acceleration characteristics in adolescents with Down syndrome (DS) and typical development (TD), while playing bowling and golf videogames on the Nintendo® Wii™. The sample comprised 21 adolescents diagnosed with DS and 33 with TD of both sexes, between 10 and 14 years of age. The arm swing accelerations of the dominant upper limb were collected as measures during the bowling and the golf games. The first valid measurement, verified by the software readings, recorded at the start of each of the games, was used in the analysis. In the bowling game, the groups presented significant statistical differences, with the maximum (M) peaks of acceleration for the Male Control Group (MCG) (M = 70.37) and Female Control Group (FCG) (M = 70.51) when compared with Male Down Syndrome Group (MDSG) (M = 45.33) and Female Down Syndrome Group (FDSG) (M = 37.24). In the golf game the groups also presented significant statistical differences, the only difference being that the maximum peaks of acceleration for both male groups were superior compared with the female groups, MCG (M = 74.80) and FCG (M = 56.80), as well as in MDSG (M = 45.12) and in FDSG (M = 30.52). It was possible to use accelerometry to evaluate the movement acceleration characteristics of teenagers diagnosed with DS during virtual bowling and golf games played on the Nintendo Wii console.

  17. Accelerator system and method of accelerating particles

    NASA Technical Reports Server (NTRS)

    Wirz, Richard E. (Inventor)

    2010-01-01

    An accelerator system and method that utilize dust as the primary mass flux for generating thrust are provided. The accelerator system can include an accelerator capable of operating in a self-neutralizing mode and having a discharge chamber and at least one ionizer capable of charging dust particles. The system can also include a dust particle feeder that is capable of introducing the dust particles into the accelerator. By applying a pulsed positive and negative charge voltage to the accelerator, the charged dust particles can be accelerated thereby generating thrust and neutralizing the accelerator system.

  18. Accelerated and accentuated neurocognitive aging in HIV infection.

    PubMed

    Sheppard, David P; Iudicello, Jennifer E; Morgan, Erin E; Kamat, Rujvi; Clark, Lindsay R; Avci, Gunes; Bondi, Mark W; Woods, Steven Paul

    2017-06-01

    There is debate as to whether the neurocognitive changes associated with HIV infection represent an acceleration of the typical aging process or more simply reflect a greater accentuated risk for age-related declines. We aimed to determine whether accelerated neurocognitive aging is observable in a sample of older HIV-infected individuals compared to age-matched seronegatives and older old (i.e., aged ≥65) seronegative adults. Participants in a cross-sectional design included 48 HIV-seronegative (O-) and 40 HIV-positive (O+) participants between the ages of 50-65 (mean ages = 55 and 56, respectively) and 40 HIV-seronegative participants aged ≥65 (OO-; mean age = 74) who were comparable for other demographics. All participants were administered a brief neurocognitive battery of attention, episodic memory, speeded executive functions, and confrontation naming (i.e., Boston Naming Test). The O+ group performed more poorly than the O- group (i.e., accentuated aging), but not differently from the OO- on digit span and initial recall of a supraspan word list, consistent with an accelerating aging profile. However, the O+ group's performance was comparable to the O- group on all other neurocognitive tests (ps > 0.05). These data partially support a model of accelerated neurocognitive aging in HIV infection, which was observed in the domain of auditory verbal attention, but not in the areas of memory, language, or speeded executive functions. Future studies should examine whether HIV-infected adults over 65 evidence accelerated aging in downstream neurocognitive domains and subsequent everyday functioning outcomes.

  19. Group living accelerates bed bug (Hemiptera: Cimicidae) development.

    PubMed

    Saenz, Virna L; Santangelo, Richard G; Vargo, Edward L; Schal, Coby

    2014-01-01

    For many insect species, group living provides physiological and behavioral benefits, including faster development. Bed bugs (Cimex lectularius L.) live in aggregations composed of eggs, nymphs, and adults of various ages. Our aim was to determine whether bed bug nymphs reared in groups develop faster than solitary nymphs. We reared first instars either in isolation or in groups from hatching to adult emergence and recorded their development time. In addition, we investigated the effects of group housing on same-age nymphs versus nymphs reared with adults. Nymphal development was 2.2 d faster in grouped nymphs than in solitary-housed nymphs, representing 7.3% faster overall development. However, this grouping effect did not appear to be influenced by group composition. Thus, similar to other gregarious insect species, nymph development in bed bugs is faster in aggregations than in isolation.

  20. A GPL Relativistic Hydrodynamical Code

    NASA Astrophysics Data System (ADS)

    Olvera, D.; Mendoza, S.

    We are currently building a free (in the sense of a GNU GPL license) 2DRHD code to be used for different astrophysical situations. Our final target will be to include strong gravitational fields and magnetic fields. We intend to form a large group of developers, as is usually done for GPL codes.

  1. GPU-Accelerated Molecular Modeling Coming Of Age

    PubMed Central

    Stone, John E.; Hardy, David J.; Ufimtsev, Ivan S.

    2010-01-01

    Graphics processing units (GPUs) have traditionally been used in molecular modeling solely for visualization of molecular structures and animation of trajectories resulting from molecular dynamics simulations. Modern GPUs have evolved into fully programmable, massively parallel co-processors that can now be exploited to accelerate many scientific computations, typically providing about one order of magnitude speedup over CPU code and in special cases providing speedups of two orders of magnitude. This paper surveys the development of molecular modeling algorithms that leverage GPU computing, the advances already made and remaining issues to be resolved, and the continuing evolution of GPU technology that promises to become even more useful to molecular modeling. Hardware acceleration with commodity GPUs is expected to benefit the overall computational biology community by bringing teraflops performance to desktop workstations and in some cases potentially changing what were formerly batch-mode computational jobs into interactive tasks. PMID:20675161

  2. RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations

    NASA Astrophysics Data System (ADS)

    Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy

    RMG (Real-space Multigrid) is an open source, density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, is used to accelerate convergence. RMG is a cross-platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node and have improved performance and scalability, enhanced accuracy and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.
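    The subspace diagonalization step mentioned above can be illustrated with inverse subspace iteration plus a Rayleigh-Ritz projection; a minimal dense sketch (not RMG's implementation — the exact solve below stands in for the approximate inverse that a multigrid cycle provides on real-space grids) in Python:

```python
import numpy as np

def lowest_eigenpairs(A, k, iters=50, seed=0):
    """Approximate the k lowest eigenpairs of an SPD matrix A by inverse
    subspace iteration followed by a Rayleigh-Ritz projection."""
    rng = np.random.default_rng(seed)
    V = np.linalg.qr(rng.standard_normal((A.shape[0], k)))[0]
    for _ in range(iters):
        V = np.linalg.qr(np.linalg.solve(A, V))[0]  # amplify low-energy modes
    H = V.T @ A @ V                                 # k x k projected operator
    w, U = np.linalg.eigh(H)                        # Rayleigh-Ritz step
    return w, V @ U

# 1-D discrete Laplacian as a stand-in Hamiltonian.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
w, V = lowest_eigenpairs(A, 3)
```

    Replacing the exact `solve` with a few multigrid V-cycles gives the same low-mode amplification at O(n) cost, which is the combination the RMG abstract alludes to.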

  3. Kinetic Modeling of Next-Generation High-Energy, High-Intensity Laser-Ion Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albright, Brian James; Yin, Lin; Stark, David James

    One of the long-standing problems in the community is the question of how to model “next-generation” laser-ion acceleration in a computationally tractable way. A new particle tracking capability in the LANL VPIC kinetic plasma modeling code has enabled us to solve this long-standing problem.

  4. The UPSF code: a metaprogramming-based high-performance automatically parallelized plasma simulation framework

    NASA Astrophysics Data System (ADS)

    Gao, Xiatian; Wang, Xiaogang; Jiang, Binhao

    2017-10-01

    UPSF (Universal Plasma Simulation Framework) is a new plasma simulation code designed for maximum flexibility by using cutting-edge techniques supported by the C++17 standard. Through the use of metaprogramming techniques, UPSF provides arbitrary dimensional data structures and methods to support various kinds of plasma simulation models, such as Vlasov, particle in cell (PIC), fluid, and Fokker-Planck models, as well as their variants and hybrid methods. Through C++ metaprogramming, a single code can be applied to systems of arbitrary dimension with no loss of performance. UPSF can also automatically parallelize the distributed data structure and accelerate matrix and tensor operations via BLAS. A three-dimensional particle in cell code has been developed based on UPSF. Two test cases, Landau damping and the Weibel instability for the electrostatic and electromagnetic situations respectively, are presented to show the validity and performance of the UPSF code.
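    The arbitrary-dimension idea can be mimicked, far more modestly than C++17 metaprogramming (which resolves the dimension at compile time), with a sketch in which one function implements a stencil for any dimensionality; the names below are illustrative, not UPSF's API:

```python
import numpy as np

def laplacian(field, dx=1.0):
    """Second-order Laplacian of an N-dimensional periodic field. One code
    path serves 1-D, 2-D, 3-D, ...: the runtime loop over axes plays the
    role that compile-time dimension expansion plays in a metaprogrammed
    C++ framework."""
    out = -2.0 * field.ndim * field
    for axis in range(field.ndim):
        out += np.roll(field, 1, axis) + np.roll(field, -1, axis)
    return out / dx**2

# The same function handles a 1-D profile and a 3-D box unchanged.
u1 = np.sin(2 * np.pi * np.arange(64) / 64)
u3 = np.random.default_rng(0).standard_normal((16, 16, 16))
l1, l3 = laplacian(u1), laplacian(u3)
```

    The trade-off is that the Python loop over axes is dispatched at runtime, whereas template metaprogramming lets the compiler unroll and optimize it per dimension, which is the "no loss of performance" claim in the abstract.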

  5. Are Registration of Disease Codes for Adult Anaphylaxis Accurate in the Emergency Department?

    PubMed Central

    Choi, Byungho; Lee, Hyeji

    2018-01-01

    Purpose There has been active research on anaphylaxis, but many study subjects are limited to patients registered with anaphylaxis codes. However, anaphylaxis codes tend to be underused. The aim of this study was to investigate the accuracy of anaphylaxis code registration and the clinical characteristics of accurate and inaccurate anaphylaxis registration in anaphylactic patients. Methods This retrospective study evaluated the medical records of adult patients who visited the university hospital emergency department between 2012 and 2016. The study subjects were divided into an accurate coding group, registered under anaphylaxis codes, and an inaccurate coding group, registered under other allergy-related or symptom-related codes. Results Among 211,486 patients, 618 (0.29%) had anaphylaxis. Of these, 161 and 457 were assigned to the accurate and inaccurate coding groups, respectively. The average age, transportation to the emergency department, past anaphylaxis history, cancer history, and the cause of anaphylaxis differed between the 2 groups. Cutaneous symptoms manifested more frequently in the inaccurate coding group, while cardiovascular and neurologic symptoms were more frequently observed in the accurate group. Severe symptoms and non-alert consciousness were more common in the accurate group. Oxygen supply, intubation, and epinephrine were more commonly used as treatments for anaphylaxis in the accurate group. Anaphylactic patients with cardiovascular symptoms, severe symptoms, and epinephrine use were more likely to be accurately registered with anaphylaxis disease codes. Conclusions In cases of anaphylaxis, more patients were registered inaccurately under other allergy-related codes and symptom-related codes than accurately under anaphylaxis disease codes. Cardiovascular symptoms, severe symptoms, and epinephrine treatment were factors associated with accurate registration with anaphylaxis disease codes in patients with anaphylaxis. PMID:29411554

  6. The FLUKA code for space applications: recent developments

    NASA Technical Reports Server (NTRS)

    Andersen, V.; Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; hide

    2004-01-01

    The FLUKA Monte Carlo transport code is widely used for fundamental research, radioprotection and dosimetry, hybrid nuclear energy systems and cosmic ray calculations. The validity of its physical models has been benchmarked against a variety of experimental data over a wide range of energies, ranging from accelerator data to cosmic ray showers in the Earth's atmosphere. The code is presently undergoing several developments in order to better fit the needs of space applications. The generation of particle spectra according to up-to-date cosmic ray data as well as the effect of the solar and geomagnetic modulation have been implemented and already successfully applied to a variety of problems. The implementation of suitable models for heavy ion nuclear interactions has reached an operational stage. At medium/high energy FLUKA is using the DPMJET model. The major task of incorporating heavy ion interactions from a few GeV/n down to the threshold for inelastic collisions is also progressing, and promising results have been obtained using a modified version of the RQMD-2.4 code. This interim solution is now fully operational, while waiting for the development of new models based on the FLUKA hadron-nucleus interaction code, a newly developed QMD code, and the implementation of the Boltzmann master equation theory for low energy ion interactions. ©2004 COSPAR. Published by Elsevier Ltd. All rights reserved.

  7. Method of accelerating photons by a relativistic plasma wave

    DOEpatents

    Dawson, John M.; Wilks, Scott C.

    1990-01-01

    Photons of a laser pulse have their group velocity accelerated in a plasma as they are placed on a downward density gradient of a plasma wave of which the phase velocity nearly matches the group velocity of the photons. This acceleration results in a frequency upshift. If the unperturbed plasma has a slight density gradient in the direction of propagation, the photon frequencies can be continuously upshifted to significantly greater values.
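
The mechanism can be sketched from the standard dispersion relation for light in an unmagnetized plasma, ω² = ω_p² + c²k². Below is a minimal Python sketch using textbook formulas only (not the patent's apparatus); the wavenumber and densities in the usage are illustrative:

```python
import math

C = 2.998e8        # speed of light [m/s]
E = 1.602e-19      # electron charge [C]
ME = 9.109e-31     # electron mass [kg]
EPS0 = 8.854e-12   # vacuum permittivity [F/m]

def plasma_frequency(n_e):
    """omega_p = sqrt(n_e e^2 / (eps0 m_e)) for electron density n_e [m^-3]."""
    return math.sqrt(n_e * E**2 / (EPS0 * ME))

def photon_frequency(k, n_e):
    """Dispersion in an unmagnetized plasma: omega^2 = omega_p^2 + c^2 k^2."""
    return math.sqrt(plasma_frequency(n_e)**2 + (C * k)**2)

def group_velocity(k, n_e):
    """v_g = d(omega)/dk = c^2 k / omega, always below c inside the plasma."""
    return C**2 * k / photon_frequency(k, n_e)
```

For fixed k, `photon_frequency` rises with density, which is the frequency upshift, while `group_velocity` stays below c and recovers c in vacuum; it is this sub-luminal group velocity that lets a plasma wave with a nearly matched phase velocity act on the photons.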

  8. A boundary-Fitted Coordinate Code for General Two-Dimensional Regions with Obstacles and Boundary Intrusions.

    DTIC Science & Technology

    1983-03-01

    values of these functions on the two sides of the slits. The acceleration parameters for the iteration at each point are in the field array WACC (I,J...code will calculate a locally optimum value at each point in the field, these values being placed in the field array WACC . This calculation is...changes in x and y, are calculated by calling subroutine ERROR.) The acceleration parameter is placed in the field 65 array WACC . The addition to the

  9. Kinetic Simulations of Plasma Energization and Particle Acceleration in Interacting Magnetic Flux Ropes

    NASA Astrophysics Data System (ADS)

    Du, S.; Guo, F.; Zank, G. P.; Li, X.; Stanier, A.

    2017-12-01

    The interaction between magnetic flux ropes has been suggested as a process that leads to efficient plasma energization and particle acceleration (e.g., Drake et al. 2013; Zank et al. 2014). However, the underlying plasma dynamics and acceleration mechanisms are subject to examination of numerical simulations. As a first step of this effort, we carry out 2D fully kinetic simulations using the VPIC code to study the plasma energization and particle acceleration during coalescence of two magnetic flux ropes. Our analysis shows that the reconnection electric field and compression effect are important in plasma energization. The results may help understand the energization process associated with magnetic flux ropes frequently observed in the solar wind near the heliospheric current sheet.

  10. Simultaneous chromatic dispersion and PMD compensation by using coded-OFDM and girth-10 LDPC codes.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-07-07

    Low-density parity-check (LDPC)-coded orthogonal frequency division multiplexing (OFDM) is studied as an efficient coded modulation scheme suitable for simultaneous chromatic dispersion and polarization mode dispersion (PMD) compensation. We show that, for aggregate rate of 10 Gb/s, accumulated dispersion over 6500 km of SMF and differential group delay of 100 ps can be simultaneously compensated with penalty within 1.5 dB (with respect to the back-to-back configuration) when training sequence based channel estimation and girth-10 LDPC codes of rate 0.8 are employed.
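
The reason OFDM pairs naturally with dispersion compensation is that, once a cyclic prefix makes the channel convolution circular, chromatic dispersion collapses to a fixed phase rotation per subcarrier, which a single complex tap estimated from a training symbol can invert. A toy NumPy sketch of that one-tap, training-based equalization (the quadratic-phase channel and its parameters are illustrative, not the paper's 6500 km link model):

```python
import numpy as np

def ofdm_one_tap_demo(n_sc=64, beta=1e-3, seed=1):
    """With a cyclic prefix, chromatic dispersion acts as one fixed phase
    rotation per OFDM subcarrier; a single complex tap per subcarrier,
    estimated from a known training symbol, inverts it exactly."""
    rng = np.random.default_rng(seed)
    k = np.arange(n_sc) - n_sc // 2          # subcarrier index
    H = np.exp(1j * beta * k**2)             # quadratic (dispersive) phase
    training = np.ones(n_sc, complex)        # known pilot symbol
    payload = np.exp(1j * np.pi / 2 * rng.integers(0, 4, n_sc))  # QPSK
    rx_train = H * training                  # channel seen by the pilot
    rx_pay = H * payload                     # channel seen by the data
    H_est = rx_train / training              # training-based channel estimate
    equalized = rx_pay / H_est               # one-tap equalization
    return payload, rx_pay, equalized
```

Before equalization the received symbols are rotated; afterwards they match the transmitted QPSK constellation. PMD adds a second polarization and a 2×2 tap per subcarrier, but the per-subcarrier structure is the same.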

  11. Numerical simulations of the superdetonative ram accelerator combusting flow field

    NASA Technical Reports Server (NTRS)

    Soetrisno, Moeljo; Imlay, Scott T.; Roberts, Donald W.

    1993-01-01

    The effects of projectile canting and fins on the ram accelerator combusting flowfield and the possible cause of the ram accelerator unstart are investigated by performing axisymmetric, two-dimensional, and three-dimensional calculations. Calculations are performed using the INCA code for solving the Navier-Stokes equations and the quasi-global combustion model of Westbrook and Dryer (1981, 1984), which includes N2 and nine reacting species (CH4, CO, CO2, H2, H, O2, O, OH, and H2O) undergoing a 12-step reaction mechanism. It is found that, without canting, interactions between the fins, boundary layers, and combustion fronts are insufficient to unstart the projectile at superdetonative velocities. With canting, the projectile will unstart at flow conditions where it appears to accelerate without canting. Unstart occurs at some critical canting angle. It is also found that three-dimensionality plays an important role in the overall combustion process.

  12. Particle Acceleration, Magnetic Field Generation, and Emission in Relativistic Pair Jets

    NASA Technical Reports Server (NTRS)

    Nishikawa, K.-I.; Ramirez-Ruiz, E.; Hardee, P.; Hededal, C.; Mizuno, Y.

    2005-01-01

    Shock acceleration is a ubiquitous phenomenon in astrophysical plasmas. Plasma waves and their associated instabilities (e.g., the Buneman instability, two-stream instability, and the Weibel instability) created by relativistic pair jets are responsible for particle (electron, positron, and ion) acceleration. Using a 3-D relativistic electromagnetic particle (REMP) code, we have investigated particle acceleration associated with a relativistic jet propagating through an ambient plasma with and without initial magnetic fields. The growth rates of the Weibel instability depend on the distribution of the pair jets. Simulations show that the Weibel instability created in the collisionless shock accelerates particles perpendicular and parallel to the jet propagation direction. The simulation results show that this instability is responsible for generating and amplifying highly nonuniform, small-scale magnetic fields, which contribute to the electron's transverse deflection behind the jet head. The "jitter" radiation from deflected electrons has different properties than synchrotron radiation, which is calculated in a uniform magnetic field. This jitter radiation may be important to understanding the complex time evolution and/or spectral structure in gamma-ray bursts, relativistic jets, and supernova remnants.

  13. Particle Acceleration, Magnetic Field Generation, and Emission in Relativistic Pair Jets

    NASA Technical Reports Server (NTRS)

    Nishikawa, K. I.; Hardee, P.; Hededal, C. B.; Richardson, G.; Sol, H.; Preece, R.; Fishman, G. J.

    2004-01-01

    Shock acceleration is a ubiquitous phenomenon in astrophysical plasmas. Plasma waves and their associated instabilities (e.g., Buneman, Weibel and other two-stream instabilities) created in collisionless shocks are responsible for particle (electron, positron, and ion) acceleration. Using a 3-D relativistic electromagnetic particle (REMP) code, we have investigated particle acceleration associated with a relativistic jet front propagating into an ambient plasma. We find that the growth times depend on the Lorentz factors of the jets: jets with larger Lorentz factors grow more slowly. Simulations show that the Weibel instability created in the collisionless shock front accelerates jet and ambient particles both perpendicular and parallel to the jet propagation direction. The small-scale magnetic field structure generated by the Weibel instability is appropriate to the generation of "jitter" radiation from deflected electrons (positrons) as opposed to synchrotron radiation. The jitter radiation resulting from small-scale magnetic field structures may be important for understanding the complex time structure and spectral evolution observed in gamma-ray bursts or other astrophysical sources containing relativistic jets and relativistic collisionless shocks.

  14. Post-accelerator issues at the IsoSpin Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chattopadhyay, S.; Nitschke, J.M.

    1994-05-01

    The workshop on "Post-Accelerator Issues at the IsoSpin Laboratory" was held at the Lawrence Berkeley Laboratory from October 27-29, 1993. It was sponsored by the Center for Beam Physics in the Accelerator and Fusion Research Division and the ISL Studies Group in the Nuclear Science Division. About forty scientists from around the world participated vigorously in this two and a half day workshop (cf. Agenda, Appendix D). Following various invited review talks from leading practitioners in the field on the first day, the workshop focused on two working groups: (1) the Ion Source and Separators working group and (2) the Radio Frequency Quadrupoles and Linacs working group. The workshop closed with the two working groups summarizing and outlining the tasks for the future. This report documents the proceedings of the workshop and includes the invited review talks, the two summary talks from the working groups, and individual contributions from the participants. It is a complete assemblage of state-of-the-art thinking on ion sources, low-β, low-(q/A) accelerating structures (e.g. linacs and RFQs), isobar separators, phase-space matching, cyclotrons, etc., as relevant to radioactive beam facilities and the IsoSpin Laboratory. We regret that while the fascinating topic of superconducting low-velocity accelerator structures was covered by Dr. K. Shepard during the workshop, we can only reproduce copies of the transparencies of his talk in the Appendix, since no written manuscript was available at the time of publication of this report. The individual reports have been cataloged separately elsewhere.

  15. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  16. Overview of the H.264/AVC video coding standard

    NASA Astrophysics Data System (ADS)

    Luthra, Ajay; Topiwala, Pankaj N.

    2003-11-01

    H.264/MPEG-4 AVC is the latest coding standard jointly developed by the Video Coding Experts Group (VCEG) of ITU-T and Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state of the art coding tools and provides enhanced coding efficiency for a wide range of applications including video telephony, video conferencing, TV, storage (DVD and/or hard disk based), streaming video, digital video creation, digital cinema and others. In this paper an overview of this standard is provided. Some comparisons with the existing standards, MPEG-2 and MPEG-4 Part 2, are also provided.

  17. The fourfold way of the genetic code.

    PubMed

    Jiménez-Montaño, Miguel Angel

    2009-11-01

    We describe a compact representation of the genetic code that factorizes the table in quartets. It represents a "least grammar" for the genetic language. It is justified by the Klein-4 group structure of RNA bases and codon doublets. The matrix of the outer product between the column-vector of bases and the corresponding row-vector V(T)=(C G U A), considered as signal vectors, has a block structure consisting of the four cosets of the KxK group of base transformations acting on doublet AA. This matrix, translated into weak/strong (W/S) and purine/pyrimidine (R/Y) nucleotide classes, leads to a code table with mixed and unmixed families in separate regions. A basic difference between them is the non-commuting (R/Y) doublets: AC/CA, GU/UG. We describe the degeneracy in the canonical code and the systematic changes in deviant codes in terms of the divisors of 24, employing modulo multiplication groups. We illustrate binary sub-codes characterizing mutations in the quartets. We introduce a decision-tree to predict the mode of tRNA recognition corresponding to each codon, and compare our result with related findings by Jestin and Soulé [Jestin, J.-L., Soulé, C., 2007. Symmetries by base substitutions in the genetic code predict 2' or 3' aminoacylation of tRNAs. J. Theor. Biol. 247, 391-394], and the rearrangements of the table by Delarue [Delarue, M., 2007. An asymmetric underlying rule in the assignment of codons: possible clue to a quick early evolution of the genetic code via successive binary choices. RNA 13, 161-169] and Rodin and Rodin [Rodin, S.N., Rodin, A.S., 2008. On the origin of the genetic code: signatures of its primordial complementarity in tRNAs and aminoacyl-tRNA synthetases. Heredity 100, 341-355], respectively.
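
The Klein-4 structure invoked above is easy to check explicitly: the identity, the transition, the Watson-Crick complement, and the remaining transversion each square to the identity and compose into one another. A short Python verification (the map names are ours for illustration, not the paper's notation):

```python
# The four base transformations on {A, U, G, C}, written as dictionaries.
ID         = {'A': 'A', 'U': 'U', 'G': 'G', 'C': 'C'}   # identity
TRANSITION = {'A': 'G', 'G': 'A', 'C': 'U', 'U': 'C'}   # purine<->purine, pyrimidine<->pyrimidine
COMPLEMENT = {'A': 'U', 'U': 'A', 'G': 'C', 'C': 'G'}   # Watson-Crick transversion
OTHER_TV   = {'A': 'C', 'C': 'A', 'G': 'U', 'U': 'G'}   # remaining transversion

GROUP = [ID, TRANSITION, COMPLEMENT, OTHER_TV]

def compose(f, g):
    """Apply g first, then f."""
    return {b: f[g[b]] for b in 'AUGC'}
```

Every element is its own inverse, and the product of any two non-identity elements is the third; that closure pattern is exactly the defining multiplication table of the Klein four-group.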

  18. Transcoding method from H.264/AVC to high efficiency video coding based on similarity of intraprediction, interprediction, and motion vector

    NASA Astrophysics Data System (ADS)

    Liu, Mei-Feng; Zhong, Guo-Yun; He, Xiao-Hai; Qing, Lin-Bo

    2016-09-01

    Currently, most video resources on line are encoded in the H.264/AVC format. More fluent video transmission can be obtained if these resources are encoded in the newest international video coding standard: high efficiency video coding (HEVC). In order to improve the video transmission and storage on line, a transcoding method from H.264/AVC to HEVC is proposed. In this transcoding algorithm, the coding information of intraprediction, interprediction, and motion vector (MV) in H.264/AVC video stream are used to accelerate the coding in HEVC. It is found through experiments that the region of interprediction in HEVC overlaps that in H.264/AVC. Therefore, the intraprediction for the region in HEVC, which is interpredicted in H.264/AVC, can be skipped to reduce coding complexity. Several macroblocks in H.264/AVC are combined into one PU in HEVC when the MV difference between two of the macroblocks in H.264/AVC is lower than a threshold. This method selects only one coding unit depth and one prediction unit (PU) mode to reduce the coding complexity. An MV interpolation method of combined PU in HEVC is proposed according to the areas and distances between the center of one macroblock in H.264/AVC and that of the PU in HEVC. The predicted MV accelerates the motion estimation for HEVC coding. The simulation results show that our proposed algorithm achieves significant coding time reduction with a little loss in bitrates distortion rate, compared to the existing transcoding algorithms and normal HEVC coding.

  19. NESSY: NLTE spectral synthesis code for solar and stellar atmospheres

    NASA Astrophysics Data System (ADS)

    Tagirov, R. V.; Shapiro, A. I.; Schmutz, W.

    2017-07-01

    Context. Physics-based models of solar and stellar magnetically-driven variability are based on the calculation of synthetic spectra for various surface magnetic features as well as quiet regions, which are a function of their position on the solar or stellar disc. Such calculations are performed with radiative transfer codes tailored for modeling broad spectral intervals. Aims: We aim to present the NLTE Spectral SYnthesis code (NESSY), which can be used for modeling of the entire (UV-visible-IR and radio) spectra of solar and stellar magnetic features and quiet regions. Methods: NESSY is a further development of the COde for Solar Irradiance (COSI), in which we have implemented an accelerated Λ-iteration (ALI) scheme for co-moving frame (CMF) line radiation transfer based on a new estimate of the local approximate Λ-operator. Results: We show that the new version of the code performs substantially faster than the previous one and yields a reliable calculation of the entire solar spectrum. This calculation is in a good agreement with the available observations.
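
The ALI idea can be illustrated on a toy discretized problem S = (1 − ε)ΛS + εB: instead of inverting or slowly iterating the full Λ, one inverts only a local (here diagonal) approximate operator Λ*, keeping each update cheap while restoring fast convergence. A hedged NumPy sketch (the matrix, ε, and iteration count are illustrative; NESSY's co-moving-frame operator is far more elaborate):

```python
import numpy as np

def ali_solve(Lam, B, eps, n_iter=200):
    """Accelerated Lambda iteration for the toy problem
    S = (1 - eps) * Lam @ S + eps * B, using the diagonal of Lam as the
    local approximate operator Lam*."""
    n = len(B)
    # Precondition with (I - (1 - eps) * Lam*)^-1, cheap because Lam* is diagonal.
    M = np.linalg.inv(np.eye(n) - (1 - eps) * np.diag(np.diag(Lam)))
    S = np.array(B, dtype=float)            # initial guess
    for _ in range(n_iter):
        residual = (1 - eps) * Lam @ S + eps * B - S
        S = S + M @ residual                # preconditioned correction
    return S
```

With ε ≪ 1 the plain fixed-point iteration S ← (1 − ε)ΛS + εB stalls; the local preconditioner removes the stiff diagonal part, which is the role played by the local approximate Λ-operator estimate in the code.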

  20. New methods in WARP, a particle-in-cell code for space-charge dominated beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grote, D., LLNL

    1998-01-12

    The current U.S. approach for a driver for inertial confinement fusion power production is a heavy-ion induction accelerator; high-current beams of heavy ions are focused onto the fusion target. The space-charge of the high-current beams affects the behavior more strongly than does the temperature (the beams are described as being "space-charge dominated") and the beams behave like non-neutral plasmas. The particle simulation code WARP has been developed and used to study the transport and acceleration of space-charge dominated ion beams in a wide range of applications, from basic beam physics studies, to ongoing experiments, to fusion driver concepts. WARP combines aspects of a particle simulation code and an accelerator code; it uses multi-dimensional, electrostatic particle-in-cell (PIC) techniques and has a rich mechanism for specifying the lattice of externally applied fields. There are both two- and three-dimensional versions, the former including axisymmetric (r-z) and transverse slice (x-y) models. WARP includes a number of novel techniques and capabilities that both enhance its performance and make it applicable to a wide range of problems. Some of these have been described elsewhere. Several recent developments will be discussed in this paper. A transverse slice model has been implemented with the novel capability of including bends, allowing more rapid simulation while retaining essential physics. An interface using Python as the interpreter layer instead of Basis has been developed. A parallel version of WARP has been developed using Python.
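
The electrostatic PIC cycle such codes are built on (deposit charge, solve the field, gather it back at the particles, push) can be shown in a few lines in 1D. This is a generic textbook sketch with a uniform neutralizing background, not WARP's multi-dimensional implementation:

```python
import numpy as np

def pic_step(x, v, q, m, dt, L, ng):
    """One cycle of a 1D electrostatic, periodic particle-in-cell step:
    deposit charge -> FFT Poisson solve -> gather field -> leapfrog push."""
    dx = L / ng
    # 1. Cloud-in-cell (linear) charge deposition onto the grid.
    g = x / dx
    i = np.floor(g).astype(int) % ng
    f = g - np.floor(g)
    rho = np.zeros(ng)
    np.add.at(rho, i, q * (1.0 - f) / dx)
    np.add.at(rho, (i + 1) % ng, q * f / dx)
    # 2. Poisson solve in Fourier space (eps0 = 1): -k^2 phi_k = -rho_k.
    #    Dropping the k = 0 mode acts as a uniform neutralizing background.
    k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2
    E_grid = np.fft.ifft(-1j * k * phi_k).real   # E = -d(phi)/dx
    # 3. Gather the field at particle positions with the same CIC weights.
    E_p = E_grid[i] * (1.0 - f) + E_grid[(i + 1) % ng] * f
    # 4. Leapfrog push with periodic wrap-around.
    v = v + (q / m) * E_p * dt
    x = (x + v * dt) % L
    return x, v
```

A uniformly loaded cold plasma is a discrete equilibrium of this cycle: the deposited density is flat, the solved field vanishes, and the particles stay put, a standard sanity check for the deposit/solve/gather chain.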

  1. GPU-accelerated Tersoff potentials for massively parallel Molecular Dynamics simulations

    NASA Astrophysics Data System (ADS)

    Nguyen, Trung Dac

    2017-03-01

    The Tersoff potential is one of the empirical many-body potentials that has been widely used in simulation studies at atomic scales. Unlike pair-wise potentials, the Tersoff potential involves three-body terms, which require much more arithmetic operations and data dependency. In this contribution, we have implemented the GPU-accelerated version of several variants of the Tersoff potential for LAMMPS, an open-source massively parallel Molecular Dynamics code. Compared to the existing MPI implementation in LAMMPS, the GPU implementation exhibits a better scalability and offers a speedup of 2.2X when run on 1000 compute nodes on the Titan supercomputer. On a single node, the speedup ranges from 2.0 to 8.0 times, depending on the number of atoms per GPU and hardware configurations. The most notable features of our GPU-accelerated version include its design for MPI/accelerator heterogeneous parallelism, its compatibility with other functionalities in LAMMPS, its ability to give deterministic results and to support both NVIDIA CUDA- and OpenCL-enabled accelerators. Our implementation is now part of the GPU package in LAMMPS and accessible for public use.

  2. Facial expression coding in children and adolescents with autism: Reduced adaptability but intact norm-based coding.

    PubMed

    Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise

    2018-05-01

    Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder. © 2017 The British Psychological Society.

  3. GPU-accelerated molecular modeling coming of age.

    PubMed

    Stone, John E; Hardy, David J; Ufimtsev, Ivan S; Schulten, Klaus

    2010-09-01

    Graphics processing units (GPUs) have traditionally been used in molecular modeling solely for visualization of molecular structures and animation of trajectories resulting from molecular dynamics simulations. Modern GPUs have evolved into fully programmable, massively parallel co-processors that can now be exploited to accelerate many scientific computations, typically providing about one order of magnitude speedup over CPU code and in special cases providing speedups of two orders of magnitude. This paper surveys the development of molecular modeling algorithms that leverage GPU computing, the advances already made and remaining issues to be resolved, and the continuing evolution of GPU technology that promises to become even more useful to molecular modeling. Hardware acceleration with commodity GPUs is expected to benefit the overall computational biology community by bringing teraflops performance to desktop workstations and in some cases potentially changing what were formerly batch-mode computational jobs into interactive tasks. (c) 2010 Elsevier Inc. All rights reserved.

  4. Secondary electron emission from plasma processed accelerating cavity grade niobium

    NASA Astrophysics Data System (ADS)

    Basovic, Milos

    Advances in the particle accelerator technology have enabled numerous fundamental discoveries in 20th century physics. Extensive interdisciplinary research has always supported further development of accelerator technology in efforts of reaching each new energy frontier. Accelerating cavities, which are used to transfer energy to accelerated charged particles, have been one of the main focuses of research and development in the particle accelerator field. Over the last fifty years, in the race to break energy barriers, there has been constant improvement of the maximum stable accelerating field achieved in accelerating cavities. Every increase in the maximum attainable accelerating fields allowed for higher energy upgrades of existing accelerators and more compact designs of new accelerators. Each new and improved technology was faced with ever emerging limiting factors. With the standard high accelerating gradients of more than 25 MV/m, free electrons inside the cavities get accelerated by the field, gaining enough energy to produce more electrons in their interactions with the walls of the cavity. The electron production is exponential and the electron energy transfer to the walls of a cavity can trigger detrimental processes, limiting the performance of the cavity. The root cause of the free electron number gain is a phenomenon called Secondary Electron Emission (SEE). Even though the phenomenon has been known and studied over a century, there are still no effective means of controlling it. The ratio between the electrons emitted from the surface and the impacting electrons is defined as the Secondary Electron Yield (SEY). A SEY ratio larger than 1 designates an increase in the total number of electrons. In the design of accelerator cavities, the goal is to reduce the SEY to be as low as possible using any form of surface manipulation. 
In this dissertation, an experimental setup was developed and used to study the SEY of various sample surfaces that were treated

  5. Secondary Electron Emission from Plasma Processed Accelerating Cavity Grade Niobium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basovic, Milos

    Advances in the particle accelerator technology have enabled numerous fundamental discoveries in 20th century physics. Extensive interdisciplinary research has always supported further development of accelerator technology in efforts of reaching each new energy frontier. Accelerating cavities, which are used to transfer energy to accelerated charged particles, have been one of the main focuses of research and development in the particle accelerator field. Over the last fifty years, in the race to break energy barriers, there has been constant improvement of the maximum stable accelerating field achieved in accelerating cavities. Every increase in the maximum attainable accelerating fields allowed for higher energy upgrades of existing accelerators and more compact designs of new accelerators. Each new and improved technology was faced with ever emerging limiting factors. With the standard high accelerating gradients of more than 25 MV/m, free electrons inside the cavities get accelerated by the field, gaining enough energy to produce more electrons in their interactions with the walls of the cavity. The electron production is exponential and the electron energy transfer to the walls of a cavity can trigger detrimental processes, limiting the performance of the cavity. The root cause of the free electron number gain is a phenomenon called Secondary Electron Emission (SEE). Even though the phenomenon has been known and studied over a century, there are still no effective means of controlling it. The ratio between the electrons emitted from the surface and the impacting electrons is defined as the Secondary Electron Yield (SEY). A SEY ratio larger than 1 designates an increase in the total number of electrons. In the design of accelerator cavities, the goal is to reduce the SEY to be as low as possible using any form of surface manipulation. 
In this dissertation, an experimental setup was developed and used to study the SEY of various sample surfaces that were

  6. Laser beam coupling with capillary discharge plasma for laser wakefield acceleration applications

    NASA Astrophysics Data System (ADS)

    Bagdasarov, G. A.; Sasorov, P. V.; Gasilov, V. A.; Boldarev, A. S.; Olkhovskaya, O. G.; Benedetti, C.; Bulanov, S. S.; Gonsalves, A.; Mao, H.-S.; Schroeder, C. B.; van Tilborg, J.; Esarey, E.; Leemans, W. P.; Levato, T.; Margarone, D.; Korn, G.

    2017-08-01

    One of the most robust methods, demonstrated to date, of accelerating electron beams by laser-plasma sources is the utilization of plasma channels generated by capillary discharges. Although the spatial structure of the installation is simple in principle, there may be important effects caused by the open ends of the capillary, by the supply channels, etc., which require detailed 3D modeling of the processes. In the present work, such simulations are performed using the code MARPLE. First, the filling of the capillary with cold hydrogen through the side supply channels, before the discharge is fired, is simulated. Second, the simulation of the capillary discharge is performed with the goal of obtaining a time-dependent spatial distribution of the electron density near the open ends of the capillary as well as inside the capillary. Finally, to evaluate the effectiveness of the beam coupling with the channeling plasma waveguide and of the electron acceleration, modeling of the laser-plasma interaction was performed with the code INF&RNO.

  7. Direct measurement of the image displacement instability in a linear induction accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burris-Mog, T. J.; Ekdahl, C. A.; Moir, D. C.

    The image displacement instability (IDI) has been measured on the 20 MeV Axis I of the dual axis radiographic hydrodynamic test facility and compared to theory. A 0.23 kA electron beam was accelerated across 64 gaps in a low solenoid focusing field, and the position of the beam centroid was measured to 34.3 meters downstream from the cathode. One beam dynamics code was used to model the IDI from first principles, while another code characterized the effects of the resistive wall instability and the beam break-up (BBU) instability. Although the BBU instability was not found to influence the IDI, it appears that the IDI influences the BBU. Because the BBU theory does not fully account for the dependence on beam position for coupling to cavity transverse magnetic modes, the effect of the IDI is missing from the BBU theory. This is of particular concern to users of linear induction accelerators operating in or near low magnetic guide-field tunes.

  8. Direct measurement of the image displacement instability in a linear induction accelerator

    DOE PAGES

    Burris-Mog, T. J.; Ekdahl, C. A.; Moir, D. C.

    2017-06-19

    The image displacement instability (IDI) has been measured on the 20 MeV Axis I of the dual axis radiographic hydrodynamic test facility and compared to theory. A 0.23 kA electron beam was accelerated across 64 gaps in a low solenoid focusing field, and the position of the beam centroid was measured to 34.3 meters downstream from the cathode. One beam dynamics code was used to model the IDI from first principles, while another code characterized the effects of the resistive wall instability and the beam break-up (BBU) instability. Although the BBU instability was not found to influence the IDI, it appears that the IDI influences the BBU. Because the BBU theory does not fully account for the dependence on beam position for coupling to cavity transverse magnetic modes, the effect of the IDI is missing from the BBU theory. This is of particular concern to users of linear induction accelerators operating in or near low magnetic guide-field tunes.

  9. Interframe vector wavelet coding technique

    NASA Astrophysics Data System (ADS)

    Wus, John P.; Li, Weiping

    1997-01-01

    Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC coding method in conjunction with the FSVQ system and lattice VQ, the formulation of a high-quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ system, where the current state is determined by the previous channel symbol only, is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings are done in this tree-like structure from the lower subbands to the higher subbands in order to exploit the nature of subband analysis in terms of the parent-child relationship. Class A and Class B video sequences from the MPEG-4 testing evaluations are used in the evaluation of this coding method.
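
The simple FSVQ variant described above, in which the next state is determined by the previous channel symbol only, amounts to switching codebooks according to the last emitted index. A minimal Python sketch (the codebooks and input vectors are illustrative; the paper's system additionally uses lattice codebooks and subband trees):

```python
def nearest(codebook, vec):
    """Index of the codeword closest to vec (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], vec)))

def fsvq_encode(vectors, codebooks, start_state=0):
    """Finite-state VQ: the codebook used for each input vector is selected
    by the previous channel symbol (the simple next-state rule above)."""
    state, symbols = start_state, []
    for v in vectors:
        s = nearest(codebooks[state], v)
        symbols.append(s)
        state = s              # next state = previous channel symbol
    return symbols

def fsvq_decode(symbols, codebooks, start_state=0):
    """Decoder mirrors the encoder's state trajectory, so no side info is sent."""
    state, out = start_state, []
    for s in symbols:
        out.append(codebooks[state][s])
        state = s
    return out
```

Because the decoder can reproduce the state sequence from the symbols alone, the state machine adds memory (and hence compression) without any extra bits on the channel.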

  10. Particle Acceleration and Radiation associated with Magnetic Field Generation from Relativistic Collisionless Shocks

    NASA Technical Reports Server (NTRS)

    Nishikawa, K.; Hardee, P. E.; Richardson, G. A.; Preece, R. D.; Sol, H.; Fishman, G. J.

    2003-01-01

    Shock acceleration is a ubiquitous phenomenon in astrophysical plasmas. Plasma waves and their associated instabilities (e.g., the Buneman instability, two-stream instability, and the Weibel instability) created in the shocks are responsible for particle (electron, positron, and ion) acceleration. Using a 3-D relativistic electromagnetic particle (REMP) code, we have investigated particle acceleration associated with a relativistic jet front propagating through an ambient plasma with and without initial magnetic fields. We find only small differences in the results between no ambient and weak ambient magnetic fields. Simulations show that the Weibel instability created in the collisionless shock front accelerates particles perpendicular and parallel to the jet propagation direction. While some Fermi acceleration may occur at the jet front, the majority of electron acceleration takes place behind the jet front and cannot be characterized as Fermi acceleration. The simulation results show that this instability is responsible for generating and amplifying highly nonuniform, small-scale magnetic fields, which contribute to the electron's transverse deflection behind the jet head. The "jitter" radiation from deflected electrons has different properties from synchrotron radiation, which is calculated in a uniform magnetic field. This jitter radiation may be important to understanding the complex time evolution and/or spectral structure in gamma-ray bursts, relativistic jets, and supernova remnants.

  11. Single event effects in high-energy accelerators

    NASA Astrophysics Data System (ADS)

    García Alía, Rubén; Brugger, Markus; Danzeca, Salvatore; Cerutti, Francesco; de Carvalho Saraiva, Joao Pedro; Denz, Reiner; Ferrari, Alfredo; Foro, Lionel L.; Peronnard, Paul; Røed, Ketil; Secondo, Raffaello; Steckert, Jens; Thurel, Yves; Toccafondo, Iacopo; Uznanski, Slawosz

    2017-03-01

    The radiation environment encountered at high-energy hadron accelerators strongly differs from the environment relevant for space applications. The mixed field expected at modern accelerators is composed of charged and neutral hadrons (protons, pions, kaons and neutrons), photons, electrons, positrons and muons, ranging from very low (thermal) energies up to the TeV range. This complex field, which is extensively simulated by Monte Carlo codes (e.g. FLUKA), is due to beam losses in the experimental areas, distributed along the machine (e.g. collimation points) and deriving from the interaction with the residual gas inside the beam pipe. The resulting intensity, energy distribution and proportion of the different particles largely depend on the distance and angle with respect to the interaction point as well as the amount of installed shielding material. Electronics operating in the vicinity of the accelerator will therefore be subject to both cumulative damage from radiation (total ionizing dose, displacement damage) as well as single event effects which can seriously compromise the operation of the machine. This, combined with the extensive use of commercial-off-the-shelf components due to budget, performance and availability reasons, results in the need to carefully characterize the response of the devices and systems to representative radiation conditions.

  12. Numerical investigation on the effects of acceleration reversal times in Rayleigh-Taylor Instability with multiple reversals

    NASA Astrophysics Data System (ADS)

    Farley, Zachary; Aslangil, Denis; Banerjee, Arindam; Lawrie, Andrew G. W.

    2017-11-01

    An implicit large eddy simulation (ILES) code, MOBILE, is used to explore the growth rate of the mixing layer width of the acceleration-driven Rayleigh-Taylor instability (RTI) under variable acceleration histories. The sets of computations performed consist of a series of accel-decel-accel (ADA) cases in addition to baseline constant acceleration and accel-decel (AD) cases. The ADA cases span a series of varied times for the second acceleration reversal (t2) and show drastic differences in the growth rates. During the deceleration phase, the kinetic energy of the flow is shifted into internal wavelike patterns. These waves are evidenced by the differences in growth rate observed in the second acceleration phase across the set of ADA cases. Here, we investigate global parameters that include mixing width, growth rates and the anisotropy tensor for the kinetic energy to better understand the behavior of the growth during the re-acceleration period. Authors acknowledge financial support from DOE-SSAA (DE-NA0003195) and NSF CAREER (#1453056) awards.
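
    The anisotropy tensor mentioned above is conventionally defined as b_ij = <u_i u_j>/<u_k u_k> - delta_ij/3, which is trace-free and isolates how kinetic energy departs from equipartition among directions. A minimal sketch on synthetic velocity fluctuations (the anisotropic field below is illustrative, not MOBILE output):

```python
import numpy as np

def anisotropy_tensor(u):
    """Kinetic-energy anisotropy b_ij = <u_i u_j>/<u_k u_k> - delta_ij/3.

    u: array of shape (N, 3), velocity fluctuations at N sample points.
    """
    R = u.T @ u / len(u)                   # Reynolds-stress-like tensor <u_i u_j>
    return R / np.trace(R) - np.eye(3) / 3.0

rng = np.random.default_rng(1)
# Synthetic fluctuations with a stronger vertical component, as in RT growth
u = rng.normal(scale=[1.0, 1.0, 2.0], size=(10000, 3))
b = anisotropy_tensor(u)
# b is trace-free by construction; b[2, 2] > 0 flags excess vertical energy
```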

  13. Characteristics of Four SPE Classes According to Onset Timing and Proton Acceleration Patterns

    NASA Astrophysics Data System (ADS)

    Kim, Roksoon

    2015-04-01

    In our previous work (Kim et al., 2015), we suggested a new classification scheme, which categorizes SPEs into four groups based on association with a flare or CME inferred from onset timings as well as proton acceleration patterns using multienergy observations. In this study, we have tried to find whether there are any typical characteristics of associated events and acceleration sites in each group using 42 SPEs from 1997 to 2012. We find: (i) if the proton acceleration starts from a lower energy, an SPE has a higher chance to be a strong event (> 5000 pfu) even if the associated flare and CME are not so strong. The only difference between the SPEs associated with flares and those associated with CMEs is the location of the acceleration site. For the former, the sites are very low (~1 Rs) and close to the western limb, while the latter has relatively higher (mean = 6.05 Rs) and wider acceleration sites. (ii) When the proton acceleration starts from the higher energy, an SPE tends to be a relatively weak event (< 1000 pfu), even though its associated CME is relatively stronger than in the previous group. (iii) The SPEs characterized by simultaneous proton acceleration over the whole energy range within 10 minutes tend to show the weakest proton flux (mean = 327 pfu) in spite of strong related eruptions. Their acceleration heights are very close to the locations of type II radio bursts. Based on those results, we suggest that the different characteristics of the four groups are mainly due to the different mechanisms governing the acceleration pattern and interval, and different conditions such as the acceleration location.

  14. The accelerated residency program: the Marshall University family practice 9-year experience.

    PubMed

    Petrany, Stephen M; Crespo, Richard

    2002-10-01

    In 1989, the American Board of Family Practice (ABFP) approved the first of 12 accelerated residency programs in family practice. These experimental programs provide a 1-year experience for select medical students that combines the requirements of the fourth year of medical school with those of the first year of residency, reducing the total training time by 1 year. This paper reports on the achievements and limitations of the Marshall University accelerated residency program over a 9-year period that began in 1992. Several parameters have been monitored since the inception of the accelerated program and provide the basis for comparison of accelerated and traditional residents. These include initial resident characteristics, performance outcomes, and practice choices. A total of 16 students were accepted into the accelerated track from 1992 through 1998. During the same time period, 44 residents entered the traditional residency program. Accelerated residents tended to be older and had more career experience than their traditional counterparts. As a group, the accelerated residents scored an average of 30 points higher on the final in-training exams provided by the ABFP. All residents in both groups remained at Marshall to complete the full residency training experience, and all those who have taken the ABFP certifying exam have passed. Accelerated residents were more likely to practice in West Virginia, consistent with one of the initial goals for the program. In addition, accelerated residents were more likely to be elected chief resident and to choose an academic career than those in the traditional group. Both groups opted for small town or rural practice equally. The Marshall University family practice 9-year experience with the accelerated residency track demonstrates that for carefully selected candidates, the program can provide an overall shortened path to board certification and attract students who excel academically and have high leadership potential.

  15. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications.
    Catalogue identifier: AEOI_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: UTK license.
    No. of lines in distributed program, including test data, etc.: 167900
    No. of bytes in distributed program, including test data, etc.: 1422058
    Distribution format: tar.gz
    Programming language: C and CUDA.
Computer: Any PC or
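
    The key property described above — independent, reproducible pseudorandom streams, one per process or GPU thread block — can be sketched with NumPy's seed-spawning mechanism. This is not the GASPRNG API; it is a minimal illustration of the same usage model.

```python
import numpy as np

# One root seed reproducibly derives independent child streams, e.g. one per
# MPI rank or GPU thread block, mirroring the SPRNG/GASPRNG usage model.
root = np.random.SeedSequence(12345)
streams = [np.random.default_rng(s) for s in root.spawn(4)]
draws = [stream.random(5) for stream in streams]   # first 5 numbers per stream

# Re-spawning from the same root seed reproduces every stream exactly,
# which is what lets CPU and GPU runs be checked against each other.
streams2 = [np.random.default_rng(s) for s in np.random.SeedSequence(12345).spawn(4)]
draws2 = [stream.random(5) for stream in streams2]
```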

  16. Accelerator test of the coded aperture mask technique for gamma-ray astronomy

    NASA Technical Reports Server (NTRS)

    Jenkins, T. L.; Frye, G. M., Jr.; Owens, A.; Carter, J. N.; Ramsden, D.

    1982-01-01

    A prototype gamma-ray telescope employing the coded aperture mask technique has been constructed and its response to a point source of 20 MeV gamma-rays has been measured. The point spread function is approximately a Gaussian with a standard deviation of 12 arc minutes. This resolution is consistent with the cell size of the mask used and the spatial resolution of the detector. In the context of the present experiment, the error radius of the source position (90 percent confidence level) is 6.1 arc minutes.
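
    The decoding step behind a coded aperture telescope can be sketched in one dimension: a point source casts a shifted copy of the mask pattern onto the detector, and cross-correlating the shadowgram with a balanced decoding array recovers the shift, hence the source direction. The random mask and noise-free shadowgram below are illustrative (real instruments typically use uniformly redundant arrays and must handle counting noise).

```python
import numpy as np

rng = np.random.default_rng(2)
N = 255
mask = rng.integers(0, 2, size=N)      # 1 = open cell, 0 = opaque (illustrative)
decoder = 2 * mask - 1                 # balanced decoding array (+1 / -1)

true_shift = 37                        # source off-axis angle -> shadow shift
shadow = np.roll(mask, true_shift)     # detector counts, noise-free sketch

# Cross-correlate the shadowgram with the decoding array over all trial shifts;
# the peak sits at the true shift, i.e. the reconstructed source position.
corr = np.array([np.dot(shadow, np.roll(decoder, s)) for s in range(N)])
estimated_shift = int(corr.argmax())
```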

  17. Monte Carlo method for calculating the radiation skyshine produced by electron accelerators

    NASA Astrophysics Data System (ADS)

    Kong, Chaocheng; Li, Quanfeng; Chen, Huaibi; Du, Taibin; Cheng, Cheng; Tang, Chuanxiang; Zhu, Li; Zhang, Hui; Pei, Zhigang; Ming, Shenjin

    2005-06-01

    Using the MCNP4C Monte Carlo code, the X-ray skyshine produced by 9 MeV, 15 MeV and 21 MeV electron linear accelerators was calculated with a new two-step method combined with the split and roulette variance reduction technique. Results of the Monte Carlo simulation, the empirical formulas used for skyshine calculation and the dose measurements were analyzed and compared. In conclusion, the skyshine dose measurements agreed reasonably with the results computed by the Monte Carlo method, but deviated from computational results given by empirical formulas. The effect on skyshine dose caused by different structures of the accelerator head is also discussed in this paper.
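
    The "roulette" half of the split-and-roulette technique can be shown with a toy 1-D transport problem (this is a conceptual sketch, not MCNP; the slab model and all numbers are illustrative). Histories carry statistical weights instead of dying at each collision, and low-weight histories play Russian roulette, which terminates most of them without biasing the mean. For this forward-scattering toy model the exact transmission is exp(-tau*(1 - scatter_prob)).

```python
import math
import random

def transmission(tau, scatter_prob=0.5, n_hist=200_000, w_cut=0.1, seed=7):
    """Toy 1-D slab transmission with implicit capture + Russian roulette."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_hist):
        x, w = 0.0, 1.0
        while True:
            x += -math.log(1.0 - rng.random())   # free path, mean-free-path units
            if x >= tau:
                total += w                       # escaped: tally current weight
                break
            w *= scatter_prob                    # implicit capture at collision
            if w < w_cut:                        # Russian roulette on low weights
                if rng.random() < 0.5:
                    w *= 2.0                     # survivor carries doubled weight
                else:
                    break                        # killed; unbiased on average
    return total / n_hist

est = transmission(2.0)
```

Splitting is the mirror image: in important regions a history of weight w is replaced by n copies of weight w/n, spending effort where it matters.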

  18. Particle acceleration magnetic field generation, and emission in Relativistic pair jets

    NASA Technical Reports Server (NTRS)

    Nishikawa, K.-I.; Ramirez-Ruiz, E.; Hardee, P.; Hededal, C.; Kouveliotou, C.; Fishman, G. J.

    2005-01-01

    Plasma waves and their associated instabilities (e.g., the Buneman instability, two-stream instability, and the Weibel instability) are responsible for particle acceleration in relativistic pair jets. Using a 3-D relativistic electromagnetic particle (REMP) code, we have investigated particle acceleration associated with a relativistic pair jet propagating through a pair plasma. Simulations show that the Weibel instability created in the collisionless shock accelerates particles perpendicular and parallel to the jet propagation direction. Simulation results show that this instability generates and amplifies highly nonuniform, small-scale magnetic fields, which contribute to the electron's transverse deflection behind the jet head. The "jitter" radiation from deflected electrons can have different properties from synchrotron radiation, which is calculated in a uniform magnetic field. This jitter radiation may be important to understanding the complex time evolution and/or spectral structure in gamma-ray bursts, relativistic jets, and supernova remnants. The growth rate of the Weibel instability and the resulting particle acceleration depend on the magnetic field strength and orientation, and on the initial particle distribution function. In this presentation we explore some of the dependencies of the Weibel instability and resulting particle acceleration on the magnetic field strength and orientation, and the particle distribution function.

  19. Tutorial on Reed-Solomon error correction coding

    NASA Technical Reports Server (NTRS)

    Geisel, William A.

    1990-01-01

    This tutorial attempts to provide a frank, step-by-step approach to Reed-Solomon (RS) error correction coding. RS encoding and RS decoding both with and without erasing code symbols are emphasized. There is no need to present rigorous proofs and extreme mathematical detail. Rather, the simple concepts of groups and fields, specifically Galois fields, are presented with a minimum of complexity. Before RS codes are presented, other block codes are presented as a technical introduction into coding. A primitive (15, 9) RS coding example is then completely developed from start to finish, demonstrating the encoding and decoding calculations and a derivation of the famous error-locator polynomial. The objective is to present practical information about Reed-Solomon coding in a manner such that it can be easily understood.
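
    The (15, 9) RS example from the tutorial can be sketched end-to-end for the encoding side: build GF(16) antilog/log tables from a primitive polynomial, form the degree-6 generator polynomial g(x) with roots at alpha^1..alpha^6, and append the parity symbols as the remainder of the shifted message divided by g(x). The primitive polynomial x^4 + x + 1 and the sample message are conventional choices for illustration.

```python
# GF(2^4) tables from primitive polynomial x^4 + x + 1 (0b10011)
EXP, LOG = [0] * 15, [0] * 16
a = 1
for i in range(15):
    EXP[i], LOG[a] = a, i
    a <<= 1
    if a & 0b10000:
        a ^= 0b10011

def gmul(x, y):
    """Multiply in GF(16) via log/antilog tables; addition is XOR."""
    if x == 0 or y == 0:
        return 0
    return EXP[(LOG[x] + LOG[y]) % 15]

def poly_mul(p, q):
    """Polynomial product over GF(16), coefficients highest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pc in enumerate(p):
        for j, qc in enumerate(q):
            out[i + j] ^= gmul(pc, qc)
    return out

# Generator polynomial g(x) = (x - a^1)(x - a^2)...(x - a^6) for t = 3
g = [1]
for i in range(1, 7):
    g = poly_mul(g, [1, EXP[i]])

def rs_encode(msg):
    """Systematic (15, 9) encode: parity = remainder of msg(x)*x^6 mod g(x)."""
    work = list(msg) + [0] * 6
    for i in range(len(msg)):
        c = work[i]
        if c:
            for j, gc in enumerate(g):
                work[i + j] ^= gmul(c, gc)
    return list(msg) + work[len(msg):]

def poly_eval(p, x):
    """Horner evaluation over GF(16)."""
    r = 0
    for c in p:
        r = gmul(r, x) ^ c
    return r

codeword = rs_encode([1, 2, 3, 4, 5, 6, 7, 8, 9])
```

A valid codeword is a multiple of g(x), so it evaluates to zero at all six roots alpha^1..alpha^6; nonzero values there are exactly the syndromes the decoder uses to locate errors.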

  20. Dynamics of electron injection and acceleration driven by laser wakefield in tailored density profiles

    DOE PAGES

    Lee, Patrick; Maynard, G.; Audet, T. L.; ...

    2016-11-16

    The dynamics of electron acceleration driven by laser wakefield is studied in detail using the particle-in-cell code WARP with the objective to generate high-quality electron bunches with narrow energy spread and small emittance, relevant for the electron injector of a multistage accelerator. Simulation results, using experimentally achievable parameters, show that electron bunches with an energy spread of ~11% can be obtained by using an ionization-induced injection mechanism in a mm-scale length plasma. By controlling the focusing of a moderate laser power and tailoring the longitudinal plasma density profile, the electron injection beginning and end positions can be adjusted, while the electron energy can be finely tuned in the last acceleration section.

  1. GPU Linear Algebra Libraries and GPGPU Programming for Accelerating MOPAC Semiempirical Quantum Chemistry Calculations.

    PubMed

    Maia, Julio Daniel Carvalho; Urquiza Carvalho, Gabriel Aires; Mangueira, Carlos Peixoto; Santana, Sidney Ramos; Cabral, Lucidio Anjos Formiga; Rocha, Gerd B

    2012-09-11

    In this study, we present some modifications to the semiempirical quantum chemistry MOPAC2009 code that accelerate single-point energy calculations (1SCF) of medium-size (up to 2500 atoms) molecular systems using GPU coprocessors and multithreaded shared-memory CPUs. Our modifications consisted of using a combination of highly optimized linear algebra libraries for both CPU (LAPACK and BLAS from Intel MKL) and GPU (MAGMA and CUBLAS) to hasten time-consuming parts of MOPAC such as the pseudodiagonalization, full diagonalization, and density matrix assembling. We have shown that it is possible to obtain large speedups just by using CPU serial linear algebra libraries in the MOPAC code. As a special case, we show a speedup of up to 14 times for a methanol simulation box containing 2400 atoms and 4800 basis functions, with even greater gains in performance when using multithreaded CPUs (2.1 times in relation to the single-threaded CPU code using linear algebra libraries) and GPUs (3.8 times). This degree of acceleration opens new perspectives for modeling larger structures which appear in inorganic chemistry (such as zeolites and MOFs), biochemistry (such as polysaccharides, small proteins, and DNA fragments), and materials science (such as nanotubes and fullerenes). In addition, we believe that this parallel (GPU-GPU) MOPAC code will make it feasible to use semiempirical methods in lengthy molecular simulations using both hybrid QM/MM and QM/QM potentials.
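
    The structure of the accelerated step can be sketched with NumPy, whose `eigh` dispatches to an optimized LAPACK symmetric eigensolver of the kind MKL and MAGMA provide: diagonalize a Fock-like matrix and assemble a density matrix from the occupied eigenvectors. The matrix here is random and the occupation count arbitrary; this illustrates the linear algebra pattern, not MOPAC itself.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400                                    # stand-in for a small Fock matrix
A = rng.normal(size=(n, n))
F = (A + A.T) / 2.0                        # symmetric, as a Fock matrix is

# np.linalg.eigh calls an optimized LAPACK symmetric eigensolver; in
# MOPAC-like SCF codes this diagonalization dominates each iteration.
eigvals, eigvecs = np.linalg.eigh(F)

# Density-matrix-like assembly from the lowest n_occ eigenvectors
n_occ = 50
C = eigvecs[:, :n_occ]
P = 2.0 * C @ C.T                          # closed-shell density matrix pattern
```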

  2. Post-acceleration of laser driven protons with a compact high field linac

    NASA Astrophysics Data System (ADS)

    Sinigardi, Stefano; Londrillo, Pasquale; Rossi, Francesco; Turchetti, Giorgio; Bolton, Paul R.

    2013-05-01

    We present a start-to-end 3D numerical simulation of a hybrid scheme for the acceleration of protons. The scheme is based on a first stage laser acceleration, followed by a transport line with a solenoid or a multiplet of quadrupoles, and then a post-acceleration section in a compact linac. Our simulations show that from a laser accelerated proton bunch with energy selection at ~30 MeV, it is possible to obtain a high quality monochromatic beam of 60 MeV with intensity at the threshold of interest for medical use. In the present day experiments using solid targets, the TNSA mechanism describes accelerated bunches with an exponential energy spectrum up to a cut-off value typically below ~60 MeV and wide angular distribution. At the cut-off energy, the number of protons to be collimated and post-accelerated in a hybrid scheme is still too low. We investigate laser-plasma acceleration to improve the quality and number of the injected protons at ~30 MeV in order to assure efficient post-acceleration in the hybrid scheme. The results are obtained with 3D PIC simulations using a code where optical acceleration with over-dense targets, transport and post-acceleration in a linac can all be investigated in an integrated framework. The high intensity experiments at Nara are taken as reference benchmarks for our virtual laboratory. If experimentally confirmed, a hybrid scheme could be the core of a medium sized infrastructure for medical research, capable of producing protons for therapy and x-rays for diagnosis, which complements the development of all optical systems.
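
    The exponential-with-cutoff TNSA spectrum described above is easy to sample by inverse transform, which makes it simple to estimate how few protons fall inside an energy-selection window near the cutoff. The temperature, cutoff, and window below are illustrative values, not the paper's parameters.

```python
import numpy as np

def sample_tnsa(n, T=10.0, e_cut=60.0, seed=4):
    """Inverse-transform sampling of dN/dE ~ exp(-E/T), truncated at e_cut (MeV)."""
    u = np.random.default_rng(seed).random(n)
    return -T * np.log(1.0 - u * (1.0 - np.exp(-e_cut / T)))

energies = sample_tnsa(100_000)
# Fraction of protons inside a +/-1 MeV selection window around 30 MeV:
# small, because the exponential spectrum falls off well before the cutoff.
window = (energies > 29.0) & (energies < 31.0)
fraction = float(window.mean())
```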

  3. Stepwise Distributed Open Innovation Contests for Software Development: Acceleration of Genome-Wide Association Analysis

    PubMed Central

    Hill, Andrew; Loh, Po-Ru; Bharadwaj, Ragu B.; Pons, Pascal; Shang, Jingbo; Guinan, Eva; Lakhani, Karim; Kilty, Iain

    2017-01-01

    Background: The association of differing genotypes with disease-related phenotypic traits offers great potential to both help identify new therapeutic targets and support stratification of patients who would gain the greatest benefit from specific drug classes. Development of low-cost genotyping and sequencing has made collecting large-scale genotyping data routine in population and therapeutic intervention studies. In addition, a range of new technologies is being used to capture numerous new and complex phenotypic descriptors. As a result, genotype and phenotype datasets have grown exponentially. Genome-wide association studies associate genotypes and phenotypes using methods such as logistic regression. As existing tools for association analysis limit the efficiency by which value can be extracted from increasing volumes of data, there is a pressing need for new software tools that can accelerate association analyses on large genotype-phenotype datasets. Results: Using open innovation (OI) and contest-based crowdsourcing, the logistic regression analysis in a leading, community-standard genetics software package (PLINK 1.07) was substantially accelerated. OI allowed us to do this in <6 months by providing rapid access to highly skilled programmers with specialized, difficult-to-find skill sets. Through a crowd-based contest a combination of computational, numeric, and algorithmic approaches was identified that accelerated the logistic regression in PLINK 1.07 by 18- to 45-fold. Combining contest-derived logistic regression code with coarse-grained parallelization, multithreading, and associated changes to data initialization code further developed through distributed innovation, we achieved an end-to-end speedup of 591-fold for a data set size of 6678 subjects by 645,863 variants, compared to PLINK 1.07's logistic regression. This represents a reduction in run time from 4.8 hours to 29 seconds. Accelerated logistic regression code developed in this
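
    The kernel that was accelerated is a per-variant logistic regression fit, conventionally done by Newton-Raphson (iteratively reweighted least squares). A minimal sketch with one simulated variant against a binary phenotype (the effect size, sample size, and data are synthetic, and this is a generic IRLS fit, not PLINK's implementation):

```python
import numpy as np

def logistic_newton(X, y, n_iter=25):
    """Fit logistic regression by Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted case probabilities
        W = p * (1.0 - p)                     # IRLS weights
        grad = X.T @ (y - p)
        hess = X.T @ (X * W[:, None])         # Fisher information
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(5)
n = 2000
genotype = rng.integers(0, 3, size=n).astype(float)   # 0/1/2 minor-allele counts
X = np.column_stack([np.ones(n), genotype])           # intercept + one variant
true_logit = -1.0 + 0.8 * genotype                    # illustrative effect size
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

beta = logistic_newton(X, y)
p_hat = 1.0 / (1.0 + np.exp(-X @ beta))
grad_max = float(np.abs(X.T @ (y - p_hat)).max())     # ~0 at convergence
```

In a GWAS this fit is repeated once per variant, which is why per-fit speedups of tens of times compound into the large end-to-end gains reported.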

  4. Simulations of an accelerator-based shielding experiment using the particle and heavy-ion transport code system PHITS.

    PubMed

    Sato, T; Sihver, L; Iwase, H; Nakashima, H; Niita, K

    2005-01-01

    In order to estimate the biological effects of HZE particles, an accurate knowledge of the physics of interaction of HZE particles is necessary. Since the heavy ion transport problem is a complex one, there is a need for both experimental and theoretical studies to develop accurate transport models. RIST and JAERI (Japan), GSI (Germany) and Chalmers (Sweden) are therefore currently developing and benchmarking the General-Purpose Particle and Heavy-Ion Transport code System (PHITS), which is based on the NMTC and MCNP for nucleon/meson and neutron transport respectively, and the JAM hadron cascade model. PHITS uses JAERI Quantum Molecular Dynamics (JQMD) and the Generalized Evaporation Model (GEM) for calculations of fission and evaporation processes, a model developed at NASA Langley for calculation of total reaction cross sections, and the SPAR model for stopping power calculations. The future development of PHITS includes better parameterization in the JQMD model used for the nucleus-nucleus reactions, improvement of the models used for calculating total reaction cross sections, addition of routines for calculating elastic scattering of heavy ions, and inclusion of radioactivity and burn up processes. As a part of an extensive benchmarking of PHITS, we have compared energy spectra of secondary neutrons created by reactions of HZE particles with different targets, with thicknesses ranging from <1 to 200 cm. We have also compared simulated and measured spatial, fluence and depth-dose distributions from different high energy heavy ion reactions. In this paper, we report simulations of an accelerator-based shielding experiment, in which a beam of 1 GeV/n Fe-ions has passed through thin slabs of polyethylene, Al, and Pb at an acceptance angle up to 4 degrees. ©2005 Published by Elsevier Ltd on behalf of COSPAR.

  5. The Impact of Satellite Time Group Delay and Inter-Frequency Differential Code Bias Corrections on Multi-GNSS Combined Positioning

    PubMed Central

    Ge, Yulong; Zhou, Feng; Sun, Baoqi; Wang, Shengli; Shi, Bo

    2017-01-01

    We present quad-constellation (namely, GPS, GLONASS, BeiDou and Galileo) time group delay (TGD) and differential code bias (DCB) correction models to fully exploit the code observations of all the four global navigation satellite systems (GNSSs) for navigation and positioning. The relationship between TGDs and DCBs for multi-GNSS is clearly figured out, and the equivalence of TGD and DCB correction models combining theory with practice is demonstrated. Meanwhile, the TGD/DCB correction models have been extended to various standard point positioning (SPP) and precise point positioning (PPP) scenarios in a multi-GNSS and multi-frequency context. To evaluate the effectiveness and practicability of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both single-frequency GNSS ionosphere-corrected SPP and dual-frequency GNSS ionosphere-free SPP/PPP tests are carried out with quad-constellation signals. Furthermore, we investigate the influence of differential code biases on GNSS positioning estimates. The experiments show that multi-constellation combination SPP performs better after DCB/TGD correction, for example, for GPS-only b1-based SPP, the positioning accuracies can be improved by 25.0%, 30.6% and 26.7%, respectively, in the N, E, and U components, after the differential code biases correction, while GPS/GLONASS/BDS b1-based SPP can be improved by 16.1%, 26.1% and 9.9%. For GPS/BDS/Galileo third-frequency-based SPP, the positioning accuracies are improved by 2.0%, 2.0% and 0.4%, respectively, in the N, E, and U components, after Galileo satellites DCB correction. The accuracy of Galileo-only b1-based SPP is improved by about 48.6%, 34.7% and 40.6% with DCB correction, respectively, in the N, E, and U components. The estimates of multi-constellation PPP are subject to different degrees of influence. For multi-constellation combination SPP, the accuracy of single-frequency is slightly better than that of dual
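
    The TGD/DCB relationship can be illustrated numerically under the GPS-style convention in which the P1 bias equals TGD and the P2 bias equals gamma*TGD with gamma = (f1/f2)^2, so that DCB_P1P2 = (1 - gamma)*TGD. Under that convention (the sign conventions here are an assumption for illustration; actual navigation-message definitions should be checked against the interface specification), the dual-frequency ionosphere-free combination cancels the bias exactly, while a single-frequency user must apply the correction explicitly:

```python
# Illustrative numbers throughout; only the frequencies and c are physical.
C = 299_792_458.0                  # speed of light, m/s
F1, F2 = 1575.42e6, 1227.60e6      # GPS L1/L2 carrier frequencies, Hz
GAMMA = (F1 / F2) ** 2

rho = 22_000_000.0                 # geometric range + clock terms, m (illustrative)
tgd = 5e-9                         # satellite time group delay, s (illustrative)

# Code pseudoranges carrying the frequency-dependent satellite hardware bias
p1 = rho + C * tgd
p2 = rho + C * GAMMA * tgd

# DCB implied by this convention: DCB_P1P2 = (1 - gamma) * TGD
dcb = (1.0 - GAMMA) * tgd

# Dual-frequency ionosphere-free combination: the TGD terms cancel exactly
p_if = (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

# Single-frequency user instead corrects P1 explicitly with the broadcast TGD
p1_corr = p1 - C * tgd
```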

  6. The Impact of Satellite Time Group Delay and Inter-Frequency Differential Code Bias Corrections on Multi-GNSS Combined Positioning.

    PubMed

    Ge, Yulong; Zhou, Feng; Sun, Baoqi; Wang, Shengli; Shi, Bo

    2017-03-16

    We present quad-constellation (namely, GPS, GLONASS, BeiDou and Galileo) time group delay (TGD) and differential code bias (DCB) correction models to fully exploit the code observations of all the four global navigation satellite systems (GNSSs) for navigation and positioning. The relationship between TGDs and DCBs for multi-GNSS is clearly figured out, and the equivalence of TGD and DCB correction models combining theory with practice is demonstrated. Meanwhile, the TGD/DCB correction models have been extended to various standard point positioning (SPP) and precise point positioning (PPP) scenarios in a multi-GNSS and multi-frequency context. To evaluate the effectiveness and practicability of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both single-frequency GNSS ionosphere-corrected SPP and dual-frequency GNSS ionosphere-free SPP/PPP tests are carried out with quad-constellation signals. Furthermore, we investigate the influence of differential code biases on GNSS positioning estimates. The experiments show that multi-constellation combination SPP performs better after DCB/TGD correction, for example, for GPS-only b1-based SPP, the positioning accuracies can be improved by 25.0%, 30.6% and 26.7%, respectively, in the N, E, and U components, after the differential code biases correction, while GPS/GLONASS/BDS b1-based SPP can be improved by 16.1%, 26.1% and 9.9%. For GPS/BDS/Galileo third-frequency-based SPP, the positioning accuracies are improved by 2.0%, 2.0% and 0.4%, respectively, in the N, E, and U components, after Galileo satellites DCB correction. The accuracy of Galileo-only b1-based SPP is improved by about 48.6%, 34.7% and 40.6% with DCB correction, respectively, in the N, E, and U components. The estimates of multi-constellation PPP are subject to different degrees of influence. For multi-constellation combination SPP, the accuracy of single-frequency is slightly better than that of dual

  7. Code of Fair Testing Practices in Education (Revised)

    ERIC Educational Resources Information Center

    Educational Measurement: Issues and Practice, 2005

    2005-01-01

    A note from the Working Group of the Joint Committee on Testing Practices: The "Code of Fair Testing Practices in Education (Code)" prepared by the Joint Committee on Testing Practices (JCTP) has just been revised for the first time since its initial introduction in 1988. The revision of the Code was inspired primarily by the revision of…

  8. World Breastfeeding Week 1994: making the Code work.

    PubMed

    1994-01-01

    WHO adopted the International Code of Marketing of Breastmilk Substitutes in 1981, with the US being the only member voting against it. The US abandoned its opposition and voted for the International Code at the World Health Assembly in May 1994. The US was also part of a unanimous vote to promote a resolution that clearly proclaims breast milk to be better than breast milk substitutes and the best food for infants. World Breastfeeding Week 1994 launched further efforts to promote the International Code. In 1994, through its Making the Code Work campaign, the World Alliance for Breastfeeding Action (WABA) will work on increasing awareness about the mission and promise of the International Code, notify governments of the Innocenti target date, call for governments to introduce rules and regulations based on the International Code, and encourage public interest groups, professional organizations, and the general public to monitor enforcement of the Code. So far, 11 countries have passed legislation including all or almost all provisions of the International Code. Governments of 36 countries have passed legislation including only some provisions of the International Code. The International Baby Food Action Network (IBFAN), a coalition of more than 140 breastfeeding promotion groups, monitors implementation of the Code worldwide. IBFAN documents thousands of violations of the Code in its report, Breaking the Rules 1994. The violations consist of promoting breast milk substitutes to health workers, using labels describing a brand of formula in idealizing terms, or using labels that do not have warnings in the local language. We should familiarize ourselves with the provisions of the International Code and the status of the Code in our country. WABA provides an action folder which contains basic background information on the code and action ideas.

  9. Locality-preserving logical operators in topological stabilizer codes

    NASA Astrophysics Data System (ADS)

    Webster, Paul; Bartlett, Stephen D.

    2018-01-01

    Locality-preserving logical operators in topological codes are naturally fault tolerant, since they preserve the correctability of local errors. Using a correspondence between such operators and gapped domain walls, we describe a procedure for finding all locality-preserving logical operators admitted by a large and important class of topological stabilizer codes. In particular, we focus on those equivalent to a stack of a finite number of surface codes of any spatial dimension, where our procedure fully specifies the group of locality-preserving logical operators. We also present examples of how our procedure applies to codes with different boundary conditions, including color codes and toric codes, as well as more general codes such as Abelian quantum double models and codes with fermionic excitations in more than two dimensions.

  10. Electron linear accelerator system for natural rubber vulcanization

    NASA Astrophysics Data System (ADS)

    Rimjaem, S.; Kongmon, E.; Rhodes, M. W.; Saisut, J.; Thongbai, C.

    2017-09-01

    Development of an electron accelerator system, beam diagnostic instruments, an irradiation apparatus and an electron beam processing methodology for natural rubber vulcanization is underway at the Plasma and Beam Physics Research Facility, Chiang Mai University, Thailand. The project is carried out with the aim of improving the quality of natural rubber products. The system consists of a DC thermionic electron gun, a 5-cell standing-wave radio-frequency (RF) linear accelerator (linac) with side-coupling cavities and an electron beam irradiation apparatus. This system is used to produce electron beams with an adjustable energy between 0.5 and 4 MeV and a pulse current of 10-100 mA at a pulse repetition rate of 20-400 Hz. An average absorbed dose between 160 and 640 Gy is expected to be achieved for a 4 MeV electron beam when the accelerator is operated at 400 Hz. The research activities focus first on assembly of the accelerator system, studies of accelerator properties and electron beam dynamics simulations. The resonant frequency of the RF linac in the π/2 operating mode is 2996.82 MHz at an operating temperature of 35 °C. The beam dynamics simulations were conducted using the code ASTRA. Simulation results suggest that electron beams with an average energy of 4.002 MeV can be obtained when the linac accelerating gradient is 41.7 MV/m. The rms transverse beam size and normalized rms transverse emittance at the linac exit are 0.91 mm and 10.48 π mm·mrad, respectively. This information can then be used as the input for Monte Carlo simulations to estimate the electron beam penetration depth and dose distribution in the natural rubber latex. The results of this research will be used to define optimal conditions for natural rubber vulcanization with different electron beam energies and doses, which is very useful for the development of future industrial accelerator units.

  11. Emittance Growth in the DARHT-II Linear Induction Accelerator

    DOE PAGES

    Ekdahl, Carl; Carlson, Carl A.; Frayer, Daniel K.; ...

    2017-10-03

    The dual-axis radiographic hydrodynamic test (DARHT) facility uses bremsstrahlung radiation source spots produced by the focused electron beams from two linear induction accelerators (LIAs) to radiograph large hydrodynamic experiments driven by high explosives. Radiographic resolution is determined by the size of the source spot, and beam emittance is the ultimate limitation to spot size. On the DARHT-II LIA, we measure an emittance higher than predicted by theoretical simulations, and even though this accelerator produces submillimeter source spots, we are exploring ways to improve the emittance. Some of the possible causes for the discrepancy have been investigated using particle-in-cell codes. Finally, the simulations establish that the most likely source of emittance growth is a mismatch of the beam to the magnetic transport, which can cause beam halo.

  13. Combining Acceleration and Displacement Dependent Modal Frequency Responses Using an MSC/NASTRAN DMAP Alter

    NASA Technical Reports Server (NTRS)

    Barnett, Alan R.; Widrick, Timothy W.; Ludwiczak, Damian R.

    1996-01-01

    Solving for dynamic responses of free-free launch vehicle/spacecraft systems acted upon by buffeting winds is commonly performed throughout the aerospace industry. Due to the unpredictable nature of this wind loading event, these problems are typically solved using frequency response random analysis techniques. To generate dynamic responses for spacecraft with statically-indeterminate interfaces, spacecraft contractors prefer to develop models which have response transformation matrices developed for mode acceleration data recovery. This method transforms spacecraft boundary accelerations and displacements into internal responses. Unfortunately, standard MSC/NASTRAN modal frequency response solution sequences cannot combine the acceleration- and displacement-dependent responses required for spacecraft mode acceleration data recovery. External user-written computer codes can be used with MSC/NASTRAN output to perform such combinations, but these methods can be labor- and computer-resource-intensive. Taking advantage of the analytical and computer resource efficiencies inherent within MSC/NASTRAN, a DMAP Alter has been developed to combine acceleration- and displacement-dependent modal frequency responses for performing spacecraft mode acceleration data recovery. The Alter has been used successfully to efficiently solve a common aerospace buffeting wind analysis.

  14. Probabilistic seismic hazard zonation for the Cuban building code update

    NASA Astrophysics Data System (ADS)

    Garcia, J.; Llanes-Buron, C.

    2013-05-01

    A probabilistic seismic hazard assessment has been performed in response to a revision and update of the Cuban building code (NC-46-99) for earthquake-resistant building construction. The hazard assessment has been done according to the standard probabilistic approach (Cornell, 1968), importing the procedures adopted by other nations that have revised and updated their national building codes. Problems of earthquake catalogue treatment, attenuation of peak and spectral ground acceleration, and seismic source definition have been rigorously analyzed, and a logic-tree approach was used to represent the inevitable uncertainties encountered throughout the seismic hazard estimation process. The seismic zonation proposed here consists of a map of spectral acceleration values for short (0.2 s) and long (1.0 s) periods on rock conditions with a 1642-year return period, which is considered the maximum credible earthquake (ASCE 07-05). In addition, three other design levels are proposed (severe earthquake: 808-year return period; ordinary earthquake: 475-year return period; minimum earthquake: 225-year return period). The proposed seismic zonation complies with international standards (IBC-ICC) as well as current worldwide practice.
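
    The four return periods quoted above correspond to standard probabilities of exceedance over a 50-year design life under the Poisson occurrence model conventional in probabilistic seismic hazard analysis. A quick sketch of the conversion (not part of the original study):

```python
import math

def return_period(p_exceed, t_years=50.0):
    """Return period T for exceedance probability p over an exposure time t,
    assuming Poisson earthquake occurrence: p = 1 - exp(-t/T)."""
    return -t_years / math.log(1.0 - p_exceed)

# 3%, 6%, 10%, and 20% in 50 years reproduce the code's four design levels
for p in (0.03, 0.06, 0.10, 0.20):
    print(round(return_period(p)))  # -> 1642, 808, 475, 224
```

    The first three values match the proposed design levels exactly; the 20%-in-50-years case gives 224 years, which the code rounds to its 225-year minimum level.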

  15. Turbulent Heating and Wave Pressure in Solar Wind Acceleration Modeling: New Insights to Empirical Forecasting of the Solar Wind

    NASA Astrophysics Data System (ADS)

    Woolsey, L. N.; Cranmer, S. R.

    2013-12-01

    The study of solar wind acceleration has made several important advances recently due to improvements in modeling techniques. Existing code and simulations test the competing theories for coronal heating, which include reconnection/loop-opening (RLO) models and wave/turbulence-driven (WTD) models. In order to compare and contrast the validity of these theories, we need flexible tools that predict the emergent solar wind properties from a wide range of coronal magnetic field structures such as coronal holes, pseudostreamers, and helmet streamers. ZEPHYR (Cranmer et al. 2007) is a one-dimensional magnetohydrodynamics code that includes Alfven wave generation and reflection and the resulting turbulent heating to accelerate solar wind in open flux tubes. We present the ZEPHYR output for a wide range of magnetic field geometries to show the effect of the magnetic field profiles on wind properties. We also investigate the competing acceleration mechanisms found in ZEPHYR to determine the relative importance of increased gas pressure from turbulent heating and the separate pressure source from the Alfven waves. To do so, we developed a code that will become publicly available for solar wind prediction. This code, TEMPEST, provides an outflow solution based on only one input: the magnetic field strength as a function of height above the photosphere. It uses correlations found in ZEPHYR between the magnetic field strength at the source surface and the temperature profile of the outflow solution to compute the wind speed profile based on the increased gas pressure from turbulent heating. With this initial solution, TEMPEST then adds in the Alfven wave pressure term to the modified Parker equation and iterates to find a stable solution for the wind speed. This code, therefore, can make predictions of the wind speeds that will be observed at 1 AU based on extrapolations from magnetogram data, providing a useful tool for empirical forecasting of the solar wind.
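
    The modified Parker equation that TEMPEST iterates builds on Parker's classic isothermal wind solution. A minimal sketch of that underlying solution (isothermal only, with no wave pressure or turbulent heating terms, so far simpler than TEMPEST itself; the coronal temperature is an assumed illustrative value):

```python
import math

GM_SUN = 1.32712440018e20   # standard gravitational parameter of the Sun, m^3/s^2
K_B = 1.380649e-23          # Boltzmann constant, J/K
M_P = 1.67262192e-27        # proton mass, kg
AU = 1.495978707e11         # astronomical unit, m

def parker_wind_speed(r, T_corona=1.5e6):
    """Wind speed u(r) on the transonic branch of Parker's isothermal
    solar wind solution, for radii at or beyond the sonic point."""
    a = math.sqrt(2.0 * K_B * T_corona / M_P)   # isothermal sound speed (ionized H)
    r_c = GM_SUN / (2.0 * a * a)                # critical (sonic) radius
    if r < r_c:
        raise ValueError("sketch handles only the supersonic region r >= r_c")
    # Parker's equation: (u/a)^2 - ln((u/a)^2) = 4 ln(r/r_c) + 4 r_c/r - 3
    rhs = 4.0 * math.log(r / r_c) + 4.0 * r_c / r - 3.0
    lo, hi = a, 50.0 * a                        # bracket the supersonic root
    for _ in range(200):                        # bisection on the monotone branch
        u = 0.5 * (lo + hi)
        x = (u / a) ** 2
        if x - math.log(x) < rhs:
            lo = u
        else:
            hi = u
    return 0.5 * (lo + hi)

print(parker_wind_speed(AU) / 1e3)  # wind speed at 1 AU in km/s (~600-650 here)
```

    Even this bare-bones version yields realistic fast-wind speeds at 1 AU; TEMPEST's iteration adds the wave pressure and heating physics on top of this kind of solution.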

  16. Tree ferns: monophyletic groups and their relationships as revealed by four protein-coding plastid loci.

    PubMed

    Korall, Petra; Pryer, Kathleen M; Metzgar, Jordan S; Schneider, Harald; Conant, David S

    2006-06-01

    Tree ferns are a well-established clade within leptosporangiate ferns. Most of the 600 species (in seven families and 13 genera) are arborescent, but considerable morphological variability exists, spanning the giant scaly tree ferns (Cyatheaceae), the low, erect plants (Plagiogyriaceae), and the diminutive endemics of the Guayana Highlands (Hymenophyllopsidaceae). In this study, we investigate phylogenetic relationships within tree ferns based on analyses of four protein-coding, plastid loci (atpA, atpB, rbcL, and rps4). Our results reveal four well-supported clades, with genera of Dicksoniaceae (sensu ) interspersed among them: (A) (Loxomataceae, (Culcita, Plagiogyriaceae)), (B) (Calochlaena, (Dicksonia, Lophosoriaceae)), (C) Cibotium, and (D) Cyatheaceae, with Hymenophyllopsidaceae nested within. How these four groups are related to one another, to Thyrsopteris, or to Metaxyaceae is weakly supported. Our results show that Dicksoniaceae and Cyatheaceae, as currently recognised, are not monophyletic, and new circumscriptions for these families are needed.

  17. Variations of the relative abundances of He, (C,N,O) and Fe-group nuclei in solar cosmic rays and their relationship to solar particle acceleration

    NASA Technical Reports Server (NTRS)

    Bertsch, D. L.; Biswas, S.; Fichtel, C. E.; Pellerin, C. J.; Reames, D. V.

    1973-01-01

    Measurements of the flux of helium nuclei in the 24 January 1971 event and of helium and (C,N,O) nuclei in the 1 September 1971 event are combined with previous measurements to obtain the relative abundances of helium, (C,N,O), and Fe-group nuclei in these events. These data are then summarized together with previously reported results to show that, even when the same detector system using a dE/dx plus range technique is used, differences in the He/(C,N,O) value in the same energy/nucleon interval are observed in solar cosmic ray events. Further, when the He/(C,N,O) value is lower the He/(Fe-group nuclei) value is also systematically lower in these large events. When solar particle acceleration theory is analyzed, it is seen that the results suggest that, for large events, Coulomb energy loss probably does not play a major role in determining solar particle composition at higher energies (10 MeV). The variations in multicharged nuclei composition are more likely due to partial ionization during the acceleration phase.

  18. Automated and Assistive Tools for Accelerated Code migration of Scientific Computing on to Heterogeneous MultiCore Systems

    DTIC Science & Technology

    2017-04-13

    Several applications were ported to OmpSs: a basic algorithm from image processing applications, a mini-application representative of an ocean modelling code, a parallel benchmark, and a communication-avoiding version of the QR algorithm. Further, several improvements to the OmpSs model were made, including changes related to data movement, and the dynamic load balancing library was ported to OmpSs. Finally, several updates to the tools infrastructure were accomplished.

  19. Activation of accelerator construction materials by heavy ions

    NASA Astrophysics Data System (ADS)

    Katrík, P.; Mustafin, E.; Hoffmann, D. H. H.; Pavlovič, M.; Strašík, I.

    2015-12-01

    Activation data for an aluminum target irradiated by a 200 MeV/u 238U ion beam are presented. The target was irradiated in the stacked-foil geometry and analyzed using gamma-ray spectroscopy. The purpose of the experiment was to study the roles of primary particles, projectile fragments, and target fragments in the activation process using depth profiling of the residual activity. The study showed which particles contribute dominantly to the target activation. The experimental data were compared with Monte Carlo simulations by the FLUKA 2011.2c.0 code. This study is part of a research program devoted to activation of accelerator construction materials by high-energy (⩾200 MeV/u) heavy ions at GSI Darmstadt. The experimental data are needed to validate the computer codes used for simulating the interaction of swift heavy ions with matter.

  20. Attention in Relation to Coding and Planning in Reading

    ERIC Educational Resources Information Center

    Mahapatra, Shamita

    2015-01-01

    A group of 50 skilled readers and a group of 50 less-skilled readers of Grade 5 matched for age and intelligence and selected on the basis of their proficiency in reading comprehension were tested for their competence in word reading and the processes of attention, simultaneous coding, successive coding and planning at three levels, i.e.,…

  1. First muon acceleration using a radio-frequency accelerator

    NASA Astrophysics Data System (ADS)

    Bae, S.; Choi, H.; Choi, S.; Fukao, Y.; Futatsukawa, K.; Hasegawa, K.; Iijima, T.; Iinuma, H.; Ishida, K.; Kawamura, N.; Kim, B.; Kitamura, R.; Ko, H. S.; Kondo, Y.; Li, S.; Mibe, T.; Miyake, Y.; Morishita, T.; Nakazawa, Y.; Otani, M.; Razuvaev, G. P.; Saito, N.; Shimomura, K.; Sue, Y.; Won, E.; Yamazaki, T.

    2018-05-01

    Muons have been accelerated by using a radio-frequency accelerator for the first time. Negative muonium atoms (Mu-), which are bound states of a positive muon (μ+) and two electrons, are generated from μ+'s through the electron capture process in an aluminum degrader. The generated Mu-'s are initially electrostatically accelerated and injected into a radio-frequency quadrupole linac (RFQ). In the RFQ, the Mu-'s are accelerated to 89 keV. The accelerated Mu-'s are identified by momentum measurement and time of flight. This compact muon linac opens the door to various muon accelerator applications including particle physics measurements and the construction of a transmission muon microscope.

  2. Facilitating Grade Acceleration: Revisiting the Wisdom of John Feldhusen

    ERIC Educational Resources Information Center

    Culross, Rita R.; Jolly, Jennifer L.; Winkler, Daniel

    2013-01-01

    This article revisits the 1986 Feldhusen, Proctor, and Black recommendations on grade skipping. These recommendations originally appeared as 12 guidelines. In this article, the guidelines are grouped into three general categories: how to screen accelerant candidates, how to engage with the adults in the acceleration process (e.g., teachers,…

  3. Pickup ion acceleration in the successive appearance of corotating interaction regions

    NASA Astrophysics Data System (ADS)

    Tsubouchi, K.

    2017-04-01

    Acceleration of pickup ions (PUIs) in an environment surrounded by a pair of corotating interaction regions (CIRs) was investigated by numerical simulations using a hybrid code. Energetic particles associated with CIRs have been considered to be a result of the acceleration at their shock boundaries, but recent observations identified the ion flux peaks in the sub-MeV to MeV energy range in the rarefaction region, where two separate CIRs were likely connected by the magnetic field. Our simulation results confirmed these observational features. As the accelerated PUIs repeatedly bounce back and forth along the field lines between the reverse shock of the first CIR and the forward shock of the second one, the energetic population is accumulated in the rarefaction region. It was also verified that PUI acceleration in the dual CIR system had two different stages. First, because PUIs have large gyroradii, multiple shock crossing is possible for several tens of gyroperiods, and there is an energy gain in the component parallel to the magnetic field via shock drift acceleration. Second, as the field rarefaction evolves and the radial magnetic field becomes dominant, Fermi-type reflection takes place at the shock. The converging nature of two shocks results in a net energy gain. The PUI energy acquired through these processes is close to 0.5 MeV, which may be large enough for further acceleration, possibly resulting in the source of anomalous cosmic rays.

  4. Plasma Wakefield Acceleration of an Intense Positron Beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blue, B

    2004-04-21

    predictions made by the 3-D PIC code. The work presented in this dissertation will show that plasma wakefield accelerators are an attractive technology for future particle accelerators.

  5. Multimegawatt cyclotron autoresonance accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirshfield, J.L.; LaPointe, M.A.; Ganguly, A.K.

    1996-05-01

    Means are discussed for generation of high-quality multimegawatt gyrating electron beams using rf gyroresonant acceleration. TE111-mode cylindrical cavities in a uniform axial magnetic field have been employed for beam acceleration since 1968; such beams have more recently been employed for generation of radiation at harmonics of the gyration frequency. Use of a TE11-mode waveguide for acceleration, rather than a cavity, is discussed. It is shown that the applied magnetic field and group velocity axial tapers allow resonance to be maintained along a waveguide, but that this is impractical in a cavity. In consequence, a waveguide cyclotron autoresonance accelerator (CARA) can operate with near-100% efficiency in power transfer from rf source to beam, while cavity accelerators will, in practice, have efficiency values limited to about 40%. CARA experiments are described in which an injected beam of up to 25 A, 95 kV has had up to 7.2 MW of rf power added, with efficiencies of up to 96%. Such levels of efficiency are higher than observed previously in any fast-wave interaction, and are competitive with efficiency values in industrial linear accelerators. Scaling arguments suggest that good quality gyrating megavolt beams with peak and average powers of 100 MW and 100 kW can be produced using an advanced CARA, with applications in the generation of high-power microwaves and for possible remediation of flue gas pollutants. © 1996 American Institute of Physics.

  6. Optimized operation of dielectric laser accelerators: Multibunch

    NASA Astrophysics Data System (ADS)

    Hanuka, Adi; Schächter, Levi

    2018-06-01

    We present a self-consistent analysis to determine the optimal charge, gradient, and efficiency for laser driven accelerators operating with a train of microbunches. Specifically, we account for the beam loading reduction on the material occurring at the dielectric-vacuum interface. In the case of a train of microbunches, such a beam loading effect could be detrimental due to energy spread; however, this may be compensated by a tapered laser pulse. We ultimately propose an optimization procedure with an analytical solution for the case of a group velocity equal to half the speed of light. This optimization results in a maximum efficiency 20% lower than the single bunch case, and a total accelerated charge of 10^6 electrons in the train. The approach holds promise for improving operations of dielectric laser accelerators and may have an impact on emerging laser accelerators driven by high-power optical lasers.
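
    For scale, a train of 10^6 accelerated electrons is a very small absolute charge, characteristic of dielectric laser accelerators. A quick unit conversion using the elementary charge:

```python
E_CHARGE = 1.602176634e-19   # elementary charge, C

n_electrons = 1e6
q_total = n_electrons * E_CHARGE      # total accelerated charge in the train, C
print(f"{q_total * 1e15:.1f} fC")     # prints "160.2 fC"
```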

  7. Accelerating the Pace of Protein Functional Annotation With Intel Xeon Phi Coprocessors.

    PubMed

    Feinstein, Wei P; Moreno, Juana; Jarrell, Mark; Brylinski, Michal

    2015-06-01

    Intel Xeon Phi is a new addition to the family of powerful parallel accelerators. The range of its potential applications in computationally driven research is broad; however, at present, the repository of scientific codes is still relatively limited. In this study, we describe the development and benchmarking of a parallel version of eFindSite, a structural bioinformatics algorithm for the prediction of ligand-binding sites in proteins. Implemented for the Intel Xeon Phi platform, the parallelization of the structure alignment portion of eFindSite using pragma-based OpenMP brings about the desired performance improvements, which scale well with the number of computing cores. Compared to a serial version, the parallel code runs 11.8 and 10.1 times faster on the CPU and the coprocessor, respectively; when both resources are utilized simultaneously, the speedup is 17.6. For example, ligand-binding predictions for 501 benchmarking proteins are completed in 2.1 hours on a single Stampede node equipped with the Intel Xeon Phi card compared to 3.1 hours without the accelerator and 36.8 hours required by a serial version. In addition to the satisfactory parallel performance, porting existing scientific codes to the Intel Xeon Phi architecture is relatively straightforward with a short development time due to the support of common parallel programming models by the coprocessor. The parallel version of eFindSite is freely available to the academic community at www.brylinski.org/efindsite.
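
    The reported wall-clock times and speedup factors are mutually consistent; a quick arithmetic check using the figures quoted in the abstract:

```python
serial_h = 36.8        # serial eFindSite runtime, hours
cpu_h = 3.1            # parallel run on the CPU only
combined_h = 2.1       # CPU and Xeon Phi used simultaneously

print(round(serial_h / cpu_h, 1))       # 11.9, consistent with the quoted 11.8x
print(round(serial_h / combined_h, 1))  # 17.5, consistent with the quoted 17.6x
```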

  8. PARTICLE ACCELERATOR

    DOEpatents

    Teng, L.C.

    1960-01-19

    A combination of two accelerators, a cyclotron and a ring-shaped accelerator which has a portion disposed tangentially to the cyclotron, is described. Means are provided to transfer particles from the cyclotron to the ring accelerator including a magnetic deflector within the cyclotron, a magnetic shield between the ring accelerator and the cyclotron, and a magnetic inflector within the ring accelerator.

  9. Particle Acceleration, Magnetic Field Generation and Emission from Relativistic Jets and Supernova Remnants

    NASA Technical Reports Server (NTRS)

    Nishikawa, K.-I.; Hartmann, D. H.; Hardee, P.; Hededal, C.; Mizunno, Y.; Fishman, G. J.

    2006-01-01

    We performed numerical simulations of particle acceleration, magnetic field generation, and emission from shocks in order to understand the observed emission from relativistic jets and supernova remnants. The investigation involves the study of collisionless shocks, where the Weibel instability is responsible for particle acceleration as well as magnetic field generation. A 3-D relativistic particle-in-cell (RPIC) code has been used to investigate the shock processes in electron-positron plasmas. The evolution of the Weibel instability and its associated magnetic field generation and particle acceleration are studied with two different jet velocities (γ = 2, 5: slow, fast) corresponding to either outflows in supernova remnants or relativistic jets, such as those found in AGNs and microquasars. Slow jets have intrinsically different structures in both the generated magnetic fields and the accelerated particle spectrum. In particular, the jet head has a very weak magnetic field, and the ambient electrons are strongly accelerated and dragged along by the jet particles. The simulation results exhibit jitter radiation from inhomogeneous magnetic fields, generated by the Weibel instability, which has different spectral properties than standard synchrotron emission in a homogeneous magnetic field.

  10. Progress towards a world-wide code of conduct

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, J.A.N.; Berleur, J.

    1994-12-31

    In this paper the work of the International Federation for Information Processing (IFIP) Task Group on Ethics is described and the recommendations presented to the General Assembly are reviewed. While a common code of ethics or conduct has not been recommended for consideration by the member societies of IFIP, a set of guidelines for the establishment and evaluation of codes has been produced, and procedures for the assistance of code development have been established within IFIP. This paper proposes that the data collected by the Task Group and the proposed guidelines can be used as a tool for the study of codes of practice, providing a teachable, learnable educational module in courses related to the ethics of computing and computation, and looks at the next steps in bringing ethical awareness to the IT community.

  11. Flattening filter-free accelerators: a report from the AAPM Therapy Emerging Technology Assessment Work Group.

    PubMed

    Xiao, Ying; Kry, Stephen F; Popple, Richard; Yorke, Ellen; Papanikolaou, Niko; Stathakis, Sotirios; Xia, Ping; Huq, Saiful; Bayouth, John; Galvin, James; Yin, Fang-Fang

    2015-05-08

    This report describes the current state of flattening filter-free (FFF) radiotherapy beams implemented on conventional linear accelerators, and is aimed primarily at practicing medical physicists. The Therapy Emerging Technology Assessment Work Group of the American Association of Physicists in Medicine (AAPM) formed a writing group to assess FFF technology. The published literature on FFF technology was reviewed, along with technical specifications provided by vendors. Based on this information, supplemented by the clinical experience of the group members, consensus guidelines and recommendations for implementation of FFF technology were developed. Areas in need of further investigation were identified. Removing the flattening filter increases beam intensity, especially near the central axis. Increased intensity reduces treatment time, especially for high-dose stereotactic radiotherapy/radiosurgery (SRT/SRS). Furthermore, removing the flattening filter reduces out-of-field dose and improves beam modeling accuracy. FFF beams are advantageous for small field (e.g., SRS) treatments and are appropriate for intensity-modulated radiotherapy (IMRT). For conventional 3D radiotherapy of large targets, FFF beams may be disadvantageous compared to flattened beams because of the heterogeneity of the FFF beam across the target (unless modulation is employed). For any application, the nonflat beam characteristics and substantially higher dose rates require consideration during the commissioning and quality assurance processes relative to flattened beams, and the appropriate clinical use of the technology needs to be identified. Consideration also needs to be given to these unique characteristics when undertaking facility planning. Several areas still warrant further research and development. Recommendations pertinent to FFF technology, including acceptance testing, commissioning, quality assurance, radiation safety, and facility planning, are presented. Examples of clinical

  12. Smoot Cosmology Group

    Science.gov Websites

    Links: University of California Berkeley Cosmology Group; Lawrence Berkeley National Laboratory Computational Cosmology Center; Institute for Nuclear & Particle Astrophysics; Supernova Acceleration Probe.

  13. Essay: Robert H. Siemann As Leader of the Advanced Accelerator Research Department

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colby, Eric R.; Hogan, Mark J.

    Robert H. Siemann originally conceived of the Advanced Accelerator Research Department (AARD) as an academic, experimental group dedicated to probing the technical limitations of accelerators while providing excellent educational opportunities for young scientists. The early years of the Accelerator Research Department B, as it was then known, were dedicated to a wealth of mostly student-led experiments to examine the promise of advanced accelerator techniques. High-gradient techniques including millimeter-wave rf acceleration, beam-driven plasma acceleration, and direct laser acceleration were pursued, including tests of materials under rf pulsed heating and short-pulse laser radiation, to establish the ultimate limitations on gradient. As the department and program grew, so did the motivation to found an accelerator research center that brought experimentalists together in a test facility environment to conduct a broad range of experiments. The Final Focus Test Beam and later the Next Linear Collider Test Accelerator provided unique experimental facilities for AARD staff and collaborators to carry out advanced accelerator experiments. Throughout the evolution of this dynamic program, Bob maintained a department atmosphere and culture more reminiscent of a university research group than a national laboratory department. His exceptional ability to balance multiple roles as scientist, professor, and administrator enabled the creation and preservation of an environment that fostered technical innovation and scholarship.

  14. TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics

    DOE PAGES

    Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...

    2015-04-16

    Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software-hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators: parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software-hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior of the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.
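
    The core idea of an application simulator, replacing compute-bound stages by the mere passage of simulated time in a discrete event simulation, can be illustrated with a minimal sketch. The stage names and durations below are hypothetical placeholders, not TADSim's actual parameters:

```python
import heapq

def simulate(stages, n_cycles):
    """Minimal discrete-event loop: each compute stage is abstracted to an
    event that simply advances the simulated clock by its duration, instead
    of actually performing the computation."""
    clock = 0.0
    events = []  # priority queue of (completion_time, stage_name)
    for _ in range(n_cycles):
        for name, duration in stages:
            heapq.heappush(events, (clock + duration, name))
            clock += duration  # stages run back-to-back in this sketch
    # drain the queue; the last event's timestamp is the predicted runtime
    end_time = 0.0
    while events:
        end_time, _ = heapq.heappop(events)
    return end_time

# Hypothetical per-stage costs (seconds) for one TAD-like cycle
stages = [("basin_md", 2.0), ("saddle_search", 5.0), ("nudged_elastic_band", 3.0)]
print(simulate(stages, n_cycles=4))  # predicted runtime: 40.0
```

    A real application simulator parameterizes the stage costs by algorithm and hardware choices and lets events overlap (e.g., to model speculative spawning), but the principle is the same: time advances, computation does not run.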

  15. Acceleration Modes and Transitions in Pulsed Plasma Accelerators

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A.; Greve, Christine M.

    2018-01-01

    Pulsed plasma accelerators typically operate by storing energy in a capacitor bank and then discharging this energy through a gas, ionizing and accelerating it through the Lorentz body force. Two plasma accelerator types employing this general scheme have typically been studied: the gas-fed pulsed plasma thruster and the quasi-steady magnetoplasmadynamic (MPD) accelerator. The gas-fed pulsed plasma accelerator is generally represented as a completely transient device discharging in approximately 1-10 microseconds. When the capacitor bank is discharged through the gas, a current sheet forms at the breech of the thruster and propagates forward under a j × B (current density crossed with magnetic field) body force, entraining propellant it encounters. This process is sometimes referred to as detonation-mode acceleration because the current sheet representation approximates that of a strong shock propagating through the gas. Acceleration of the initial current sheet ceases when either the current sheet reaches the end of the device and is ejected or when the current in the circuit reverses, striking a new current sheet at the breech and depriving the initial sheet of additional acceleration. In the quasi-steady MPD accelerator, the pulse is lengthened to approximately 1 millisecond or longer and maintained at an approximately constant level during discharge. The time over which the transient phenomena experienced during startup typically occur is short relative to the overall discharge time, which is now long enough for the plasma to assume a relatively steady-state configuration. The ionized gas flows through a stationary current channel in a manner that is sometimes referred to as the deflagration-mode of operation. The plasma experiences electromagnetic acceleration as it flows through the current channel towards the exit of the device. A device that had a short pulse length but appeared to operate in a plasma acceleration regime different from the gas-fed pulsed plasma

  16. SAMS Acceleration Measurements on MIR

    NASA Technical Reports Server (NTRS)

    Moskowitz, Milton E.; Hrovat, Kenneth; Finkelstein, Robert; Reckart, Timothy

    1997-01-01

    During NASA Increment 3 (September 1996 to January 1997), about 5 gigabytes of acceleration data were collected by the Space Acceleration Measurement System (SAMS) onboard the Russian Space Station, Mir. The data were recorded on 11 optical disks and were returned to Earth on STS-81. During this time, SAMS data were collected in the Priroda module to support the following experiments: the Mir Structural Dynamics Experiment (MiSDE) and Binary Colloidal Alloy Tests (BCAT). This report points out some of the salient features of the microgravity environment to which these experiments were exposed. Also documented are mission events of interest such as the docked phase of STS-81 operations, a Progress engine burn, attitude control thruster operation, and crew exercise. Also included are a description of the Mir module orientations and of the panel notations within the modules. This report presents an overview of the SAMS acceleration measurements recorded by 10 Hz and 100 Hz sensor heads. Variations in the acceleration environment caused by unique activities such as crew exercise and life-support fans are presented. The analyses included herein complement those presented in previous mission summary reports published by the Principal Investigator Microgravity Services (PIMS) group.

  17. Learn-as-you-go acceleration of cosmological parameter estimates

    NASA Astrophysics Data System (ADS)

    Aslanyan, Grigor; Easther, Richard; Price, Layne C.

    2015-09-01

    Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning-based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on the fly.
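
The fallback logic described above can be sketched in a few lines. This is a toy illustration of the learn-as-you-go pattern, not the Cosmo++ implementation: the "exact" likelihood, the nearest-neighbor emulator, and the error tolerance are all invented for the example.

```python
def exact_loglike(theta):
    # Stand-in for an expensive likelihood evaluation (toy model).
    return -0.5 * theta ** 2

class LearnAsYouGoEmulator:
    """Toy emulator: nearest-neighbor prediction with a distance-based
    error estimate; falls back to the exact calculation when unreliable."""

    def __init__(self, exact_fn, tol=0.05):
        self.exact_fn = exact_fn
        self.tol = tol        # maximum tolerated estimated emulation error
        self.train = []       # (theta, loglike) pairs gathered so far
        self.n_exact = 0      # how many exact evaluations were needed

    def __call__(self, theta):
        if self.train:
            t0, f0 = min(self.train, key=lambda p: abs(p[0] - theta))
            if abs(t0 - theta) < self.tol:   # crude error model
                return f0                    # trust the emulator
        f = self.exact_fn(theta)             # exact fallback
        self.n_exact += 1
        self.train.append((theta, f))        # grow the training set on the fly
        return f

emu = LearnAsYouGoEmulator(exact_loglike)
# A mock MCMC-like scan: repeated nearby evaluations get emulated.
values = [emu(0.001 * i) for i in range(1000)]
print(emu.n_exact)   # far fewer exact calls than 1000
```

The key property is that the training set and the error model are built during the scan itself, so no precomputed training set is required.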

  18. Proton and Ion Acceleration using Multi-kJ Lasers

    NASA Astrophysics Data System (ADS)

    Wilks, S. C.; Ma, T.; Kemp, A. J.; Tabak, M.; Link, A. J.; Haefner, C.; Hermann, M. R.; Mariscal, D. A.; Rubenchik, S.; Sterne, P.; Kim, J.; McGuffey, C.; Bhutwala, K.; Beg, F.; Wei, M.; Kerr, S. M.; Sentoku, Y.; Iwata, N.; Norreys, P.; Sevin, A.

    2017-10-01

    Short (<50 ps) laser pulses are capable of accelerating protons and ions from solid (or dense gas jet) targets as demonstrated by a number of laser facilities around the world in the past 20 years accelerating protons to between 1 and 100 MeV, depending on specific laser parameters. Over this time, a distinct scaling with energy has emerged that shows a trend towards increasing maximum accelerated proton (ion) energy with increasing laser energy. We consider the physical basis underlying this scaling, and use this to estimate future results when multi-kJ laser systems begin operating in this new high energy regime. In particular, we consider the effects of laser prepulse, intensity, energy, and pulse length on the number and energy of the ions, as well as target size and composition. We also discuss potential uses of these ion beams in High Energy Density Physics Experiments. This work was performed under the auspices of the U.S. Department of Energy (DOE) by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the LLNL LDRD program under tracking code 17-ERD-039.

  19. Candidate molten salt investigation for an accelerator driven subcritical core

    NASA Astrophysics Data System (ADS)

    Sooby, E.; Baty, A.; Beneš, O.; McIntyre, P.; Pogue, N.; Salanne, M.; Sattarov, A.

    2013-09-01

    We report a design for accelerator-driven subcritical fission in a molten salt core (ADSMS) that utilizes a fuel salt composed of NaCl and transuranic (TRU) chlorides. The ADSMS core is designed for fast neutronics (28% of neutrons >1 MeV) to optimize TRU destruction. The choice of a NaCl-based salt offers benefits for corrosion, operating temperature, and actinide solubility as compared with LiF-based fuel salts. A molecular dynamics (MD) code has been used to estimate properties of the molten salt system which are important for ADSMS design but have never been measured experimentally. Results from the MD studies are reported. Experimental measurements of fuel salt properties and studies of corrosion and radiation damage on candidate metals for the core vessel are anticipated. A special thanks is due to Prof. Paul Madden for introducing the ADSMS group to the concept of using the molten salt as the spallation target, rather than a conventional heavy metal spallation target. This feature helps to optimize this core as a Pu/TRU burner.

  20. Sensitivity Analysis of the Off-Normal Conditions of the SPIDER Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veltri, P.; Agostinetti, P.; Antoni, V.

    2011-09-26

    In the context of the development of the 1 MV neutral beam injector for the ITER tokamak, the study of beam formation and acceleration has considerable importance. This effort includes SPIDER (Source for Production of Ions of Deuterium Extracted from an Rf plasma), an ion source and accelerator planned to be built in Padova and designed to extract and accelerate a 355 A/m² current of H⁻ (or 285 A/m² of D⁻) up to 100 kV. Exhaustive simulations were already carried out during the accelerator optimization leading to the present design. However, since the accelerator is expected to operate also under pre-programmed or undesired off-normal conditions, a large set of off-normal scenarios must be investigated. These analyses will also be useful for evaluating the real performance of the machine, and should help in interpreting experimental results or in identifying dangerous operating conditions. The present contribution offers an overview of the results obtained during the investigation of these off-normal conditions by means of different modeling tools and codes. The results showed good flexibility of the device across different operating conditions. Where the consequences of the abnormalities appeared problematic, further analyses were carried out.

  1. Current Research on Non-Coding Ribonucleic Acid (RNA).

    PubMed

    Wang, Jing; Samuels, David C; Zhao, Shilin; Xiang, Yu; Zhao, Ying-Yong; Guo, Yan

    2017-12-05

    Non-coding ribonucleic acid (RNA) has without a doubt captured the interest of biomedical researchers. The ability to screen the entire human genome with high-throughput sequencing technology has greatly enhanced the identification, annotation and prediction of the functionality of non-coding RNAs. In this review, we discuss the current landscape of non-coding RNA research and quantitative analysis. Non-coding RNA will be categorized into two major groups by size: long non-coding RNAs and small RNAs. In long non-coding RNA, we discuss regular long non-coding RNA, pseudogenes and circular RNA. In small RNA, we discuss miRNA, transfer RNA, piwi-interacting RNA, small nucleolar RNA, small nuclear RNA, Y RNA, signal recognition particle RNA, and 7SK RNA. We elaborate on the origin, detection methods, potential disease associations, putative functional mechanisms, and public resources for these non-coding RNAs. We aim to provide readers with a complete overview of non-coding RNAs and incite additional interest in non-coding RNA research.

  2. Hardware accelerated high performance neutron transport computation based on AGENT methodology

    NASA Astrophysics Data System (ADS)

    Xiao, Shanjie

    The spatial heterogeneity of next-generation Gen-IV nuclear reactor core designs brings challenges to neutron transport analysis. The Arbitrary Geometry Neutron Transport (AGENT) code is a three-dimensional neutron transport analysis code being developed at the Laboratory for Neutronics and Geometry Computation (NEGE) at Purdue University. It can accurately describe spatial heterogeneity in a hierarchical structure through the R-function solid modeler. The previous version of AGENT coupled a 2D transport MOC solver and a 1D diffusion NEM solver to solve the three-dimensional Boltzmann transport equation. In this research, the 2D/1D coupling methodology was expanded to couple two transport solvers, a radial 2D MOC solver and an axial 1D MOC solver, for better accuracy. The expansion was benchmarked with the widely applied C5G7 benchmark models and two fast breeder reactor models, and showed good agreement with the reference Monte Carlo results. In practice, accurate neutron transport analysis of a full reactor core is still time-consuming, which limits its application. Therefore, another focus of my research was the design of dedicated hardware, based on reconfigurable computing techniques, to accelerate AGENT computations. This is the first application of this kind in reactor physics and neutron transport for reactor design. The most time-consuming part of the AGENT algorithm was identified, and the architecture of the AGENT acceleration system was designed based on that analysis. Through parallel computation on the specially designed, highly efficient architecture, the FPGA-based acceleration design achieves high performance at a much lower clock frequency than CPUs. Whole-design simulations show that the acceleration design would speed up large-scale AGENT computations by about 20 times. The high performance AGENT acceleration system will drastically shorten the

  3. Group theoretical formulation of free fall and projectile motion

    NASA Astrophysics Data System (ADS)

    Düztaş, Koray

    2018-07-01

    In this work we formulate the group theoretical description of free fall and projectile motion. We show that the kinematic equations for constant acceleration form a one-parameter group acting on a phase space. We define the group elements ϕ_t by their action on the points in the phase space. We also generalize this approach to projectile motion. We evaluate the group orbits with regard to their relation to the physical orbits of particles and to unphysical solutions. We note that the group theoretical formulation does not apply to more general cases involving a time-dependent acceleration. This method improves our understanding of the constant acceleration problem with its global approach. It is especially beneficial for students who want to pursue a career in theoretical physics.
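
The one-parameter group structure is easy to verify directly. A minimal sketch, with the acceleration value and phase-space point chosen arbitrarily for illustration: each ϕ_t advances a state (x, v) by time t under constant acceleration, and composition obeys ϕ_s ∘ ϕ_t = ϕ_{s+t}.

```python
A = 9.8  # constant acceleration (free fall); illustrative value

def phi(t):
    """Group element phi_t: advances a phase-space point (x, v)
    by time t under constant acceleration A."""
    def act(state):
        x, v = state
        return (x + v * t + 0.5 * A * t * t, v + A * t)
    return act

state = (0.0, 2.0)            # initial position and velocity

# Group law: phi_s(phi_t(p)) == phi_{s+t}(p)
lhs = phi(1.5)(phi(0.7)(state))
rhs = phi(2.2)(state)
print(lhs, rhs)               # identical up to floating-point rounding

# Identity and inverse: phi_0 is the identity, phi_{-t} undoes phi_t
back = phi(-0.7)(phi(0.7)(state))
```

Expanding ϕ_s(ϕ_t(x, v)) algebraically reproduces x + v(s+t) + ½A(s+t)², which is exactly why the kinematic equations close into a group; a time-dependent acceleration breaks this closure, as the abstract notes.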

  4. Biomechanical and Histopathologic Effects of Pulsed-Light Accelerated Epithelium-On/-Off Corneal Collagen Cross-Linking.

    PubMed

    Zhang, Xiaoyu; Sun, Ling; Shen, Yang; Tian, Mi; Zhao, Jing; Zhao, Yu; Li, Meiyan; Zhou, Xingtao

    2017-07-01

    This study aimed to compare the biomechanical and histopathologic effects of transepithelial and accelerated epithelium-off pulsed-light accelerated corneal collagen cross-linking (CXL). A total of 24 New Zealand rabbits were analyzed after sham operation (control) or transepithelial or epithelium-off operation (45 mW/cm² for both). The transepithelial group was treated with pulsed-light ultraviolet A for 5 minutes 20 seconds, and the epithelium-off group was treated for 90 seconds. Biomechanical testing, including ultimate stress, Young modulus, and the physiological modulus, was analyzed. Histological changes were evaluated by light microscopy and transmission electron microscopy. The stress-strain curve was nonlinear in both accelerated transepithelial and epithelium-off CXL groups. The stress and elastic moduli were all significantly higher in both experimental groups compared with the control group (P < 0.05), whereas there were no significant differences between the 2 treatment groups (P > 0.05). Six months after the operation, hematoxylin and eosin staining and transmission electron microscopy showed that the subcutaneous collagen fibers were arranged in a regular pattern, and the fiber density was higher in the experimental groups. Both transepithelial and accelerated epithelium-off CXL produced biomechanical and histopathologic improvements, which were not significantly different between the 2 pulsed-light accelerated CXL treatments.

  5. Accelerated Adaptive MGS Phase Retrieval

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang

    2011-01-01

    The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation of MGS significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm to allow parallel processing of certain applications that apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited for this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time, performing the matrix calculations on NVIDIA graphics cards. The graphics processing unit (GPU) is hardware specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model called CUDA is used to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of the NVIDIA GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies to accelerate the optical phase error characterization. With a single PC that contains four NVIDIA GTX-280 graphics cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
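
The underlying iteration can be illustrated without any GPU machinery. The sketch below is a plain Gerchberg-Saxton error-reduction loop in pure Python with a naive O(N²) DFT; it is not the MGS algorithm or the CUDA implementation, only the alternating magnitude projections that the FFT-heavy production code accelerates.

```python
import cmath
import random

def dft(x, inverse=False):
    """Naive discrete Fourier transform (O(N^2)); in a real code this
    is the FFT that the GPU accelerates."""
    N = len(x)
    s = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(s * 2j * cmath.pi * k * n / N)
               for n in range(N)) for k in range(N)]
    return [v / N for v in out] if inverse else out

def gerchberg_saxton(mag_obj, mag_fourier, iters=50, seed=0):
    """Recover a phase consistent with known magnitudes in both domains."""
    random.seed(seed)
    field = [m * cmath.exp(2j * cmath.pi * random.random()) for m in mag_obj]
    for _ in range(iters):
        F = dft(field)
        # enforce the Fourier-domain magnitudes, keep the phases
        F = [mf * cmath.exp(1j * cmath.phase(v)) for mf, v in zip(mag_fourier, F)]
        field = dft(F, inverse=True)
        # enforce the object-domain magnitudes, keep the phases
        field = [mo * cmath.exp(1j * cmath.phase(v)) for mo, v in zip(mag_obj, field)]
    return field

# Build a self-consistent test case from a known complex field.
truth = [complex(1, 0), complex(0, 1), complex(-0.5, 0.5), complex(0.2, -0.8)]
mag_obj = [abs(v) for v in truth]
mag_fourier = [abs(v) for v in dft(truth)]

def fourier_err(field):
    return sum((abs(v) - m) ** 2 for v, m in zip(dft(field), mag_fourier))

start = gerchberg_saxton(mag_obj, mag_fourier, iters=0)   # same random start
rec = gerchberg_saxton(mag_obj, mag_fourier, iters=50)
print(fourier_err(start), fourier_err(rec))   # the residual does not grow
```

Each iteration costs two transforms, which is why replacing the DFT with a GPU FFT dominates the speedup reported for AAMGS.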

  6. Enabling large-scale viscoelastic calculations via neural network acceleration

    NASA Astrophysics Data System (ADS)

    Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.

    2017-12-01

    One of the most significant challenges involved in efforts to understand the effects of repeated earthquake cycle activity are the computational costs of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have been previously possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.

  7. Electron-beam dynamics for an advanced flash-radiography accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Carl August Jr.

    2015-06-22

    Beam dynamics issues were assessed for a new linear induction electron accelerator. Special attention was paid to equilibrium beam transport, possible emittance growth, and beam stability. Especially problematic would be high-frequency beam instabilities that could blur individual radiographic source spots, low-frequency beam motion that could cause pulse-to-pulse spot displacement, and emittance growth that could enlarge the source spots. Beam physics issues were examined through theoretical analysis and computer simulations, including particle-in-cell (PIC) codes. Beam instabilities investigated included beam breakup (BBU), image displacement, diocotron, parametric envelope, ion hose, and the resistive wall instability. Beam corkscrew motion and emittance growth from beam mismatch were also studied. It was concluded that a beam with radiographic quality equivalent to the present accelerators at Los Alamos will result if the same engineering standards and construction details are upheld.

  8. Progress on China nuclear data processing code system

    NASA Astrophysics Data System (ADS)

    Liu, Ping; Wu, Xiaofei; Ge, Zhigang; Li, Songyang; Wu, Haicheng; Wen, Lili; Wang, Wenming; Zhang, Huanyu

    2017-09-01

    China is developing the nuclear data processing code Ruler, which can be used to produce multi-group cross sections and related quantities from evaluated nuclear data in the ENDF format [1]. Ruler includes modules for reconstructing cross sections over the entire energy range, generating Doppler-broadened cross sections for a given temperature, producing effective self-shielded cross sections in the unresolved energy range, calculating scattering cross sections in the thermal energy range, generating group cross sections and matrices, and preparing WIMS-D format data files for the reactor physics code WIMS-D [2]. Ruler is written in Fortran-90 and has been tested on 32-bit computers under the Windows-XP and Linux operating systems. Verification of Ruler has been performed by comparison with calculation results obtained with the NJOY99 [3] processing code, and validation has been performed using the WIMSD5B code.
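
At its core, the group cross-section generation that codes like Ruler perform is a flux-weighted average over each energy group, σ_g = Σ σ(E)φ(E) / Σ φ(E) for the points in group g. A toy sketch of that collapse follows; the cross-section shape, flux, and group edges are invented for illustration, and a production code integrates carefully over a union energy grid rather than sampling points.

```python
def collapse_to_groups(energies, sigma, flux, group_edges):
    """Flux-weighted collapse of pointwise cross sections onto an
    energy-group structure: sigma_g = sum(sigma*flux)/sum(flux) in g."""
    groups = []
    for lo, hi in zip(group_edges[:-1], group_edges[1:]):
        num = den = 0.0
        for e, s, f in zip(energies, sigma, flux):
            if lo <= e < hi:
                num += s * f
                den += f
        groups.append(num / den if den else 0.0)
    return groups

# Hypothetical pointwise data: a 1/sqrt(E) cross section, flat flux.
energies = [0.1 * i for i in range(1, 101)]     # eV
sigma = [1.0 / e ** 0.5 for e in energies]      # barns (toy shape)
flux = [1.0] * len(energies)                    # flat weighting spectrum
edges = [0.0, 2.5, 5.0, 10.1]                   # 3-group structure
g = collapse_to_groups(energies, sigma, flux, edges)
print(g)   # group averages decrease with energy for this shape
```

The choice of weighting spectrum φ(E) is what self-shielding treatments refine in the unresolved range.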

  9. Physics and engineering design of the accelerator and electron dump for SPIDER

    NASA Astrophysics Data System (ADS)

    Agostinetti, P.; Antoni, V.; Cavenago, M.; Chitarin, G.; Marconato, N.; Marcuzzi, D.; Pilan, N.; Serianni, G.; Sonato, P.; Veltri, P.; Zaccaria, P.

    2011-06-01

    The ITER Neutral Beam Test Facility (PRIMA) is planned to be built at Consorzio RFX (Padova, Italy). PRIMA includes two experimental devices: a full size ion source with low voltage extraction called SPIDER and a full size neutral beam injector at full beam power called MITICA. SPIDER is the first experimental device to be built and operated, aiming at testing the extraction of a negative ion beam (made of H⁻ and in a later stage D⁻ ions) from an ITER size ion source. The main requirements of this experiment are an H⁻/D⁻ extracted current density larger than 355/285 A m⁻², an energy of 100 keV and a pulse duration of up to 3600 s. Several analytical and numerical codes have been used for the design optimization process, some of which are commercial codes, while some others were developed ad hoc. The codes are used to simulate the electrical fields (SLACCAD, BYPO, OPERA), the magnetic fields (OPERA, ANSYS, COMSOL, PERMAG), the beam aiming (OPERA, IRES), the pressure inside the accelerator (CONDUCT, STRIP), the stripping reactions and transmitted/dumped power (EAMCC), the operating temperature, stress and deformations (ALIGN, ANSYS) and the heat loads on the electron dump (ED) (EDAC, BACKSCAT). An integrated approach, taking into consideration at the same time physics and engineering aspects, has been adopted all along the design process. Particular care has been taken in investigating the many interactions between physics and engineering aspects of the experiment. According to the 'robust design' philosophy, a comprehensive set of sensitivity analyses was performed, in order to investigate the influence of the design choices on the most relevant operating parameters. The design of the SPIDER accelerator, here described, has been developed in order to satisfy with reasonable margin all the requirements given by ITER, from the physics and engineering points of view. In particular, a new approach to the compensation of unwanted beam deflections inside the accelerator

  10. Consortium of accelerated pavement testers (CAPT).

    DOT National Transportation Integrated Search

    2016-05-01

    FHWA and a group of state Departments of Transportation from nine of the 14 US Accelerated Pavement Testing (APT) facilities have proposed the creation of a joint or pooled-fund program to encourage coordination among the various facilities and...

  11. Geospace simulations using modern accelerator processor technology

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Raeder, J.; Larson, D. J.

    2009-12-01

    OpenGGCM (Open Geospace General Circulation Model) is a well-established numerical code simulating the Earth's space environment. The most computing intensive part is the MHD (magnetohydrodynamics) solver that models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the sun. Like other global magnetosphere codes, OpenGGCM's realism is currently limited by computational constraints on grid resolution. OpenGGCM has been ported to make use of the added computational power of modern accelerator-based processor architectures, in particular the Cell processor. The Cell architecture is a novel inhomogeneous multicore architecture capable of achieving up to 230 GFLOPS on a single chip. The University of New Hampshire recently acquired a PowerXCell 8i based computing cluster, and here we will report initial performance results of OpenGGCM. Realizing the high theoretical performance of the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallelization approach: On the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory-limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We use a modern technique, automatic code generation, which shields the application programmer from having to deal with all of the implementation details just described, keeping the code much more easily maintainable. Our preliminary results indicate excellent performance, a speed-up of a factor of 30 compared to the unoptimized version.

  12. High power ring methods and accelerator driven subcritical reactor application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tahar, Malek Haj

    2016-08-07

    High power proton accelerators allow providing, by spallation reaction, the neutron fluxes necessary in the synthesis of fissile material, starting from Uranium 238 or Thorium 232. This is the basis of the concept of sub-critical operation of a reactor, for energy production or nuclear waste transmutation, with the objective of achieving a cleaner, safer and more efficient process than today's technologies allow. Designing, building and operating a proton accelerator in the 500-1000 MeV energy range, CW regime, MW power class remains a challenge nowadays. There is a limited number of installations at present achieving beam characteristics in that class, e.g., PSI in Villigen, a 590 MeV CW beam from a cyclotron, and SNS in Oak Ridge, a 1 GeV pulsed beam from a linear accelerator, in addition to projects such as the ESS in Europe, a 5 MW beam from a linear accelerator. Furthermore, coupling an accelerator to a sub-critical nuclear reactor is a challenging proposition: some of the key issues/requirements are the design of a spallation target to withstand high power densities as well as ensuring the safety of the installation. These two domains are the grounds of the PhD work: the focus is on high power ring methods in the frame of the KURRI FFAG collaboration in Japan, where upgrading the installation towards high intensity is crucial to demonstrate the high beam power capability of FFAGs. Thus, modeling of the beam dynamics and benchmarking of different codes were undertaken to validate the simulation results. Experimental results revealed some major losses that need to be understood and eventually overcome. By developing analytical models that account for the field defects, major sources of imperfection in the design of scaling FFAGs were identified that explain the important tune variations resulting in the crossing of several betatron resonances. A new formula is derived to compute the tunes and properties established that characterize the effect of the field

  13. The relation between tilt table and acceleration-tolerance and their dependence on stature and physical fitness

    NASA Technical Reports Server (NTRS)

    Klein, K. E.; Backhausen, F.; Bruner, H.; Eichhorn, J.; Jovy, D.; Schotte, J.; Vogt, L.; Wegman, H. M.

    1980-01-01

    A group of 12 highly trained athletes and a group of 12 untrained students were subjected to passive changes of position on a tilt table and to positive accelerations in a centrifuge. During a 20 min tilt, including two additional respiratory maneuvers, the number of faints and average cardiovascular responses did not differ significantly between the groups. During linear increase of acceleration, the average blackout level was almost identical in both groups. Statistically significant coefficients of product-moment correlation for various relations were obtained. The coefficient of multiple determination computed for the dependence of acceleration tolerance on heart-eye distance and systolic blood pressure at rest allows the explanation of almost 50% of the variation of acceleration tolerance. The maximum oxygen uptake showed the expected significant correlation to the heart rate at rest, but not to the acceleration tolerance or to the cardiovascular responses to tilting.
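
The product-moment correlations reported above are ordinary Pearson coefficients, r = cov(x, y) / (σ_x σ_y). A minimal sketch with numbers invented for illustration (not the study's measurements):

```python
import math

def pearson_r(xs, ys):
    """Product-moment correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data in the spirit of the study: heart-eye distance (cm)
# versus blackout level (G); larger hydrostatic column, lower tolerance.
dist = [30.0, 31.5, 29.0, 33.0, 32.0, 28.5]
tol = [4.6, 4.4, 4.9, 4.1, 4.3, 5.0]
print(round(pearson_r(dist, tol), 3))   # strong negative correlation
```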

  14. [Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].

    PubMed

    Furuta, Takuya; Sato, Tatsuhiko

    2015-01-01

    Time-consuming Monte Carlo dose calculations have become feasible owing to developments in computer technology, in particular the recent emergence of multi-core high-performance computers. Parallel computing has therefore become key to achieving good performance from software programs. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.
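
Both parallelization modes follow the same pattern: split the particle histories across workers and merge the tallies. A toy sketch of that pattern, using Python threads as stand-ins for MPI ranks or OpenMP threads (this is not PHITS code, and Python threads illustrate only the structure, not a real parallel speedup; the "transport" is a hit/miss estimate of π):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def simulate_histories(n, seed):
    """One worker's share of Monte Carlo histories: here a hit/miss
    estimate of pi stands in for particle-transport tallies."""
    rng = random.Random(seed)       # independent stream per worker
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y < 1.0:
            hits += 1
    return hits

def parallel_estimate(total, workers=4):
    # Split histories across workers and merge tallies at the end,
    # the same reduce step MPI ranks or OpenMP threads perform.
    per = total // workers
    with ThreadPoolExecutor(max_workers=workers) as ex:
        tallies = list(ex.map(simulate_histories, [per] * workers,
                              range(workers)))
    return 4.0 * sum(tallies) / (per * workers)

est = parallel_estimate(200_000)
print(est)   # approaches pi as the number of histories grows
```

The distributed-memory case additionally requires explicit communication for the final reduction, while the shared-memory case requires care that each thread tallies into private storage; the decomposition of histories is identical.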

  15. Covariant Uniform Acceleration

    NASA Astrophysics Data System (ADS)

    Friedman, Yaakov; Scarr, Tzvi

    2013-04-01

    We derive a 4D covariant Relativistic Dynamics Equation. This equation canonically extends the 3D relativistic dynamics equation , where F is the 3D force and p = m0γv is the 3D relativistic momentum. The standard 4D equation is only partially covariant. To achieve full Lorentz covariance, we replace the four-force F by a rank 2 antisymmetric tensor acting on the four-velocity. By taking this tensor to be constant, we obtain a covariant definition of uniformly accelerated motion. This solves a problem of Einstein and Planck. We compute explicit solutions for uniformly accelerated motion. The solutions are divided into four Lorentz-invariant types: null, linear, rotational, and general. For null acceleration, the worldline is cubic in the time. Linear acceleration covariantly extends 1D hyperbolic motion, while rotational acceleration covariantly extends pure rotational motion. We use Generalized Fermi-Walker transport to construct a uniformly accelerated family of inertial frames which are instantaneously comoving to a uniformly accelerated observer. We explain the connection between our approach and that of Mashhoon. We show that our solutions of uniformly accelerated motion have constant acceleration in the comoving frame. Assuming the Weak Hypothesis of Locality, we obtain local spacetime transformations from a uniformly accelerated frame K' to an inertial frame K. The spacetime transformations between two uniformly accelerated frames with the same acceleration are Lorentz. We compute the metric at an arbitrary point of a uniformly accelerated frame. We obtain velocity and acceleration transformations from a uniformly accelerated system K' to an inertial frame K. We introduce the 4D velocity, an adaptation of Horwitz and Piron s notion of "off-shell." We derive the general formula for the time dilation between accelerated clocks. We obtain a formula for the angular velocity of a uniformly accelerated object. 
Every rest point of K' is uniformly accelerated, and
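
    The covariant equation described above, du/dτ = Au with a constant antisymmetric tensor A, is easy to check numerically. The sketch below is my illustration, not the authors' code (units with c = 1, generator strength g chosen arbitrarily): it integrates the linear-acceleration type and recovers 1D hyperbolic motion while conserving the Minkowski norm of the four-velocity.

```python
# Sketch: integrate du/dtau = A u for a constant antisymmetric A_{mu nu}
# (illustrative values; not the paper's code). Antisymmetry makes
# u_mu du^mu/dtau = 0, so the Minkowski norm of u is conserved.
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])    # metric, signature (+,-,-,-)
g = 1.0                                   # acceleration magnitude (c = 1)

A_lower = np.zeros((4, 4))                # "linear" type: boost in t-x plane
A_lower[0, 1], A_lower[1, 0] = g, -g      # antisymmetric lower-index tensor
A = eta @ A_lower                         # mixed tensor A^mu_nu

def propagate(u0, tau, steps=10000):
    """4th-order Runge-Kutta integration of du/dtau = A u."""
    u, h = np.array(u0, float), tau / steps
    for _ in range(steps):
        k1 = A @ u
        k2 = A @ (u + 0.5 * h * k1)
        k3 = A @ (u + 0.5 * h * k2)
        k4 = A @ (u + h * k3)
        u = u + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

u1 = propagate([1.0, 0.0, 0.0, 0.0], tau=1.0)  # observer starting at rest
norm = u1 @ eta @ u1                           # stays 1 along the worldline
```

    For this generator the exact solution is u = (cosh gτ, sinh gτ, 0, 0), i.e. hyperbolic motion.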

  16. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing

    PubMed Central

    Fang, Ye; Ding, Yun; Feinstein, Wei P.; Koppelman, David M.; Moreno, Juana; Jarrell, Mark; Ramanujam, J.; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300
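
    Since GeauxDock is built upon a Monte Carlo search, the core accept/reject loop can be illustrated in a few lines. This is a toy one-dimensional "pose" with a made-up quadratic energy, not GeauxDock's scoring function or code:

```python
# Minimal Metropolis Monte Carlo sketch: propose a random pose perturbation,
# accept with probability min(1, exp(-dE/kT)). The energy is a toy stand-in.
import math, random

def energy(x):
    return (x - 2.0) ** 2          # toy "docking score" with minimum at x = 2

def metropolis(x0, kT=0.3, step=0.5, n=20000, seed=42):
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for _ in range(n):
        xt = x + rng.uniform(-step, step)           # trial move
        et = energy(xt)
        if et < e or rng.random() < math.exp(-(et - e) / kT):
            x, e = xt, et                           # Metropolis acceptance
            if e < best_e:
                best_x, best_e = x, e               # track best pose found
    return best_x, best_e

best_x, best_e = metropolis(x0=-5.0)               # chain settles near x = 2
```

    A real docking engine perturbs rigid-body and torsional degrees of freedom and evaluates the full scoring function, but the acceptance logic is the same.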

  17. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing.

    PubMed

    Fang, Ye; Ding, Yun; Feinstein, Wei P; Koppelman, David M; Moreno, Juana; Jarrell, Mark; Ramanujam, J; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249.

  18. Analyzing radial acceleration with a smartphone acceleration sensor

    NASA Astrophysics Data System (ADS)

    Vogt, Patrik; Kuhn, Jochen

    2013-03-01

    This paper continues the sequence of experiments in this column using the acceleration sensor of smartphones (for a description of the function and use of the acceleration sensor, see Ref. 1), in this case analyzing radial acceleration.
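
    For reference, the quantity the sensor measures in such experiments is the centripetal acceleration a_r = v²/r = ω²r. A quick numeric check (the example values are mine, not from the paper):

```python
# Radial (centripetal) acceleration for uniform circular motion: a_r = omega^2 r.
import math

def radial_acceleration(freq_hz, radius_m):
    omega = 2.0 * math.pi * freq_hz    # angular velocity in rad/s
    return omega ** 2 * radius_m       # a_r = omega^2 * r = v^2 / r

a = radial_acceleration(freq_hz=0.5, radius_m=0.10)  # sensor 10 cm off axis
```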

  19. Study of coherent synchrotron radiation effects by means of a new simulation code based on the non-linear extension of the operator splitting method

    NASA Astrophysics Data System (ADS)

    Dattoli, G.; Migliorati, M.; Schiavi, A.

    2007-05-01

    Coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of instabilities due to CSR demands accurate descriptions, capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of these types of problems should be fast and reliable, conditions that are rarely achieved at the same time. In the past, codes based on Lie algebraic techniques have been very efficient in treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treating CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of non-linear contributions due to wake field effects. The proposed solution method exploits an algebraic technique that uses exponential operators. We show that the integration procedure is capable of reproducing the onset of instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, considerations on the threshold of the instability are also developed.
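
    The exponential-operator idea can be shown with a tiny splitting example (a generic Strang-splitting toy; it assumes nothing about the paper's actual Vlasov solver): for non-commuting generators L1 and L2, exp(h(L1+L2)) is approximated by exp(hL1/2) exp(hL2) exp(hL1/2), accurate to second order per step.

```python
# Toy illustration of operator splitting with exponential operators
# (Strang splitting), not the paper's Vlasov code.
import numpy as np

def expm(M, terms=30):
    """Matrix exponential via truncated Taylor series (fine for small ||M||)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

L1 = np.array([[0.0, 1.0], [0.0, 0.0]])   # two non-commuting generators
L2 = np.array([[0.0, 0.0], [-1.0, 0.0]])

h, n = 0.01, 100                           # 100 Strang steps to t = 1
step = expm(0.5 * h * L1) @ expm(h * L2) @ expm(0.5 * h * L1)
u = np.array([1.0, 0.0])
for _ in range(n):
    u = step @ u                           # split-operator propagation

exact = expm(L1 + L2) @ np.array([1.0, 0.0])   # exp(t(L1+L2)) at t = 1
err = np.linalg.norm(u - exact)                # O(h^2) global error
```

    Here L1 + L2 generates a rotation, so the exact propagator is known in closed form and the O(h²) convergence is easy to verify.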

  20. GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method

    NASA Astrophysics Data System (ADS)

    Gong, Chunye; Liu, Jie; Chi, Lihua; Huang, Haowei; Fang, Jingyue; Gong, Zhenghu

    2011-07-01

    The Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great capability for solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU-accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with a vacuum boundary condition. The relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, programming effort, and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
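
    The source-iteration procedure at the heart of such solvers is easy to sketch in one dimension. This is a toy S2 slab problem with diamond differencing and made-up cross sections, shown only to illustrate the sweep-then-update structure; Sweep3D itself sweeps a 3D Cartesian grid:

```python
# Toy 1D discrete ordinates (S2) source iteration with diamond differencing.
# Cross sections and geometry are illustrative, not from the paper.
import numpy as np

nx, dx = 50, 0.1
sigma_t, sigma_s, q = 1.0, 0.5, 1.0        # total, scattering, fixed source
mu = np.array([-1.0, 1.0]) / np.sqrt(3.0)  # S2 ordinates, weight 1 each

phi = np.zeros(nx)                         # scalar flux
for it in range(200):                      # source iteration loop
    src = 0.5 * (sigma_s * phi + q)        # isotropic emission density
    phi_new = np.zeros(nx)
    for mu_m in mu:                        # one spatial sweep per ordinate
        psi_in = 0.0                       # vacuum boundary condition
        cells = range(nx) if mu_m > 0 else range(nx - 1, -1, -1)
        a = abs(mu_m) / dx
        for i in cells:                    # diamond-difference cell update
            psi_out = ((a - 0.5 * sigma_t) * psi_in + src[i]) / (a + 0.5 * sigma_t)
            phi_new[i] += 0.5 * (psi_in + psi_out)   # cell-average angular flux
            psi_in = psi_out
    diff = np.max(np.abs(phi_new - phi))
    phi = phi_new
    if diff < 1e-8:                        # converged scattering source
        break
```

    The flux peaks mid-slab and stays below the infinite-medium limit q/(σt−σs) = 2 because of leakage through the vacuum boundaries.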

  1. Simulations of laser-driven ion acceleration from a thin CH target

    NASA Astrophysics Data System (ADS)

    Park, Jaehong; Bulanov, Stepan; Ji, Qing; Steinke, Sven; Treffert, Franziska; Vay, Jean-Luc; Schenkel, Thomas; Esarey, Eric; Leemans, Wim; Vincenti, Henri

    2017-10-01

    2D and 3D computer simulations of laser-driven ion acceleration from a thin CH foil using the code WARP were performed. As the foil thickness varies from a few nm to μm, the simulations confirm that the acceleration mechanism transitions from RPA (radiation pressure acceleration) to TNSA (target normal sheath acceleration). In the TNSA regime, with a CH target thickness of 1 μm and a pre-plasma ahead of the target, the simulations show production of a collimated proton beam with a maximum energy of about 10 MeV. This agrees with the experimental results obtained at the BELLA laser facility (I ≈ 5 × 10^18 W/cm2, λ = 800 nm). Furthermore, the dependence of the maximum proton energy on different setups of the initialization, i.e., different angles of laser incidence from the target normal axis and different gradient scales and distributions of the pre-plasma, was explored. This work was supported by LDRD funding from LBNL, provided by the U.S. DOE under Contract No. DE-AC02-05CH11231, and used resources of the NERSC, a DOE Office of Science User Facility supported by the U.S. DOE under Contract No. DE-AC02-05CH11231.

  2. GPU accelerated implementation of NCI calculations using promolecular density.

    PubMed

    Rubez, Gaëtan; Etancelin, Jean-Matthieu; Vigouroux, Xavier; Krajecki, Michael; Boisson, Jean-Charles; Hénon, Eric

    2017-05-30

    The NCI approach is a modern tool to reveal chemical noncovalent interactions. It is particularly attractive for describing ligand-protein binding. A custom implementation of NCI using promolecular density is presented. It is designed to leverage the computational power of NVIDIA graphics processing unit (GPU) accelerators through the CUDA programming model. The performance of three code versions is examined on a test set of 144 systems. NCI calculations are particularly well suited to the GPU architecture, which reduces the computational time drastically. On a single compute node, the dual-GPU version leads to a 39-fold improvement for the biggest instance compared to the optimal OpenMP parallel run (C code, icc compiler) with 16 CPU cores. Energy consumption measurements carried out on both CPU and GPU NCI tests show that the GPU approach provides substantial energy savings. © 2017 Wiley Periodicals, Inc.
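
    The quantity an NCI calculation evaluates on a grid is the reduced density gradient s = |∇ρ| / (2(3π²)^(1/3) ρ^(4/3)) of a promolecular density built from spherical atomic terms. A toy sketch with single-exponential "atoms" (the coefficients and decay lengths are illustrative, not the published promolecular parameters):

```python
# Toy reduced density gradient (RDG) over a promolecular-style density:
# rho(r) = sum_i c_i * exp(-|r - R_i| / d_i). Parameters are made up.
import numpy as np

atoms = [(np.array([0.0, 0.0, 0.0]), 1.0, 0.5),    # (position, c_i, d_i)
         (np.array([1.4, 0.0, 0.0]), 1.0, 0.5)]

def rho_and_grad(r):
    rho, grad = 0.0, np.zeros(3)
    for pos, c, d in atoms:
        dr = r - pos
        dist = np.linalg.norm(dr)
        t = c * np.exp(-dist / d)
        rho += t
        if dist > 1e-12:
            grad += -t / d * dr / dist      # analytic gradient of each term
    return rho, grad

def rdg(r):
    """s(r) = |grad rho| / (2 (3 pi^2)^(1/3) rho^(4/3))."""
    rho, grad = rho_and_grad(r)
    return np.linalg.norm(grad) / (2.0 * (3.0 * np.pi ** 2) ** (1.0 / 3.0)
                                   * rho ** (4.0 / 3.0))

s_mid = rdg(np.array([0.7, 0.0, 0.0]))      # midpoint: gradients cancel, s -> 0
s_far = rdg(np.array([0.7, 2.0, 0.0]))      # off-axis point: larger s
```

    Low-s regions between the atoms are exactly the "interaction" regions the method visualizes, which is why the grid evaluation parallelizes so well on GPUs.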

  3. Stepwise Distributed Open Innovation Contests for Software Development: Acceleration of Genome-Wide Association Analysis.

    PubMed

    Hill, Andrew; Loh, Po-Ru; Bharadwaj, Ragu B; Pons, Pascal; Shang, Jingbo; Guinan, Eva; Lakhani, Karim; Kilty, Iain; Jelinsky, Scott A

    2017-05-01

    The association of differing genotypes with disease-related phenotypic traits offers great potential to both help identify new therapeutic targets and support stratification of patients who would gain the greatest benefit from specific drug classes. Development of low-cost genotyping and sequencing has made collecting large-scale genotyping data routine in population and therapeutic intervention studies. In addition, a range of new technologies is being used to capture numerous new and complex phenotypic descriptors. As a result, genotype and phenotype datasets have grown exponentially. Genome-wide association studies associate genotypes and phenotypes using methods such as logistic regression. As existing tools for association analysis limit the efficiency by which value can be extracted from increasing volumes of data, there is a pressing need for new software tools that can accelerate association analyses on large genotype-phenotype datasets. Using open innovation (OI) and contest-based crowdsourcing, the logistic regression analysis in a leading, community-standard genetics software package (PLINK 1.07) was substantially accelerated. OI allowed us to do this in <6 months by providing rapid access to highly skilled programmers with specialized, difficult-to-find skill sets. Through a crowd-based contest a combination of computational, numeric, and algorithmic approaches was identified that accelerated the logistic regression in PLINK 1.07 by 18- to 45-fold. Combining contest-derived logistic regression code with coarse-grained parallelization, multithreading, and associated changes to data initialization code further developed through distributed innovation, we achieved an end-to-end speedup of 591-fold for a data set size of 6678 subjects by 645 863 variants, compared to PLINK 1.07's logistic regression. This represents a reduction in run time from 4.8 hours to 29 seconds. Accelerated logistic regression code developed in this project has been incorporated
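
    The kernel that was accelerated is, at bottom, a per-variant Newton-Raphson (IRLS) logistic regression fit. A self-contained toy version on simulated genotype data (my sketch of the underlying algorithm, not the contest-derived PLINK code):

```python
# Newton-Raphson / IRLS logistic regression on a simulated case-control
# genotype association (toy data; illustrative of the PLINK-style kernel).
import numpy as np

rng = np.random.default_rng(0)
n = 500
genotype = rng.integers(0, 3, n).astype(float)    # 0/1/2 minor-allele counts
X = np.column_stack([np.ones(n), genotype])       # intercept + genotype
beta_true = np.array([-0.5, 0.8])
p_true = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = (rng.random(n) < p_true).astype(float)        # simulated phenotype

beta = np.zeros(2)
for _ in range(25):                               # Newton-Raphson updates
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)                             # Bernoulli variance weights
    grad = X.T @ (y - p)                          # score vector
    hess = X.T @ (X * W[:, None])                 # Fisher information
    step = np.linalg.solve(hess, grad)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break                                     # converged
```

    Running this fit once per variant across hundreds of thousands of variants is exactly the loop whose vectorization and parallelization produced the reported speedups.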

  4. Empirical validation of the triple-code model of numerical processing for complex math operations using functional MRI and group Independent Component Analysis of the mental addition and subtraction of fractions.

    PubMed

    Schmithorst, Vincent J; Brown, Rhonda Douglas

    2004-07-01

    The suitability of a previously hypothesized triple-code model of numerical processing, involving analog magnitude, auditory verbal, and visual Arabic codes of representation, was investigated for the complex mathematical task of the mental addition and subtraction of fractions. Functional magnetic resonance imaging (fMRI) data from 15 normal adult subjects were processed using exploratory group Independent Component Analysis (ICA). Separate task-related components were found with activation in bilateral inferior parietal, left perisylvian, and ventral occipitotemporal areas. These results support the hypothesized triple-code model corresponding to the activated regions found in the individual components and indicate that the triple-code model may be a suitable framework for analyzing the neuropsychological bases of the performance of complex mathematical tasks. Copyright 2004 Elsevier Inc.

  5. Coding System for the First Grade Reading Group Study.

    ERIC Educational Resources Information Center

    Brophy, Jere; And Others

    The First-Grade Reading Group Study is an experimental examination of teaching behaviors and their effects in first-grade reading groups. The specific teaching behaviors of interest are defined by a model for small group instruction which describes organization and management of the class, and ways of responding to children's answers that are…

  6. Concatenated Coding Using Trellis-Coded Modulation

    NASA Technical Reports Server (NTRS)

    Thompson, Michael W.

    1997-01-01

    In the late seventies and early eighties a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK) or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted to developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and Reed-Solomon (RS) coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, we see that TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for similar concatenated schemes that use convolutional codes. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.
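
    The bandwidth-expansion comparison can be made concrete: when all the redundancy sits in the RS outer code, the expansion is n/k − 1. Using the common RS(255, 223) code as an example (my choice of parameters, consistent with the quoted 10-50% range, not a code named in the report):

```python
# Bandwidth expansion of a concatenated scheme whose only redundancy is the
# (n, k) Reed-Solomon outer code: expansion = n/k - 1.
def rs_bandwidth_expansion(n, k):
    return n / k - 1.0

# e.g. the widely used RS(255, 223) code: about 14% expansion
expansion = rs_bandwidth_expansion(255, 223)
```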

  7. Accelerators, Beams And Physical Review Special Topics - Accelerators And Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siemann, R.H.; /SLAC

    Accelerator science and technology have evolved as accelerators became larger and important to a broad range of science. Physical Review Special Topics - Accelerators and Beams was established to serve the accelerator community as a timely, widely circulated, international journal covering the full breadth of accelerators and beams. The history of the journal and the innovations associated with it are reviewed.

  8. Electron-Beam Dynamics for an Advanced Flash-Radiography Accelerator

    DOE PAGES

    Ekdahl, Carl

    2015-11-17

    Beam dynamics issues were assessed for a new linear induction electron accelerator being designed for multipulse flash radiography of large explosively driven hydrodynamic experiments. Special attention was paid to equilibrium beam transport, possible emittance growth, and beam stability. Especially problematic would be high-frequency beam instabilities that could blur individual radiographic source spots, low-frequency beam motion that could cause pulse-to-pulse spot displacement, and emittance growth that could enlarge the source spots. Furthermore, beam physics issues were examined through theoretical analysis and computer simulations, including particle-in-cell codes. Beam instabilities investigated included beam breakup, image displacement, diocotron, parametric envelope, ion hose, and the resistive wall instability. The beam corkscrew motion and emittance growth from beam mismatch were also studied. It was concluded that a beam with radiographic quality equivalent to the present accelerators at Los Alamos National Laboratory will result if the same engineering standards and construction details are upheld.

  9. Accelerator System Model (ASM) user manual with physics and engineering model documentation. ASM version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1993-07-01

    The Accelerator System Model (ASM) is a computer program developed to model proton radiofrequency accelerators and to carry out system-level trade studies. The ASM FORTRAN subroutines are incorporated into an intuitive graphical user interface which provides for the "construction" of the accelerator in a window on the computer screen. The interface is based on the Shell for Particle Accelerator Related Codes (SPARC) software technology written for the Macintosh operating system in the C programming language. This User Manual describes the operation and use of the ASM application within the SPARC interface. The Appendix provides a detailed description of the physics and engineering models used in ASM. ASM Version 1.0 is a joint project of G. H. Gillespie Associates, Inc. and the Accelerator Technology (AT) Division of the Los Alamos National Laboratory. Neither the ASM Version 1.0 software nor this ASM Documentation may be reproduced without the expressed written consent of both the Los Alamos National Laboratory and G. H. Gillespie Associates, Inc.

  10. The effect of cosmic-ray acceleration on supernova blast wave dynamics

    NASA Astrophysics Data System (ADS)

    Pais, M.; Pfrommer, C.; Ehlert, K.; Pakmor, R.

    2018-05-01

    Non-relativistic shocks accelerate ions to highly relativistic energies provided that the orientation of the magnetic field is closely aligned with the shock normal (quasi-parallel shock configuration). In contrast, quasi-perpendicular shocks do not efficiently accelerate ions. We model this obliquity-dependent acceleration process in a spherically expanding blast wave setup with the moving-mesh code AREPO for different magnetic field morphologies, ranging from homogeneous to turbulent configurations. A Sedov-Taylor explosion in a homogeneous magnetic field generates an oblate ellipsoidal shock surface due to the slower propagating blast wave in the direction of the magnetic field. This is because of the efficient cosmic ray (CR) production in the quasi-parallel polar cap regions, which softens the equation of state and increases the compressibility of the post-shock gas. We find that the solution remains self-similar because the ellipticity of the propagating blast wave stays constant in time. This enables us to derive an effective ratio of specific heats for a composite of thermal gas and CRs as a function of the maximum acceleration efficiency. We finally discuss the behavior of supernova remnants expanding into a turbulent magnetic field with varying coherence lengths. For a maximum CR acceleration efficiency of about 15 per cent at quasi-parallel shocks (as suggested by kinetic plasma simulations), we find an average efficiency of about 5 per cent, independent of the assumed magnetic coherence length.
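
    One standard way to express an effective adiabatic index for a composite of thermal gas and CRs is the pressure-weighted combination below. This is a common textbook-style approximation shown only to illustrate the idea; the paper derives its own expression as a function of the maximum acceleration efficiency:

```python
# Pressure-weighted effective adiabatic index for a thermal gas + CR mixture
# (illustrative approximation, not the paper's derived formula).
def gamma_eff(w_cr, gamma_th=5.0 / 3.0, gamma_cr=4.0 / 3.0):
    """w_cr = P_cr / (P_th + P_cr), the cosmic-ray pressure fraction."""
    return (1.0 - w_cr) * gamma_th + w_cr * gamma_cr

g0 = gamma_eff(0.0)     # pure thermal gas: 5/3
g1 = gamma_eff(1.0)     # pure relativistic CRs: 4/3
```

    A larger CR pressure fraction lowers the effective index toward 4/3, which softens the equation of state and raises the post-shock compressibility, as described above.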

  11. Connection anonymity analysis in coded-WDM PONs

    NASA Astrophysics Data System (ADS)

    Sue, Chuan-Ching

    2008-04-01

    A coded wavelength division multiplexing passive optical network (WDM PON) is presented for fiber to the home (FTTH) systems to protect against eavesdropping. The proposed scheme applies spectral amplitude coding (SAC) with a unipolar maximal-length sequence (M-sequence) code matrix to generate a specific signature address (coding) and to retrieve its matching address codeword (decoding) by exploiting the cyclic properties inherent in array waveguide grating (AWG) routers. In addition to ensuring the confidentiality of user data, the proposed coded-WDM scheme is also a suitable candidate for the physical layer with connection anonymity. Under the assumption that the eavesdropper applies a photo-detection strategy, it is shown that the coded WDM PON outperforms the conventional TDM PON and WDM PON schemes in terms of a higher degree of connection anonymity. Additionally, the proposed scheme allows the system operator to partition the optical network units (ONUs) into appropriate groups so as to achieve a better degree of anonymity.
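
    The M-sequence signature codes at the heart of the SAC scheme can be generated with a linear feedback shift register; their two-valued cyclic autocorrelation (peak n at zero shift, −1 elsewhere in bipolar form) is the property the decoder exploits. A minimal sketch for the length-7 sequence with taps from x³ + x + 1 (this illustrates the code family only, not the paper's AWG-based implementation):

```python
# Generate a maximal-length (M-)sequence with a Fibonacci LFSR and verify
# its two-valued cyclic autocorrelation. Taps correspond to x^3 + x + 1.
def lfsr_msequence(taps=(3, 1), nbits=3):
    state = [1] * nbits                       # any nonzero seed works
    seq = []
    for _ in range(2 ** nbits - 1):           # period of an M-sequence
        seq.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return seq

seq = lfsr_msequence()                        # 7-chip M-sequence of 0s and 1s
bipolar = [2 * b - 1 for b in seq]            # map {0,1} -> {-1,+1}
n = len(seq)
autocorr = [sum(bipolar[i] * bipolar[(i + s) % n] for i in range(n))
            for s in range(n)]                # peak n at shift 0, -1 elsewhere
```

    The cyclic shifts of one M-sequence form the rows of the SAC code matrix, which is what lets an AWG router's cyclic wavelength routing do the encoding and decoding.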

  12. Combined Modeling of Acceleration, Transport, and Hydrodynamic Response in Solar Flares. 1; The Numerical Model

    NASA Technical Reports Server (NTRS)

    Liu, Wei; Petrosian, Vahe; Mariska, John T.

    2009-01-01

    Acceleration and transport of high-energy particles and fluid dynamics of atmospheric plasma are interrelated aspects of solar flares, but for convenience and simplicity they were artificially separated in the past. We present here self-consistently combined Fokker-Planck modeling of particles and hydrodynamic simulation of flare plasma. Energetic electrons are modeled with the Stanford unified code of acceleration, transport, and radiation, while plasma is modeled with the Naval Research Laboratory flux tube code. We calculated the collisional heating rate directly from the particle transport code, which is more accurate than those in previous studies based on approximate analytical solutions. We repeated the simulation of Mariska et al. with an injection of power-law, downward-beamed electrons using the new heating rate. For this case, a ~10% difference was found from their old result. We also used a more realistic spectrum of injected electrons provided by the stochastic acceleration model, which has a smooth transition from a quasi-thermal background at low energies to a nonthermal tail at high energies. The inclusion of low-energy electrons results in relatively more heating in the corona (versus the chromosphere) and thus a larger downward heat conduction flux. The interplay of electron heating, conduction, and radiative loss leads to stronger chromospheric evaporation than obtained in previous studies, which had a deficit in low-energy electrons due to an arbitrarily assumed low-energy cutoff. The energy and spatial distributions of energetic electrons and bremsstrahlung photons bear signatures of the changing density distribution caused by chromospheric evaporation. In particular, the density jump at the evaporation front gives rise to enhanced emission, which, in principle, can be imaged by X-ray telescopes. This model can be applied to investigate a variety of high-energy processes in solar, space, and astrophysical plasmas.

  13. COMBINED MODELING OF ACCELERATION, TRANSPORT, AND HYDRODYNAMIC RESPONSE IN SOLAR FLARES. I. THE NUMERICAL MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Wei; Petrosian, Vahe; Mariska, John T.

    2009-09-10

    Acceleration and transport of high-energy particles and fluid dynamics of atmospheric plasma are interrelated aspects of solar flares, but for convenience and simplicity they were artificially separated in the past. We present here self-consistently combined Fokker-Planck modeling of particles and hydrodynamic simulation of flare plasma. Energetic electrons are modeled with the Stanford unified code of acceleration, transport, and radiation, while plasma is modeled with the Naval Research Laboratory flux tube code. We calculated the collisional heating rate directly from the particle transport code, which is more accurate than those in previous studies based on approximate analytical solutions. We repeated the simulation of Mariska et al. with an injection of power-law, downward-beamed electrons using the new heating rate. For this case, a ~10% difference was found from their old result. We also used a more realistic spectrum of injected electrons provided by the stochastic acceleration model, which has a smooth transition from a quasi-thermal background at low energies to a nonthermal tail at high energies. The inclusion of low-energy electrons results in relatively more heating in the corona (versus the chromosphere) and thus a larger downward heat conduction flux. The interplay of electron heating, conduction, and radiative loss leads to stronger chromospheric evaporation than obtained in previous studies, which had a deficit in low-energy electrons due to an arbitrarily assumed low-energy cutoff. The energy and spatial distributions of energetic electrons and bremsstrahlung photons bear signatures of the changing density distribution caused by chromospheric evaporation. In particular, the density jump at the evaporation front gives rise to enhanced emission, which, in principle, can be imaged by X-ray telescopes. This model can be applied to investigate a variety of high-energy processes in solar, space, and astrophysical plasmas.

  14. Susceptibility of materials processing experiments to low-level accelerations

    NASA Technical Reports Server (NTRS)

    Naumann, R. J.

    1981-01-01

    The types of material processing experiments being considered for shuttle can be grouped into four categories: (1) contained solidification experiments; (2) quasicontainerless experiments; (3) containerless experiments; and (4) fluids experiments. Low-level steady accelerations, compensated and uncompensated transient accelerations, and rotation-induced flows are factors that must be considered in the acceleration environment of a space vehicle; the importance of each depends on the type of experiment being performed. Some control of these factors may be exercised through the location and orientation of the experiment relative to the shuttle and through the orbital vehicle attitude chosen for the mission. The effects of the various residual accelerations can have serious consequences for the control of the experiment and must be factored into the design and operation of the apparatus.

  15. Disease-Specific Trends of Comorbidity Coding and Implications for Risk Adjustment in Hospital Administrative Data.

    PubMed

    Nimptsch, Ulrike

    2016-06-01

    To investigate changes in comorbidity coding after the introduction of diagnosis-related group (DRG) based prospective payment and whether trends differ regarding specific comorbidities. Nationwide administrative data (DRG statistics) from German acute care hospitals from 2005 to 2012. Observational study to analyze trends in comorbidity coding in patients hospitalized for common primary diseases and the effects on comorbidity-related risk of in-hospital death. Comorbidity coding was operationalized by Elixhauser diagnosis groups. The analyses focused on adult patients hospitalized for the primary diseases of heart failure, stroke, and pneumonia, as well as hip fracture. When focusing on the total frequency of diagnosis groups per record, an increase in depth of coding was observed. Between-hospital variations in depth of coding were present throughout the observation period. Specific increases were observed in 15 of the 31 diagnosis groups, and decreases were observed for 11 groups. In patients hospitalized for heart failure, shifts of comorbidity-related risk of in-hospital death occurred in nine diagnosis groups, of which eight were directed toward the null. Comorbidity-adjusted outcomes in longitudinal administrative data analyses may be biased by nonconstant risk over time, changes in completeness of coding, and between-hospital variations in coding. Accounting for such issues is important when the respective observation period coincides with changes in the reimbursement system or other conditions that are likely to alter clinical coding practice. © Health Research and Educational Trust.

  16. Accelerated Creep Testing of High Strength Aramid Webbing

    NASA Technical Reports Server (NTRS)

    Jones, Thomas C.; Doggett, William R.; Stanfield, Clarence E.; Valverde, Omar

    2012-01-01

    A series of preliminary accelerated creep tests were performed on four variants of 12K and 24K lbf rated Vectran webbing to help develop an accelerated creep test methodology and analysis capability for high strength aramid webbings. The variants included pristine, aged, folded and stitched samples. This class of webbings is used in the restraint layer of habitable, inflatable space structures, for which the lifetime properties are currently not well characterized. The Stepped Isothermal Method was used to accelerate the creep life of the webbings and a novel stereo photogrammetry system was used to measure the full-field strains. A custom MATLAB code is described, and used to reduce the strain data to produce master creep curves for the test samples. Initial results show good correlation between replicates; however, it is clear that a larger number of samples are needed to build confidence in the consistency of the results. It is noted that local fiber breaks affect the creep response in a similar manner to increasing the load, thus raising the creep rate and reducing the time to creep failure. The stitched webbings produced the highest variance between replicates, due to the combination of higher local stresses and thread-on-fiber damage. Large variability in the strength of the webbings is also shown to have an impact on the range of predicted creep life.
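
    The Stepped Isothermal Method gains its acceleration by stepping temperature and shifting each creep segment along log-time onto a single master curve. A minimal sketch of the shift-factor arithmetic using an Arrhenius form (the activation energy below is illustrative only, not a fitted value for Vectran):

```python
# Arrhenius time-temperature shift factor used to splice stepped-temperature
# creep segments into a master curve. Ea is a made-up illustrative value.
import math

R = 8.314            # gas constant, J/(mol K)
Ea = 120e3           # activation energy (illustrative, not fitted)
T_ref = 293.15       # reference temperature, K (20 C)

def shift_factor(T):
    """log10(a_T): horizontal log-time shift from test temperature T to T_ref."""
    return (Ea / (2.303 * R)) * (1.0 / T - 1.0 / T_ref)

# A segment measured at 60 C maps to much longer equivalent times at 20 C:
log_aT = shift_factor(333.15)
accel = 10 ** (-log_aT)      # time-acceleration factor relative to T_ref
```

    Each measured segment is shifted by its own log10(a_T) and the segments are concatenated in log-time, which is the master-curve construction the custom analysis code automates.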

  17. Learn-as-you-go acceleration of cosmological parameter estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aslanyan, Grigor; Easther, Richard; Price, Layne C., E-mail: g.aslanyan@auckland.ac.nz, E-mail: r.easther@auckland.ac.nz, E-mail: lpri691@aucklanduni.ac.nz

    2015-09-01

    Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.
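
    The three-step algorithm above can be caricatured in a few lines: emulate when a nearby training point exists, fall back to the exact (slow) likelihood otherwise, and grow the training set as the chain runs. This is my stripped-down nearest-neighbor sketch, not the Cosmo++ implementation or its error model:

```python
# Learn-as-you-go caricature: nearest-neighbor emulation with an exact-call
# fallback and a dynamically growing training set. Toy 1D likelihood.
import math

def expensive_loglike(x):
    return -0.5 * x * x          # stand-in for a slow likelihood evaluation

class LearnAsYouGo:
    def __init__(self, trust_radius=0.05):
        self.points = []                  # (x, value) training set
        self.trust_radius = trust_radius  # crude reliability criterion
        self.exact_calls = 0

    def __call__(self, x):
        if self.points:
            xn, vn = min(self.points, key=lambda p: abs(p[0] - x))
            if abs(xn - x) < self.trust_radius:
                return vn                 # emulated estimate
        v = expensive_loglike(x)          # unreliable -> exact evaluation
        self.exact_calls += 1
        self.points.append((x, v))        # learn as we go
        return v

emu = LearnAsYouGo()
xs = [math.sin(0.1 * i) for i in range(1000)]   # a wandering "chain"
vals = [emu(x) for x in xs]                     # few exact calls needed
```

    Most of the 1000 evaluations are served from the training set; only the first visit to each region pays for an exact call, which is the source of the speedup.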

  18. Introduction of the ASGARD Code

    NASA Technical Reports Server (NTRS)

    Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv; Fayock, Brian

    2017-01-01

    ASGARD stands for 'Automated Selection and Grouping of events in AIA Regional Data'. The code is a refinement of the event detection method in Ugarte-Urra & Warren (2014). It is intended to automatically detect and group brightenings ('events') in the AIA EUV channels, to record event parameters, and to find related events over multiple channels. Ultimately, the goal is to automatically determine heating and cooling timescales in the corona and to significantly increase statistics in this respect. The code is written in IDL and requires the SolarSoft library. It is parallelized and can run with multiple CPUs. Input files are regions of interest (ROIs) in time series of AIA images from the JSOC cutout service (http://jsoc.stanford.edu/ajax/exportdata.html). The ROIs need to be tracked, co-registered, and limited in time (typically 12 hours).
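The detect-and-group step can be illustrated with a generic threshold-plus-flood-fill pass. ASGARD itself is written in IDL, is parallelized, and does considerably more (multi-channel grouping, event parameters), so this Python fragment is only a schematic analogue.

```python
from collections import deque

def label_events(frame, threshold):
    """Group above-threshold pixels of a 2-D intensity array into
    connected 'events' using 4-connected flood fill.  Returns a list of
    events, each a list of (row, col) pixel coordinates."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    events = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] > threshold and not seen[r][c]:
                queue, event = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    event.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                events.append(event)
    return events
```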

  19. SCALE Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessee, Matthew Anderson

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules, including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 provides many new capabilities and significant improvements of existing features. New capabilities include:
    • ENDF/B-VII.1 nuclear data libraries (CE and MG) with enhanced group structures,
    • Neutron covariance data based on ENDF/B-VII.1 and supplemented with ORNL data,
    • Covariance data for fission product yields and decay constants,
    • Stochastic uncertainty and correlation quantification for any SCALE sequence with Sampler,
    • Parallel calculations with KENO,
    • Problem-dependent temperature corrections for CE calculations,
    • CE shielding and criticality accident alarm system analysis with

  20. Fermilab | Tevatron | Accelerator

    Science.gov Websites

    Fermilab develops leading accelerator technology through its accelerator complex and the Illinois Accelerator Research Center. Before it shut down, the Tevatron was the center ring of Fermilab's accelerator complex, and other machines were used to transfer particles from one part of the complex to another.

  1. Application of Plasma Waveguides to High Energy Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milchberg, Howard M

    2013-03-30

    The eventual success of laser-plasma based acceleration schemes for high-energy particle physics will require the focusing and stable guiding of short intense laser pulses in reproducible plasma channels. For this goal to be realized, many scientific issues need to be addressed. These issues include an understanding of the basic physics of, and an exploration of various schemes for, plasma channel formation. In addition, the coupling of intense laser pulses to these channels and the stable propagation of pulses in the channels require study. Finally, new theoretical and computational tools need to be developed to aid in the design and analysis of experiments and future accelerators. Here we propose a 3-year renewal of our combined theoretical and experimental program on the applications of plasma waveguides to high-energy accelerators. During the past grant period we have made a number of significant advances in the science of laser-plasma based acceleration. We pioneered the development of clustered gases as a new highly efficient medium for plasma channel formation. Our contributions here include theoretical and experimental studies of the physics of cluster ionization, heating, explosion, and channel formation. We have demonstrated for the first time the generation of and guiding in a corrugated plasma waveguide. The fine structure demonstrated in these guides is only possible with cluster jet heating by lasers. The corrugated guide is a slow wave structure operable at arbitrarily high laser intensities, allowing direct laser acceleration, a process we have explored in detail with simulations. The development of these guides opens the possibility of direct laser acceleration, a true miniature analogue of the SLAC RF-based accelerator. Our theoretical studies during this period have also contributed to the further development of the simulation codes, Wake and QuickPIC, which can be used for both laser driven and beam driven plasma based acceleration.

  2. Force, acceleration and velocity during trampoline jumps—a challenging assignment

    NASA Astrophysics Data System (ADS)

    Pendrill, Ann-Marie; Ouattara, Lassana

    2017-11-01

    Bouncing on a trampoline lets the jumper experience the interplay between weightlessness and large forces on the body, as the motion changes between free fall and large acceleration in contact with the trampoline bed. In this work, several groups of students were asked to draw graphs of elevation, velocity and acceleration as a function of time, for two full jumps of the 2012 Olympic gold medal trampoline routine by Rosannagh MacLennan. We hoped that earlier kinaesthetic experiences of trampoline bouncing would help students make connections between the mathematical descriptions of elevation, velocity and acceleration, which is known to be challenging. However, very few of the student responses made reference to personal experiences of forces during bouncing. Most of the responses could be grouped into a few categories, which are presented and discussed in the paper. Although the time dependence of elevation was drawn relatively correctly in most cases, many of the graphs of velocity and acceleration display a lack of understanding of the relation between these different aspects of motion.
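The physics the students were asked to graph can be captured by a toy piecewise model: free fall (a = -g) above the bed, and a large restoring acceleration while in contact, here modelled as a linear spring with an assumed stiffness-to-mass ratio. The numbers are illustrative only.

```python
g = 9.8  # m/s^2

def acceleration(y, k_over_m=100.0):
    """Net vertical acceleration of a trampoline jumper (toy model).

    Above the bed (y >= 0) the jumper is in free fall; while in contact
    (y < 0) the bed is modelled as a linear spring with an assumed
    stiffness-to-mass ratio k_over_m (s^-2)."""
    if y >= 0.0:
        return -g                 # free fall: a = -g, independent of velocity
    return -g - k_over_m * y      # contact: large upward acceleration

def simulate(y0=1.0, steps=20000, dt=1e-3):
    """Integrate bounces with semi-implicit Euler; return the peak |a|,
    which greatly exceeds g during contact with the bed."""
    y, v, a_peak = y0, 0.0, 0.0
    for _ in range(steps):
        a = acceleration(y)
        a_peak = max(a_peak, abs(a))
        v += a * dt
        y += v * dt
    return a_peak
```

The interplay the abstract highlights is visible directly: the acceleration is a constant -g throughout the airborne phase regardless of velocity, then jumps to many times g in contact.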

  3. Overview of Recent Radiation Transport Code Comparisons for Space Applications

    NASA Astrophysics Data System (ADS)

    Townsend, Lawrence

    Recent advances in radiation transport code development for space applications have resulted in various comparisons of code predictions for a variety of scenarios and codes. Comparisons among both Monte Carlo and deterministic codes have been made and published by various groups and collaborations, including comparisons involving, but not limited to, HZETRN, HETC-HEDS, FLUKA, GEANT, PHITS, and MCNPX. In this work, an overview of recent code prediction inter-comparisons, including comparisons to available experimental data, is presented and discussed, with emphasis on areas of agreement and disagreement among the various code predictions and published data.

  4. Investigation of starting transients in the thermally choked ram accelerator

    NASA Technical Reports Server (NTRS)

    Burnham, E. A.; Hinkey, J. B.; Bruckner, A. P.

    1992-01-01

    An experimental investigation of the starting transients of the thermally choked ram accelerator is presented in this paper. Construction of a highly instrumented tube section and instrumentation inserts provide high resolution experimental pressure, luminosity, and electromagnetic data of the starting transients. Data obtained prior to and following the entrance diaphragm show detailed development of shock systems in both combustible and inert mixtures. With an evacuated launch tube, starting the diffuser is possible at any Mach number above the Kantrowitz Mach number. The detrimental effects and possible solutions of higher launch tube pressures and excessive obturator leakage (blow-by) are discussed. Ignition of a combustible mixture is demonstrated with both perforated and solid obturators. The relative advantages and disadvantages of each are discussed. Data obtained from these starting experiments enhance the understanding of the ram accelerator, as well as assist in the validation of unsteady, chemically reacting CFD codes.

  5. Homing endonucleases from mobile group I introns: discovery to genome engineering

    PubMed Central

    2014-01-01

    Homing endonucleases are highly specific DNA cleaving enzymes that are encoded within genomes of all forms of microbial life including phage and eukaryotic organelles. These proteins drive the mobility and persistence of their own reading frames. The genes that encode homing endonucleases are often embedded within self-splicing elements such as group I introns, group II introns and inteins. This combination of molecular functions is mutually advantageous: the endonuclease activity allows surrounding introns and inteins to act as invasive DNA elements, while the splicing activity allows the endonuclease gene to invade a coding sequence without disrupting its product. Crystallographic analyses of representatives from all known homing endonuclease families have illustrated both their mechanisms of action and their evolutionary relationships to a wide range of host proteins. Several homing endonucleases have been completely redesigned and used for a variety of genome engineering applications. Recent efforts to augment homing endonucleases with auxiliary DNA recognition elements and/or nucleic acid processing factors have further accelerated their use for applications that demand exceptionally high specificity and activity. PMID:24589358

  6. Modeling laser-driven electron acceleration using WARP with Fourier decomposition

    DOE PAGES

    Lee, P.; Audet, T. L.; Lehe, R.; ...

    2015-12-31

    WARP is used with the recent implementation of the Fourier decomposition algorithm to model laser-driven electron acceleration in plasmas. Simulations were carried out to analyze the experimental results obtained on ionization-induced injection in a gas cell. The simulated results are in good agreement with the experimental ones, confirming the ability of the code to take into account the physics of electron injection and reduce calculation time. We present a detailed analysis of the laser propagation, the plasma wave generation and the electron beam dynamics.

  7. Modeling laser-driven electron acceleration using WARP with Fourier decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, P.; Audet, T. L.; Lehe, R.

    WARP is used with the recent implementation of the Fourier decomposition algorithm to model laser-driven electron acceleration in plasmas. Simulations were carried out to analyze the experimental results obtained on ionization-induced injection in a gas cell. The simulated results are in good agreement with the experimental ones, confirming the ability of the code to take into account the physics of electron injection and reduce calculation time. We present a detailed analysis of the laser propagation, the plasma wave generation and the electron beam dynamics.

  8. Exercise Versus +Gz Acceleration Training

    NASA Technical Reports Server (NTRS)

    Greenleaf, John E.; Simonson, S. R.; Stocks, J. M.; Evans, J. M.; Knapp, C. F.; Dalton, Bonnie P. (Technical Monitor)

    2002-01-01

    Decreased working capacity and "orthostatic" intolerance are two major problems for astronauts during and after landing from spaceflight in a return vehicle. The purpose was to test the hypotheses that (1) supine-passive-acceleration training, supine-interval-exercise plus acceleration training, and supine exercise plus acceleration training will improve orthostatic tolerance (OT) in ambulatory men; and that (2) addition of aerobic exercise conditioning will not influence this enhanced OT from that of passive-acceleration training. Seven untrained men (24-38 yr) underwent 3 training regimens (30 min/d x 5d/wk x 3wk on the human-powered centrifuge - HPC): (a) Passive acceleration (alternating +1.0 Gz to 50% Gzmax); (b) Exercise acceleration (alternating 40% - 90% VO2max leg cycle exercise plus 50% of HPCmax acceleration); and (c) Combined intermittent exercise-acceleration at 40% to 90% HPCmax. Maximal supine exercise workloads increased (P < 0.05) by 8.3% with Passive, by 12.6% with Exercise, and by 15.4% with Combined; but maximal VO2 and HR were unchanged in all groups. Maximal endurance (time to cessation) was unchanged with Passive, but increased (P < 0.05) with Exercise and Combined. Resting pre-tilt HR was elevated by 12.9% (P < 0.05) only after Passive training, suggesting that exercise training attenuated this HR response. All resting pre-tilt blood pressures (SBP, DBP, MAP) were not different pre- vs. post-training. Post-training tilt-tolerance time and HR were increased (P < 0.05) only with Passive training by 37.8% and by 29.1%, respectively. Thus, addition of exercise training attenuated the increased Passive tilt tolerance. Resting (pre-tilt) and post-tilt cardiac R-R interval, stroke volume, end-diastolic volume, and cardiac output were all uniformly reduced (P < 0.05) while peripheral resistance was uniformly increased (P < 0.05) pre-and post-training for the three regimens indicating no effect of any training regimen on those cardiovascular

  9. Modeling Particle Acceleration and Transport at a 2-D CME-Driven Shock

    NASA Astrophysics Data System (ADS)

    Hu, Junxiang; Li, Gang; Ao, Xianzhi; Zank, Gary P.; Verkhoglyadova, Olga

    2017-11-01

    We extend our earlier Particle Acceleration and Transport in the Heliosphere (PATH) model to study particle acceleration and transport at a coronal mass ejection (CME)-driven shock. We model the propagation of a CME-driven shock in the ecliptic plane using the ZEUS-3D code from 20 solar radii to 2 AU. As in the previous PATH model, the initiation of the CME-driven shock is simplified and modeled as a disturbance at the inner boundary. Different from the earlier PATH model, the disturbance is now longitudinally dependent. Particles are accelerated at the 2-D shock via the diffusive shock acceleration mechanism. The acceleration depends on both the parallel and perpendicular diffusion coefficients κ|| and κ⊥ and is therefore shock-obliquity dependent. Following the procedure used in Li, Shalchi, et al., we obtain the particle injection energy, the maximum energy, and the accelerated particle spectra at the shock front. Once accelerated, particles diffuse and convect in the shock complex. The diffusion and convection of these particles are treated using a refined 2-D shell model in an approach similar to Zank et al. When particles escape from the shock, they propagate along and across the interplanetary magnetic field. The propagation is modeled using a focused transport equation with the addition of perpendicular diffusion. We solve the transport equation using a backward stochastic differential equation method where adiabatic cooling, focusing, pitch angle scattering, and cross-field diffusion effects are all included. Time intensity profiles and instantaneous particle spectra as well as particle pitch angle distributions are shown for two example CME shocks.
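The standard test-particle result behind the diffusive shock acceleration mechanism mentioned above relates the accelerated spectrum to the shock compression ratio. This general relation is textbook DSA, not the paper's shock-obliquity-dependent 2-D calculation:

```python
def dsa_spectral_index(r):
    """Test-particle diffusive shock acceleration: the accelerated
    momentum distribution is a power law f(p) ~ p**(-q) with
    q = 3r / (r - 1), where r is the shock compression ratio.
    A strong (r = 4) shock gives the canonical q = 4."""
    if r <= 1.0:
        raise ValueError("compression ratio must exceed 1")
    return 3.0 * r / (r - 1.0)
```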

  10. Accelerator infrastructure in Europe: EuCARD 2011

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2011-10-01

    The paper presents a digest of research results in the domain of accelerator science and technology in Europe, shown during the annual meeting of EuCARD (European Coordination of Accelerator Research and Development). The conference concerns the building of research infrastructure, including advanced photonic and electronic systems for servicing large high-energy physics experiments. A few basic groups of such systems are discussed: measurement and control networks of large geometrical extent, multichannel systems for acquiring large amounts of metrological data, and precision photonic networks for distributing reference time, frequency, and phase.

  11. Early Experiences Writing Performance Portable OpenMP 4 Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joubert, Wayne; Hernandez, Oscar R

    In this paper, we evaluate the recently available directives in OpenMP 4 to parallelize a computational kernel using both the traditional shared memory approach and the newer accelerator targeting capabilities. In addition, we explore various transformations that attempt to increase application performance portability, and examine the expressiveness and performance implications of using these approaches. For example, we want to understand if the target map directives in OpenMP 4 improve data locality when mapped to a shared memory system, as opposed to the traditional first touch policy approach in traditional OpenMP. To that end, we use recent Cray and Intel compilers to measure the performance variations of a simple application kernel when executed on the OLCF's Titan supercomputer with NVIDIA GPUs and the Beacon system with Intel Xeon Phi accelerators attached. To better understand these trade-offs, we compare our results from traditional OpenMP shared memory implementations to the newer accelerator programming model when it is used to target both the CPU and an attached heterogeneous device. We believe the results and lessons learned as presented in this paper will be useful to the larger user community by providing guidelines that can assist programmers in the development of performance portable code.

  12. Piezoelectric particle accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kemp, Mark A.; Jongewaard, Erik N.; Haase, Andrew A.

    2017-08-29

    A particle accelerator is provided that includes a piezoelectric accelerator element with a hollow cylindrical shape, and an input transducer disposed to provide an input signal to the piezoelectric accelerator element. The input signal induces a mechanical excitation of the piezoelectric accelerator element, which generates a piezoelectric electric field near the axis of the cylinder. The accelerator is configured to accelerate a charged particle longitudinally along that axis according to the piezoelectric electric field.

  13. Cavitation onset caused by acceleration

    PubMed Central

    Pan, Zhao; Kiyama, Akihito; Tagawa, Yoshiyuki; Daily, David J.; Thomson, Scott L.; Hurd, Randy

    2017-01-01

    Striking the top of a liquid-filled bottle can shatter the bottom. An intuitive interpretation of this event might label an impulsive force as the culprit in this fracturing phenomenon. However, high-speed photography reveals the formation and collapse of tiny bubbles near the bottom before fracture. This observation indicates that the damaging phenomenon of cavitation is at fault. Cavitation is well known for causing damage in various applications including pipes and ship propellers, making accurate prediction of cavitation onset vital in several industries. However, the conventional cavitation number as a function of velocity incorrectly predicts the cavitation onset caused by acceleration. This unexplained discrepancy leads to the derivation of an alternative dimensionless term from the equation of motion, predicting cavitation as a function of acceleration and fluid depth rather than velocity. Two independent research groups in different countries have tested this theory; separate series of experiments confirm that an alternative cavitation number, presented in this paper, defines the universal criteria for the onset of acceleration-induced cavitation. PMID:28739956
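The two dimensionless groups contrasted in the abstract can be written out explicitly. The conventional cavitation number is standard; the acceleration-based form below, built on the pressure scale ρah (acceleration a, fluid depth h), is a sketch of the idea rather than the exact definition derived in the paper.

```python
def cavitation_number(p_inf, p_vapor, rho, v):
    """Conventional cavitation number sigma = (p - p_v) / (0.5 rho v^2);
    low values indicate cavitation is likely.  SI units throughout."""
    return (p_inf - p_vapor) / (0.5 * rho * v ** 2)

def accel_cavitation_number(p_inf, p_vapor, rho, a, h):
    """Acceleration-based analogue sketched from the abstract: the
    competing pressure scale is rho * a * h (acceleration a, fluid
    depth h) rather than the dynamic pressure.  The exact definition
    and onset threshold are given in the paper; this form is an
    assumption for illustration."""
    return (p_inf - p_vapor) / (rho * a * h)
```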

  14. Cavitation onset caused by acceleration.

    PubMed

    Pan, Zhao; Kiyama, Akihito; Tagawa, Yoshiyuki; Daily, David J; Thomson, Scott L; Hurd, Randy; Truscott, Tadd T

    2017-07-24

    Striking the top of a liquid-filled bottle can shatter the bottom. An intuitive interpretation of this event might label an impulsive force as the culprit in this fracturing phenomenon. However, high-speed photography reveals the formation and collapse of tiny bubbles near the bottom before fracture. This observation indicates that the damaging phenomenon of cavitation is at fault. Cavitation is well known for causing damage in various applications including pipes and ship propellers, making accurate prediction of cavitation onset vital in several industries. However, the conventional cavitation number as a function of velocity incorrectly predicts the cavitation onset caused by acceleration. This unexplained discrepancy leads to the derivation of an alternative dimensionless term from the equation of motion, predicting cavitation as a function of acceleration and fluid depth rather than velocity. Two independent research groups in different countries have tested this theory; separate series of experiments confirm that an alternative cavitation number, presented in this paper, defines the universal criteria for the onset of acceleration-induced cavitation.

  15. Cavitation onset caused by acceleration

    NASA Astrophysics Data System (ADS)

    Pan, Zhao; Kiyama, Akihito; Tagawa, Yoshiyuki; Daily, David J.; Thomson, Scott L.; Hurd, Randy; Truscott, Tadd T.

    2017-08-01

    Striking the top of a liquid-filled bottle can shatter the bottom. An intuitive interpretation of this event might label an impulsive force as the culprit in this fracturing phenomenon. However, high-speed photography reveals the formation and collapse of tiny bubbles near the bottom before fracture. This observation indicates that the damaging phenomenon of cavitation is at fault. Cavitation is well known for causing damage in various applications including pipes and ship propellers, making accurate prediction of cavitation onset vital in several industries. However, the conventional cavitation number as a function of velocity incorrectly predicts the cavitation onset caused by acceleration. This unexplained discrepancy leads to the derivation of an alternative dimensionless term from the equation of motion, predicting cavitation as a function of acceleration and fluid depth rather than velocity. Two independent research groups in different countries have tested this theory; separate series of experiments confirm that an alternative cavitation number, presented in this paper, defines the universal criteria for the onset of acceleration-induced cavitation.

  16. Auditing Consistency and Usefulness of LOINC Use among Three Large Institutions - Using Version Spaces for Grouping LOINC Codes

    PubMed Central

    Lin, M.C.; Vreeman, D.J.; Huff, S.M.

    2012-01-01

    Objectives We wanted to develop a method for evaluating the consistency and usefulness of LOINC code use across different institutions, and to evaluate the degree of interoperability that can be attained when using LOINC codes for laboratory data exchange. Our specific goals were to: 1) Determine if any contradictory knowledge exists in LOINC. 2) Determine how many LOINC codes were used in a truly interoperable fashion between systems. 3) Provide suggestions for improving the semantic interoperability of LOINC. Methods We collected Extensional Definitions (EDs) of LOINC usage from three institutions. The version space approach was used to divide LOINC codes into small sets, which made auditing of LOINC use across the institutions feasible. We then compared pairings of LOINC codes from the three institutions for consistency and usefulness. Results The number of LOINC codes evaluated were 1,917, 1,267 and 1,693 as obtained from ARUP, Intermountain and Regenstrief respectively. There were 2,022, 2,030, and 2,301 version spaces among ARUP & Intermountain, Intermountain & Regenstrief and ARUP & Regenstrief respectively. Using the EDs as the gold standard, there were 104, 109 and 112 pairs containing contradictory knowledge and there were 1,165, 765 and 1,121 semantically interoperable pairs. The interoperable pairs were classified into three levels: 1) Level I – No loss of meaning, complete information was exchanged by identical codes. 2) Level II – No loss of meaning, but processing of data was needed to make the data completely comparable. 3) Level III – Some loss of meaning. For example, tests with a specific ‘method’ could be rolled-up with tests that were ‘methodless’. Conclusions There are variations in the way LOINC is used for data exchange that result in some data not being truly interoperable across different enterprises. To improve its semantic interoperability, we need to detect and correct any contradictory knowledge within LOINC and add

  17. The weight hierarchies and chain condition of a class of codes from varieties over finite fields

    NASA Technical Reports Server (NTRS)

    Wu, Xinen; Feng, Gui-Liang; Rao, T. R. N.

    1996-01-01

    The generalized Hamming weights of linear codes were first introduced by Wei. These are fundamental parameters related to the minimal overlap structures of the subcodes and very useful in several fields. It was found that the chain condition of a linear code is convenient in studying the generalized Hamming weights of the product codes. In this paper we consider a class of codes defined over some varieties in projective spaces over finite fields, whose generalized Hamming weights can be determined by studying the orbits of subspaces of the projective spaces under the actions of classical groups over finite fields, i.e., the symplectic groups, the unitary groups and orthogonal groups. We give the weight hierarchies and generalized weight spectra of the codes from Hermitian varieties and prove that the codes satisfy the chain condition.
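For reference, the generalized Hamming weights the abstract builds on are defined (following Wei) as:

```latex
% r-th generalized Hamming weight of an [n, k] linear code C over F_q;
% supp(D) is the set of coordinate positions where some codeword of the
% subcode D is nonzero.
d_r(C) \;=\; \min \{\, |\operatorname{supp}(D)| \;:\; D \subseteq C,\ \dim D = r \,\},
\qquad 1 \le r \le k .
% The weight hierarchy is (d_1(C), d_2(C), \ldots, d_k(C)); d_1 is the usual
% minimum distance, and d_1 < d_2 < \cdots < d_k \le n.  A code satisfies the
% chain condition if subcodes D_1 \subset D_2 \subset \cdots \subset D_k can be
% chosen with \dim D_r = r and |\operatorname{supp}(D_r)| = d_r(C) for all r.
```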

  18. Comparative study of long-term outcomes of accelerated and conventional collagen crosslinking for progressive keratoconus.

    PubMed

    Males, J J; Viswanathan, D

    2018-01-01

    Purpose: To compare the long-term outcomes of accelerated corneal collagen crosslinking (CXL) with conventional CXL for progressive keratoconus. Patients and methods: Comparative clinical study of consecutive progressive keratoconic eyes that underwent either accelerated CXL (9 mW/cm² ultraviolet A (UVA) light irradiance for 10 min) or conventional CXL (3 mW/cm² UVA light irradiance for 30 min). Eyes with a minimum of 12 months' follow-up were included. Post-procedure changes in keratometry readings (flat meridian: K1; steep meridian: K2), central corneal thickness (CCT), best spectacle-corrected visual acuity (BSCVA), and manifest refraction spherical equivalent (MRSE) were analysed. Results: A total of 42 eyes were included: 21 eyes had accelerated CXL (20.5±5.5 months' follow-up) and 21 eyes had conventional CXL (20.2±5.6 months' follow-up). In the accelerated CXL group, a significant reduction in K2 (P=0.02) but no significant change in K1 (P=0.35) or CCT (P=0.62) was noted. In the conventional CXL group, a significant reduction was seen in K1 (P=0.01) and K2 (P=0.04), but not in CCT (P=0.95). Although both groups exhibited significant reductions in K2 readings, no noteworthy differences were noted between them (P=0.36). Improvements in BSCVA (accelerated CXL, P=0.22; conventional CXL, P=0.20) and MRSE (accelerated CXL, P=0.97; conventional CXL, P=0.54) were noted but were not significant in either group. Conclusion: Accelerated and conventional CXL appear to be effective procedures for stabilising progressive keratoconus in the long term.

  19. Neutron skyshine from end stations of the Continuous Electron Beam Accelerator Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Rai-Ko S.

    1991-12-01

    The MORSE-CG code from Oak Ridge National Laboratory was applied to the estimation of the neutron skyshine from three end stations of the Continuous Electron Beam Accelerator Facility (CEBAF), Newport News, VA. Calculations with other methods and an experiment had been directed at assessing the annual neutron dose equivalent at the site boundary. A comparison of results obtained with different methods is given, and the effect of different temperatures and humidities will be discussed.

  1. Computer codes developed and under development at Lewis

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1992-01-01

    The objective of this summary is to provide a brief description of: (1) codes developed or under development at LeRC; and (2) the development status of IPACS with some typical early results. The computer codes that have been developed and/or are under development at LeRC are listed in the accompanying charts. This list includes: (1) the code acronym; (2) select physics descriptors; (3) current enhancements; and (4) present (9/91) code status with respect to its availability and documentation. The computer codes list is grouped by related functions such as: (1) composite mechanics; (2) composite structures; (3) integrated and 3-D analysis; (4) structural tailoring; and (5) probabilistic structural analysis. These codes provide a broad computational simulation infrastructure (technology base-readiness) for assessing the structural integrity/durability/reliability of propulsion systems. These codes serve two other very important functions: they provide an effective means of technology transfer; and they constitute a depository of corporate memory.

  2. Adaptation of multidimensional group particle tracking and particle wall-boundary condition model to the FDNS code

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.; Farmer, R. C.

    1992-01-01

    A particulate two-phase flow CFD model was developed based on the FDNS code, which is a pressure-based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, an Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reaction and of particle size changes due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, only a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated to verify and improve, as necessary, the particle tracking methodology coded in FDNS.
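The stability motivation for an implicit trajectory integration scheme can be seen in a one-line example. This backward-Euler update for Stokes drag is a generic illustration, not the actual FDNS discretization; the relaxation time tau and the linear drag law are assumptions of the toy model.

```python
def implicit_particle_step(v_p, u_gas, tau, dt):
    """One backward-Euler update of particle velocity under Stokes drag,
    dv/dt = (u_gas - v) / tau.  Solving the implicit equation
        v_new = v_p + dt * (u_gas - v_new) / tau
    gives v_new = (v_p + (dt/tau) * u_gas) / (1 + dt/tau), which relaxes
    smoothly toward the gas velocity even when dt >> tau; an explicit
    update with the same dt would oscillate and diverge."""
    r = dt / tau
    return (v_p + r * u_gas) / (1.0 + r)
```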

  3. Performance of automated and manual coding systems for occupational data: a case study of historical records.

    PubMed

    Patel, Mehul D; Rose, Kathryn M; Owens, Cindy R; Bang, Heejung; Kaufman, Jay S

    2012-03-01

    Occupational data are a common source of workplace exposure and socioeconomic information in epidemiologic research. We compared the performance of two occupation coding methods, automated software and a manual coder, using occupation and industry titles from U.S. historical records. We collected parental occupational data from 1920s-40s birth certificates, Census records, and city directories on 3,135 deceased individuals in the Atherosclerosis Risk in Communities (ARIC) study. Unique occupation-industry narratives were assigned codes by a manual coder and by the Standardized Occupation and Industry Coding software program. We calculated agreement between the coding methods on classification into major Census occupational groups. The automated coding software assigned codes to 71% of occupations and 76% of industries. Of this subset coded by the software, 73% of occupation codes and 69% of industry codes matched between automated and manual coding. For major occupational groups, agreement improved to 89% (kappa = 0.86). Automated occupational coding is a cost-efficient alternative to manual coding, although some manual coding is still required for incomplete information. We found substantial variability between coders in the assignment of occupations, though less so for major groups.
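    The agreement statistics reported here (percent agreement and Cohen's kappa) can be reproduced with a short sketch; the rater labels below are hypothetical, not the study's data.

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa: agreement between two raters, corrected for chance."""
        n = len(rater_a)
        # Observed agreement: fraction of items where the raters assign the same code
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement: expected overlap given each rater's marginal code frequencies
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        p_e = sum(counts_a[k] * counts_b[k] for k in counts_a) / n ** 2
        return (p_o - p_e) / (1 - p_e)

    # Two raters agreeing on 3 of 4 hypothetical major-group labels
    kappa = cohens_kappa(["mgr", "mgr", "svc", "svc"], ["mgr", "mgr", "svc", "mgr"])
    ```

    With these toy labels the observed agreement is 0.75 and the chance agreement 0.5, giving kappa = 0.5; the study's 89% agreement with kappa = 0.86 arises from the same formula applied to the full set of coded records.
    
    
    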

  4. Local expansion flows of galaxies: quantifying acceleration effect of dark energy

    NASA Astrophysics Data System (ADS)

    Chernin, A. D.; Teerikorpi, P.

    2013-08-01

    The nearest expansion flow of galaxies observed around the Local Group is studied as an archetypical example of the newly discovered local expansion flows around groups and clusters of galaxies in the nearby Universe. The flow is accelerating due to the antigravity produced by the universal dark energy background. We introduce a new acceleration measure of the flow, the dimensionless "acceleration parameter" Q(x) = x - x^(-2), which depends only on the normalized distance x. The parameter is zero at the zero-gravity distance x = 1, and Q(x) ∝ x when x ≫ 1. At the distance x = 3, the parameter Q = 2.9. Since the expansion flows have a self-similar structure in normalized variables, we expect the result to hold as well for all the other expansion flows around groups and clusters of galaxies on spatial scales from ~1 to ~10 Mpc everywhere in the Universe.
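    The quoted value at x = 3 follows directly from the definition; a minimal check:

    ```python
    def acceleration_parameter(x):
        # Q(x) = x - x^(-2) in normalized units: the linear (dark-energy) term
        # grows with distance, the x^(-2) (gravity) term falls off
        return x - x ** -2

    assert acceleration_parameter(1.0) == 0.0          # zero-gravity distance
    assert round(acceleration_parameter(3.0), 1) == 2.9
    ```
    
    
    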

  5. Effect of Exercise Program Speed, Agility, and Quickness (SAQ) in Improving Speed, Agility, and Acceleration

    NASA Astrophysics Data System (ADS)

    Azmi, K.; Kusnanik, N. W.

    2018-01-01

    This study analyzed the effect of a speed, agility and quickness (SAQ) training program on speed, agility and acceleration. The study was conducted on 26 soccer players divided into 2 groups of 13 players each. Group 1 was given the SAQ training program and Group 2 a conventional training program for 8 weeks. The study used a quantitative approach with a quasi-experimental, matching-only design. Data were collected with a 30-meter sprint test (speed), an agility t-test (agility), and a 10-meter run (acceleration) during the pretest and posttest, and then analyzed using paired-sample and independent t-tests. The results showed a significant effect of the SAQ training program in improving speed, agility and acceleration. In summary, the SAQ training program can improve the speed, agility and acceleration of soccer players.

  6. Source-to-accelerator quadrupole matching section for a compact linear accelerator

    NASA Astrophysics Data System (ADS)

    Seidl, P. A.; Persaud, A.; Ghiorso, W.; Ji, Q.; Waldron, W. L.; Lal, A.; Vinayakumar, K. B.; Schenkel, T.

    2018-05-01

    Recently, we presented a new approach for a compact radio-frequency (RF) accelerator structure and demonstrated the functionality of the individual components: acceleration units and focusing elements. In this paper, we combine these units to form a working accelerator structure: a matching section between the ion source extraction grids and the RF-acceleration unit and electrostatic focusing quadrupoles between successive acceleration units. The matching section consists of six electrostatic quadrupoles (ESQs) fabricated using 3D-printing techniques. The matching section enables us to capture more beam current and to match the beam envelope to conditions for stable transport in an acceleration lattice. We present data from an integrated accelerator consisting of the source, matching section, and an ESQ doublet sandwiched between two RF-acceleration units.

  7. Highly Productive Application Development with ViennaCL for Accelerators

    NASA Astrophysics Data System (ADS)

    Rupp, K.; Weinbub, J.; Rudolf, F.

    2012-12-01

    The use of graphics processing units (GPUs) for the acceleration of general purpose computations has become very attractive in recent years, and accelerators based on many integrated CPU cores are about to hit the market. However, there are discussions about the benefit of GPU computing when comparing the reduction of execution times with the increased development effort [1]. To counter these concerns, our open-source linear algebra library ViennaCL [2,3] uses modern programming techniques such as generic programming in order to provide a convenient access layer for accelerator and GPU computing. Other GPU-accelerated libraries are primarily tuned for performance, but less tailored to productivity and portability: MAGMA [4] provides dense linear algebra operations via a LAPACK-comparable interface, but no dedicated matrix and vector types. Cusp [5] is closest in functionality to ViennaCL for sparse matrices, but is based on CUDA and thus restricted to devices from NVIDIA; moreover, Cusp provides no convenience layer for dense linear algebra. ViennaCL is written in C++ and uses OpenCL to access the resources of accelerators, GPUs and multi-core CPUs in a unified way. On the one hand, the library provides iterative solvers from the family of Krylov methods, including various preconditioners, for the solution of linear systems typically obtained from the discretization of partial differential equations. On the other hand, dense linear algebra operations are supported, including algorithms such as QR factorization and singular value decomposition. The user application interface of ViennaCL is compatible with uBLAS [6], which is part of the peer-reviewed Boost C++ libraries [7]. This makes it possible to port existing uBLAS-based applications to ViennaCL with a minimum of effort. Conversely, the interface compatibility allows the iterative solvers from ViennaCL to be used with uBLAS types directly, thus enabling code reuse beyond CPU-GPU boundaries.

  8. Shannon Entropy of the Canonical Genetic Code

    NASA Astrophysics Data System (ADS)

    Nemzer, Louis

    The probability that a non-synonymous point mutation in DNA will adversely affect the functionality of the resultant protein is greatly reduced if the substitution is conservative. In that case, the amino acid coded by the mutated codon has similar physico-chemical properties to the original. Many simplified alphabets, which group the 20 common amino acids into families, have been proposed. To evaluate these schema objectively, we introduce a novel, quantitative method based on the inherent redundancy in the canonical genetic code. By calculating the Shannon information entropy carried by 1- or 2-bit messages, groupings that best leverage the robustness of the code are identified. The relative importance of properties related to protein folding - like hydropathy and size - and function, including side-chain acidity, can also be estimated. In addition, this approach allows us to quantify the average information value of nucleotide codon positions, and explore the physiological basis for distinguishing between transition and transversion mutations. Supported by NSU PFRDG Grant #335347.
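    The measure underlying this analysis is the standard Shannon entropy of a discrete distribution; a minimal sketch follows (the family probabilities are illustrative placeholders, not values from the paper):

    ```python
    import math

    def shannon_entropy(probs):
        # H = -sum(p * log2(p)) in bits; zero-probability outcomes contribute nothing
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # A hypothetical two-family amino-acid grouping, with each family used
    # equally often, carries exactly 1 bit of information per symbol:
    assert shannon_entropy([0.5, 0.5]) == 1.0
    ```

    Skewed family frequencies reduce the entropy below 1 bit, which is what lets the method rank groupings by how much of the code's redundancy they actually leverage.
    
    
    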

  9. The ADVANCE Code of Conduct for collaborative vaccine studies.

    PubMed

    Kurz, Xavier; Bauchau, Vincent; Mahy, Patrick; Glismann, Steffen; van der Aa, Lieke Maria; Simondon, François

    2017-04-04

    Lessons learnt from the 2009 (H1N1) flu pandemic highlighted factors limiting the capacity to collect European data on vaccine exposure, safety and effectiveness, including lack of rapid access to available data sources or expertise, difficulties in establishing efficient interactions between multiple parties, lack of confidence between private and public sectors, concerns about possible or actual conflicts of interest (or perceptions thereof) and inadequate funding mechanisms. The Innovative Medicines Initiative's Accelerated Development of VAccine benefit-risk Collaboration in Europe (ADVANCE) consortium was established to create an efficient and sustainable infrastructure for rapid and integrated monitoring of post-approval benefit-risk of vaccines, including a code of conduct and governance principles for collaborative studies. The development of the code of conduct was guided by three core and common values (best science, strengthening public health, transparency) and a review of existing guidance and relevant published articles. The ADVANCE Code of Conduct includes 45 recommendations in 10 topics (Scientific integrity, Scientific independence, Transparency, Conflicts of interest, Study protocol, Study report, Publication, Subject privacy, Sharing of study data, Research contract). Each topic includes a definition, a set of recommendations and a list of additional reading. The concept of the study team is introduced as a key component of the ADVANCE Code of Conduct, with a core set of roles and responsibilities. It is hoped that adoption of the ADVANCE Code of Conduct by all partners involved in a study will facilitate and speed up its initiation, design, conduct and reporting. Adoption of the ADVANCE Code of Conduct should be stated in the study protocol, study report and publications, and journal editors are encouraged to use it as an indication that good principles of public health, science and transparency were followed throughout the study.

  10. Code TESLA for Modeling and Design of High-Power High-Efficiency Klystrons

    DTIC Science & Technology

    2011-03-01

    CODE TESLA FOR MODELING AND DESIGN OF HIGH-POWER HIGH-EFFICIENCY KLYSTRONS* I.A. Chernyavskiy, SAIC, McLean, VA 22102, U.S.A.; S.J. Cooke, B… and multiple-beam klystrons as high-power RF sources. These sources are widely used, or proposed for use, in accelerators of the future. Comparison of TESLA modelling results with experimental data for a few multiple-beam klystrons is shown. INTRODUCTION High-power and high-efficiency

  11. Heating and Acceleration of Charged Particles by Weakly Compressible Magnetohydrodynamic Turbulence

    NASA Astrophysics Data System (ADS)

    Lynn, Jacob William

    We investigate the interaction between low-frequency magnetohydrodynamic (MHD) turbulence and a distribution of charged particles. Understanding this physics is central to understanding the heating of the solar wind, as well as the heating and acceleration of other collisionless plasmas. Our central method is to simulate weakly compressible MHD turbulence using the Athena code, along with a distribution of test particles which feel the electromagnetic fields of the turbulence. We also construct analytic models of transit-time damping (TTD), which results from the mirror force caused by compressible (fast or slow) MHD waves. Standard linear-theory models in the literature require an exact resonance between particle and wave velocities to accelerate particles. The models developed in this thesis go beyond standard linear theory to account for the fact that wave-particle interactions decorrelate over a short time, which allows particles with velocities off resonance to undergo acceleration and velocity diffusion. We use the test particle simulation results to calibrate and distinguish between different models for this velocity diffusion. Test particle heating is larger than the linear theory prediction, due to continued acceleration of particles with velocities off-resonance. We also include an artificial pitch-angle scattering to the test particle motion, representing the effect of high-frequency waves or velocity-space instabilities. For low scattering rates, we find that the scattering enforces isotropy and enhances heating by a modest factor. For much higher scattering rates, the acceleration is instead due to a non-resonant effect, as particles "frozen" into the fluid adiabatically gain and lose energy as eddies expand and contract. Lastly, we generalize our calculations to allow for relativistic test particles. Linear theory predicts that relativistic particles with velocities much higher than the speed of waves comprising the turbulence would undergo no

  12. Network Coded Cooperative Communication in a Real-Time Wireless Hospital Sensor Network.

    PubMed

    Prakash, R; Balaji Ganesh, A; Sivabalan, Somu

    2017-05-01

    The paper presents a network coded cooperative communication (NC-CC) enabled wireless hospital sensor network architecture for monitoring the health as well as the postural activities of a patient. A wearable device, referred to as a smartband, is interfaced with pulse rate and body temperature sensors and an accelerometer, along with wireless protocol services such as Bluetooth, a Radio-Frequency transceiver and Wi-Fi. The energy efficiency of the wearable device is improved by embedding a linear acceleration based transmission duty cycling algorithm (LA-TDC). A real-time demonstration is carried out in a hospital environment to evaluate performance characteristics such as power spectral density, energy consumption, signal to noise ratio, packet delivery ratio and transmission offset. The resource sharing and energy efficiency features of the network coding technique are improved by proposing an algorithm referred to as network coding based dynamic retransmit/rebroadcast decision control (NC-DRDC). From the experimental results, it is observed that the proposed NC-DRDC algorithm reduces network traffic and end-to-end delay by an average of 27.8% and 21.6%, respectively, compared with traditional network coded wireless transmission. The wireless architecture was deployed in a hospital environment and the results were then successfully validated.

  13. Comparison of slow and accelerated rehabilitation protocol after arthroscopic rotator cuff repair: pain and functional activity.

    PubMed

    Düzgün, Irem; Baltacı, Gül; Atay, O Ahmet

    2011-01-01

    In this study, we sought to compare the effects of slow and accelerated rehabilitation protocols on pain and functional activity level after arthroscopic rotator cuff repair. The study included 29 patients (3 men, 26 women) who underwent arthroscopic repair of stage 2 and 3 rotator cuff tears. Patients were randomized into two groups: the accelerated protocol group (n=13) and the slow protocol group (n=16). Patients in the accelerated protocol group participated in a preoperative rehabilitation program for 4-6 weeks. Patients were evaluated preoperatively and for 24 weeks postoperatively. Pain was assessed by visual analog scale, and functional activity level was assessed by the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire. Active range of motion was initiated at week 3 after surgery in the accelerated rehabilitation protocol and at week 6 in the slow protocol. The rehabilitation program was completed by the 8th week with the accelerated protocol and by the 22nd week with the slow protocol. There was no significant difference between the slow and accelerated protocols with regard to pain at rest (p>0.05). However, the accelerated protocol was associated with less pain during activity at weeks 5 and 16, and with less pain at night during week 5 (p<0.05). The accelerated protocol was superior to the slow protocol in terms of functional activity level, as determined by DASH scores at weeks 8, 12, and 16 after surgery (p<0.05). The accelerated protocol is recommended to physical therapists during rehabilitation after arthroscopic rotator cuff repair to prevent the negative effects of immobilization and to support rapid reintegration into daily living activities.

  14. Laser-driven ion acceleration at BELLA

    NASA Astrophysics Data System (ADS)

    Bin, Jianhui; Steinke, Sven; Ji, Qing; Nakamura, Kei; Treffert, Franziska; Bulanov, Stepan; Roth, Markus; Toth, Csaba; Schroeder, Carl; Esarey, Eric; Schenkel, Thomas; Leemans, Wim

    2017-10-01

    BELLA is a high repetition rate PW laser, and we used it for high intensity laser plasma acceleration experiments. The BELLA-i program is focused on relativistic laser plasma interaction such as laser driven ion acceleration, aiming at establishing a unique collaborative research facility providing beam time to selected external groups for fundamental physics and advanced applications. Here we present our first parameter study of ion acceleration driven by the BELLA PW laser at a truly high repetition rate. The laser repetition rate of 1 Hz allows scanning of the laser pulse duration, relative focus location and target thickness for the first time at laser peak powers above 1 PW. Furthermore, the long focal length geometry of the experiment (f/65), and hence the large focus size, provided ion beams of reduced divergence and unprecedented charge density. This work was supported by the Director, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

  15. Measuring affiliation in group therapy for substance use disorders in the Women's Recovery Group study: Does it matter whether the group is all-women or mixed-gender?

    PubMed

    Sugarman, Dawn E; Wigderson, Sara B; Iles, Brittany R; Kaufman, Julia S; Fitzmaurice, Garrett M; Hilario, E Yvette; Robbins, Michael S; Greenfield, Shelly F

    2016-10-01

    A Stage II, two-site randomized clinical trial compared the manualized, single-gender Women's Recovery Group (WRG) to mixed-gender group therapy (Group Drug Counseling; GDC) and demonstrated efficacy. Enhanced affiliation and support in the WRG is a hypothesized mechanism of efficacy. This study sought to extend results of the previous small Stage I trial that showed the rate of supportive affiliative statements occurred more frequently in WRG than GDC. Participants (N = 158; 100 women, 58 men) were 18 years or older, substance dependent, and had used substances within the past 60 days. Women were randomized to WRG (n = 52) or GDC (n = 48). Group therapy videos were coded by two independent raters; Rater 1 coded 20% of videos (n = 74); Rater 2 coded 25% of videos coded by Rater 1 (n = 19). The number of affiliative statements made in WRG was 66% higher than in GDC. Three of eight affiliative statement categories occurred more frequently in WRG than GDC: supportive, shared experience, and strategy statements. This larger Stage II trial provided a greater number of group therapy tapes available for analysis. Results extended our previous findings, demonstrating both greater frequency of all affiliative statements, as well as specific categories of statements, made in single-gender WRG than mixed-gender GDC. Greater frequency of affiliative statements among group members may be one mechanism of enhanced support and efficacy in women-only WRG compared with standard mixed-gender group therapy for substance use disorders. (Am J Addict 2016;25:573-580). © 2016 American Academy of Addiction Psychiatry.

  16. Exclusively visual analysis of classroom group interactions

    NASA Astrophysics Data System (ADS)

    Tucker, Laura; Scherr, Rachel E.; Zickler, Todd; Mazur, Eric

    2016-12-01

    Large-scale audiovisual data that measure group learning are time consuming to collect and analyze. As an initial step towards scaling qualitative classroom observation, we qualitatively coded classroom video using an established coding scheme with and without its audio cues. We find that interrater reliability is as high when using visual data only—without audio—as when using both visual and audio data to code. Also, interrater reliability is high when comparing use of visual and audio data to visual-only data. We see a small bias to code interactions as group discussion when visual and audio data are used compared with video-only data. This work establishes that meaningful educational observation can be made through visual information alone. Further, it suggests that after initial work to create a coding scheme and validate it in each environment, computer-automated visual coding could drastically increase the breadth of qualitative studies and allow for meaningful educational analysis on a far greater scale.

  17. Self-complementary circular codes in coding theory.

    PubMed

    Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz

    2018-04-01

    Self-complementary circular codes are involved in pairing genetic processes. A maximal [Formula: see text] self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16 2017, J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph [Formula: see text] associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of the graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size [Formula: see text] trinucleotides, in particular for maximal circular codes ([Formula: see text] trinucleotides). For codes of small-size [Formula: see text] trinucleotides, some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the reading (correct) frame, an important criterion for analyzing the X motifs in genes in the future.

  18. Can Accelerators Accelerate Learning?

    NASA Astrophysics Data System (ADS)

    Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.

    2009-03-01

    The 'Young Talented' education program developed by the Brazilian state funding agency FAPERJ [1] makes it possible for students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools such as the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily managed by the students, who can perform simple hands-on activities, stimulating their interest in physics and bringing them close to modern laboratory techniques.

  19. Identifying Pediatric Severe Sepsis and Septic Shock: Accuracy of Diagnosis Codes.

    PubMed

    Balamuth, Fran; Weiss, Scott L; Hall, Matt; Neuman, Mark I; Scott, Halden; Brady, Patrick W; Paul, Raina; Farris, Reid W D; McClead, Richard; Centkowski, Sierra; Baumer-Mouradian, Shannon; Weiser, Jason; Hayes, Katie; Shah, Samir S; Alpern, Elizabeth R

    2015-12-01

    To evaluate the accuracy of 2 established administrative methods of identifying children with sepsis using a medical record review reference standard. Multicenter retrospective study at 6 US children's hospitals. Subjects were children >60 days to <19 years of age, identified in 4 groups based on International Classification of Diseases, Ninth Revision, Clinical Modification codes: (1) severe sepsis/septic shock (sepsis codes); (2) infection plus organ dysfunction (combination codes); (3) subjects without codes for infection, organ dysfunction, or severe sepsis; and (4) infection but not severe sepsis or organ dysfunction. Combination codes were allowed, but not required, within the sepsis codes group. We determined the presence of reference-standard severe sepsis according to consensus criteria. Logistic regression was performed to determine whether addition of codes for sepsis therapies improved case identification. A total of 130 out of 432 subjects met the reference standard for severe sepsis. Sepsis codes had sensitivity 73% (95% CI 70-86), specificity 92% (95% CI 87-95), and positive predictive value 79% (95% CI 70-86). Combination codes had sensitivity 15% (95% CI 9-22), specificity 71% (95% CI 65-76), and positive predictive value 18% (95% CI 11-27). Slight improvements in model characteristics were observed when codes for vasoactive medications and endotracheal intubation were added to sepsis codes (c-statistic 0.83 vs 0.87, P = .008). Sepsis-specific International Classification of Diseases, Ninth Revision, Clinical Modification codes identify pediatric patients with severe sepsis in administrative data more accurately than a combination of codes for infection plus organ dysfunction. Copyright © 2015 Elsevier Inc. All rights reserved.
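    The reported test characteristics follow from a standard 2x2 confusion matrix. The counts below are approximate values back-calculated from the reported rates for the sepsis-codes group (an illustrative assumption, not data taken from the study):

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        # Standard 2x2 confusion-matrix summaries for a case-identification method
        return {
            "sensitivity": tp / (tp + fn),  # true cases correctly flagged
            "specificity": tn / (tn + fp),  # non-cases correctly left unflagged
            "ppv": tp / (tp + fp),          # flagged subjects who are true cases
        }

    # Approximate counts consistent with 130 reference-standard cases out of 432
    m = diagnostic_metrics(tp=95, fp=24, fn=35, tn=278)
    ```

    These counts reproduce the reported sensitivity (73%) and specificity (92%) and give a PPV near the reported 79%, illustrating how the three quantities constrain each other once the case prevalence (130/432) is fixed.
    
    
    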

  20. Follow the Code: Rules or Guidelines for Academic Deans' Behavior?

    ERIC Educational Resources Information Center

    Bray, Nathaniel J.

    2012-01-01

    In the popular movie series "Pirates of the Caribbean," there is a pirate code that influences how pirates behave in unclear situations, with a running joke about whether the code is a set of rules or merely guidelines for behavior. Codes of conduct in any social group or organization can have much the same feel; they can provide clarity and…

  1. Neck forces and moments and head accelerations in side impact.

    PubMed

    Yoganandan, Narayan; Pintar, Frank A; Maiman, Dennis J; Philippens, Mat; Wismans, Jac

    2009-03-01

    Although side-impact sled studies have investigated chest, abdomen, and pelvic injury mechanics, determination of head accelerations and the associated neck forces and moments is very limited. The purpose of the present study was therefore to determine the temporal forces and moments at the upper neck region and head angular accelerations and angular velocities using postmortem human subjects (PMHS). Anthropometric data and X-rays were obtained, and the specimens were positioned upright on a custom-designed seat, rigidly fixed to the platform of the sled. PMHS were seated facing forward with the Frankfort plane horizontal, and legs were stretched parallel to the mid-sagittal plane. The normal curvature and alignment of the dorsal spine were maintained without initial torso rotation. A pyramid-shaped nine-accelerometer package was secured to the parietal-temporal region of the head. The test matrix consisted of groups A and B, representing the fully restrained torso condition, and groups C and D, representing the three-point belt-restrained torso condition. The change in velocity was 12.4 m/s for groups A and C, 17.9 m/s for group B, and 8.7 m/s for group D tests. Two specimens were tested in each group. Injuries were scored based on the Abbreviated Injury Scale. The head mass, center of gravity, and moment of inertia were determined for each specimen. Head accelerations and upper neck forces and moments were determined before head contact. Neck forces and moments and head angular accelerations and angular velocities are presented on a specimen-by-specimen basis. In addition, a summary of peak magnitudes of biomechanical data is provided because of their potential in serving as injury reference values characterizing head-neck biomechanics in side impacts. Though no skull fractures occurred, AIS 0 to 3 neck traumas were dependent on the impact velocity and restraint condition. 
Because specimen-specific head center of gravity and mass moment of inertia were determined

  2. Accelerations in Flight

    NASA Technical Reports Server (NTRS)

    Doolittle, J H

    1925-01-01

    This work on accelerometry was done at McCook Field for the purpose of continuing the work done by other investigators and obtaining the accelerations which occur when a high-speed pursuit airplane is subjected to the more common maneuvers. The accelerations obtained in suddenly pulling out of a dive with well-balanced elevators are shown to be within 3 or 4 per cent of the theoretically possible accelerations. The maximum acceleration which a pilot can withstand depends upon the length of time the acceleration is continued. It is shown that he experiences no difficulty under instantaneous accelerations as high as 7.8 G, but when under accelerations in excess of 4.5 G continued for several seconds, he quickly loses his faculties.

  3. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.

  4. Development and acceleration of unstructured mesh-based cfd solver

    NASA Astrophysics Data System (ADS)

    Emelyanov, V.; Karpenko, A.; Volkov, K.

    2017-06-01

    The study was undertaken as part of a larger effort to establish a common computational fluid dynamics (CFD) code for simulation of internal and external flows and involves some basic validation studies. The governing equations are solved with a finite volume code on unstructured meshes. The computational procedure involves reconstruction of the solution in each control volume and extrapolation of the unknowns to find the flow variables on the faces of the control volume, solution of a Riemann problem for each face of the control volume, and evolution of the time step. The nonlinear CFD solver works in an explicit time-marching fashion, based on a three-step Runge-Kutta stepping procedure. Convergence to a steady state is accelerated by the use of a geometric technique and by the application of Jacobi preconditioning for high-speed flows, with a separate low Mach number preconditioning method for use with low-speed flows. The CFD code is implemented on graphics processing units (GPUs). Speedup of the solution on GPUs with respect to solution on central processing units (CPUs) is compared for different meshes and different methods of distributing input data into blocks. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
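    The explicit three-step Runge-Kutta marching described above can be sketched in a few lines. The abstract does not give the scheme's coefficients, so the common SSP-RK3 variant is used here as an assumption, and it is applied to a scalar ODE rather than the full finite-volume residual:

    ```python
    def ssp_rk3_step(u, rhs, dt):
        # Three-stage strong-stability-preserving Runge-Kutta step; each stage
        # re-evaluates the residual (rhs), as an explicit CFD solver would per cell.
        u1 = u + dt * rhs(u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
        return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2))

    # Model problem du/dt = -u: one step from u = 1 should track exp(-dt) closely
    u_new = ssp_rk3_step(1.0, lambda u: -u, 0.1)
    ```

    In the solver itself `rhs` would be the flux residual assembled from the per-face Riemann solutions, and the same three-stage update would advance every control volume in parallel on the GPU.
    
    
    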

  5. Simulating Coupling Complexity in Space Plasmas: First Results from a new code

    NASA Astrophysics Data System (ADS)

    Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.

    2005-12-01

    The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial necessary development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing 3 simulation technologies: 1) computational fluid dynamics (hydrodynamics or magnetohydrodynamics, MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas; and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present, this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will advance our understanding of the physics of neutral and charged gases enormously. Besides making major advances in basic plasma physics and neutral gas problems, this project will address 3 Grand Challenge space physics problems that reflect our research interests: 1) To develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle. 2) To develop a coronal

  6. Evaluation of proton cross-sections for radiation sources in the proton accelerator

    NASA Astrophysics Data System (ADS)

    Cho, Young-Sik; Lee, Cheol-Woo; Lee, Young-Ouk

    2007-08-01

    The Proton Engineering Frontier Project (PEFP) is currently building a proton accelerator in Korea which consists of a proton linear accelerator with 100 MeV of energy, 20 mA of current and various particle beam facilities. The final goal of this project consists of the production of 1 GeV proton beams, which will be used for various medical and industrial applications as well as for research in basic and applied sciences. Carbon and copper in the proton accelerator for PEFP, through activation, become radionuclides such as 7Be and 64Cu. Copper is a major element of the accelerator components, and carbon is planned to be used as a target material for the beam dump. A recent survey showed that the currently available cross-sections differ considerably from the experimental data for the production of some residual nuclides by proton-induced reactions on carbon and copper. To estimate the production of radioactive nuclides in the accelerator more accurately, proton cross-sections for carbon and copper were evaluated. The TALYS code was used for the evaluation of the cross-sections for the proton-induced reactions. To obtain the cross-sections which best fit the experimental data, optical model parameters for the neutron, proton and other complex particles such as the deuteron and alpha were successively adjusted. The evaluated cross-sections in this study are compared with the measurements and other evaluations.
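
    The parameter-adjustment step can be caricatured in a few lines: a one-parameter cross-section model is scanned against measured points to minimize chi-square. The model shape, the data points, and the parameter grid below are all illustrative placeholders, not TALYS physics or real evaluation data.

```python
import math

# hypothetical (energy [MeV], cross-section [mb]) measurements
data = [(10.0, 45.0), (20.0, 70.0), (30.0, 82.0)]

def model(E, A):
    # toy saturating excitation function; A is the adjustable scale parameter
    return A * (1.0 - math.exp(-E / 15.0))

def chi2(A):
    return sum((sigma - model(E, A)) ** 2 for E, sigma in data)

# brute-force scan over the scale parameter, mimicking successive adjustment
best_chi2, best_A = min((chi2(A / 10.0), A / 10.0) for A in range(500, 1500))
```

    Real evaluations adjust several optical-model parameters at once, but the objective is the same: minimize the misfit between calculated and measured cross-sections.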

  7. Characterization of the radiation environment at the UNLV accelerator facility during operation of the Varian M6 linac

    NASA Astrophysics Data System (ADS)

    Hodges, M.; Barzilov, A.; Chen, Y.; Lowe, D.

    2016-10-01

    The bremsstrahlung photon flux from the UNLV particle accelerator (Varian M6 model) was determined using MCNP5 code for 3 MeV and 6 MeV incident electrons. Human biological equivalent dose rates due to accelerator operation were evaluated using the photon flux with the flux-to-dose conversion factors. Dose rates were computed for the accelerator facility for M6 linac use under different operating conditions. The results showed that the use of collimators and linac internal shielding significantly reduced the dose rates throughout the facility. It was shown that the walls of the facility, in addition to the earthen berm enveloping the building, provide equivalent shielding to reduce dose rates outside to below the 2 mrem/h limit.
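
    The flux-to-dose step mentioned above is a simple folding of a binned photon flux with conversion coefficients. The bin values and factors below are placeholders for illustration, not ANSI/ICRP coefficient data or MCNP5 output.

```python
# hypothetical binned photon flux [photons/cm^2/s] and
# flux-to-dose conversion factors [(mrem/h) per (photon/cm^2/s)]
flux    = [1.0e3, 5.0e2, 2.0e2]
factors = [2.0e-6, 5.0e-6, 9.0e-6]

# fold the spectrum with the conversion factors, bin by bin
dose_rate = sum(phi * f for phi, f in zip(flux, factors))  # mrem/h
```

    With these toy numbers the folded dose rate is 6.3e-3 mrem/h; in practice the flux comes from an MCNP tally and the factors from a published conversion standard.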

  8. Radiation pressure acceleration: The factors limiting maximum attainable ion energy

    DOE PAGES

    Bulanov, S. S.; Esarey, E.; Schroeder, C. B.; ...

    2016-04-15

    Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. Tightly focused laser pulses have group velocities smaller than the vacuum light speed, and, since they offer the high intensity needed for the RPA regime, it is plausible that group velocity effects would manifest themselves in experiments involving tightly focused pulses and thin foils. However, in this case, finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it transparent for radiation and effectively terminating the acceleration. The off-normal incidence of the laser on the target, due either to the experimental setup or to the deformation of the target, will also set a limit on the maximum ion energy.
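
    The diffraction-driven expansion can be quantified with the standard Gaussian-beam width formula, w(z) = w0*sqrt(1 + (z/zR)^2) with Rayleigh range zR = pi*w0^2/lambda; if the target stretches transversely with the beam, its areal density scales as (w0/w(z))^2. The wavelength and spot size below are illustrative, not the paper's parameters.

```python
import math

wavelength = 0.8e-6       # m, illustrative Ti:sapphire-like laser
w0 = 3.0e-6               # m, focal spot radius
z_R = math.pi * w0 ** 2 / wavelength   # Rayleigh range

def width(z):
    """Gaussian beam radius a distance z past the focus."""
    return w0 * math.sqrt(1.0 + (z / z_R) ** 2)

def areal_density_ratio(z):
    # target stretched transversely with the beam: sigma(z) / sigma(0)
    return (w0 / width(z)) ** 2
```

    One Rayleigh range past the focus the areal density has already halved, which is why transverse expansion can terminate the acceleration before group-velocity effects are reached.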

  9. Solar Wind Acceleration: Modeling Effects of Turbulent Heating in Open Flux Tubes

    NASA Astrophysics Data System (ADS)

    Woolsey, Lauren N.; Cranmer, Steven R.

    2014-06-01

    We present two self-consistent coronal heating models that determine the properties of the solar wind generated and accelerated in magnetic field geometries that are open to the heliosphere. These models require only the radial magnetic field profile as input. The first code, ZEPHYR (Cranmer et al. 2007), is a 1D MHD code that includes the effects of turbulent heating created by counter-propagating Alfven waves rather than relying on empirical heating functions. We present the analysis of a large grid of modeled flux tubes (> 400) and the resulting solar wind properties. From the models and results, we recreate the observed anti-correlation between wind speed at 1 AU and the so-called expansion factor, a parameterization of the magnetic field profile. We also find that our models follow the same observationally-derived relation between temperature at 1 AU and wind speed at 1 AU. We continue our analysis with a newly-developed code written in Python called TEMPEST (The Efficient Modified-Parker-Equation-Solving Tool) that runs an order of magnitude faster than ZEPHYR due to a set of simplifying relations between the input magnetic field profile and the temperature and wave reflection coefficient profiles. We present these simplifying relations as a useful result in themselves, as well as the anti-correlation between wind speed and expansion factor also found with TEMPEST. Due to the nature of the algorithm TEMPEST utilizes to find solar wind solutions, we can effectively separate the two primary ways in which Alfven waves contribute to solar wind acceleration: 1) heating the surrounding gas through a turbulent cascade and 2) providing a separate source of wave pressure. We intend to make TEMPEST easily available to the public and suggest that TEMPEST can be used as a valuable tool in the forecasting of space weather, either as a stand-alone code or within an existing modeling framework.
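
    TEMPEST solves a modified Parker equation; its classical ancestor, the isothermal Parker wind, is a far simpler problem that can be solved in a few lines and gives a feel for what such a solver does. This sketch is illustrative only, not TEMPEST's algorithm: it bisects the supersonic branch of the transcendental Parker relation (v/cs)^2 - ln(v/cs)^2 = 4 ln(r/rc) + 4 rc/r - 3.

```python
import math

def parker_speed(r_over_rc, cs=1.0):
    """Supersonic-branch wind speed of the isothermal Parker solution,
    found by bisection; r_over_rc is radius in units of the critical radius."""
    x = r_over_rc
    def f(v):
        m2 = (v / cs) ** 2
        return m2 - math.log(m2) - 4.0 * math.log(x) - 4.0 / x + 3.0
    lo, hi = cs * 1.0001, cs * 100.0   # bracket the supersonic root (r > rc)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

    At five critical radii the wind is already comfortably supersonic; TEMPEST layers temperature and wave-reflection profiles on top of this kind of root-finding.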

  10. Combinatorial neural codes from a mathematical coding theory perspective.

    PubMed

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
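
    The error-correction notion used above can be illustrated at toy scale: decode a noisy word to the nearest codeword in Hamming distance, which is maximum-likelihood decoding for a binary symmetric channel. The codebook below is a small handmade example, not an actual receptive field code.

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary strings."""
    return sum(x != y for x, y in zip(a, b))

def nearest(word, codebook):
    # maximum-likelihood decoding for a binary symmetric channel
    return min(codebook, key=lambda c: hamming(word, c))

# toy codebook with minimum distance 3: corrects any single bit flip
codebook = ["00000", "11100", "00111", "11011"]
```

    Redundancy only buys error correction when codewords are well separated; the paper's point is that RF codes spend their redundancy on reflecting stimulus geometry instead.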

  11. Training course on code implementation.

    PubMed

    Allain, A; De Arango, R

    1992-01-01

    The International Baby Food Action Network (IBFAN) is a coalition of over 40 citizen groups in 70 countries. IBFAN monitors the progress worldwide of the implementation of the International Code of Marketing of Breastmilk Substitutes. The Code is intended to regulate the advertising and promotional techniques used to sell infant formula. The 1991 IBFAN report shows that 75 countries have taken some action to implement the International Code. During 1992, the IBFAN Code Documentation Center in Malaysia conducted 2 training courses to help countries draft legislation to implement and monitor compliance with the International Code. In April, government officials from 19 Asian and African countries attended the first course in Malaysia; the second course was conducted in Spanish in Guatemala and attended by officials from 15 Latin American and Caribbean countries. The resource people included representatives from NGOs in Africa, Asia, Latin America, Europe and North America with experience in Code implementation and monitoring at the national level. The main purpose of each course was to train government officials to use the International Code as a starting point for national legislation to protect breastfeeding. Participants reviewed recent information on lactation management, the advantages of breastfeeding, current trends in breastfeeding and the marketing practices of infant formula manufacturers. The participants studied the terminology contained in the International Code and terminology used by infant formula manufacturers to include breastmilk supplements such as follow-on formulas and cereal-based baby foods. Relevant World Health Assembly resolutions such as the one adopted in 1986 on the need to ban free and low-cost supplies to hospitals were examined. The legal aspects of the current Baby Friendly Hospital Initiative (BFHI) and the progress in the 12 BFHI test countries concerning the elimination of supplies were also examined. International Labor

  12. Accelerator shield design of KIPT neutron source facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Z.; Gohar, Y.

    Argonne National Laboratory (ANL) of the United States and Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the design development of a neutron source facility at KIPT utilizing an electron-accelerator-driven subcritical assembly. Electron beam power is 100 kW, using 100 MeV electrons. The facility is designed to perform basic and applied nuclear research, produce medical isotopes, and train young nuclear specialists. The biological shield of the accelerator building is designed to reduce the biological dose to less than 0.5-mrem/hr during operation. The main source of the biological dose is the photons and the neutrons generated by interactions of leaked electrons from the electron gun and accelerator sections with the surrounding concrete and accelerator materials. The Monte Carlo code MCNPX serves as the calculation tool for the shield design, due to its capability to transport coupled electron-photon-neutron problems. The direct photon dose can be tallied by an MCNPX calculation, starting with the leaked electrons. However, it is difficult to accurately tally the neutron dose directly from the leaked electrons. The neutron yield per electron from the interactions with the surrounding components is less than 0.01 neutron per electron. This causes difficulties for Monte Carlo analyses and consumes tremendous computation time to tally the neutron dose outside the shield boundary with acceptable statistics. To avoid these difficulties, the SOURCE and TALLYX user subroutines of MCNPX were developed for the study. The generated neutrons are banked, together with all related parameters, for a subsequent MCNPX calculation to obtain the neutron and secondary photon doses. The weight windows variance reduction technique is utilized for both neutron and photon dose calculations. Two shielding materials, i.e., heavy concrete and ordinary concrete, were considered for the shield design. The main goal is to maintain

  13. The Effect of Acceleration Sprint and Zig-zag Drill Combination to Increase Students’ Speed and Agility

    NASA Astrophysics Data System (ADS)

    Bana, O.; Mintarto, E.; Kusnanik, N. W.

    2018-01-01

    The purpose of this research is to analyze the following factors: (1) how far acceleration sprint training affects speed and agility; (2) how much the zig-zag drill combination influences speed and agility; and (3) whether the effects of acceleration sprint training and the zig-zag drill combination on speed and agility differ. This is quantitative research with a quasi-experimental approach and a matching-only design. The study was conducted on 33 male students who take part in extracurricular activities, divided into 3 groups of 11 students each. Group 1 was given acceleration sprint training, group 2 was given the zig-zag drill combination, and group 3 performed conventional exercises, for 8 weeks. Data were collected using a 30-meter sprint to test speed and the agility T-test to test agility, and were analyzed using t-tests and analysis of variance. The conclusions of the research are: (1) acceleration sprint training has a significant effect on speed and agility; (2) the zig-zag drill combination has a significant effect on speed and agility; and (3) acceleration sprint training has the greater effect on speed and agility.
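
    The analysis-of-variance step used to compare the three training groups can be sketched with a hand-rolled one-way ANOVA F statistic. The group data below are invented numbers, not the study's measurements.

```python
def anova_f(groups):
    """One-way ANOVA F statistic: between-group over within-group variance."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical improvement scores for sprint, zig-zag, and conventional groups
f_stat = anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [3.0, 4.0, 5.0]])
```

    The F statistic is then compared against the F distribution with (k-1, n-k) degrees of freedom to decide whether the group means differ significantly.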

  14. Recombinant blood group proteins for use in antibody screening and identification tests.

    PubMed

    Seltsam, Axel; Blasczyk, Rainer

    2009-11-01

    The present review elucidates the potential of recombinant blood group proteins (BGPs) for red blood cell (RBC) antibody detection and identification in pretransfusion testing, and the achievements in this field so far. Many BGPs have been eukaryotically and prokaryotically expressed in sufficient quantity and quality for RBC antibody testing. Recombinant BGPs can be incorporated in soluble protein reagents or solid-phase assays such as ELISA, color-coded microsphere and protein microarray chip-based techniques. Because novel recombinant protein-based assays use single antigens, a positive reaction of a serum with the recombinant protein directly indicates the presence and specificity of the target antibody. Inversely, conventional RBC-based assays use panels of human RBCs carrying a huge number of blood group antigens at the same time and require negative reactions of samples with antigen-negative cells for indirect determination of antibody specificity. Because of their capacity for single-step, direct RBC antibody determination, recombinant protein-based assays may greatly facilitate and accelerate the identification of common and rare RBC antibodies.
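
    The contrast between indirect, panel-based identification and direct single-antigen testing can be sketched as pattern matching. The antigens and the tiny four-cell panel below are invented for illustration; real panels carry many more cells and antigens.

```python
# hypothetical 4-cell panel: per-antigen presence (1) / absence (0) on each cell
panel = {
    "K":   [1, 0, 1, 0],
    "Fya": [1, 1, 0, 0],
    "Jka": [0, 1, 1, 1],
}

def identify(reactions, panel):
    """Conventional-style indirect ID: return the specificities whose
    antigen pattern across the panel matches the serum's reactions."""
    return [ag for ag, pattern in panel.items() if pattern == reactions]

# a recombinant single-antigen assay skips this matching entirely:
# one well per antigen, and a positive well names the antibody directly.
```

    The indirect route needs the negative reactions as much as the positive ones to pin down specificity, which is exactly the step single-antigen reagents eliminate.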

  15. High-performance computational fluid dynamics: a custom-code approach

    NASA Astrophysics Data System (ADS)

    Fannon, James; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain; Náraigh, Lennon Ó.

    2016-07-01

    We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier-Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFD) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFD, while also providing insight for those interested in more general aspects of high-performance computing.
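
    The validation case at the heart of such a study, pressure-driven channel flow, has an exact laminar answer that any discretization should reproduce. This sketch (a 1D Jacobi iteration, not TPLS's actual numerics) solves -mu u'' = G with no-slip walls and compares against the parabolic Poiseuille profile.

```python
# Solve -mu * u''(y) = G on 0 <= y <= h with u(0) = u(h) = 0 by Jacobi
# iteration; the exact solution is u(y) = G/(2*mu) * y*(h - y).
mu, G, h, n = 1.0, 1.0, 1.0, 21
dy = h / (n - 1)
u = [0.0] * n
for _ in range(5000):
    # each sweep builds a new list from the old one (Jacobi, not Gauss-Seidel)
    u = [0.0] + [0.5 * (u[i - 1] + u[i + 1] + dy * dy * G / mu)
                 for i in range(1, n - 1)] + [0.0]

exact = [G / (2.0 * mu) * (i * dy) * (h - i * dy) for i in range(n)]
err = max(abs(a - b) for a, b in zip(u, exact))
```

    Because the second-order difference is exact for quadratics, the converged discrete solution matches the analytic profile at the nodes; in TPLS the same check is run in 3D before moving on to turbulent cases.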

  16. Software Certification - Coding, Code, and Coders

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  17. Wakefield Computations for the CLIC PETS using the Parallel Finite Element Time-Domain Code T3P

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A; Kabel, A.; Lee, L.

    In recent years, SLAC's Advanced Computations Department (ACD) has developed the high-performance parallel 3D electromagnetic time-domain code, T3P, for simulations of wakefields and transients in complex accelerator structures. T3P is based on advanced higher-order Finite Element methods on unstructured grids with quadratic surface approximation. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with unprecedented accuracy, aiding the design of the next generation of accelerator facilities. Applications to the Compact Linear Collider (CLIC) Power Extraction and Transfer Structure (PETS) are presented.
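
    T3P's finite-element machinery is far beyond a sketch, but the explicit time-domain stepping such codes rely on can be illustrated with the 1D scalar wave equation on a uniform grid, an illustrative stand-in, not T3P's method. At Courant number c*dt/dx = 1 the leapfrog update u[i]^{n+1} = u[i+1]^n + u[i-1]^n - u[i]^{n-1} propagates a pulse exactly.

```python
# 1D wave equation u_tt = c^2 u_xx, leapfrog scheme at Courant number 1
n = 200

def pulse(i):
    """Square pulse profile f(x) sampled at integer grid index i."""
    return 1.0 if 90 <= i < 100 else 0.0

# a right-moving wave u(x, t) = f(x - c t): two initial time levels
u_prev = [pulse(i) for i in range(n)]        # t = 0
u_curr = [pulse(i - 1) for i in range(n)]    # t = dt, shifted one cell right
for _ in range(50):
    u_next = [0.0] * n
    for i in range(1, n - 1):
        u_next[i] = u_curr[i + 1] + u_curr[i - 1] - u_prev[i]
    u_prev, u_curr = u_curr, u_next
```

    After 50 more steps the pulse has translated exactly 50 cells; production codes like T3P trade this simplicity for unstructured grids and higher-order elements, but the march through time looks the same.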

  18. New technologies accelerate the exploration of non-coding RNAs in horticultural plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Degao; Mewalal, Ritesh; Hu, Rongbin

    Non-coding RNAs (ncRNAs), that is, RNAs not translated into proteins, are crucial regulators of a variety of biological processes in plants. While protein-encoding genes have been relatively well-annotated in sequenced genomes, accounting for a small portion of the genome space in plants, the universe of plant ncRNAs is rapidly expanding. Recent advances in experimental and computational technologies have generated a great momentum for discovery and functional characterization of ncRNAs. Here we summarize the classification and known biological functions of plant ncRNAs, review the application of next-generation sequencing (NGS) technology and ribosome profiling technology to ncRNA discovery in horticultural plants and discuss the application of new technologies, especially the new genome-editing tool clustered regularly interspaced short palindromic repeat (CRISPR)/CRISPR-associated protein 9 (Cas9) systems, to functional characterization of plant ncRNAs.

  19. New technologies accelerate the exploration of non-coding RNAs in horticultural plants

    PubMed Central

    Liu, Degao; Mewalal, Ritesh; Hu, Rongbin; Tuskan, Gerald A; Yang, Xiaohan

    2017-01-01

    Non-coding RNAs (ncRNAs), that is, RNAs not translated into proteins, are crucial regulators of a variety of biological processes in plants. While protein-encoding genes have been relatively well-annotated in sequenced genomes, accounting for a small portion of the genome space in plants, the universe of plant ncRNAs is rapidly expanding. Recent advances in experimental and computational technologies have generated a great momentum for discovery and functional characterization of ncRNAs. Here we summarize the classification and known biological functions of plant ncRNAs, review the application of next-generation sequencing (NGS) technology and ribosome profiling technology to ncRNA discovery in horticultural plants and discuss the application of new technologies, especially the new genome-editing tool clustered regularly interspaced short palindromic repeat (CRISPR)/CRISPR-associated protein 9 (Cas9) systems, to functional characterization of plant ncRNAs. PMID:28698797
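
    A first computational cut at the coding/non-coding distinction described above is an open-reading-frame scan: a transcript with no long ORF is a non-coding candidate. The scanner below is a minimal forward-strand sketch with an illustrative length threshold; real ncRNA pipelines also check all frames, both strands, and coding potential scores.

```python
STOPS = {"TAA", "TAG", "TGA"}

def longest_orf(seq):
    """Length (in nucleotides) of the longest ATG-to-stop open reading
    frame on the forward strand, stop codon included."""
    best = 0
    for start in range(len(seq) - 2):
        if seq[start:start + 3] != "ATG":
            continue
        for j in range(start + 3, len(seq) - 2, 3):
            if seq[j:j + 3] in STOPS:
                best = max(best, j + 3 - start)
                break
    return best

def looks_noncoding(seq, min_orf=300):
    # illustrative cutoff: no ORF of at least min_orf nt -> ncRNA candidate
    return longest_orf(seq) < min_orf
```

    Ribosome profiling, mentioned in the abstract, provides the experimental counterpart: transcripts that pass the ORF filter but show no ribosome occupancy are stronger ncRNA candidates.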

  20. Discussion on LDPC Codes and Uplink Coding

    NASA Technical Reports Server (NTRS)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes, and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.
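
    The parity-check decoding idea behind LDPC codes can be shown at toy scale with a (7,4) Hamming code, whose small dense check matrix stands in here for the large sparse matrices actual LDPC systems use; this is an illustration of syndrome decoding, not a CCSDS LDPC decoder.

```python
# H[i][j] = bit i of (j+1): column j is the binary label of position j+1,
# so for a single error the syndrome literally spells the error position.
H = [[(j + 1 >> i) & 1 for j in range(7)] for i in range(3)]

def correct(word):
    """Single-error syndrome decoding for the (7,4) Hamming code."""
    syndrome = [sum(H[i][j] * word[j] for j in range(7)) % 2 for i in range(3)]
    pos = sum(s << i for i, s in enumerate(syndrome))
    if pos:                      # nonzero syndrome -> flip the indicated bit
        word = word[:]
        word[pos - 1] ^= 1
    return word

codeword = [0, 1, 1, 0, 0, 1, 1]  # a valid Hamming(7,4) codeword
```

    LDPC decoders replace this table-lookup step with iterative message passing over a sparse graph, but the starting point, unsatisfied parity checks pointing at suspect bits, is the same.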

  1. Application of magnetically insulated transmission lines for high current, high voltage electron beam accelerators

    NASA Astrophysics Data System (ADS)

    Shope, S. L.; Mazarakis, M. G.; Frost, C. A.; Poukey, J. W.; Turman, B. N.

    Self Magnetically Insulated Transmission Line (MITL) adders were used successfully in a number of Sandia accelerators such as HELIA, HERMES III, and SABRE. Most recently, we used an MITL adder in the RADLAC/SMILE electron beam accelerator to produce high quality, small radius (r(sub rho) less than 2 cm), 11 - 15 MeV, 50 - 100-kA beams with a small transverse velocity v(perpendicular)/c = beta(perpendicular) less than or equal to 0.1. In RADLAC/SMILE, a coaxial MITL passed through the eight, 2 MV vacuum envelopes. The MITL summed the voltages of all eight feeds to a single foilless diode. The experimental results are in good agreement with code simulations. Our success with the MITL technology led us to investigate its application to higher energy accelerator designs. We have a conceptual design for a cavity-fed MITL that sums the voltages from 100 identical, inductively-isolated cavities. Each cavity is a toroidal structure that is driven simultaneously by four 8-ohm pulse-forming lines, providing a 1-MV voltage pulse to each of the 100 cavities. The point design accelerator is 100 MV, 500 kA, with a 30 - 50 ns FWHM output pulse.

  2. Short-Term Memory Coding in Children with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Henry, Lucy

    2008-01-01

    To examine visual and verbal coding strategies, I asked children with intellectual disabilities and peers matched for MA and CA to perform picture memory span tasks with phonologically similar, visually similar, long, or nonsimilar named items. The CA group showed effects consistent with advanced verbal memory coding (phonological similarity and…

  3. Use of the ETA-1 reactor for the validation of the multi-group APOLLO2-MORET 5 code and the Monte Carlo continuous energy MORET 5 code

    NASA Astrophysics Data System (ADS)

    Leclaire, N.; Cochet, B.; Le Dauphin, F. X.; Haeck, W.; Jacquet, O.

    2014-06-01

    The present paper aims at providing experimental validation for the use of the MORET 5 code for advanced concepts of reactor involving thorium and heavy water. It therefore constitutes an opportunity to test and improve the thermal-scattering data of heavy water and also to test the recent implementation of probability tables in the MORET 5 code.

  4. Compact Plasma Accelerator

    NASA Technical Reports Server (NTRS)

    Foster, John E.

    2004-01-01

    A plasma accelerator has been conceived for both material-processing and spacecraft-propulsion applications. This accelerator generates and accelerates ions within a very small volume. Because of its compactness, this accelerator could be nearly ideal for primary or station-keeping propulsion for spacecraft having masses between 1 and 20 kg. Because this accelerator is designed to generate beams of ions having energies between 50 and 200 eV, it could also be used for surface modification or activation of thin films.
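
    The quoted 50-200 eV beam energies translate directly into exhaust speeds via v = sqrt(2E/m). The sketch below assumes singly charged xenon ions, an assumption for illustration, since the abstract does not name the propellant species.

```python
import math

E_CHARGE = 1.602176634e-19      # C, elementary charge
M_XENON = 2.180e-25             # kg, approx. 131.3 u (assumed propellant)

def ion_speed(energy_eV, mass_kg=M_XENON):
    """Speed of a singly charged ion with the given kinetic energy."""
    return math.sqrt(2.0 * energy_eV * E_CHARGE / mass_kg)

v_low, v_high = ion_speed(50.0), ion_speed(200.0)
```

    Quadrupling the beam energy doubles the exhaust speed, so the 50-200 eV range spans roughly a factor of two in specific impulse.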

  5. Development of high intensity ion sources for a Tandem-Electrostatic-Quadrupole facility for Accelerator-Based Boron Neutron Capture Therapy.

    PubMed

    Bergueiro, J; Igarzabal, M; Sandin, J C Suarez; Somacal, H R; Vento, V Thatar; Huck, H; Valda, A A; Repetto, M; Kreiner, A J

    2011-12-01

    Several ion sources have been developed and an ion source test stand has been mounted for the first stage of a Tandem-Electrostatic-Quadrupole facility For Accelerator-Based Boron Neutron Capture Therapy. A first source, designed, fabricated and tested is a dual chamber, filament driven and magnetically compressed volume plasma proton ion source. A 4 mA beam has been accelerated and transported into the suppressed Faraday cup. Extensive simulations of the sources have been performed using both 2D and 3D self-consistent codes. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Microelectromechanical acceleration-sensing apparatus

    DOEpatents

    Lee, Robb M [Albuquerque, NM; Shul, Randy J [Albuquerque, NM; Polosky, Marc A [Albuquerque, NM; Hoke, Darren A [Albuquerque, NM; Vernon, George E [Rio Rancho, NM

    2006-12-12

    An acceleration-sensing apparatus is disclosed which includes a moveable shuttle (i.e. a suspended mass) and a latch for capturing and holding the shuttle when an acceleration event is sensed above a predetermined threshold level. The acceleration-sensing apparatus provides a switch closure upon sensing the acceleration event and remains latched in place thereafter. Examples of the acceleration-sensing apparatus are provided which are responsive to an acceleration component in a single direction (i.e. a single-sided device) or to two oppositely-directed acceleration components (i.e. a dual-sided device). A two-stage acceleration-sensing apparatus is also disclosed which can sense two acceleration events separated in time. The acceleration-sensing apparatus of the present invention has applications, for example, in an automotive airbag deployment system.
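
    The capture-and-hold behavior described in the patent can be modeled as a tiny state machine: once the sensed acceleration crosses the threshold, the switch closes and stays closed. The threshold and input samples below are illustrative, not values from the patent.

```python
class LatchingAccelSensor:
    """Threshold sensor that latches (switch stays closed) after a single
    over-threshold acceleration event, mimicking the shuttle-and-latch design."""

    def __init__(self, threshold_g):
        self.threshold_g = threshold_g
        self.latched = False

    def sense(self, accel_g):
        if abs(accel_g) >= self.threshold_g:
            self.latched = True          # latch captures the shuttle
        return self.latched              # switch state persists thereafter

sensor = LatchingAccelSensor(threshold_g=50.0)
history = [sensor.sense(a) for a in (5.0, 12.0, 75.0, 3.0)]
```

    A dual-sided device would carry two such latches, one per acceleration direction, and a two-stage device would chain two thresholds in sequence.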

  7. SHORT ACCELERATION TIMES FROM SUPERDIFFUSIVE SHOCK ACCELERATION IN THE HELIOSPHERE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perri, S.; Zimbardo, G., E-mail: silvia.perri@fis.unical.it

    2015-12-10

    The analysis of time profiles of particles accelerated at interplanetary shocks allows particle transport properties to be inferred. The frequently observed power-law decay upstream, indeed, implies a superdiffusive particle transport when the level of magnetic field variance does not change as the time interval from the shock front increases. In this context, a superdiffusive shock acceleration (SSA) theory has been developed, allowing us to make predictions of the acceleration times. In this work we estimate for a number of interplanetary shocks, including the solar wind termination shock, the acceleration times for energetic protons in the framework of SSA and we compare the results with the acceleration times predicted by standard diffusive shock acceleration. The acceleration times due to SSA are found to be much shorter than in the classical model, and also shorter than the interplanetary shock lifetimes. This decrease of the acceleration times is due to the scale-free nature of the particle displacements in the framework of superdiffusion. Indeed, very long displacements are possible, increasing the probability for particles far from the front of the shock to return, and short displacements have a high probability of occurrence, increasing the chances for particles close to the front to cross the shock many times.
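
    The scale-free displacements behind superdiffusion can be contrasted with ordinary diffusion in a toy random walk: heavy-tailed (Pareto) step lengths make the mean-square displacement grow much faster than the linear-in-time Gaussian case. The walker counts, step distributions, and seed below are illustrative, not the paper's model.

```python
import random

def msd(step, walkers=500, steps=1000, seed=1):
    """Mean-square displacement of an ensemble of 1D random walkers."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walkers):
        x = 0.0
        for _ in range(steps):
            x += step(rng)
        total += x * x
    return total / walkers

# ordinary diffusion: Gaussian steps, MSD grows linearly with step count
gauss_msd = msd(lambda r: r.gauss(0.0, 1.0))
# scale-free walk: Pareto step magnitudes (alpha = 1.5, infinite variance)
levy_msd = msd(lambda r: r.choice((-1.0, 1.0)) * r.paretovariate(1.5))
```

    The occasional very long jumps of the heavy-tailed walk are what raise the return probability of distant particles in SSA, and hence shorten the acceleration time relative to standard diffusive shock acceleration.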

  8. Palindromic repetitive DNA elements with coding potential in Methanocaldococcus jannaschii.

    PubMed

    Suyama, Mikita; Lathe, Warren C; Bork, Peer

    2005-10-10

    We have identified 141 novel palindromic repetitive elements in the genome of the euryarchaeon Methanocaldococcus jannaschii. The total length of these elements is 14.3 kb, which corresponds to 0.9% of the total genomic sequence and 6.3% of all extragenic regions. The elements can be divided into three groups (MJRE1-3) based on sequence similarity. The low sequence identity within each of the groups suggests a rather old origin of these elements in M. jannaschii. Three MJRE2 elements were located within protein coding regions without disrupting the coding potential of the host genes, indicating that insertion of repeats might be a widespread mechanism to enhance sequence diversity in coding regions.
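
    Palindromes in the DNA sense are substrings equal to their own reverse complement (such as the EcoRI site GAATTC). A minimal scanner for fixed-length palindromes can be sketched as follows; real repeat-finding allows mismatches, spacers, and variable lengths, which this toy does not.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return "".join(COMPLEMENT[b] for b in reversed(seq))

def find_palindromes(seq, length=6):
    """Start positions of substrings equal to their own reverse complement."""
    hits = []
    for i in range(len(seq) - length + 1):
        sub = seq[i:i + length]
        if sub == revcomp(sub):
            hits.append(i)
    return hits
```

    Applied genome-wide with longer windows and a similarity clustering step, this is the flavor of scan that groups elements into families like MJRE1-3.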

  9. "SMART": A Compact and Handy FORTRAN Code for the Physics of Stellar Atmospheres

    NASA Astrophysics Data System (ADS)

    Sapar, A.; Poolamäe, R.

    2003-01-01

    A new computer code SMART (Spectra from Model Atmospheres by Radiative Transfer) for computing stellar spectra forming in plane-parallel atmospheres has been compiled by us and A. Aret. To guarantee wide compatibility of the code with the shell environment, we chose FORTRAN-77 as the programming language and tried to confine ourselves to the common part of its numerous versions under both WINDOWS and LINUX. SMART can be used for studies of several processes in stellar atmospheres. The current version of the programme is undergoing rapid changes due to our goal to elaborate a simple, handy and compact code. Instead of linearisation (a mathematical method of recurrent approximations), we propose to use the physical evolutionary changes, in other words the relaxation of quantum state population rates from LTE to NLTE, which has been studied using a small number of NLTE states. This computational scheme is essentially simpler and more compact than linearisation. The relaxation scheme makes it possible to replace the Λ-iteration procedure with a physically changing emissivity (or source function) which incorporates the changing Menzel coefficients for the NLTE quantum state populations. However, light scattering on free electrons is, in terms of Feynman graphs, a real second-order quantum process and cannot be reduced to consecutive processes of absorption and emission as in the case of radiative transfer in spectral lines. With duly chosen input parameters, the code SMART enables computing the radiative acceleration imparted to the matter of a stellar atmosphere in turbulence clumps. This also makes it possible to connect the model atmosphere in more detail with the problem of stellar wind triggering. Another problem incorporated into the computer code SMART is the diffusion of chemical elements and their isotopes in the atmospheres of chemically peculiar (CP) stars due to the usual radiative acceleration and the essential additional acceleration generated by light-induced drift.
As

  10. Practices in Code Discoverability: Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, A.; Teuben, P.; Nemiroff, R. J.; Shamir, L.

    2012-09-01

    Here we describe the Astrophysics Source Code Library (ASCL), which takes an active approach to sharing astrophysics source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL now holds over 340 codes and continues to grow; in 2011 it has added an average of 19 codes per month. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This paper provides the history and description of the ASCL. It lists the requirements for including codes, examines the advantages of the ASCL, and outlines some of its future plans.

  11. Longitudinal gas-density profilometry for plasma-wakefield acceleration targets

    NASA Astrophysics Data System (ADS)

    Schaper, Lucas; Goldberg, Lars; Kleinwächter, Tobias; Schwinkendorf, Jan-Patrick; Osterhoff, Jens

    2014-03-01

    Precise tailoring of plasma-density profiles has been identified as one of the critical points in achieving stable and reproducible conditions in plasma-wakefield accelerators. Here, the strict requirements of next-generation plasma-wakefield concepts, such as hybrid accelerators, with densities around 10^17 cm^-3, pose challenges to target fabrication as well as to their reliable diagnosis. To mitigate these issues we combine target simulation with fabrication and characterization. The resulting density profiles in capillaries with gas jet and multiple in- and outlets are simulated with the fluid code OpenFOAM. Satisfactory simulation results are then followed by fabrication of the desired target shapes with structures down to the 10 μm level. The detection of Raman-scattered photons using lenses with a large collection solid angle makes it possible to measure the corresponding longitudinal density profiles at different number densities, with a detection sensitivity down to the low-10^17 cm^-3 density range at high spatial resolution. This offers the possibility to gain insight into steep density gradients, for example in gas jets and at the plasma-to-vacuum transition.

  12. Feasibility of an XUV FEL Oscillator Driven by a SCRF Linear Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumpkin, A. H.; Freund, H. P.; Reinsch, M.

    The Advanced Superconducting Test Accelerator (ASTA) facility is currently under construction at Fermi National Accelerator Laboratory. Using a 1-ms-long macropulse composed of up to 3000 micropulses, and with beam energies projected from 45 to 800 MeV, the possibility of an extreme ultraviolet (XUV) free-electron laser oscillator (FELO) at the higher energy is evaluated. We have used both GINGER with an oscillator module and the MEDUSA/OPC code to assess FELO saturation prospects at 120 nm, 40 nm, and 13.4 nm. The results support saturation at all of these wavelengths, which are also shorter than the demonstrated shortest-wavelength record of 176 nm from a storage-ring-based FELO. This indicates that linac-driven FELOs can be extended into the XUV wavelength regime previously reached only with single-pass FEL configurations.

  13. Coronal tibial slope is associated with accelerated knee osteoarthritis: data from the Osteoarthritis Initiative.

    PubMed

    Driban, Jeffrey B; Stout, Alina C; Duryea, Jeffrey; Lo, Grace H; Harvey, William F; Price, Lori Lyn; Ward, Robert J; Eaton, Charles B; Barbe, Mary F; Lu, Bing; McAlindon, Timothy E

    2016-07-19

    Accelerated knee osteoarthritis may be a unique subset of knee osteoarthritis, which is associated with greater knee pain and disability. Identifying risk factors for accelerated knee osteoarthritis is vital to recognizing people who will develop it and initiating early interventions. The geometry of an articular surface (e.g., coronal tibial slope), which is a determinant of altered joint biomechanics, may be an important risk factor for incident accelerated knee osteoarthritis. We aimed to determine if baseline coronal tibial slope is associated with incident accelerated knee osteoarthritis or common knee osteoarthritis. We conducted a case-control study using data and images from baseline and the first 4 years of follow-up in the Osteoarthritis Initiative. We included three groups: 1) individuals with incident accelerated knee osteoarthritis, 2) individuals with common knee osteoarthritis progression, and 3) a control group with no knee osteoarthritis at any time. We did 1:1:1 matching for the 3 groups based on sex. Weight-bearing, fixed-flexion posterior-anterior knee radiographs were obtained at each visit. One reader manually measured baseline coronal tibial slope on the radiographs. Baseline femorotibial angle was measured on the radiographs using a semi-automated program. To assess the relationship between slope (predictor) and incident accelerated knee osteoarthritis or common knee osteoarthritis (outcomes) compared with no knee osteoarthritis (reference outcome), we performed multinomial logistic regression analyses adjusted for sex. The mean baseline slopes for incident accelerated knee osteoarthritis, common knee osteoarthritis, and no knee osteoarthritis were 3.1 (2.0), 2.7 (2.1), and 2.6 (1.9), respectively. A greater slope was associated with an increased risk of incident accelerated knee osteoarthritis (OR = 1.15 per degree, 95 % CI = 1.01 to 1.32) but not common knee osteoarthritis (OR = 1.04, 95 % CI = 0

  14. Testing cosmic ray acceleration with radio relics: a high-resolution study using MHD and tracers

    NASA Astrophysics Data System (ADS)

    Wittor, D.; Vazza, F.; Brüggen, M.

    2017-02-01

    Weak shocks in the intracluster medium may accelerate cosmic-ray protons and cosmic-ray electrons differently depending on the angle between the upstream magnetic field and the shock normal. In this work, we investigate how shock obliquity affects the production of cosmic rays in high-resolution simulations of galaxy clusters. For this purpose, we performed a magnetohydrodynamical simulation of a galaxy cluster using the mesh refinement code ENZO. We use Lagrangian tracers to follow the properties of the thermal gas, the cosmic rays and the magnetic fields over time. We tested a number of different acceleration scenarios by varying the obliquity-dependent acceleration efficiencies of protons and electrons, and by examining the resulting hadronic γ-ray and radio emission. We find that the radio emission does not change significantly if only quasi-perpendicular shocks are able to accelerate cosmic-ray electrons. Our analysis suggests that radio-emitting electrons found in relics have been typically shocked many times before z = 0. On the other hand, the hadronic γ-ray emission from clusters is found to decrease significantly if only quasi-parallel shocks are allowed to accelerate cosmic ray protons. This might reduce the tension with the low upper limits on γ-ray emission from clusters set by the Fermi satellite.

  15. Reduced 3d modeling on injection schemes for laser wakefield acceleration at plasma scale lengths

    NASA Astrophysics Data System (ADS)

    Helm, Anton; Vieira, Jorge; Silva, Luis; Fonseca, Ricardo

    2017-10-01

    Current modelling techniques for laser wakefield acceleration (LWFA) are based on particle-in-cell (PIC) codes, which are computationally demanding. In PIC simulations the laser wavelength λ0, in the μm range, has to be resolved over acceleration lengths in the meter range. A promising approach is the ponderomotive guiding center (PGC) solver, which considers only the laser envelope for laser-pulse propagation. Then only the plasma skin depth λp has to be resolved, leading to speedups of (λp/λ0)^2. This makes it possible to perform a wide range of parameter studies and to address the λ0 << λp regime. We present the 3d version of a PGC solver in the massively parallel, fully relativistic PIC code OSIRIS. Further, a discussion and characterization of the validity of the PGC solver for injection schemes on plasma scale lengths, such as down-ramp injection, magnetic injection and ionization injection, through parametric studies, full PIC simulations and theoretical scaling, is presented. This work was partially supported by Fundacao para a Ciencia e Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014 and PD/BD/105882/2014.
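    As a rough back-of-the-envelope illustration of the quoted (λp/λ0)^2 scaling (the wavelength values below are hypothetical examples chosen for the sketch, not numbers from the abstract):

```python
# Estimate the PGC-solver speedup from the (lambda_p / lambda_0)^2 scaling.
# Illustrative values only: a 0.8 um Ti:sapphire laser wavelength and a
# 30 um plasma skin depth; neither figure is taken from the paper.
lambda_0 = 0.8e-6   # laser wavelength (m), resolved by a standard PIC code
lambda_p = 30e-6    # plasma skin depth (m), resolved by the PGC solver

speedup = (lambda_p / lambda_0) ** 2
print(f"estimated speedup: {speedup:.0f}x")  # ~1406x for these values
```

    The gain grows quadratically with the scale separation, which is why the envelope approach pays off most in the low-density (large λp) regime.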

  16. MABE multibeam accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasti, D.E.; Ramirez, J.J.; Coleman, P.D.

    1985-01-01

    The Megamp Accelerator and Beam Experiment (MABE) was the technology development testbed for the multiple-beam, linear induction accelerator approach for Hermes III, a new 20 MeV, 0.8 MA, 40 ns accelerator being developed at Sandia for gamma-ray simulation. Experimental studies of a high-current, single-beam accelerator (8 MeV, 80 kA) and a nine-beam injector (1.4 MeV, 25 kA/beam) have been completed, and experiments on a nine-beam linear induction accelerator are in progress. A two-beam linear induction accelerator has been designed and will be built as a gamma-ray simulator to be used in parallel with Hermes III. The MABE pulsed power system and accelerator for the multiple-beam experiments is described. Results from these experiments and the two-beam design are discussed. 11 refs., 6 figs.

  17. Significance of acceleration period in a dynamic strength testing study.

    PubMed

    Chen, W L; Su, F C; Chou, Y L

    1994-06-01

    The acceleration period that occurs during isokinetic tests may provide valuable information regarding neuromuscular readiness to produce maximal contraction. The purpose of this study was to collect normative data on acceleration time during isokinetic knee testing, to calculate the acceleration work (Wacc), and to determine the errors (ERexp, ERwork, ERpower) due to ignoring Wacc during explosiveness, total work, and average power measurements. Seven male and 13 female subjects were tested using the Cybex 325 system and an electronic stroboscope machine at 10 testing speeds (30-300 degrees/sec). A three-way ANOVA was used to assess the effects of gender, direction, and speed on acceleration time, Wacc, and the errors. The results indicated that acceleration time was significantly affected by speed and direction; Wacc and ERexp by speed, direction, and gender; and ERwork and ERpower by speed and gender. The errors appeared to increase when testing the female subjects, during the knee flexion test, or when speed increased. To increase validity in clinical testing, it is important to consider the acceleration phase effect, especially in higher-velocity isokinetic testing or for weaker muscle groups.

  18. Development and Benchmarking of a Hybrid PIC Code For Dense Plasmas and Fast Ignition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witherspoon, F. Douglas; Welch, Dale R.; Thompson, John R.

    Radiation processes play an important role in the study of both fast ignition and other inertial confinement schemes, such as plasma-jet-driven magneto-inertial fusion, both in their effect on energy balance and in generating diagnostic signals. In the latter case, warm and hot dense matter may be produced by the convergence of a plasma shell formed by the merging of an assembly of high-Mach-number plasma jets. This innovative approach has the potential advantage of creating matter of high energy density in voluminous amounts compared with high-power lasers or particle beams. An important application of this technology is as a plasma liner for the flux compression of magnetized plasma to create ultra-high magnetic fields and burning plasmas. HyperV Technologies Corp. has been developing plasma-jet accelerator technology in both coaxial and linear railgun geometries to produce plasma jets of sufficient mass, density, and velocity to create such imploding plasma liners. An enabling tool for the development of this technology is the ability to model the plasma dynamics, not only in the accelerators themselves, but also in the resulting magnetized target plasma and within the merging/interacting plasma jets during transport to the target. Welch pioneered numerical modeling of such plasmas (including for fast ignition) using the LSP simulation code. LSP is an electromagnetic, parallelized plasma simulation code under development since 1995. It has a number of innovative features making it uniquely suitable for modeling high-energy-density plasmas, including a hybrid fluid model for electrons that allows electrons in dense plasmas to be modeled with a kinetic or fluid treatment as appropriate. In addition to in-house use at Voss Scientific, several groups carrying out research in fast ignition (LLNL, SNL, UCSD, AWE (UK), and Imperial College (UK)) also use LSP. A collaborative team consisting of HyperV Technologies Corp., Voss Scientific LLC, FAR-TECH, Inc

  19. Probing electron acceleration and x-ray emission in laser-plasma accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thaury, C.; Ta Phuoc, K.; Corde, S.

    2013-06-15

    While laser-plasma accelerators have demonstrated a strong potential for the acceleration of electrons up to giga-electronvolt energies, few experimental tools for studying the acceleration physics have been developed. In this paper, we demonstrate a method for probing the acceleration process. A second laser beam, propagating perpendicular to the main beam, is focused on the gas jet a few nanoseconds before the main beam creates the accelerating plasma wave. This second beam is intense enough to ionize the gas and form a density depletion, which locally inhibits the acceleration. The position of the density depletion is scanned along the interaction length to probe the electron injection and acceleration, and the betatron X-ray emission. To illustrate the potential of the method, the variation of the injection position with the plasma density is studied.

  20. Improvement of Mishchenko's T-matrix code for absorbing particles.

    PubMed

    Moroz, Alexander

    2005-06-10

    The use of Gaussian elimination with backsubstitution for matrix inversion in scattering theories is discussed. Within the framework of the T-matrix method (the state-of-the-art code by Mishchenko is freely available at http://www.giss.nasa.gov/-crmim), it is shown that the domain of applicability of Mishchenko's FORTRAN 77 (F77) code can be substantially expanded in the direction of strongly absorbing particles, where the current code fails to converge. Such an extension is especially important if the code is to be used in nanoplasmonic or nanophotonic applications involving metallic particles. At the same time, convergence can also be achieved for large nonabsorbing particles, in which case the non-Numerical Algorithms Group option of Mishchenko's code diverges. A computer F77 implementation of Mishchenko's code supplemented with Gaussian elimination with backsubstitution is freely available at http://www.wave-scattering.com.
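    For readers unfamiliar with the linear-algebra step under discussion, here is a generic sketch of Gaussian elimination with back substitution (with partial pivoting) for solving A x = b. It illustrates the textbook technique only and bears no relation to the actual F77 implementation in Mishchenko's code:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting
    followed by back substitution. A is a list of rows; b a list."""
    n = len(A)
    # Build the augmented matrix [A | b] so row operations touch b too.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot magnitude.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate entries below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

# e.g. 2x + y = 3, x + 3y = 4 has the solution x = y = 1
print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0]))
```

    Solving column-by-column like this (or inverting via repeated solves) is numerically better behaved than forming an explicit inverse, which is the point the abstract makes for strongly absorbing particles.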

  1. New quantum codes constructed from quaternary BCH codes

    NASA Astrophysics Data System (ADS)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each different code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Secondly, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  2. A Model of RHIC Using the Unified Accelerator Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilat, F.; Tepikian, S.; Trahern, C. G.

    1998-01-01

    The Unified Accelerator Library (UAL) is an object-oriented and modular software environment for accelerator physics which comprises an accelerator object model for the description of the machine (SMF, for Standard Machine Format), a collection of physics libraries, and a Perl interface that provides a homogeneous shell for integrating and managing these components. Currently available physics libraries include TEAPOT++, a collection of C++ physics modules conceptually derived from TEAPOT, and DNZLIB, a differential algebra package for map generation. This software environment has been used to build a flat model of RHIC which retains the hierarchical lattice description while assigning specific characteristics to individual elements, such as measured field harmonics. A first application of the model and of the simulation capabilities of UAL has been the study of RHIC stability in the presence of Siberian snakes and spin rotators. The building blocks of RHIC snakes and rotators are helical dipoles, unconventional devices that cannot be modeled by traditional accelerator physics codes and have been implemented in UAL as Taylor maps. Section 2 describes the RHIC data stores, Section 3 the RHIC SMF format, and Section 4 the RHIC-specific Perl interface (RHIC Shell). Section 5 explains how the RHIC SMF and UAL have been used to study the RHIC dynamic behavior and presents detuning and dynamic aperture results. If the reader is not familiar with the motivation and characteristics of UAL, we include a useful overview paper in the Appendix. An example of a complete set of Perl scripts for RHIC simulation can also be found in the Appendix.

  3. The impact of three discharge coding methods on the accuracy of diagnostic coding and hospital reimbursement for inpatient medical care.

    PubMed

    Tsopra, Rosy; Peckham, Daniel; Beirne, Paul; Rodger, Kirsty; Callister, Matthew; White, Helen; Jais, Jean-Philippe; Ghosh, Dipansu; Whitaker, Paul; Clifton, Ian J; Wyatt, Jeremy C

    2018-07-01

    Coding of diagnoses is important for patient care, hospital management and research. However, coding accuracy is often poor and may reflect the method of coding. This study investigates the impact of three alternative coding methods on the inaccuracy of diagnosis codes and hospital reimbursement. Comparisons of coding inaccuracy were made between a list of coded diagnoses obtained by a coder using (i) the discharge summary alone, (ii) the case notes and discharge summary, and (iii) the discharge summary with the addition of medical input. For each method, inaccuracy was determined for the primary and secondary diagnoses, the Healthcare Resource Group (HRG) and the estimated hospital reimbursement. These data were then compared with a gold standard derived by a consultant and a coder. 107 consecutive patient discharges were analysed. Inaccuracy of diagnosis codes was highest when a coder used the discharge summary alone, and decreased significantly when the coder used the case notes (70% vs 58%, p < 0.0001) or coded from the discharge summary with medical support (70% vs 60%, p < 0.0001). When compared with the gold standard, the percentage of incorrect HRGs was 42% for the discharge summary alone, 31% for coding with case notes, and 35% for coding with medical support. The three coding methods resulted in an annual estimated loss of hospital remuneration of between £1.8 M and £16.5 M. The accuracy of diagnosis codes and the percentage of correct HRGs improved when coders used either the case notes or medical support in addition to the discharge summary. Further emphasis needs to be placed on improving the standard of information recorded in discharge summaries. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Heavy ion linear accelerator for radiation damage studies of materials

    NASA Astrophysics Data System (ADS)

    Kutsaev, Sergey V.; Mustapha, Brahim; Ostroumov, Peter N.; Nolen, Jerry; Barcikowski, Albert; Pellin, Michael; Yacout, Abdellatif

    2017-03-01

    A new eXtreme MATerial (XMAT) research facility is being proposed at Argonne National Laboratory to enable rapid in situ mesoscale bulk analysis of ion radiation damage in advanced materials and nuclear fuels. This facility combines a new heavy-ion accelerator with the existing high-energy X-ray analysis capability of the Argonne Advanced Photon Source. The heavy-ion accelerator and target complex will enable experimenters to emulate the environment of a nuclear reactor, making possible the study of fission-fragment damage in materials. Materials scientists will be able to use the measured material parameters to validate computer simulation codes and extrapolate the response of the material in a nuclear reactor environment. The new heavy-ion accelerator will provide the appropriate energies and intensities to study these effects, with beam intensities that allow experiments to run over hours or days instead of years. The XMAT facility will use a CW heavy-ion accelerator capable of providing beams of any stable isotope with adjustable energy up to 1.2 MeV/u for 238U(50+) and 1.7 MeV for protons. This energy is crucial to the design since it closely mimics the fission fragments that produce the major portion of the damage in nuclear fuels. The energy also allows damage to be created far from the surface of the material, allowing bulk radiation damage effects to be investigated. The XMAT ion linac includes an electron cyclotron resonance ion source, a normal-conducting radio-frequency quadrupole and four normal-conducting multi-gap quarter-wave resonators operating at 60.625 MHz. This paper presents the 3D multi-physics design and analysis of the accelerating structures and beam dynamics studies of the linac.

  5. Heavy ion linear accelerator for radiation damage studies of materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutsaev, Sergey V.; Mustapha, Brahim; Ostroumov, Peter N.

    A new eXtreme MATerial (XMAT) research facility is being proposed at Argonne National Laboratory to enable rapid in situ mesoscale bulk analysis of ion radiation damage in advanced materials and nuclear fuels. This facility combines a new heavy-ion accelerator with the existing high-energy X-ray analysis capability of the Argonne Advanced Photon Source. The heavy-ion accelerator and target complex will enable experimenters to emulate the environment of a nuclear reactor making possible the study of fission fragment damage in materials. Material scientists will be able to use the measured material parameters to validate computer simulation codes and extrapolate the response of the material in a nuclear reactor environment. Utilizing a new heavy-ion accelerator will provide the appropriate energies and intensities to study these effects with beam intensities which allow experiments to run over hours or days instead of years. The XMAT facility will use a CW heavy-ion accelerator capable of providing beams of any stable isotope with adjustable energy up to 1.2 MeV/u for U-238(50+) and 1.7 MeV for protons. This energy is crucial to the design since it well mimics fission fragments that provide the major portion of the damage in nuclear fuels. The energy also allows damage to be created far from the surface of the material allowing bulk radiation damage effects to be investigated. The XMAT ion linac includes an electron cyclotron resonance ion source, a normal-conducting radio-frequency quadrupole and four normal-conducting multi-gap quarter-wave resonators operating at 60.625 MHz. This paper presents the 3D multi-physics design and analysis of the accelerating structures and beam dynamics studies of the linac.

  6. Fundamental period of Italian reinforced concrete buildings: comparison between numerical, experimental and Italian code simplified values

    NASA Astrophysics Data System (ADS)

    Ditommaso, Rocco; Carlo Ponzo, Felice; Auletta, Gianluca; Iacovino, Chiara; Nigro, Antonella

    2015-04-01

    The aim of this study is a comparison between the fundamental period of reinforced concrete buildings evaluated using the simplified approach proposed by the Italian seismic code (NTC 2008), numerical models, and real values retrieved from an experimental campaign performed on several buildings located in the Basilicata region (Italy). With the intention of proposing simplified relationships to evaluate the fundamental period of reinforced concrete buildings, scientists and engineers have performed several numerical and experimental campaigns on different structures all around the world to calibrate different kinds of formulas. Most of the formulas retrieved from both numerical and experimental analyses provide vibration periods smaller than those suggested by the Italian seismic code. However, it is well known that the fundamental period of a structure plays a key role in the correct evaluation of the spectral acceleration for seismic static analyses. Generally, simplified approaches impose the use of safety factors greater than those related to in-depth nonlinear analyses, with the aim of covering possible unexpected uncertainties. Using the simplified formula proposed by the Italian seismic code, the fundamental period is considerably higher than the fundamental periods experimentally evaluated on real structures, with the consequence that the spectral acceleration adopted in the seismic static analysis may be significantly different from the real spectral acceleration. This approach could produce a decrease in the safety factors obtained using linear and nonlinear seismic static analyses. Finally, the authors suggest a possible update of the Italian seismic code formula for the simplified estimation of the fundamental period of vibration of existing RC buildings, taking into account both elastic and inelastic structural behaviour and the interaction between structural and non-structural elements. Acknowledgements This study was partially funded by the Italian Civil Protection Department within the
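    For reference, the simplified NTC 2008 estimate the abstract refers to has the form T1 = C1 · H^(3/4), with H the building height in metres and C1 = 0.075 for reinforced-concrete frame buildings (this form and coefficient are recalled from the code, not from the abstract itself); a minimal sketch under that assumption:

```python
def fundamental_period_ntc2008(height_m: float, c1: float = 0.075) -> float:
    """Simplified fundamental period T1 = C1 * H^(3/4), NTC 2008 form.

    c1 = 0.075 is the coefficient for reinforced-concrete frame buildings;
    height_m is the building height in metres. Returns T1 in seconds.
    """
    return c1 * height_m ** 0.75

# e.g. a 21 m tall RC frame building
print(round(fundamental_period_ntc2008(21.0), 3))  # ~0.736 s
```

    The paper's point is that this height-only estimate tends to exceed periods measured on real buildings, which skews the spectral acceleration used in seismic static analyses.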

  7. An MCNP-based model for the evaluation of the photoneutron dose in high energy medical electron accelerators.

    PubMed

    Carinou, Eleutheria; Stamatelatos, Ion Evangelos; Kamenopoulou, Vassiliki; Georgolopoulou, Paraskevi; Sandilos, Panayotis

    The development of a computational model for the treatment head of a medical electron accelerator (Elekta/Philips SL-18) with the Monte Carlo code MCNP-4C2 is discussed. The model includes the major components of the accelerator head and a PMMA phantom representing the patient body. Calculations were performed for a 14 MeV electron beam impinging on the accelerator target and a 10 cm x 10 cm beam area at the isocentre. The model was used to predict the neutron ambient dose equivalent at the isocentre level and, moreover, the neutron absorbed dose distribution within the phantom. Calculations were validated against experimental measurements performed with gold-foil activation detectors. The results of this study indicated that the equivalent dose to tissues or organs adjacent to the treatment field due to photoneutrons could be up to 10% of the total peripheral dose, for the specific accelerator characteristics examined. Therefore, photoneutrons should be taken into account when accurate dose calculations are required for sensitive tissues adjacent to the therapeutic X-ray beam. The method described can be extended to other accelerators and collimation configurations as well, upon specification of the treatment head component dimensions, composition and nominal accelerating potential.

  8. Long-Term Effectiveness of Accelerated Hepatitis B Vaccination Schedule in Drug Users

    PubMed Central

    Shah, Dimpy P.; Grimes, Carolyn Z.; Nguyen, Anh T.; Lai, Dejian

    2015-01-01

    Objectives. We demonstrated the effectiveness of an accelerated hepatitis B vaccination schedule in drug users. Methods. We compared the long-term effectiveness of accelerated (0–1–2 months) and standard (0–1–6 months) hepatitis B vaccination schedules in preventing hepatitis B virus (HBV) infections and anti-hepatitis B (anti-HBs) antibody loss during 2-year follow-up in 707 drug users (HIV and HBV negative at enrollment, and who completed 3 vaccine doses) from February 2004 to October 2009. Results. Drug users in the accelerated schedule group had significantly lower HBV infection rates, but a similar rate of anti-HBs antibody loss, compared with the standard schedule group over 2 years of follow-up. No chronic HBV infections were observed. Hepatitis C positivity at enrollment and age younger than 40 years were independent risk factors for HBV infection and antibody loss, respectively. Conclusions. An accelerated vaccination schedule was preferable to a standard vaccination schedule for preventing HBV infections in drug users. To overcome the disadvantages of a standard vaccination schedule, an accelerated vaccination schedule should be considered in drug users with low adherence. Our study should be repeated in different cohorts to validate our findings and establish the role of an accelerated schedule in hepatitis B vaccination guidelines for drug users. PMID:25880946

  9. Annual Coded Wire Tag Program; Oregon Missing Production Groups, 1995 Annual Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrison, Robert L.; Mallette, Christine; Lewis, Mark A.

    1995-12-01

    Bonneville Power Administration is the funding source for the Oregon Department of Fish and Wildlife's Annual Coded Wire Tag Program - Oregon Missing Production Groups Project. Tule brood fall chinook were caught primarily in the British Columbia, Washington and northern Oregon ocean commercial fisheries. The up-river bright fall chinook contributed primarily to the Alaska and British Columbia ocean commercial fisheries and the Columbia River gillnet fishery. Contribution of Rogue fall chinook released in the lower Columbia River system occurred primarily in the Oregon ocean commercial and Columbia River gillnet fisheries. Willamette spring chinook salmon contributed primarily to the Alaska and British Columbia ocean commercial, Oregon freshwater sport and Columbia River gillnet fisheries. Restricted ocean sport and commercial fisheries limited contribution of the Columbia coho released in the Umatilla River, which survived at an average rate of 1.05% and contributed primarily to the Washington, Oregon and California ocean sport and commercial fisheries and the Columbia River gillnet fishery. The 1987 to 1991 brood years of coho released in the Yakima River survived at an average rate of 0.64% and contributed primarily to the Washington, Oregon and California ocean sport and commercial fisheries and the Columbia River gillnet fishery. Survival rates of salmon and steelhead are influenced not only by factors in the hatchery (disease, density, diet, and size and time of release), but also by environmental factors in the river and ocean. These environmental factors are controlled by large-scale weather patterns, such as El Nino, over which man has no influence. Man could have some influence over river flow conditions, but political and economic pressures generally outweigh the biological needs of the fish.

  10. Effects of positive acceleration on the metabolism of endogenous carbon monoxide and serum lipid in atherosclerotic rabbits

    PubMed Central

    Luo, Huilan; Chen, Yongsheng; Wang, Junhua

    2010-01-01

    Background: Atherosclerosis (AS) is caused mainly by increases in serum lipids, thrombosis, and injury to the endothelial cells. During aviation, the incremental load of positive acceleration that leads to dramatic stress reactions and hemodynamic changes may predispose pilots to functional disorders and even pathological changes of organs. However, much less is known about the correlation between aviation and AS pathogenesis. Methods and Results: A total of 32 rabbits were randomly divided into 4 groups of 8. The control group was given a high cholesterol diet but no acceleration exposure, whereas the other 3 experimental groups were treated with a high cholesterol diet and acceleration exposure for 4, 8, and 12 weeks, respectively. In each group, samples of celiac vein blood and the aorta were collected after the last exposure for the measurement of endogenous CO and HO-1 activities, as well as the levels of total cholesterol (TC), triglyceride (TG), high-density lipoprotein cholesterol (HDL-C), and low-density lipoprotein cholesterol (LDL-C). As compared with the control group, the endocardial CO content and the HO-1 activity in aortic endothelial cells were significantly elevated at the end of the 4th, 8th, and 12th weeks, respectively (P < 0.05 or < 0.01), and these measures trended upward as the exposure time was prolonged. Levels of TC and LDL-C in the experimental groups were significantly higher than those in the control group, presenting an upward tendency. Levels of TG were significantly increased in the 8-week-exposure group, but significantly declined in the 12-week-exposure group (while remaining higher than those in the control group). Levels of HDL-C were increased in the 4-week-exposure group, declined in the 8-week-exposure group, and increased once more in the 12-week-exposure group, without significant differences from the control group. Conclusions: Positive acceleration exposure may lead to a significant increase of

  11. An investigation into the effectiveness of smartphone experiments on students’ conceptual knowledge about acceleration

    NASA Astrophysics Data System (ADS)

    Mazzella, Alessandra; Testa, Italo

    2016-09-01

    This study is a first attempt to investigate the effectiveness of smartphone-based activities on students’ conceptual understanding of acceleration. 143 secondary school students (15-16 years old) were involved in two types of activities: smartphone and non-smartphone activities. The latter consisted of data logging and ‘cookbook’ activities. For the sake of comparison, all activities featured the same phenomena, i.e., motion on an inclined plane and pendulum oscillations. A pre-post design was adopted, using open questionnaires as probes. Results show only weak statistical differences between the smartphone and non-smartphone groups. Students who followed smartphone activities were more able to design an experiment to measure acceleration and to correctly describe acceleration in free fall motion. However, students of both groups had many difficulties in drawing the acceleration vector along the trajectory of the studied motion. Results suggest that smartphone-based activities may be effective substitutes for traditional experimental settings and represent a valuable aid for teachers who want to implement laboratory activities at secondary school level. However, to achieve a deeper conceptual understanding of acceleration, some issues need to be addressed: what the reference system of the built-in smartphone sensor is; the relationship between smartphone acceleration graphs and the experimental setup; and the vector representation of the measured acceleration.

  12. Many human accelerated regions are developmental enhancers

    PubMed Central

    Capra, John A.; Erwin, Genevieve D.; McKinsey, Gabriel; Rubenstein, John L. R.; Pollard, Katherine S.

    2013-01-01

    The genetic changes underlying the dramatic differences in form and function between humans and other primates are largely unknown, although it is clear that gene regulatory changes play an important role. To identify regulatory sequences with potentially human-specific functions, we and others used comparative genomics to find non-coding regions conserved across mammals that have acquired many sequence changes in humans since divergence from chimpanzees. These regions are good candidates for performing human-specific regulatory functions. Here, we analysed the DNA sequence, evolutionary history, histone modifications, chromatin state and transcription factor (TF) binding sites of a combined set of 2649 non-coding human accelerated regions (ncHARs) and predicted that at least 30% of them function as developmental enhancers. We prioritized the predicted ncHAR enhancers using analysis of TF binding site gain and loss, along with the functional annotations and expression patterns of nearby genes. We then tested both the human and chimpanzee sequence for 29 ncHARs in transgenic mice, and found 24 novel developmental enhancers active in both species, 17 of which had very consistent patterns of activity in specific embryonic tissues. Of these ncHAR enhancers, five drove expression patterns suggestive of different activity for the human and chimpanzee sequence at embryonic day 11.5. The changes to human non-coding DNA in these ncHAR enhancers may modify the complex patterns of gene expression necessary for proper development in a human-specific manner and are thus promising candidates for understanding the genetic basis of human-specific biology. PMID:24218637

  13. FLY MPI-2: a parallel tree code for LSS

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Comparato, M.; Antonuccio-Delogu, V.

    2006-04-01

    structure. Summary of revisions: The parallel communication schema was totally changed. The new version adopts the MPICH2 library, so FLY can now be executed on all Unix systems having an MPI-2 standard library. The main data structure is declared in a module procedure of FLY (the fly_h.F90 routine). FLY creates an MPI window object for one-sided communication for each of the shared arrays, with a call like the following: CALL MPI_WIN_CREATE(POS, SIZE, REAL8, MPI_INFO_NULL, MPI_COMM_WORLD, WIN_POS, IERR). The following main window objects are created: win_pos, win_vel, win_acc (particle positions, velocities and accelerations); win_pos_cell, win_mass_cell, win_quad, win_subp, win_grouping (cell positions, masses, quadrupole momenta, tree structure and grouping cells). Other windows are created for dynamic load balance and global counters. Restrictions: The program uses the leapfrog integrator schema, but this can be changed by the user. Unusual features: FLY uses the MPI-2 standard; the MPICH2 library was adopted on Linux systems. To run this version of FLY the working directory must be shared among all the processors that execute FLY. Additional comments: Full documentation for the program is included in the distribution in the form of a README file, a User Guide and a Reference manuscript. Running time: performance tests were made on the IBM Linux Cluster 1350 at Cineca, with 512 nodes, each having 2 processors and 2 GB RAM per processor. Processor type: Intel Xeon Pentium IV 3.0 GHz with 512 KB cache (128 nodes have Nocona processors). Internal network: Myricom LAN Card, "C" and "D" versions. Operating system: Linux SuSE SLES 8. The code was compiled with the mpif90 compiler version 8.1 using basic optimization options, so that the measured performance can be usefully compared with other generic clusters.
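    FLY's MPI-2 windows expose shared particle arrays so that any process can read remote data without the owner's participation. Python's standard library offers a loose single-node analogue of this idea in multiprocessing.shared_memory; the sketch below (illustrative only, not FLY's Fortran/MPI implementation, and all names are made up) publishes an array of positions in a named shared block and "gets" it back through a second, independent handle.

```python
from multiprocessing import shared_memory
import struct

N = 4                      # number of double-precision slots in the "window"
owner = shared_memory.SharedMemory(create=True, size=N * 8)

# The owner exposes particle positions in the shared block once...
positions = [0.0, 1.5, 3.0, 4.5]
struct.pack_into(f"{N}d", owner.buf, 0, *positions)

# ...and any other handle can attach by name and read the data without
# the owner taking part in the transfer, much like a one-sided MPI_Get.
peer = shared_memory.SharedMemory(name=owner.name)
got = list(struct.unpack_from(f"{N}d", peer.buf, 0))

peer.close()
owner.close()
owner.unlink()
```

In real MPI-2 the window spans many distributed nodes and the transfer crosses the network; the shared-memory analogue only conveys the programming model.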

  14. Observations of Adolescent Peer Group Interactions as a Function of Within- and Between-Group Centrality Status

    ERIC Educational Resources Information Center

    Ellis, Wendy E.; Dumas, Tara M.; Mahdy, Jasmine C.; Wolfe, David A.

    2012-01-01

    Observations of adolescent (n = 258; M age = 15.45) peer group triads (n = 86) were analyzed to identify conversation and interaction styles as a function of within-group and between-group centrality status. Group members' discussions about hypothetical dilemmas were coded for agreements, disagreements, commands, and opinions. Interactions during…

  15. Kinetic Modeling of Radiative Turbulence in Relativistic Astrophysical Plasmas: Particle Acceleration and High-Energy Flares

    NASA Astrophysics Data System (ADS)

    Uzdensky, Dmitri

    Relativistic astrophysical plasma environments routinely produce intense high-energy emission, which is often observed to be nonthermal and rapidly flaring. The recently discovered gamma-ray (> 100 MeV) flares in Crab Pulsar Wind Nebula (PWN) provide a quintessential illustration of this, but other notable examples include relativistic active galactic nuclei (AGN) jets, including blazars, and Gamma-ray Bursts (GRBs). Understanding the processes responsible for the very efficient and rapid relativistic particle acceleration and subsequent emission that occurs in these sources poses a strong challenge to modern high-energy astrophysics, especially in light of the necessity to overcome radiation reaction during the acceleration process. Magnetic reconnection and collisionless shocks have been invoked as possible mechanisms. However, the inferred extreme particle acceleration requires the presence of coherent electric-field structures. How such large-scale accelerating structures (such as reconnecting current sheets) can spontaneously arise in turbulent astrophysical environments still remains a mystery. The proposed project will conduct a first-principles computational and theoretical study of kinetic turbulence in relativistic collisionless plasmas with a special focus on nonthermal particle acceleration and radiation emission. The main computational tool employed in this study will be the relativistic radiative particle-in-cell (PIC) code Zeltron, developed by the team members at the Univ. of Colorado. This code has a unique capability to self-consistently include the synchrotron and inverse-Compton radiation reaction force on the relativistic particles, while simultaneously computing the resulting observable radiative signatures. This proposal envisions performing massively parallel, large-scale three-dimensional simulations of driven and decaying kinetic turbulence in physical regimes relevant to real astrophysical systems (such as the Crab PWN), including the

  16. Emittance Growth in the DARHT-II Linear Induction Accelerator

    NASA Astrophysics Data System (ADS)

    Ekdahl, Carl; Carlson, Carl A.; Frayer, Daniel K.; McCuistian, B. Trent; Mostrom, Christopher B.; Schulze, Martin E.; Thoma, Carsten H.

    2017-11-01

    The Dual-Axis Radiographic Hydrotest (DARHT) facility uses bremsstrahlung radiation source spots produced by the focused electron beams from two linear induction accelerators (LIAs) to radiograph large hydrodynamic experiments driven by high explosives. Radiographic resolution is determined by the size of the source spot, and beam emittance is the ultimate limitation to spot size. Some of the possible causes for the emittance growth in the DARHT LIA have been investigated using particle-in-cell (PIC) codes, and are discussed in this article. The results suggest that the most likely source of emittance growth is a mismatch of the beam to the magnetic transport, which can cause beam halo.

  17. Investigation of advanced propulsion technologies: The RAM accelerator and the flowing gas radiation heater

    NASA Technical Reports Server (NTRS)

    Bruckner, A. P.; Knowlen, C.; Mattick, A. T.; Hertzberg, A.

    1992-01-01

    The two principal areas of advanced propulsion investigated are the ram accelerator and the flowing gas radiation heater. The concept of the ram accelerator is presented as a hypervelocity launcher for large-scale aeroballistic range applications in hypersonics and aerothermodynamics research. The ram accelerator is an in-bore ramjet device in which a projectile shaped like the centerbody of a supersonic ramjet is propelled in a stationary tube filled with a tailored combustible gas mixture. Combustion on and behind the projectile generates thrust which accelerates it to very high velocities. The acceleration can be tailored for the 'soft launch' of instrumented models. The distinctive reacting flow phenomena that have been observed in the ram accelerator are relevant to the aerothermodynamic processes in airbreathing hypersonic propulsion systems and are useful for validating sophisticated CFD codes. The recently demonstrated scalability of the device and the ability to control the rate of acceleration offer unique opportunities for the use of the ram accelerator as a large-scale hypersonic ground test facility. The flowing gas radiation receiver is a novel concept for using solar energy to heat a working fluid for space power or propulsion. Focused solar radiation is absorbed directly in a working gas, rather than by heat transfer through a solid surface. Previous theoretical analysis had demonstrated that radiation trapping reduces energy loss compared to that of blackbody receivers, and enables higher efficiencies and higher peak temperatures. An experiment was carried out to measure the temperature profile of an infrared-active gas and demonstrate the effect of radiation trapping. The success of this effort validates analytical models of heat transfer in this receiver, and confirms the potential of this approach for achieving high efficiency space power and propulsion.

  18. Transport, Acceleration and Spatial Access of Solar Energetic Particles

    NASA Astrophysics Data System (ADS)

    Borovikov, D.; Sokolov, I.; Effenberger, F.; Jin, M.; Gombosi, T. I.

    2017-12-01

    Solar Energetic Particles (SEPs) are a major branch of space weather. Often driven by Coronal Mass Ejections (CMEs), SEPs have a very high destructive potential, which includes but is not limited to disrupting communication systems on Earth, inflicting harmful and potentially fatal radiation doses on crew members onboard spacecraft and, in extreme cases, on people aboard high altitude flights. However, the research community currently lacks efficient tools to predict such hazardous SEP events. Such a tool would serve as the first step towards improving humanity's preparedness for SEP events and ultimately its ability to mitigate their effects. The main goal of the presented research is to develop a computational tool that provides the said capabilities and meets the community's demand. Our model has forecasting capability and can be the basis for an operational system that will provide live information on the current potential threats posed by SEPs based on observations of the Sun. The tool comprises several numerical models, which are designed to simulate different physical aspects of SEPs. The background conditions in the interplanetary medium, in particular the Coronal Mass Ejection driving the particle acceleration, play a defining role and are simulated with the state-of-the-art MHD solver, the Block-Adaptive-Tree Solar-wind Roe-type Upwind Scheme (BATS-R-US). The newly developed particle code, the Multiple-Field-Line-Advection Model for Particle Acceleration (M-FLAMPA), simulates the actual transport and acceleration of SEPs and is coupled to the MHD code. The special property of SEPs, their tendency to follow magnetic lines of force, is fully exploited in the computational model, which replaces a complicated 3-D model with a multitude of 1-D models. This approach significantly simplifies computations and improves the time performance of the overall model. It also plays an important role in mapping the affected region by connecting it with the origin of
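    The "multitude of 1-D models" idea can be sketched in a few lines: each magnetic field line carries an independent 1-D problem, so a bundle of lines is embarrassingly parallel. The toy below is illustrative only (M-FLAMPA solves a focused-transport equation, not plain advection, and all names are hypothetical); it advances a density pulse along each field line with a first-order upwind scheme.

```python
def advect_line(density, speed, dx, dt, steps):
    """First-order upwind advection of a 1-D profile along one field line."""
    d = list(density)
    c = speed * dt / dx          # Courant number; must be <= 1 for stability
    for _ in range(steps):
        # Each new value depends only on its upwind neighbour.
        d = [d[0]] + [d[i] - c * (d[i] - d[i - 1]) for i in range(1, len(d))]
    return d

# A bundle of field lines is simply a list of independent 1-D problems;
# each line could be advanced on its own processor with no communication.
lines = [[1.0 if i == 2 else 0.0 for i in range(10)] for _ in range(3)]
evolved = [advect_line(line, speed=1.0, dx=1.0, dt=1.0, steps=2) for line in lines]
```

With the Courant number equal to 1 the scheme shifts the pulse exactly one cell per step, so after two steps the pulse sits two cells downstream on every line.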

  19. StarSmasher: Smoothed Particle Hydrodynamics code for smashing stars and planets

    NASA Astrophysics Data System (ADS)

    Gaburov, Evghenii; Lombardi, James C., Jr.; Portegies Zwart, Simon; Rasio, F. A.

    2018-05-01

    Smoothed Particle Hydrodynamics (SPH) is a Lagrangian particle method that approximates a continuous fluid as discrete nodes, each carrying various parameters such as mass, position, velocity, pressure, and temperature. In an SPH simulation the resolution scales with the particle density; StarSmasher is able to handle both equal-mass and equal number-density particle models. StarSmasher solves for hydrodynamic forces by calculating the pressure for each particle as a function of the particle's properties: density, internal energy, and internal properties (e.g. temperature and mean molecular weight). The code implements variational equations of motion and libraries to calculate the gravitational forces between particles using direct summation on NVIDIA graphics cards. Using direct summation instead of a tree-based algorithm for gravity increases the accuracy of the gravity calculations at the cost of speed. The code uses a cubic spline for the smoothing kernel and an artificial viscosity prescription coupled with a Balsara switch to prevent unphysical interparticle penetration. The code also adds an artificial relaxation force to the equations of motion, introducing a drag term in the calculated accelerations during relaxation integrations. Initially called StarCrash, StarSmasher was developed originally by Rasio.
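    The cubic spline smoothing kernel mentioned above is standard in SPH. A minimal Python version of the common 3-D form, normalized so its volume integral is 1 and with compact support of two smoothing lengths, might look like the following (a textbook sketch, not StarSmasher's actual implementation):

```python
import math

def w_cubic_spline(r, h):
    """Standard 3-D cubic spline (M4) SPH smoothing kernel with support 2h.
    Normalized so that the volume integral of W over all space equals 1."""
    q = r / h
    sigma = 1.0 / (math.pi * h ** 3)       # 3-D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    elif q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0                             # compact support: zero beyond 2h
```

The kernel is continuous at q = 1, monotonically decreasing, and vanishes beyond two smoothing lengths, which keeps each particle's neighbour list finite.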

  20. Accelerated corneal crosslinking concurrent with laser in situ keratomileusis.

    PubMed

    Celik, H Ugur; Alagöz, Nese; Yildirim, Yusuf; Agca, Alper; Marshall, John; Demirok, Ahmet; Yilmaz, Omer Faruk

    2012-08-01

    To assess accelerated corneal collagen crosslinking (CXL) applied concurrently with laser in situ keratomileusis (LASIK) in a small group of patients. Beyoglu Eye Research and Training Hospital, Istanbul, Turkey. Prospective pilot interventional case series. In May 2010, patients had LASIK with concurrent accelerated CXL in 1 eye and LASIK only in the fellow eye to treat myopia or myopic astigmatism. The follow-up was 12 months. The attempted correction (spherical equivalent) ranged from -5.00 to -8.50 diopters (D) in the LASIK-CXL group and from -3.00 to -7.25 D in the LASIK-only group. Main outcome measures were manifest refraction, uncorrected (UDVA) and corrected (CDVA) distance visual acuities, and the endothelial cell count. Eight eyes of 3 women and 1 man (age 22 to 39 years old) were enrolled. At the 12-month follow-up, the LASIK-CXL group had a UDVA and manifest refraction equal to or better than those in the LASIK-only group. No eye lost 1 or more lines of CDVA at the final visit. The endothelial cell loss in the LASIK-CXL eye was not greater than in the fellow eye. No side effects were associated with either procedure. Laser in situ keratomileusis with accelerated CXL appears to be a promising modality for future applications to prevent corneal ectasia after LASIK treatment. The results in this pilot series suggest that evaluation of a larger study cohort is warranted. Drs. Yilmaz and Marshall are paid consultants to Avedro, Inc. No other author has a financial or proprietary interest in any material or method mentioned. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  1. GALARIO: a GPU accelerated library for analysing radio interferometer observations

    NASA Astrophysics Data System (ADS)

    Tazzari, Marco; Beaujean, Frederik; Testi, Leonardo

    2018-06-01

    We present GALARIO, a computational library that exploits the power of modern graphical processing units (GPUs) to accelerate the analysis of observations from radio interferometers like the Atacama Large Millimeter/submillimeter Array or the Karl G. Jansky Very Large Array. GALARIO speeds up the computation of synthetic visibilities from a generic 2D model image or a radial brightness profile (for axisymmetric sources). On a GPU, GALARIO is 150 times faster than standard PYTHON and 10 times faster than serial C++ code on a CPU. Highly modular, easy to use, and easy to adopt in existing code, GALARIO comes as two compiled libraries, one for Nvidia GPUs and one for multicore CPUs, where both have the same functions with identical interfaces. GALARIO comes with PYTHON bindings but can also be used directly in C or C++. The versatility and the speed of GALARIO open new analysis pathways that would otherwise be prohibitively time consuming, e.g. fitting high-resolution observations of large numbers of objects, or entire spectral cubes of molecular gas emission. It is a general tool that can be applied to any field that uses radio interferometer observations. The source code is available online at http://github.com/mtazzari/galario under the open source GNU Lesser General Public License v3.
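    The core operation GALARIO accelerates, computing synthetic visibilities from a model image, is a Fourier transform of the sky brightness sampled at the interferometer's (u, v) points. As an illustrative sketch only (GALARIO's real API uses FFTs and GPU kernels; the function below and its arguments are hypothetical), a direct Fourier transform of a tiny image can be written as:

```python
import cmath

def visibility(image, dx, u, v):
    """Direct Fourier transform of a small 2-D sky image at one (u, v)
    point (u, v in wavelengths). image[j][i] is the flux of the pixel at
    x = i*dx, y = j*dx, with dx the pixel size in radians."""
    vis = 0.0 + 0.0j
    for j, row in enumerate(image):
        for i, flux in enumerate(row):
            phase = -2.0 * cmath.pi * (u * i * dx + v * j * dx)
            vis += flux * cmath.exp(1j * phase)
    return vis

# At the (u, v) = (0, 0) point the visibility equals the total flux.
img = [[0.0, 1.0], [2.0, 3.0]]
v00 = visibility(img, dx=1e-6, u=0.0, v=0.0)
```

The direct sum costs O(N_pix × N_vis), which is exactly why production codes replace it with an FFT plus interpolation and why GPU acceleration pays off.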

  2. Accelerated Learning: Undergraduate Research Experiences at the Texas A&M Cyclotron Institute

    NASA Astrophysics Data System (ADS)

    Yennello, S. J.

    The Texas A&M Cyclotron Institute (TAMU CI) has had an NSF-funded Research Experiences for Undergraduates (REU) program since 2004. Each summer about a dozen students from across the country join us for the 10-week program. They are each embedded in one of the research groups of the TAMU CI and given their own research project. While the main focus of their effort is their individual research project, we also have other activities to broaden their experience. For instance, one of those activities has been involvement in a dedicated group experiment. Because not every experimental group runs during those 10 weeks, and because some of the students are in theory research groups, a group research experience allows everyone to actually be involved in an experiment using the accelerator. In stark contrast to the REU students' very focused experience during the summer, Texas A&M undergraduates can be involved in research projects at the Cyclotron throughout the year, often for multiple years. This extended exposure enables Texas A&M students to have a learning experience that cannot be duplicated without a local accelerator. The motivation for the REU program was to share this accelerator experience with students who do not have that opportunity at their home institution.

  3. On the linear programming bound for linear Lee codes.

    PubMed

    Astola, Helena; Tabus, Ioan

    2016-01-01

    Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced into the linear programming problem for linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to a fast execution, which allows the bounds to be computed efficiently for large parameter values of the linear codes.
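    For readers unfamiliar with the metric these bounds concern: the Lee weight of a symbol in Z_q is its shorter distance around the modular ring, and the Lee distance between two vectors is the sum of the symbol-wise weights of their difference. A minimal sketch (generic definitions, not the paper's LP machinery):

```python
def lee_weight(x, q):
    """Lee weight of a symbol x in Z_q: the shorter way around the ring."""
    x %= q
    return min(x, q - x)

def lee_distance(a, b, q):
    """Lee distance between two equal-length vectors over Z_q."""
    return sum(lee_weight(ai - bi, q) for ai, bi in zip(a, b))
```

For example, over Z_7 the symbol 5 has Lee weight 2 (going 5 -> 6 -> 0 is shorter than 5 -> 4 -> 3 -> 2 -> 1 -> 0).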

  4. Flow-accelerated corrosion 2016 international conference

    NASA Astrophysics Data System (ADS)

    Tomarov, G. V.; Shipkov, A. A.

    2017-05-01

    The paper discusses materials and results of the most representative world forum on the problems of flow-accelerated metal corrosion in power engineering: Flow-Accelerated Corrosion (FAC) 2016, the international conference held in Lille (France) from May 23 through May 27, 2016, sponsored by EdF-DTG with the support of the International Atomic Energy Agency (IAEA) and the World Association of Nuclear Operators (WANO). Information on the major themes of the reports and on the materials of the exhibition arranged within the framework of the congress is presented. The world statistics on the operation time and intensity of FAC wall thinning of NPP pipelines and equipment are set out. The paper describes typical examples of flow-accelerated corrosion damage of condensate-feed and wet-steam pipeline components of nuclear and thermal power plants that caused forced shutdowns or accidents. The importance of research projects on the problem of flow-accelerated metal corrosion of nuclear power units, coordinated by the IAEA with the participation of leading experts in this field from around the world, is considered. The reports presented at the conference considered the realization of the FAC mechanism in single- and two-phase flows, and the impact of hydrodynamic and water-chemistry factors, the chemical composition of the metal, and other parameters on the intensity and location of localized FAC wall-thinning areas in pipeline components and power equipment. Features and patterns of local and general FAC leading to local metal thinning and contamination of the working environment with ferriferous compounds are considered. The main trends of modern practice in preventing FAC wear of NPP pipelines and equipment are defined, along with the increasing role of computer codes for the assessment and prediction of FAC rate, as well as of software systems supporting NPP personnel in inspection planning and prevention of FAC wall thinning of equipment operating in single- and two

  5. Semiconductor acceleration sensor

    NASA Astrophysics Data System (ADS)

    Ueyanagi, Katsumichi; Kobayashi, Mitsuo; Goto, Tomoaki

    1996-09-01

    This paper reports a practical semiconductor acceleration sensor especially suited for automotive air bag systems. The acceleration sensor includes four beams arranged in a swastika structure. Two piezoresistors are formed on each beam, and these eight piezoresistors constitute a Wheatstone bridge. The swastika structure of the sensing elements, together with an upper glass plate and a lower glass plate, exhibits the squeeze-film effect, which enhances air damping and thereby prevents the constituent silicon from breaking. The present acceleration sensor has the following features. The acceleration force component perpendicular to the sensing direction can be cancelled. The cross-axis sensitivity is less than 3 percent. And the erroneous offset caused by the differences between the thermal expansion coefficients of the constituent materials can be cancelled. The high-aspect-ratio configuration realized by plasma etching facilitates reducing the dimensions and improving the sensitivity of the acceleration sensor. The present acceleration sensor is 3.9 mm by 3.9 mm in area and 1.2 mm in thickness. It can measure from -50 to +50 G with a sensitivity of 0.275 mV/G and a non-linearity of less than 1 percent, and it withstands shocks of 3000 G.
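    The output of a Wheatstone bridge like the one formed by the eight piezoresistors is the difference between its two voltage dividers; acceleration strains the beams, unbalances the resistances, and produces a signal proportional to that imbalance. The sketch below uses a simplified two-divider model with hypothetical resistance and excitation values, not the sensor's actual parameters:

```python
def bridge_output(v_in, r1, r2, r3, r4):
    """Output voltage of a Wheatstone bridge: the difference between its
    two voltage dividers (r1/r2 form one arm, r3/r4 the other)."""
    return v_in * (r2 / (r1 + r2) - r4 / (r3 + r4))

# A balanced bridge produces zero output; a piezoresistive imbalance
# under acceleration (all four arms changing by +/- dr) produces a signal.
balanced = bridge_output(5.0, 1000.0, 1000.0, 1000.0, 1000.0)
dr = 1.0   # hypothetical resistance change under acceleration, in ohms
signal = bridge_output(5.0, 1000.0 - dr, 1000.0 + dr, 1000.0 + dr, 1000.0 - dr)
```

Driving all four arms in opposite directions (a full bridge) doubles the signal relative to a half bridge and, as the abstract notes, lets common-mode effects such as thermal-expansion offsets cancel.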

  6. An audit of the nature and impact of clinical coding subjectivity variability and error in otolaryngology.

    PubMed

    Nouraei, S A R; Hudovsky, A; Virk, J S; Chatrath, P; Sandhu, G S

    2013-12-01

    To audit the accuracy of clinical coding in otolaryngology, assess the effectiveness of previously implemented interventions, and determine ways in which it can be further improved. Prospective clinician-auditor multidisciplinary audit of clinical coding accuracy. Elective and emergency ENT admissions and day-case activity. Concordance between initial coding and the clinician-auditor multidisciplinary team (MDT) coding in respect of primary and secondary diagnoses and procedures, health resource groupings (HRGs) and tariffs. The audit of 3131 randomly selected otolaryngology patients between 2010 and 2012 resulted in 420 instances of change to the primary diagnosis (13%) and 417 changes to the primary procedure (13%). In 1420 cases (44%), there was at least one change to the initial coding, and 514 HRGs (16%) changed. There was an income variance of £343,169, or £109.46 per patient. The highest rates of HRG change were observed in head and neck surgery (in particular skull-base surgery), laryngology (within that, tracheostomy), and emergency admissions (especially epistaxis management). A randomly selected sample of 235 patients from the audit was subjected to a second audit by a second clinician-auditor MDT. There were 12 further HRG changes (5%), and at least one further coding change occurred in 57 patients (24%). These changes were significantly lower than those observed in the pre-audit sample, but were also significantly greater than zero. Asking surgeons to 'code in theatre' and applying these codes to activity without further quality assurance resulted in an HRG error rate of 45%. The full audit sample was regrouped under HRG 3.5 and was compared with a previous audit of 1250 patients performed between 2007 and 2008. This comparison showed a reduction in the baseline rate of health resource

  7. Coding and Billing in Surgical Education: A Systems-Based Practice Education Program.

    PubMed

    Ghaderi, Kimeya F; Schmidt, Scott T; Drolet, Brian C

    Despite increased emphasis on systems-based practice through the Accreditation Council for Graduate Medical Education core competencies, few studies have examined what surgical residents know about coding and billing. We sought to create and measure the effectiveness of a multifaceted approach to improving resident knowledge and performance of documenting and coding outpatient encounters. We identified knowledge gaps and barriers to documentation and coding in the outpatient setting. We implemented a series of educational and workflow interventions with a group of 12 residents in a surgical clinic at a tertiary care center. To measure the effect of this program, we compared billing codes for 1 year before intervention (FY2012) to prospectively collected data from the postintervention period (FY2013). All related documentation and coding were verified by study-blinded auditors. Interventions took place at the outpatient surgical clinic at Rhode Island Hospital, a tertiary-care center. A cohort of 12 plastic surgery residents ranging from postgraduate year 2 through postgraduate year 6 participated in the interventional sequence. A total of 1285 patient encounters in the preintervention group were compared with 1170 encounters in the postintervention group. Using evaluation and management codes (E&M) as a measure of documentation and coding, we demonstrated a significant and durable increase in billing with supporting clinical documentation after the intervention. For established patient visits, the monthly average E&M code level increased from 2.14 to 3.05 (p < 0.01); for new patients the monthly average E&M level increased from 2.61 to 3.19 (p < 0.01). This study describes a series of educational and workflow interventions, which improved resident coding and billing of outpatient clinic encounters. Using externally audited coding data, we demonstrate significantly increased rates of higher complexity E&M coding in a stable patient population based on improved

  8. Accelerator science and technology in Europe: EuCARD 2012

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2012-05-01

    Accelerator science and technology is one of the key enablers of developments in particle physics, photon physics, and applications in medicine and industry. The paper presents a digest of the research results in the domain of accelerator science and technology in Europe, shown during the third annual meeting of EuCARD - European Coordination for Accelerator Research and Development. The conference concerns the building of research infrastructure, including advanced photonic and electronic systems for servicing large high-energy physics experiments. A few basic groups of such systems are debated: measurement-control networks of large geometrical extent, multichannel systems for the acquisition of large amounts of metrological data, and precision photonic networks for the distribution of reference time, frequency, and phase.

  9. Using the FLUKA Monte Carlo Code to Simulate the Interactions of Ionizing Radiation with Matter to Assist and Aid Our Understanding of Ground Based Accelerator Testing, Space Hardware Design, and Secondary Space Radiation Environments

    NASA Technical Reports Server (NTRS)

    Reddell, Brandon

    2015-01-01

    Designing hardware to operate in the space radiation environment is a very difficult and costly activity. Ground-based particle accelerators can be used to test for exposure to the radiation environment, one species at a time; however, the actual space environment cannot be duplicated because of the range of energies and the isotropic nature of space radiation. The FLUKA Monte Carlo code is an integrated physics package based at CERN that has been under development for the last 40+ years and includes the most up-to-date fundamental physics theory and particle physics data. This work presents an overview of FLUKA and how it has been used in conjunction with ground-based radiation testing for NASA to improve our understanding of secondary particle environments resulting from the interaction of space radiation with matter.

  10. Computer access security code system

    NASA Technical Reports Server (NTRS)

    Collins, Earl R., Jr. (Inventor)

    1990-01-01

    A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alphanumeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of the subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by the first group of codes. Once used, subsets are not used again, to absolutely defeat unauthorized access by eavesdropping and the like.
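    The two-dimensional rectangle-completion challenge described above can be sketched as follows. This is a hypothetical toy implementation, not the patented system; the matrix size, subset length, and function names are arbitrary choices for illustration:

```python
import random
import string

# Toy 4x4 matrix of random 3-character subsets (sizes are arbitrary).
SIZE = 4
matrix = [[''.join(random.sample(string.ascii_uppercase + string.digits, 3))
           for _ in range(SIZE)] for _ in range(SIZE)]

def make_challenge(used):
    """Pick two previously unused subsets sharing neither a row nor a column."""
    while True:
        (r1, c1), (r2, c2) = random.sample(
            [(r, c) for r in range(SIZE) for c in range(SIZE)], 2)
        if r1 != r2 and c1 != c2 and (r1, c1) not in used and (r2, c2) not in used:
            used.update({(r1, c1), (r2, c2)})  # never reuse a challenge subset
            return (r1, c1), (r2, c2)

def correct_response(challenge):
    """The valid reply transmits the subsets at the rectangle's other two corners."""
    (r1, c1), (r2, c2) = challenge
    return {matrix[r1][c2], matrix[r2][c1]}

used = set()
ch = make_challenge(used)
assert correct_response(ch) <= {s for row in matrix for s in row}
```

    Tracking `used` models the abstract's requirement that subsets are never reused, so an eavesdropper who records one exchange gains nothing for the next.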

  11. Accuracy of ringless casting and accelerated wax-elimination technique: a comparative in vitro study.

    PubMed

    Prasad, Rahul; Al-Keraif, Abdulaziz Abdullah; Kathuria, Nidhi; Gandhi, P V; Bhide, S V

    2014-02-01

    The purpose of this study was to determine whether the ringless casting and accelerated wax-elimination techniques can be combined to offer a cost-effective, clinically acceptable, and time-saving alternative for fabricating single-unit castings in fixed prosthodontics. Sixty standardized wax copings were fabricated on a type IV stone replica of a stainless steel die. The wax patterns were divided into four groups. The first group was cast using the ringless investment technique and conventional wax-elimination method; the second group was cast using the ringless investment technique and accelerated wax-elimination method; the third group was cast using the conventional metal ring investment technique and conventional wax-elimination method; the fourth group was cast using the metal ring investment technique and accelerated wax-elimination method. The vertical marginal gap was measured at four sites per specimen, using a digital optical microscope at 100× magnification. The results were analyzed using two-way ANOVA to determine statistical significance. The vertical marginal gaps of castings fabricated using the ringless technique (76.98 ± 7.59 μm) were significantly smaller (p < 0.05) than those of castings fabricated using the conventional metal ring technique (138.44 ± 28.59 μm); however, the difference between the conventional (102.63 ± 36.12 μm) and accelerated wax-elimination (112.79 ± 38.34 μm) castings was not statistically significant (p > 0.05). The ringless investment technique can produce castings with higher accuracy and can be favorably combined with the accelerated wax-elimination method as a viable alternative to the time-consuming conventional technique of casting restorations in fixed prosthodontics. © 2013 by the American College of Prosthodontists.

  12. Energy analysis of the accelerated electron beam after the first accelerating station of the accelerator test stand at the Joint Institute for Nuclear Research

    NASA Astrophysics Data System (ADS)

    Sledneva, A. S.; Kobets, V. V.

    2017-06-01

    A linear electron accelerator based on the LINAC-800 accelerator imported from the Netherlands is being created at the Joint Institute for Nuclear Research in the framework of the project to build a test bed with an electron beam of energy up to 250 MeV. Currently, two accelerating stations with a beam energy of 60 MeV have been put into operation, and work is under way to put the beam through the accelerating section of the third station. The 23 MeV electron beam is used for testing crystals (BaF2, CsI (native), and LYSO) in order to explore the opportunity to use them in particle detectors in the Muon g-2, Mu2e, and COMET experiments, whose preparation requires a detailed study of detector properties such as their response to irradiation by accelerator beams.

  13. A unified model of the standard genetic code.

    PubMed

    José, Marco V; Zamudio, Gabriel S; Morgado, Eberto R

    2017-03-01

    The Rodin-Ohno (RO) and the Delarue models divide the table of the genetic code into two classes of aminoacyl-tRNA synthetases (aaRSs I and II) with recognition from the minor or major groove sides of the tRNA acceptor stem, respectively. These models are asymmetric but biologically meaningful. On the other hand, the standard genetic code (SGC) can be derived from the primeval RNY code (R stands for purines, Y for pyrimidines, and N for any of them). In this work, the RO model is derived by means of group actions, namely, symmetries represented by automorphisms, assuming that the SGC originated from a primeval RNY code. It turns out that the RO model is symmetric in a six-dimensional (6D) hypercube. Conversely, using the same automorphisms, we show that the RO model can lead to the SGC. In addition, the asymmetric Delarue model becomes symmetric by means of quotient group operations. We formulate isometric functions that convert the class aaRS I into the class aaRS II and vice versa. We show that the four polar requirement categories display a symmetrical arrangement in our 6D hypercube. Altogether, these results can be attained neither in two nor in three dimensions. We discuss the present unified 6D algebraic model, which is compatible with both the SGC (based upon the primeval RNY code) and the RO model.
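    The primeval RNY code mentioned above is straightforward to enumerate. The following sketch (illustrative only, not taken from the paper) lists its codons from the definition given in the abstract: first position a purine (R), second position any base (N), third position a pyrimidine (Y):

```python
from itertools import product

# R = purine {A, G}, N = any base, Y = pyrimidine {C, U}.
purines, bases, pyrimidines = "AG", "ACGU", "CU"
rny_codons = [''.join(c) for c in product(purines, bases, pyrimidines)]
print(len(rny_codons))  # 16 of the 64 standard codons are RNY
```

    The 2 × 4 × 2 = 16 RNY codons form the subset from which, per the model, the full 64-codon SGC is reached by symmetry operations.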

  14. PHAZR: A phenomenological code for holeboring in air

    NASA Astrophysics Data System (ADS)

    Picone, J. M.; Boris, J. P.; Lampe, M.; Kailasanath, K.

    1985-09-01

    This report describes a new code for studying holeboring by a charged particle beam, laser, or electric discharge in a gas. The coordinates which parameterize the channel are radial displacement (r) from the channel axis and distance (z) along the channel axis from the energy source. The code is primarily phenomenological; that is, we use closed-form solutions of simple models in order to represent many of the effects which are important in holeboring. The numerical simplicity which we gain from the use of these solutions enables us to estimate the structure of the channel over long propagation distances while using a minimum of computer time. This feature makes PHAZR a useful code for those studying and designing future systems. Of particular interest is the design and implementation of the subgrid turbulence model required to compute the enhanced channel cooling caused by asymmetry-driven turbulence. The approximate equations of Boris and Picone form the basis of the model, which includes the effects of turbulent diffusion and fluid transport on the turbulent field itself as well as on the channel parameters. The primary emphasis here is on charged particle beams, and as an example, we present typical results for an ETA-like beam propagating in air. These calculations demonstrate how PHAZR may be used to investigate accelerator parameter space and to isolate the important physical parameters which determine the holeboring properties of a given system. The comparison with two-dimensional calculations provides a calibration of the subgrid turbulence model.

  15. FERMILAB ACCELERATOR R&D PROGRAM TOWARDS INTENSITY FRONTIER ACCELERATORS: STATUS AND PROGRESS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shiltsev, Vladimir

    2016-11-15

    The 2014 P5 report identified accelerator-based neutrino and rare-decay physics research as a centrepiece of the US domestic HEP program at Fermilab. Operation, upgrade, and development of the accelerators for the near-term and longer-term particle physics program at the Intensity Frontier face formidable challenges. Here we discuss key elements of the accelerator physics and technology R&D program toward future multi-MW proton accelerators and present its status and progress.

  16. OPserver: opacities and radiative accelerations on demand

    NASA Astrophysics Data System (ADS)

    Mendoza, C.; González, J.; Seaton, M. J.; Buerger, P.; Bellorín, A.; Meléndez, M.; Rodríguez, L. S.; Delahaye, F.; Zeippen, C. J.; Palacios, E.; Pradhan, A. K.

    2009-05-01

    We report on developments carried out within the Opacity Project (OP) to upgrade atomic database services to comply with e-infrastructure requirements. We give a detailed description of an interactive, online server for astrophysical opacities, referred to as OPserver, to be used in sophisticated stellar modelling where Rosseland mean opacities and radiative accelerations are computed at every depth point and each evolution cycle. This is crucial, for instance, in chemically peculiar stars and in the exploitation of the new asteroseismological data. OPserver, downloadable with the new OPCD_3.0 release from the Centre de Données Astronomiques de Strasbourg, France, computes mean opacities and radiative data for arbitrary chemical mixtures from the OP monochromatic opacities. It is essentially a client-server network restructuring and optimization of the suite of codes included in the earlier OPCD_2.0 release. The server can be installed locally or, alternatively, accessed remotely from the Ohio Supercomputer Center, Columbus, Ohio, USA. The client is an interactive web page or a subroutine library that can be linked to the user code. The suitability of this scheme in grid computing environments is emphasized, and its extension to other atomic database services for astrophysical purposes is discussed.

  17. Applications of Derandomization Theory in Coding

    NASA Astrophysics Data System (ADS)

    Cheraghchi, Mahdi

    2011-07-01

    Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and the construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model, where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for the construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.]
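    The group testing setting described above can be illustrated with a minimal nonadaptive scheme decoded by the simple COMP rule: an item is declared defective unless it appears in some pool that tested negative. This is a generic illustrative sketch, not a construction from the thesis; pool sizes and counts are arbitrary:

```python
import random

def comp_decode(pools, outcomes, n):
    """COMP decoder: clear every item seen in a negative pool."""
    candidates = set(range(n))
    for pool, positive in zip(pools, outcomes):
        if not positive:
            candidates -= set(pool)  # items in a negative pool are healthy
    return candidates

n, defectives = 20, {3, 11}
pools = [random.sample(range(n), 5) for _ in range(30)]       # random pooling design
outcomes = [any(i in defectives for i in pool) for pool in pools]
found = comp_decode(pools, outcomes, n)
assert defectives <= found  # COMP never misses a true defective
```

    COMP can over-report (items that happen to appear only in positive pools survive), which is why careful pooling designs, such as the condenser-based constructions discussed above, matter.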

  18. Baby milk companies accused of breaching marketing code.

    PubMed

    Wise, J

    1997-01-18

    A consortium of 27 religious and health organizations has released a report entitled "Cracking the Code," which criticizes the bottle-feeding marketing techniques used by Nestle, Gerber, Mead Johnson, Wyeth, and Nutricia. Research for the report was carried out in Thailand, Bangladesh, South Africa, and Poland using a random sample of 800 mothers and 120 health workers in each country. In all 4 sites, women had received information that violated the World Health Organization's 1981 international code of marketing breast milk substitutes. Violations included promoting artificial feeding without recognizing breast feeding as the best source of infant nutrition. The investigation also found that women and health workers in all 4 sites received free samples of artificial milk. The report includes detailed examples of manufacturer representatives making unrequested visits to give product information to mothers, providing incentives to health workers to promote products, and promoting products outside of health care facilities. While the International Association of Infant Food Manufacturers condemned the study as biased, the Nestle company promised to review the allegations contained in the report and to deal with any breaches in the code. The Interagency Group on Breastfeeding Monitoring, which prepared the report, was created in 1994 to provide data to groups supporting a boycott of Nestle for code violations.

  19. Development of a GPU-Accelerated 3-D Full-Wave Code for Electromagnetic Wave Propagation in a Cold Plasma

    NASA Astrophysics Data System (ADS)

    Woodbury, D.; Kubota, S.; Johnson, I.

    2014-10-01

    Computer simulations of electromagnetic wave propagation in magnetized plasmas are an important tool for both plasma heating and diagnostics. For active millimeter-wave and microwave diagnostics, accurately modeling the evolution of the beam parameters for launched, reflected or scattered waves in a toroidal plasma requires that calculations be done using the full 3-D geometry. Previously, we reported on the application of GPGPU (General-Purpose computing on Graphics Processing Units) to a 3-D vacuum Maxwell code using the FDTD (Finite-Difference Time-Domain) method. Tests were done for Gaussian beam propagation with a hard source antenna, utilizing the parallel processing capabilities of the NVIDIA K20M. In the current study, we have modified the 3-D code to include a soft source antenna and an induced current density based on the cold plasma approximation. Results from Gaussian beam propagation in an inhomogeneous anisotropic plasma, along with comparisons to ray- and beam-tracing calculations will be presented. Additional enhancements, such as advanced coding techniques for improved speedup, will also be investigated. Supported by U.S. DoE Grant DE-FG02-99-ER54527 and in part by the U.S. DoE, Office of Science, WDTS under the Science Undergraduate Laboratory Internship program.
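    For orientation, the FDTD leapfrog update underlying such codes can be sketched in one dimension with normalized units and a soft Gaussian source. This is a toy sketch of the general method, not the 3-D GPU code described above; grid sizes and source parameters are arbitrary:

```python
import numpy as np

# 1-D vacuum FDTD (Yee scheme), normalized so c * dt / dx = 1.
nx, nt = 200, 150
ez = np.zeros(nx)   # electric field on integer grid points
hy = np.zeros(nx)   # magnetic field on staggered half-grid points
for t in range(nt):
    hy[:-1] += ez[1:] - ez[:-1]                    # update H from the curl of E
    ez[1:]  += hy[1:] - hy[:-1]                    # update E from the curl of H
    ez[nx // 2] += np.exp(-((t - 30) / 10) ** 2)   # soft Gaussian source (added, not imposed)
```

    The "soft source" adds to the field rather than overwriting it, which avoids the spurious reflections of a hard source; the 3-D plasma case extends the same leapfrog with an induced current density term.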

  20. Ideas for Advancing Code Sharing: A Different Kind of Hack Day

    NASA Astrophysics Data System (ADS)

    Teuben, P.; Allen, A.; Berriman, B.; DuPrie, K.; Hanisch, R. J.; Mink, J.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Wallin, J. F.

    2014-05-01

    How do we as a community encourage the reuse of software for telescope operations and data processing? How can we support making codes used in research available for others to examine? Continuing the discussion from last year's "Bring out your codes!" BoF session, participants separated into groups to brainstorm ideas to mitigate factors which inhibit code sharing and nurture those which encourage it. The BoF concluded with the sharing of ideas that arose from the brainstorming sessions and a brief summary by the moderator.