Sample records for cern computing colloquia

  1. USNO Scientific Colloquia - Naval Oceanography Portal

    Science.gov Websites

    USNO Scientific Colloquia. Time and Place: add additional time prior to arriving at the colloquium for issuance of a visitors badge.

  2. Colloquia | Cancer Prevention Fellowship Program

    Cancer.gov

    The Cancer Prevention and Control Colloquia Series features current research findings from guest researchers from a variety of research disciplines. Topics cover current challenges and methods used by investigators to address gaps, advance the field of cancer prevention and control, and promote the application of successful strategies. When: usually twice a month on Tuesdays from 11:00 am to noon, between September and June. Where: NCI's Shady Grove Campus, 9609 Medical Center Drive, Rockville, MD (conference room locations vary).

  3. Changing the science education paradigm: from teaching facts to engaging the intellect: Science Education Colloquia Series, Spring 2011.

    PubMed

    Fischer, Caleb Nathaniel

    2011-09-01

    Dr. Jo Handelsman, Howard Hughes Medical Institute Professor in the Department of Molecular, Cellular and Developmental Biology at Yale University, is a long-time devotee of scientific teaching and the recipient of this year's Presidential Award for Science Mentoring. She gave a seminar entitled "What is Scientific Teaching? The Changing Landscape of Science Education" as part of the Science Education Colloquia Series in spring 2011. After dissecting what is wrong with the status quo of American science education, she proposed several ideological and practical changes, including active learning, regular assessment, diversity, and mentorship. Copyright © 2011.

  4. Compendium of Directors Colloquia 1999-2012

    NASA Technical Reports Server (NTRS)

    Langhoff, Stephanie

    2013-01-01

    The Director's colloquium series was established primarily to provide a mechanism to bring high-profile individuals to the Center to present new and innovative ideas to the entire Ames staff. More focused lecture series are arranged by specific divisions or departments. Before the year 1999, there is only a fragmentary record of who spoke in this series. Announcements to the staff were sent via land mail, and tickets were required to attend the colloquium. In 1999, the notification to attend colloquia became electronic and the entire resident staff was invited to attend. The centerwide announcement archive established in this timeframe created a lasting record of the Director's colloquia. The "Office of the Chief Scientist" at Ames had the responsibility for administering the colloquium series. When I became Chief Scientist on June 29, 1998, the program was not being used extensively, and this continued to be the case through the years 1999-2002 of Harry McDonald's tenure as Director (see graph below). During Scott Hubbard's tenure as Director (September 19, 2002 - February 15, 2006), the Director's colloquium series was used exclusively for high-profile speakers from outside Ames whom he selected, such as lab directors from other research organizations around the Bay Area. During Pete Worden's tenure as Ames Director (May 4, 2006 - present) the colloquium series gained far greater use. First, I had greater discretion to select speakers for the colloquium series. Secondly, beginning in 2007, we established a 10-week Director's Colloquium Summer Series focused on enriching the experience of our many summer interns, and giving our top researchers within Ames Research Center an opportunity to present their work to the Center. The summer program has received rave reviews. This compendium contains a compilation of one-page descriptions (title, abstract and speaker biographies) for all of the 171 colloquia presented from the beginning of 1999 to October of 2012. The list of speakers includes four Nobel Laureates, six astronauts, three current or former Ames Center Directors, as well as many CEOs and other lab directors. Other featured speakers include famous mountain climbers, historians, movie stars, and former FBI agents and directors. Finally, the list includes world-class scientists and engineers representing a wide range of disciplines. It has been my privilege to host almost all of the colloquia presented in this compendium.

  5. Two-thirds of methodological research remained unpublished after presentation at Cochrane Colloquia: an empirical analysis.

    PubMed

    Chapman, Sarah; Eisinga, Anne; Hopewell, Sally; Clarke, Mike

    2012-05-01

    To determine the extent to which abstracts of methodology research, initially presented at annual meetings of The Cochrane Collaboration, have been published as full reports and over what period of time. A secondary aim was to explore whether full publication varied in different methodological subject areas. The Cochrane Methodology Register (CMR) was searched for all abstracts reporting methodology research, presented at the 11 Cochrane Colloquia from 1997 to 2007. EMBASE, PubMed, and CMR were searched for full publications of the same research. We identified 908 eligible conference abstracts and found full publications for 312 (34.4%) of these, almost half of which (47.1%) had appeared by the end of the first year after the relevant Colloquium. The proportion of abstracts that had not been published by 3 years was 69.7%, falling to 66.2% at 5 years. Publication varied considerably between different methodological areas. Approximately two-thirds of methodological research studies presented at Cochrane Colloquia remain unpublished as full papers at least 5 years later. This highlights the importance of searching conference abstracts if one wishes to find as comprehensive and complete a sample of methodological research as possible. Crown Copyright © 2012. Published by Elsevier Inc. All rights reserved.

  6. LHC@Home: a BOINC-based volunteer computing infrastructure for physics studies at CERN

    NASA Astrophysics Data System (ADS)

    Barranco, Javier; Cai, Yunhai; Cameron, David; Crouch, Matthew; Maria, Riccardo De; Field, Laurence; Giovannozzi, Massimo; Hermes, Pascal; Høimyr, Nils; Kaltchev, Dobrin; Karastathis, Nikos; Luzzi, Cinzia; Maclean, Ewen; McIntosh, Eric; Mereghetti, Alessio; Molson, James; Nosochkov, Yuri; Pieloni, Tatiana; Reid, Ivan D.; Rivkin, Lenny; Segal, Ben; Sjobak, Kyrre; Skands, Peter; Tambasco, Claudia; Veken, Frederik Van der; Zacharov, Igor

    2017-12-01

    The LHC@Home BOINC project has provided computing capacity for numerical simulations to researchers at CERN since 2004, and since 2011 has been expanded with a wider range of applications. The traditional CERN accelerator physics simulation code SixTrack enjoys continuing volunteer support, and thanks to virtualisation a number of applications from the LHC experiment collaborations and particle theory groups have joined the consolidated LHC@Home BOINC project. This paper addresses the challenges related to traditional and virtualized applications in the BOINC environment, and how volunteer computing has been integrated into the overall computing strategy of the laboratory through the consolidated LHC@Home service. Thanks to the computing power provided by volunteers joining LHC@Home, numerous accelerator beam physics studies have been carried out, yielding an improved understanding of charged particle dynamics in the CERN Large Hadron Collider (LHC) and its future upgrades. The main results are highlighted in this paper.

  7. PREFACE: International Conference on Computing in High Energy and Nuclear Physics (CHEP 2010)

    NASA Astrophysics Data System (ADS)

    Lin, Simon C.; Shen, Stella; Neufeld, Niko; Gutsche, Oliver; Cattaneo, Marco; Fisk, Ian; Panzer-Steindel, Bernd; Di Meglio, Alberto; Lokajicek, Milos

    2011-12-01

    The International Conference on Computing in High Energy and Nuclear Physics (CHEP) was held at Academia Sinica in Taipei from 18-22 October 2010. CHEP is a major series of international conferences for physicists and computing professionals from the worldwide High Energy and Nuclear Physics community, Computer Science, and Information Technology. The CHEP conference provides an international forum to exchange information on computing progress and needs for the community, and to review recent, ongoing and future activities. CHEP conferences are held at roughly 18-month intervals, alternating between Europe, Asia, America and other parts of the world. Recent CHEP conferences have been held in Prague, Czech Republic (2009); Victoria, Canada (2007); Mumbai, India (2006); Interlaken, Switzerland (2004); San Diego, California (2003); Beijing, China (2001); and Padova, Italy (2000). CHEP 2010 was organized by the Academia Sinica Grid Computing Centre. There was an International Advisory Committee (IAC) setting the overall themes of the conference, a Programme Committee (PC) responsible for the content, as well as a Conference Secretariat responsible for the conference infrastructure. There were over 500 attendees, with a program that included plenary sessions of invited speakers, a number of parallel sessions comprising around 260 oral and 200 poster presentations, and industrial exhibitions. We thank all the presenters for the excellent scientific content of their contributions to the conference. Conference tracks covered topics on Online Computing; Event Processing; Software Engineering, Data Stores, and Databases; Distributed Processing and Analysis; Computing Fabrics and Networking Technologies; Grid and Cloud Middleware; and Collaborative Tools. The conference included excursions to various attractions in Northern Taiwan, including Sanhsia Tsu Shih Temple, Yingko, Chiufen Village, the Northeast Coast National Scenic Area, Keelung, Yehliu Geopark, and Wulai Aboriginal Village, as well as two banquets held at the Grand Hotel and Grand Formosa Regent in Taipei. The next CHEP conference will be held in New York, United States, on 21-25 May 2012. We would like to thank the National Science Council of Taiwan, the EU ACEOLE project, the commercial sponsors, and the International Advisory Committee and Programme Committee members for all their support and help. Special thanks to the Programme Committee members for their careful choice of conference contributions and enormous effort in reviewing and editing about 340 post-conference proceedings papers.
Simon C Lin CHEP 2010 Conference Chair and Proceedings Editor Taipei, Taiwan November 2011 Track Editors/ Programme Committee Chair Simon C Lin, Academia Sinica, Taiwan Online Computing Track Y H Chang, National Central University, Taiwan Harry Cheung, Fermilab, USA Niko Neufeld, CERN, Switzerland Event Processing Track Fabio Cossutti, INFN Trieste, Italy Oliver Gutsche, Fermilab, USA Ryosuke Itoh, KEK, Japan Software Engineering, Data Stores, and Databases Track Marco Cattaneo, CERN, Switzerland Gang Chen, Chinese Academy of Sciences, China Stefan Roiser, CERN, Switzerland Distributed Processing and Analysis Track Kai-Feng Chen, National Taiwan University, Taiwan Ulrik Egede, Imperial College London, UK Ian Fisk, Fermilab, USA Fons Rademakers, CERN, Switzerland Torre Wenaus, BNL, USA Computing Fabrics and Networking Technologies Track Harvey Newman, Caltech, USA Bernd Panzer-Steindel, CERN, Switzerland Antonio Wong, BNL, USA Ian Fisk, Fermilab, USA Niko Neufeld, CERN, Switzerland Grid and Cloud Middleware Track Alberto Di Meglio, CERN, Switzerland Markus Schulz, CERN, Switzerland Collaborative Tools Track Joao Correia Fernandes, CERN, Switzerland Philippe Galvez, Caltech, USA Milos Lokajicek, FZU Prague, Czech Republic International Advisory Committee Chair: Simon C. Lin , Academia Sinica, Taiwan Members: Mohammad Al-Turany , FAIR, Germany Sunanda Banerjee, Fermilab, USA Dario Barberis, CERN & Genoa University/INFN, Switzerland Lothar Bauerdick, Fermilab, USA Ian Bird, CERN, Switzerland Amber Boehnlein, US Department of Energy, USA Kors Bos, CERN, Switzerland Federico Carminati, CERN, Switzerland Philippe Charpentier, CERN, Switzerland Gang Chen, Institute of High Energy Physics, China Peter Clarke, University of Edinburgh, UK Michael Ernst, Brookhaven National Laboratory, USA David Foster, CERN, Switzerland Merino Gonzalo, CIEMAT, Spain John Gordon, STFC-RAL, UK Volker Guelzow, Deutsches Elektronen-Synchrotron DESY, Hamburg, Germany John Harvey, CERN, Switzerland Frederic Hemmer, CERN, Switzerland Hafeez Hoorani, NCP, Pakistan Viatcheslav Ilyin, Moscow State University, Russia Matthias Kasemann, DESY, Germany Nobuhiko Katayama, KEK, Japan Milos Lokajícek, FZU Prague, Czech Republic David Malon, ANL, USA Pere Mato Vila, CERN, Switzerland Mirco Mazzucato, INFN CNAF, Italy Richard Mount, SLAC, USA Harvey Newman, Caltech, USA Mitsuaki Nozaki, KEK, Japan Farid Ould-Saada, University of Oslo, Norway Ruth Pordes, Fermilab, USA Hiroshi Sakamoto, The University of Tokyo, Japan Alberto Santoro, UERJ, Brazil Jim Shank, Boston University, USA Alan Silverman, CERN, Switzerland Randy Sobie , University of Victoria, Canada Dongchul Son, Kyungpook National University, South Korea Reda Tafirout , TRIUMF, Canada Victoria White, Fermilab, USA Guy Wormser, LAL, France Frank Wuerthwein, UCSD, USA Charles Young, SLAC, USA

  8. Scaling the CERN OpenStack cloud

    NASA Astrophysics Data System (ADS)

    Bell, T.; Bompastor, B.; Bukowiec, S.; Castro Leon, J.; Denis, M. K.; van Eldik, J.; Fermin Lobo, M.; Fernandez Alvarez, L.; Fernandez Rodriguez, D.; Marino, A.; Moreira, B.; Noel, B.; Oulevey, T.; Takase, W.; Wiebalck, A.; Zilli, S.

    2015-12-01

    CERN has been running a production OpenStack cloud since July 2013 to support physics computing and infrastructure services for the site. In the past year, the CERN Cloud Infrastructure has seen a constant increase in nodes, virtual machines, users and projects. This paper will present what has been done in order to make the CERN cloud infrastructure scale out.

  9. Argonne Physics Division Colloquium

    Science.gov Websites

    Argonne Physics Division colloquium listing, including "... and the birth of gravitational wave astronomy" (host: Seamus Riordan; 11 May 2018, 18 May 2018), alongside links to the University of Illinois at Chicago Physics Department Colloquia and Northwestern University Physics and Astronomy.

  10. Helix Nebula and CERN: A Symbiotic approach to exploiting commercial clouds

    NASA Astrophysics Data System (ADS)

    Barreiro Megino, Fernando H.; Jones, Robert; Kucharczyk, Katarzyna; Medrano Llamas, Ramón; van der Ster, Daniel

    2014-06-01

    The recent paradigm shift toward cloud computing in IT, and general interest in "Big Data" in particular, have demonstrated that the computing requirements of HEP are no longer globally unique. Indeed, the CERN IT department and LHC experiments have already made significant R&D investments in delivering and exploiting cloud computing resources. While a number of technical evaluations of interesting commercial offerings from global IT enterprises have been performed by various physics labs, further technical, security, sociological, and legal issues need to be addressed before their large-scale adoption by the research community can be envisaged. Helix Nebula - the Science Cloud is an initiative that explores these questions by joining the forces of three European research institutes (CERN, ESA and EMBL) with leading European commercial IT enterprises. The goals of Helix Nebula are to establish a cloud platform federating multiple commercial cloud providers, along with new business models, which can sustain the cloud marketplace for years to come. This contribution will summarize the participation of CERN in Helix Nebula. We will explain CERN's flagship use-case and the model used to integrate several cloud providers with an LHC experiment's workload management system. During the first proof of concept, this project contributed over 40,000 CPU-days of Monte Carlo production throughput to the ATLAS experiment with marginal manpower required. CERN's experience, together with that of ESA and EMBL, has provided great insight into the cloud computing industry and has highlighted several challenges that are being tackled in order to ease the export of scientific workloads to cloud environments.

  11. CERN openlab: Engaging industry for innovation in the LHC Run 3-4 R&D programme

    NASA Astrophysics Data System (ADS)

    Girone, M.; Purcell, A.; Di Meglio, A.; Rademakers, F.; Gunne, K.; Pachou, M.; Pavlou, S.

    2017-10-01

    LHC Run 3 and Run 4 represent an unprecedented challenge for HEP computing in terms of both data volume and complexity. New approaches are needed for how data is collected and filtered, processed, moved, stored and analysed if these challenges are to be met with a realistic budget. To develop innovative techniques, we are fostering relationships with industry leaders. CERN openlab is a unique resource for public-private partnership between CERN and leading Information and Communication Technology (ICT) companies. Its mission is to accelerate the development of cutting-edge solutions to be used by the worldwide HEP community. In 2015, CERN openlab started its phase V with a strong focus on tackling the upcoming LHC challenges. Several R&D programs are ongoing in the areas of data acquisition, networks and connectivity, data storage architectures, computing provisioning, computing platforms and code optimisation, and data analytics. This paper gives an overview of the various innovative technologies that are currently being explored by CERN openlab V and discusses the long-term strategies that are pursued by the LHC communities with the help of industry in closing the technological gap in processing and storage needs expected in Run 3 and Run 4.

  12. Status and Roadmap of CernVM

    NASA Astrophysics Data System (ADS)

    Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.

    2015-12-01

    Cloud resources nowadays contribute an essential share of the resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteer computers (e.g. LHC@Home 2.0). In either case, experiments need to prepare a virtual machine image that provides the execution environment for the physics application at hand. The CernVM virtual machine, since version 3, is a minimal and versatile virtual machine image capable of booting different operating systems. The virtual machine image is less than 20 megabytes in size. The actual operating system is delivered on demand by the CernVM File System. CernVM 3 has matured from a prototype to a production environment. It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer computers, and as a container for the historic Scientific Linux 5 and Scientific Linux 4 based software environments in the course of the long-term data preservation efforts of the ALICE, CMS, and ALEPH experiments. We present experience and lessons learned from the use of CernVM at scale. We also provide an outlook on the upcoming developments. These developments include adding support for Scientific Linux 7, the use of container virtualization, such as provided by Docker, and the streamlining of virtual machine contextualization towards the cloud-init industry standard.
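
    The record above mentions contextualization via the cloud-init standard. As a rough illustration of that standard (not of CernVM's actual contextualization schema), the sketch below builds a minimal #cloud-config user-data document in Python; the file path and command it contains are hypothetical placeholders.

      # A minimal "#cloud-config" user-data document of the kind cloud-init
      # consumes at first boot.  The directives (write_files, runcmd) are
      # standard cloud-init keys; the file path and command are hypothetical
      # placeholders, not CernVM's actual contextualization settings.
      USER_DATA = """\
      #cloud-config
      write_files:
        - path: /etc/example-context.conf      # placeholder configuration file
          permissions: '0644'
          content: |
            environment=demo
      runcmd:
        - [ echo, "contextualization complete" ]  # placeholder boot-time command
      """

      with open("user-data", "w") as fh:
          fh.write(USER_DATA)
      print("wrote user-data (%d bytes)" % len(USER_DATA))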

  13. CERN alerter—RSS based system for information broadcast to all CERN offices

    NASA Astrophysics Data System (ADS)

    Otto, R.

    2008-07-01

    Nearly every large organization uses a tool to broadcast messages and information across the internal campus (messages like alerts announcing interruptions in services, or just information about upcoming events). These tools typically allow administrators (operators) to send 'targeted' messages which are sent only to specific groups of users or computers, e.g. only those located in a specified building or connected to a particular computing service. CERN has a long history of such tools: CERN VMS's SPM "MESSAGE" command, Zephyr [2] and, most recently, the NICE Alerter based on the NNTP protocol. The NICE Alerter, used on all Windows-based computers, had to be phased out as a consequence of phasing out NNTP at CERN. The new solution to broadcast information messages on the CERN campus continues to provide the service based on cross-platform technologies, hence minimizing custom developments and relying on commercial software as much as possible. The new system, called CERN Alerter, is based on RSS (Really Simple Syndication) [9] for the transport protocol and uses Microsoft SharePoint as the backend for the database and posting interface. The Windows-based client relies on Internet Explorer 7.0 with custom code to trigger the window pop-ups and the notifications for new events. Linux and Mac OS X clients could also rely on any RSS reader to subscribe to targeted notifications. The paper covers the architecture and implementation aspects of the new system.
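
    As a rough sketch of the RSS-based transport described above (not the actual CERN Alerter client, whose feed URL and targeting logic are not shown here), the following Python fragment polls a hypothetical RSS feed with the standard library and reports items it has not seen before.

      # Minimal RSS poller in the spirit of the system described above: fetch a
      # feed, parse it with the standard library and report unseen items.
      # The feed URL is a hypothetical placeholder, not the real CERN Alerter feed.
      import urllib.request
      import xml.etree.ElementTree as ET

      FEED_URL = "https://example.org/alerts/rss.xml"  # placeholder

      def fetch_items(url):
          """Return a list of (guid, title) tuples from an RSS 2.0 feed."""
          with urllib.request.urlopen(url, timeout=10) as resp:
              root = ET.fromstring(resp.read())
          items = []
          for item in root.iter("item"):
              guid = item.findtext("guid") or item.findtext("link") or ""
              title = item.findtext("title") or ""
              items.append((guid, title))
          return items

      def main():
          seen = set()
          for guid, title in fetch_items(FEED_URL):
              if guid not in seen:
                  seen.add(guid)
                  # A real client would trigger a desktop pop-up here.
                  print("New notification:", title)

      if __name__ == "__main__":
          main()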

  14. Self-service for software development projects and HPC activities

    NASA Astrophysics Data System (ADS)

    Husejko, M.; Høimyr, N.; Gonzalez, A.; Koloventzos, G.; Asbury, D.; Trzcinska, A.; Agtzidis, I.; Botrel, G.; Otto, J.

    2014-05-01

    This contribution describes how CERN has implemented several essential tools for agile software development processes, ranging from version control (Git) to issue tracking (Jira) and documentation (Wikis). Running such services in a large organisation like CERN requires many administrative actions both by users and service providers, such as creating software projects, managing access rights, users and groups, and performing tool-specific customisation. Dealing with these requests manually would be a time-consuming task. Another area of our CERN computing services that has required dedicated manual support has been clusters for specific user communities with special needs. Our aim is to move all our services to a layered approach, with the server infrastructure running on the internal cloud computing infrastructure at CERN. This contribution illustrates how we plan to optimise the management of our services by means of an end-user facing platform acting as a portal into all the related services for software projects, inspired by popular portals for open-source development such as SourceForge, GitHub and others. Furthermore, the contribution will discuss recent activities with tests and evaluations of High Performance Computing (HPC) applications on different hardware and software stacks, and plans to offer a dynamically scalable HPC service at CERN, based on affordable hardware.

  15. Software Aspects of IEEE Floating-Point Computations for Numerical Applications in High Energy Physics

    ScienceCinema

    Arnold, Jeffrey

    2018-05-14

    Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87); techniques for improving the accuracy of summation, multiplication and data interchange; compiler options for gcc and icc affecting floating-point operations; and hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; for part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.
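
    One summation-accuracy technique of the kind the talk covers can be illustrated with a short example. The sketch below contrasts naive accumulation of IEEE doubles with Kahan (compensated) summation; it is a generic textbook illustration, not code taken from the talk.

      # Naive versus Kahan (compensated) summation of IEEE doubles.
      import math

      def naive_sum(values):
          total = 0.0
          for v in values:
              total += v
          return total

      def kahan_sum(values):
          total = 0.0
          c = 0.0            # running compensation for lost low-order bits
          for v in values:
              y = v - c
              t = total + y
              c = (t - total) - y
              total = t
          return total

      values = [0.1] * 1_000_000            # exact sum is 100000
      print("naive:", naive_sum(values))    # accumulates rounding error
      print("kahan:", kahan_sum(values))    # much closer to 100000.0
      print("fsum :", math.fsum(values))    # exactly rounded reference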

  16. Future Approach to tier-0 extension

    NASA Astrophysics Data System (ADS)

    Jones, B.; McCance, G.; Cordeiro, C.; Giordano, D.; Traylen, S.; Moreno García, D.

    2017-10-01

    The current tier-0 processing at CERN is done on two managed sites, the CERN computer centre and the Wigner computer centre. With the proliferation of public cloud resources at increasingly competitive prices, we have been investigating how to transparently increase our compute capacity to include these providers. The approach taken has been to integrate these resources using our existing deployment and computer management tools and to provide them in a way that exposes them to users as part of the same site. The paper will describe the architecture, the toolset and the current production experiences of this model.

  17. Introduction to CERN

    ScienceCinema

    Heuer, R.-D.

    2018-02-19

    Summer Student Lecture Programme Introduction. The mission of CERN: to push back the frontiers of knowledge, e.g. the secrets of the Big Bang... what was the matter like within the first moments of the Universe's existence? The laboratory also has to develop new technologies for accelerators and detectors, as well as for information technology (the Web and the GRID) and medicine (diagnosis and therapy). There are three key technology areas at CERN: accelerating, particle detection, and large-scale computing.

  18. Deployment and Operational Experiences with CernVM-FS at the GridKa Tier-1 Center

    NASA Astrophysics Data System (ADS)

    Alef, Manfred; Jäger, Axel; Petzold, Andreas; Verstege, Bernhard

    2012-12-01

    In 2012 the GridKa Tier-1 computing center hosts 130 kHS06 of computing resources and 14 PB of disk and 17 PB of tape space. These resources are shared between the four LHC VOs and a number of national and international VOs from high energy physics and other sciences. CernVM-FS has been deployed at GridKa to supplement the existing NFS-based system to access VO software on the worker nodes. It provides a solution tailored to the requirements of the LHC VOs. We will focus on the first operational experiences and the monitoring of CernVM-FS on the worker nodes and the squid caches.

  19. CERN Computing in Commercial Clouds

    NASA Astrophysics Data System (ADS)

    Cordeiro, C.; Field, L.; Garrido Bear, B.; Giordano, D.; Jones, B.; Keeble, O.; Manzi, A.; Martelli, E.; McCance, G.; Moreno-García, D.; Traylen, S.

    2017-10-01

    By the end of 2016, more than 10 million core-hours of computing resources had been delivered by several commercial cloud providers to the four LHC experiments to run their production workloads, from simulation to full chain processing. In this paper we describe the experience gained at CERN in procuring and exploiting commercial cloud resources for the computing needs of the LHC experiments. The mechanisms used for provisioning, monitoring, accounting, alarming and benchmarking will be discussed, as well as the involvement of the LHC collaborations in terms of managing the workflows of the experiments within a multi-cloud environment.

  20. Communicating the Excitement of Science

    ScienceCinema

    Turner, Michael

    2017-12-09

    In this talk (which will include some exciting science) I will discuss some lessons I have learned about communicating science to scientists (in my own field and others), students, the public, the press, and policy makers in giving 500+ colloquia and seminars, 300+ public lectures and many informal presentations (including cocktail parties).

  1. PREFACE: International Conference on Computing in High Energy and Nuclear Physics (CHEP 2012)

    NASA Astrophysics Data System (ADS)

    Ernst, Michael; Düllmann, Dirk; Rind, Ofer; Wong, Tony

    2012-12-01

    The International Conference on Computing in High Energy and Nuclear Physics (CHEP) was held at New York University on 21- 25 May 2012. CHEP is a major series of international conferences for physicists and computing professionals from the High Energy and Nuclear Physics community and related scientific and technical fields. The CHEP conference provides a forum to exchange information on computing progress and needs for the community, and to review recent, ongoing and future activities. CHEP conferences are held at roughly 18-month intervals, alternating between Europe, Asia, the Americas and other parts of the world. Recent CHEP conferences have been held in Taipei, Taiwan (2010); Prague, Czech Republic (2009); Victoria, Canada (2007); Mumbai, India (2006); Interlaken, Switzerland (2004); San Diego, United States (2003); Beijing, China (2001); Padova, Italy (2000). CHEP 2012 was organized by Brookhaven National Laboratory (BNL) and co-sponsored by New York University. The organizational structure for CHEP consists of an International Advisory Committee (IAC) which sets the overall themes of the conference, a Program Organizing Committee (POC) that oversees the program content, and a Local Organizing Committee (LOC) that is responsible for local arrangements (lodging, transportation and social events) and conference logistics (registration, program scheduling, conference site selection and conference proceedings). There were over 500 attendees with a program that included plenary sessions of invited speakers, a number of parallel sessions comprising around 125 oral and 425 poster presentations and industrial exhibitions. We thank all the presenters for the excellent scientific content of their contributions to the conference. Conference tracks covered topics on Online Computing, Event Processing, Distributed Processing and Analysis on Grids and Clouds, Computer Facilities, Production Grids and Networking, Software Engineering, Data Stores and Databases and Collaborative Tools. We would like to thank Brookhaven Science Associates, New York University, Blue Nest Events, the International Advisory Committee, the Program Committee and the Local Organizing Committee members for all their support and assistance. We also would like to acknowledge the support provided by the following sponsors: ACEOLE, Data Direct Networks, Dell, the European Middleware Initiative and Nexsan. Special thanks to the Program Committee members for their careful choice of conference contributions and enormous effort in reviewing and editing the conference proceedings. The next CHEP conference will be held in Amsterdam, the Netherlands on 14-18 October 2013. 
Conference Chair Michael Ernst (BNL) Program Committee Daniele Bonacorsi, University of Bologna, Italy Simone Campana, CERN, Switzerland Philippe Canal, Fermilab, United States Sylvain Chapeland, CERN, Switzerland Dirk Düllmann, CERN, Switzerland Johannes Elmsheuser, Ludwig Maximilian University of Munich, Germany Maria Girone, CERN, Switzerland Steven Goldfarb, University of Michigan, United States Oliver Gutsche, Fermilab, United States Benedikt Hegner, CERN, Switzerland Andreas Heiss, Karlsruhe Institute of Technology, Germany Peter Hristov, CERN, Switzerland Tony Johnson, SLAC, United States David Lange, LLNL, United States Adam Lyon, Fermilab, United States Remigius Mommsen, Fermilab, United States Axel Naumann, CERN, Switzerland Niko Neufeld, CERN, Switzerland Rolf Seuster, TRIUMF, Canada Local Organizing Committee Maureen Anderson, John De Stefano, Mariette Faulkner, Ognian Novakov, Ofer Rind, Tony Wong (BNL) Kyle Cranmer (NYU) International Advisory Committee Mohammad Al-Turany, GSI, Germany Lothar Bauerdick, Fermilab, United States Ian Bird, CERN, Switzerland Dominique Boutigny, IN2P3, France Federico Carminati, CERN, Switzerland Marco Cattaneo, CERN, Switzerland Gang Chen, Institute of High Energy Physics, China Peter Clarke, University of Edinburgh, United Kingdom Sridhara Dasu, University of Wisconsin-Madison, United States Günter Duckeck, Ludwig Maximilian University of Munich, Germany Richard Dubois, SLAC, United States Michael Ernst, BNL, United States Ian Fisk, Fermilab, United States Gonzalo Merino, PIC, Spain John Gordon, STFC-RAL, United Kingdom Volker Gülzow, DESY, Germany Frederic Hemmer, CERN, Switzerland Viatcheslav Ilyin, Moscow State University, Russia Nobuhiko Katayama, KEK, Japan Alexei Klimentov, BNL, United States Simon C. Lin, Academia Sinica, Taiwan Milos Lokajícek, FZU Prague, Czech Republic David Malon, ANL, United States Pere Mato Vila, CERN, Switzerland Mauro Morandin, INFN CNAF, Italy Harvey Newman, Caltech, United States Farid Ould-Saada, University of Oslo, Norway Ruth Pordes, Fermilab, United States Hiroshi Sakamoto, University of Tokyo, Japan Alberto Santoro, UERJ, Brazil Jim Shank, Boston University, United States Dongchul Son, Kyungpook National University, South Korea Reda Tafirout, TRIUMF, Canada Stephen Wolbers, Fermilab, United States Frank Wuerthwein, UCSD, United States

  2. CERN's Common Unix and X Terminal Environment

    NASA Astrophysics Data System (ADS)

    Cass, Tony

    The Desktop Infrastructure Group of CERN's Computing and Networks Division has developed a Common Unix and X Terminal Environment to ease the migration to Unix-based interactive computing. The CUTE architecture relies on a distributed filesystem (currently Transarc's AFS) to enable essentially interchangeable client workstations to access both "home directory" and program files transparently. Additionally, we provide a suite of programs to configure workstations for CUTE and to ensure continued compatibility. This paper describes the different components and the development of the CUTE architecture.

  3. Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm

    NASA Astrophysics Data System (ADS)

    Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Twomey, M. S.; Zaytsev, A.

    2014-06-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1, there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM-based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. OpenStack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.

  4. Rochester scientist discovers new comet with Dark Energy Camera (DECam) at

    Science.gov Websites

    ... colleagues believe. David Cameron, a visiting scientist in Eric Mamajek's research group in the Department of

  5. Significant wilderness qualities: can they be identified and monitored?

    Treesearch

    David N. Cole; Robert C. Lucas

    1989-01-01

    The third Research Colloquium, sponsored by the National Outdoor Leadership School (NOLS), convened the week of August 10-15 in the Popo Agie Wilderness, Shoshone National Forest, Wyoming. The purpose of these colloquia is to facilitate interaction and discussion between wilderness managers, researchers, and NOLS personnel in a wilderness setting. At each colloquium,...

  6. Democracy and Higher Education in South Africa: A Dialogue in Tension

    ERIC Educational Resources Information Center

    Soudien, Crain

    2006-01-01

    This article is a critical overview of the symposium on the contribution of the quality assurance process to democracy recently held at the University of Stellenbosch. It argues that recent symposia and colloquia in which South Africans themselves have attempted to stake out their intellectual credentials are extremely important and that the…

  7. Blanco Webcams | CTIO

    Science.gov Websites

    ... the slit, then the DECam image is being occluded. The small circle is the field of view of DECam on

  8. Experience with procuring, deploying and maintaining hardware at remote co-location centre

    NASA Astrophysics Data System (ADS)

    Bärring, O.; Bonfillou, E.; Clement, B.; Coelho Dos Santos, M.; Dore, V.; Gentit, A.; Grossir, A.; Salter, W.; Valsan, L.; Xafi, A.

    2014-05-01

    In May 2012 CERN signed a contract with the Wigner Data Centre in Budapest for an extension to CERN's central computing facility beyond its current boundaries, set by the electrical power and cooling available for computing. The centre is operated as a remote co-location site providing rack space, electrical power and cooling for server, storage and networking equipment acquired by CERN. The contract includes a 'remote-hands' service for physical handling of hardware (rack mounting, cabling, pushing power buttons, ...) and maintenance repairs (swapping disks, memory modules, ...). However, only CERN personnel have network and console access to the equipment for system administration. This report gives insight into the adaptations of hardware architecture, procurement and delivery procedures undertaken to enable remote physical handling of the hardware. We will also describe tools and procedures developed for automating the registration, burn-in testing, acceptance and maintenance of the equipment, as well as an independent but important change to the IT asset management (ITAM) developed in parallel as part of the CERN IT Agile Infrastructure project. Finally, we will report on experience from the first large delivery of 400 servers and 80 SAS JBOD expansion units (24 drive bays) to Wigner in March 2013. Changes were made to the abstract file on 13/06/2014 to correct errors; the pdf file was unchanged.

  9. Cancer Health Disparities Research: Where have we been and where should we go? | Division of Cancer Prevention

    Cancer.gov

    Speaker | Scarlett Lin Gomez, PhD, MPH will present the next CPFP Colloquia lecture entitled, "Cancer Health Disparities Research: Where have we been and where should we go?" Date: May 15, 2018; Time: 11:00 am - 12:00 pm; Location: NCI Shady Grove campus, Conference Room: 2W-910/912.

  10. Cosmic x ray physics

    NASA Technical Reports Server (NTRS)

    Mccammon, Dan; Cox, D. P.; Kraushaar, W. L.; Sanders, W. T.

    1992-01-01

    This final report covers the period 1 January 1985 - 31 March 1992. It is divided into the following sections: the soft x-ray background; proportional counter and filter calibrations; sounding rocket flight preparations; new sounding rocket payload: x-ray calorimeter; and theoretical studies. Staff, publications, conference proceedings, invited talks, contributed talks, colloquia and seminars, public service lectures, and Ph. D. theses are listed.

  11. Second Language Research Forum Colloquia 2009: Colloquium--Language Learning Abroad: Insights from the Missionary Experience

    ERIC Educational Resources Information Center

    Hansen, Lynne

    2011-01-01

    Recent years have brought increasing attention to studies of language acquisition in a country where the language is spoken, as opposed to formal language study in classrooms. Research on language learners in immersion contexts is important, as the question of whether study abroad is valuable is still somewhat controversial among researchers…

  12. An alternative model to distribute VO software to WLCG sites based on CernVM-FS: a prototype at PIC Tier1

    NASA Astrophysics Data System (ADS)

    Lanciotti, E.; Merino, G.; Bria, A.; Blomer, J.

    2011-12-01

    In a distributed computing model such as WLCG, experiment-specific application software has to be efficiently distributed to any site of the Grid. Application software is currently installed in a shared area of the site, visible to all Worker Nodes (WNs) of the site through some protocol (NFS, AFS or other). The software is installed at the site by jobs which run on a privileged node of the computing farm where the shared area is mounted in write mode. This model presents several drawbacks which cause a non-negligible rate of job failure. An alternative model for software distribution based on the CERN Virtual Machine File System (CernVM-FS) has been tried at PIC, the Spanish Tier-1 site of WLCG. The test bed used and the results are presented in this paper.

  13. On JALT 95: Curriculum and Evaluation. Proceedings of the JALT International Conference on Language Teaching/Learning (22nd, Nagoya, Japan, November 1995). Section Six: In the Classroom.

    ERIC Educational Resources Information Center

    Brown, James Dean; And Others

    Texts of conference papers and summaries of colloquia on classroom environment and interaction in second language teaching are presented, including: "Fluency Development" (James Dean Brown); "Learner Development: Three Designs" (in Japanese) (Hiroko Naito, Yoshitake Tonia, Takao Kinugawa, Morio Hamada); "Desirable Japanese Teachers and Classroom…

  14. We are Not Hard-to-Reach: Community Competent Research to Address Racial Tobacco-Related Disparities | Division of Cancer Prevention

    Cancer.gov

    Speaker | Monica Webb Hooper, PhD, Associate Director for Cancer Disparities Research, Professor of Oncology, Family Medicine, Epidemiology & Biostatistics, and Psychological Sciences at Case Comprehensive Cancer Center Case Western Reserve University in Cleveland, OH will present the next CPFP Colloquia lecture entitled, "We are Not Hard-to-Reach: Community Competent Research

  15. On JALT 95: Curriculum and Evaluation. Proceedings of the JALT International Conference on Language Teaching/Learning (22nd, Nagoya, Japan, November 1995). Section Two: Curriculum Design.

    ERIC Educational Resources Information Center

    Harrison, Ian; And Others

    Texts of conference papers and summaries of colloquia on second language curriculum design are presented, including: "Competency Assessment in Curriculum Renewal" (summary of session with Ian Harrison, Francis Johnson, Christopher Candlin, Anthony Green, David Nunan, Charles Smith); "The Evolving of a Curriculum" (Hiroshi Abe,…

  16. Second Language Research Forum Colloquia 2009: Colloquium--Measuring the Effectiveness of Focus on Form

    ERIC Educational Resources Information Center

    Loewen, Shawn

    2011-01-01

    Focus on form, i.e. brief attention to language items within a larger meaning-focused context (Long 1991; Ellis 2001), occurs in a variety of L2 instructional contexts. Meta-analyses of the effectiveness of focus on form have found overall positive effects; however, these meta-analyses have commented on the small number of studies available for…

  17. The evolution of the ISOLDE control system

    NASA Astrophysics Data System (ADS)

    Jonsson, O. C.; Catherall, R.; Deloose, I.; Drumm, P.; Evensen, A. H. M.; Gase, K.; Focker, G. J.; Fowler, A.; Kugler, E.; Lettry, J.; Olesen, G.; Ravn, H. L.; Isolde Collaboration

    The ISOLDE on-line mass separator facility has been operating on a Personal Computer based control system since spring 1992. Front End Computers accessing the hardware are controlled from consoles running Microsoft Windows™ through a Novell NetWare4™ local area network. The control system is transparently integrated in the CERN-wide office network and makes heavy use of the CERN standard office application programs to control and to document the running of the ISOLDE isotope separators. This paper recalls the architecture of the control system, shows its recent developments and gives some examples of its graphical user interface.

  18. The evolution of the ISOLDE control system

    NASA Astrophysics Data System (ADS)

    Jonsson, O. C.; Catherall, R.; Deloose, I.; Evensen, A. H. M.; Gase, K.; Focker, G. J.; Fowler, A.; Kugler, E.; Lettry, J.; Olesen, G.; Ravn, H. L.; Drumm, P.

    1996-04-01

    The ISOLDE on-line mass separator facility has been operating on a Personal Computer based control system since spring 1992. Front End Computers accessing the hardware are controlled from consoles running Microsoft Windows® through a Novell NetWare4® local area network. The control system is transparently integrated in the CERN-wide office network and makes heavy use of the CERN standard office application programs to control and to document the running of the ISOLDE isotope separators. This paper recalls the architecture of the control system, shows its recent developments and gives some examples of its graphical user interface.

  19. Global EOS: exploring the 300-ms-latency region

    NASA Astrophysics Data System (ADS)

    Mascetti, L.; Jericho, D.; Hsu, C.-Y.

    2017-10-01

    EOS, the CERN open-source distributed disk storage system, provides the high-performance storage solution for HEP analysis and the back-end for various workflows. Recently EOS became the back-end of CERNBox, the cloud synchronisation service for CERN users. EOS can be used to take advantage of wide-area distributed installations: for the last few years CERN EOS has used a common deployment across two computer centres (Geneva-Meyrin and Budapest-Wigner) about 1,000 km apart (∼20 ms latency) with about 200 PB of disk (JBOD). In late 2015, the CERN-IT Storage group and AARNET (Australia) set up a challenging R&D project: a single EOS instance between CERN and AARNET with more than 300 ms latency (16,500 km apart). This paper reports on the successful deployment and operation of a distributed storage system between Europe (Geneva, Budapest), Australia (Melbourne) and later Asia (ASGC Taipei), allowing different types of data placement and data access across these four sites.
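
    The difficulty of the 300 ms region can be made concrete with a back-of-the-envelope estimate: any workflow that performs sequential round trips pays the full latency each time. The sketch below uses the two round-trip figures quoted in the record; the number of operations per job is an arbitrary illustrative value, not a measured EOS workload.

      # Back-of-the-envelope cost of sequential round trips at the two latencies
      # quoted above (~20 ms Geneva-Budapest, ~300 ms Geneva-Melbourne).
      SEQUENTIAL_OPS = 1_000   # hypothetical metadata round trips per job

      for label, rtt_ms in [("Geneva-Budapest (~20 ms)", 20),
                            ("Geneva-Melbourne (~300 ms)", 300)]:
          total_s = SEQUENTIAL_OPS * rtt_ms / 1000.0
          print(f"{label}: {total_s:.0f} s spent waiting on the network")
      # Prints 20 s versus 300 s: the motivation for latency-aware data placement.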

  20. Commissioning the CERN IT Agile Infrastructure with experiment workloads

    NASA Astrophysics Data System (ADS)

    Medrano Llamas, Ramón; Harald Barreiro Megino, Fernando; Kucharczyk, Katarzyna; Kamil Denis, Marek; Cinquilli, Mattia

    2014-06-01

    In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud-based strategies. In the case of CERN, the Tier-0 of the WLCG, the resource and configuration management of the computing centre is being completely restructured under the codename Agile Infrastructure. Its goal is to manage 15,000 virtual machines by means of an OpenStack middleware in order to unify all the resources in CERN's two data centres: the one in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work will describe the experience of the first deployments of the current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section will explain the integration of the experiment workload management systems (WMS) with the cloud resources. The second section will revisit the performance and stress testing performed with HammerCloud in order to evaluate and compare the suitability for the experiment workloads. The third section will go deeper into the dynamic provisioning techniques, such as the use of the cloud APIs directly by the WMS. The paper finishes with a review of the conclusions and the challenges ahead.
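
    The dynamic provisioning mentioned in the record, i.e. a workload management system driving the cloud API directly, might look roughly like the following openstacksdk sketch. The cloud name, image, flavour, network and key pair are hypothetical placeholders, not the names actually used at CERN.

      # Sketch of provisioning a worker VM directly through the OpenStack API,
      # as a workload management system might do.  All resource names below are
      # hypothetical placeholders.
      import openstack

      conn = openstack.connect(cloud="my-private-cloud")   # entry in clouds.yaml

      image = conn.compute.find_image("cernvm-worker")     # placeholder names
      flavor = conn.compute.find_flavor("m1.large")
      network = conn.network.find_network("internal")

      server = conn.compute.create_server(
          name="pilot-worker-001",
          image_id=image.id,
          flavor_id=flavor.id,
          networks=[{"uuid": network.id}],
          key_name="wms-keypair",
      )
      server = conn.compute.wait_for_server(server)        # block until ACTIVE
      print("booted", server.name, "at", server.access_ipv4)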

  1. Commissioning of a CERN Production and Analysis Facility Based on xrootd

    NASA Astrophysics Data System (ADS)

    Campana, Simone; van der Ster, Daniel C.; Di Girolamo, Alessandro; Peters, Andreas J.; Duellmann, Dirk; Coelho Dos Santos, Miguel; Iven, Jan; Bell, Tim

    2011-12-01

    The CERN facility hosts the Tier-0 of the four LHC experiments, but as part of WLCG it also offers a platform for production activities and user analysis. The CERN CASTOR storage technology has been extensively tested and utilized for LHC data recording and for exporting to external sites according to the experiments' computing models. On the other hand, to accommodate Grid data processing activities and, more importantly, chaotic user analysis, it was realized that additional functionality was needed, including a different throttling mechanism for file access. This paper will describe the xrootd-based CERN production and analysis facility for the ATLAS experiment and in particular the experiment use case and data access scenario, the xrootd redirector setup on top of the CASTOR storage system, the commissioning of the system and real-life experience with data processing and data analysis.

  2. Medical Applications at CERN and the ENLIGHT Network

    PubMed Central

    Dosanjh, Manjit; Cirilli, Manuela; Myers, Steve; Navin, Sparsh

    2016-01-01

    State-of-the-art techniques derived from particle accelerators, detectors, and physics computing are routinely used in clinical practice and medical research centers: from imaging technologies to dedicated accelerators for cancer therapy and nuclear medicine, simulations, and data analytics. Principles of particle physics themselves are the foundation of a cutting edge radiotherapy technique for cancer treatment: hadron therapy. This article is an overview of the involvement of CERN, the European Organization for Nuclear Research, in medical applications, with specific focus on hadron therapy. It also presents the history, achievements, and future scientific goals of the European Network for Light Ion Hadron Therapy, whose co-ordination office is at CERN. PMID:26835422

  3. Medical Applications at CERN and the ENLIGHT Network.

    PubMed

    Dosanjh, Manjit; Cirilli, Manuela; Myers, Steve; Navin, Sparsh

    2016-01-01

    State-of-the-art techniques derived from particle accelerators, detectors, and physics computing are routinely used in clinical practice and medical research centers: from imaging technologies to dedicated accelerators for cancer therapy and nuclear medicine, simulations, and data analytics. Principles of particle physics themselves are the foundation of a cutting edge radiotherapy technique for cancer treatment: hadron therapy. This article is an overview of the involvement of CERN, the European Organization for Nuclear Research, in medical applications, with specific focus on hadron therapy. It also presents the history, achievements, and future scientific goals of the European Network for Light Ion Hadron Therapy, whose co-ordination office is at CERN.

  4. The Sky This Week, 2016 March 1 - 8 - Naval Oceanography Portal

    Science.gov Websites

    The Sky This Week, 2016 March 1 - 8: submit many observations from different locations throughout the year; this is a great time to start.

  5. Some new mathematical methods for variational objective analysis

    NASA Technical Reports Server (NTRS)

    Wahba, Grace; Johnson, Donald R.

    1994-01-01

    Numerous results were obtained relevant to remote sensing, variational objective analysis, and data assimilation. A list of publications relevant in whole or in part is attached. The principal investigator gave many invited lectures, disseminating the results to the meteorological community as well as the statistical community. A list of invited lectures at meetings is attached, as well as a list of departmental colloquia at various universities and institutes.

  6. Hangout with CERN: a direct conversation with the public

    NASA Astrophysics Data System (ADS)

    Rao, Achintya; Goldfarb, Steven; Kahle, Kate

    2016-04-01

    Hangout with CERN refers to a weekly, half-hour-long, topical webcast hosted at CERN. The aim of the programme is threefold: (i) to provide a virtual tour of various locations and facilities at CERN, (ii) to discuss the latest scientific results from the laboratory, and, most importantly, (iii) to engage in conversation with the public and answer their questions. For each "episode", scientists gather around webcam-enabled computers at CERN and partner institutes/universities, connecting to one another using the Google+ social network's "Hangouts" tool. The show is structured as a conversation mediated by a host, usually a scientist, and viewers can ask questions to the experts in real time through a Twitter hashtag or YouTube comments. The history of Hangout with CERN can be traced back to ICHEP 2012, where several physicists crowded in front of a laptop connected to Google+, using a "Hangout On Air" webcast to explain to the world the importance of the discovery of the Higgs-like boson, announced just two days before at the same conference. Hangout with CERN has also drawn inspiration from two existing outreach endeavours: (i) ATLAS Virtual Visits, which connected remote visitors with scientists in the ATLAS Control Room via video conference, and (ii) the Large Hangout Collider, in which CMS scientists gave underground tours via Hangouts to groups of schools and members of the public around the world. In this paper, we discuss the role of Hangout with CERN as a bi-directional outreach medium and an opportunity to train scientists in effective communication.

  7. The Sky This Week, 2016 January 19 - 26 - Naval Oceanography Portal

    Science.gov Websites

    The Sky This Week, 2016 January 19 - 26: see all the bright planets.

  8. The Sky This Week, 2016 April 19 - 26 - Naval Oceanography Portal

    Science.gov Websites

    The Sky This Week, 2016 April 19 - 26: a bright and speedy star.

  9. The Sky This Week, 2015 December 8 - 15 - Naval Oceanography Portal

    Science.gov Websites

    The Sky This Week, 2015 December 8 - 15: the year's best meteor ...

  10. The Sky This Week, 2016 February 16 - 23 - Naval Oceanography Portal

    Science.gov Websites

    The Sky This Week, 2016 February 16 - 23: a very nice pentagon shape; the southernmost star in the pentagon, Al Nath, is "shared" as the ...

  11. Novel apparatus and methods for performing remotely controlled particle-solid interaction experiments at CERN

    NASA Astrophysics Data System (ADS)

    Krause, H. F.; Deveney, E. F.; Jones, N. L.; Vane, C. R.; Datz, S.; Knudsen, H.; Grafström, P.; Schuch, R.

    1997-04-01

    Recent atomic physics studies involving ultrarelativistic Pb ions required solid target positioners, scintillators, and a sophisticated data acquisition and control system placed in a remote location at the CERN Super Proton Synchrotron near Geneva, Switzerland. The apparatus, installed in a high-radiation zone underground, had to (i) function for months, (ii) automatically respond to failures such as power outages and particle-induced computer upsets, and (iii) communicate with the outside world via a telephone line. The heart of the apparatus developed was an Apple Macintosh-based CAMAC system that answered the telephone and interpreted and executed remote control commands that (i) sensed and set targets, (ii) controlled voltages and discriminator levels for scintillators, (iii) modified data acquisition hardware logic, (iv) reported control information, and (v) automatically synchronized data acquisition to the CERN spill cycle via a modem signal and transmitted experimental data to a remote computer. No problems were experienced using intercontinental telephone connections at 1200 baud. Our successful "virtual laboratory" approach that uses off-the-shelf electronics is generally adaptable to more conventional bench-type experiments.

  12. First results from a combined analysis of CERN computing infrastructure metrics

    NASA Astrophysics Data System (ADS)

    Duellmann, Dirk; Nieke, Christian

    2017-10-01

    The IT Analysis Working Group (AWG) has been formed at CERN across individual computing units and the experiments to attempt a cross-cutting analysis of computing infrastructure and application metrics. In this presentation we will describe the first results obtained using medium/long term data (1 month to 1 year) correlating box-level metrics, job-level metrics from LSF and HTCondor, IO metrics from the physics analysis disk pools (EOS), and networking and application-level metrics from the experiment dashboards. We will cover in particular the measurement of hardware performance and prediction of job duration, the latency sensitivity of different job types and a search for bottlenecks with the production job mix in the current infrastructure. The presentation will conclude with the proposal of a small set of metrics to simplify drawing conclusions also in the more constrained environment of public cloud deployments.
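
    The kind of correlation study described above can be prototyped with ordinary data-analysis tooling once the metrics are exported to flat tables. The sketch below is a minimal, hypothetical Python/pandas example: the file names and column names (host, hs06, walltime, io_wait, events) are assumptions for illustration, not the actual AWG schema.

      import pandas as pd
      from sklearn.linear_model import LinearRegression  # simple baseline predictor

      # Hypothetical exports: one row per finished batch job, one row per host.
      jobs  = pd.read_csv("batch_jobs.csv")     # columns: job_id, host, walltime, io_wait, events
      hosts = pd.read_csv("host_metrics.csv")   # columns: host, hs06, cores, mem_gb

      df = jobs.merge(hosts, on="host", how="inner")   # correlate job-level and box-level metrics

      # Baseline prediction of job duration from hardware capacity and IO behaviour.
      X = df[["hs06", "cores", "io_wait", "events"]]
      y = df["walltime"]
      model = LinearRegression().fit(X, y)
      print("R^2 on the training sample:", model.score(X, y))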

  13. Service management at CERN with Service-Now

    NASA Astrophysics Data System (ADS)

    Toteva, Z.; Alvarez Alonso, R.; Alvarez Granda, E.; Cheimariou, M.-E.; Fedorko, I.; Hefferman, J.; Lemaitre, S.; Clavo, D. Martin; Martinez Pedreira, P.; Pera Mira, O.

    2012-12-01

    The Information Technology (IT) and the General Services (GS) departments at CERN have decided to combine their extensive experience in support for IT and non-IT services towards a common goal - to bring the services closer to the end user based on Information Technology Infrastructure Library (ITIL) best practice. The collaborative efforts have so far produced definitions for the incident and the request fulfilment processes, which are based on a unique two-dimensional service catalogue that combines both the user and the support team views of all services. After an extensive evaluation of the available industrial solutions, Service-now was selected as the tool to implement the CERN Service-Management processes. The initial release of the tool provided an attractive web portal for the users and successfully implemented two basic ITIL processes: incident management and request fulfilment. It also integrated with the CERN personnel databases and the LHC GRID ticketing system. Subsequent releases continued to integrate with other third-party tools, such as the facility management systems of CERN, and to implement new processes such as change management. Independently of those new development activities it was decided to simplify the request fulfilment process in order to achieve easier acceptance by the CERN user community. We believe that, thanks to the high modularity of the Service-now tool, the parallel design of ITIL processes (e.g. event management) and non-ITIL processes (e.g. computer centre hardware management) will be easily achieved. This presentation will describe the experience that we have acquired and the techniques that were followed to achieve the CERN customization of the Service-Now tool.

  14. The Sky This Week, 2016 March 15 - 23 - Naval Oceanography Portal

    Science.gov Websites

    The Sky This Week, 2016 March 15 - 23: The equinox and the calendar

  15. The Sky This Week, 2016 January 12 - 19 - Naval Oceanography Portal

    Science.gov Websites

    The Sky This Week, 2016 January 12 - 19: ...would be very different in that case! The planets are now beginning to span more of the night. Leading...

  16. The Sky This Week, 2016 January 5 - 12 - Naval Oceanography Portal

    Science.gov Websites

    The Sky This Week, 2016 January 5 - 12: Count the stars in Orion for...

  17. Remote Sensing of Aerosols from Satellites: Why Has It Been So Difficult to Quantify Aerosol-Cloud Interactions for Climate Assessment, and How Can We Make Progress?

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph A.

    2015-01-01

    The organizers of the National Academy of Sciences Arthur M. Sackler Colloquia Series on Improving Our Fundamental Understanding of the Role of Aerosol-Cloud Interactions in the Climate System would like to post Ralph Kahn's presentation entitled Remote Sensing of Aerosols from Satellites: Why has it been so difficult to quantify aerosol-cloud interactions for climate assessment, and how can we make progress? to their public website.

  18. Unified Monitoring Architecture for IT and Grid Services

    NASA Astrophysics Data System (ADS)

    Aimar, A.; Aguado Corman, A.; Andrade, P.; Belov, S.; Delgado Fernandez, J.; Garrido Bear, B.; Georgiou, M.; Karavakis, E.; Magnoni, L.; Rama Ballesteros, R.; Riahi, H.; Rodriguez Martinez, J.; Saiz, P.; Zolnai, D.

    2017-10-01

    This paper provides a detailed overview of the Unified Monitoring Architecture (UMA) that aims at merging the monitoring of the CERN IT data centres and the WLCG monitoring using common and widely-adopted open source technologies such as Flume, Elasticsearch, Hadoop, Spark, Kibana, Grafana and Zeppelin. It provides insights and details on the lessons learned, explaining the work performed in order to monitor the CERN IT data centres and the WLCG computing activities such as the job processing, data access and transfers, and the status of sites and services.
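
    As a concrete illustration of the transport layer in such an architecture, the snippet below pushes a single metric document to an Elasticsearch endpoint over its standard REST API, from where it could be visualized in Kibana or Grafana. The index name, field names and endpoint URL are placeholders chosen for the example, not the ones used by the CERN UMA deployment.

      import datetime
      import requests  # plain HTTP client; Flume or a dedicated shipper would normally do this

      ES = "http://localhost:9200"          # placeholder Elasticsearch endpoint

      doc = {
          "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
          "host": "batch-node-001",          # hypothetical source host
          "metric": "running_jobs",
          "value": 42,
      }

      # POST /<index>/_doc creates a new document in the 'datacentre-metrics' index.
      resp = requests.post(f"{ES}/datacentre-metrics/_doc", json=doc, timeout=5)
      resp.raise_for_status()
      print("indexed with id", resp.json()["_id"])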

  19. Installation and management of the SPS and LEP control system computers

    NASA Astrophysics Data System (ADS)

    Bland, Alastair

    1994-12-01

    Control of the CERN SPS and LEP accelerators and service equipment on the two CERN main sites is performed via workstations, file servers, Process Control Assemblies (PCAs) and Device Stub Controllers (DSCs). This paper describes the methods and tools that have been developed to manage the file servers, PCAs and DSCs since the LEP startup in 1989. There are five operational DECstation 5000s used as file servers and boot servers for the PCAs and DSCs. The PCAs consist of 90 SCO Xenix 386 PCs, 40 LynxOS 486 PCs and more than 40 older NORD 100s. The DSCs consist of 90 OS-9 68030 VME crates and 10 LynxOS 68030 VME crates. In addition there are over 100 development systems. The controls group is responsible for installing the computers, starting all the user processes and ensuring that the computers and the processes run correctly. The operators in the SPS/LEP control room and the Services control room have a Motif-based X window program which gives them, in real time, the state of all the computers and allows them to solve problems or reboot them.

  20. EFQPSK Versus CERN: A Comparative Study

    NASA Technical Reports Server (NTRS)

    Borah, Deva K.; Horan, Stephen

    2001-01-01

    This report presents a comparative study of Enhanced Feher's Quadrature Phase Shift Keying (EFQPSK) and Constrained Envelope Root Nyquist (CERN) techniques. These two techniques have been developed in recent times to provide high spectral and power efficiencies in nonlinear amplifier environments. The purpose of this study is to gain insights into these techniques and to help system planners and designers with an appropriate set of guidelines for using them. The comparative study presented in this report relies on effective simulation models and procedures. Therefore, a significant part of this report is devoted to understanding the mathematical and simulation models of the techniques and their set-up procedures. In particular, mathematical models of EFQPSK and CERN, effects of the sampling rate in discrete-time signal representation, and modeling of nonlinear amplifiers and predistorters have been considered in detail. The results of this study show that both EFQPSK and CERN signals provide spectrally efficient communications compared to filtered conventional linear modulation techniques when a nonlinear power amplifier is used. However, there are important differences. The spectral efficiency of CERN signals, with a small amount of input backoff, is significantly better than that of EFQPSK signals if the nonlinear amplifier is an ideal clipper. However, to achieve such spectral efficiencies with a practical nonlinear amplifier, CERN processing requires a predistorter which effectively translates the amplifier's characteristics close to those of an ideal clipper. Thus, the spectral performance of CERN signals strongly depends on the predistorter. EFQPSK signals, on the other hand, do not need such predistorters since their spectra are almost unaffected by the nonlinear amplifier. This report discusses several receiver structures for EFQPSK signals. It is observed that optimal receiver structures can be realized for both coded and uncoded EFQPSK signals with only a modest increase in computational complexity. When a nonlinear amplifier is used, the bit error rate (BER) performance of CERN signals with a matched filter receiver is found to be more than one decibel (dB) worse than the bit error performance of EFQPSK signals. Although channel coding is found to provide BER performance improvement for both EFQPSK and CERN signals, the performance of EFQPSK signals remains better than that of CERN. Optimal receiver structures for CERN signals with nonlinear equalization are left as possible future work. Based on the numerical results, it is concluded that, in nonlinear channels, CERN processing leads towards better bandwidth efficiency with a compromise in power efficiency. Hence, for bandwidth-efficient communication needs, CERN is a good solution provided effective adaptive predistorters can be realized. On the other hand, EFQPSK signals provide a good power-efficient solution with a compromise in bandwidth efficiency.
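
    The ideal clipper referenced above is straightforward to model on complex baseband samples: the envelope is passed through unchanged up to a saturation level and hard-limited beyond it, while the phase is preserved. A minimal numpy sketch of that amplifier model, applied to a multi-tone test signal with large envelope fluctuations, is shown below; it illustrates the concept only and is not the simulation setup used in the report.

      import numpy as np

      def ideal_clipper(x, a_sat=1.0):
          """Hard-limit the envelope of a complex baseband signal, keeping its phase."""
          mag = np.abs(x)
          scale = np.minimum(1.0, a_sat / np.maximum(mag, 1e-12))
          return x * scale

      # A multi-tone signal with random phases has large envelope fluctuations (high PAPR),
      # similar in spirit to a filtered root-Nyquist waveform.
      rng = np.random.default_rng(0)
      n, tones = 4096, 64
      t = np.arange(n) / n
      phases = rng.uniform(0, 2 * np.pi, tones)
      signal = sum(np.exp(1j * (2 * np.pi * k * t + phases[k])) for k in range(tones)) / np.sqrt(tones)

      clipped = ideal_clipper(signal, a_sat=1.5)
      papr = lambda s: 10 * np.log10(np.max(np.abs(s) ** 2) / np.mean(np.abs(s) ** 2))
      print(f"PAPR before: {papr(signal):.2f} dB, after clipping: {papr(clipped):.2f} dB")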

  1. A practical approach to virtualization in HEP

    NASA Astrophysics Data System (ADS)

    Buncic, P.; Aguado Sánchez, C.; Blomer, J.; Harutyunyan, A.; Mudrinic, M.

    2011-01-01

    In the attempt to solve the problem of processing data coming from LHC experiments at CERN at a rate of 15PB per year, for almost a decade the High Energy Physics (HEP) community has focused its efforts on the development of the Worldwide LHC Computing Grid. This generated large interest and expectations, promising to revolutionize computing. Meanwhile, having initially taken part in the Grid standardization process, industry has moved in a different direction and started promoting the Cloud Computing paradigm, which aims to solve problems on a similar scale and in an equally seamless way as was expected in the idealized Grid approach. A key enabling technology behind Cloud computing is server virtualization. In early 2008, an R&D project was established in the PH-SFT group at CERN to investigate how virtualization technology could be used to improve and simplify the daily interaction of physicists with experiment software frameworks and the Grid infrastructure. In this article we shall first briefly compare the Grid and Cloud computing paradigms and then summarize the results of the R&D activity, pointing out where and how virtualization technology could be effectively used in our field in order to maximize practical benefits whilst avoiding potential pitfalls.

  2. PC as Physics Computer for LHC?

    NASA Astrophysics Data System (ADS)

    Jarp, Sverre; Simmins, Antony; Tang, Hong; Yaari, R.

    In the last five years, we have seen RISC workstations take over the computing scene that was once controlled by mainframes and supercomputers. In this paper we will argue that the same phenomenon might happen again. A project, active since March this year in the Physics Data Processing group of CERN's CN division, is described in which ordinary desktop PCs running Windows (NT and 3.11) have been used to create an environment for running large LHC batch jobs (initially the DICE simulation job of Atlas). The problems encountered in porting both the CERN library and the specific Atlas codes are described, together with some encouraging benchmark results when compared to existing RISC workstations in use by the Atlas collaboration. The issues of establishing the batch environment (batch monitor, staging software, etc.) are also covered. Finally, a quick extrapolation of commodity computing power available in the future is touched upon to indicate what kind of cost envelope could be sufficient for the simulation farms required by the LHC experiments.

  3. The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi

    The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as central HDFS and Hadoop Spark cluster. The system leverages modern technologies such as a document oriented database and the Hadoop eco-system to provide the necessary flexibility to reliably process, store, and aggregate $\mathcal{O}$(1M) documents on a daily basis. We describe the data transformation, the short and long term storage layers, the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.
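
    The long-term storage and aggregation layer described above lends itself to a very compact Spark job once the job report documents are available as JSON on HDFS. The sketch below is a hypothetical example: the HDFS path and the field names (site, exit_code, cpu_time) are assumptions for illustration and do not reflect the actual WMArchive schema.

      from pyspark.sql import SparkSession, functions as F

      spark = SparkSession.builder.appName("wmarchive-aggregation-sketch").getOrCreate()

      # Hypothetical location of one day's worth of framework job report documents.
      reports = spark.read.json("hdfs:///archive/fwjr/2018/03/19/*.json")

      # Aggregate a daily performance summary per site: job counts, failure rate, mean CPU time.
      summary = (reports
                 .groupBy("site")
                 .agg(F.count("*").alias("jobs"),
                      F.avg((F.col("exit_code") != 0).cast("int")).alias("failure_rate"),
                      F.avg("cpu_time").alias("avg_cpu_time_s")))

      summary.orderBy(F.desc("jobs")).show(20, truncate=False)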

  4. The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC

    DOE PAGES

    Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi

    2018-03-19

    The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as central HDFS and Hadoop Spark cluster. The system leverages modern technologies such as a document oriented database and the Hadoop eco-system to provide the necessary flexibility to reliably process, store, and aggregate $\mathcal{O}$(1M) documents on a daily basis. We describe the data transformation, the short and long term storage layers, the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.

  5. Data acquisition using the 168/E. [CERN ISR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carroll, J.T.; Cittolin, S.; Demoulin, M.

    1983-03-01

    Event sizes and data rates at the CERN antiproton-proton (p̄p) collider present a formidable environment for a high-level trigger. A system using three 168/E processors for real-time event selection in experiment UA1 is described. With the 168/E data memory expanded to 512K bytes, each processor holds a complete event, allowing a FORTRAN trigger algorithm access to data from the entire detector. A smart CAMAC interface reads five Remus branches in parallel, transferring one word to the target processor every 0.5 μs. The NORD host computer can simultaneously read an accepted event from another processor.

  6. HPC in a HEP lab: lessons learned from setting up cost-effective HPC clusters

    NASA Astrophysics Data System (ADS)

    Husejko, Michal; Agtzidis, Ioannis; Baehler, Pierre; Dul, Tadeusz; Evans, John; Himyr, Nils; Meinhard, Helge

    2015-12-01

    In this paper we present our findings gathered during the evaluation and testing of Windows Server High-Performance Computing (Windows HPC) in view of potentially using it as a production HPC system for engineering applications. The Windows HPC package, an extension of Microsoft's Windows Server product, provides all essential interfaces, utilities and management functionality for creating, operating and monitoring a Windows-based HPC cluster infrastructure. The evaluation and test phase was focused on verifying the functionalities of Windows HPC, its performance, support of commercial tools and the integration with the users' work environment. We describe constraints imposed by the way the CERN Data Centre is operated, by licensing for engineering tools, and by the scalability and behaviour of the HPC engineering applications used at CERN. We will present an initial set of requirements, which were created based on the above constraints and requests from the CERN engineering user community. We will explain how we have configured Windows HPC clusters to provide the job scheduling functionalities required to support the CERN engineering user community, quality of service, user- and project-based priorities, and fair access to limited resources. Finally, we will present several performance tests we carried out to verify Windows HPC performance and scalability.

  7. [Actuator placement for active sound and vibration control

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Two refereed journal publications and ten talks given at conferences, seminars, and colloquia resulted from research supported by NASA. They are itemized in this report. The two publications were entitled "Reactive Tabu and Search Sensor Selection in Active Structural Acoustic Control Problems" and "Quelling Cabin Noise in Turboprop Aircraft via Active Control." The conference presentations covered various aspects of actuator placement, including location problems, for active sound and vibration control of cylinders, of commuter jets, of propeller driven or turboprop aircraft, and for quelling aircraft cabin or interior noise.

  8. The International Colloquium on Atomic Spectra and Oscillator Strengths for Astrophysical and Laboratory Plasmas

    NASA Technical Reports Server (NTRS)

    Sugar, J.; Leckrone, D.

    1993-01-01

    This was the fourth in a series of colloquia begun at the University of Lund, Sweden in 1983 and subsequently held in Toledo, Ohio and Amsterdam, The Netherlands. The purpose of these meetings is to provide an international forum for communication between major users of atomic spectroscopic data and the providers of these data. These data include atomic wavelengths, line shapes, energy levels, lifetimes, and oscillator strengths. Speakers were selected from a wide variety of disciplines including astrophysics, laboratory plasma research, spectrochemistry, and theoretical and experimental atomic physics.

  9. Migration of the CERN IT Data Centre Support System to ServiceNow

    NASA Astrophysics Data System (ADS)

    Alvarez Alonso, R.; Arneodo, G.; Barring, O.; Bonfillou, E.; Coelho dos Santos, M.; Dore, V.; Lefebure, V.; Fedorko, I.; Grossir, A.; Hefferman, J.; Mendez Lorenzo, P.; Moller, M.; Pera Mira, O.; Salter, W.; Trevisani, F.; Toteva, Z.

    2014-06-01

    The large potential and flexibility of the ServiceNow infrastructure, based on "best practice" methods, has allowed the migration of some of the ticketing systems traditionally used for the monitoring of the servers and services available at the CERN IT Computer Centre. This migration enables the standardization and globalization of the ticketing and control systems, implementing a generic system extensible to other departments and users. One of the activities of the Service Management project, together with the Computing Facilities group, has been the migration of the Remedy-based ITCM structure to ServiceNow within the context of the ITIL process called Event Management. The experience gained during the first months of operation has been instrumental in the migration to ServiceNow of other service monitoring systems and databases. The usage of this structure has also been extended to service tracking at the Wigner Centre in Budapest.

  10. Managing a tier-2 computer centre with a private cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-06-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.
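
    Exposing an EC2-compatible API, as mentioned above, means that standard client libraries can drive the private cloud. The sketch below uses Python and boto3 against a hypothetical EC2 endpoint to start a batch worker VM; the endpoint URL, credentials, image ID and instance type are placeholders, not values from the INFN-Torino deployment.

      import boto3

      # Hypothetical EC2-compatible endpoint exposed by the private cloud.
      ec2 = boto3.client(
          "ec2",
          endpoint_url="https://cloud.example.org:8788/services/Cloud",
          aws_access_key_id="EXAMPLE_KEY",
          aws_secret_access_key="EXAMPLE_SECRET",
          region_name="default",
      )

      # Launch one batch-worker VM from a pre-built, contextualized image.
      resp = ec2.run_instances(
          ImageId="ami-00000001",        # placeholder image ID
          InstanceType="m1.large",       # placeholder flavour
          MinCount=1,
          MaxCount=1,
      )
      print("started instance", resp["Instances"][0]["InstanceId"])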

  11. AGIS: The ATLAS Grid Information System

    NASA Astrophysics Data System (ADS)

    Anisenkov, A.; Di Girolamo, A.; Klimentov, A.; Oleynik, D.; Petrosyan, A.; Atlas Collaboration

    2014-06-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. In this paper we describe the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  12. Deconvoluting the Complexity of Bone Metastatic Prostate Cancer via Computational Modeling

    DTIC Science & Technology

    2016-09-01

    Fellowship (2015-2017) Consejo Nacional de Ciencia y Tecnologia (CONACYT) MRes/PhD scholarship (2007- 2011) CERN Teacher Programme scholarship (2007...UDLAP Apoyo a Ciencias BSc scholarship (2000-2005) Awards Society for Mathematical Biology (SMB) Travel Award (2015

  13. Integrating Containers in the CERN Private Cloud

    NASA Astrophysics Data System (ADS)

    Noel, Bertrand; Michelino, Davide; Velten, Mathieu; Rocha, Ricardo; Trigazis, Spyridon

    2017-10-01

    Containers remain a hot topic in computing, with new use cases and tools appearing every day. Basic functionality such as spawning containers seems to have settled, but topics like volume support or networking are still evolving. Solutions like Docker Swarm, Kubernetes or Mesos provide similar functionality but target different use cases, exposing distinct interfaces and APIs. The CERN private cloud is made of thousands of nodes and users, with many different use cases. A single solution for container deployment would not cover every one of them, and supporting multiple solutions involves repeating the same process multiple times for integration with authentication services, storage services or networking. In this paper we describe OpenStack Magnum as the solution to offer container management in the CERN cloud. We will cover its main functionality and some advanced use cases using Docker Swarm and Kubernetes, highlighting some relevant differences between the two. We will describe the most common use cases in HEP and how we integrated popular services like CVMFS or AFS in the most transparent way possible, along with some limitations found. Finally we will look into ongoing work on advanced scheduling for both Swarm and Kubernetes, support for running batch like workloads and integration of container networking technologies with the CERN infrastructure.

  14. CERN data services for LHC computing

    NASA Astrophysics Data System (ADS)

    Espinal, X.; Bocchi, E.; Chan, B.; Fiorot, A.; Iven, J.; Lo Presti, G.; Lopez, J.; Gonzalez, H.; Lamanna, M.; Mascetti, L.; Moscicki, J.; Pace, A.; Peters, A.; Ponce, S.; Rousseau, H.; van der Ster, D.

    2017-10-01

    Dependability, resilience, adaptability and efficiency: growing requirements call for tailored storage services and novel solutions. Unprecedented volumes of data coming from the broad number of experiments at CERN need to be quickly available in a highly scalable way for large-scale processing and data distribution, while in parallel they are routed to tape for long-term archival. These activities are critical for the success of HEP experiments. Nowadays we operate at high incoming throughput (14 GB/s during the 2015 LHC Pb-Pb run and 11 PB in July 2016) and with concurrent complex production workloads. In parallel, our systems provide the platform for the continuous user- and experiment-driven workloads for large-scale data analysis, including end-user access and sharing. The storage services at CERN cover the needs of our community: EOS and CASTOR as large-scale storage; CERNBox for end-user access and sharing; Ceph as data back-end for the CERN OpenStack infrastructure, NFS services and S3 functionality; AFS for legacy distributed-file-system services. In this paper we will summarise the experience in supporting LHC experiments and the transition of our infrastructure from static monolithic systems to flexible components providing a more coherent environment with pluggable protocols, tuneable QoS, sharing capabilities and fine-grained ACL management, while continuing to guarantee dependable and robust services.
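
    The S3 functionality provided through Ceph means that generic S3 clients can store and retrieve objects without any CERN-specific tooling. The snippet below is a minimal Python/boto3 sketch against a hypothetical RADOS Gateway endpoint; the endpoint URL, credentials, bucket name and object key are placeholders for illustration only.

      import boto3

      # Hypothetical S3 endpoint exposed by the Ceph RADOS Gateway.
      s3 = boto3.client(
          "s3",
          endpoint_url="https://s3.example.cern.ch",
          aws_access_key_id="EXAMPLE_KEY",
          aws_secret_access_key="EXAMPLE_SECRET",
      )

      bucket = "analysis-outputs"                      # placeholder bucket
      s3.create_bucket(Bucket=bucket)
      s3.put_object(Bucket=bucket, Key="run2016/histos.root", Body=open("histos.root", "rb"))

      for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
          print(obj["Key"], obj["Size"])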

  15. AGIS: The ATLAS Grid Information System

    NASA Astrophysics Data System (ADS)

    Anisenkov, Alexey; Belov, Sergey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander

    2012-12-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS Computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about resources, services and the topology of the whole ATLAS Grid, as needed by ATLAS Distributed Computing applications and services.

  16. Integration of XRootD into the cloud infrastructure for ALICE data analysis

    NASA Astrophysics Data System (ADS)

    Kompaniets, Mikhail; Shadura, Oksana; Svirin, Pavlo; Yurchenko, Volodymyr; Zarochentsev, Andrey

    2015-12-01

    Cloud technologies allow easy load balancing between different tasks and projects. From the viewpoint of data analysis in the ALICE experiment, the cloud makes it possible to deploy software using the CERN Virtual Machine (CernVM) and the CernVM File System (CVMFS), to run different (including outdated) versions of software for long-term data preservation, and to dynamically allocate resources for different computing activities, e.g. a grid site, the ALICE Analysis Facility (AAF) and possible usage for local projects or other LHC experiments. We present a cloud solution for Tier-3 sites based on OpenStack and Ceph distributed storage with an integrated XRootD-based storage element (SE). One of the key features of the solution is that Ceph is used as a backend for the OpenStack Cinder Block Storage service and, at the same time, as a storage backend for XRootD, with redundancy and availability of data preserved by the Ceph settings. For faster and easier OpenStack deployment, the Packstack solution, based on the Puppet configuration management system, was applied. Ceph installation and configuration operations are structured, converted to Puppet manifests describing node configurations, and integrated into Packstack. This solution can be easily deployed, maintained and used even in small groups with limited computing resources and small organizations, which usually lack IT support. The proposed infrastructure has been tested on two different clouds (SPbSU & BITP) and integrates successfully with the ALICE data analysis model.

  17. Photoproduction of vector mesons in proton-proton ultraperipheral collisions at the CERN Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Xie, Ya-Ping; Chen, Xurong

    2018-05-01

    Photoproduction of vector mesons is computed with the dipole model in proton-proton ultraperipheral collisions (UPCs) at the CERN Large Hadron Collider (LHC). The dipole model framework is employed in the calculations of vector meson production in diffractive processes. Parameters of the bCGC model are refitted with the latest inclusive deep inelastic scattering experimental data. Employing the bCGC model and the boosted Gaussian light-cone wave function for vector mesons, we obtain predictions for the rapidity distributions of J/ψ and ψ(2s) mesons in proton-proton ultraperipheral collisions at the LHC. The predictions give a good description of the experimental data of LHCb. Predictions for ϕ and ω mesons are also evaluated in this paper.
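
    For orientation, calculations of this type typically factorize the proton-proton UPC cross section into an equivalent photon flux and an exclusive photoproduction cross section. In the standard, interference-neglecting form, the rapidity distribution of a vector meson V can be written schematically as

      \frac{d\sigma_{pp \to p V p}}{dy} \;\simeq\; n_\gamma(\omega_+)\,\sigma_{\gamma p \to V p}(W_+) \;+\; n_\gamma(\omega_-)\,\sigma_{\gamma p \to V p}(W_-),
      \qquad \omega_\pm = \tfrac{M_V}{2}\, e^{\pm y}, \qquad W_\pm^2 \simeq 2\,\omega_\pm \sqrt{s},

    where n_γ(ω) is the equivalent photon flux at photon energy ω, W is the photon-proton centre-of-mass energy and M_V the vector meson mass; the precise flux and any rapidity-gap survival factors are model-dependent choices not fixed by the abstract above.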

  18. How to deal with petabytes of data: the LHC Grid project

    NASA Astrophysics Data System (ADS)

    Britton, D.; Lloyd, S. L.

    2014-06-01

    We review the Grid computing system developed by the international community to deal with the petabytes of data coming from the Large Hadron Collider at CERN in Geneva, with particular emphasis on the ATLAS experiment and the UK Grid project, GridPP. Although these developments were started over a decade ago, this article explains their continued relevance as part of the ‘Big Data’ problem and how the Grid has been a forerunner of today's cloud computing.

  19. EDITORIAL: XVI Brazilian Colloquium on Orbital Dynamics

    NASA Astrophysics Data System (ADS)

    de Melo, Cristiano F.; Macau, Elbert E. N.; Prado, Antonio B. A.; Hetem Jnr, Annibal

    2013-10-01

    The XVI Brazilian Colloquium on Orbital Dynamics was held from 26-30 November 2012 at the Biazi Grand Hotel, Serra Negra, São Paulo, Brazil. The Brazilian Colloquia on Orbital Dynamics are scientific events that occur biannually and are designed to develop research in celestial mechanics, orbital dynamics, planetary science, fundamental astronomy, aerospace engineering, and nonlinear systems and chaos. The meeting has been held for 30 years and brings together researchers, professors and students from South America and also from other continents. Acknowledgements: National Council for Scientific and Technological Development (CNPq); Coordination for the Improvement of Higher Education Personnel (CAPES); São Paulo Research Foundation (FAPESP).

  20. CernVM WebAPI - Controlling Virtual Machines from the Web

    NASA Astrophysics Data System (ADS)

    Charalampidis, I.; Berzano, D.; Blomer, J.; Buncic, P.; Ganis, G.; Meusel, R.; Segal, B.

    2015-12-01

    Lately, there is a trend in scientific projects to look for computing resources in the volunteer community. In addition, to reduce the development effort required to port the scientific software stack to all the known platforms, the use of Virtual Machines (VMs) is becoming increasingly popular. Unfortunately their use further complicates the software installation and operation, restricting the volunteer audience to sufficiently expert people. CernVM WebAPI is a software solution addressing this specific case in a way that opens up wide new application opportunities. It offers a very simple API for setting up, controlling and interfacing with a VM instance in the user's computer, while at the same time relieving the user of the burden of downloading, installing and configuring the hypervisor. WebAPI comes with a lightweight JavaScript library that guides the user through the application installation process. Malicious usage is prohibited by offering a per-domain PKI validation mechanism. In this contribution we will overview this new technology, discuss its security features and examine some test cases where it is already in use.

  1. Preliminary Investigation of the Environmental Sensitivity of Acoustic Signal Transmission in the Wavenumber Domain with Respect to Source Depth Determination.

    DTIC Science & Technology

    1982-12-01

    Coppens showed great kindness by accepting supervision of this research when time was short. His concern, understanding and direction led to an...related to computer processing time and storage requirements. These factors will not be addressed directly in this research because the processing...computational efficiency. Disadvantages are a uniform mesh and periodic boundary conditions to satisfy the FFT, and filtering of the sound speed profile by

  2. EOS developments

    NASA Astrophysics Data System (ADS)

    Sindrilaru, Elvin A.; Peters, Andreas J.; Adde, Geoffray M.; Duellmann, Dirk

    2017-10-01

    CERN has been developing and operating EOS as a disk storage solution successfully for over 6 years. The CERN deployment provides 135 PB and stores 1.2 billion replicas distributed over two computer centres. The deployment includes four LHC instances, a shared instance for smaller experiments and, since last year, an instance for individual user data as well. The user instance represents the backbone of the CERNBOX service for file sharing. New use cases like synchronisation and sharing, the planned migration to reduce AFS usage at CERN and the continuous growth have brought EOS new challenges. Recent developments include the integration and evaluation of various technologies to make the transition from a single active in-memory namespace to a scale-out implementation distributed over many meta-data servers. The new architecture aims to separate the data from the application logic and user interface code, thus providing flexibility and scalability to the namespace component. Another important goal is to provide EOS as a CERN-wide mounted filesystem with strong authentication, making it a single storage repository accessible via various services and front-ends (/eos initiative). This required new developments in the security infrastructure of the EOS FUSE implementation. Furthermore, there were a series of improvements targeting the end-user experience, like tighter consistency and latency optimisations. In collaboration with Seagate as an Openlab partner, EOS has a complete integration of the OpenKinetic object drive cluster as a high-throughput, high-availability, low-cost storage solution. This contribution will discuss these three main development projects and present new performance metrics.

  3. Space charge problems in high intensity RFQs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiss, M.

    1996-06-01

    Measurements were made to check the performance of the CERN high intensity RFQs (RFQ2A and RFQ2B) and assess the validity of the design approach; the study of space charge effects was undertaken in this context. RFQ2A and RFQ2B are 200 mA, 750 keV proton accelerators, operating at 202.56 MHz. Since the beginning of 1993, RFQ2B serves as injector to the CERN 50 MeV Alvarez linac (Linac 2). In 1992, both RFQs were on the test stand to undergo a series of beam measurements, which were compared with computations. The studies concerning the RFQ2A were more detailed and they are reported in this paper. © 1996 American Institute of Physics.

  4. ATLAS@Home: Harnessing Volunteer Computing for HEP

    NASA Astrophysics Data System (ADS)

    Adam-Bourdarios, C.; Cameron, D.; Filipčič, A.; Lancon, E.; Wu, W.; ATLAS Collaboration

    2015-12-01

    A recent common theme in HEP computing is the exploitation of opportunistic resources in order to provide the maximum statistics possible for Monte Carlo simulation. Volunteer computing has been used over the last few years in many other scientific fields and by CERN itself to run simulations of the LHC beams. The ATLAS@Home project was started to allow volunteers to run simulations of collisions in the ATLAS detector. So far many thousands of members of the public have signed up to contribute their spare CPU cycles for ATLAS, and there is potential for volunteer computing to provide a significant fraction of ATLAS computing resources. Here we describe the design of the project, the lessons learned so far and the future plans.

  5. Graphics Processing Units for HEP trigger systems

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.; Neri, I.; Paolucci, P. S.; Piandani, R.; Pontisso, L.; Rescigno, M.; Simula, F.; Sozzi, M.; Vicini, P.

    2016-07-01

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPUs for a synchronous low-level trigger, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.

  6. The ALICE Software Release Validation cluster

    NASA Astrophysics Data System (ADS)

    Berzano, D.; Krzewicki, M.

    2015-12-01

    One of the most important steps of the software lifecycle is Quality Assurance: this process comprises both automatic tests and manual reviews, and all of them must pass successfully before the software is approved for production. Some tests, such as source code static analysis, are executed on a single dedicated service: in High Energy Physics, a full simulation and reconstruction chain on a distributed computing environment, backed with a sample “golden” dataset, is also necessary for the quality sign-off. The ALICE experiment uses dedicated and virtualized computing infrastructures for the Release Validation in order not to taint the production environment (i.e. CVMFS and the Grid) with non-validated software and validation jobs: the ALICE Release Validation cluster is a disposable virtual cluster appliance based on CernVM and the Virtual Analysis Facility, capable of deploying on demand, and with a single command, a dedicated virtual HTCondor cluster with an automatically scalable number of virtual workers on any cloud supporting the standard EC2 interface. Input and output data are externally stored on EOS, and a dedicated CVMFS service is used to provide the software to be validated. We will show how the Release Validation cluster deployment and disposal are completely transparent for the Release Manager, who simply triggers the validation from the ALICE build system's web interface. CernVM 3, based entirely on CVMFS, makes it possible to boot any snapshot of the operating system in time: we will show how this allows us to certify each ALICE software release for an exact CernVM snapshot, addressing the problem of Long Term Data Preservation by ensuring a consistent environment for software execution and data reprocessing in the future.

  7. Integration of end-user Cloud storage for CMS analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit the use of end-user Cloud storage for the distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, which is implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.

  8. Integration of end-user Cloud storage for CMS analysis

    DOE PAGES

    Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez; ...

    2017-05-19

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit the use of end-user Cloud storage for the distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, which is implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.

  9. Federated data storage and management infrastructure

    NASA Astrophysics Data System (ADS)

    Zarochentsev, A.; Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Hristov, P.

    2016-10-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate a growth of storage needs of at least an order of magnitude; it will require new approaches in data storage organization and data handling. In our project we address the fundamental problem of designing an architecture to integrate distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. We have prototyped a federated storage for the Russian T1 and T2 centres located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and the Grid for the ALICE and ATLAS experiments. We will present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within national academic facilities for High Energy and Nuclear Physics as well as for other data-intensive science applications, such as bio-informatics.

  10. AGIS: Evolution of Distributed Computing information system for ATLAS

    NASA Astrophysics Data System (ADS)

    Anisenkov, A.; Di Girolamo, A.; Alandes, M.; Karavakis, E.

    2015-12-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. It has been evolved after the first period of LHC data taking (Run-1) in order to cope with the new challenges of the upcoming Run-2. In this paper we describe the evolution and recent developments of the ATLAS Grid Information System (AGIS), developed in order to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.
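
    An information system of this kind is typically consumed through a simple REST interface returning JSON descriptions of sites, queues and services. The sketch below shows what such a query could look like from Python; the endpoint URL and the returned field names are purely hypothetical placeholders and are not taken from the actual AGIS API.

      import requests

      # Hypothetical AGIS-like endpoint returning the list of compute sites as JSON.
      AGIS_URL = "https://agis.example.cern.ch/request/site/query/list/?json"

      sites = requests.get(AGIS_URL, timeout=10).json()

      # Pick out active Tier-2 sites and their declared core counts (field names are illustrative).
      for site in sites:
          if site.get("state") == "ACTIVE" and site.get("tier_level") == 2:
              print(site["name"], site.get("cores", "n/a"))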

  11. New directions in the CernVM file system

    NASA Astrophysics Data System (ADS)

    Blomer, Jakob; Buncic, Predrag; Ganis, Gerardo; Hardi, Nikola; Meusel, Rene; Popescu, Radu

    2017-10-01

    The CernVM File System today is commonly used to host and distribute application software stacks. In addition to this core task, recent developments expand the scope of the file system into new areas. Firstly, CernVM-FS emerges as a good match for container engines to distribute the container image contents. Compared to native container image distribution (e.g. through the “Docker registry”), CernVM-FS massively reduces the network traffic for image distribution. This has been shown, for instance, by a prototype integration of CernVM-FS into Mesos developed by Mesosphere, Inc. We present a path for a smooth integration of CernVM-FS and Docker. Secondly, CernVM-FS recently raised new interest as an option for the distribution of experiment conditions data. Here, the focus is on improved versioning capabilities of CernVM-FS that allow the conditions data of a run period to be linked to the state of a CernVM-FS repository. Lastly, CernVM-FS has been extended to provide a name space for physics data for the LIGO and CMS collaborations. Searching through a data namespace is often done by a central, experiment-specific database service. A name space on CernVM-FS can particularly benefit from an existing, scalable infrastructure and from the POSIX file system interface.

  12. The CMS Tier0 goes cloud and grid for LHC Run 2

    DOE PAGES

    Hufnagel, Dirk

    2015-12-23

    In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put a tremendous stress on our computing systems. Prompt Processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system utilizing both the CERN Cloud-based Agile Infrastructure and a significant fraction of the CMS Tier-1 Grid resources. In another big change for LHC Run 2, we will process all data using the multi-threaded framework to deal with the increased event complexity and to ensure efficient use of the resources. Furthermore, this contribution will cover the evolution of the Tier-0 infrastructure and present scale testing results and experiences from the first data taking in 2015.

  13. ATLAS EventIndex monitoring system using the Kibana analytics and visualization platform

    NASA Astrophysics Data System (ADS)

    Barberis, D.; Cárdenas Zárate, S. E.; Favareto, A.; Fernandez Casani, A.; Gallas, E. J.; Garcia Montoro, C.; Gonzalez de la Hoz, S.; Hrivnac, J.; Malon, D.; Prokoshin, F.; Salt, J.; Sanchez, J.; Toebbicke, R.; Yuan, R.; ATLAS Collaboration

    2016-10-01

    The ATLAS EventIndex is a data catalogue system that stores event-related metadata for all (real and simulated) ATLAS events, at all processing stages. As it consists of different components that depend on other applications (such as distributed storage and different sources of information), we need to monitor the conditions of many heterogeneous subsystems to make sure everything is working correctly. This paper describes how we gather information about the EventIndex components and related subsystems: the Producer-Consumer architecture for data collection, health parameters from the servers that run EventIndex components, EventIndex web interface status, and the Hadoop infrastructure that stores EventIndex data. This information is collected, processed, and then displayed using CERN service monitoring software based on the Kibana analytics and visualization package, provided by the CERN IT Department. EventIndex monitoring is used both by the EventIndex team and the ATLAS Distributed Computing shift crew.
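
    The Producer-Consumer pattern mentioned above decouples the probes that collect health parameters from the code that ships them to the monitoring backend. A generic Python sketch of the pattern is shown below; the probe names and the queue-based transport are illustrative assumptions, not the actual EventIndex implementation.

      import queue
      import threading
      import time

      metrics = queue.Queue()

      def producer(name, probe, period=30):
          """Periodically run a health probe and enqueue its result."""
          while True:
              metrics.put({"source": name, "value": probe(), "ts": time.time()})
              time.sleep(period)

      def consumer():
          """Drain the queue and forward documents to the monitoring backend (here: stdout)."""
          while True:
              doc = metrics.get()
              print("ship to monitoring:", doc)   # a real consumer would POST to Elasticsearch
              metrics.task_done()

      # Hypothetical probes standing in for web-interface and Hadoop health checks.
      probes = {"web_status": lambda: 200, "hadoop_free_tb": lambda: 42.0}

      for name, probe in probes.items():
          threading.Thread(target=producer, args=(name, probe), daemon=True).start()
      threading.Thread(target=consumer, daemon=True).start()

      time.sleep(5)   # let the sketch run briefly before the main thread exits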

  14. Tape SCSI monitoring and encryption at CERN

    NASA Astrophysics Data System (ADS)

    Laskaridis, Stefanos; Bahyl, V.; Cano, E.; Leduc, J.; Murray, S.; Cancio, G.; Kruse, D.

    2017-10-01

    CERN currently manages the largest data archive in the HEP domain; over 180PB of custodial data is archived across 7 enterprise tape libraries containing more than 25,000 tapes and using over 100 tape drives. Archival storage at this scale requires a leading edge monitoring infrastructure that acquires live and lifelong metrics from the hardware in order to assess and proactively identify potential drive and media level issues. In addition, protecting the privacy of sensitive archival data is becoming increasingly important and with it the need for a scalable, compute-efficient and cost-effective solution for data encryption. In this paper, we first describe the implementation of acquiring tape medium and drive related metrics reported by the SCSI interface and its integration with our monitoring system. We then address the incorporation of tape drive real-time encryption with dedicated drive hardware into the CASTOR [1] hierarchical mass storage system.

  15. The CMS Tier0 goes Cloud and Grid for LHC Run 2

    NASA Astrophysics Data System (ADS)

    Hufnagel, Dirk

    2015-12-01

    In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put a tremendous stress on our computing systems. Prompt Processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system utilizing both the CERN Cloud-based Agile Infrastructure and a significant fraction of the CMS Tier-1 Grid resources. In another big change for LHC Run 2, we will process all data using the multi-threaded framework to deal with the increased event complexity and to ensure efficient use of the resources. This contribution will cover the evolution of the Tier-0 infrastructure and present scale testing results and experiences from the first data taking in 2015.

  16. The Evolution of CERN EDMS

    NASA Astrophysics Data System (ADS)

    Wardzinska, Aleksandra; Petit, Stephan; Bray, Rachel; Delamare, Christophe; Garcia Arza, Griselda; Krastev, Tsvetelin; Pater, Krzysztof; Suwalska, Anna; Widegren, David

    2015-12-01

    Large-scale long-term projects such as the LHC require the ability to store, manage, organize and distribute large amounts of engineering information, covering a wide spectrum of fields. This information is living material, evolving in time and following specific lifecycles. It has to reach the next generations of engineers so they understand how their predecessors designed, crafted, operated and maintained the most complex machines ever built. This is the role of CERN EDMS. The Engineering and Equipment Data Management Service has served the High Energy Physics community for over 15 years. It is CERN's official PLM (Product Lifecycle Management) system, supporting engineering communities in their collaborations inside and outside the laboratory. EDMS is integrated with the CAD (Computer-Aided Design) and CMMS (Computerized Maintenance Management) systems used at CERN, providing tools for engineers who work in different domains and who are not PLM specialists. Over the years, human collaborations and machines grew in size and complexity. So did EDMS: it is currently home to more than 2 million files and documents, and has over 6 thousand active users. In April 2014 we released a new major version of EDMS, featuring a complete makeover of the web interface, improved responsiveness and enhanced functionality. Following the results of user surveys and building upon feedback received from key user groups, we have delivered what we think is a more attractive system that makes it easy to perform complex tasks. In this paper we will describe the main functions and the architecture of EDMS. We will discuss the available integration options, which enable further evolution and automation of engineering data management. We will also present our plans for the future development of EDMS.

  17. A world-wide databridge supported by a commercial cloud provider

    NASA Astrophysics Data System (ADS)

    Tat Cheung, Kwong; Field, Laurence; Furano, Fabrizio

    2017-10-01

    Volunteer computing has the potential to provide significant additional computing capacity for the LHC experiments. One of the challenges with exploiting volunteer computing is to support a global community of volunteers that provides heterogeneous resources. However, high energy physics applications require more data input and output than the CPU-intensive applications that are typically used by other volunteer computing projects. While the so-called databridge has already been successfully proposed as a method to span the untrusted and trusted domains of volunteer computing and Grid computing respectively, globally transferring data between potentially poor-performing residential networks and CERN could be unreliable, leading to wasted resource usage. The expectation is that by placing a storage endpoint that is part of a wider, flexible geographical databridge deployment closer to the volunteers, the transfer success rate and the overall performance can be improved. This contribution investigates the provision of a globally distributed databridge implemented upon a commercial cloud provider.

  18. Section Editors

    NASA Astrophysics Data System (ADS)

    Groep, D. L.; Bonacorsi, D.

    2014-06-01

    1. Data Acquisition, Trigger and Controls: Niko Neufeld (CERN, niko.neufeld@cern.ch); Tassos Belias (Demokritos, belias@inp.demokritos.gr); Andrew Norman (FNAL, anorman@fnal.gov); Vivian O'Dell (FNAL, odell@fnal.gov)
    2. Event Processing, Simulation and Analysis: Rolf Seuster (TRIUMF, seuster@cern.ch); Florian Uhlig (GSI, f.uhlig@gsi.de); Lorenzo Moneta (CERN, Lorenzo.Moneta@cern.ch); Pete Elmer (Princeton, peter.elmer@cern.ch)
    3. Distributed Processing and Data Handling: Nurcan Ozturk (U Texas Arlington, nurcan@uta.edu); Stefan Roiser (CERN, stefan.roiser@cern.ch); Robert Illingworth (FNAL); Davide Salomoni (INFN CNAF, Davide.Salomoni@cnaf.infn.it); Jeff Templon (Nikhef, templon@nikhef.nl)
    4. Data Stores, Data Bases, and Storage Systems: David Lange (LLNL, lange6@llnl.gov); Wahid Bhimji (U Edinburgh, wbhimji@staffmail.ed.ac.uk); Dario Barberis (Genova, Dario.Barberis@cern.ch); Patrick Fuhrmann (DESY, patrick.fuhrmann@desy.de); Igor Mandrichenko (FNAL, ivm@fnal.gov); Mark van de Sanden (SURF SARA, sanden@sara.nl)
    5. Software Engineering, Parallelism & Multi-Core: Solveig Albrand (LPSC/IN2P3, solveig.albrand@lpsc.in2p3.fr); Francesco Giacomini (INFN CNAF, francesco.giacomini@cnaf.infn.it); Liz Sexton (FNAL, sexton@fnal.gov); Benedikt Hegner (CERN, benedikt.hegner@cern.ch); Simon Patton (LBNL, SJPatton@lbl.gov); Jim Kowalkowski (FNAL, jbk@fnal.gov)
    6. Facilities, Infrastructures, Networking and Collaborative Tools: Maria Girone (CERN, Maria.Girone@cern.ch); Ian Collier (STFC RAL, ian.collier@stfc.ac.uk); Burt Holzman (FNAL, burt@fnal.gov); Brian Bockelman (U Nebraska, bbockelm@cse.unl.edu); Alessandro de Salvo (Roma 1, Alessandro.DeSalvo@ROMA1.INFN.IT); Helge Meinhard (CERN, Helge.Meinhard@cern.ch); Ray Pasetes (FNAL, rayp@fnal.gov); Steven Goldfarb (U Michigan, Steven.Goldfarb@cern.ch)

  19. PaaS for web applications with OpenShift Origin

    NASA Astrophysics Data System (ADS)

    Lossent, A.; Rodriguez Peon, A.; Wagner, A.

    2017-10-01

    The CERN Web Frameworks team has deployed OpenShift Origin to facilitate the deployment of web applications and to improve efficiency in terms of computing resource usage. OpenShift leverages Docker containers and Kubernetes orchestration to provide a Platform-as-a-Service solution oriented towards web applications. We will review use cases and how OpenShift was integrated with other services such as source control, web site management and authentication services.

  20. Montecarlo Simulations for a Lep Experiment with Unix Workstation Clusters

    NASA Astrophysics Data System (ADS)

    Bonesini, M.; Calegari, A.; Rossi, P.; Rossi, V.

    Modular systems of RISC-CPU-based computers have been implemented for large-scale production of Monte Carlo simulated events for the DELPHI experiment at CERN. From a pilot system based on DEC 5000 CPUs, a full-size system based on a CONVEX C3820 UNIX supercomputer and a cluster of HP 735 workstations has been put into operation as a joint effort between INFN Milano and CILEA.

  1. CERN and high energy physics, the grand picture

    ScienceCinema

    Heuer, Rolf-Dieter

    2018-05-24

    The lecture will touch on several topics, to illustrate the role of CERN in the present and future of high-energy physics: how does CERN work? What is the role of the scientific community, of bodies like Council and SPC, and of international cooperation, in the definition of CERN's scientific programme? What are the plans for the future of the LHC and of the non-LHC physics programme? What is the role of R&D and technology transfer at CERN?

  2. Dissemination of CERN's Technology Transfer: Added Value from Regional Transfer Agents

    ERIC Educational Resources Information Center

    Hofer, Franz

    2005-01-01

    Technologies developed at CERN, the European Organization for Nuclear Research, are disseminated via a network of external technology transfer officers. Each of CERN's 20 member states has appointed at least one technology transfer officer to help establish links with CERN. This network has been in place since 2001 and early experiences indicate…

  3. 8th International Symposium on Quantum Theory and Symmetries (QTS8)

    NASA Astrophysics Data System (ADS)

    Bijker, Roelof; Krötzsch, Guillermo; Rosas-Ortiz, Óscar; Wolf, Kurt Bernardo

    2014-05-01

    The Quantum Theory and Symmetries (QTS) international symposia are periodic biennial meetings of the mathematical physics community with special interest in the methods of group theory in their many incarnations, particularly in the symmetries that arise in quantum systems. The QTSs have alternated with the International Colloquia on Group Theoretical Methods in Physics since 1999, when Professor Heinz-Dietrich Doebner organized the first one in Goslar, Germany. Subsequent symposia were held in Kraków, Poland (2001), Cincinnati, USA (2003), Varna, Bulgaria (2005), Valladolid, Spain (2007), Lexington, USA (2009), and Praha, Czech Republic (2011); the eighth QTS was awarded to Mexico (2013), and the next one (2015) will take place in Yerevan, Armenia. Further details, including committees and members, are available in the PDF.

  4. ATLAS computing on Swiss Cloud SWITCHengines

    NASA Astrophysics Data System (ADS)

    Haug, S.; Sciacca, F. G.; ATLAS Collaboration

    2017-10-01

    Consolidation towards more computing at flat budgets, beyond what pure chip technology can offer, is a requirement for the full scientific exploitation of the future data from the Large Hadron Collider at CERN in Geneva. One consolidation measure is to exploit cloud infrastructures whenever they are financially competitive. We report on the technical solutions used and the performance achieved running simulation tasks for the ATLAS experiment on SWITCHengines. SWITCHengines is a new infrastructure-as-a-service offering to Swiss academia by the National Research and Education Network SWITCH. While the solutions and performance figures are general, the financial considerations and policies, on which we also report, are country specific.

  5. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.

  6. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulations.

  7. Radiation Hard Sensors for Surveillance.

    DTIC Science & Technology

    1988-03-11

    track position measurements were noted. E. Heijne (CERN) reported on the degradation of silicon detectors for doses larger than 2x10^11 muons/cm^2... Workshop on Transmission and Emission Computerized Tomography, July 1978, Seoul, Korea. Nahmias C., Kenyon D.B., Garnett E.S.: Optimization of... crystal size in emission computed tomography. IEEE Trans Nucl Sci NS-27: 529-532, 1980. Mullani N.A., Ficke D.C., Ter-Pogossian M.M.: Cesium Fluoride

  8. Helix Nebula: Enabling federation of existing data infrastructures and data services to an overarching cross-domain e-infrastructure

    NASA Astrophysics Data System (ADS)

    Lengert, Wolfgang; Farres, Jordi; Lanari, Riccardo; Casu, Francesco; Manunta, Michele; Lassalle-Balier, Gerard

    2014-05-01

    Helix Nebula has established a growing public-private partnership of more than 30 commercial cloud providers, SMEs, and publicly funded research organisations and e-infrastructures. The Helix Nebula strategy is to establish a federated cloud service across Europe. Three high-profile flagships, sponsored by CERN (high energy physics), EMBL (life sciences) and ESA/DLR/CNES/CNR (earth science), have been deployed and extensively tested within this federated environment. The commitments behind these initial flagships have created a critical mass that attracts suppliers and users to the initiative, to work together towards an "Information as a Service" market place. Significant progress has been achieved in implementing the following four programmatic goals (as outlined in the Strategic Plan, Ref. 1):
    Goal #1: Establish a Cloud Computing Infrastructure for the European Research Area (ERA) serving as a platform for innovation and evolution of the overall infrastructure.
    Goal #2: Identify and adopt suitable policies for trust, security and privacy on a European level that can be provided by the European Cloud Computing framework and infrastructure.
    Goal #3: Create a light-weight governance structure for the future European Cloud Computing Infrastructure that involves all the stakeholders and can evolve over time as the infrastructure, services and user base grow.
    Goal #4: Define a funding scheme involving the three stakeholder groups (service suppliers, users, EC and national funding agencies) in a Public-Private-Partnership model to implement a Cloud Computing Infrastructure that delivers a sustainable business environment adhering to European-level policies.
    Now in 2014 a first version of this generic cross-domain e-infrastructure is ready to go into operations, building on a federation of European industry and contributors (data, tools, knowledge, ...). This presentation describes how Helix Nebula is being used in the domain of earth science, focusing on geohazards. The so-called "Supersite Exploitation Platform" (SSEP) provides scientists with an overarching federated e-infrastructure with very fast access to (i) large volumes of data (EO/non-space data), (ii) computing resources (e.g. hybrid cloud/grid), (iii) processing software (e.g. toolboxes, RTMs, retrieval baselines, visualization routines), and (iv) general platform capabilities (e.g. user management and access control, accounting, information portal, collaborative tools, social networks etc.). In this federation each data provider remains in full control of the implementation of its data policy. This presentation outlines the architecture (technical and services) supporting very heterogeneous science domains as well as the procedures for newcomers to join the Helix Nebula Market Place. Ref. 1: http://cds.cern.ch/record/1374172/files/CERN-OPEN-2011-036.pdf

  9. The LHC timeline: a personal recollection (1980-2012)

    NASA Astrophysics Data System (ADS)

    Maiani, Luciano; Bonolis, Luisa

    2017-12-01

    The objective of this interview is to study the history of the Large Hadron Collider in the LEP tunnel at CERN, from first ideas to the discovery of the Brout-Englert-Higgs boson, seen from the point of view of a member of CERN scientific committees, of the CERN Council and a former Director General of CERN in the years of machine construction.

  10. WLCG and IPv6 - The HEPiX IPv6 working group

    DOE PAGES

    Campana, S.; Chadwick, K.; Chen, G.; ...

    2014-06-11

    The HEPiX (http://www.hepix.org) IPv6 Working Group has been investigating the many issues which feed into the decision on the timetable for the use of IPv6 (http://www.ietf.org/rfc/rfc2460.txt) networking protocols in High Energy Physics (HEP) Computing, in particular in the Worldwide Large Hadron Collider (LHC) Computing Grid (WLCG). RIPE NCC, the European Regional Internet Registry (RIR), ran out of IPv4 addresses in September 2012. The North and South America RIRs are expected to run out soon. In recent months it has become more clear that some WLCG sites, including CERN, are running short of IPv4 address space, now without the possibility of applying for more. This has increased the urgency for the switch-on of dual-stack IPv4/IPv6 on all outward facing WLCG services to allow for the eventual support of IPv6-only clients. The activities of the group include the analysis and testing of the readiness for IPv6 and the performance of many required components, including the applications, middleware, management and monitoring tools essential for HEP computing. Many WLCG Tier 1/2 sites are participants in the group's distributed IPv6 testbed and the major LHC experiment collaborations are engaged in the testing. We are constructing a group web/wiki which will contain useful information on the IPv6 readiness of the various software components and a knowledge base (http://hepix-ipv6.web.cern.ch/knowledge-base). Furthermore, this paper describes the work done by the working group and its future plans.

  11. WLCG and IPv6 - the HEPiX IPv6 working group

    NASA Astrophysics Data System (ADS)

    Campana, S.; Chadwick, K.; Chen, G.; Chudoba, J.; Clarke, P.; Eliáš, M.; Elwell, A.; Fayer, S.; Finnern, T.; Goossens, L.; Grigoras, C.; Hoeft, B.; Kelsey, D. P.; Kouba, T.; López Muñoz, F.; Martelli, E.; Mitchell, M.; Nairz, A.; Ohrenberg, K.; Pfeiffer, A.; Prelz, F.; Qi, F.; Rand, D.; Reale, M.; Rozsa, S.; Sciaba, A.; Voicu, R.; Walker, C. J.; Wildish, T.

    2014-06-01

    The HEPiX (http://www.hepix.org) IPv6 Working Group has been investigating the many issues which feed into the decision on the timetable for the use of IPv6 (http://www.ietf.org/rfc/rfc2460.txt) networking protocols in High Energy Physics (HEP) Computing, in particular in the Worldwide Large Hadron Collider (LHC) Computing Grid (WLCG). RIPE NCC, the European Regional Internet Registry (RIR), ran out of IPv4 addresses in September 2012. The North and South America RIRs are expected to run out soon. In recent months it has become more clear that some WLCG sites, including CERN, are running short of IPv4 address space, now without the possibility of applying for more. This has increased the urgency for the switch-on of dual-stack IPv4/IPv6 on all outward facing WLCG services to allow for the eventual support of IPv6-only clients. The activities of the group include the analysis and testing of the readiness for IPv6 and the performance of many required components, including the applications, middleware, management and monitoring tools essential for HEP computing. Many WLCG Tier 1/2 sites are participants in the group's distributed IPv6 testbed and the major LHC experiment collaborations are engaged in the testing. We are constructing a group web/wiki which will contain useful information on the IPv6 readiness of the various software components and a knowledge base (http://hepix-ipv6.web.cern.ch/knowledge-base). This paper describes the work done by the working group and its future plans.
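
    As an illustration of the dual-stack client behaviour discussed in these two records, the following sketch resolves a service name with getaddrinfo and tries IPv6 addresses before falling back to IPv4. The host name in the usage comment is a placeholder, not an actual WLCG endpoint, and the preference logic is a simplified stand-in for what real clients do.

        import socket

        def connect_dual_stack(host, port, timeout=10.0):
            """Connect to a dual-stack service, preferring IPv6 and falling back to IPv4."""
            # getaddrinfo returns both AAAA and A results for a dual-stack host.
            addrinfos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
            # Try IPv6 addresses first, then IPv4; an IPv6-only client would only see AAAA.
            addrinfos.sort(key=lambda ai: 0 if ai[0] == socket.AF_INET6 else 1)
            last_error = None
            for family, socktype, proto, _, sockaddr in addrinfos:
                sock = socket.socket(family, socktype, proto)
                sock.settimeout(timeout)
                try:
                    sock.connect(sockaddr)
                    return sock
                except OSError as exc:
                    sock.close()
                    last_error = exc
            raise OSError(f"could not connect to {host}:{port}: {last_error}")

        # Example (placeholder host): sock = connect_dual_stack("example.org", 443)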

  12. QM2017: Status and Key open Questions in Ultra-Relativistic Heavy-Ion Physics

    NASA Astrophysics Data System (ADS)

    Schukraft, Jurgen

    2017-11-01

    Almost exactly 3 decades ago, in the fall of 1986, the era of experimental ultra-relativistic (E/m ≫ 1) heavy-ion physics started simultaneously at the SPS at CERN and the AGS at Brookhaven with first beams of light oxygen ions at fixed-target energies of 200 GeV/A and 14.6 GeV/A, respectively. The event was announced by CERN [CERN's subatomic particle accelerators: Set up world-record in energy and break new ground for physics (CERN-PR-86-11-EN) (1986) 4 p, issued on 29 September 1986. URL: http://cds.cern.ch/record/855571].

  13. Update on CERN Search based on SharePoint 2013

    NASA Astrophysics Data System (ADS)

    Alvarez, E.; Fernandez, S.; Lossent, A.; Posada, I.; Silva, B.; Wagner, A.

    2017-10-01

    CERN’s enterprise Search solution “CERN Search” provides a central search solution for users and CERN service providers. A total of about 20 million public and protected documents from a wide range of document collections is indexed, including Indico, TWiki, Drupal, SharePoint, JACOW, E-group archives, EDMS, and CERN Web pages. In spring 2015, CERN Search was migrated to a new infrastructure based on SharePoint 2013. In the context of this upgrade, the document pre-processing and indexing process was redesigned and generalised. The new data feeding framework allows us to profit from new functionality and facilitates the long-term maintenance of the system.

  14. Experience from the 1st Year running a Massive High Quality Videoconferencing Service for the LHC

    NASA Astrophysics Data System (ADS)

    Fernandes, Joao; Baron, Thomas; Bompastor, Bruno

    2014-06-01

    In the last few years, we have witnessed an explosion of visual collaboration initiatives in the industry. Several advances in video services and also in their underlying infrastructure are currently improving the way people collaborate globally. These advances are creating new usage paradigms: any device in any network can be used to collaborate, in most cases with an overall high quality. To keep pace with this technological progression, the CERN IT Department launched a service based on the Vidyo product. This new service architecture introduces Adaptive Video Layering, which dynamically optimizes the video for each endpoint by leveraging the H.264 Scalable Video Coding (SVC)-based compression technology. It combines intelligent AV routing techniques with the flexibility of H.264 SVC video compression, in order to achieve resilient video collaboration over the Internet, 3G and WiFi. We present an overview of the results that have been achieved after this major change. In particular, the first year of operation of the CERN Vidyo service will be described in terms of performance and scale: the service became part of the daily activity of the LHC collaborations, reaching a monthly usage of more than 3200 meetings with a peak of 750 simultaneous connections. We also present some key features such as the integration with CERN Indico. LHC users can now join a Vidyo meeting either from their personal computer or a CERN videoconference room simply from an Indico event page, with the ease of a single click. The roadmap for future improvements, service extensions and core infrastructure tendencies such as cloud based services and virtualization of system components will also be discussed. Vidyo's strengths allowed us to build a universal service (it is accessible from PCs, but also videoconference rooms, traditional phones, tablets and smartphones), developed with 3 key ideas in mind: ease of use, full integration and high quality.

  15. Current experiments in elementary particle physics. Revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galic, H.; Armstrong, F.E.; von Przewoski, B.

    1994-08-01

    This report contains summaries of 568 current and recent experiments in elementary particle physics. Experiments that finished taking data before 1988 are excluded. Included are experiments at BEPC (Beijing), BNL, CEBAF, CERN, CESR, DESY, FNAL, INS (Tokyo), ITEP (Moscow), IUCF (Bloomington), KEK, LAMPF, Novosibirsk, PNPI (St. Petersburg), PSI, Saclay, Serpukhov, SLAC, and TRIUMF, and also several underground and underwater experiments. Instructions are given for remote searching of the computer database (maintained under the SLAC/SPIRES system) that contains the summaries.

  16. Analysis of CERN computing infrastructure and monitoring data

    NASA Astrophysics Data System (ADS)

    Nieke, C.; Lassnig, M.; Menichetti, L.; Motesnitsalis, E.; Duellmann, D.

    2015-12-01

    Optimizing a computing infrastructure on the scale of LHC requires a quantitative understanding of a complex network of many different resources and services. For this purpose the CERN IT department and the LHC experiments are collecting a large multitude of logs and performance probes, which are already successfully used for short-term analysis (e.g. operational dashboards) within each group. The IT analytics working group has been created with the goal to bring data sources from different services and on different abstraction levels together and to implement a suitable infrastructure for mid- to long-term statistical analysis. It further provides a forum for joint optimization across single service boundaries and the exchange of analysis methods and tools. To simplify access to the collected data, we implemented an automated repository for cleaned and aggregated data sources based on the Hadoop ecosystem. This contribution describes some of the challenges encountered, such as dealing with heterogeneous data formats, selecting an efficient storage format for map reduce and external access, and will describe the repository user interface. Using this infrastructure we were able to quantitatively analyze the relationship between CPU/wall fraction, latency/throughput constraints of network and disk and the effective job throughput. In this contribution we will first describe the design of the shared analysis infrastructure and then present a summary of first analysis results from the combined data sources.
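
    A minimal sketch of the kind of aggregation described above, assuming the heterogeneous monitoring records have already been cleaned into a common schema. The field names such as cpu_time and wall_time and the example values are illustrative only and are not taken from the actual CERN repository.

        from collections import defaultdict

        # Illustrative, pre-cleaned monitoring records (one dict per finished job).
        records = [
            {"service": "batch", "cpu_time": 3600.0, "wall_time": 5400.0},
            {"service": "batch", "cpu_time": 1800.0, "wall_time": 2000.0},
            {"service": "transfer", "cpu_time": 120.0, "wall_time": 900.0},
        ]

        def cpu_wall_fraction(recs):
            """Aggregate the CPU/wall-time fraction per service, map-reduce style."""
            totals = defaultdict(lambda: [0.0, 0.0])  # service -> [sum cpu, sum wall]
            for r in recs:  # "map" plus combine step
                totals[r["service"]][0] += r["cpu_time"]
                totals[r["service"]][1] += r["wall_time"]
            # "reduce" step: final ratio per service
            return {svc: cpu / wall for svc, (cpu, wall) in totals.items() if wall > 0}

        print(cpu_wall_fraction(records))  # -> batch ~ 0.73, transfer ~ 0.13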

  17. Numerical simulation of electromagnetic fields and impedance of CERN LINAC4 H(-) source taking into account the effect of the plasma.

    PubMed

    Grudiev, A; Lettry, J; Mattei, S; Paoluzzi, M; Scrivens, R

    2014-02-01

    Numerical simulation of the CERN LINAC4 H(-) source 2 MHz RF system has been performed taking into account a realistic geometry from a 3D Computer Aided Design model using a commercial FEM high-frequency simulation code. The effect of the plasma has been added to the model by the approximation of a homogeneous electrically conducting medium. Electric and magnetic fields, RF power losses, and the impedance of the circuit have been calculated for different values of the plasma conductivity. Three different regimes have been found depending on the plasma conductivity: (1) zero or low plasma conductivity results in the RF electric field induced by the RF antenna being mainly capacitive and axially directed; (2) intermediate conductivity results in the expulsion of the capacitive electric field from the plasma, and the RF power coupling, which increases linearly with the plasma conductivity, is mainly dominated by the inductive azimuthal electric field; (3) high conductivity results in the shielding of both the electric and magnetic fields from the plasma due to the skin effect, which reduces the RF power coupling to the plasma. From these simulations and measurements of the RF power coupling on the CERN source, a value of the plasma conductivity has been derived. It agrees well with an analytical estimate calculated from the measured plasma parameters. In addition, the simulated and measured impedances with and without plasma show very good agreement as well, demonstrating the validity of the plasma model used in the RF simulations.
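
    For reference, the high-conductivity regime described above is governed by the classical skin effect; the standard textbook expression for the skin depth (not quoted in the abstract itself, and not necessarily the exact form used by the authors) is

        \delta = \sqrt{\frac{2}{\mu_0\,\sigma\,\omega}}, \qquad \omega = 2\pi f \approx 2\pi \times 2\ \mathrm{MHz},

    so that once the skin depth becomes small compared with the plasma radius, both the RF electric and magnetic fields are screened and the power coupling drops, consistent with regime (3).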

  18. Big Bang Day: The Making of CERN (Episode 1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2009-10-06

    A two-part history of the CERN project. Quentin Cooper explores the fifty-year history of CERN, the European particle physics laboratory in Switzerland. The institution was created to bring scientists together after WW2…

  19. Big Bang Day: The Making of CERN (Episode 1)

    ScienceCinema

    None

    2017-12-09

    A two-part history of the CERN project. Quentin Cooper explores the fifty-year history of CERN, the European particle physics laboratory in Switzerland. The institution was created to bring scientists together after WW2…

  20. ROOT — A C++ framework for petabyte data storage, statistical analysis and visualization

    NASA Astrophysics Data System (ADS)

    Antcheva, I.; Ballintijn, M.; Bellenot, B.; Biskup, M.; Brun, R.; Buncic, N.; Canal, Ph.; Casadei, D.; Couet, O.; Fine, V.; Franco, L.; Ganis, G.; Gheata, A.; Maline, D. Gonzalez; Goto, M.; Iwaszkiewicz, J.; Kreshuk, A.; Segura, D. Marcos; Maunder, R.; Moneta, L.; Naumann, A.; Offermann, E.; Onuchin, V.; Panacek, S.; Rademakers, F.; Russo, P.; Tadel, M.

    2011-06-01

    A new stable version ("production version") v5.28.00 of ROOT [1] has been published [2]. It features several major improvements in many areas, most noteworthy data storage performance as well as statistics and graphics features. Some of these improvements have already been predicted in the original publication Antcheva et al. (2009) [3]. This version will be maintained for at least 6 months; new minor revisions ("patch releases") will be published [4] to solve problems reported with this version. New version program summaryProgram title: ROOT Catalogue identifier: AEFA_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFA_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Lesser Public License v.2.1 No. of lines in distributed program, including test data, etc.: 2 934 693 No. of bytes in distributed program, including test data, etc.: 1009 Distribution format: tar.gz Programming language: C++ Computer: Intel i386, Intel x86-64, Motorola PPC, Sun Sparc, HP PA-RISC Operating system: GNU/Linux, Windows XP/Vista/7, Mac OS X, FreeBSD, OpenBSD, Solaris, HP-UX, AIX Has the code been vectorized or parallelized?: Yes RAM: > 55 Mbytes Classification: 4, 9, 11.9, 14 Catalogue identifier of previous version: AEFA_v1_0 Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 2499 Does the new version supersede the previous version?: Yes Nature of problem: Storage, analysis and visualization of scientific data Solution method: Object store, wide range of analysis algorithms and visualization methods Reasons for new version: Added features and corrections of deficiencies Summary of revisions: The release notes at http://root.cern.ch/root/v528/Version528.news.html give a module-oriented overview of the changes in v5.28.00. Highlights include File format Reading of TTrees has been improved dramatically with respect to CPU time (30%) and notably with respect to disk space. Histograms A new TEfficiency class has been provided to handle the calculation of efficiencies and their uncertainties, TH2Poly for polygon-shaped bins (e.g. maps), TKDE for kernel density estimation, and TSVDUnfold for singular value decomposition. Graphics Kerning is now supported in TLatex, PostScript and PDF; a table of contents can be added to PDF files. A new font provides italic symbols. A TPad containing GL can be stored in a binary (i.e. non-vector) image file; add support for full-scene anti-aliasing. Usability enhancements to EVE. Math New interfaces for generating random number according to a given distribution, goodness of fit tests of unbinned data, binning multidimensional data, and several advanced statistical functions were added. RooFit Introduction of HistFactory; major additions to RooStats. TMVA Updated to version 4.1.0, adding e.g. the support for simultaneous classification of multiple output classes for several multivariate methods. PROOF Many new features, adding to PROOF's usability, plus improvements and fixes. PyROOT Support of Python 3 has been added. Tutorials Several new tutorials were provided for above new features (notably RooStats). A detailed list of all the changes is available at http://root.cern.ch/root/htmldoc/examples/V5. Additional comments: For an up-to-date author list see: http://root.cern.ch/drupal/content/root-development-team and http://root.cern.ch/drupal/content/former-root-developers. 
The distribution file for this program is over 30 Mbytes and therefore is not delivered directly when download or E-mail is requested. Instead an HTML file giving details of how the program can be obtained is sent. Running time: Depending on the data size and complexity of analysis algorithms. References: http://root.cern.ch. http://root.cern.ch/drupal/content/production-version-528. I. Antcheva, M. Ballintijn, B. Bellenot, M. Biskup, R. Brun, N. Buncic, Ph. Canal, D. Casadei, O. Couet, V. Fine, L. Franco, G. Ganis, A. Gheata, D. Gonzalez Maline, M. Goto, J. Iwaszkiewicz, A. Kreshuk, D. Marcos Segura, R. Maunder, L. Moneta, A. Naumann, E. Offermann, V. Onuchin, S. Panacek, F. Rademakers, P. Russo, M. Tadel, ROOT — A C++ framework for petabyte data storage, statistical analysis and visualization, Comput. Phys. Commun. 180 (2009) 2499. http://root.cern.ch/drupal/content/root-version-v5-28-00-patch-release-notes.
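
    As a minimal usage sketch of two features highlighted in this release note (histogram filling and the new TEfficiency class), the following PyROOT fragment assumes a working ROOT installation with Python bindings; it is illustrative only and is not part of the release notes themselves.

        import ROOT

        ROOT.gROOT.SetBatch(True)  # no GUI windows, just write the output file

        # Fill a simple histogram with Gaussian random numbers.
        h = ROOT.TH1F("h", "Gaussian sample;x;entries", 100, -5.0, 5.0)
        rng = ROOT.TRandom3(0)
        for _ in range(10000):
            h.Fill(rng.Gaus(0.0, 1.0))

        # TEfficiency (new in v5.28) tracks passed/total counts and their uncertainties.
        eff = ROOT.TEfficiency("eff", "toy efficiency;x;#epsilon", 20, -5.0, 5.0)
        for _ in range(10000):
            x = rng.Gaus(0.0, 1.0)
            eff.Fill(abs(x) < 1.0, x)  # "passed" if |x| < 1

        c = ROOT.TCanvas("c", "c", 800, 400)
        c.Divide(2, 1)
        c.cd(1); h.Draw()
        c.cd(2); eff.Draw("AP")
        c.SaveAs("root_sketch.png")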

  1. CERN welcomes new members

    NASA Astrophysics Data System (ADS)

    2017-08-01

    Lithuania is on course to become an associate member of CERN, pending final approval by the Lithuanian parliament. Associate membership will allow representatives of the Baltic nation to take part in meetings of the CERN Council, which oversees the Geneva-based physics lab.

  2. Agile Infrastructure Monitoring

    NASA Astrophysics Data System (ADS)

    Andrade, P.; Ascenso, J.; Fedorko, I.; Fiorini, B.; Paladin, M.; Pigueiras, L.; Santos, M.

    2014-06-01

    At the present time, data centres are facing a massive rise in virtualisation and cloud computing. The Agile Infrastructure (AI) project is working to deliver new solutions to ease the management of CERN data centres. Part of the solution consists of a new "shared monitoring architecture" which collects and manages monitoring data from all data centre resources. In this article, we present the building blocks of this new monitoring architecture, the different open source technologies selected for each architecture layer, and how we are building a community around this common effort.

  3. Technological Forum

    ScienceCinema

    None

    2018-05-18

    Part 1: Mr. Thievent from the Swiss Association of Standardization and Mr. Alleyn, head of technical education at CERN, speak, followed by a discussion (the questions are not audible; whistling...). Part 2: Report from Zürrer, president of the European Committee for Standardization, followed by a discussion. Part 3: Working groups (round table) with three presenters: Hekimi, General Secretary of the European Computer Manufacturers Association; Corthesy, head of the Office of Standardization of Lausanne; and Reymond, head of the Office of Standardization EBC Secheron at Geneva; followed by a discussion.

  4. Current Experiments in Particle Physics. 1996 Edition.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galic, Hrvoje

    2003-06-27

    This report contains summaries of current and recent experiments in Particle Physics. Included are experiments at BEPC (Beijing), BNL, CEBAF, CERN, CESR, DESY, FNAL, Frascati, ITEP (Moscow), JINR (Dubna), KEK, LAMPF, Novosibirsk, PNPI (St. Petersburg), PSI, Saclay, Serpukhov, SLAC, and TRIUMF, and also several proton decay and solar neutrino experiments. Excluded are experiments that finished taking data before 1991. Instructions are given for the World Wide Web (WWW) searching of the computer database (maintained under the SLAC-SPIRES system) that contains the summaries.

  5. Current experiments in elementary particle physics. Revised

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galic, H.; Wohl, C.G.; Armstrong, B.

    This report contains summaries of 584 current and recent experiments in elementary particle physics. Experiments that finished taking data before 1986 are excluded. Included are experiments at Brookhaven, CERN, CESR, DESY, Fermilab, Tokyo Institute of Nuclear Studies, Moscow Institute of Theoretical and Experimental Physics, KEK, LAMPF, Novosibirsk, Paul Scherrer Institut (PSI), Saclay, Serpukhov, SLAC, SSCL, and TRIUMF, and also several underground and underwater experiments. Instructions are given for remote searching of the computer database (maintained under the SLAC/SPIRES system) that contains the summaries.

  6. Experience of public procurement of Open Compute servers

    NASA Astrophysics Data System (ADS)

    Bärring, Olof; Guerri, Marco; Bonfillou, Eric; Valsan, Liviu; Grigore, Alexandru; Dore, Vincent; Gentit, Alain; Clement, Benoît; Grossir, Anthony

    2015-12-01

    The Open Compute Project (OCP, http://www.opencompute.org/) was launched by Facebook in 2011 with the objective of building efficient computing infrastructures at the lowest possible cost. The technologies are released as open hardware, with the goal to develop servers and data centres following the model traditionally associated with open source software projects. In 2013 CERN acquired a few OCP servers in order to compare performance and power consumption with standard hardware. The conclusions were that there are sufficient savings to motivate an attempt to procure a large scale installation. One objective is to evaluate if the OCP market is sufficiently mature and broad enough to meet the constraints of a public procurement. This paper summarizes this procurement, which started in September 2014 and involved a Request for Information (RFI) to qualify bidders and a Request for Tender (RFT).

  7. The Integration of CloudStack and OCCI/OpenNebula with DIRAC

    NASA Astrophysics Data System (ADS)

    Méndez Muñoz, Víctor; Fernández Albor, Víctor; Graciani Diaz, Ricardo; Casajús Ramo, Adriàn; Fernández Pena, Tomás; Merino Arévalo, Gonzalo; José Saborido Silva, Juan

    2012-12-01

    The increasing availability of Cloud resources is arising as a realistic alternative to the Grid as a paradigm for enabling scientific communities to access large distributed computing resources. The DIRAC framework for distributed computing is an easy way to efficiently access resources from both systems. This paper explains the integration of DIRAC with two open-source Cloud Managers: OpenNebula (taking advantage of the OCCI standard) and CloudStack. These are computing tools to manage the complexity and heterogeneity of distributed data center infrastructures, allowing virtual clusters to be created on demand, including public, private and hybrid clouds. This approach required developing an extension to the previous DIRAC Virtual Machine engine, which was written for Amazon EC2, to allow connection with these new cloud managers. In the OpenNebula case, the development has been based on the CernVM Virtual Software Appliance with appropriate contextualization, while in the case of CloudStack, the infrastructure has been kept more general, which permits other Virtual Machine sources and operating systems to be used. In both cases, the CernVM File System has been used to facilitate software distribution to the computing nodes. With the resulting infrastructure, the cloud resources are transparent to the users through a friendly interface, like the DIRAC Web Portal. The main purpose of this integration is to get a system that can manage cloud and grid resources at the same time. This particular feature pushes DIRAC to a new conceptual denomination as interware, integrating different middleware. Users from different communities do not need to care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user. This paper presents an analysis of the overhead of the virtual layer, with tests comparing the proposed approach with the existing Grid solution. License Notice: Published under licence in Journal of Physics: Conference Series by IOP Publishing Ltd.

  8. Sharing scientific discovery globally: toward a CERN virtual visit service

    NASA Astrophysics Data System (ADS)

    Goldfarb, S.; Hatzifotiadou, D.; Lapka, M.; Papanestis, A.

    2017-10-01

    The installation of virtual visit services by the LHC collaborations began shortly after the first high-energy collisions were provided by the CERN accelerator in 2010. The experiments: ATLAS [1], CMS [2], LHCb [3], and ALICE [4] have all joined in this popular and effective method to bring the excitement of scientific exploration and discovery into classrooms and other public venues around the world. Their programmes, which use a combination of video conference, webcast, and video recording to communicate with remote audiences have already reached tens of thousands of viewers, and the demand only continues to grow. Other venues, such as the CERN Control Centre, are also considering similar permanent installations. We present a summary of the development of the various systems in use around CERN today, including the technology deployed and a variety of use cases. We then lay down the arguments for the creation of a CERN-wide service that would support these programmes in a more coherent and effective manner. Potential services include a central booking system and operational management similar to what is currently provided for the common CERN video conference facilities. Certain choices in technology could be made to support programmes based on popular tools including (but not limited to) Skype™ [5], Google Hangouts [6], Facebook Live [7], and Periscope [8]. Successful implementation of the project, which relies on close partnership between the experiments, CERN IT CDA [9], and CERN IR ECO [10], has the potential to reach an even larger, global audience, more effectively than ever before.

  9. MCdevelop - a universal framework for Stochastic Simulations

    NASA Astrophysics Data System (ADS)

    Slawinska, M.; Jadach, S.

    2011-03-01

    We present MCdevelop, a universal computer framework for developing and exploiting the wide class of Stochastic Simulations (SS) software. This powerful universal SS software development tool has been derived from a series of scientific projects for precision calculations in high energy physics (HEP), which feature a wide range of functionality in the SS software needed for advanced precision Quantum Field Theory calculations for the past LEP experiments and for the ongoing LHC experiments at CERN, Geneva. MCdevelop is a "spin-off" product of HEP to be exploited in other areas, while it will still serve to develop new SS software for HEP experiments. Typically SS involve independent generation of large sets of random "events", often requiring considerable CPU power. Since SS jobs usually do not share memory it makes them easy to parallelize. The efficient development, testing and running in parallel SS software requires a convenient framework to develop software source code, deploy and monitor batch jobs, merge and analyse results from multiple parallel jobs, even before the production runs are terminated. Throughout the years of development of stochastic simulations for HEP, a sophisticated framework featuring all the above mentioned functionality has been implemented. MCdevelop represents its latest version, written mostly in C++ (GNU compiler gcc). It uses Autotools to build binaries (optionally managed within the KDevelop 3.5.3 Integrated Development Environment (IDE)). It uses the open-source ROOT package for histogramming, graphics and the mechanism of persistency for the C++ objects. MCdevelop helps to run multiple parallel jobs on any computer cluster with NQS-type batch system. Program summaryProgram title:MCdevelop Catalogue identifier: AEHW_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 48 136 No. of bytes in distributed program, including test data, etc.: 355 698 Distribution format: tar.gz Programming language: ANSI C++ Computer: Any computer system or cluster with C++ compiler and UNIX-like operating system. Operating system: Most UNIX systems, Linux. The application programs were thoroughly tested under Ubuntu 7.04, 8.04 and CERN Scientific Linux 5. Has the code been vectorised or parallelised?: Tools (scripts) for optional parallelisation on a PC farm are included. RAM: 500 bytes Classification: 11.3 External routines: ROOT package version 5.0 or higher ( http://root.cern.ch/drupal/). Nature of problem: Developing any type of stochastic simulation program for high energy physics and other areas. Solution method: Object Oriented programming in C++ with added persistency mechanism, batch scripts for running on PC farms and Autotools.

  10. Learning with the ATLAS Experiment at CERN

    ERIC Educational Resources Information Center

    Barnett, R. M.; Johansson, K. E.; Kourkoumelis, C.; Long, L.; Pequenao, J.; Reimers, C.; Watkins, P.

    2012-01-01

    With the start of the LHC, the new particle collider at CERN, the ATLAS experiment is also providing high-energy particle collisions for educational purposes. Several education projects--education scenarios--have been developed and tested on students and teachers in several European countries within the Learning with ATLAS@CERN project. These…

  11. First experience with the new .cern Top Level Domain

    NASA Astrophysics Data System (ADS)

    Alvarez, E.; Malo de Molina, M.; Salwerowicz, M.; Silva De Sousa, B.; Smith, T.; Wagner, A.

    2017-10-01

    In October 2015, CERN’s core website was moved to a new address, http://home.cern, marking the launch of the brand new top-level domain .cern. In combination with a formal governance and registration policy, the IT infrastructure needed to be extended to accommodate the hosting of web sites in this new top-level domain. We will present the technical implementation in the framework of the CERN Web Services, which provides virtual hosting, a reverse proxy solution, and the provisioning of SSL server certificates for secure communications.

  12. The interview with a patient on dialysis: feeling, emotions and fears.

    PubMed

    Brunori, Francesco; Dozio, Beatrice; Colzani, Sara; Pozzi, Marco; Pisano, Lucia; Galassi, Andrea; Santorelli, Gennaro; Auricchio, Sara; Busnelli, Luisa; Di Carlo, Angela; Viganò, Monica; Calabrese, Valentina; Mariani, Laura; Mossa, Monica; Longoni, Stefania; Scanziani, Renzo

    2016-01-01

    This study was performed in the Nephrology and Dialysis Unit of Desio Hospital, Italy. The aim of this study is to evaluate, starting from research questions, what information is given to patients in the pre-dialysis colloquia regarding their chosen dialysis method. Moreover, the study evaluated feelings, emotions and fears from the announcement of the necessity of dialysis treatment onwards. The objective of the study was reached through interviews with patients on dialysis. The fact-finding survey was based on the tools of social research, such as the semi-structured interview. Instead of using a questionnaire, even though it would make it easier to collect a larger set of data, the Authors decided to interview patients in person, since the interview allows direct patient contact and builds a relationship of trust with the interviewer, allowing patients to better explain their feelings.

  13. Using Supercomputers to Probe the Early Universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giorgi, Elena Edi

    For decades physicists have been trying to decipher the first moments after the Big Bang. Using very large telescopes, for example, scientists scan the skies and look at how fast galaxies move. Satellites study the relic radiation left from the Big Bang, called the cosmic microwave background radiation. And finally, particle colliders, like the Large Hadron Collider at CERN, allow researchers to smash protons together and analyze the debris left behind by such collisions. Physicists at Los Alamos National Laboratory, however, are taking a different approach: they are using computers. In collaboration with colleagues at University of California San Diego, the Los Alamos researchers developed a computer code, called BURST, that can simulate conditions during the first few minutes of cosmological evolution.

  14. LCG MCDB—a knowledgebase of Monte-Carlo simulated events

    NASA Astrophysics Data System (ADS)

    Belov, S.; Dudko, L.; Galkin, E.; Gusev, A.; Pokorski, W.; Sherstnev, A.

    2008-02-01

    In this paper we report on the LCG Monte-Carlo Data Base (MCDB) and the software which has been developed to operate MCDB. The main purpose of the LCG MCDB project is to provide a storage and documentation system for sophisticated event samples simulated for the LHC Collaborations by experts. In many cases, the modern Monte-Carlo simulation of physical processes requires expert knowledge in Monte-Carlo generators or a significant amount of CPU time to produce the events. MCDB is a knowledgebase mainly dedicated to accumulating simulated events of this type. The main motivation behind LCG MCDB is to make the sophisticated MC event samples available for various physical groups. All the data from MCDB is accessible in several convenient ways. LCG MCDB is being developed within the CERN LCG Application Area Simulation project. Program summary: Program title: LCG Monte-Carlo Data Base Catalogue identifier: ADZX_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence No. of lines in distributed program, including test data, etc.: 30 129 No. of bytes in distributed program, including test data, etc.: 216 943 Distribution format: tar.gz Programming language: Perl Computer: CPU: Intel Pentium 4, RAM: 1 Gb, HDD: 100 Gb Operating system: Scientific Linux CERN 3/4 RAM: 1 073 741 824 bytes (1 Gb) Classification: 9 External routines: perl >= 5.8.5; Perl modules DBD-mysql >= 2.9004, File::Basename, GD::SecurityImage, GD::SecurityImage::AC, Linux::Statistics, XML::LibXML > 1.6, XML::SAX, XML::NamespaceSupport; Apache HTTP Server >= 2.0.59; mod auth external >= 2.2.9; edg-utils-system RPM package; gd >= 2.0.28; rpm package CASTOR-client >= 2.1.2-4; arc-server (optional) Nature of problem: Often, different groups of experimentalists prepare similar samples of particle collision events or turn to the same group of authors of Monte-Carlo (MC) generators to prepare the events. For example, the same MC samples of Standard Model (SM) processes can be employed for the investigations either in the SM analyses (as a signal) or in searches for new phenomena in Beyond Standard Model analyses (as a background). If the samples are made available publicly and equipped with corresponding and comprehensive documentation, this can speed up cross-checks of the samples themselves and of the physical models applied. Some event samples require a lot of computing resources for preparation. Thus, central storage of the samples prevents the possible waste of researcher time and computing resources that would otherwise be used to prepare the same events many times. Solution method: Creation of a special knowledgebase (MCDB) designed to keep event samples for the LHC experimental and phenomenological community. The knowledgebase is realized as a separate web server (http://mcdb.cern.ch). All event samples are kept on tape at CERN. Documentation describing the events is the main content of MCDB. Users can browse the knowledgebase, read and comment on articles (documentation), and download event samples. Authors can upload new event samples, create new articles, and edit their own articles. Restrictions: The software is adapted to solve the problems described in the article, and there are no additional restrictions. Unusual features: The software provides a framework to store and document large files with a flexible authentication and authorization system.
Different external storages with large capacity can be used to keep the files. The WEB Content Management System provides all of the necessary interfaces for the authors of the files, end-users and administrators. Running time: Real time operations. References: [1] The main LCG MCDB server, http://mcdb.cern.ch/. [2] P. Bartalini, L. Dudko, A. Kryukov, I.V. Selyuzhenkov, A. Sherstnev, A. Vologdin, LCG Monte-Carlo data base, hep-ph/0404241. [3] J.P. Baud, B. Couturier, C. Curran, J.D. Durand, E. Knezo, S. Occhetti, O. Barring, CASTOR: status and evolution, cs.oh/0305047.

  15. CERN Winter School on Supergravity, Strings, and Gauge Theory 2010

    ScienceCinema

    None

    2018-05-15

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network Constituents, Fundamental Forces and Symmetries of the Universe. The 2010 edition of the school is supported and organized by the CERN Theory Divison, and will take place from Monday January 25 to Friday January 29, at CERN. As its predecessors, this school is meant primarily for training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva.

  16. CERN@school: bringing CERN into the classroom

    NASA Astrophysics Data System (ADS)

    Whyntie, T.; Cook, J.; Coupe, A.; Fickling, R. L.; Parker, B.; Shearer, N.

    2016-04-01

    CERN@school brings technology from CERN into the classroom to aid with the teaching of particle physics. It also aims to inspire the next generation of physicists and engineers by giving participants the opportunity to be part of a national collaboration of students, teachers and academics, analysing data obtained from detectors based on the ground and in space to make new, curiosity-driven discoveries at school. CERN@school is based around the Timepix hybrid silicon pixel detector developed by the Medipix 2 Collaboration, which features a 300 μm thick silicon sensor bump-bonded to a Timepix readout ASIC. This defines a 256-by-256 grid of pixels with a pitch of 55 μm, the data from which can be used to visualise ionising radiation in a very accessible way. Broadly speaking, CERN@school consists of a web portal that allows access to data collected by the Langton Ultimate Cosmic ray Intensity Detector (LUCID) experiment in space and the student-operated Timepix detectors on the ground; a number of Timepix detector kits for ground-based experiments, to be made available to schools for both teaching and research purposes; and educational resources for teachers to use with LUCID data and detector kits in the classroom. By providing access to cutting-edge research equipment and raw data from ground- and space-based experiments, CERN@school hopes to provide the foundation for a programme that meets many of the aims and objectives of CERN and the project's supporting academic and industrial partners. The work presented here provides an update on the status of the programme as supported by the UK Science and Technology Facilities Council (STFC) and the Royal Commission for the Exhibition of 1851. This includes recent results from work with the GridPP Collaboration on using grid resources with schools to run GEANT4 simulations of CERN@school experiments.
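
    From the figures quoted above (a 256-by-256 pixel grid at a 55 μm pitch), the active area of a single Timepix sensor follows directly; this back-of-the-envelope value is not stated in the original text:

        256 \times 55\ \mu\mathrm{m} \approx 14.1\ \mathrm{mm} \quad\Rightarrow\quad \text{active area} \approx 14.1\ \mathrm{mm} \times 14.1\ \mathrm{mm} \approx 2\ \mathrm{cm}^2.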

  17. News Conference: Serbia hosts teachers' seminar Resources: Teachers TV website closes for business Festival: Science takes to the stage in Denmark Research: How noise affects learning in secondary schools CERN: CERN visit inspires new teaching ideas Education: PLS aims to improve perception of science for school students Conference: Scientix conference discusses challenges in science education

    NASA Astrophysics Data System (ADS)

    2011-07-01

    Conference: Serbia hosts teachers' seminar Resources: Teachers TV website closes for business Festival: Science takes to the stage in Denmark Research: How noise affects learning in secondary schools CERN: CERN visit inspires new teaching ideas Education: PLS aims to improve perception of science for school students Conference: Scientix conference discusses challenges in science education

  18. News Particle Physics: ATLAS unveils mural at CERN Prize: Corti Trust invites essay entries Astrophysics: CERN holds cosmic-ray conference Researchers in Residence: Lord Winston returns to school Music: ATLAS scientists record physics music Conference: Champagne flows at Reims event Competition: Students triumph at physics olympiad Teaching: Physics proves popular in Japanese schools Forthcoming Events

    NASA Astrophysics Data System (ADS)

    2011-01-01

    Particle Physics: ATLAS unveils mural at CERN Prize: Corti Trust invites essay entries Astrophysics: CERN holds cosmic-ray conference Researchers in Residence: Lord Winston returns to school Music: ATLAS scientists record physics music Conference: Champagne flows at Reims event Competition: Students triumph at physics olympiad Teaching: Physics proves popular in Japanese schools Forthcoming Events

  19. Signature CERN-URSS

    ScienceCinema

    None

    2017-12-09

    DG W. Jentschke welcomes the assembly and the guests for the signing of the protocol between CERN and the USSR, which is an important event. It was in 1955 that 55 Soviet visitors visited CERN for the first time. The first DG of CERN, F. Bloch, and Mr. Amaldi are also present. While W. Jentschke's speech in English is translated into Russian, Mr. Morozov's speech in Russian is translated into English.

  20. Technologies for Large Data Management in Scientific Computing

    NASA Astrophysics Data System (ADS)

    Pace, Alberto

    2014-01-01

    In recent years, intense usage of computing has been the main strategy of investigations in several scientific research projects. The progress in computing technology has opened unprecedented opportunities for systematic collection of experimental data and the associated analysis that were considered impossible only a few years ago. This paper focuses on the strategies in use: it reviews the various components that are necessary for an effective solution that ensures the storage, the long-term preservation, and the worldwide distribution of large quantities of data that are necessary in a large scientific research project. The paper also mentions several examples of data management solutions used in High Energy Physics for the CERN Large Hadron Collider (LHC) experiments in Geneva, Switzerland, which generate more than 30,000 terabytes of data every year that need to be preserved, analyzed, and made available to a community of several tens of thousands of scientists worldwide.

  1. GSDC: A Unique Data Center in Korea for HEP research

    NASA Astrophysics Data System (ADS)

    Ahn, Sang-Un

    2017-04-01

    Global Science experimental Data hub Center (GSDC) at the Korea Institute of Science and Technology Information (KISTI) is a unique data center in South Korea, established to promote fundamental research fields by supporting them with expertise in Information and Communication Technology (ICT) and with infrastructure for High Performance Computing (HPC), High Throughput Computing (HTC) and Networking. GSDC has supported various research fields in South Korea dealing with large scales of data, e.g. the RENO experiment for neutrino research, the LIGO experiment for gravitational wave detection, a genome sequencing project for bio-medical research, and HEP experiments such as CDF at FNAL, Belle at KEK, and STAR at BNL. In particular, GSDC has run a Tier-1 center for the ALICE experiment using the LHC at CERN since 2013. In this talk, we present an overview of the computing infrastructure that GSDC runs for these research fields and discuss the data center infrastructure management system deployed at GSDC.

  2. Hands on CERN: A Well-Used Physics Education Project

    ERIC Educational Resources Information Center

    Johansson, K. E.

    2006-01-01

    The "Hands on CERN" education project makes it possible for students and teachers to get close to the forefront of scientific research. The project confronts the students with contemporary physics at its most fundamental level with the help of particle collisions from the DELPHI particle physics experiment at CERN. It now exists in 14 languages…

  3. Web Proxy Auto Discovery for the WLCG

    NASA Astrophysics Data System (ADS)

    Dykstra, D.; Blomer, J.; Blumenfeld, B.; De Salvo, A.; Dewhurst, A.; Verguilov, V.

    2017-10-01

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which they direct to the nearest publicly accessible web proxy servers. The responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.
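
    A minimal sketch of the client side of the discovery flow described above: fetch a PAC file from a WPAD URL and pull out the proxy entries it lists. The URL is a placeholder (not an actual WLCG endpoint) and the simple regular expression is only an illustration; the real Frontier and CVMFS clients use their own, more complete PAC interpreters.

        import re
        import urllib.request

        WPAD_URL = "http://wpad.example.org/wpad.dat"  # placeholder, not a real WLCG endpoint

        def discover_proxies(url=WPAD_URL):
            """Fetch a PAC file and return the proxy endpoints it mentions, in order."""
            with urllib.request.urlopen(url, timeout=10) as resp:
                pac_source = resp.read().decode("utf-8", errors="replace")
            # A PAC file is JavaScript returning strings like "PROXY host:port; DIRECT".
            proxies = re.findall(r"PROXY\s+([\w.\-]+:\d+)", pac_source)
            # Preserve order while removing duplicates.
            return list(dict.fromkeys(proxies))

        # Example: configure a client with the first proxy returned, falling back to DIRECT
        # if the list comes back empty.
        # proxies = discover_proxies()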

  4. Web Proxy Auto Discovery for the WLCG

    DOE PAGES

    Dykstra, D.; Blomer, J.; Blumenfeld, B.; ...

    2017-11-23

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which it directs to the nearest publicly accessible web proxy servers. Furthermore, the responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.

  5. Web Proxy Auto Discovery for the WLCG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, D.; Blomer, J.; Blumenfeld, B.

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which it directs to the nearest publicly accessible web proxy servers. Furthermore, the responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.

  6. Current experiments in elementary particle physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wohl, C.G.; Armstrong, F.E.; Oyanagi, Y.; Dodder, D.C.

    1987-03-01

    This report contains summaries of 720 recent and current experiments in elementary particle physics (experiments that finished taking data before 1980 are excluded). Included are experiments at Brookhaven, CERN, CESR, DESY, Fermilab, Moscow Institute of Theoretical and Experimental Physics, Tokyo Institute of Nuclear Studies, KEK, LAMPF, Leningrad Nuclear Physics Institute, Saclay, Serpukhov, SIN, SLAC, and TRIUMF, and also experiments on proton decay. Instructions are given for searching online the computer database (maintained under the SLAC/SPIRES system) that contains the summaries. Properties of the fixed-target beams at most of the laboratories are summarized.

  7. GPU real-time processing in NA62 trigger system

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cretaro, P.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Piccini, M.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.

    2017-01-01

    A commercial Graphics Processing Unit (GPU) is used to build a fast Level 0 (L0) trigger system, tested parasitically with the TDAQ (Trigger and Data Acquisition) system of the NA62 experiment at CERN. In particular, the parallel computing power of the GPU is exploited to perform real-time ring fitting in the Ring Imaging CHerenkov (RICH) detector. Direct GPU communication using an FPGA-based board has been used to reduce the data transmission latency. The performance of the system for multi-ring reconstruction obtained during the NA62 physics run will be presented.
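
    The GPU kernels themselves are not described here, but the core numerical task, fitting a circle to the Cherenkov photon hits, can be pictured with a least-squares algebraic (Kåsa-style) fit. The sketch below is a plain NumPy illustration of that idea under simplifying assumptions; it is not the NA62 implementation.

    ```python
    # Illustrative least-squares (Kasa) circle fit of the kind used for RICH
    # ring reconstruction; not the NA62 GPU code. Linearising
    # (x - cx)^2 + (y - cy)^2 = r^2 gives 2*cx*x + 2*cy*y + c = x^2 + y^2
    # with c = r^2 - cx^2 - cy^2, which is a linear least-squares problem.
    import numpy as np

    def fit_ring(x: np.ndarray, y: np.ndarray):
        """Return (centre_x, centre_y, radius) estimated from hit coordinates."""
        A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
        rhs = x**2 + y**2
        (cx, cy, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return cx, cy, np.sqrt(c + cx**2 + cy**2)

    if __name__ == "__main__":
        # Synthetic ring: centre (1.0, -2.0), radius 11.0, Gaussian smearing on the hits.
        rng = np.random.default_rng(0)
        phi = rng.uniform(0.0, 2.0 * np.pi, 64)
        xs = 1.0 + 11.0 * np.cos(phi) + rng.normal(0.0, 0.1, phi.size)
        ys = -2.0 + 11.0 * np.sin(phi) + rng.normal(0.0, 0.1, phi.size)
        print(fit_ring(xs, ys))
    ```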

  8. Interoperating Cloud-based Virtual Farms

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Colamaria, F.; Colella, D.; Casula, E.; Elia, D.; Franco, A.; Lusso, S.; Luparello, G.; Masera, M.; Miniello, G.; Mura, D.; Piano, S.; Vallero, S.; Venaruzzo, M.; Vino, G.

    2015-12-01

    The present work aims at optimizing the use of computing resources available at the Italian grid Tier-2 sites of the ALICE experiment at the CERN LHC by making them accessible to interactive distributed analysis, thanks to modern solutions based on cloud computing. The scalability and elasticity of computing resources via dynamic ("on-demand") provisioning is essentially limited by the size of the computing site, reaching the theoretical optimum only in the asymptotic case of infinite resources. The main challenge of the project is to overcome this limitation by federating different sites through a distributed cloud facility. The storage capacities of the participating sites are seen as a single federated storage area, removing the need to mirror data across them: high data-access efficiency is guaranteed by location-aware analysis software and storage interfaces, in a way that is transparent to the end user. Moreover, interactive analysis on the federated cloud reduces the execution time with respect to grid batch jobs. Tests of the investigated solutions for both cloud computing and distributed storage over the wide area network will be presented.

  9. 25th Birthday CERN - Amphi

    ScienceCinema

    None

    2017-12-09

    Ceremony for the 25th anniversary of CERN with two speakers: Prof. Weisskopf speaks about the significance and role of CERN, and Prof. Casimir(?) gives a talk on the relations between pure science, applied science and "big science" (light science).

  10. Opportunities and choice in a new vector era

    NASA Astrophysics Data System (ADS)

    Nowak, A.

    2014-06-01

    This work discusses the significant changes in the computing landscape related to the progression of Moore's Law, and their implications for scientific computing. Particular attention is devoted to the High Energy Physics (HEP) domain, which has always made good use of threading, but where levels of parallelism closer to the hardware were often left underutilized. Findings of the CERN openlab Platform Competence Center are reported in the context of expanding "performance dimensions", and especially the resurgence of vectors. These suggest that data-oriented designs are feasible in HEP and have considerable potential for performance improvements on multiple levels, but will rarely trump algorithmic enhancements. Finally, an analysis of upcoming hardware and software technologies identifies heterogeneity as a major challenge for software, which will require more emphasis on scalable, efficient design.
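
    As a schematic example of the kind of data-oriented (structure-of-arrays) layout the abstract alludes to, the following sketch contrasts an array-of-structures loop with a vectorised structure-of-arrays computation. It is a generic illustration under its own assumptions, not CERN openlab code.

    ```python
    # Generic illustration of data-oriented design: computing transverse momentum
    # pT = sqrt(px^2 + py^2) for many tracks. The structure-of-arrays (SoA) layout
    # exposes the whole computation to the vector units at once, whereas the
    # array-of-structures (AoS) loop processes one object at a time.
    import math
    import numpy as np

    # AoS: one record per track, looped over scalar by scalar.
    tracks_aos = [{"px": 0.1 * i, "py": 0.2 * i} for i in range(1_000)]
    pt_aos = [math.sqrt(t["px"] ** 2 + t["py"] ** 2) for t in tracks_aos]

    # SoA: one contiguous array per field, processed with vectorised operations.
    px = np.array([t["px"] for t in tracks_aos])
    py = np.array([t["py"] for t in tracks_aos])
    pt_soa = np.sqrt(px**2 + py**2)

    assert np.allclose(pt_aos, pt_soa)
    ```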

  11. INTEGRATED OPERATIONAL DOSIMETRY SYSTEM AT CERN.

    PubMed

    Dumont, Gérald; Pedrosa, Fernando Baltasar Dos Santos; Carbonez, Pierre; Forkel-Wirth, Doris; Ninin, Pierre; Fuentes, Eloy Reguero; Roesler, Stefan; Vollaire, Joachim

    2017-04-01

    CERN, the European Organization for Nuclear Research, upgraded its operational dosimetry system in March 2013 to be prepared for the first Long Shutdown of CERN's facilities. The new system allows the immediate and automatic checking and recording of dosimetry data before and after interventions in radiation areas. To facilitate the analysis of the data in the context of CERN's approach to As Low As Reasonably Achievable (ALARA), this new system is interfaced to the Intervention Management Planning and Coordination Tool (IMPACT). IMPACT is a web-based application widely used in all CERN's accelerators and their associated technical infrastructures for the planning, coordination and approval of interventions (work-permit principle). The coupling of the operational dosimetry database with the IMPACT repository allows a direct and almost immediate comparison of the actual dose with the estimates, in addition to enabling the configuration of alarm levels in the dosemeter as a function of the intervention to be performed. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
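
    The dose bookkeeping that the IMPACT coupling enables can be pictured with a small sketch like the one below, which compares recorded doses against the planned estimate and flags interventions exceeding a chosen margin. The field names and the margin are hypothetical illustrations, not the interface of the CERN system.

    ```python
    # Hypothetical illustration of comparing measured intervention doses with the
    # planned estimate, in the spirit of the dosimetry/IMPACT coupling described
    # above. Field names and the 20% margin are assumptions, not the real system.
    from dataclasses import dataclass

    @dataclass
    class Intervention:
        impact_id: str
        estimated_dose_usv: float   # planned dose for the intervention (microsievert)
        measured_dose_usv: float    # dose read back from the operational dosimeters

    def exceeds_estimate(job: Intervention, margin: float = 0.20) -> bool:
        """Flag interventions whose measured dose exceeds the estimate by more than `margin`."""
        return job.measured_dose_usv > job.estimated_dose_usv * (1.0 + margin)

    jobs = [
        Intervention("IMPACT-001", estimated_dose_usv=50.0, measured_dose_usv=45.0),
        Intervention("IMPACT-002", estimated_dose_usv=10.0, measured_dose_usv=18.0),
    ]
    for job in jobs:
        if exceeds_estimate(job):
            print(f"{job.impact_id}: measured dose above estimate, review for ALARA follow-up")
    ```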

  12. The “Common Solutions” Strategy of the Experiment Support group at CERN for the LHC Experiments

    NASA Astrophysics Data System (ADS)

    Girone, M.; Andreeva, J.; Barreiro Megino, F. H.; Campana, S.; Cinquilli, M.; Di Girolamo, A.; Dimou, M.; Giordano, D.; Karavakis, E.; Kenyon, M. J.; Kokozkiewicz, L.; Lanciotti, E.; Litmaath, M.; Magini, N.; Negri, G.; Roiser, S.; Saiz, P.; Saiz Santos, M. D.; Schovancova, J.; Sciabà, A.; Spiga, D.; Trentadue, R.; Tuckett, D.; Valassi, A.; Van der Ster, D. C.; Shiers, J. D.

    2012-12-01

    After two years of LHC data taking, processing and analysis and with numerous changes in computing technology, a number of aspects of the experiments’ computing, as well as WLCG deployment and operations, need to evolve. As part of the activities of the Experiment Support group in CERN's IT department, and reinforced by effort from the EGI-InSPIRE project, we present work aimed at common solutions across all LHC experiments. Such solutions allow us not only to optimize development manpower but also offer lower long-term maintenance and support costs. The main areas cover Distributed Data Management, Data Analysis, Monitoring and the LCG Persistency Framework. Specific tools have been developed including the HammerCloud framework, automated services for data placement, data cleaning and data integrity (such as the data popularity service for CMS, the common Victor cleaning agent for ATLAS and CMS and tools for catalogue/storage consistency), the Dashboard Monitoring framework (job monitoring, data management monitoring, File Transfer monitoring) and the Site Status Board. This talk focuses primarily on the strategic aspects of providing such common solutions and how this relates to the overall goals of long-term sustainability and the relationship to the various WLCG Technical Evolution Groups. The success of the service components has given us confidence in the process, and has developed the trust of the stakeholders. We are now attempting to expand the development of common solutions into the more critical workflows. The first is a feasibility study of common analysis workflow execution elements between ATLAS and CMS. We look forward to additional common development in the future.

  13. HIGH ENERGY PHYSICS: Bulgarians Sue CERN for Leniency.

    PubMed

    Koenig, R

    2000-10-13

    In cash-strapped Bulgaria, scientists are wondering whether a ticket for a front-row seat in high-energy physics is worth the price: Membership dues in CERN, the European particle physics lab, nearly equal the country's entire budget for competitive research grants. Faced with that grim statistic and a plea for leniency from Bulgaria's government, CERN's governing council is considering slashing the country's membership dues for the next 2 years.

  14. CERN Winter School on Supergravity, Strings, and Gauge Theory 2010

    ScienceCinema

    McAllister, Liam

    2018-05-14

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. As its predecessors, this school is meant primarily for training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union. The school is funded by the CERN Theory Division, and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher

  15. CERN Winter School on Supergravity, Strings, and Gauge Theory 2010

    ScienceCinema

    None

    2018-05-22

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. As its predecessors, this school is meant primarily for training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union. The school is funded by the CERN Theory Division, and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher

  16. CERN Winter School on Supergravity, Strings, and Gauge Theory 2010

    ScienceCinema

    None

    2018-06-28

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. As its predecessors, this school is meant primarily for training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union. The school is funded by the CERN Theory Division, and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher

  17. CERN Winter School on Supergravity, Strings, and Gauge Theory 2010

    ScienceCinema

    None

    2018-05-23

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. As its predecessors, this school is meant primarily for training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union. The school is funded by the CERN Theory Division, and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher

  18. CERN Winter School on Supergravity, Strings, and Gauge Theory 2010

    ScienceCinema

    None

    2017-12-09

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. As its predecessors, this school is meant primarily for training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union. The school is funded by the CERN Theory Division, and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher

  19. CERN Winter School on Supergravity, Strings, and Gauge Theory 2010

    ScienceCinema

    McAllister, Liam

    2018-05-24

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. As its predecessors, this school is meant primarily for training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union. The school is funded by the CERN Theory Division, and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher

  20. CERN Winter School on Supergravity, Strings, and Gauge Theory 2010

    ScienceCinema

    Sen, Ashoke

    2018-04-27

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. As its predecessors, this school is meant primarily for training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union. The school is funded by the CERN Theory Division, and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher.

  1. CERN Winter School on Supergravity, Strings, and Gauge Theory 2010

    ScienceCinema

    None

    2018-05-23

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. As its predecessors, this school is meant primarily for training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union. The school is funded by the CERN Theory Division, and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher

  2. CERN-derived analysis of lunar radiation backgrounds

    NASA Technical Reports Server (NTRS)

    Wilson, Thomas L.; Svoboda, Robert

    1993-01-01

    The Moon produces radiation which background-limits scientific experiments there. Early analyses of these backgrounds have either failed to take into consideration the effect of charm in particle physics (because they pre-dated its discovery), or have used branching ratios which are no longer strictly valid (due to new accelerator data). We are presently investigating an analytical program for deriving muon and neutrino spectra generated by the Moon, converting an existing CERN computer program known as GEANT which does the same for the Earth. In so doing, this will (1) determine an accurate prompt neutrino spectrum produced by the lunar surface; (2) determine the lunar subsurface particle flux; (3) determine the consequence of charm production physics upon the lunar background radiation environment; and (4) provide an analytical tool for the NASA astrophysics community with which to begin an assessment of the Moon as a scientific laboratory versus its particle radiation environment. This will be done on a recurring basis with the latest experimental results of the particle data groups at Earth-based high-energy accelerators, in particular with the latest branching ratios for charmed meson decay. This will be accomplished for the first time as a full 3-dimensional simulation.

  3. Vidyo@CERN: A Service Update

    NASA Astrophysics Data System (ADS)

    Fernandes, J.; Baron, T.

    2015-12-01

    We will present an overview of the current real-time video service offering for the LHC; in particular, the operation of the CERN Vidyo service will be described in terms of consolidated performance and scale. The service is an increasingly critical part of the daily activity of the LHC collaborations, recently topping more than 50 million minutes of communication in one year, with peaks of up to 852 simultaneous connections. We will elaborate on the improvement of some key front-end features, such as the integration with CERN Indico and the enhancements of the Unified Client, and also on new ones, released or in the pipeline, such as a new WebRTC client and CERN SSO/Federated SSO integration. An overview of future infrastructure improvements, such as virtualization of Vidyo routers and geo-location mechanisms for load balancing and optimum user distribution across the service infrastructure, will also be discussed. The work done by CERN to improve the monitoring of its Vidyo network will also be presented and demoed. As a last point, we will touch on the roadmap and strategy established by CERN and Vidyo, with the clear objective of optimizing the service both on the end client and on the backend infrastructure to make it truly universal, to serve global science. To achieve this, the introduction of a multi-tenant concept to serve different communities is needed. This is one of the consequences of CERN's decision to offer the Vidyo service, currently operated for the LHC, to other sciences, institutions and virtual organizations beyond HEP that might express interest in it.

  4. Public Lecture

    ScienceCinema

    None

    2017-12-09

    An outreach activity is being organized by the Turkish community at CERN, on 5 June 2010 in the CERN Main Auditorium. The activity consists of several talks lasting 1.5 hours in total. The main goal of the activity is to describe CERN-based activities and experiments, as well as to stimulate the public's interest in science-related topics. We believe that wide communication of the event has certain advantages, especially for Turkey's ongoing membership process.

  5. Prospects for K+ → π+ ν ν̄ observation at CERN in NA62

    NASA Astrophysics Data System (ADS)

    Hahn, F.; NA62 Collaboration; Aglieri Rinella, G.; Aliberti, R.; Ambrosino, F.; Angelucci, B.; Antonelli, A.; Anzivino, G.; Arcidiacono, R.; Azhinenko, I.; Balev, S.; Bendotti, J.; Biagioni, A.; Biino, C.; Bizzeti, A.; Blazek, T.; Blik, A.; Bloch-Devaux, B.; Bolotov, V.; Bonaiuto, V.; Bragadireanu, M.; Britton, D.; Britvich, G.; Brook, N.; Bucci, F.; Butin, F.; Capitolo, E.; Capoccia, C.; Capussela, T.; Carassiti, V.; Cartiglia, N.; Cassese, A.; Catinaccio, A.; Cecchetti, A.; Ceccucci, A.; Cenci, P.; Cerny, V.; Cerri, C.; Chikilev, O.; Ciaranfi, R.; Collazuol, G.; Cooke, P.; Cooper, P.; Corradi, G.; Cortina Gil, E.; Costantini, F.; Cotta Ramusino, A.; Coward, D.; D'Agostini, G.; Dainton, J.; Dalpiaz, P.; Danielsson, H.; Degrange, J.; De Simone, N.; Di Filippo, D.; Di Lella, L.; Dixon, N.; Doble, N.; Duk, V.; Elsha, V.; Engelfried, J.; Enik, T.; Falaleev, V.; Fantechi, R.; Federici, L.; Fiorini, M.; Fry, J.; Fucci, A.; Fulton, L.; Gallorini, S.; Gatignon, L.; Gianoli, A.; Giudici, S.; Glonti, L.; Goncalves Martins, A.; Gonnella, F.; Goudzovski, E.; Guida, R.; Gushchin, E.; Hahn, F.; Hallgren, B.; Heath, H.; Herman, F.; Hutchcroft, D.; Iacopini, E.; Jamet, O.; Jarron, P.; Kampf, K.; Kaplon, J.; Karjavin, V.; Kekelidze, V.; Kholodenko, S.; Khoriauli, G.; Khudyakov, A.; Kiryushin, Yu; Kleinknecht, K.; Kluge, A.; Koval, M.; Kozhuharov, V.; Krivda, M.; Kudenko, Y.; Kunze, J.; Lamanna, G.; Lazzeroni, C.; Leitner, R.; Lenci, R.; Lenti, M.; Leonardi, E.; Lichard, P.; Lietava, R.; Litov, L.; Lomidze, D.; Lonardo, A.; Lurkin, N.; Madigozhin, D.; Maire, G.; Makarov, A.; Mannelli, I.; Mannocchi, G.; Mapelli, A.; Marchetto, F.; Massarotti, P.; Massri, K.; Matak, P.; Mazza, G.; Menichetti, E.; Mirra, M.; Misheva, M.; Molokanova, N.; Morant, J.; Morel, M.; Moulson, M.; Movchan, S.; Munday, D.; Napolitano, M.; Newson, F.; Norton, A.; Noy, M.; Nuessle, G.; Obraztsov, V.; Padolski, S.; Page, R.; Palladino, V.; Pardons, A.; Pedreschi, E.; Pepe, M.; Perez Gomez, F.; Perrin-Terrin, M.; Petrov, P.; Petrucci, F.; Piandani, R.; Piccini, M.; Pietreanu, D.; Pinzino, J.; Pivanti, M.; Polenkevich, I.; Popov, I.; Potrebenikov, Yu; Protopopescu, D.; Raffaelli, F.; Raggi, M.; Riedler, P.; Romano, A.; Rubin, P.; Ruggiero, G.; Russo, V.; Ryjov, V.; Salamon, A.; Salina, G.; Samsonov, V.; Santovetti, E.; Saracino, G.; Sargeni, F.; Schifano, S.; Semenov, V.; Sergi, A.; Serra, M.; Shkarovskiy, S.; Sotnikov, A.; Sougonyaev, V.; Sozzi, M.; Spadaro, T.; Spinella, F.; Staley, R.; Statera, M.; Sutcliffe, P.; Szilasi, N.; Tagnani, D.; Valdata-Nappi, M.; Valente, P.; Vasile, M.; Vassilieva, V.; Velghe, B.; Veltri, M.; Venditti, S.; Vormstein, M.; Wahl, H.; Wanke, R.; Wertelaers, P.; Winhart, A.; Winston, R.; Wrona, B.; Yushchenko, O.; Zamkovsky, M.; Zinchenko, A.

    2015-07-01

    The rare decays K → π ν ν̄ are excellent processes with which to probe the Standard Model and to search indirectly for new physics, complementary to the direct LHC searches. The NA62 experiment at the CERN SPS aims to collect and analyse O(10^13) kaon decays before the CERN Long Shutdown 2 (in 2018). This will allow the branching ratio to be measured to a level of 10% accuracy. The experimental apparatus was commissioned during a first run in autumn 2014.

  6. The trigger system for K0 → 2π0 decays of the NA48 experiment at CERN

    NASA Astrophysics Data System (ADS)

    Mikulec, I.

    1998-02-01

    A fully pipelined 40 MHz "dead-time-free" trigger system for neutral K0 decays for the NA48 experiment at CERN is described. The NA48 experiment studies CP-violation using the high intensity beam of the CERN SPS accelerator. The trigger system sums, digitises, filters and processes signals from 13 340 channels of the liquid krypton electro-magnetic calorimeter. In 1996 the calorimeter and part of the trigger electronics were installed and tested. In 1997 the system was completed and prepared to be used in the first NA48 physics data taking period. Cagliari, Cambridge, CERN, Dubna, Edinburgh, Ferrara, Firenze, Mainz, Orsay, Perugia, Pisa, Saclay, Siegen, Torino, Warszawa, Wien Collaboration.

  7. Exploiting analytics techniques in CMS computing monitoring

    NASA Astrophysics Data System (ADS)

    Bonacorsi, D.; Kuznetsov, V.; Magini, N.; Repečka, A.; Vaandering, E.

    2017-10-01

    The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid tiers. Data-mining efforts into all this information have rarely been made, but are of crucial importance for a better understanding of how CMS achieved successful operations, and for reaching an adequate and adaptive modelling of CMS operations that allows detailed optimizations and eventually a prediction of system behaviour. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote to disk at WLCG tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications, profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced with the ability to quickly process big data sets from multiple sources, looking forward to a predictive modelling of the system.
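
    The kind of MapReduce processing mentioned above can be sketched in a few lines: the example below counts, per dataset, how often it was requested for analysis, using plain Python as a stand-in for a Hadoop job. The record fields are illustrative assumptions, not the CMS monitoring schema.

    ```python
    # Toy map/reduce over monitoring records, standing in for a Hadoop MapReduce
    # job of the kind described above: count how many times each dataset was
    # requested for analysis. Record fields are illustrative assumptions.
    from collections import Counter
    from typing import Iterable, Iterator, Tuple

    records = [
        {"dataset": "/PrimaryA/Run2015/AOD", "action": "analysis_request"},
        {"dataset": "/PrimaryB/Run2015/AOD", "action": "replication"},
        {"dataset": "/PrimaryA/Run2015/AOD", "action": "analysis_request"},
    ]

    def map_phase(recs: Iterable[dict]) -> Iterator[Tuple[str, int]]:
        """Emit (dataset, 1) for every analysis request, like a mapper."""
        for rec in recs:
            if rec["action"] == "analysis_request":
                yield rec["dataset"], 1

    def reduce_phase(pairs: Iterable[Tuple[str, int]]) -> Counter:
        """Sum the counts per dataset, like a reducer."""
        totals = Counter()
        for dataset, count in pairs:
            totals[dataset] += count
        return totals

    print(reduce_phase(map_phase(records)))
    ```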

  8. Current experiments in elementary particle physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wohl, C.G.; Armstrong, F.E.; Trippe, T.G.

    1989-09-01

    This report contains summaries of 736 current and recent experiments in elementary particle physics (experiments that finished taking data before 1982 are excluded). Included are experiments at Brookhaven, CERN, CESR, DESY, Fermilab, Tokyo Institute of Nuclear Studies, Moscow Institute of Theoretical and Experimental Physics, Joint Institute for Nuclear Research (Dubna), KEK, LAMPF, Novosibirsk, PSI/SIN, Saclay, Serpukhov, SLAC, and TRIUMF, and also several underground experiments. Also given are instructions for searching online the computer database (maintained under the SLAC/SPIRES system) that contains the summaries. Properties of the fixed-target beams at most of the laboratories are summarized.

  9. Flowgen: Flowchart-based documentation for C++ codes

    NASA Astrophysics Data System (ADS)

    Kosower, David A.; Lopez-Villarejo, J. J.

    2015-11-01

    We present the Flowgen tool, which generates flowcharts from annotated C++ source code. The tool generates a set of interconnected high-level UML activity diagrams, one for each function or method in the C++ sources. It provides a simple and visual overview of complex implementations of numerical algorithms. Flowgen is complementary to the widely-used Doxygen documentation tool. The ultimate aim is to render complex C++ computer codes accessible, and to enhance collaboration between programmers and algorithm or science specialists. We describe the tool and a proof-of-concept application to the VINCIA plug-in for simulating collisions at CERN's Large Hadron Collider.

  10. 2016 Research Outreach Program report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hye Young; Kim, Yangkyu

    2016-10-13

    This paper is the research activity report for four weeks at LANL. Under the guidance of Dr. Lee, who performs nuclear physics research at LANSCE, LANL, I studied the Low Energy NZ (LENZ) setup and how to use it. First, I studied the LENZ chamber and Si detectors, and worked on detector calibrations using the computer software ROOT (a CERN-developed data analysis tool) and EXCEL (Microsoft Office software). I also performed calibration measurements of alpha particles emitted from a Th-229 source using an S1-type Si detector, and checked the results with Dr. Lee.
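
    The calibration step mentioned above, mapping detector channels to energies using known alpha lines, can be sketched as a simple linear fit. The peak positions and reference energies below are placeholder values for illustration, not the actual Th-229 data.

    ```python
    # Sketch of an energy calibration: fit a linear channel-to-energy relation
    # from alpha peaks at known reference energies. The channel positions and
    # reference energies are placeholders, not the actual Th-229 measurement.
    import numpy as np

    peak_channels = np.array([1520.0, 1710.0, 1945.0])    # fitted peak centroids (ADC channels), placeholder
    reference_energies = np.array([4.85, 5.42, 6.10])     # corresponding alpha energies in MeV, placeholder

    # Linear calibration E = gain * channel + offset
    gain, offset = np.polyfit(peak_channels, reference_energies, deg=1)

    def channel_to_energy(channel: float) -> float:
        """Convert an ADC channel to energy in MeV using the fitted calibration."""
        return gain * channel + offset

    print(f"gain = {gain:.4e} MeV/ch, offset = {offset:.3f} MeV")
    print(f"channel 1800 -> {channel_to_energy(1800.0):.2f} MeV")
    ```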

  11. THERMINATOR: THERMal heavy-IoN generATOR

    NASA Astrophysics Data System (ADS)

    Kisiel, Adam; Tałuć, Tomasz; Broniowski, Wojciech; Florkowski, Wojciech

    2006-04-01

    THERMINATOR is a Monte Carlo event generator designed for studying particle production in relativistic heavy-ion collisions performed at experimental facilities such as the SPS, RHIC, or LHC. The program implements thermal models of particle production with single freeze-out. It performs the following tasks: (1) generation of stable particles and unstable resonances at the chosen freeze-out hypersurface, with the local phase-space density of particles given by the statistical distribution factors, (2) subsequent space-time evolution and decays of hadronic resonances in cascades, (3) calculation of the transverse-momentum spectra and numerous other observables related to the space-time evolution. The geometry of the freeze-out hypersurface and the collective velocity of expansion may be chosen from two successful models, the Cracow single-freeze-out model and the Blast-Wave model. All particles from the Particle Data Tables are used. The code is written in the object-oriented C++ language and complies with the standards of the ROOT environment. Program summary — Program title: THERMINATOR. Catalogue identifier: ADXL_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXL_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. RAM required to execute with typical data: 50 Mbytes. Number of processors used: 1. Computer(s) for which the program has been designed: PC (Pentium III, IV, or Athlon, 512 MB RAM); not hardware dependent (any computer with a C++ compiler and the ROOT environment [R. Brun, F. Rademakers, Nucl. Instrum. Methods A 389 (1997) 81, http://root.cern.ch]). Operating system(s) for which the program has been designed: Linux (Mandrake 9.0, Debian 3.0, SuSE 9.0, Red Hat FEDORA 3, etc.), Windows XP with Cygwin ver. 1.5.13-1 and gcc ver. 3.3.3 (cygwin special); not system dependent. External routines/libraries used: ROOT ver. 4.02.00. Programming language: C++. Size of the package: 324 KB directory, 40 KB compressed distribution archive, without the ROOT libraries (see http://root.cern.ch for details on the ROOT requirements). The output files created by the code need 1.1 GB for each 500 events. Distribution format: tar gzip file. Number of lines in distributed program, including test data, etc.: 6534. Number of bytes in distributed program, including test data, etc.: 41 828. Nature of the physical problem: Statistical models have proved to be very useful in the description of soft physics in relativistic heavy-ion collisions [P. Braun-Munzinger, K. Redlich, J. Stachel, 2003, nucl-th/0304013. [2
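
    Task (1) above, drawing particles from a thermal phase-space density, can be illustrated with a small rejection-sampling sketch. This is a generic illustration under simplifying assumptions (static source, Boltzmann statistics), not THERMINATOR code.

    ```python
    # Generic illustration of step (1): sample particle momentum magnitudes from a
    # Boltzmann-like thermal distribution f(p) ~ p^2 exp(-E/T), E = sqrt(p^2 + m^2),
    # via rejection sampling. A static source and Boltzmann statistics are
    # simplifying assumptions; this is not THERMINATOR code.
    import numpy as np

    def sample_thermal_momenta(mass: float, temperature: float, n: int,
                               p_max: float = 3.0, seed: int = 0) -> np.ndarray:
        """Return n momentum magnitudes (GeV) drawn from p^2 exp(-E/T)."""
        rng = np.random.default_rng(seed)

        def weight(p):
            return p**2 * np.exp(-np.sqrt(p**2 + mass**2) / temperature)

        # Envelope for rejection sampling: uniform proposals in [0, p_max],
        # accepted with probability weight(p) / max(weight).
        grid = np.linspace(0.0, p_max, 2000)
        w_max = weight(grid).max()
        samples = []
        while len(samples) < n:
            p = rng.uniform(0.0, p_max)
            if rng.uniform(0.0, w_max) < weight(p):
                samples.append(p)
        return np.array(samples)

    # Example: pions (m ~ 0.140 GeV) at a freeze-out temperature of ~0.165 GeV.
    momenta = sample_thermal_momenta(mass=0.140, temperature=0.165, n=1000)
    print(momenta.mean())
    ```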

  12. CERN and 60 years of science for peace

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heuer, Rolf-Dieter, E-mail: Rolf.Heuer@cern.ch

    2015-02-24

    This paper presents CERN as it celebrates its 60th Anniversary since its founding. The presentation first discusses the mission of CERN and its role as an inter-governmental Organization. The paper also reviews aspects of the particle physics research programme, looking at both current and future accelerator-based facilities at the high-energy and intensity frontiers. Finally, the paper considers issues beyond fundamental research, such as capacity-building and the interface between Art and Science.

  13. Meeting Jentschke

    ScienceCinema

    None

    2018-05-18

    After an introduction about the latest research and news at CERN, the DG W. Jentschke speaks about future management of CERN with two new general managers, who will be in charge for the next 5 years: Dr. J.B. Adams who will focus on the administration of CERN and also the construction of buildings and equipment, and Dr. L. Van Hove who will be responsible for research activities. The DG speaks about expected changes, shared services, different divisions and their leaders, etc.

  14. CERN Winter School on Supergravity, Strings, and Gauge Theory 2010

    ScienceCinema

    Sen, Ashoke

    2017-12-18

    Part 7. The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. As its predecessors, this school is meant primarily for training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union. The school is funded by the CERN Theory Division, and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher

  15. CERN Winter School on Supergravity, Strings, and Gauge Theory 2010

    ScienceCinema

    None

    2018-02-09

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. As its predecessors, this school is meant primarily for training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union. The school is funded by the CERN Theory Division, and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher

  16. CERN Winter School on Supergravity, Strings, and Gauge Theory 2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-01-22

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. As its predecessors, this school is meant primarily for training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union. The school is funded by the CERN Theory Division, and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher

  17. NA61/SHINE facility at the CERN SPS: beams and detector system

    NASA Astrophysics Data System (ADS)

    Abgrall, N.; Andreeva, O.; Aduszkiewicz, A.; Ali, Y.; Anticic, T.; Antoniou, N.; Baatar, B.; Bay, F.; Blondel, A.; Blumer, J.; Bogomilov, M.; Bogusz, M.; Bravar, A.; Brzychczyk, J.; Bunyatov, S. A.; Christakoglou, P.; Cirkovic, M.; Czopowicz, T.; Davis, N.; Debieux, S.; Dembinski, H.; Diakonos, F.; Di Luise, S.; Dominik, W.; Drozhzhova, T.; Dumarchez, J.; Dynowski, K.; Engel, R.; Efthymiopoulos, I.; Ereditato, A.; Fabich, A.; Feofilov, G. A.; Fodor, Z.; Fulop, A.; Gaździcki, M.; Golubeva, M.; Grebieszkow, K.; Grzeszczuk, A.; Guber, F.; Haesler, A.; Hasegawa, T.; Hierholzer, M.; Idczak, R.; Igolkin, S.; Ivashkin, A.; Jokovic, D.; Kadija, K.; Kapoyannis, A.; Kaptur, E.; Kielczewska, D.; Kirejczyk, M.; Kisiel, J.; Kiss, T.; Kleinfelder, S.; Kobayashi, T.; Kolesnikov, V. I.; Kolev, D.; Kondratiev, V. P.; Korzenev, A.; Koversarski, P.; Kowalski, S.; Krasnoperov, A.; Kurepin, A.; Larsen, D.; Laszlo, A.; Lyubushkin, V. V.; Maćkowiak-Pawłowska, M.; Majka, Z.; Maksiak, B.; Malakhov, A. I.; Maletic, D.; Manglunki, D.; Manic, D.; Marchionni, A.; Marcinek, A.; Marin, V.; Marton, K.; Mathes, H.-J.; Matulewicz, T.; Matveev, V.; Melkumov, G. L.; Messina, M.; Mrówczyński, St.; Murphy, S.; Nakadaira, T.; Nirkko, M.; Nishikawa, K.; Palczewski, T.; Palla, G.; Panagiotou, A. D.; Paul, T.; Peryt, W.; Petukhov, O.; Pistillo, C.; Płaneta, R.; Pluta, J.; Popov, B. A.; Posiadala, M.; Puławski, S.; Puzovic, J.; Rauch, W.; Ravonel, M.; Redij, A.; Renfordt, R.; Richter-Was, E.; Robert, A.; Röhrich, D.; Rondio, E.; Rossi, B.; Roth, M.; Rubbia, A.; Rustamov, A.; Rybczyński, M.; Sadovsky, A.; Sakashita, K.; Savic, M.; Schmidt, K.; Sekiguchi, T.; Seyboth, P.; Sgalaberna, D.; Shibata, M.; Sipos, R.; Skrzypczak, E.; Słodkowski, M.; Sosin, Z.; Staszel, P.; Stefanek, G.; Stepaniak, J.; Stroebele, H.; Susa, T.; Szuba, M.; Tada, M.; Tereshchenko, V.; Tolyhi, T.; Tsenov, R.; Turko, L.; Ulrich, R.; Unger, M.; Vassiliou, M.; Veberic, D.; Vechernin, V. V.; Vesztergombi, G.; Vinogradov, L.; Wilczek, A.; Włodarczyk, Z.; Wojtaszek-Szwarz, A.; Wyszyński, O.; Zambelli, L.; Zipper, W.

    2014-06-01

    NA61/SHINE (SPS Heavy Ion and Neutrino Experiment) is a multi-purpose experimental facility to study hadron production in hadron-proton, hadron-nucleus and nucleus-nucleus collisions at the CERN Super Proton Synchrotron. It recorded the first physics data with hadron beams in 2009 and with ion beams (secondary 7Be beams) in 2011. NA61/SHINE has greatly profited from the long development of the CERN proton and ion sources and the accelerator chain as well as the H2 beamline of the CERN North Area. The latter has recently been modified to also serve as a fragment separator as needed to produce the Be beams for NA61/SHINE. Numerous components of the NA61/SHINE set-up were inherited from its predecessors, in particular, the last one, the NA49 experiment. Important new detectors and upgrades of the legacy equipment were introduced by the NA61/SHINE Collaboration. This paper describes the state of the NA61/SHINE facility — the beams and the detector system — before the CERN Long Shutdown I, which started in March 2013.

  18. Computing the qg → qg cross section using the BCFW recursion and introduction to jet tomography in heavy ion collisions via MHV techniques

    NASA Astrophysics Data System (ADS)

    Rabemananajara, Tanjona R.; Horowitz, W. A.

    2017-09-01

    To make predictions for particle physics processes, one has to compute the cross section of the specific process, as this is what one measures in a modern collider experiment such as the Large Hadron Collider (LHC) at CERN. It has proven extremely difficult to compute scattering amplitudes using conventional Feynman-diagram methods. Calculations with Feynman diagrams are realizations of a perturbative expansion, and one has to set up all topologically different diagrams for a given process up to a given order in the coupling of the theory. This quickly makes the calculation of scattering amplitudes intractable. Fortunately, one can simplify the calculations by considering the helicity amplitudes for the Maximally Helicity Violating (MHV) configurations. This can be extended to the formalism of on-shell recursion, which derives, in a much simpler way, the expression for a higher-point scattering amplitude from lower-point ones.
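
    For reference, the central objects alluded to above can be written compactly: the colour-ordered n-gluon tree amplitude in the MHV configuration is given by the Parke-Taylor formula, and the BCFW construction expresses an on-shell amplitude as a sum over factorization channels. Conventions for couplings, colour factors and overall phases vary between texts; the expressions below are schematic.

    ```latex
    % Parke-Taylor formula for the colour-ordered n-gluon MHV tree amplitude
    % (gluons i and j carry negative helicity, all others positive):
    A_n^{\text{MHV}}\big(1^+,\dots,i^-,\dots,j^-,\dots,n^+\big)
      = \frac{\langle i\,j\rangle^{4}}{\langle 1\,2\rangle\langle 2\,3\rangle\cdots\langle n\,1\rangle}

    % BCFW on-shell recursion: the amplitude is a sum over factorization channels
    % of two lower-point on-shell amplitudes joined by a scalar propagator,
    % with the sub-amplitudes evaluated at shifted (complex) momenta:
    A_n = \sum_{\text{channels}} \hat{A}_L \, \frac{1}{P^{2}} \, \hat{A}_R
    ```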

  19. Graphical processors for HEP trigger systems

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.

    2017-02-01

    General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to employ GPUs as accelerators in offline computations. With the steady decrease of GPU latencies and the increase in link and memory throughputs, the time is ripe for real-time applications using GPUs in high-energy physics data acquisition and trigger systems. We will discuss the use of online parallel computing on GPUs for synchronous low-level trigger systems, focusing on tests performed on the trigger of the CERN NA62 experiment. The latencies of all components need to be analysed, networking being the most critical; to keep it under control, we envisioned NaNet, an FPGA-based PCIe Network Interface Card (NIC) enabling a GPUDirect connection. Moreover, we discuss how specific trigger algorithms can be parallelised and thus benefit from a GPU implementation, in terms of increased execution speed. Such improvements are particularly relevant for the foreseen LHC luminosity upgrade, where highly selective algorithms will be crucial to maintain sustainable trigger rates with very high pileup.

  20. CERN Winter School on Supergravity, Strings, and Gauge Theory 2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. As its predecessors, this school is meant primarily for training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva.

  1. Gamma Ray Astronomy

    NASA Technical Reports Server (NTRS)

    Wu, S. T.

    2000-01-01

    The project has progressed successfully during this period of performance. The highlights of the Gamma Ray Astronomy team's efforts are: (1) Support of daily BATSE data operations, including receipt, archival and dissemination of data, quick-look science analysis, rapid gamma-ray burst and transient monitoring and response efforts, instrument state-of-health monitoring, and instrument commanding and configuration; (2) On-going scientific analysis, including production and maintenance of gamma-ray burst, pulsed source and occultation source catalogs, gamma-ray burst spectroscopy, studies of the properties of pulsars and black holes, and long-term monitoring of hard X-ray sources; (3) Maintenance and continuous improvement of BATSE instrument response and calibration databases; (4) Investigation of the use of solid-state detectors for eventual application in an instrument to perform all-sky monitoring of X-ray and gamma-ray sources with high sensitivity; and (5) Support of BATSE outreach activities, including seminars, colloquia and World Wide Web pages. The highlights of this effort are summarized in the publications and presentations list.

  2. A visiting scientist program for the burst and transient source experiment

    NASA Technical Reports Server (NTRS)

    Kerr, Frank J.

    1995-01-01

    During this project, Universities Space Research Association provided program management and the administration for overseeing the performance of the total contractual effort. The program director and administrative staff provided the expertise and experience needed to efficiently manage the program. USRA provided a program coordinator and visiting scientists to perform scientific research with Burst and Transient Source Experiment (BATSE) data. This research was associated with the primary scientific objectives of BATSE and with the various BATSE collaborations which were formed in response to the Compton Gamma Ray Observatory Guest Investigator Program. USRA provided administration for workshops, colloquia, the preparation of scientific documentation, etc., and also provided flexible program support in order to meet the on-going needs of MSFC's BATSE program. USRA performed tasks associated with the recovery, archiving, and processing of scientific data from BATSE. A bibliography of research in the astrophysics discipline is attached as Appendix 1. Visiting Scientists and Research Associates performed activities on this project, and their technical reports are attached as Appendix 2.

  3. CERN Collider, France-Switzerland

    NASA Image and Video Library

    2013-08-23

    This image, acquired by NASA's Terra spacecraft, shows the CERN Large Hadron Collider, the world's largest and highest-energy particle accelerator, lying beneath the French-Swiss border northwest of Geneva (yellow circle).

  4. CERN: A European laboratory for a global project

    NASA Astrophysics Data System (ADS)

    Voss, Rüdiger

    2015-06-01

    In the most important paradigm shift in its membership rules in 60 years, CERN in 2010 introduced a policy of "Geographical Enlargement" which for the first time opened the door to membership of non-European States in the Organization. This short article briefly reviews the history of CERN's membership rules, discusses the rationale behind the new policy and its relationship with the emerging global roadmap of particle physics, and gives a short overview of the status of the enlargement process.

  5. Review of CERN Data Centre Infrastructure

    NASA Astrophysics Data System (ADS)

    Andrade, P.; Bell, T.; van Eldik, J.; McCance, G.; Panzer-Steindel, B.; Coelho dos Santos, M.; Traylen, S.; Schwickerath, U.

    2012-12-01

    The CERN Data Centre is reviewing strategies for optimizing the use of the existing infrastructure and for expanding to a new data centre, by studying how other large sites are being operated. Over the past six months, CERN has been investigating modern and widely-used tools and procedures for virtualisation, clouds and fabric management in order to reduce operational effort, increase agility and support unattended remote data centres. This paper gives details of the project's motivations, current status and areas for future investigation.

  6. PARTICLE PHYSICS: CERN Gives Higgs Hunters Extra Month to Collect Data.

    PubMed

    Morton, O

    2000-09-22

    After 11 years of banging electrons and positrons together at higher energies than any other machine in the world, CERN, the European laboratory for particle physics, had decided to shut down the Large Electron-Positron collider (LEP) and install a new machine, the Large Hadron Collider (LHC), in its 27-kilometer tunnel. In 2005, the LHC will start bashing protons together at even higher energies. But tantalizing hints of a long-sought fundamental particle have forced CERN managers to grant LEP a month's reprieve.

  7. Réunion publique HR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-04-30

    Dear Colleagues, I should like to remind you that a public meeting organised by the HR Department will be held today: Friday 30 April 2010 at 9:30 am in the Main Auditorium (coffee from 9:00 am). During this meeting, general information will be given about: the CERN Admin e-guide, a new guide to the Organization's administrative procedures drawn up to facilitate the retrieval of practical information and to offer a user-friendly format; the Organization's Health Insurance Scheme (presentation by Philippe Charpentier, President of the CHIS Board); and the Pension Fund (presentation by Theodore Economou, Administrator of the CERN Pension Fund). A simultaneous transmission of this meeting will be broadcast in the BE Auditorium at Prévessin and will also be available at the following address: http://webcast.cern.ch. I look forward to your participation! Best regards, Anne-Sylvie Catherin, Head, Human Resources Department.

  8. Réunion publique HR

    ScienceCinema

    None

    2017-12-09

    Dear Colleagues, I should like to remind you that a public meeting organised by the HR Department will be held today: Friday 30 April 2010 at 9:30 am in the Main Auditorium (coffee from 9:00 am). During this meeting, general information will be given about: the CERN Admin e-guide, a new guide to the Organization's administrative procedures drawn up to facilitate the retrieval of practical information and to offer a user-friendly format; the Organization's Health Insurance Scheme (presentation by Philippe Charpentier, President of the CHIS Board); and the Pension Fund (presentation by Theodore Economou, Administrator of the CERN Pension Fund). A simultaneous transmission of this meeting will be broadcast in the BE Auditorium at Prévessin and will also be available at the following address: http://webcast.cern.ch. I look forward to your participation! Best regards, Anne-Sylvie Catherin, Head, Human Resources Department.

  9. CERN launches high-school internship programme

    NASA Astrophysics Data System (ADS)

    Johnston, Hamish

    2017-07-01

    The CERN particle-physics lab has hosted 22 high-school students from Hungary in a pilot programme designed to show teenagers how science, technology, engineering and mathematics are used at the laboratory.

  10. A program for the Bayesian Neural Network in the ROOT framework

    NASA Astrophysics Data System (ADS)

    Zhong, Jiahang; Huang, Run-Sheng; Lee, Shih-Chang

    2011-12-01

    We present a Bayesian Neural Network algorithm implemented in the TMVA package (Hoecker et al., 2007 [1]), within the ROOT framework (Brun and Rademakers, 1997 [2]). Compared to the conventional use of a Neural Network as a discriminator, this new implementation has advantages as a non-parametric regression tool, particularly for fitting probabilities. It provides functionalities including cost-function selection, complexity control and uncertainty estimation. An example of such an application in High Energy Physics is shown. The algorithm is available with ROOT releases later than 5.29.
    Program summary
    Program title: TMVA-BNN
    Catalogue identifier: AEJX_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJX_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: BSD license
    No. of lines in distributed program, including test data, etc.: 5094
    No. of bytes in distributed program, including test data, etc.: 1,320,987
    Distribution format: tar.gz
    Programming language: C++
    Computer: Any computer system or cluster with a C++ compiler and a UNIX-like operating system
    Operating system: Most UNIX/Linux systems. The application programs were thoroughly tested under Fedora and Scientific Linux CERN.
    Classification: 11.9
    External routines: ROOT package version 5.29 or higher (http://root.cern.ch)
    Nature of problem: Non-parametric fitting of multivariate distributions
    Solution method: An implementation of a Neural Network following the Bayesian statistical interpretation. Uses the Laplace approximation for the Bayesian marginalizations. Provides the functionalities of automatic complexity control and uncertainty estimation.
    Running time: Time consumption for the training depends substantially on the size of the input sample, the NN topology, the number of training iterations, etc. For the example in this manuscript, about 7 min was used on a PC/Linux with 2.0 GHz processors.
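
    As an illustration of how a TMVA method of this kind is typically booked and trained from Python, the following is a minimal PyROOT sketch. It assumes a recent ROOT 6 installation with the TMVA DataLoader interface (in the ROOT 5.29 era described above, variables were registered directly on the Factory), and it books a standard MLP as a stand-in because the abstract does not give the BNN-specific booking options; the input file name, tree names and variable names are hypothetical.

        import ROOT
        from ROOT import TMVA

        TMVA.Tools.Instance()

        # Hypothetical input: a ROOT file with signal and background TTrees,
        # each containing float branches "x1" and "x2".
        infile = ROOT.TFile.Open("events.root")          # placeholder file name
        sig_tree = infile.Get("signal_tree")             # placeholder tree names
        bkg_tree = infile.Get("background_tree")

        outfile = ROOT.TFile("tmva_output.root", "RECREATE")
        factory = TMVA.Factory("BNNExample", outfile,
                               "!V:!Silent:AnalysisType=Classification")

        loader = TMVA.DataLoader("dataset")
        loader.AddVariable("x1", "F")
        loader.AddVariable("x2", "F")
        loader.AddSignalTree(sig_tree, 1.0)
        loader.AddBackgroundTree(bkg_tree, 1.0)
        loader.PrepareTrainingAndTestTree(ROOT.TCut(""),
                                          "SplitMode=Random:NormMode=NumEvents:!V")

        # A standard feed-forward network is booked here; the Bayesian variant
        # described in the paper adds complexity control and uncertainty
        # estimation on top of this kind of workflow.
        factory.BookMethod(loader, TMVA.Types.kMLP, "MLP",
                           "NCycles=200:HiddenLayers=N+2:!V")

        factory.TrainAllMethods()
        factory.TestAllMethods()
        factory.EvaluateAllMethods()
        outfile.Close()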

  11. OBITUARY: Maurice Jacob (1933-2007)

    NASA Astrophysics Data System (ADS)

    Quercigh, Emanuele; Šándor, Ladislav

    2008-04-01

    Maurice Jacob passed away on 2 May 2007. With his death, we have lost one of the founding fathers of the ultra-relativistic heavy-ion programme. His interest in high-energy nuclear physics started in 1981 when alpha-alpha collisions could first be studied in the CERN ISR. An enthusiastic supporter of ion beam experiments at CERN, Maurice was at the origin of the 1982 Quark Matter meeting in Bielefeld [1] which brought together more than 100 participants from both sides of the Atlantic, showing a good and enthusiastic constituency for such research. There were twice as many the following year at Brookhaven. Finally, in the mid-eighties, a heavy-ion programme was approved both at CERN and at Brookhaven, involving as many nuclear as particle physicists. It was the start of a fruitful interdisciplinary collaboration which is nowadays continuing both at RHIC and at the LHC. Maurice actively followed the development of this field, reporting at a number of conferences and meetings (Les Arcs, Bielefeld, Beijing, Brookhaven, Lenox, Singapore, Taormina, ...). This activity culminated in 2000, when Maurice, together with Ulrich Heinz, summarized the main results of the CERN SPS heavy-ion experiments and the evidence obtained for a new state of matter [2]. Maurice was a brilliant theoretical physicist. His many contributions have been summarized in a recent article in the CERN Courier by two leading CERN theorists, John Ellis and Andre Martin [3]. The following is an excerpt from their article: 'He began his research career at Saclay and, while still a PhD student, he continued brilliantly during a stay at Brookhaven. It was there in 1959 that Maurice, together with Giancarlo Wick, developed the helicity amplitude formalism that is the basis of many modern theoretical calculations. Maurice obtained his PhD in 1961 and, after a stay at Caltech, returned to Saclay. A second American foray was to SLAC, where he and Sam Berman made the crucial observation that the point-like structures (partons) seen in deep-inelastic scattering implied the existence of high-transverse-momentum processes in proton-proton collisions, as the ISR at CERN subsequently discovered. In 1967 Maurice joined CERN, where he remained, apart from influential visits to Yale, Fermilab and elsewhere, until his retirement in 1998. He became one of the most respected international experts on the phenomenology of strong interactions, including diffraction, scaling, high-transverse-momentum processes and the formation of quark-gluon plasma. In particular, he pioneered the studies of inclusive hadron-production processes, including scaling and its violations. Also, working with Ron Horgan, he made detailed predictions for the production of jets at CERN's proton-antiproton collider. The UA2 and UA1 experiments subsequently discovered these. He was also interested in electron-positron colliders, making pioneering calculations, together with Tai Wu, of radiation in high-energy collisions. Maurice was one of the scientific pillars of CERN, working closely with experimental colleagues in predicting and interpreting results from successive CERN colliders. He was indefatigable in organizing regular meetings on ISR physics, bringing together theorists and experimentalists to debate the meaning of new results and propose new measurements. He was one of the strongest advocates of Carlo Rubbia's proposal for a proton-antiproton collider at CERN, and was influential in preparing and advertising its physics. In 1978 he organized the Les Houches workshop that brought the LEP project to the attention of the wider European particle physics community. He also organized the ECFA workshop at Lausanne in 1984 that made the first exploration of the possible physics of the LHC. It is a tragedy that Maurice has not lived to enjoy data from the LHC.' References [1] Jacob M and Satz H (eds) 1982 Proc. Workshop on Quark Matter Formation and Heavy Ion Collisions, Bielefeld, 10-14 May 1982 (Singapore: World Scientific Publishing) [2] Heinz U W and Jacob M 2000 Evidence for a new state of matter: An assessment of the results from the CERN lead beam program. Preprint nucl-th/0002042 [3] Ellis J and Martin A 2007 CERN Courier 47 issue 6

  12. Lectures from the European RTN Winter School on Strings, Supergravity and Gauge Fields, CERN, 15-19 January 2007

    NASA Astrophysics Data System (ADS)

    Derendinger, J.-P.; Scrucca, C. A.; Uranga, A.

    2007-11-01

    This special issue is devoted to the proceedings of the conference 'Winter School on Strings, Supergravity and Gauge Theories', which took place at CERN, the European Centre for Nuclear Research, in Geneva, Switzerland, from 15 to 19 January 2007. This event was organized in the framework of the European Mobility Research and Training Network entitled 'Constituents, Fundamental Forces and Symmetries of the Universe'. It is part of a yearly series of scientific schools, which represents what is by now a well established tradition. The previous conferences have been held at SISSA, in Trieste, Italy, in February 2005 and at CERN in January 2006. The next will again take place at CERN, in January 2008. The school was primarily meant for young doctoral students and postdoctoral researchers working in the area of string theory. It consisted of several general lectures of four hours each, the notes of which are published in the present proceedings, and seven working group discussion sessions, focused on specific topics of the network research program. It was attended by approximately 250 participants. The topics of the lectures were chosen to provide an introduction to some of the areas of recent progress, and to the open problems, in string theory. String theory is a compelling candidate for a theory of all interactions. A basic challenge in this field is therefore to explore the connection between string theory models and the laws of physics in different realms, like high-energy particle physics, early cosmology, or the physics of strongly coupled gauge theories. Concerning the exploration of string theory compactifications leading to realistic models of particle physics, one of the main obstacles is the proper understanding of supersymmetry breaking. The lecture notes by Nathan Seiberg review the realization of spontaneous breaking of supersymmetry in field theory, including recent developments via the use of meta-stable long-lived vacua. It is possible that such an understanding proves crucial in the realization of supersymmetry breaking in string theory. A second long-standing obstacle, which is being tackled with recent techniques, is moduli stabilization, namely the removal of unwanted massless scalar fields from string models. The present status of this problem, and its prospects of solution via the introduction of general sets of fluxes in the compactification space, were covered in the lectures by Brian Wecht. Application of these ideas to connect string theory to particle physics will require a good understanding of the experimental situation at the forthcoming collider LHC at CERN, and of the detection tools for signals of new physics, as reviewed in the lectures by Joe Lykken (not covered in the present issue). Along a different line, the role of moduli fields in string theory is expected to provide a natural explanation of models of inflation, and thus of the origin of the cosmological evolution of our universe. The lecture notes by Cliff Burgess provide a review of big bang cosmology, inflation, and its possible explanation in terms of string theory constructions, including some of the most recent results in the field (these notes also appear in the proceedings of two other schools held in the same period). A surprising recent application of string theory is the description, via the ideas of holography and duality between string theories and gauge theories, of physical properties of quantum chromodynamics at high temperature. Indeed, experimental data on the physical properties of the quark-gluon plasma produced in heavy-ion collisions at the RHIC experiment in Brookhaven (and soon at the LHC at CERN) can be recovered, at a semi-quantitative level, from computations in a string theory dual of the system. These applications are reviewed in the lectures by David Mateos. The conference was financially supported by the European Commission under contract MRTN-CT-2004-005104 and by CERN. It was jointly organized by the Physics Institute of the University of Neuchâtel and the Theory Unit of the Physics Division of CERN. It is a great pleasure for us to warmly thank the Theory Unit of CERN for its very kind hospitality and for the high quality of the assistance and the infrastructures that it has provided. We also acknowledge helpful administrative assistance from the Physics Institute of the University of Neuchâtel. A special acknowledgement also goes to Denis Frank, for his very valuable help in preparing the conference web pages.

  13. CERN automatic audio-conference service

    NASA Astrophysics Data System (ADS)

    Sierra Moral, Rodrigo

    2010-04-01

    Scientists from all over the world need to collaborate with CERN on a daily basis. They must be able to communicate effectively on their joint projects at any time; as a result, telephone conferences have become indispensable and widely used. Managed by 6 operators, CERN already handles more than 20,000 hours and 5,700 audio-conferences per year. However, the traditional telephone-based audio-conference system needed to be modernized in three ways: firstly, to provide the participants with more autonomy in the organization of their conferences; secondly, to eliminate the constraints of manual intervention by operators; and thirdly, to integrate the audio-conferences into a collaborative working framework. The large number, and hence cost, of the conferences prohibited externalization, and so the CERN telecommunications team drew up a specification to implement a new system. It was decided to use a new commercial collaborative audio-conference solution based on the SIP protocol. The system was tested as the first European pilot and several improvements (such as billing, security, redundancy, ...) were implemented based on CERN's recommendations. The new automatic conference system has been operational since the second half of 2006. It is very popular with users and the number of conferences has doubled in the past two years.

  14. Measurements and FLUKA simulations of bismuth and aluminium activation at the CERN Shielding Benchmark Facility (CSBF)

    NASA Astrophysics Data System (ADS)

    Iliopoulou, E.; Bamidis, P.; Brugger, M.; Froeschl, R.; Infantino, A.; Kajimoto, T.; Nakao, N.; Roesler, S.; Sanami, T.; Siountas, A.

    2018-03-01

    The CERN High Energy AcceleRator Mixed field facility (CHARM) is located in the CERN Proton Synchrotron (PS) East Experimental Area. The facility receives a pulsed proton beam from the CERN PS with a beam momentum of 24 GeV/c, 5 × 10^11 protons per pulse, a pulse length of 350 ms and a maximum average beam intensity of 6.7 × 10^10 p/s; the beam then impacts on the CHARM target. The shielding of the CHARM facility also includes the CERN Shielding Benchmark Facility (CSBF), situated laterally above the target. This facility consists of 80 cm of cast iron and 360 cm of concrete, with barite concrete in some places. Activation samples of bismuth and aluminium were placed in the CSBF and in the CHARM access corridor in July 2015. Monte Carlo simulations with the FLUKA code have been performed to estimate the specific production yields for these samples. The results estimated by the FLUKA Monte Carlo simulations are compared to activation measurements of these samples. The comparison between FLUKA simulations and the measured values from γ-spectrometry gives agreement to better than a factor of 2.
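
    To make the comparison quantitative, the measured and simulated activities have to be brought to the same reference conditions via the usual activation and decay corrections. The sketch below, which uses placeholder numbers for the production yield, half-life and timing (only the maximum average beam intensity is taken from the text above), shows the bookkeeping that turns a specific production yield into an expected activity and then into a simulation-to-measurement ratio.

        import math

        # Hypothetical inputs for one sample/radionuclide (placeholders, not
        # values from the paper, except the quoted maximum beam intensity).
        beam_intensity   = 6.7e10              # protons per second on target
        yield_per_proton = 1.0e-6              # produced nuclei per primary proton (placeholder)
        half_life_s      = 6.24 * 24 * 3600    # placeholder half-life of the product
        t_irradiation_s  = 5 * 24 * 3600       # placeholder irradiation time
        t_cooling_s      = 2 * 24 * 3600       # placeholder cooling time before counting

        decay_const = math.log(2.0) / half_life_s
        production_rate = beam_intensity * yield_per_proton   # nuclei per second

        # Activity at end of bombardment, then corrected for the cooling time.
        activity_eob = production_rate * (1.0 - math.exp(-decay_const * t_irradiation_s))
        activity_at_counting = activity_eob * math.exp(-decay_const * t_cooling_s)

        measured_activity = 0.8 * activity_at_counting     # placeholder measurement
        ratio = activity_at_counting / measured_activity   # FLUKA / gamma-spectrometry
        print(f"expected activity: {activity_at_counting:.3e} Bq, sim/meas = {ratio:.2f}")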

  15. Memorial W.Gentner

    ScienceCinema

    None

    2018-05-25

    The DG H. Schopper gives an introduction to the commemoration and ceremony celebrating the life and work of Professor Wolfgang Gentner. W. Gentner, a German physicist born in 1906 in Frankfurt, who died in September 1980 in Heidelberg, was director of CERN from 1955 to 1960, president of the Scientific Policy Committee from 1968 to 1971 and president of the Council of CERN from 1972 to 1974. He was one of the founders of CERN, and four people who knew him well pay tribute to him, among them one of his students, as well as J.B. Adams and O. Sheffard.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The DG H. Schopper gives an introduction to the commemoration and ceremony celebrating the life and work of Professor Wolfgang Gentner. W. Gentner, a German physicist born in 1906 in Frankfurt, who died in September 1980 in Heidelberg, was director of CERN from 1955 to 1960, president of the Scientific Policy Committee from 1968 to 1971 and president of the Council of CERN from 1972 to 1974. He was one of the founders of CERN, and four people who knew him well pay tribute to him, among them one of his students, as well as J.B. Adams and O. Sheffard.

  17. OPERA - First Beam Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamura, M.

    2008-02-21

    OPERA is a long-baseline neutrino oscillation experiment designed to detect tau-neutrino appearance and to prove that the origin of the atmospheric muon-neutrino deficit observed by Kamiokande is neutrino oscillation. A hybrid emulsion detector, whose weight is about 1.3 kton, has been installed in the Gran Sasso laboratory. A new muon-neutrino beam line, CNGS, has been constructed at CERN to send neutrinos to Gran Sasso, 730 km away from CERN. In 2006, the first neutrinos were sent from CERN to LNGS and were successfully detected by the OPERA detector, as planned.

  18. Lectures from the European RTN Winter School on Strings, Supergravity and Gauge Theories, CERN, 16-20 January 2006

    NASA Astrophysics Data System (ADS)

    Derendinger, J.-P.; Scrucca, C. A.; Uranga, A. M.

    2006-11-01

    This special issue is devoted to the proceedings of the conference 'Winter School on Strings, Supergravity and Gauge Theories', which took place at CERN, the European Centre for Nuclear Research, in Geneva, Switzerland, from the 16 to the 20 of January 2006. This event was organized in the framework of the European Mobility Research and Training Network entitled 'Constituents, Fundamental Forces and Symmetries of the Universe'. It is part of a yearly series of scientific schools which have become a traditional rendezvous for young researchers of the community. The previous one was held at SISSA, in Trieste, Italy, in February 2005, and the next one will take place again at CERN, in January 2007. The school was primarily meant for young doctoral students and postdoctoral researchers working in the area of string theory. It consisted of five general lectures of four hours each, whose notes are published in the present proceedings, and five working group discussion sessions, focused on specific topics of the network research program. It was attended by approximately 250 participants. The topics of the lectures were chosen to provide an introduction to some of the areas of recent progress and to the open problems in string theory. String theory is expected to provide insights into the description of systems where the role of gravity is crucial. One prominent example of such systems are time-dependent backgrounds with big bang singularities, whose status in string theory is reviewed in the lecture notes by Ben Craps. In another main problem in quantum gravity, string theory gives a fascinating microscopic description of black holes and their properties. The lectures by Shiraz Minwalla review the thermal properties of black holes from their microscopic description in terms of a holographically dual large N field theory. Progress in the description of black hole microstates, and its interplay with the macroscopic description in terms of supergravity solutions via the attractor mechanism, are covered by the lectures by Atish Dabholkar and Boris Pioline. A final important mainstream topic in string theory, being a higher-dimensional theory, is its compactification to four dimensions, and the computation of four-dimensional physical properties in terms of the properties of the internal space. The lectures by Mariana Graña review recent progress in the classification of the most general supersymmetric backgrounds describing the compactified dimensions, and their role in determining the number of massless scalar moduli fields in four dimensions. The conference was financially supported by the European Commission under contract MRTN-CT-2004-005104 and by CERN. It was jointly organized by the Physics Institute of the University of Neuchâtel and the Theory Unit of the Physics Division of CERN. It is a great pleasure for us to warmly thank the Theory Unit of CERN for its very kind hospitality and for the high quality of the services and infrastructure that it has provided. We also acknowledge helpful administrative assistance from the Physics Institute of the University of Neuchâtel. Special thanks go finally to Denis Frank for his very valuable help in preparing the conference web pages, and to J Rostant, A-M Perrin and M-S Vascotto for their continuous and very reliable assistance.

  19. Membership Finland

    ScienceCinema

    None

    2018-05-18

    The DG C. Rubbia and the Vice-President of the CERN Council give a warm welcome to Finland as the 15th member of CERN since 1 January 1991, in the presence of the Secretary-General and the ambassador.

  20. Visit CD

    ScienceCinema

    None

    2017-12-09

    The DG H. Schopper welcomes the ambassadors of the Member States and the representatives of countries with which CERN maintains close relations, and gives a presentation on CERN's activities.

  1. Terbium Radionuclides for Theranostics Applications: A Focus On MEDICIS-PROMED

    NASA Astrophysics Data System (ADS)

    Cavaier, R. Formento; Haddad, F.; Sounalet, T.; Stora, T.; Zahi, I.

    A new facility, named CERN-MEDICIS, is under construction at CERN to produce radionuclides for medical applications. In parallel, MEDICIS-PROMED, a Marie Sklodowska-Curie Innovative Training Network of the European Commission's Horizon 2020 program, is being coordinated by CERN to train young scientists in the production and use of innovative radionuclides and to develop a network of experts within Europe. One program within MEDICIS-PROMED is to determine the feasibility of producing innovative radioisotopes for theranostics using a commercial middle-sized high-current cyclotron and the mass-separation technology developed at CERN-MEDICIS. This will allow the production of high-specific-activity radioisotopes not achievable with the common post-processing by chemical separation. Radioisotopes of scandium, copper, arsenic and terbium have been identified. Preliminary studies of activation yields and of the optimization of irradiation parameters for the production of Tb-149 will be described.

  2. Cryogenic Control System Migration and Developments towards the UNICOS CERN Standard at INFN

    NASA Astrophysics Data System (ADS)

    Modanese, Paolo; Calore, Andrea; Contran, Tiziano; Friso, Alessandro; Pengo, Marco; Canella, Stefania; Burioli, Sergio; Gallese, Benedetto; Inglese, Vitaliano; Pezzetti, Marco; Pengo, Ruggero

    The cryogenic control systems at the Laboratori Nazionali di Legnaro (LNL) are undergoing an important and radical modernization, allowing all the plants' control and supervision systems to be renewed in a homogeneous way towards the CERN UNICOS standard. Before the UNICOS migration project started, there were as many as 7 different types of PLC and 7 different types of SCADA, each one requiring its own particular programming language. Under these conditions, even a simple modification and/or integration in the programs or in the supervision layer required the intervention of a system-integrator company specialized in the specific control system. Furthermore, it implied that the operators had to be trained on the different types of control systems. CERN UNICOS, developed for the LHC [1], was chosen for its reliability and because it is planned to run and be maintained for decades to come. The complete migration is part of an agreement between CERN and INFN.

  3. CMS Centres Worldwide - a New Collaborative Infrastructure

    NASA Astrophysics Data System (ADS)

    Taylor, Lucas

    2011-12-01

    The CMS Experiment at the LHC has established a network of more than fifty inter-connected "CMS Centres" at CERN and in institutes in the Americas, Asia, Australasia, and Europe. These facilities are used by people doing CMS detector and computing grid operations, remote shifts, data quality monitoring and analysis, as well as education and outreach. We present the computing, software, and collaborative tools and videoconferencing systems. These include permanently running "telepresence" video links (hardware-based H.323, EVO and Vidyo), Webcasts, and generic Web tools such as CMS-TV for broadcasting live monitoring and outreach information. Being Web-based and experiment-independent, these systems could easily be extended to other organizations. We describe the experiences of using CMS Centres Worldwide in the CMS data-taking operations as well as for major media events with several hundred TV channels, radio stations, and many more press journalists simultaneously around the world.

  4. PREFACE: Lectures from the CERN Winter School on Strings, Supergravity and Gauge Theories, CERN, 9-13 February 2009

    NASA Astrophysics Data System (ADS)

    Uranga, A. M.

    2009-11-01

    This special section is devoted to the proceedings of the conference `Winter School on Strings, Supergravity and Gauge Theories', which took place at CERN, the European Centre for Nuclear Research, in Geneva, Switzerland 9-13 February 2009. This event is part of a yearly series of scientific schools, which represents a well established tradition. Previous events have been held at SISSA, in Trieste, Italy, in February 2005 and at CERN in January 2006, January 2007 and January 2008, and were funded by the European Mobility Research and Training Network `Constituents, Fundamental Forces and Symmetries of the Universe'. The next event will take place again at CERN, in January 2010. The school was primarily meant for young doctoral students and postdoctoral researchers working in the area of string theory. It consisted of several general lectures of four hours each, whose notes are published in this special section, and six working group discussion sessions, focused on specific topics of the network research program. It was well attended by over 200 participants. The topics of the lectures were chosen to provide an introduction to some of the areas of recent progress, and to the open problems, in string theory. One of the most active areas in string theory in recent years has been the AdS/CFT or gauge/gravity correspondence, which proposes the complete equivalence of string theory on (asymptotically) anti de Sitter spacetimes with certain quantum (gauge) field theories. The duality has recently been applied to understanding the hydrodynamical properties of a hot plasma in gauge theories (like the quark-gluon plasma created in heavy ion collisions at the RHIC experiment at Brookhaven, and soon at the LHC at CERN) in terms of a dual gravitational AdS theory in the presence of a black hole. These developments were reviewed in the lecture notes by M Rangamani. In addition, the AdS/CFT duality has been proposed as a tool to study interesting physical properties in other physical systems described by quantum field theory, for instance in the context of a condensed matter system. The lectures by S Hartnoll provided an introduction to this recent development with an emphasis on the dual holographic description of superconductivity. Finally, ideas inspired by the AdS/CFT correspondence are yielding deep insights into fundamental questions of quantum gravity, like the entropy of black holes and its interpretation in terms of microstates. The lectures by S Mathur reviewed the black hole entropy and information paradox, and the proposal for its resolution in terms of `fuzzball' microstates. Further sets of lectures, not included in this special section, by F Zwirner and V Mukhanov, covered phenomenological aspects of high energy physics beyond the Standard Model and of cosmology. The coming experimental data in these two fields are expected to foster new developments in connecting string theory to the real world. The conference was financially supported by CERN and partially by the Arnold Sommerfeld Center for Theoretical Physics of the Ludwig Maximilians University of Munich. It is a great pleasure for us to warmly thank the Theory Unit of CERN for its very kind hospitality and for the high quality of the assistance and the infrastructures that it has provided. A M Uranga CERN, Switzerland Guest Editor

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andre, J.M.; et al.

    The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at high aggregate throughput to the high-level trigger farm. The DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbit/s Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbit/s Infiniband FDR Clos network has been chosen for the event builder. This paper presents the implementation and performance of the event-building system.

  6. Evolution of user analysis on the grid in ATLAS

    NASA Astrophysics Data System (ADS)

    Dewhurst, A.; Legger, F.; ATLAS Collaboration

    2017-10-01

    More than one thousand physicists analyse data collected by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN through 150 computing facilities around the world. Efficient distributed analysis requires optimal resource usage and the interplay of several factors: robust grid and software infrastructures, and system capability to adapt to different workloads. The continuous automatic validation of grid sites and the user support provided by a dedicated team of expert shifters have been proven to provide a solid distributed analysis system for ATLAS users. Typical user workflows on the grid, and their associated metrics, are discussed. Measurements of user job performance and typical requirements are also shown.

  7. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    NASA Astrophysics Data System (ADS)

    Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.

    2014-06-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.
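
    As a rough illustration of how such worker nodes might be provisioned programmatically on an OpenStack cloud with the Gluster volume mounted at boot, the following sketch uses the openstacksdk Python client; the cloud name, image, flavor, network and Gluster server/volume names are all placeholders, and the actual CMS analysis infrastructure described in the paper is considerably more involved.

        import openstack

        # Connect using credentials from clouds.yaml or environment variables.
        conn = openstack.connect(cloud="my-cloud")        # placeholder cloud name

        # Cloud-init user data that mounts a GlusterFS volume on first boot
        # (server and volume names are placeholders).
        user_data = """#cloud-config
        packages:
          - glusterfs-client
        runcmd:
          - mkdir -p /data
          - mount -t glusterfs gluster-server:/analysis-volume /data
        """

        # Image, flavor and network names are placeholders for whatever the
        # private cloud actually provides.
        server = conn.create_server(
            name="cms-analysis-worker-01",
            image="CentOS-7",
            flavor="m1.large",
            network="analysis-net",
            userdata=user_data,
            wait=True,
        )
        print("worker ready:", server.name, server.status)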

  8. Incoherent vector mesons production in PbPb ultraperipheral collisions at the LHC

    NASA Astrophysics Data System (ADS)

    Xie, Ya-Ping; Chen, Xurong

    2017-03-01

    The incoherent rapidity distributions of vector mesons are computed in the dipole model for PbPb ultraperipheral collisions at the CERN Large Hadron Collider (LHC). The IIM model, fitted to newer data, is employed for the dipole amplitude. The Boosted Gaussian and Gaus-LC wave functions for the vector mesons are implemented in the calculations as well. Predictions for the J/ψ, ψ(2s), ρ and ϕ incoherent rapidity distributions are evaluated and compared with experimental data and other theoretical predictions in this paper. For J/ψ, we obtain predictions of the incoherent rapidity distributions that are closer to the experimental data than previous calculations in the IIM model.

  9. Trigger and data acquisition system for the N-N̄ experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baldo-Ceolin, M.; Bobisut, F.; Bonaiti, V.

    1991-04-01

    In this paper the trigger and data acquisition system of the N-N̄ experiment at the Institut Laue-Langevin in Grenoble is presented, together with the CAMAC modules especially designed for this experiment. The trigger system is organized on three logical levels; it works in the presence of a high level of beam-induced noise, without beam-pulse synchronization, looking for a very rare signal. The data acquisition is based on a MicroVAX II computer, in a cluster with 4 VAXstations, using the DAQP software developed at CERN. The system has been working for a year with high efficiency and reliability.

  10. The ISOLDE control system

    NASA Astrophysics Data System (ADS)

    Deloose, I.; Pace, A.

    1994-12-01

    The two CERN isotope separators named ISOLDE have been running on the new Personal Computer (PC) based control system since April 1992. The new architecture that makes heavy use of the commercial software and hardware of the PC market has been implemented on the 1700 geographically distributed control channels of the two separators and their experimental area. Eleven MSDOS Intel-based PCs with approximately 80 acquisition and control boards are used to access the equipment and are controlled from three PCs running Microsoft Windows used as consoles through a Novell Local Area Network. This paper describes the interesting solutions found and discusses the reduced programming workload and costs that have been obtained.

  11. Exclusive photoproduction of vector mesons in proton-lead ultraperipheral collisions at the LHC

    NASA Astrophysics Data System (ADS)

    Xie, Ya-Ping; Chen, Xurong

    2018-02-01

    Rapidity distributions of vector mesons are computed in the dipole model for proton-lead ultraperipheral collisions (UPCs) at the CERN Large Hadron Collider (LHC). The dipole-model framework is implemented in the calculation of the cross sections of the photon-hadron interaction. The bCGC model and Boosted Gaussian wave functions are employed in the scattering amplitude. We obtain predictions for the rapidity distributions of the J/ψ meson in proton-lead ultraperipheral collisions. The predictions give a good description of the experimental data of ALICE. The rapidity distributions of the ϕ, ω and ψ(2s) mesons in proton-lead ultraperipheral collisions are also presented in this paper.
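
    For orientation, the leading-order structure of such a UPC calculation is a convolution of the equivalent-photon flux of the lead nucleus with the gamma-proton cross section, dσ/dy ≈ k·(dN_γ/dk)·σ_γp(W). The sketch below implements this with a standard point-like photon-flux formula and a simple power-law form for σ(γp → J/ψ p); it is only a rough estimate under those assumptions and is not the dipole-model (bCGC plus Boosted Gaussian) calculation of the paper, and the numerical parameters (power-law normalisation and exponent, minimum impact parameter, collision energy) are illustrative placeholders.

        import math
        from scipy.special import k0, k1   # modified Bessel functions K0, K1

        HBARC = 0.1973                       # GeV*fm
        ALPHA = 1.0 / 137.036
        Z_PB, M_JPSI = 82, 3.097             # lead charge, J/psi mass [GeV]
        SQRT_SNN = 5020.0                    # placeholder sqrt(s_NN) [GeV]
        B_MIN = 7.1                          # placeholder minimum impact parameter [fm]
        GAMMA = SQRT_SNN / (2.0 * 0.938)     # Lorentz factor of the Pb beam

        def photon_flux_k(k):
            """k * dN/dk for a point-like charge Z with b > B_MIN (standard EPA formula)."""
            xi = k * B_MIN / (GAMMA * HBARC)
            return (2.0 * Z_PB**2 * ALPHA / math.pi) * (
                xi * k0(xi) * k1(xi) - 0.5 * xi**2 * (k1(xi)**2 - k0(xi)**2))

        def sigma_gamma_p(w):
            """Illustrative power-law form for sigma(gamma p -> J/psi p) in nb."""
            return 81.0 * (w / 90.0) ** 0.67   # placeholder normalisation and exponent

        def dsigma_dy(y):
            """Leading-order photoproduction estimate of dsigma/dy in nb."""
            k = 0.5 * M_JPSI * math.exp(y)                    # photon energy for rapidity y
            w = math.sqrt(M_JPSI * SQRT_SNN * math.exp(y))    # gamma-p c.m. energy
            return photon_flux_k(k) * sigma_gamma_p(w)

        for y in (-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0):
            print(f"y = {y:+.1f}  dsigma/dy ~ {dsigma_dy(y):8.1f} nb")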

  12. 24-channel dual microcontroller-based voltage controller for ion optics remote control

    NASA Astrophysics Data System (ADS)

    Bengtsson, L.

    2018-05-01

    The design of a 24-channel voltage-control instrument for Wenzel Elektronik N1130 NIM modules is described. The instrument is remotely controlled from a LabVIEW GUI on a host Windows computer and is intended for ion-optics control in electron-affinity measurements on negative ions at the CERN-ISOLDE facility. Each channel has a resolution of 12 bits and normally distributed noise with a standard deviation of <1 mV. The instrument is designed as a standard 2-unit NIM module in which the electronic hardware consists of a printed circuit board with two asynchronously operating microcontrollers.
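
    A 12-bit channel can only approximate a requested voltage to the nearest of 4096 discrete levels, so the host software has to convert each set-point to an integer code before sending it to the microcontrollers. The sketch below shows that conversion and the resulting quantization step; the full-scale output range and the textual command format are placeholders, since the abstract does not specify the instrument's command protocol.

        V_FULL_SCALE = 10.0      # placeholder full-scale output voltage per channel [V]
        N_BITS = 12              # resolution quoted in the paper
        N_LEVELS = 2 ** N_BITS   # 4096 discrete output levels

        def voltage_to_code(voltage):
            """Convert a requested voltage into the nearest 12-bit DAC code (0..4095)."""
            code = round(voltage / V_FULL_SCALE * (N_LEVELS - 1))
            return max(0, min(N_LEVELS - 1, code))   # clamp to the valid range

        def set_channel(channel, voltage):
            """Format a hypothetical set-point command for one of the 24 channels."""
            return f"SET {channel:02d} {voltage_to_code(voltage):04d}"

        step_mv = V_FULL_SCALE / (N_LEVELS - 1) * 1000.0
        print(f"quantization step: {step_mv:.2f} mV per count")
        print(set_channel(7, 3.3))     # e.g. "SET 07 1351"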

  13. Offering Global Collaboration Services beyond CERN and HEP

    NASA Astrophysics Data System (ADS)

    Fernandes, J.; Ferreira, P.; Baron, T.

    2015-12-01

    The CERN IT department has built over the years a performant and integrated ecosystem of collaboration tools, from videoconference and webcast services to event-management software. These services have been designed and evolved in very close collaboration with the various communities surrounding the laboratory and have been massively adopted by CERN users. To cope with this very heavy usage, global infrastructures have been deployed which take full advantage of CERN's international and global nature. If these services and tools are instrumental in enabling the worldwide collaboration which generates major HEP breakthroughs, they would certainly also benefit other sectors of science in which globalization has already taken place. Some of these services are driven by commercial software (Vidyo or Wowza, for example); others have been developed internally and have already been made available to the world as Open Source Software, in line with CERN's spirit and mission. Indico, for example, is now installed in 100+ institutes worldwide. But providing the software is often not enough, and institutes, collaborations and project teams do not always possess the expertise or the human or material resources needed to set up and maintain such services. Regional and national institutions have to answer needs which are increasingly global and often at odds with their operational capabilities or organizational mandate, and so they are looking at existing worldwide service offers such as CERN's. We believe that the accumulated experience obtained through the operation of a large-scale worldwide collaboration service, combined with CERN's global network and its recently deployed Agile Infrastructure, would allow the Organization to set up and operate collaborative services, such as Indico and Vidyo, at a much larger scale and on behalf of worldwide research and education institutions, and thus answer these pressing demands while optimizing resources at a global level. Such services would be built on a robust and massively scalable Indico server, to which the concept of communities would be added, and which would then serve as a hub for accessing other collaboration services such as Vidyo, on the same simple and successful model currently in place for CERN users. This talk will describe this vision, its benefits and the steps that have already been taken to make it come to life.

  14. Centralized Monitoring of the Microsoft Windows-based computers of the LHC Experiment Control Systems

    NASA Astrophysics Data System (ADS)

    Varela Rodriguez, F.

    2011-12-01

    The control system of each of the four major experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise, identify errors in and troubleshoot such a large system. Although the monitoring of the performance of the Linux computers and their processes has been available since the first versions of the tool, it is only recently that the software package has been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and the functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the experiments and it has already proven to be very efficient at optimizing the running systems and detecting misbehaving processes or nodes.
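
    For readers unfamiliar with WMI, the snippet below shows the kind of queries such a client issues to a Windows node, here using the third-party Python 'wmi' package rather than the client described in the paper; the remote host name and credentials are placeholders, and the script must run on a Windows machine with pywin32 and the wmi package installed.

        import wmi   # third-party package wrapping Windows Management Instrumentation

        # Placeholder remote node and credentials; omit them to query the local host.
        node = wmi.WMI(computer="lhc-win-node-01", user="monitor", password="secret")

        # CPU load per processor (Win32_Processor is a standard WMI class).
        for cpu in node.Win32_Processor():
            print(f"{cpu.DeviceID}: load {cpu.LoadPercentage}%")

        # Free space on local fixed disks (DriveType=3).
        for disk in node.Win32_LogicalDisk(DriveType=3):
            free_gb = int(disk.FreeSpace) / 1e9
            total_gb = int(disk.Size) / 1e9
            print(f"{disk.DeviceID} {free_gb:.1f} GB free of {total_gb:.1f} GB")

        # Running processes, e.g. to spot a missing or misbehaving SCADA process.
        names = [p.Name for p in node.Win32_Process()]
        print("SCADA-like processes:", [n for n in names if "PVSS" in n or "WCC" in n])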

  15. LEMON - LHC Era Monitoring for Large-Scale Infrastructures

    NASA Astrophysics Data System (ADS)

    Marian, Babik; Ivan, Fedorko; Nicholas, Hook; Hector, Lansdale Thomas; Daniel, Lenkes; Miroslav, Siket; Denis, Waldron

    2011-12-01

    At the present time, computer centres are facing a massive rise in virtualization and cloud computing, as these solutions bring advantages to service providers and consolidate the computer centre resources. However, as a result the monitoring complexity is increasing. Computer centre management requires not only monitoring of servers, network equipment and associated software, but also the collection of additional environment and facilities data (e.g. temperature, power consumption, cooling efficiency, etc.) to have a good overview of the infrastructure performance. The LHC Era Monitoring (Lemon) system addresses these requirements for a very large-scale infrastructure. The Lemon agent, which collects data on every client and forwards the samples to the central measurement repository, provides a flexible interface that allows rapid development of new sensors. The system can also report on behalf of remote devices such as switches and power supplies. Online and historical data can be visualized via a web-based interface or retrieved via command-line tools. The Lemon Alarm System component can be used for notifying the operator about error situations. In this article, an overview of Lemon monitoring is provided together with a description of the CERN LEMON production instance. No direct comparison is made with other monitoring tools.
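
    The agent-plus-central-repository pattern described above can be illustrated in a few lines of Python: a sensor callable produces samples, the agent tags them with host and timestamp, and forwards them to a collector endpoint. This is only a generic sketch of the pattern (the URL, sample format and metric names are invented) and not the Lemon agent's actual sensor API or wire protocol.

        import json, os, socket, time, urllib.request

        COLLECTOR_URL = "http://measurement-repository.example.org/samples"  # placeholder

        def load_sensor():
            """Example sensor: 1-minute load average (UNIX only)."""
            return {"metric": "system.loadavg1", "value": os.getloadavg()[0]}

        SENSORS = [load_sensor]          # new sensors are just extra callables

        def collect_and_forward():
            samples = []
            for sensor in SENSORS:
                sample = sensor()
                sample.update(host=socket.gethostname(), timestamp=int(time.time()))
                samples.append(sample)
            req = urllib.request.Request(
                COLLECTOR_URL,
                data=json.dumps(samples).encode(),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req, timeout=5)   # forward to the central repository

        if __name__ == "__main__":
            while True:                  # sample periodically, like a monitoring agent
                collect_and_forward()
                time.sleep(60)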

  16. Graphics Processors in HEP Low-Level Trigger Systems

    NASA Astrophysics Data System (ADS)

    Ammendola, Roberto; Biagioni, Andrea; Chiozzi, Stefano; Cotta Ramusino, Angelo; Cretaro, Paolo; Di Lorenzo, Stefano; Fantechi, Riccardo; Fiorini, Massimiliano; Frezza, Ottorino; Lamanna, Gianluca; Lo Cicero, Francesca; Lonardo, Alessandro; Martinelli, Michele; Neri, Ilaria; Paolucci, Pier Stanislao; Pastorelli, Elena; Piandani, Roberto; Pontisso, Luca; Rossetti, Davide; Simula, Francesco; Sozzi, Marco; Vicini, Piero

    2016-11-01

    Usage of Graphics Processing Units (GPUs) in the so called general-purpose computing is emerging as an effective approach in several fields of science, although so far applications have been employing GPUs typically for offline computations. Taking into account the steady performance increase of GPU architectures in terms of computing power and I/O capacity, the real-time applications of these devices can thrive in high-energy physics data acquisition and trigger systems. We will examine the use of online parallel computing on GPUs for the synchronous low-level trigger, focusing on tests performed on the trigger system of the CERN NA62 experiment. To successfully integrate GPUs in such an online environment, latencies of all components need analysing, networking being the most critical. To keep it under control, we envisioned NaNet, an FPGA-based PCIe Network Interface Card (NIC) enabling GPUDirect connection. Furthermore, it is assessed how specific trigger algorithms can be parallelized and thus benefit from a GPU implementation, in terms of increased execution speed. Such improvements are particularly relevant for the foreseen Large Hadron Collider (LHC) luminosity upgrade where highly selective algorithms will be essential to maintain sustainable trigger rates with very high pileup.

  17. Exploiting Analytics Techniques in CMS Computing Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonacorsi, D.; Kuznetsov, V.; Magini, N.

    The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid tiers. Data mining of all this information has rarely been done, but it is of crucial importance for a better understanding of how CMS operated successfully, and for reaching an adequate and adaptive modelling of CMS operations, in order to allow detailed optimizations and eventually a prediction of system behaviour. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced by the ability to quickly process big data sets from multiple sources, looking forward to a predictive modelling of the system.
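
    The kind of MapReduce job mentioned above, for example counting how often each dataset was requested for analysis, can be written as a small mapper/reducer pair and run with Hadoop Streaming. The sketch below combines the mapper and reducer in one file for brevity; the tab-separated input format (timestamp, site, dataset) is a placeholder, not the actual schema of the CMS monitoring records.

        #!/usr/bin/env python
        """Hadoop-Streaming-style mapper/reducer counting dataset access requests."""
        import sys

        def mapper(lines):
            # Placeholder record format: <timestamp>\t<site>\t<dataset>
            for line in lines:
                fields = line.rstrip("\n").split("\t")
                if len(fields) == 3:
                    print(f"{fields[2]}\t1")          # emit (dataset, 1)

        def reducer(lines):
            current, count = None, 0
            for line in lines:                        # input is sorted by key
                dataset, value = line.rstrip("\n").split("\t")
                if dataset != current:
                    if current is not None:
                        print(f"{current}\t{count}")  # emit (dataset, total accesses)
                    current, count = dataset, 0
                count += int(value)
            if current is not None:
                print(f"{current}\t{count}")

        if __name__ == "__main__":
            # e.g. hadoop jar hadoop-streaming.jar -mapper "job.py map" -reducer "job.py reduce" ...
            {"map": mapper, "reduce": reducer}[sys.argv[1]](sys.stdin)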

  18. The ATLAS Experiment at the CERN Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    ATLAS Collaboration; Aad, G.; Abat, E.; Abdallah, J.; Abdelalim, A. A.; Abdesselam, A.; Abdinov, O.; Abi, B. A.; Abolins, M.; Abramowicz, H.; Acerbi, E.; Acharya, B. S.; Achenbach, R.; Ackers, M.; Adams, D. L.; Adamyan, F.; Addy, T. N.; Aderholz, M.; Adorisio, C.; Adragna, P.; Aharrouche, M.; Ahlen, S. P.; Ahles, F.; Ahmad, A.; Ahmed, H.; Aielli, G.; Åkesson, P. F.; Åkesson, T. P. A.; Akimov, A. V.; Alam, S. M.; Albert, J.; Albrand, S.; Aleksa, M.; Aleksandrov, I. N.; Aleppo, M.; Alessandria, F.; Alexa, C.; Alexander, G.; Alexopoulos, T.; Alimonti, G.; Aliyev, M.; Allport, P. P.; Allwood-Spiers, S. E.; Aloisio, A.; Alonso, J.; Alves, R.; Alviggi, M. G.; Amako, K.; Amaral, P.; Amaral, S. P.; Ambrosini, G.; Ambrosio, G.; Amelung, C.; Ammosov, V. V.; Amorim, A.; Amram, N.; Anastopoulos, C.; Anderson, B.; Anderson, K. J.; Anderssen, E. C.; Andreazza, A.; Andrei, V.; Andricek, L.; Andrieux, M.-L.; Anduaga, X. S.; Anghinolfi, F.; Antonaki, A.; Antonelli, M.; Antonelli, S.; Apsimon, R.; Arabidze, G.; Aracena, I.; Arai, Y.; Arce, A. T. H.; Archambault, J. P.; Arguin, J.-F.; Arik, E.; Arik, M.; Arms, K. E.; Armstrong, S. R.; Arnaud, M.; Arnault, C.; Artamonov, A.; Asai, S.; Ask, S.; Åsman, B.; Asner, D.; Asquith, L.; Assamagan, K.; Astbury, A.; Athar, B.; Atkinson, T.; Aubert, B.; Auerbach, B.; Auge, E.; Augsten, K.; Aulchenko, V. M.; Austin, N.; Avolio, G.; Avramidou, R.; Axen, A.; Ay, C.; Azuelos, G.; Baccaglioni, G.; Bacci, C.; Bachacou, H.; Bachas, K.; Bachy, G.; Badescu, E.; Bagnaia, P.; Bailey, D. C.; Baines, J. T.; Baker, O. K.; Ballester, F.; Baltasar Dos Santos Pedrosa, F.; Banas, E.; Banfi, D.; Bangert, A.; Bansal, V.; Baranov, S. P.; Baranov, S.; Barashkou, A.; Barberio, E. L.; Barberis, D.; Barbier, G.; Barclay, P.; Bardin, D. Y.; Bargassa, P.; Barillari, T.; Barisonzi, M.; Barnett, B. M.; Barnett, R. M.; Baron, S.; Baroncelli, A.; Barone, M.; Barr, A. J.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Barrillon, P.; Barriuso Poy, A.; Barros, N.; Bartheld, V.; Bartko, H.; Bartoldus, R.; Basiladze, S.; Bastos, J.; Batchelor, L. E.; Bates, R. L.; Batley, J. R.; Batraneanu, S.; Battistin, M.; Battistoni, G.; Batusov, V.; Bauer, F.; Bauss, B.; Baynham, D. E.; Bazalova, M.; Bazan, A.; Beauchemin, P. H.; Beaugiraud, B.; Beccherle, R. B.; Beck, G. A.; Beck, H. P.; Becks, K. H.; Bedajanek, I.; Beddall, A. J.; Beddall, A.; Bednár, P.; Bednyakov, V. A.; Bee, C.; Behar Harpaz, S.; Belanger, G. A. N.; Belanger-Champagne, C.; Belhorma, B.; Bell, P. J.; Bell, W. H.; Bella, G.; Bellachia, F.; Bellagamba, L.; Bellina, F.; Bellomo, G.; Bellomo, M.; Beltramello, O.; Belymam, A.; Ben Ami, S.; Ben Moshe, M.; Benary, O.; Benchekroun, D.; Benchouk, C.; Bendel, M.; Benedict, B. H.; Benekos, N.; Benes, J.; Benhammou, Y.; Benincasa, G. P.; Benjamin, D. P.; Bensinger, J. R.; Benslama, K.; Bentvelsen, S.; Beretta, M.; Berge, D.; Bergeaas, E.; Berger, N.; Berghaus, F.; Berglund, S.; Bergsma, F.; Beringer, J.; Bernabéu, J.; Bernardet, K.; Berriaud, C.; Berry, T.; Bertelsen, H.; Bertin, A.; Bertinelli, F.; Bertolucci, S.; Besson, N.; Beteille, A.; Bethke, S.; Bialas, W.; Bianchi, R. M.; Bianco, M.; Biebel, O.; Bieri, M.; Biglietti, M.; Bilokon, H.; Binder, M.; Binet, S.; Bingefors, N.; Bingul, A.; Bini, C.; Biscarat, C.; Bischof, R.; Bischofberger, M.; Bitadze, A.; Bizzell, J. P.; Black, K. M.; Blair, R. E.; Blaising, J. J.; Blanch, O.; Blanchot, G.; Blocker, C.; Blocki, J.; Blondel, A.; Blum, W.; Blumenschein, U.; Boaretto, C.; Bobbink, G. J.; Bocci, A.; Bocian, D.; Bock, R.; Boehm, M.; Boek, J.; Bogaerts, J. 
A.; Bogouch, A.; Bohm, C.; Bohm, J.; Boisvert, V.; Bold, T.; Boldea, V.; Bondarenko, V. G.; Bonino, R.; Bonis, J.; Bonivento, W.; Bonneau, P.; Boonekamp, M.; Boorman, G.; Boosten, M.; Booth, C. N.; Booth, P. S. L.; Booth, P.; Booth, J. R. A.; Borer, K.; Borisov, A.; Borjanovic, I.; Bos, K.; Boscherini, D.; Bosi, F.; Bosman, M.; Bosteels, M.; Botchev, B.; Boterenbrood, H.; Botterill, D.; Boudreau, J.; Bouhova-Thacker, E. V.; Boulahouache, C.; Bourdarios, C.; Boutemeur, M.; Bouzakis, K.; Boyd, G. R.; Boyd, J.; Boyer, B. H.; Boyko, I. R.; Bozhko, N. I.; Braccini, S.; Braem, A.; Branchini, P.; Brandenburg, G. W.; Brandt, A.; Brandt, O.; Bratzler, U.; Braun, H. M.; Bravo, S.; Brawn, I. P.; Brelier, B.; Bremer, J.; Brenner, R.; Bressler, S.; Breton, D.; Brett, N. D.; Breugnon, P.; Bright-Thomas, P. G.; Brochu, F. M.; Brock, I.; Brock, R.; Brodbeck, T. J.; Brodet, E.; Broggi, F.; Broklova, Z.; Bromberg, C.; Brooijmans, G.; Brouwer, G.; Broz, J.; Brubaker, E.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruni, A.; Bruni, G.; Bruschi, M.; Buanes, T.; Buchanan, N. J.; Buchholz, P.; Budagov, I. A.; Büscher, V.; Bugge, L.; Buira-Clark, D.; Buis, E. J.; Bujor, F.; Buran, T.; Burckhart, H.; Burckhart-Chromek, D.; Burdin, S.; Burns, R.; Busato, E.; Buskop, J. J. F.; Buszello, K. P.; Butin, F.; Butler, J. M.; Buttar, C. M.; Butterworth, J.; Butterworth, J. M.; Byatt, T.; Cabrera Urbán, S.; Cabruja Casas, E.; Caccia, M.; Caforio, D.; Cakir, O.; Calafiura, P.; Calderini, G.; Calderón Terol, D.; Callahan, J.; Caloba, L. P.; Caloi, R.; Calvet, D.; Camard, A.; Camarena, F.; Camarri, P.; Cambiaghi, M.; Cameron, D.; Cammin, J.; Campabadal Segura, F.; Campana, S.; Canale, V.; Cantero, J.; Capeans Garrido, M. D. M.; Caprini, I.; Caprini, M.; Caprio, M.; Caracinha, D.; Caramarcu, C.; Carcagno, Y.; Cardarelli, R.; Cardeira, C.; Cardiel Sas, L.; Cardini, A.; Carli, T.; Carlino, G.; Carminati, L.; Caron, B.; Caron, S.; Carpentieri, C.; Carr, F. S.; Carter, A. A.; Carter, J. R.; Carvalho, J.; Casadei, D.; Casado, M. P.; Cascella, M.; Caso, C.; Castelo, J.; Castillo Gimenez, V.; Castro, N.; Castrovillari, F.; Cataldi, G.; Cataneo, F.; Catinaccio, A.; Catmore, J. R.; Cattai, A.; Caughron, S.; Cauz, D.; Cavallari, A.; Cavalleri, P.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Ceradini, F.; Cerna, C.; Cernoch, C.; Cerqueira, A. S.; Cerri, A.; Cerutti, F.; Cervetto, M.; Cetin, S. A.; Cevenini, F.; Chalifour, M.; Chamizo llatas, M.; Chan, A.; Chapman, J. W.; Charlton, D. G.; Charron, S.; Chekulaev, S. V.; Chelkov, G. A.; Chen, H.; Chen, L.; Chen, T.; Chen, X.; Cheng, S.; Cheng, T. L.; Cheplakov, A.; Chepurnov, V. F.; Cherkaoui El Moursli, R.; Chesneanu, D.; Cheu, E.; Chevalier, L.; Chevalley, J. L.; Chevallier, F.; Chiarella, V.; Chiefari, G.; Chikovani, L.; Chilingarov, A.; Chiodini, G.; Chouridou, S.; Chren, D.; Christiansen, T.; Christidi, I. A.; Christov, A.; Chu, M. L.; Chudoba, J.; Chuguev, A. G.; Ciapetti, G.; Cicalini, E.; Ciftci, A. K.; Cindro, V.; Ciobotaru, M. D.; Ciocio, A.; Cirilli, M.; Citterio, M.; Ciubancan, M.; Civera, J. V.; Clark, A.; Cleland, W.; Clemens, J. C.; Clement, B. C.; Clément, C.; Clements, D.; Clifft, R. W.; Cobal, M.; Coccaro, A.; Cochran, J.; Coco, R.; Coe, P.; Coelli, S.; Cogneras, E.; Cojocaru, C. D.; Colas, J.; Colijn, A. P.; Collard, C.; Collins-Tooth, C.; Collot, J.; Coluccia, R.; Comune, G.; Conde Muiño, P.; Coniavitis, E.; Consonni, M.; Constantinescu, S.; Conta, C.; Conventi, F. A.; Cook, J.; Cooke, M.; Cooper-Smith, N. 
J.; Cornelissen, T.; Corradi, M.; Correard, S.; Corso-Radu, A.; Coss, J.; Costa, G.; Costa, M. J.; Costanzo, D.; Costin, T.; Coura Torres, R.; Courneyea, L.; Couyoumtzelis, C.; Cowan, G.; Cox, B. E.; Cox, J.; Cragg, D. A.; Cranmer, K.; Cranshaw, J.; Cristinziani, M.; Crosetti, G.; Cuenca Almenar, C.; Cuneo, S.; Cunha, A.; Curatolo, M.; Curtis, C. J.; Cwetanski, P.; Czyczula, Z.; D'Auria, S.; D'Onofrio, M.; Da Rocha Gesualdi Mello, A.; Da Silva, P. V. M.; Da Silva, R.; Dabrowski, W.; Dael, A.; Dahlhoff, A.; Dai, T.; Dallapiccola, C.; Dallison, S. J.; Dalmau, J.; Daly, C. H.; Dam, M.; Damazio, D.; Dameri, M.; Danielsen, K. M.; Danielsson, H. O.; Dankers, R.; Dannheim, D.; Darbo, G.; Dargent, P.; Daum, C.; Dauvergne, J. P.; David, M.; Davidek, T.; Davidson, N.; Davidson, R.; Dawson, I.; Dawson, J. W.; Daya, R. K.; De, K.; de Asmundis, R.; de Boer, R.; DeCastro, S.; DeGroot, N.; de Jong, P.; de La Broise, X.; DeLa Cruz-Burelo, E.; DeLa Taille, C.; DeLotto, B.; DeOliveira Branco, M.; DePedis, D.; de Saintignon, P.; DeSalvo, A.; DeSanctis, U.; DeSanto, A.; DeVivie DeRegie, J. B.; DeZorzi, G.; Dean, S.; Dedes, G.; Dedovich, D. V.; Defay, P. O.; Degele, R.; Dehchar, M.; Deile, M.; DelPapa, C.; DelPeso, J.; DelPrete, T.; Delagnes, E.; Delebecque, P.; Dell'Acqua, A.; Della Pietra, M.; della Volpe, D.; Delmastro, M.; Delpierre, P.; Delruelle, N.; Delsart, P. A.; Deluca Silberberg, C.; Demers, S.; Demichev, M.; Demierre, P.; Demirköz, B.; Deng, W.; Denisov, S. P.; Dennis, C.; Densham, C. J.; Dentan, M.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K. K.; Dewhurst, A.; Di Ciaccio, A.; Di Ciaccio, L.; Di Domenico, A.; Di Girolamo, A.; Di Girolamo, B.; Di Luise, S.; Di Mattia, A.; Di Simone, A.; Diaz Gomez, M. M.; Diehl, E. B.; Dietl, H.; Dietrich, J.; Dietsche, W.; Diglio, S.; Dima, M.; Dindar, K.; Dinkespiler, B.; Dionisi, C.; Dipanjan, R.; Dita, P.; Dita, S.; Dittus, F.; Dixon, S. D.; Djama, F.; Djilkibaev, R.; Djobava, T.; do Vale, M. A. B.; Dobbs, M.; Dobinson, R.; Dobos, D.; Dobson, E.; Dobson, M.; Dodd, J.; Dogan, O. B.; Doherty, T.; Doi, Y.; Dolejsi, J.; Dolenc, I.; Dolezal, Z.; Dolgoshein, B. A.; Domingo, E.; Donega, M.; Dopke, J.; Dorfan, D. E.; Dorholt, O.; Doria, A.; Dos Anjos, A.; Dosil, M.; Dotti, A.; Dova, M. T.; Dowell, J. D.; Doyle, A. T.; Drake, G.; Drakoulakos, D.; Drasal, Z.; Drees, J.; Dressnandt, N.; Drevermann, H.; Driouichi, C.; Dris, M.; Drohan, J. G.; Dubbert, J.; Dubbs, T.; Duchovni, E.; Duckeck, G.; Dudarev, A.; Dührssen, M.; Dür, H.; Duerdoth, I. P.; Duffin, S.; Duflot, L.; Dufour, M.-A.; Dumont Dayot, N.; Duran Yildiz, H.; Durand, D.; Dushkin, A.; Duxfield, R.; Dwuznik, M.; Dydak, F.; Dzahini, D.; Díez Cornell, S.; Düren, M.; Ebenstein, W. L.; Eckert, S.; Eckweiler, S.; Eerola, P.; Efthymiopoulos, I.; Egede, U.; Egorov, K.; Ehrenfeld, W.; Eifert, T.; Eigen, G.; Einsweiler, K.; Eisenhandler, E.; Ekelof, T.; Eklund, L. M.; El Kacimi, M.; Ellert, M.; Elles, S.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Ely, R.; Emeliyanov, D.; Engelmann, R.; Engström, M.; Ennes, P.; Epp, B.; Eppig, A.; Epshteyn, V. S.; Ereditato, A.; Eremin, V.; Eriksson, D.; Ermoline, I.; Ernwein, J.; Errede, D.; Errede, S.; Escalier, M.; Escobar, C.; Espinal Curull, X.; Esposito, B.; Esteves, F.; Etienne, F.; Etienvre, A. I.; Etzion, E.; Evans, H.; Evdokimov, V. N.; Evtoukhovitch, P.; Eyring, A.; Fabbri, L.; Fabjan, C. W.; Fabre, C.; Faccioli, P.; Facius, K.; Fadeyev, V.; Fakhrutdinov, R. M.; Falciano, S.; Falleau, I.; Falou, A. 
C.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farrell, J.; Farthouat, P.; Fasching, D.; Fassi, F.; Fassnacht, P.; Fassouliotis, D.; Fawzi, F.; Fayard, L.; Fayette, F.; Febbraro, R.; Fedin, O. L.; Fedorko, I.; Feld, L.; Feldman, G.; Feligioni, L.; Feng, C.; Feng, E. J.; Fent, J.; Fenyuk, A. B.; Ferencei, J.; Ferguson, D.; Ferland, J.; Fernando, W.; Ferrag, S.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferrer, A.; Ferrer, M. L.; Ferrere, D.; Ferretti, C.; Ferro, F.; Fiascaris, M.; Fichet, S.; Fiedler, F.; Filimonov, V.; Filipčič, A.; Filippas, A.; Filthaut, F.; Fincke-Keeler, M.; Finocchiaro, G.; Fiorini, L.; Firan, A.; Fischer, P.; Fisher, M. J.; Fisher, S. M.; Flaminio, V.; Flammer, J.; Flechl, M.; Fleck, I.; Flegel, W.; Fleischmann, P.; Fleischmann, S.; Fleta Corral, C. M.; Fleuret, F.; Flick, T.; Flix, J.; Flores Castillo, L. R.; Flowerdew, M. J.; Föhlisch, F.; Fokitis, M.; Fonseca Martin, T. M.; Fopma, J.; Forbush, D. A.; Formica, A.; Foster, J. M.; Fournier, D.; Foussat, A.; Fowler, A. J.; Fox, H.; Francavilla, P.; Francis, D.; Franz, S.; Fraser, J. T.; Fraternali, M.; Fratianni, S.; Freestone, J.; French, R. S.; Fritsch, K.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Fulachier, J.; Fullana Torregrosa, E.; Fuster, J.; Gabaldon, C.; Gadomski, S.; Gagliardi, G.; Gagnon, P.; Gallas, E. J.; Gallas, M. V.; Gallop, B. J.; Gan, K. K.; Gannaway, F. C.; Gao, Y. S.; Gapienko, V. A.; Gaponenko, A.; Garciá, C.; Garcia-Sciveres, M.; Garcìa Navarro, J. E.; Garde, V.; Gardner, R. W.; Garelli, N.; Garitaonandia, H.; Garonne, V. G.; Garvey, J.; Gatti, C.; Gaudio, G.; Gaumer, O.; Gautard, V.; Gauzzi, P.; Gavrilenko, I. L.; Gay, C.; Gayde, J.-C.; Gazis, E. N.; Gazo, E.; Gee, C. N. P.; Geich-Gimbel, C.; Gellerstedt, K.; Gemme, C.; Genest, M. H.; Gentile, S.; George, M. A.; George, S.; Gerlach, P.; Gernizky, Y.; Geweniger, C.; Ghazlane, H.; Ghete, V. M.; Ghez, P.; Ghodbane, N.; Giacobbe, B.; Giagu, S.; Giakoumopoulou, V.; Giangiobbe, V.; Gianotti, F.; Gibbard, B.; Gibson, A.; Gibson, M. D.; Gibson, S. M.; Gieraltowski, G. F.; Gil Botella, I.; Gilbert, L. M.; Gilchriese, M.; Gildemeister, O.; Gilewsky, V.; Gillman, A. R.; Gingrich, D. M.; Ginzburg, J.; Giokaris, N.; Giordani, M. P.; Girard, C. G.; Giraud, P. F.; Girtler, P.; Giugni, D.; Giusti, P.; Gjelsten, B. K.; Glasman, C.; Glazov, A.; Glitza, K. W.; Glonti, G. L.; Gnanvo, K. G.; Godlewski, J.; Göpfert, T.; Gössling, C.; Göttfert, T.; Goldfarb, S.; Goldin, D.; Goldschmidt, N.; Golling, T.; Gollub, N. P.; Golonka, P. J.; Golovnia, S. N.; Gomes, A.; Gomes, J.; Gonçalo, R.; Gongadze, A.; Gonidec, A.; Gonzalez, S.; González de la Hoz, S.; González Millán, V.; Gonzalez Silva, M. L.; Gonzalez-Pineiro, B.; González-Sevilla, S.; Goodrick, M. J.; Goodson, J. J.; Goossens, L.; Gorbounov, P. A.; Gordeev, A.; Gordon, H.; Gorelov, I.; Gorfine, G.; Gorini, B.; Gorini, E.; Gorišek, A.; Gornicki, E.; Gorokhov, S. A.; Gorski, B. T.; Goryachev, S. V.; Goryachev, V. N.; Gosselink, M.; Gostkin, M. I.; Gouanère, M.; Gough Eschrich, I.; Goujdami, D.; Goulette, M.; Gousakov, I.; Gouveia, J.; Gowdy, S.; Goy, C.; Grabowska-Bold, I.; Grabski, V.; Grafström, P.; Grah, C.; Grahn, K.-J.; Grancagnolo, F.; Grancagnolo, S.; Grassmann, H.; Gratchev, V.; Gray, H. M.; Graziani, E.; Green, B.; Greenall, A.; Greenfield, D.; Greenwood, D.; Gregor, I. M.; Grewal, A.; Griesmayer, E.; Grigalashvili, N.; Grigson, C.; Grillo, A. A.; Grimaldi, F.; Grimm, K.; Gris, P. L. Y.; Grishkevich, Y.; Groenstege, H.; Groer, L. S.; Grognuz, J.; Groh, M.; Gross, E.; Grosse-Knetter, J.; Grothe, M. E. 
M.; Grudzinski, J.; Gruse, C.; Gruwe, M.; Grybel, K.; Grybos, P.; Gschwendtner, E. M.; Guarino, V. J.; Guicheney, C. J.; Guilhem, G.; Guillemin, T.; Gunther, J.; Guo, B.; Gupta, A.; Gurriana, L.; Gushchin, V. N.; Gutierrez, P.; Guy, L.; Guyot, C.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haas, S.; Haber, C.; Haboubi, G.; Hackenburg, R.; Hadash, E.; Hadavand, H. K.; Haeberli, C.; Härtel, R.; Haggerty, R.; Hahn, F.; Haider, S.; Hajduk, Z.; Hakimi, M.; Hakobyan, H.; Hakobyan, H.; Haller, J.; Hallewell, G. D.; Hallgren, B.; Hamacher, K.; Hamilton, A.; Han, H.; Han, L.; Hanagaki, K.; Hance, M.; Hanke, P.; Hansen, C. J.; Hansen, F. H.; Hansen, J. R.; Hansen, J. B.; Hansen, J. D.; Hansen, P. H.; Hansl-Kozanecka, T.; Hanson, G.; Hansson, P.; Hara, K.; Harder, S.; Harel, A.; Harenberg, T.; Harper, R.; Hart, J. C.; Hart, R. G. G.; Hartjes, F.; Hartman, N.; Haruyama, T.; Harvey, A.; Hasegawa, Y.; Hashemi, K.; Hassani, S.; Hatch, M.; Hatley, R. W.; Haubold, T. G.; Hauff, D.; Haug, F.; Haug, S.; Hauschild, M.; Hauser, R.; Hauviller, C.; Havranek, M.; Hawes, B. M.; Hawkings, R. J.; Hawkins, D.; Hayler, T.; Hayward, H. S.; Haywood, S. J.; Hazen, E.; He, M.; He, Y. P.; Head, S. J.; Hedberg, V.; Heelan, L.; Heinemann, F. E. W.; Heldmann, M.; Hellman, S.; Helsens, C.; Henderson, R. C. W.; Hendriks, P. J.; Henriques Correia, A. M.; Henrot-Versille, S.; Henry-Couannier, F.; Henß, T.; Herten, G.; Hertenberger, R.; Hervas, L.; Hess, M.; Hessey, N. P.; Hicheur, A.; Hidvegi, A.; Higón-Rodriguez, E.; Hill, D.; Hill, J.; Hill, J. C.; Hill, N.; Hillier, S. J.; Hinchliffe, I.; Hindson, D.; Hinkelbein, C.; Hodges, T. A.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoffman, J.; Hoffmann, A. E.; Hoffmann, D.; Hoffmann, H. F.; Holder, M.; Hollins, T. I.; Hollyman, G.; Holmes, A.; Holmgren, S. O.; Holt, R.; Holtom, E.; Holy, T.; Homer, R. J.; Homma, Y.; Homola, P.; Honerbach, W.; Honma, A.; Hooton, I.; Horazdovsky, T.; Horn, C.; Horvat, S.; Hostachy, J.-Y.; Hott, T.; Hou, S.; Houlden, M. A.; Hoummada, A.; Hover, J.; Howell, D. F.; Hrivnac, J.; Hruska, I.; Hryn'ova, T.; Huang, G. S.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huffman, B. T.; Hughes, E.; Hughes, G.; Hughes-Jones, R. E.; Hulsbergen, W.; Hurst, P.; Hurwitz, M.; Huse, T.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Ibbotson, M.; Ibragimov, I.; Ichimiya, R.; Iconomidou-Fayard, L.; Idarraga, J.; Idzik, M.; Iengo, P.; Iglesias Escudero, M. C.; Igonkina, O.; Ikegami, Y.; Ikeno, M.; Ilchenko, Y.; Ilyushenka, Y.; Imbault, D.; Imbert, P.; Imhaeuser, M.; Imori, M.; Ince, T.; Inigo-Golfin, J.; Inoue, K.; Ioannou, P.; Iodice, M.; Ionescu, G.; Ishii, K.; Ishino, M.; Ishizawa, Y.; Ishmukhametov, R.; Issever, C.; Ito, H.; Ivashin, A. V.; Iwanski, W.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jackson, J.; Jackson, J. N.; Jaekel, M.; Jagielski, S.; Jahoda, M.; Jain, V.; Jakobs, K.; Jakubek, J.; Jansen, E.; Jansweijer, P. P. M.; Jared, R. C.; Jarlskog, G.; Jarp, S.; Jarron, P.; Jelen, K.; Jen-La Plante, I.; Jenni, P.; Jeremie, A.; Jez, P.; Jézéquel, S.; Jiang, Y.; Jin, G.; Jin, S.; Jinnouchi, O.; Joffe, D.; Johansen, L. G.; Johansen, M.; Johansson, K. E.; Johansson, P.; Johns, K. A.; Jon-And, K.; Jones, M.; Jones, R.; Jones, R. W. L.; Jones, T. W.; Jones, T. J.; Jones, A.; Jonsson, O.; Joo, K. K.; Joos, D.; Joos, M.; Joram, C.; Jorgensen, S.; Joseph, J.; Jovanovic, P.; Junnarkar, S. S.; Juranek, V.; Jussel, P.; Kabachenko, V. V.; Kabana, S.; Kaci, M.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagawa, S.; Kaiser, S.; Kajomovitz, E.; Kakurin, S.; Kalinovskaya, L. 
V.; Kama, S.; Kambara, H.; Kanaya, N.; Kandasamy, A.; Kandasamy, S.; Kaneda, M.; Kantserov, V. A.; Kanzaki, J.; Kaplan, B.; Kapliy, A.; Kaplon, J.; Karagounis, M.; Karagoz Unel, M.; Karr, K.; Karst, P.; Kartvelishvili, V.; Karyukhin, A. N.; Kashif, L.; Kasmi, A.; Kass, R. D.; Kastanas, A.; Kataoka, M.; Kataoka, Y.; Katsoufis, E.; Katunin, S.; Kawagoe, K.; Kawai, M.; Kawamoto, T.; Kayumov, F.; Kazanin, V. A.; Kazarinov, M. Y.; Kazarov, A.; Kazi, S. I.; Keates, J. R.; Keeler, R.; Keener, P. T.; Kehoe, R.; Keil, M.; Kekelidze, G. D.; Kelly, M.; Kennedy, J.; Kenyon, M.; Kepka, O.; Kerschen, N.; Kerševan, B. P.; Kersten, S.; Ketterer, C.; Khakzad, M.; Khalilzade, F.; Khandanyan, H.; Khanov, A.; Kharchenko, D.; Khodinov, A.; Kholodenko, A. G.; Khomich, A.; Khomutnikov, V. P.; Khoriauli, G.; Khovanskiy, N.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kieft, G.; Kierstead, J. A.; Kilvington, G.; Kim, H.; Kim, H.; Kim, S. H.; Kind, P.; King, B. T.; Kirk, J.; Kirsch, G. P.; Kirsch, L. E.; Kiryunin, A. E.; Kisielewska, D.; Kisielewski, B.; Kittelmann, T.; Kiver, A. M.; Kiyamura, H.; Kladiva, E.; Klaiber-Lodewigs, J.; Kleinknecht, K.; Klier, A.; Klimentov, A.; Kline, C. R.; Klingenberg, R.; Klinkby, E. B.; Klioutchnikova, T.; Klok, P. F.; Klous, S.; Kluge, E.-E.; Kluit, P.; Klute, M.; Kluth, S.; Knecht, N. K.; Kneringer, E.; Knezo, E.; Knobloch, J.; Ko, B. R.; Kobayashi, T.; Kobel, M.; Kodys, P.; König, A. C.; König, S.; Köpke, L.; Koetsveld, F.; Koffas, T.; Koffeman, E.; Kohout, Z.; Kohriki, T.; Kokott, T.; Kolachev, G. M.; Kolanoski, H.; Kolesnikov, V.; Koletsou, I.; Kollefrath, M.; Kolos, S.; Kolya, S. D.; Komar, A. A.; Komaragiri, J. R.; Kondo, T.; Kondo, Y.; Kondratyeva, N. V.; Kono, T.; Kononov, A. I.; Konoplich, R.; Konovalov, S. P.; Konstantinidis, N.; Kootz, A.; Koperny, S.; Kopikov, S. V.; Korcyl, K.; Kordas, K.; Koreshev, V.; Korn, A.; Korolkov, I.; Korotkov, V. A.; Korsmo, H.; Kortner, O.; Kostrikov, M. E.; Kostyukhin, V. V.; Kotamäki, M. J.; Kotchetkov, D.; Kotov, S.; Kotov, V. M.; Kotov, K. Y.; Kourkoumelis, C.; Koutsman, A.; Kovalenko, S.; Kowalewski, R.; Kowalski, H.; Kowalski, T. Z.; Kozanecki, W.; Kozhin, A. S.; Kral, V.; Kramarenko, V.; Kramberger, G.; Kramer, A.; Krasel, O.; Krasny, M. W.; Krasznahorkay, A.; Krepouri, A.; Krieger, P.; Krivkova, P.; Krobath, G.; Kroha, H.; Krstic, J.; Kruchonak, U.; Krüger, H.; Kruger, K.; Krumshteyn, Z. V.; Kubik, P.; Kubischta, W.; Kubota, T.; Kudin, L. G.; Kudlaty, J.; Kugel, A.; Kuhl, T.; Kuhn, D.; Kukhtin, V.; Kulchitsky, Y.; Kundu, N.; Kupco, A.; Kupper, M.; Kurashige, H.; Kurchaninov, L. L.; Kurochkin, Y. A.; Kus, V.; Kuykendall, W.; Kuzhir, P.; Kuznetsova, E. K.; Kvasnicka, O.; Kwee, R.; La Marra, D.; La Rosa, M.; La Rotonda, L.; Labarga, L.; Labbe, J. A.; Lacasta, C.; Lacava, F.; Lacker, H.; Lacour, D.; Lacuesta, V. R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Lamanna, E.; Lambacher, M.; Lambert, F.; Lampl, W.; Lancon, E.; Landgraf, U.; Landon, M. P. J.; Landsman, H.; Langstaff, R. R.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Lanza, A.; Lapin, V. V.; Laplace, S.; Laporte, J. F.; Lara, V.; Lari, T.; Larionov, A. V.; Lasseur, C.; Lau, W.; Laurelli, P.; Lavorato, A.; Lavrijsen, W.; Lazarev, A. B.; LeBihan, A.-C.; LeDortz, O.; LeManer, C.; LeVine, M.; Leahu, L.; Leahu, M.; Lebel, C.; Lechowski, M.; LeCompte, T.; Ledroit-Guillon, F.; Lee, H.; Lee, J. S. H.; Lee, S. C.; Lefebvre, M.; Lefevre, R. P.; Legendre, M.; Leger, A.; LeGeyt, B. 
C.; Leggett, C.; Lehmacher, M.; Lehmann Miotto, G.; Lehto, M.; Leitner, R.; Lelas, D.; Lellouch, D.; Leltchouk, M.; Lendermann, V.; Leney, K. J. C.; Lenz, T.; Lenzen, G.; Lepidis, J.; Leroy, C.; Lessard, J.-R.; Lesser, J.; Lester, C. G.; Letheren, M.; Fook Cheong, A. Leung; Levêque, J.; Levin, D.; Levinson, L. J.; Levitski, M. S.; Lewandowska, M.; Leyton, M.; Li, J.; Li, W.; Liabline, M.; Liang, Z.; Liang, Z.; Liberti, B.; Lichard, P.; Liebig, W.; Lifshitz, R.; Liko, D.; Lim, H.; Limper, M.; Lin, S. C.; Lindahl, A.; Linde, F.; Lindquist, L.; Lindsay, S. W.; Linhart, V.; Lintern, A. J.; Liolios, A.; Lipniacka, A.; Liss, T. M.; Lissauer, A.; List, J.; Litke, A. M.; Liu, S.; Liu, T.; Liu, Y.; Livan, M.; Lleres, A.; Llosá Llácer, G.; Lloyd, S. L.; Lobkowicz, F.; Loch, P.; Lockman, W. S.; Loddenkoetter, T.; Loebinger, F. K.; Loginov, A.; Loh, C. W.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Loken, J.; Lokwitz, S.; Long, M. C.; Lopes, L.; Lopez Mateos, D.; Losty, M. J.; Lou, X.; Loureiro, K. F.; Lovas, L.; Love, J.; Lowe, A.; Lozano Fantoba, M.; Lu, F.; Lu, J.; Lu, L.; Lubatti, H. J.; Lucas, S.; Luci, C.; Lucotte, A.; Ludwig, A.; Ludwig, I.; Ludwig, J.; Luehring, F.; Lüke, D.; Luijckx, G.; Luisa, L.; Lumb, D.; Luminari, L.; Lund, E.; Lund-Jensen, B.; Lundberg, B.; Lundquist, J.; Lupi, A.; Lupu, N.; Lutz, G.; Lynn, D.; Lynn, J.; Lys, J.; Lysan, V.; Lytken, E.; López-Amengual, J. M.; Ma, H.; Ma, L. L.; Maaß en, M.; Maccarrone, G.; Mace, G. G. R.; Macina, D.; Mackeprang, R.; Macpherson, A.; MacQueen, D.; Macwaters, C.; Madaras, R. J.; Mader, W. F.; Maenner, R.; Maeno, T.; Mättig, P.; Mättig, S.; Magrath, C. A.; Mahalalel, Y.; Mahboubi, K.; Mahout, G.; Maidantchik, C.; Maio, A.; Mair, G. M.; Mair, K.; Makida, Y.; Makowiecki, D.; Malecki, P.; Maleev, V. P.; Malek, F.; Malon, D.; Maltezos, S.; Malychev, V.; Malyukov, S.; Mambelli, M.; Mameghani, R.; Mamuzic, J.; Manabe, A.; Manara, A.; Manca, G.; Mandelli, L.; Mandić, I.; Mandl, M.; Maneira, J.; Maneira, M.; Mangeard, P. S.; Mangin-Brinet, M.; Manjavidze, I. D.; Mann, W. A.; Manolopoulos, S.; Manousakis-Katsikakis, A.; Mansoulie, B.; Manz, A.; Mapelli, A.; Mapelli, L.; March, L.; Marchand, J. F.; Marchesotti, M.; Marcisovsky, M.; Marin, A.; Marques, C. N.; Marroquim, F.; Marshall, R.; Marshall, Z.; Martens, F. K.; Garcia, S. Marti i.; Martin, A. J.; Martin, B.; Martin, B.; Martin, F. F.; Martin, J. P.; Martin, Ph; Martinez, G.; Martínez Lacambra, C.; Martinez Outschoorn, V.; Martini, A.; Martins, J.; Maruyama, T.; Marzano, F.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Maß, M.; Massa, I.; Massaro, G.; Massol, N.; Mathes, M.; Matheson, J.; Matricon, P.; Matsumoto, H.; Matsunaga, H.; Maugain, J. M.; Maxfield, S. J.; May, E. N.; Mayer, J. K.; Mayri, C.; Mazini, R.; Mazzanti, M.; Mazzanti, P.; Mazzoni, E.; Mazzucato, F.; McKee, S. P.; McCarthy, R. L.; McCormick, C.; McCubbin, N. A.; McDonald, J.; McFarlane, K. W.; McGarvie, S.; McGlone, H.; McLaren, R. A.; McMahon, S. J.; McMahon, T. R.; McMahon, T. J.; McPherson, R. A.; Mechtel, M.; Meder-Marouelli, D.; Medinnis, M.; Meera-Lebbai, R.; Meessen, C.; Mehdiyev, R.; Mehta, A.; Meier, K.; Meinhard, H.; Meinhardt, J.; Meirosu, C.; Meisel, F.; Melamed-Katz, A.; Mellado Garcia, B. R.; Mendes Jorge, P.; Mendez, P.; Menke, S.; Menot, C.; Meoni, E.; Merkl, D.; Merola, L.; Meroni, C.; Merritt, F. S.; Messmer, I.; Metcalfe, J.; Meuser, S.; Meyer, J.-P.; Meyer, T. C.; Meyer, W. 
T.; Mialkovski, V.; Michelotto, M.; Micu, L.; Middleton, R.; Miele, P.; Migliaccio, A.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikestikova, M.; Mikulec, B.; Mikuž, M.; Miller, D. W.; Miller, R. J.; Miller, W.; Milosavljevic, M.; Milstead, D. A.; Mima, S.; Minaenko, A. A.; Minano, M.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Mir, L. M.; Mirabelli, G.; Miralles Verge, L.; Misawa, S.; Miscetti, S.; Misiejuk, A.; Mitra, A.; Mitrofanov, G. Y.; Mitsou, V. A.; Miyagawa, P. S.; Miyazaki, Y.; Mjörnmark, J. U.; Mkrtchyan, S.; Mladenov, D.; Moa, T.; Moch, M.; Mochizuki, A.; Mockett, P.; Modesto, P.; Moed, S.; Mönig, K.; Möser, N.; Mohn, B.; Mohr, W.; Mohrdieck-Möck, S.; Moisseev, A. M.; Moles Valls, R. M.; Molina-Perez, J.; Moll, A.; Moloney, G.; Mommsen, R.; Moneta, L.; Monnier, E.; Montarou, G.; Montesano, S.; Monticelli, F.; Moore, R. W.; Moore, T. B.; Moorhead, G. F.; Moraes, A.; Morel, J.; Moreno, A.; Moreno, D.; Morettini, P.; Morgan, D.; Morii, M.; Morin, J.; Morley, A. K.; Mornacchi, G.; Morone, M.-C.; Morozov, S. V.; Morris, E. J.; Morris, J.; Morrissey, M. C.; Moser, H. G.; Mosidze, M.; Moszczynski, A.; Mouraviev, S. V.; Mouthuy, T.; Moye, T. H.; Moyse, E. J. W.; Mueller, J.; Müller, M.; Muijs, A.; Muller, T. R.; Munar, A.; Munday, D. J.; Murakami, K.; Murillo Garcia, R.; Murray, W. J.; Myagkov, A. G.; Myska, M.; Nagai, K.; Nagai, Y.; Nagano, K.; Nagasaka, Y.; Nairz, A. M.; Naito, D.; Nakamura, K.; Nakamura, Y.; Nakano, I.; Nanava, G.; Napier, A.; Nassiakou, M.; Nasteva, I.; Nation, N. R.; Naumann, T.; Nauyock, F.; Nderitu, S. K.; Neal, H. A.; Nebot, E.; Nechaeva, P.; Neganov, A.; Negri, A.; Negroni, S.; Nelson, C.; Nemecek, S.; Nemethy, P.; Nepomuceno, A. A.; Nessi, M.; Nesterov, S. Y.; Neukermans, L.; Nevski, P.; Newcomer, F. M.; Nichols, A.; Nicholson, C.; Nicholson, R.; Nickerson, R. B.; Nicolaidou, R.; Nicoletti, G.; Nicquevert, B.; Niculescu, M.; Nielsen, J.; Niinikoski, T.; Niinimaki, M. J.; Nikitin, N.; Nikolaev, K.; Nikolic-Audit, I.; Nikolopoulos, K.; Nilsen, H.; Nilsson, B. S.; Nilsson, P.; Nisati, A.; Nisius, R.; Nodulman, L. J.; Nomachi, M.; Nomoto, H.; Noppe, J.-M.; Nordberg, M.; Norniella Francisco, O.; Norton, P. R.; Novakova, J.; Nowak, M.; Nozaki, M.; Nunes, R.; Nunes Hanninger, G.; Nunnemann, T.; Nyman, T.; O'Connor, P.; O'Neale, S. W.; O'Neil, D. C.; O'Neill, M.; O'Shea, V.; Oakham, F. G.; Oberlack, H.; Obermaier, M.; Oberson, P.; Ochi, A.; Ockenfels, W.; Odaka, S.; Odenthal, I.; Odino, G. A.; Ogren, H.; Oh, S. H.; Ohshima, T.; Ohshita, H.; Okawa, H.; Olcese, M.; Olchevski, A. G.; Oliver, C.; Oliver, J.; Olivo Gomez, M.; Olszewski, A.; Olszowska, J.; Omachi, C.; Onea, A.; Onofre, A.; Oram, C. J.; Ordonez, G.; Oreglia, M. J.; Orellana, F.; Oren, Y.; Orestano, D.; Orlov, I. O.; Orr, R. S.; Orsini, F.; Osborne, L. S.; Osculati, B.; Osuna, C.; Otec, R.; Othegraven, R.; Ottewell, B.; Ould-Saada, F.; Ouraou, A.; Ouyang, Q.; Øye, O. K.; Ozcan, V. E.; Ozone, K.; Ozturk, N.; Pacheco Pages, A.; Padhi, S.; Padilla Aranda, C.; Paganis, E.; Paige, F.; Pailler, P. M.; Pajchel, K.; Palestini, S.; Palla, J.; Pallin, D.; Palmer, M. J.; Pan, Y. B.; Panikashvili, N.; Panin, V. N.; Panitkin, S.; Pantea, D.; Panuskova, M.; Paolone, V.; Paoloni, A.; Papadopoulos, I.; Papadopoulou, T.; Park, I.; Park, W.; Parker, M. A.; Parker, S.; Parkman, C.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pasqualucci, E.; Passardi, G.; Passeri, A.; Passmore, M. S.; Pastore, F.; Pastore, Fr; Pataraia, S.; Pate, D.; Pater, J. R.; Patricelli, S.; Pauly, T.; Pauna, E.; Peak, L. S.; Peeters, S. 
J. M.; Peez, M.; Pei, E.; Peleganchuk, S. V.; Pellegrini, G.; Pengo, R.; Pequenao, J.; Perantoni, M.; Perazzo, A.; Pereira, A.; Perepelkin, E.; Perera, V. J. O.; Perez Codina, E.; Perez Reale, V.; Peric, I.; Perini, L.; Pernegger, H.; Perrin, E.; Perrino, R.; Perrodo, P.; Perrot, G.; Perus, P.; Peshekhonov, V. D.; Petereit, E.; Petersen, J.; Petersen, T. C.; Petit, P. J. F.; Petridou, C.; Petrolo, E.; Petrucci, F.; Petti, R.; Pezzetti, M.; Pfeifer, B.; Phan, A.; Phillips, A. W.; Phillips, P. W.; Piacquadio, G.; Piccinini, M.; Pickford, A.; Piegaia, R.; Pier, S.; Pilcher, J. E.; Pilkington, A. D.; Pimenta Dos Santos, M. A.; Pina, J.; Pinfold, J. L.; Ping, J.; Pinhão, J.; Pinto, B.; Pirotte, O.; Placakyte, R.; Placci, A.; Plamondon, M.; Plano, W. G.; Pleier, M.-A.; Pleskach, A. V.; Podkladkin, S.; Podlyski, F.; Poffenberger, P.; Poggioli, L.; Pohl, M.; Polak, I.; Polesello, G.; Policicchio, A.; Polini, A.; Polychronakos, V.; Pomarede, D. M.; Pommès, K.; Ponsot, P.; Pontecorvo, L.; Pope, B. G.; Popescu, R.; Popovic, D. S.; Poppleton, A.; Popule, J.; Portell Bueso, X.; Posch, C.; Pospelov, G. E.; Pospichal, P.; Pospisil, S.; Postranecky, M.; Potrap, I. N.; Potter, C. J.; Poulard, G.; Pousada, A.; Poveda, J.; Prabhu, R.; Pralavorio, P.; Prasad, S.; Prast, J.; Prat, S.; Prata, M.; Pravahan, R.; Preda, T.; Pretzl, K.; Pribyl, L.; Price, D.; Price, L. E.; Price, M. J.; Prichard, P. M.; Prieur, D.; Primavera, M.; Primor, D.; Prokofiev, K.; Prosso, E.; Proudfoot, J.; Przysiezniak, H.; Puigdengoles, C.; Purdham, J.; Purohit, M.; Puzo, P.; Pylaev, A. N.; Pylypchenko, Y.; Qi, M.; Qian, J.; Qian, W.; Qian, Z.; Qing, D.; Quadt, A.; Quarrie, D. R.; Quayle, W. B.; Rabbers, J. J.; Radeka, V.; Rafi, J. M.; Ragusa, F.; Rahimi, A. M.; Rahm, D.; Raine, C.; Raith, B.; Rajagopalan, S.; Rajek, S.; Rammer, H.; Ramstedt, M.; Rangod, S.; Ratoff, P. N.; Raufer, T.; Rauscher, F.; Rauter, E.; Raymond, M.; Reads, A. L.; Rebuzzi, D.; Redlinger, G. R.; Reeves, K.; Rehak, M.; Reichold, A.; Reinherz-Aronis, E.; Reisinger, I.; Reljic, D.; Rembser, C.; Ren, Z.; Renaudin-Crepe, S. R. C.; Renkel, P.; Rensch, B.; Rescia, S.; Rescigno, M.; Resconi, S.; Resende, B.; Rewiersma, P.; Rey, J.; Rey-Campagnolle, M.; Rezaie, E.; Reznicek, P.; Richards, R. A.; Richer, J.-P.; Richter, R. H.; Richter, R.; Richter-Was, E.; Ridel, M.; Riegler, W.; Rieke, S.; Rijpstra, M.; Rijssenbeek, M.; Rimoldi, A.; Rios, R. R.; Riu Dachs, I.; Rivline, M.; Rivoltella, G.; Rizatdinova, F.; Robertson, S. H.; Robichaud-Veronneau, A.; Robins, S.; Robinson, D.; Robson, A.; Rochford, J. H.; Roda, C.; Rodier, S.; Roe, S.; Røhne, O.; Rohrbach, F.; Roldán, J.; Rolli, S.; Romance, J. B.; Romaniouk, A.; Romanov, V. M.; Romeo, G.; Roos, L.; Ros, E.; Rosati, S.; Rosenbaum, F.; Rosenbaum, G. A.; Rosenberg, E. I.; Rosselet, L.; Rossi, L. P.; Rossi, L.; Rotaru, M.; Rothberg, J.; Rottländer, I.; Rousseau, D.; Rozanov, A.; Rozen, Y.; Ruber, R.; Ruckert, B.; Rudolph, G.; Rühr, F.; Ruggieri, F.; Ruggiero, G.; Ruiz, H.; Ruiz-Martinez, A.; Rulikowska-Zarebska, E.; Rumiantsev, V.; Rumyantsev, L.; Runge, K.; Runolfsson, O.; Rusakovich, N. A.; Rust, D. R.; Rutherfoord, J. P.; Ruwiedel, C.; Ryabov, Y. F.; Ryadovikov, V.; Ryan, P.; Rybkine, G.; da Costa, J. Sá; Saavedra, A. F.; Saboumazrag, S.; F-W Sadrozinski, H.; Sadykov, R.; Sakamoto, H.; Sala, P.; Salamon, A.; Saleem, M.; Salihagic, D.; Salt, J.; Saltó Bauza, O.; Salvachúa Ferrando, B. M.; Salvatore, D.; Salzburger, A.; Sampsonidis, D.; Samset, B. H.; Sánchez Sánchez, C. A.; Sanchis Lozano, M. 
A.; Sanchis Peris, E.; Sandaker, H.; Sander, H. G.; Sandhoff, M.; Sandvoss, S.; Sankey, D. P. C.; Sanny, B.; Sansone, S.; Sansoni, A.; Santamarina Rios, C.; Santander, J.; Santi, L.; Santoni, C.; Santonico, R.; Santos, J.; Sapinski, M.; Saraiva, J. G.; Sarri, F.; Sasaki, O.; Sasaki, T.; Sasao, N.; Satsounkevitch, I.; Sauvage, D.; Sauvage, G.; Savard, P.; Savine, A. Y.; Savinov, V.; Savoy-Navarro, A.; Savva, P.; Saxon, D. H.; Says, L. P.; Sbarra, C.; Sbrissa, E.; Sbrizzi, A.; Scannicchio, D. A.; Schaarschmidt, J.; Schacht, P.; Schäfer, U.; Schaffer, A. C.; Schaile, D.; Schaller, M.; Schamov, A. G.; Schegelsky, V. A.; Scheirich, D.; Schernau, M.; Scherzer, M. I.; Schiavi, C.; Schick, H.; Schieck, J.; Schieferdecker, P.; Schioppa, M.; Schlager, G.; Schlenker, S.; Schlereth, J. L.; Schmid, P.; Schmidt, M. P.; Schmitt, C.; Schmitt, K.; Schmitz, M.; Schmücker, H.; Schoerner, T.; Scholte, R. C.; Schott, M.; Schouten, D.; Schram, M.; Schricker, A.; Schroff, D.; Schuh, S.; Schuijlenburg, H. W.; Schuler, G.; Schultes, J.; Schultz-Coulon, H.-C.; Schumacher, J.; Schumacher, M.; Schune, Ph; Schwartzman, A.; Schweiger, D.; Schwemling, Ph; Schwick, C.; Schwienhorst, R.; Schwierz, R.; Schwindling, J.; Scott, W. G.; Secker, H.; Sedykh, E.; Seguin-Moreau, N.; Segura, E.; Seidel, S. C.; Seiden, A.; Seixas, J. M.; Sekhniaidze, G.; Seliverstov, D. M.; Selldén, B.; Seman, M.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Seuster, R.; Severini, H.; Sevior, M. E.; Sexton, K. A.; Sfyrla, A.; Shah, T. P.; Shan, L.; Shank, J. T.; Shapiro, M.; Shatalov, P. B.; Shaver, L.; Shaw, C.; Shears, T. G.; Sherwood, P.; Shibata, A.; Shield, P.; Shilov, S.; Shimojima, M.; Shin, T.; Shiyakova, M.; Shmeleva, A.; Shoa, M.; Shochet, M. J.; Shupe, M. A.; Sicho, P.; Sidoti, A.; Siebel, A.; Siebel, M.; Siegrist, J.; Sijacki, D.; Silva, J.; Silverstein, S. B.; Simak, V.; Simic, Lj; Simion, S.; Simmons, B.; Simonyan, M.; Sinervo, P.; Sipica, V.; Siragusa, G.; Sisakyan, A. N.; Sivoklokov, S.; Sjölin, J.; Skubic, P.; Skvorodnev, N.; Slattery, P.; Slavicek, T.; Sliwa, K.; Sloan, T. J.; Sloper, J.; Smakhtin, V.; Small, A.; Smirnov, S. Yu; Smirnov, Y.; Smirnova, L.; Smirnova, O.; Smith, N. A.; Smith, B. C.; Smith, D. S.; Smith, J.; Smith, K. M.; Smith, B.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snow, S. W.; Snow, J.; Snuverink, J.; Snyder, S.; Soares, M.; Soares, S.; Sobie, R.; Sodomka, J.; Söderberg, M.; Soffer, A.; Solans, C. A.; Solar, M.; Sole, D.; Solfaroli Camillocci, E.; Solodkov, A. A.; Solov'yanov, O. V.; Soloviev, I.; Soluk, R.; Sondericker, J.; Sopko, V.; Sopko, B.; Sorbi, M.; Soret Medel, J.; Sosebee, M.; Sosnovtsev, V. V.; Sospedra Suay, L.; Soukharev, A.; Soukup, J.; Spagnolo, S.; Spano, F.; Speckmayer, P.; Spegel, M.; Spencer, E.; Spighi, R.; Spigo, G.; Spila, F.; Spiriti, E.; Spiwoks, R.; Spogli, L.; Spousta, M.; Sprachmann, G.; Spurlock, B.; St. Denis, R. D.; Stahl, T.; Staley, R. J.; Stamen, R.; Stancu, S. N.; Stanecka, E.; Stanek, R. W.; Stanescu, C.; Stapnes, S.; Starchenko, E. A.; Staroba, P.; Stastny, J.; Staude, A.; Stavina, P.; Stavrianakou, M.; Stavropoulos, G.; Stefanidis, E.; Steffens, J. L.; Stekl, I.; Stelzer, H. J.; Stenzel, H.; Stewart, G.; Stewart, T. D.; Stiller, W.; Stockmanns, T.; Stodulski, M.; Stonjek, S.; Stradling, A.; Straessner, A.; Strandberg, J.; Strandlie, A.; Strauss, M.; Strickland, V.; Striegel, D.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Strong, J. A.; Stroynowski, R.; Stugu, B.; Stumer, I.; Su, D.; Subramania, S.; Suchkov, S. I.; Sugaya, Y.; Sugimoto, T.; Suk, M.; Sulin, V. 
V.; Sultanov, S.; Sun, Z.; Sundal, B.; Sushkov, S.; Susinno, G.; Sutcliffe, P.; Sutton, M. R.; Sviridov, Yu M.; Sykora, I.; Szczygiel, R. R.; Szeless, B.; Szymocha, T.; Sánchez, J.; Ta, D.; Taboada Gameiro, S.; Tadel, M.; Tafirout, R.; Taga, A.; Takai, H.; Takashima, R.; Takeda, H.; Takeshita, T.; Talby, M.; Talyshev, A.; Tamsett, M. C.; Tanaka, J.; Tanaka, K.; Tanaka, R.; Tanaka, S.; Tanaka, S.; Tanaka, Y.; Tappern, G. P.; Tapprogge, S.; Tarem, S.; Tarrade, F.; Tarrant, J.; Tartarelli, G.; Tas, P.; Tasevsky, M.; Tayalati, Y.; Taylor, F. E.; Taylor, G.; Taylor, G. N.; Taylor, R. P.; Tcherniatine, V.; Tegenfeldt, F.; Teixeira-Dias, P.; Ten Kate, H.; Teng, P. K.; Ter-Antonyan, R.; Terada, S.; Terron, J.; Terwort, M.; Teuscher, R. J.; Tevlin, C. M.; Thadome, J.; Thion, J.; Thioye, M.; Thomas, A.; Thomas, J. P.; Thomas, T. L.; Thomas, E.; Thompson, R. J.; Thompson, A. S.; Thun, R. P.; Tic, T.; Tikhomirov, V. O.; Tikhonov, Y. A.; Timm, S.; Timmermans, C. J. W. P.; Tipton, P.; Tique Aires Viegas, F. J.; Tisserant, S.; Titov, M.; Tobias, J.; Tocut, V. M.; Toczek, B.; Todorova-Nova, S.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tomasek, L.; Tomasek, M.; Tomasz, F.; Tomoto, M.; Tompkins, D.; Tompkins, L.; Toms, K.; Tonazzo, A.; Tong, G.; Tonoyan, A.; Topfel, C.; Topilin, N. D.; Torrence, E.; Torres Pais, J. G.; Toth, J.; Touchard, F.; Tovey, D. R.; Tovey, S. N.; Towndrow, E. F.; Trefzger, T.; Treichel, M.; Treis, J.; Tremblet, L.; Tribanek, W.; Tricoli, A.; Trigger, I. M.; Trilling, G.; Trincaz-Duvoid, S.; Tripiana, M. F.; Trischuk, W.; Trka, Z.; Trocmé, B.; Troncon, C.; C-L Tseng, J.; Tsiafis, I.; Tsiareshka, P. V.; Tsipolitis, G.; Tskhadadze, E. G.; Tsukerman, I. I.; Tsulaia, V.; Tsuno, S.; Turala, M.; Turk Cakir, I.; Turlay, E.; Tuts, P. M.; Twomey, M. S.; Tyndel, M.; Typaldos, D.; Tyrvainen, H.; Tzamarioudaki, E.; Tzanakos, G.; Ueda, I.; Uhrmacher, M.; Ukegawa, F.; Ullán Comes, M.; Unal, G.; Underwood, D. G.; Undrus, A.; Unel, G.; Unno, Y.; Urkovsky, E.; Usai, G.; Usov, Y.; Vacavant, L.; Vacek, V.; Vachon, B.; Vahsen, S.; Valderanis, C.; Valenta, J.; Valente, P.; Valero, A.; Valkar, S.; Valls Ferrer, J. A.; Van der Bij, H.; van der Graaf, H.; van der Kraaij, E.; Van Eijk, B.; van Eldik, N.; van Gemmeren, P.; van Kesteren, Z.; van Vulpen, I.; Van Berg, R.; Vandelli, W.; Vandoni, G.; Vaniachine, A.; Vannucci, F.; Varanda, M.; Varela Rodriguez, F.; Vari, R.; Varnes, E. W.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vassilakopoulos, V. I.; Vassilieva, L.; Vataga, E.; Vaz, L.; Vazeille, F.; Vedrine, P.; Vegni, G.; Veillet, J. J.; Vellidis, C.; Veloso, F.; Veness, R.; Veneziano, S.; Ventura, A.; Ventura, S.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J. C.; Vertogardov, L.; Vetterli, M. C.; Vichou, I.; Vickey, T.; Viehhauser, G. H. A.; Vigeolas, E.; Villa, M.; Villani, E. G.; Villate, J.; Villella, I.; Vilucchi, E.; Vincent, P.; Vincke, H.; Vincter, M. G.; Vinogradov, V. B.; Virchaux, M.; Viret, S.; Virzi, J.; Vitale, A.; Vivarelli, I.; Vives, R.; Vives Vaques, F.; Vlachos, S.; Vogt, H.; Vokac, P.; Vollmer, C. F.; Volpi, M.; Volpini, G.; von Boehn-Buchholz, R.; von der Schmitt, H.; von Toerne, E.; Vorobel, V.; Vorobiev, A. P.; Vorozhtsov, A. S.; Vorozhtsov, S. B.; Vos, M.; Voss, K. C.; Voss, R.; Vossebeld, J. H.; Vovenko, A. S.; Vranjes, N.; Vrba, V.; Vreeswijk, M.; Anh, T. Vu; Vuaridel, B.; Vudragovic, M.; Vuillemin, V.; Vuillermet, R.; Wänanen, A.; Wahlen, H.; Walbersloh, J.; Walker, R.; Walkowiak, W.; Wall, R.; Wallny, R. S.; Walsh, S.; Wang, C.; Wang, J. 
C.; Wappler, F.; Warburton, A.; Ward, C. P.; Warner, G. P.; Warren, M.; Warsinsky, M.; Wastie, R.; Watkins, P. M.; Watson, A. T.; Watts, G.; Waugh, A. T.; Waugh, B. M.; Weaverdyck, C.; Webel, M.; Weber, G.; Weber, J.; Weber, M.; Weber, P.; Weidberg, A. R.; Weilhammer, P. M.; Weingarten, J.; Weiser, C.; Wellenstein, H.; Wellisch, H. P.; Wells, P. S.; Wemans, A.; Wen, M.; Wenaus, T.; Wendler, S.; Wengler, T.; Wenig, S.; Wermes, N.; Werneke, P.; Werner, P.; Werthenbach, U.; Wheeler-Ellis, S. J.; Whitaker, S. P.; White, A.; White, M. J.; White, S.; Whittington, D.; Wicek, F.; Wicke, D.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiesmann, M.; Wiesmann, M.; Wijnen, T.; Wildauer, A.; Wilhelm, I.; Wilkens, H. G.; Williams, H. H.; Willis, W.; Willocq, S.; Wilmut, I.; Wilson, J. A.; Wilson, A.; Wingerter-Seez, I.; Winton, L.; Witzeling, W.; Wlodek, T.; Woehrling, E.; Wolter, M. W.; Wolters, H.; Wosiek, B.; Wotschack, J.; Woudstra, M. J.; Wright, C.; Wu, S. L.; Wu, X.; Wuestenfeld, J.; Wunstorf, R.; Xella-Hansen, S.; Xiang, A.; Xie, S.; Xie, Y.; Xu, G.; Xu, N.; Yamamoto, A.; Yamamoto, S.; Yamaoka, H.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, J. C.; Yang, S.; Yang, U. K.; Yang, Y.; Yang, Z.; Yao, W.-M.; Yao, Y.; Yarradoddi, K.; Yasu, Y.; Ye, J.; Yilmaz, M.; Yoosoofmiya, R.; Yorita, K.; Yoshida, H.; Yoshida, R.; Young, C.; Youssef, S. P.; Yu, D.; Yu, J.; Yu, M.; Yu, X.; Yuan, J.; Yurkewicz, A.; Zaets, V. G.; Zaidan, R.; Zaitsev, A. M.; Zajac, J.; Zajacova, Z.; Zalite, A. Yu; Zalite, Yo K.; Zanello, L.; Zarzhitsky, P.; Zaytsev, A.; Zdrazil, M.; Zeitnitz, C.; Zeller, M.; Zema, P. F.; Zendler, C.; Zenin, A. V.; Zenis, T.; Zenonos, Z.; Zenz, S.; Zerwas, D.; Zhang, H.; Zhang, J.; Zheng, W.; Zhang, X.; Zhao, L.; Zhao, T.; Zhao, X.; Zhao, Z.; Zhelezko, A.; Zhemchugov, A.; Zheng, S.; Zhichao, L.; Zhou, B.; Zhou, N.; Zhou, S.; Zhou, Y.; Zhu, C. G.; Zhu, H. Z.; Zhuang, X. A.; Zhuravlov, V.; Zilka, B.; Zimin, N. I.; Zimmermann, S.; Ziolkowski, M.; Zitoun, R.; Zivkovic, L.; Zmouchko, V. V.; Zobernig, G.; Zoccoli, A.; Zoeller, M. M.; Zolnierowski, Y.; Zsenei, A.; zur Nedden, M.; Zychacek, V.

    2008-08-01

    The ATLAS detector as installed in its experimental cavern at point 1 at CERN is described in this paper. A brief overview of the expected performance of the detector when the Large Hadron Collider begins operation is also presented.

  19. CERN IRRADIATION FACILITIES.

    PubMed

    Pozzi, Fabio; Garcia Alia, Ruben; Brugger, Markus; Carbonez, Pierre; Danzeca, Salvatore; Gkotse, Blerina; Richard Jaekel, Martin; Ravotti, Federico; Silari, Marco; Tali, Maris

    2017-09-28

    CERN provides unique irradiation facilities for applications in dosimetry, metrology, intercomparison of radiation protection devices, benchmarking of Monte Carlo codes, and radiation damage studies of electronics.

  20. Towards a 21st century telephone exchange at CERN

    NASA Astrophysics Data System (ADS)

    Valentín, F.; Hesnaux, A.; Sierra, R.; Chapron, F.

    2015-12-01

    The advent of mobile telephony and Voice over IP (VoIP) has significantly impacted the traditional telephone exchange industry, to such an extent that private branch exchanges are likely to disappear completely in the near future. For large organisations such as CERN, it is important to smooth this transition by implementing new multimedia platforms that protect past investments while providing the flexibility needed to securely interconnect emerging VoIP solutions and forthcoming developments such as Voice over LTE (VoLTE). We present the results of ongoing studies and tests at CERN of the latest technologies in this area.

  1. Ageing Studies on the First Resistive-MicroMeGaS Quadruplet at GIF++: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Alvarez Gonzalez, B.; Bianco, M.; Farina, E.; Iengo, P.; Kuger, F.; Lin, T.; Longo, L.; Sekhniaidze, G.; Sidiropoulou, O.; Schott, M.; Valderanis, C.; Wotschack, J.

    2018-02-01

    A resistive-MicroMeGaS quadruplet built at CERN has been installed at the new CERN Gamma Irradiation Facility (GIF++) with the aim of carrying out a long-term ageing study. Two smaller resistive bulk-MicroMeGaS produced at the CERN PCB workshop have also been installed at GIF++ in order to provide a comparison of the ageing behavior with the MicroMeGaS quadruplet. We give an overview of the ongoing tests at GIF++ in terms of particle rate, integrated charge and spatial resolution of the MicroMeGaS detectors.
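
    The figure of merit in such an ageing study is the charge integrated by the detector over the irradiation period. As a minimal illustration (not the analysis code used at GIF++, and with purely hypothetical current readings), the accumulated charge can be obtained by numerically integrating the measured anode current over time:

      # Minimal sketch: integrate a measured anode current over time to obtain the
      # accumulated charge quoted in detector ageing studies. The samples below
      # are hypothetical placeholders, not GIF++ data.
      import numpy as np

      t = np.array([0.0, 3600.0, 7200.0, 10800.0])           # time stamps [s]
      i_anode = np.array([2.1e-6, 2.0e-6, 2.2e-6, 2.1e-6])   # anode current [A]

      q_integrated = np.trapz(i_anode, t)                     # charge [C]
      print(f"Integrated charge: {q_integrated * 1e3:.2f} mC")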

  2. Media Training

    ScienceCinema

    None

    2017-12-09

    With the LHC starting up soon, the world's media are again turning their attention to CERN. We're all likely to be called upon to explain what is happening at CERN to media, friends and neighbours. The seminar will be given by BBC television news journalists Liz Pike and Nadia Marchant, and will deal with the kind of questions we're likely to be confronted with during the restart period. The training is open to everybody. Make sure you arrive early enough to get a seat - there are only 200 seats in the Globe. The session will also be webcast: http://webcast.cern.ch/

  3. The significance of Cern

    ScienceCinema

    None

    2017-12-09

    Prof. V. Weisskopf, Director-General of CERN from 1961 to 1965, was born in Vienna, studied in Göttingen and had a particularly rich academic career. He worked in Berlin and Copenhagen, then left for the United States to take part in the Manhattan Project, and was a professor at MIT until 1960. Back in Europe, he became Director-General of CERN and gave it the impetus that we know today.

  4. HIGH ENERGY PHYSICS: CERN Link Breathes Life Into Russian Physics.

    PubMed

    Stone, R

    2000-10-13

    Without fanfare, 600 Russian scientists here at CERN, the European particle physics laboratory, are playing key roles in building the Large Hadron Collider (LHC), a machine that will explore fundamental questions such as why particles have mass, as well as search for exotic new particles whose existence would confirm supersymmetry, a popular theory that aims to unify the four forces of nature. In fact, even though Russia is not one of CERN's 20 member states, most top high-energy physicists in Russia are working on the LHC. Some say their work could prove the salvation of high-energy physics back home.

  5. Building an organic block storage service at CERN with Ceph

    NASA Astrophysics Data System (ADS)

    van der Ster, Daniel; Wiebalck, Arne

    2014-06-01

    Emerging storage requirements, such as the need for block storage for both OpenStack VMs and file services like AFS and NFS, have motivated the development of a generic backend storage service for CERN IT. The goals for such a service include (a) vendor neutrality, (b) horizontal scalability with commodity hardware, (c) fault tolerance at the disk, host, and network levels, and (d) support for geo-replication. Ceph is an attractive option due to its native block device layer RBD which is built upon its scalable, reliable, and performant object storage system, RADOS. It can be considered an "organic" storage solution because of its ability to balance and heal itself while living on an ever-changing set of heterogeneous disk servers. This work will present the outcome of a petabyte-scale test deployment of Ceph by CERN IT. We will first present the architecture and configuration of our cluster, including a summary of best practices learned from the community and discovered internally. Next the results of various functionality and performance tests will be shown: the cluster has been used as a backend block storage system for AFS and NFS servers as well as a large OpenStack cluster at CERN. Finally, we will discuss the next steps and future possibilities for Ceph at CERN.
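
    As a minimal sketch of the block-device layer mentioned above, the python-rados and python-rbd bindings can be used to create and list RBD images in a pool; the pool name, image name and configuration path below are illustrative assumptions rather than the CERN setup:

      # Hedged sketch using the python-rados / python-rbd bindings: create a 10 GiB
      # block-device image in a pool and list the images it contains. Pool name,
      # image name and ceph.conf path are placeholders, not the CERN configuration.
      import rados
      import rbd

      cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
      cluster.connect()
      ioctx = cluster.open_ioctx('volumes')          # 'volumes' is a hypothetical pool
      try:
          rbd_inst = rbd.RBD()
          rbd_inst.create(ioctx, 'test-image', 10 * 1024 ** 3)   # 10 GiB image
          print("Images in pool:", rbd_inst.list(ioctx))
      finally:
          ioctx.close()
          cluster.shutdown()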

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. Like its predecessors, this school is meant primarily for the training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union; it is funded by the CERN Theory Division and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McAllister, Liam

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. Like its predecessors, this school is meant primarily for the training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union; it is funded by the CERN Theory Division and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McAllister, Liam

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. Like its predecessors, this school is meant primarily for the training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union; it is funded by the CERN Theory Division and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. Like its predecessors, this school is meant primarily for the training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union; it is funded by the CERN Theory Division and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. Like its predecessors, this school is meant primarily for the training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union; it is funded by the CERN Theory Division and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Ashoke

    Part 7. The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. Like its predecessors, this school is meant primarily for the training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union; it is funded by the CERN Theory Division and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. Like its predecessors, this school is meant primarily for the training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union; it is funded by the CERN Theory Division and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Ashoke

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. Like its predecessors, this school is meant primarily for the training of doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union; it is funded by the CERN Theory Division and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher.

  14. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molina-Perez, J.; Bonacorsi, D.; Gutsche, O.

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including the most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS: the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, while the latter measures the quality of service and warns managers when a service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel contributes to detecting and reacting in a timely manner to any unexpected error, and hence ensures that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.
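
    The threshold-based alarming described above can be illustrated with a small generic sketch (it is not the Lemon or SLS code; metric names and limits are invented): metrics collected from a node are compared against allowed ranges and an alarm is raised for each value that falls outside its range.

      # Generic sketch of threshold-based alarming in the spirit described above;
      # not the Lemon/SLS implementation. Metric names and limits are invented.
      from dataclasses import dataclass

      @dataclass
      class Metric:
          name: str
          value: float
          low: float
          high: float

          def in_range(self) -> bool:
              return self.low <= self.value <= self.high

      def evaluate(metrics):
          """Return an alarm message for every metric outside its allowed range."""
          return [f"ALARM {m.name}={m.value} outside [{m.low}, {m.high}]"
                  for m in metrics if not m.in_range()]

      if __name__ == "__main__":
          samples = [Metric("disk_used_pct", 97.0, 0.0, 90.0),
                     Metric("load_avg", 3.2, 0.0, 8.0)]
          for alarm in evaluate(samples):
              print(alarm)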

  15. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    NASA Astrophysics Data System (ADS)

    Gomez, Andres; Lara, Camilo; Kebschull, Udo

    2015-12-01

    Grids allow users flexible on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is the one used by the ALICE experiment at CERN, the European Organization for Nuclear Research. Physicists can submit jobs to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges: they are attractive targets for attackers seeking large computational resources. Since users can execute arbitrary code on the worker nodes of the Grid sites, special care must be taken in this environment, and automatic tools to harden and monitor it are required. Currently, there is no integrated solution for this requirement. This paper describes a new security framework that allows execution of job payloads in a sandboxed context. It also monitors process behavior with a Machine Learning approach in order to detect intrusions, even when new attack methods or zero-day vulnerabilities are exploited. We plan to implement the proposed framework as a software prototype that will be tested as a component of the ALICE Grid middleware.
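
    The Machine Learning side of such behavior monitoring can be sketched, purely for illustration, with an off-the-shelf anomaly detector: train on feature vectors extracted from normal job payloads (e.g. syscall counts, CPU share, bytes sent) and flag outliers. The features and numbers below are invented, and this is not the ALICE framework itself.

      # Illustrative sketch of ML-based anomaly detection on process behaviour;
      # not the actual ALICE framework. Feature vectors (syscall count, CPU share,
      # network bytes) and all numbers are invented.
      import numpy as np
      from sklearn.ensemble import IsolationForest

      rng = np.random.default_rng(0)
      normal_jobs = rng.normal(loc=[50, 0.6, 1e6], scale=[10, 0.1, 2e5], size=(200, 3))

      model = IsolationForest(contamination=0.01, random_state=0).fit(normal_jobs)

      suspicious_job = np.array([[400, 0.95, 5e7]])   # unusually many syscalls / traffic
      print("anomaly" if model.predict(suspicious_job)[0] == -1 else "normal")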

  16. The VISPA internet platform for outreach, education and scientific research in various experiments

    NASA Astrophysics Data System (ADS)

    van Asseldonk, D.; Erdmann, M.; Fischer, B.; Fischer, R.; Glaser, C.; Heidemann, F.; Müller, G.; Quast, T.; Rieger, M.; Urban, M.; Welling, C.

    2015-12-01

    VISPA provides a graphical front-end to computing infrastructures, giving its users all the functionality needed for working conditions comparable to a personal computer. It is a framework that can be extended with custom applications to support individual needs, e.g. graphical interfaces for experiment-specific software. By design, VISPA serves as a multipurpose platform for many disciplines and experiments, as demonstrated by the following use cases: a GUI to the OFFLINE analysis framework of the Pierre Auger collaboration, submission and monitoring of computing jobs, university teaching of hundreds of students, and outreach activities, especially within CERN's open data initiative. Serving heterogeneous user groups and applications has given us considerable experience, which helps us mature the system, i.e. improve its robustness and responsiveness and the interplay of its components. Among the lessons learned are the choice of a file system, the implementation of websockets, efficient load balancing, and the fine-tuning of existing technologies such as RPC over SSH. We present in detail the improved server setup and report on the performance, the user acceptance and the realized applications of the system.
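
    One of the building blocks mentioned above, remote command execution in the spirit of RPC over SSH, can be sketched with the paramiko library; the host and user names are placeholders, and this is not the VISPA implementation.

      # Hedged sketch of running a command on a remote resource over SSH with
      # paramiko, loosely illustrating the "RPC over SSH" idea; host and user
      # names are placeholders, and this is not the VISPA code.
      import paramiko

      client = paramiko.SSHClient()
      client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
      client.connect("compute.example.org", username="vispa_user")   # placeholders

      _, stdout, _ = client.exec_command("echo hello from the remote side")
      print(stdout.read().decode().strip())
      client.close()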

  17. CLOUDCLOUD: general-purpose instrument monitoring and data managing software

    NASA Astrophysics Data System (ADS)

    Dias, António; Amorim, António; Tomé, António

    2016-04-01

    An effective experiment depends on the ability to store and deliver data and information to all participant parties, regardless of their degree of involvement in the specific parts that make the experiment a whole. Having fast, efficient and ubiquitous access to data increases visibility and discussion, such that the outcome will already have been reviewed several times, strengthening the conclusions. The CLOUD project aims at providing users with a general-purpose data acquisition, management and instrument monitoring platform that is fast, easy to use, lightweight and accessible to all participants of an experiment. This work is now implemented in the CLOUD experiment at CERN and will be fully integrated with the experiment as of 2016. Despite being used in an experiment of the scale of CLOUD, this software can also be used in any size of experiment or monitoring station, from single computers to large networks of computers, to monitor any sort of instrument output without influencing the individual instrument's DAQ. Instrument data and metadata are stored and accessed via a specially designed database architecture, and any type of instrument output is accepted by our continuously growing parsing application. Multiple databases can be used to separate different data-taking periods, or a single database can be used if, for instance, an experiment is continuous. A simple web-based application gives the user total control over the monitored instruments and their data, allowing data visualization and download, upload of processed data, and the ability to edit existing instruments or add new instruments to the experiment. When in a network, new computers are immediately recognized and added to the system, and are able to monitor instruments connected to them. Automatic computer integration is achieved by a locally running python-based parsing agent that communicates with a main server application, guaranteeing that all instruments assigned to that computer are monitored with parsing intervals as fast as milliseconds. This software (server + agents + interface + database) comes in easy, ready-to-use packages that can be installed on any operating system, including Android and iOS. It is ideal for use in modular experiments or monitoring stations with large variability in instruments and measuring methods, or in large collaborations where data require homogenization in order to be effectively transmitted to all involved parties. This work presents the software and provides a performance comparison with previously used monitoring systems in the CLOUD experiment at CERN.
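
    The locally running parsing agent described above can be sketched as a small polling loop that reads new instrument records, parses them and forwards them to the central server; the file path, record format and server URL are invented placeholders rather than the CLOUD configuration.

      # Hedged sketch of a local parsing agent: tail an instrument's output file,
      # parse each new record and forward it to a central server. File path,
      # record format and server URL are invented placeholders.
      import time
      import requests

      SERVER_URL = "http://cloud-monitor.example.org/api/readings"   # placeholder
      INSTRUMENT_FILE = "instrument_output.txt"                      # placeholder

      def parse_line(line):
          """Expect 'timestamp,value' records; return None for unparsable lines."""
          try:
              timestamp, value = line.strip().split(",")
              return {"timestamp": timestamp, "value": float(value)}
          except ValueError:
              return None

      with open(INSTRUMENT_FILE) as f:
          while True:
              line = f.readline()
              if not line:                # no new data yet: poll again shortly
                  time.sleep(0.1)
                  continue
              reading = parse_line(line)
              if reading is not None:
                  requests.post(SERVER_URL, json=reading, timeout=5)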

  18. CERN goes iconic

    NASA Astrophysics Data System (ADS)

    2017-06-01

    There are more than 1800 emoji that can be sent and received in text messages and e-mails. Now, the CERN particle-physics lab near Geneva has got in on the act and released its own collection of 35 images that can be used by anyone with an Apple device.

  19. Neutrino Factory Plans at CERN

    NASA Astrophysics Data System (ADS)

    Riche, J. A.

    2002-10-01

    The considerable interest raised by the discovery of neutrino oscillations, together with recent progress in studies of muon colliders, has prompted the consideration of a neutrino factory at CERN. This paper explains the reference scenario, indicates the other possible choices and mentions the R&D that is foreseen.

  20. Wi-Fi Service enhancement at CERN

    NASA Astrophysics Data System (ADS)

    Ducret, V.; Sosnowski, A.; Gonzalez Caballero, B.; Barrand, Q.

    2017-10-01

    Since the early 2000s, the number of mobile devices connected to CERN's internal network has increased from just a handful to well over 10,000. Wireless access is no longer simply "nice to have" or just for conference and meeting rooms; support for mobility is expected by most, if not all, of the CERN community. In this context, a full renewal of the CERN Wi-Fi network has been launched to deliver a state-of-the-art, campus-wide Wi-Fi infrastructure. We aim to deliver, in more than 200 office buildings with a surface area of over 400,000 m² and including many high-priority and high-occupation zones, an end-user experience comparable, for most applications, to a wired connection, with seamless mobility support. We describe here the studies and tests performed at CERN to ensure that the solution we are deploying can meet these goals while delivering a single, simple, flexible and open management platform.

  1. Thermostructural characterization and structural elastic property optimization of novel high luminosity LHC collimation materials at CERN

    NASA Astrophysics Data System (ADS)

    Borg, M.; Bertarelli, A.; Carra, F.; Gradassi, P.; Guardia-Valenzuela, J.; Guinchard, M.; Izquierdo, G. Arnau; Mollicone, P.; Sacristan-de-Frutos, O.; Sammut, N.

    2018-03-01

    The CERN Large Hadron Collider is currently being upgraded, through the High Luminosity upgrade, to operate at a stored beam energy of 680 MJ. LHC performance depends on the functionality of the beam collimation systems, which are essential for safe beam cleaning and machine protection. A dedicated beam experiment at the CERN High Radiation to Materials facility has been created under the HRMT-23 experimental campaign. This experiment investigates the behavior of three collimation jaws with novel composite absorbers made of copper diamond, molybdenum carbide graphite, and carbon fiber carbon, subjected to accident scenarios involving direct beam impact on the material. Material characterization is imperative for the design, execution, and analysis of such experiments. This paper presents new data and analysis of the thermostructural characteristics of some of the absorber materials, commissioned within CERN facilities. The characterized elastic properties are then optimized through the development and implementation of a mixed numerical-experimental optimization technique.
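
    The mixed numerical-experimental idea can be illustrated, in a much simplified form, by adjusting a model parameter until predicted modal frequencies match measured ones. The sketch below is not the optimization technique developed at CERN; the analytical beam model, the material values and the "measured" frequencies are all illustrative assumptions.

```python
"""Sketch of a mixed numerical-experimental identification of an elastic modulus.

NOT the CERN technique described in the abstract: it only illustrates the idea
of tuning a model parameter (here Young's modulus E) so that predicted natural
frequencies of a simple cantilever-beam model match 'measured' ones.
All numbers are illustrative.
"""
import numpy as np
from scipy.optimize import least_squares

# Cantilever-beam geometry and density (illustrative values).
L, b, h = 0.30, 0.02, 0.005            # length, width, thickness [m]
rho = 1800.0                            # density [kg/m^3]
A, I = b * h, b * h**3 / 12.0           # cross-section area and second moment of area
LAMBDA = np.array([1.875, 4.694, 7.855])  # first three cantilever eigenvalues


def model_frequencies(E: float) -> np.ndarray:
    """Analytical bending natural frequencies [Hz] of a cantilever beam."""
    return (LAMBDA**2 / (2.0 * np.pi)) * np.sqrt(E * I / (rho * A * L**4))


# Pretend these were measured in a modal test (invented numbers).
f_measured = np.array([52.0, 326.0, 915.0])


def residuals(params: np.ndarray) -> np.ndarray:
    return model_frequencies(params[0]) - f_measured


result = least_squares(residuals, x0=[50e9])  # start from a 50 GPa guess
print(f"Identified Young's modulus: {result.x[0] / 1e9:.1f} GPa")
```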

  2. Highlights from the CERN/ESO/NordForsk ''Gender in Physics Day''

    NASA Astrophysics Data System (ADS)

    Primas, F.; Guinot, G.; Strandberg, L.

    2017-03-01

    In their role as observers on the EU Gender Equality Network in the European Research Area (GENERA) project, funded under the Horizon 2020 framework, CERN, ESO and NordForsk joined forces and organised a Gender in Physics Day at the CERN Globe of Science and Innovation. The one-day conference aimed to examine innovative activities promoting gender equality, and to discuss gender-oriented policies and best practice in the European Research Area (with special emphasis on intergovernmental organisations), as well as the importance of building solid networks. The event was very well attended and was declared a success. The main highlights of the meeting are reported.

  3. Dissemination of data measured at the CERN n_TOF facility

    NASA Astrophysics Data System (ADS)

    Dupont, E.; Otuka, N.; Cabellos, O.; Aberle, O.; Aerts, G.; Altstadt, S.; Alvarez, H.; Alvarez-Velarde, F.; Andriamonje, S.; Andrzejewski, J.; Audouin, L.; Bacak, M.; Badurek, G.; Balibrea, J.; Barbagallo, M.; Barros, S.; Baumann, P.; Bécares, V.; Bečvář, F.; Beinrucker, C.; Belloni, F.; Berthier, B.; Berthoumieux, E.; Billowes, J.; Boccone, V.; Bosnar, D.; Brown, A.; Brugger, M.; Caamaño, M.; Calviani, M.; Calviño, F.; Cano-Ott, D.; Capote, R.; Cardella, R.; Carrapiço, C.; Casanovas, A.; Castelluccio, D. M.; Cennini, P.; Cerutti, F.; Chen, Y. H.; Chiaveri, E.; Chin, M.; Colonna, N.; Cortés, G.; Cortés-Giraldo, M. A.; Cosentino, L.; Couture, A.; Cox, J.; Damone, L. A.; David, S.; Deo, K.; Diakaki, M.; Dillmann, I.; Domingo-Pardo, C.; Dressler, R.; Dridi, W.; Duran, I.; Eleftheriadis, C.; Embid-Segura, M.; Fernández-Domínguez, B.; Ferrant, L.; Ferrari, A.; Ferreira, P.; Finocchiaro, P.; Fraval, K.; Frost, R. J. W.; Fujii, K.; Furman, W.; Ganesan, S.; Garcia, A. R.; Gawlik, A.; Gheorghe, I.; Gilardoni, S.; Giubrone, G.; Glodariu, T.; Göbel, K.; Gomez-Hornillos, M. B.; Goncalves, I. F.; Gonzalez-Romero, E.; Goverdovski, A.; Gramegna, F.; Griesmayer, E.; Guerrero, C.; Gunsing, F.; Gurusamy, P.; Haight, R.; Harada, H.; Heftrich, T.; Heil, M.; Heinitz, S.; Hernández-Prieto, A.; Heyse, J.; Igashira, M.; Isaev, S.; Jenkins, D. G.; Jericha, E.; Kadi, Y.; Kaeppeler, F.; Kalamara, A.; Karadimos, D.; Karamanis, D.; Katabuchi, T.; Kavrigin, P.; Kerveno, M.; Ketlerov, V.; Khryachkov, V.; Kimura, A.; Kivel, N.; Kokkoris, M.; Konovalov, V.; Krtička, M.; Kroll, J.; Kurtulgil, D.; Lampoudis, C.; Langer, C.; Leal-Cidoncha, E.; Lederer, C.; Leeb, H.; Naour, C. Le; Lerendegui-Marco, J.; Leong, L. S.; Licata, M.; Meo, S. Lo; Lonsdale, S. J.; Losito, R.; Lozano, M.; Macina, D.; Manousos, A.; Marganiec, J.; Martinez, T.; Marrone, S.; Masi, A.; Massimi, C.; Mastinu, P.; Mastromarco, M.; Matteucci, F.; Maugeri, E. A.; Mazzone, A.; Mendoza, E.; Mengoni, A.; Milazzo, P. M.; Mingrone, F.; Mirea, M.; Mondelaers, W.; Montesano, S.; Moreau, C.; Mosconi, M.; Musumarra, A.; Negret, A.; Nolte, R.; O'Brien, S.; Oprea, A.; Palomo-Pinto, F. R.; Pancin, J.; Paradela, C.; Patronis, N.; Pavlik, A.; Pavlopoulos, P.; Perkowski, J.; Perrot, L.; Pigni, M. T.; Plag, R.; Plompen, A.; Plukis, L.; Poch, A.; Porras, I.; Praena, J.; Pretel, C.; Quesada, J. M.; Radeck, D.; Rajeev, K.; Rauscher, T.; Reifarth, R.; Riego, A.; Robles, M.; Roman, F.; Rout, P. C.; Rudolf, G.; Rubbia, C.; Rullhusen, P.; Ryan, J. A.; Sabaté-Gilarte, M.; Salgado, J.; Santos, C.; Sarchiapone, L.; Sarmento, R.; Saxena, A.; Schillebeeckx, P.; Schmidt, S.; Schumann, D.; Sedyshev, P.; Smith, A. G.; Sosnin, N. V.; Stamatopoulos, A.; Stephan, C.; Suryanarayana, S. V.; Tagliente, G.; Tain, J. L.; Tarifeño-Saldivia, A.; Tarrío, D.; Tassan-Got, L.; Tavora, L.; Terlizzi, R.; Tsinganis, A.; Valenta, S.; Vannini, G.; Variale, V.; Vaz, P.; Ventura, A.; Versaci, R.; Vermeulen, M. J.; Villamarin, D.; Vicente, M. C.; Vlachoudis, V.; Vlastou, R.; Voss, F.; Wallner, A.; Walter, S.; Ware, T.; Warren, S.; Weigand, M.; Weiß, C.; Wolf, C.; Wiesher, M.; Wisshak, K.; Woods, P. J.; Wright, T.; Žugec, P.

    2017-09-01

    The n_TOF neutron time-of-flight facility at CERN is used for high quality nuclear data measurements from thermal energy up to hundreds of MeV. In line with the CERN open data policy, the n_TOF Collaboration takes actions to preserve its unique data, facilitate access to them in standardised format, and allow their re-use by a wide community in the fields of nuclear physics, nuclear astrophysics and various nuclear technologies. The present contribution briefly describes the n_TOF outcomes, as well as the status of dissemination and preservation of n_TOF final data in the international EXFOR library.

  4. How to create successful Open Hardware projects — About White Rabbits and open fields

    NASA Astrophysics Data System (ADS)

    van der Bij, E.; Arruat, M.; Cattin, M.; Daniluk, G.; Gonzalez Cobas, J. D.; Gousiou, E.; Lewis, J.; Lipinski, M. M.; Serrano, J.; Stana, T.; Voumard, N.; Wlostowski, T.

    2013-12-01

    CERN's accelerator control group has embraced ''Open Hardware'' (OH) to facilitate peer review, avoid vendor lock-in and make support tasks scalable. A web-based tool for easing collaborative work was set up and the CERN OH Licence was created. New ADC, TDC, fine-delay and carrier cards based on VITA and PCI-SIG standards were designed, and drivers for Linux were written. Industry was often paid for developments, while quality and documentation were controlled by CERN. An innovative timing network was also developed under the OH paradigm. Industry now sells and supports these designs, which are finding their way into new fields.

  5. Preparation of a primary argon beam for the CERN fixed target physics.

    PubMed

    Küchler, D; O'Neil, M; Scrivens, R; Thomae, R

    2014-02-01

    The fixed target experiment NA61 in the North Area of the Super Proton Synchrotron is studying phase transitions in strongly interacting matter. Up to now they used the primary beams available from the CERN accelerator complex (protons and lead ions) or fragmented beams created from the primary lead ion beam. To explore a wider range of energies and densities a request was made to provide primary argon and xenon beams. This paper describes the results of the setting up and 10 week test run of the Ar(11+) beam from the 14.5 GHz ECR ion source and the linear accelerator (Linac3) at CERN.

  6. The keys to CERN conference rooms - Managing local collaboration facilities in large organisations

    NASA Astrophysics Data System (ADS)

    Baron, T.; Domaracky, M.; Duran, G.; Fernandes, J.; Ferreira, P.; Gonzalez Lopez, J. B.; Jouberjean, F.; Lavrut, L.; Tarocco, N.

    2014-06-01

    For a long time HEP has been ahead of the curve in its use of remote collaboration tools, such as videoconferencing and webcast, while the local CERN collaboration facilities lagged somewhat behind the expected quality standards for various reasons. That period ended with the creation by the CERN IT department, in 2012, of an integrated conference room service which provides guidance and installation services for new rooms (whether equipped for videoconference or not), as well as maintenance and local support. Now managing nearly half of the 246 meeting rooms available on the CERN sites, this service has been built to cope with the management of all CERN rooms with limited human resources. This has been made possible by the intensive use of professional software to manage and monitor all the room equipment, maintenance and activity. This paper focuses on presenting these packages, either off-the-shelf commercial products (asset and maintenance management tools, remote audio-visual equipment monitoring systems, local automation devices, new-generation touch-screen interfaces for interacting with the room) when available, or locally developed integration and operational layers (a generic audio-visual control and monitoring framework), and on how they help overcome the challenges presented by such a service. The aim is to minimise local human interventions while preserving the highest service quality and placing the end user back at the centre of this collaboration platform.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The CERN Winter School on Supergravity, Strings, and Gauge Theory is the analytic continuation of the yearly training school of the former EC-RTN string network "Constituents, Fundamental Forces and Symmetries of the Universe". The 2010 edition of the school is supported and organized by the CERN Theory Division, and will take place from Monday January 25 to Friday January 29, at CERN. Like its predecessors, this school is meant primarily for training doctoral students and young postdoctoral researchers in recent developments in theoretical high-energy physics and string theory. The programme of the school will consist of five series of pedagogical lectures, complemented by tutorial discussion sessions in the afternoons. Previous schools in this series were organized in 2005 at SISSA in Trieste, and in 2006, 2007, 2008, and 2009 at CERN, Geneva. Other similar schools have been organized in the past by the former related RTN network "The Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions". This edition of the school is not funded by the European Union; it is funded by the CERN Theory Division and the Arnold Sommerfeld Center at Ludwig-Maximilians University of Munich. Scientific committee: M. Gaberdiel, D. Luest, A. Sevrin, J. Simon, K. Stelle, S. Theisen, A. Uranga, A. Van Proeyen, E. Verlinde. Local organizers: A. Uranga, J. Walcher. This video is Part 11 in the series.

  8. A possible biomedical facility at the European Organization for Nuclear Research (CERN).

    PubMed

    Dosanjh, M; Jones, B; Myers, S

    2013-05-01

    A well-attended meeting, called "Brainstorming discussion for a possible biomedical facility at CERN", was held by the European Organization for Nuclear Research (CERN) at the European Laboratory for Particle Physics on 25 June 2012. This was concerned with adapting an existing, but little used, 78-m circumference CERN synchrotron to deliver a wide range of ion species, preferably from protons to at least neon ions, with beam specifications that match existing clinical facilities. The potential extensive research portfolio discussed included beam ballistics in humanoid phantoms, advanced dosimetry, remote imaging techniques and technical developments in beam delivery, including gantry design. In addition, a modern laboratory for biomedical characterisation of these beams would allow important radiobiological studies, such as relative biological effectiveness, in a dedicated facility with standardisation of experimental conditions and biological end points. A control photon and electron beam would be required nearby for relative biological effectiveness comparisons. Research beam time availability would far exceed that at other facilities throughout the world. This would allow more rapid progress in several biomedical areas, such as in charged hadron therapy of cancer, radioisotope production and radioprotection. The ethos of CERN, in terms of open access, peer-reviewed projects and governance has been so successful for High Energy Physics that application of the same to biomedicine would attract high-quality research, with possible contributions from Europe and beyond, along with potential new funding streams.

  9. The transfer of land resources information into the public sector—The Texas experience

    NASA Astrophysics Data System (ADS)

    Wermund, E. G.

    1980-03-01

    Mapping of land resources and environmental geology was initiated in Texas toward better communication of geology to the public policy sector. Relevant mapping parameters have included terrain, substrate, active processes, economic resources, and hydrology as well as physical, chemical, and biologic properties. Land resources maps and reports have been prepared for public agencies and published for technical and nontechnical readers; sales of these articles are one indicator of public policy transfer. Single lectures or participation in symposia and colloquia for scientific societies have been valuable only for peer review or as a means to sharpen communicative skills. The most successful mechanisms of public policy transfer have been (1) in-state workshops and short courses for elected officials, Governmental employees, and interested citizens; (2) legislative testimonies; (3) active participation on interagency committees; (4) reviews and comments on planning statements; and (5) a temporary loan of personnel to another agency. Areas where these methods successfully have impacted public policy are reflected in the present quality of Section 208, Section 701, and coastal zone management planning; applications for surface-mining permits; and environmental impact statement records in Texas.

  10. COSPAR, a platform for international cooperation in space research

    NASA Astrophysics Data System (ADS)

    Fellous, Jean-Louis

    The Committee on Space Research (COSPAR) was founded by the International Council for Science (ICSU) in 1958, with the aim of fostering dialogue between the USSR and the USA at the time of the Cold War. Fifty-six years later, COSPAR is continuing its mission of service to the worldwide space research community. Thousands of scientists attend COSPAR assemblies, read and publish their results in its journals, and participate in its workshops, colloquia and symposia, but many are unaware of the wealth of activities that COSPAR undertakes or supports. Many do not know the processes through which the organisation develops its activities, how it is structured, how to get involved in its governance, how to promote new initiatives with its help, and so on. Young space scientists often do not know the history of COSPAR and the prominent roles it has played, past and present, and, more importantly, need to understand better the benefits that can accrue from their involvement within COSPAR. This presentation will review these aspects and offer all interested scientists a detailed overview of COSPAR activities and plans for the future.

  11. Live Webcasts from CERN and ESO for European Science and Technology Week

    NASA Astrophysics Data System (ADS)

    2002-10-01

    Visit http://www.cern.ch/sci-tech on 7 - 8 November to find out what modern Europeans can't live without. Seven of Europe's leading Research Organizations [1] are presenting three live Webcasts from CERN in a joint outreach programme for the European Science and Technology Week . The aim of Sci-Tech... couldn't be without it! is to show how today's society couldn't be without cutting-edge scientific research. See also ESO Press Release 05/02. Northern Europeans can't imagine their households without ovens, whereas Southern Europeans identify the refrigerator as the most essential household appliance. In the area of communications, cars and motorbikes are clearly the technologies of choice in Italy, but are regarded as less important in countries like Norway and Germany. For entertainment, the personal computer is a clear winner as the device is considered most essential by all Europeans, followed by the TV and the Internet. This hit parade of technological marvels is the result of a phone and online survey conducted by the Sci-Tech... couldn't be without it! team for this year's European Science and Technology Week on 4-10 November. The technologies Europeans could not be without, form the starting point of three entertaining and informative Webcast shows in Italian (Thursday 7 November at 10:00 CET), French (Thursday 7 November at 15:00 CET) and English (Friday 8 November at 15:00 CET), broadcast live on the Internet from a studio at CERN. During these Webcasts scientists from the seven research Organizations and their industrial partners Sun Microsystems, Siemens, L'Oreal and Luminex will engage - from the CERN studio or from remote locations through teleconference links - an audience of Internauts all over the world. The public will be taken inside their most popular gadgets to discover the science that made them possible and how vital fundamental research has been in the creation of modern technology. Fundamental science will be brought as close as possible to people's daily lives by showing in an entertaining way how the behaviour of electrons in silicon was essential to the development of transistors and thus to computers, for example. How new medicines are developed by looking at the genome of malaria-carrying mosquitoes, and how cancer can be diagnosed and treated with particle beams. People will be amazed to discover how everyday products such as cosmetics are developed using advanced scientific instruments like synchrotron radiation sources. And how fashion and design will be soon revolutionised by a new fabric made of the same optical fibre used for advanced computer networks. The excitement of the Internet audience will be maintained thanks to live quiz shows for 15 to 19 year-old Europeans in the studio and online, with top-tech prizes to win. Sci-tech... couldn't be without it! will show the next generation of technology users how fundamental research is relevant to everyday life, and draw attention to the fascinating opportunities that lie ahead in the world of research and development. WATCH THE LIVE WEBCASTS and take part in the Online quizzes: * Thursday 7 November at 10:00 hrs CET in Italian * Thursday 7 November at 15:00 hrs CET in French * Friday 8 November at 15:00 hrs CET in English on http://www.cern.ch/sci-tech For more information on the webcasts and the Sci-tech... couldn't be without it! project, contact: paola.catapano@cern.ch Catch A Star! Go to the Catch a Star! 
educational programme Another interesting webcast can be followed on Friday, November 8, 2002, from 13:00 hrs CET (in English) at: http://www.eso.org/public/outreach/eduoff/cas/cas2002/cas-webcast.html Earlier this year, the European Southern Observatory (ESO) and the European Association for Astronomy Education (EAAE) invited all students in Europe's schools to the exciting Catch A Star! web-based educational programme with a competition. It takes place within the context of the EC-sponsored European Week of Science and Technology (EWST) - 2002 . See also ESO Press Release 08/02. This project revolves around a web-based competition and is centred on astronomy. It is specifically conceived to stimulate the interest of young people in various aspects of this well-known field of science, but will also be of interest to the broad public. Three hundred groups of up to four persons (e.g., three students and one teacher) have selected an astronomical object of their choice - a bright star, a distant galaxy, a beautiful comet, a planet or a moon in the solar system, or some other celestial body. They come from 25 countries. Until tomorrow, November 1, 2002, they have to deliver a comprehensive report about their chosen object. All reports have to conform with certain rules and are judged by a jury. Those fulfilling the criteria (explained at the Catch A Star! website) will participate in a lottery with exciting prizes, the first prize being a free trip in early 2003 for the members of the group to the ESO Paranal Observatory in Chile, the site of the ESO Very Large Telescope (VLT) . The lottery drawing will take place at the end of the European Week of Science and Technology, on November 8th, 2002, beginning at 13:00 hrs CET (12:00 UT) . This event will be broadcast by webcast and the outcome will be displayed via a dedicated webpage. All accepted reports (that fulfill the criteria) will be published on the Catch A Star! website soon thereafter.

  12. Grid Computing at GSI for ALICE and FAIR - present and future

    NASA Astrophysics Data System (ADS)

    Schwarz, Kilian; Uhlig, Florian; Karabowicz, Radoslaw; Montiel-Gonzalez, Almudena; Zynovyev, Mykhaylo; Preuss, Carsten

    2012-12-01

    The future FAIR experiments CBM and PANDA have computing requirements that fall in a category that could currently not be satisfied by a single computing centre. A larger, distributed computing infrastructure is needed to cope with the amount of data to be simulated and analysed. Since 2002, GSI has operated a tier2 centre for ALICE@CERN. The central component of the GSI computing facility, and hence the core of the ALICE tier2 centre, is an LSF/SGE batch farm, currently split into three subclusters with a total of 15000 CPU cores shared by the participating experiments, and accessible both locally and, soon, also completely via Grid. In terms of data storage, a 5.5 PB Lustre file system, directly accessible from all worker nodes, is maintained, as well as a 300 TB xrootd-based Grid storage element. Based on this existing expertise, and utilising ALICE's middleware ‘AliEn’, the Grid infrastructure for PANDA and CBM is being built. Besides a tier0 centre at GSI, the computing Grids of the two FAIR collaborations now encompass more than 17 sites in 11 countries and are constantly expanding. The operation of the distributed FAIR computing infrastructure benefits significantly from the experience gained with the ALICE tier2 centre. A close collaboration between ALICE Offline and FAIR provides mutual advantages. The use of a common Grid middleware as well as compatible simulation and analysis software frameworks ensures significant synergy effects.

  13. Mapping Remote and Multidisciplinary Learning Barriers: Lessons from "Challenge-Based Innovation" at CERN

    ERIC Educational Resources Information Center

    Jensen, Matilde Bisballe; Utriainen, Tuuli Maria; Steinert, Martin

    2018-01-01

    This paper presents the difficulties experienced by students participating in the multidisciplinary, remote-collaboration engineering design course Challenge-Based Innovation at CERN, with the aim of identifying learning barriers and improving future learning experiences. We statistically analyse the rated differences between distinct design…

  14. DG's New Year's presentation

    ScienceCinema

    Heuer, R.-D.

    2018-05-22

    CERN general staff meeting. Looking back at key messages: Highest priority: LHC physics in 2009; Increase diversity of the scientific program; Prepare for future projects; Establish open and direct communication; Prepare CERN towards a global laboratory; Increase consolidation efforts; Financial situation--tight; Knowledge and technology transfer--proactive; Contract policy and internal mobility--lessons learned.

  15. Knowledge and Technology: Sharing With Society

    NASA Astrophysics Data System (ADS)

    Benvenuti, Cristoforo; Sutton, Christine; Wenninger, Horst

    The following sections are included: * A Core Mission of CERN * Medical Accelerators: A Tool for Tumour Therapy * Medipix: The Image is the Message * Crystal Clear: From Higgs to PET * Solar Collectors: When Nothing is Better * The TARC Experiment at CERN: Modern Alchemy * A CLOUD Chamber with a Silvery Lining * References

  16. Contextualized Magnetism in Secondary School: Learning from the LHC (CERN)

    ERIC Educational Resources Information Center

    Cid, Ramon

    2005-01-01

    Physics teachers in secondary schools usually mention the world's largest particle physics laboratory--CERN (European Organization for Nuclear Research)--only because of the enormous size of the accelerators and detectors used there, the number of scientists involved in their activities and also the necessary international scientific…

  17. WorldWide Web: Hypertext from CERN.

    ERIC Educational Resources Information Center

    Nickerson, Gord

    1992-01-01

    Discussion of software tools for accessing information on the Internet focuses on the WorldWideWeb (WWW) system, which was developed at the European Particle Physics Laboratory (CERN) in Switzerland to build a worldwide network of hypertext links using available networking technology. Its potential for use with multimedia documents is also…

  18. Applying physical science techniques and CERN technology to an unsolved problem in radiation treatment for cancer: the multidisciplinary ‘VoxTox’ research programme

    PubMed Central

    Burnet, Neil G; Scaife, Jessica E; Romanchikova, Marina; Thomas, Simon J; Bates, Amy M; Wong, Emma; Noble, David J; Shelley, Leila EA; Bond, Simon J; Forman, Julia R; Hoole, Andrew CF; Barnett, Gillian C; Brochu, Frederic M; Simmons, Michael PD; Jena, Raj; Harrison, Karl; Yeap, Ping Lin; Drew, Amelia; Silvester, Emma; Elwood, Patrick; Pullen, Hannah; Sultana, Andrew; Seah, Shannon YK; Wilson, Megan Z; Russell, Simon G; Benson, Richard J; Rimmer, Yvonne L; Jefferies, Sarah J; Taku, Nicolette; Gurnell, Mark; Powlson, Andrew S; Schönlieb, Carola-Bibiane; Cai, Xiaohao; Sutcliffe, Michael PF; Parker, Michael A

    2017-01-01

    The VoxTox research programme has applied expertise from the physical sciences to the problem of radiotherapy toxicity, bringing together expertise from engineering, mathematics, high energy physics (including the Large Hadron Collider), medical physics and radiation oncology. In our initial cohort of 109 men treated with curative radiotherapy for prostate cancer, daily image guidance computed tomography (CT) scans have been used to calculate delivered dose to the rectum, as distinct from planned dose, using an automated approach. Clinical toxicity data have been collected, allowing us to address the hypothesis that delivered dose provides a better predictor of toxicity than planned dose. PMID:29177202

  19. Do regions of ALICE matter? Social relationships and data exchanges in the Grid

    NASA Astrophysics Data System (ADS)

    Widmer, E. D.; Carminati, F.; Grigoras, C.; Viry, G.; Galli Carminati, G.

    2012-06-01

    Following a previous publication [1], this study aims at investigating the impact of the regional affiliations of centres on the organisation of collaboration within the ALICE Distributed Computing infrastructure, based on social-network methods. A self-administered questionnaire was sent to all centre managers about support, e-mail interactions and desired collaborations in the infrastructure. Several additional measures stemming from technical observations, such as bandwidth, data transfers and Internet Round Trip Time (RTT), were also included. Information for 50 centres was considered (60% response rate). Empirical analysis shows that despite the centralisation on CERN, the network is highly organised by regions. The results are discussed in the light of policy and efficiency issues.
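
    As an illustration of the kind of social-network analysis the study describes (centrality and the regional clustering of ties), the sketch below uses the networkx library on an invented toy network. The centres, regions and ties are made up for the example and are not the survey data of the paper.

```python
"""Toy sketch of region-based network analysis of Grid-centre collaboration ties.

The centres, regional labels and edges below are invented for illustration;
they are not the questionnaire data analysed in the study.
"""
import networkx as nx

# Each node is a computing centre, annotated with its region (invented).
regions = {
    "CERN": "CERN", "GSI": "Europe", "CNAF": "Europe",
    "LBNL": "America", "ORNL": "America", "KISTI": "Asia",
}
edges = [  # declared support / e-mail interaction ties (invented)
    ("CERN", "GSI"), ("CERN", "CNAF"), ("GSI", "CNAF"),
    ("CERN", "LBNL"), ("LBNL", "ORNL"), ("CERN", "KISTI"),
]

G = nx.Graph()
G.add_nodes_from(regions)
G.add_edges_from(edges)

# Degree centrality highlights the centralisation on CERN.
centrality = nx.degree_centrality(G)
print("Most central centre:", max(centrality, key=centrality.get))

# Fraction of ties that stay within a region (a simple regional-clustering measure).
same_region = sum(regions[u] == regions[v] for u, v in G.edges())
print(f"Intra-region ties: {same_region}/{G.number_of_edges()}")
```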

  20. Do regions matter in ALICE?. Social relationships and data exchanges in the Grid

    NASA Astrophysics Data System (ADS)

    Widmer, E. D.; Viry, G.; Carminati, F.; Galli-Carminati, G.

    2012-02-01

    This study aims at investigating the impact of the regional affiliations of centres on the organisation of collaborations within the ALICE Distributed Computing infrastructure, based on social-network methods. A self-administered questionnaire was sent to all centre managers about support, e-mail interactions and desired collaborations in the infrastructure. Several additional measures stemming from technical observations, such as bandwidth, data transfers and Internet Round Trip Time (RTT), were also included. Information for 50 centres was considered (about 70% response rate). Empirical analysis shows that despite the centralisation on CERN, the network is highly organised by regions. The results are discussed in the light of policy and efficiency issues.

  1. Performance of the CMS Event Builder

    NASA Astrophysics Data System (ADS)

    Andre, J.-M.; Behrens, U.; Branson, J.; Brummer, P.; Chaze, O.; Cittolin, S.; Contescu, C.; Craigs, B. G.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Doualot, N.; Erhan, S.; Fulcher, J. F.; Gigi, D.; Gładki, M.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Janulis, M.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; O'Dell, V.; Orsini, L.; Paus, C.; Petrova, P.; Pieri, M.; Racz, A.; Reis, T.; Sakulin, H.; Schwick, C.; Simelevicius, D.; Zejdl, P.

    2017-10-01

    The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of O(100 GB/s) to the high-level trigger farm. The DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbit/s Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbit/s Infiniband FDR Clos network has been chosen for the event builder. This paper presents the implementation and performance of the event-building system.
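
    Event building, in the generic sense used above, means collecting the data fragments of one collision event from every readout source and assembling them into a single record once all fragments have arrived. The sketch below illustrates only that pattern in plain Python; the CMS event builder itself runs over an Infiniband Clos network with FPGA-implemented transport and is far more elaborate.

```python
"""Minimal illustration of the event-building pattern: an event is complete
once a fragment has arrived from every readout source.
Conceptual sketch only, not the CMS DAQ implementation."""
from collections import defaultdict

N_SOURCES = 4  # number of front-end readout units (illustrative)


class EventBuilder:
    def __init__(self, n_sources: int):
        self.n_sources = n_sources
        self.pending = defaultdict(dict)   # event_id -> {source_id: payload}

    def add_fragment(self, event_id: int, source_id: int, payload: bytes):
        """Store one fragment; return the built event once it is complete."""
        self.pending[event_id][source_id] = payload
        if len(self.pending[event_id]) == self.n_sources:
            fragments = self.pending.pop(event_id)
            # Concatenate fragments in source order to form the built event.
            return b"".join(fragments[s] for s in sorted(fragments))
        return None


builder = EventBuilder(N_SOURCES)
for src in range(N_SOURCES):
    event = builder.add_fragment(event_id=42, source_id=src, payload=bytes([src]))
print("Built event:", event)   # -> b'\x00\x01\x02\x03'
```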

  2. Applying physical science techniques and CERN technology to an unsolved problem in radiation treatment for cancer: the multidisciplinary 'VoxTox' research programme.

    PubMed

    Burnet, Neil G; Scaife, Jessica E; Romanchikova, Marina; Thomas, Simon J; Bates, Amy M; Wong, Emma; Noble, David J; Shelley, Leila Ea; Bond, Simon J; Forman, Julia R; Hoole, Andrew Cf; Barnett, Gillian C; Brochu, Frederic M; Simmons, Michael Pd; Jena, Raj; Harrison, Karl; Yeap, Ping Lin; Drew, Amelia; Silvester, Emma; Elwood, Patrick; Pullen, Hannah; Sultana, Andrew; Seah, Shannon Yk; Wilson, Megan Z; Russell, Simon G; Benson, Richard J; Rimmer, Yvonne L; Jefferies, Sarah J; Taku, Nicolette; Gurnell, Mark; Powlson, Andrew S; Schönlieb, Carola-Bibiane; Cai, Xiaohao; Sutcliffe, Michael Pf; Parker, Michael A

    2017-06-01

    The VoxTox research programme has applied expertise from the physical sciences to the problem of radiotherapy toxicity, bringing together expertise from engineering, mathematics, high energy physics (including the Large Hadron Collider), medical physics and radiation oncology. In our initial cohort of 109 men treated with curative radiotherapy for prostate cancer, daily image guidance computed tomography (CT) scans have been used to calculate delivered dose to the rectum, as distinct from planned dose, using an automated approach. Clinical toxicity data have been collected, allowing us to address the hypothesis that delivered dose provides a better predictor of toxicity than planned dose.

  3. ALICE detector in construction phase

    NASA Astrophysics Data System (ADS)

    Peryt, Wiktor S.

    2005-09-01

    The ALICE collaboration, which is preparing one of the biggest physics experiments in history, has entered the production phase of its detector. The experiment will start at the LHC at CERN in 2007/2008. In the meantime, about 1000 people from ~70 institutions are involved in this enterprise. The ALICE detector consists of many sub-detectors, designed and manufactured in many laboratories and commercial firms, located mainly in Europe but also in the U.S., India, China and Korea. To provide an appropriate working environment for such a specific task, strictly related to tests of particular components, measurements and assembly procedures, a Detector Construction Database system has been designed and implemented at CERN and at some of the labs involved in these activities. In this paper, special attention is paid to this topic, not only because of the innovative approach to the problem, but also because a group of young computer scientists (mainly students) from the Warsaw University of Technology, led by the author, has designed and developed the system for the whole experiment. Another very interesting subject is the Data Acquisition System, which has to fulfil very demanding requirements on speed and bandwidth. The required technical performance is achieved by using the PCI bus (previous high-energy physics experiments have usually used the VME standard) and optical links. A very general overview of the whole detector and of the physics goals of the ALICE experiment is also given.

  4. Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation

    NASA Astrophysics Data System (ADS)

    Anisenkov, A. V.

    2018-03-01

    In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the Worldwide LHC Computing Grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and to provide a high degree of accessibility (hundreds of petabytes). The paper considers the ATLAS Grid Information System (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centres, and to describe and store all possible parameters, control, configuration and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in developing a unified description of the computing resources provided by grid sites, supercomputer centres and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and to integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).
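
    As a purely hypothetical illustration of the kind of topology information such an information system aggregates, a site-and-queue record could be represented as below. The field names and values are invented for this sketch and do not reflect the actual AGIS data model.

```python
"""Hypothetical illustration of the kind of topology record an information
system like AGIS aggregates. Field names and values are invented and do not
reflect the real AGIS schema."""

site_record = {
    "site": "EXAMPLE-T2",              # grid site name (invented)
    "tier_level": 2,                    # position in the WLCG tier hierarchy
    "cloud": "EXAMPLE-CLOUD",
    "queues": [
        {
            "name": "EXAMPLE_T2_PROD",
            "resource_type": "grid",    # could also be 'hpc' or 'cloud'
            "corecount": 8,
            "maxmemory_mb": 16000,
            "status": "online",
        }
    ],
    "storage": {"endpoint": "root://se.example.org//data", "protocol": "xrootd"},
}


def online_queues(record: dict) -> list[str]:
    """Select the queues a workload manager could currently dispatch jobs to."""
    return [q["name"] for q in record["queues"] if q["status"] == "online"]


print(online_queues(site_record))
```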

  5. Preparation of a primary argon beam for the CERN fixed target physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Küchler, D., E-mail: detlef.kuchler@cern.ch; O’Neil, M.; Scrivens, R.

    2014-02-15

    The fixed target experiment NA61 in the North Area of the Super Proton Synchrotron is studying phase transitions in strongly interacting matter. Up to now they used the primary beams available from the CERN accelerator complex (protons and lead ions) or fragmented beams created from the primary lead ion beam. To explore a wider range of energies and densities a request was made to provide primary argon and xenon beams. This paper describes the results of the setting up and 10 week test run of the Ar11+ beam from the 14.5 GHz ECR ion source and the linear accelerator (Linac3) at CERN.

  6. Open Media Training Session

    ScienceCinema

    None

    2017-12-09

    Have you ever wondered how the media work and why some topics make it into the news and others don't? Would you like to know how to (and how not to) give an interview to a journalist? With the LHC preparing for first collisions at high energies, the world's media are again turning their attention to CERN. We're all likely to be called upon to explain what is happening at CERN to media, friends and neighbours. The seminar will be given by BBC television news journalists Liz Pike and Nadia Marchant, and will deal with the kind of questions we're likely to be confronted with through the restart period. Follow the webcast: http://webcast.cern.ch/

  7. CERN - Six Decades of Science, Innovation, Cooperation, and Inspiration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quigg, Chris

    The European Laboratory for Particle Physics, which straddles the Swiss-French border northwest of Geneva, celebrates its sixtieth birthday in 2014. CERN is the preeminent particle-physics institution in the world, currently emphasizing the study of collisions of protons and heavy nuclei at very high energies and the exploration of physics on the electroweak scale (energies where electromagnetism and the weak nuclear force merge). With brilliant accomplishments in research, innovation, and education, and a sustained history of cooperation among people from different countries and cultures, CERN ranks as one of the signal achievements of the postwar European Project. For physicists the world over, the laboratory is a source of pride and inspiration.

  8. More "Hands-On" Particle Physics: Learning with ATLAS at CERN

    ERIC Educational Resources Information Center

    Long, Lynne

    2011-01-01

    This article introduces teachers and students to a new portal of resources called Learning with ATLAS at CERN (http://learningwithatlas-portal.eu/), which has been developed by a European consortium of academic researchers and schools' liaison and outreach providers from countries across Europe. It includes the use of some of the mind-boggling…

  9. History of Cern

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2007-12-20

    Ceremony marking the publication of the first volume of the book on the history of CERN, attended by several people who played an important role in this European organisation, whose success is owed to the spirit of the founding members, a spirit that is and will remain essential.

  10. Big data analytics as a service infrastructure: challenges, desired properties and solutions

    NASA Astrophysics Data System (ADS)

    Martín-Márquez, Manuel

    2015-12-01

    CERN's accelerator complex generates a very large amount of data. A large volume of heterogeneous data is constantly generated from control equipment and monitoring agents. These data must be stored and analysed. Over the decades, CERN's research and engineering teams have applied different approaches, techniques and technologies for this purpose. This situation has limited the necessary collaboration and, more importantly, cross-domain data analytics, two factors that are essential to unlock hidden insights and correlations between the underlying processes, enabling better and more efficient day-to-day accelerator operations and more informed decisions. The proposed Big Data Analytics as a Service Infrastructure aims to: (1) integrate the existing developments; (2) centralise and standardise the complex data analytics needs of CERN's research and engineering community; (3) deliver real-time and batch data analytics and information discovery capabilities; and (4) provide transparent access and Extract, Transform and Load (ETL) mechanisms to the various mission-critical existing data repositories. This paper presents the desired objectives and properties resulting from the analysis of CERN's data analytics requirements; the main challenges (technological, collaborative and educational); and potential solutions.

  11. Preface

    NASA Astrophysics Data System (ADS)

    Jakovics, A.

    2007-06-01

    The International Scientific Colloquium "Modelling for Material Processing" took place last year on June 8-9. It was the fourth time the colloquium was organized. The first colloquium took place in 1999. All colloquia were organized by the University of Latvia together with Leibniz University of Hannover (Germany), which reflects a long-term tradition (since 1988) of scientific cooperation between researchers of these two universities in the field of electrothermal process modelling. During the last colloquium, scientific reports in the field of mathematical modelling in industrial electromagnetic applications for different materials (liquid metals, semiconductor technology, porous materials, melting of oxides and inductive heating) were presented. The colloquium was attended by 70 researchers from 10 countries. The contributions included about 30 oral presentations and 12 posters. The most illustrative presentations (oral and poster) in the field of MHD were selected for publication in a special issue of the international journal "Magnetohydrodynamics". Traditionally, many reports of the colloquium discuss the problems of MHD methods and devices applied to the metallurgical technologies and processes of semiconductor crystal growth. The new results illustrate the influence of combined electromagnetic fields on the hydrodynamics and heat/mass transfer in melts. The presented reports demonstrate that the models for simulation of turbulent liquid metal flows in melting furnaces, crystallization of alloys and single crystal growth in electromagnetic fields have become much more complex. The adequate description of the physical phenomena involved and the use of high-performance computers and clusters make it possible to reduce the number of experiments in industrial facilities. The use of software and computers for modelling technological and environmental processes has a very long history at the University of Latvia. The first modelling activities in the field of industrial MHD applications led to the establishment of the chair of Electrodynamics and Continuum Mechanics in 1970, the first head of which was Professor Juris Mikelsons. In the early 1990s, when all research institutions in our country underwent dramatic changes, not all research directions and institutions managed to adapt successfully to the new conditions. Fortunately, the people who were involved in computer modelling of physical processes were among the most successful. First, the existing and newly established contacts in Western Europe were used actively to reorient the applied research towards directions actively studied at the partner universities and companies of the University of Latvia. As a result, research groups involved in these activities successfully joined the international effort related to the application of computer models to industrial processes, and the scientific laboratory for Mathematical Modelling of Environmental and Technological Processes was founded in 1994. The second direction of modelling development was related to the application of computer-based models to environmental and technological processes (e.g., sediment transport in harbours, heat transfer in building constructions) that were important for the companies and state institutions in Latvia.
Currently, the field of engineering physics, the core of which is the computer modelling of technological and environmental processes, is one of the largest and most successfully developing areas of research and education at the Department of Physics of the University of Latvia, with very good prospects for the future development of new technologies and for knowledge transfer.

  12. TOWARDS A NOVEL MODULAR ARCHITECTURE FOR CERN RADIATION MONITORING.

    PubMed

    Boukabache, Hamza; Pangallo, Michel; Ducos, Gael; Cardines, Nicola; Bellotta, Antonio; Toner, Ciarán; Perrin, Daniel; Forkel-Wirth, Doris

    2017-04-01

    The European Organization for Nuclear Research (CERN) has the legal obligation to protect the public and the people working on its premises from any unjustified exposure to ionising radiation. In this context, radiation monitoring is one of the main concerns of the Radiation Protection Group. After 30 y of reliable service, the ARea CONtroller (ARCON) system is approaching the end of its lifecycle, which raises the need for new, more efficient radiation monitors with a high level of modularity to ensure better maintainability. Based on these two main principles, new detectors are currently being developed that will be capable of measuring very low dose rates down to 50 nSv h-1, whilst being able to measure radiation over an extensive range of 8 decades without any auto scaling. To reach these performances, CERN Radiation MOnitoring Electronics (CROME), the new generation of CERN radiation monitors, is based on a versatile architecture that includes new read-out electronics developed by the Instrumentation and Logistics section of the CERN Radiation Protection Group as well as a reconfigurable system on chip capable of performing complex processing calculations. Besides continuously measuring the ambient dose rate, CROME generates radiation alarms, provides interlock signals, drives alarm display units through a fieldbus and provides long-term, permanent and reliable data logging. The measurement tests performed during the first phase of the development show very promising results that pave the way to the second phase: the certification. © The Author 2016. Published by Oxford University Press.

  13. TOWARDS A NOVEL MODULAR ARCHITECTURE FOR CERN RADIATION MONITORING

    PubMed Central

    Boukabache, Hamza; Pangallo, Michel; Ducos, Gael; Cardines, Nicola; Bellotta, Antonio; Toner, Ciarán; Perrin, Daniel; Forkel-Wirth, Doris

    2017-01-01

    The European Organization for Nuclear Research (CERN) has the legal obligation to protect the public and the people working on its premises from any unjustified exposure to ionising radiation. In this context, radiation monitoring is one of the main concerns of the Radiation Protection Group. After 30 y of reliable service, the ARea CONtroller (ARCON) system is approaching the end of its lifecycle, which raises the need for new, more efficient radiation monitors with a high level of modularity to ensure better maintainability. Based on these two main principles, new detectors are currently being developed that will be capable of measuring very low dose rates down to 50 nSv h−1, whilst being able to measure radiation over an extensive range of 8 decades without any auto scaling. To reach these performances, CERN Radiation MOnitoring Electronics (CROME), the new generation of CERN radiation monitors, is based on a versatile architecture that includes new read-out electronics developed by the Instrumentation and Logistics section of the CERN Radiation Protection Group as well as a reconfigurable system on chip capable of performing complex processing calculations. Besides continuously measuring the ambient dose rate, CROME generates radiation alarms, provides interlock signals, drives alarm display units through a fieldbus and provides long-term, permanent and reliable data logging. The measurement tests performed during the first phase of the development show very promising results that pave the way to the second phase: the certification. PMID:27909154

  14. PGAS in-memory data processing for the Processing Unit of the Upgraded Electronics of the Tile Calorimeter of the ATLAS Detector

    NASA Astrophysics Data System (ADS)

    Ohene-Kwofie, Daniel; Otoo, Ekow

    2015-10-01

    The ATLAS detector, operated at the Large Hadron Collider (LHC), records proton-proton collisions at CERN every 50 ns, resulting in a sustained data flow of up to PB/s. The upgraded Tile Calorimeter of the ATLAS experiment will sustain about 5 PB/s of digital throughput. These massive data rates require extremely fast data capture and processing. Although there has been a steady increase in the processing speed of CPUs/GPGPUs assembled for high-performance computing, the rate of data input and output, even under parallel I/O, has not kept up with the general increase in computing speeds. The problem then is whether one can implement an I/O subsystem infrastructure capable of meeting the computational speeds of the advanced computing systems at the petascale and exascale level. We propose a system architecture that leverages the Partitioned Global Address Space (PGAS) model of computing to maintain an in-memory data store for the Processing Unit (PU) of the upgraded electronics of the Tile Calorimeter, which is proposed to be used as a high-throughput general-purpose co-processor to the sROD of the upgraded Tile Calorimeter. The physical memory of the PUs is aggregated into a large global logical address space using RDMA-capable interconnects such as PCI-Express to enhance data processing throughput.
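
    The PGAS idea underlying this proposal is that each processing unit owns one partition of a single logical address space, so a global index maps deterministically to an owning node and a local offset, and remote reads and writes become one-sided operations over the interconnect. The following single-process Python sketch illustrates only that addressing model; it is a conceptual toy, not the proposed PU implementation, which relies on RDMA-capable hardware.

```python
"""Conceptual, single-process illustration of a Partitioned Global Address
Space: one logical array whose storage is split across several 'nodes', with
each global index mapped to (owning node, local offset). The real design uses
RDMA hardware; this sketch only shows the addressing model."""

N_NODES = 4
PARTITION_SIZE = 1024          # elements owned by each node

# Each 'node' owns a local slice of the global array.
local_stores = [[0.0] * PARTITION_SIZE for _ in range(N_NODES)]


def locate(global_index: int) -> tuple[int, int]:
    """Map a global index to (owner node, offset inside its partition)."""
    return divmod(global_index, PARTITION_SIZE)


def put(global_index: int, value: float) -> None:
    node, offset = locate(global_index)
    local_stores[node][offset] = value     # a one-sided RDMA write in the real system


def get(global_index: int) -> float:
    node, offset = locate(global_index)
    return local_stores[node][offset]      # a one-sided RDMA read in the real system


put(2500, 3.14)                 # lands in node 2, offset 452
print(locate(2500), get(2500))
```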

  15. Got Questions About the Higgs Boson? Ask a Scientist

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hinchliffe, Ian

    Ask a scientist about the Higgs boson. There's a lot of buzz this week over new data from CERN's Large Hadron Collider (LHC) and the final data from Fermilab's Tevatron about the Higgs boson. It raises questions about what scientists have found and what still remains to be found -- and what it all means. Berkeley Lab's Ian Hinchliffe invites you to send in questions about the Higgs. He'll answer a few of your questions in a follow-up video later this week. Hinchliffe is a theoretical physicist who heads Berkeley Lab's sizable contingent with the ATLAS experiment at CERN. • Post your questions in the comment box • E-mail your questions to askascientist@lbl.gov • Tweet to @BerkeleyLab • Or post on our facebook page: facebook/berkeleylab Update on July 5: Ian responds to several of your questions in this video: http://youtu.be/1BkpD1IS62g. Update on 7/04: Here's CERN's press release from earlier today on the latest preliminary results in the search for the long sought Higgs particle: http://press.web.cern.ch/press/PressReleases/Releases2012/PR17.12E.htm. And here's a Q&A on what the news tells us: http://cdsweb.cern.ch/journal/CERNBulletin/2012/28/News%20Articles/1459460?ln=en. CERN will present the new LHC data at a seminar July 4th at 9:00 in the morning Geneva time (3:00 in the morning Eastern Daylight Time, midnight on the Pacific Coast), where the ATLAS collaboration and their rivals in the CMS experiment will announce their results. Tevatron results were announced by Fermilab on Monday morning. For more background on the LHC's search for the Higgs boson, visit http://newscenter.lbl.gov/feature-stories/2012/06/28/higgs-2012/.

  16. Got Questions About the Higgs Boson? Ask a Scientist

    ScienceCinema

    Hinchliffe, Ian

    2017-12-12

    Ask a scientist about the Higgs boson. There's a lot of buzz this week over new data from CERN's Large Hadron Collider (LHC) and the final data from Fermilab's Tevatron about the Higgs boson. It raises questions about what scientists have found and what still remains to be found -- and what it all means. Berkeley Lab's Ian Hinchliffe invites you to send in questions about the Higgs. He'll answer a few of your questions in a follow-up video later this week. Hinchliffe is a theoretical physicist who heads Berkeley Lab's sizable contingent with the ATLAS experiment at CERN. • Post your questions in the comment box • E-mail your questions to askascientist@lbl.gov • Tweet to @BerkeleyLab • Or post on our facebook page: facebook/berkeleylab Update on July 5: Ian responds to several of your questions in this video: http://youtu.be/1BkpD1IS62g. Update on 7/04: Here's CERN's press release from earlier today on the latest preliminary results in the search for the long sought Higgs particle: http://press.web.cern.ch/press/PressReleases/Releases2012/PR17.12E.htm. And here's a Q&A on what the news tells us: http://cdsweb.cern.ch/journal/CERNBulletin/2012/28/News%20Articles/1459460?ln=en. CERN will present the new LHC data at a seminar July 4th at 9:00 in the morning Geneva time (3:00 in the morning Eastern Daylight Time, midnight on the Pacific Coast), where the ATLAS collaboration and their rivals in the CMS experiment will announce their results. Tevatron results were announced by Fermilab on Monday morning. For more background on the LHC's search for the Higgs boson, visit http://newscenter.lbl.gov/feature-stories/2012/06/28/higgs-2012/.

  17. The diverse use of clouds by CMS

    DOE PAGES

    Andronis, Anastasios; Bauer, Daniela; Chaze, Olivier; ...

    2015-12-23

    The resources CMS is using are increasingly being offered as clouds. In Run 2 of the LHC, the majority of CMS CERN resources, both in Meyrin and at the Wigner Computing Centre, will be presented as cloud resources on which CMS will have to build its own infrastructure. This infrastructure will need to run all of the CMS workflows, including Tier 0, production and user analysis. In addition, the CMS High Level Trigger will provide a compute resource comparable in scale to the total offered by the CMS Tier 1 sites when it is not running as part of the trigger system. During these periods a cloud infrastructure will be overlaid on this resource, making it accessible for general CMS use. CMS is also starting to utilise cloud resources offered by individual institutes and is gaining experience to facilitate the use of opportunistically available cloud resources. Finally, we present a snapshot of this infrastructure and its operation at the time of the CHEP2015 conference.

  18. Software and languages for microprocessors

    NASA Astrophysics Data System (ADS)

    Williams, David O.

    1986-08-01

    This paper forms the basis for lectures given at the 6th Summer School on Computing Techniques in Physics, organised by the Computational Physics group of the European Physical Society and held at the Hotel Ski, Nové Město na Moravě, Czechoslovakia, on 17-26 September 1985. Various types of microprocessor applications are discussed, and the main emphasis of the paper is devoted to 'embedded' systems, where the software development is not carried out on the target microprocessor. Some information is provided on the general characteristics of microprocessor hardware. Various types of microprocessor operating systems are compared and contrasted. The selection of appropriate languages and software environments for use with microprocessors is discussed. Mechanisms for interworking between different languages, including reasonable error handling, are treated. The CERN-developed cross-software suite for the Motorola 68000 family is described. Some remarks are made concerning program tools applicable to microprocessors. PILS, a Portable Interactive Language System, which can be interpreted or compiled for a range of microprocessors, is described in some detail, and the implementation techniques are discussed.

  19. Exploratory Lattice QCD Study of the Rare Kaon Decay K+→π+νν̄.

    PubMed

    Bai, Ziyuan; Christ, Norman H; Feng, Xu; Lawson, Andrew; Portelli, Antonin; Sachrajda, Christopher T

    2017-06-23

    We report a first, complete lattice QCD calculation of the long-distance contribution to the K+→π+νν̄ decay within the standard model. This is a second-order weak process involving two four-Fermi operators that is highly sensitive to new physics and being studied by the NA62 experiment at CERN. While much of this decay comes from perturbative, short-distance physics, there is a long-distance part, perhaps as large as the planned experimental error, which involves nonperturbative phenomena. The calculation presented here, with unphysical quark masses, demonstrates that this contribution can be computed using lattice methods by overcoming three technical difficulties: (i) a short-distance divergence that results when the two weak operators approach each other, (ii) exponentially growing, unphysical terms that appear in Euclidean, second-order perturbation theory, and (iii) potentially large finite-volume effects. A follow-on calculation with physical quark masses and controlled systematic errors will be possible with the next generation of computers.

  20. Data Mining as a Service (DMaaS)

    NASA Astrophysics Data System (ADS)

    Tejedor, E.; Piparo, D.; Mascetti, L.; Moscicki, J.; Lamanna, M.; Mato, P.

    2016-10-01

    Data Mining as a Service (DMaaS) is a software and computing infrastructure that allows interactive mining of scientific data in the cloud. It allows users to run advanced data analyses by leveraging the widely adopted Jupyter notebook interface. Furthermore, the system makes it easier to share results and scientific code, access scientific software, produce tutorials and demonstrations as well as preserve the analyses of scientists. This paper describes how a first pilot of the DMaaS service is being deployed at CERN, starting from the notebook interface that has been fully integrated with the ROOT analysis framework, in order to provide all the tools for scientists to run their analyses. Additionally, we characterise the service backend, which combines a set of IT services such as user authentication, virtual computing infrastructure, mass storage, file synchronisation, development portals or batch systems. The added value acquired by the combination of the aforementioned categories of services is discussed, focusing on the opportunities offered by the CERNBox synchronisation service and its massive storage backend, EOS.
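    As an illustration of the workflow the service targets, the following minimal sketch assumes a Python kernel with PyROOT available, as in the notebook interface described above; the histogram contents are pseudo-random stand-ins for a user's own data, and the object names are arbitrary.

      import ROOT

      # Fill a histogram with pseudo-random data as a stand-in for a real ntuple.
      h = ROOT.TH1F("h_mass", "Invariant mass;m [GeV];Events", 100, 0.0, 10.0)
      rng = ROOT.TRandom3(42)
      for _ in range(10000):
          h.Fill(rng.Gaus(3.1, 0.3))

      c = ROOT.TCanvas("c")
      h.Draw()
      c.Draw()   # in a notebook the canvas is rendered in the output cell

    Because the notebook runs against the service backend, the same cell can read files from the synchronised CERNBox/EOS storage instead of generating data locally.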

  1. Exploratory Lattice QCD Study of the Rare Kaon Decay K+→π+νν̄

    NASA Astrophysics Data System (ADS)

    Bai, Ziyuan; Christ, Norman H.; Feng, Xu; Lawson, Andrew; Portelli, Antonin; Sachrajda, Christopher T.; RBC-UKQCD Collaboration

    2017-06-01

    We report a first, complete lattice QCD calculation of the long-distance contribution to the K+→π+νν̄ decay within the standard model. This is a second-order weak process involving two four-Fermi operators that is highly sensitive to new physics and being studied by the NA62 experiment at CERN. While much of this decay comes from perturbative, short-distance physics, there is a long-distance part, perhaps as large as the planned experimental error, which involves nonperturbative phenomena. The calculation presented here, with unphysical quark masses, demonstrates that this contribution can be computed using lattice methods by overcoming three technical difficulties: (i) a short-distance divergence that results when the two weak operators approach each other, (ii) exponentially growing, unphysical terms that appear in Euclidean, second-order perturbation theory, and (iii) potentially large finite-volume effects. A follow-on calculation with physical quark masses and controlled systematic errors will be possible with the next generation of computers.

  2. GLISSANDO: GLauber Initial-State Simulation AND mOre…

    NASA Astrophysics Data System (ADS)

    Broniowski, Wojciech; Rybczyński, Maciej; Bożek, Piotr

    2009-01-01

    We present a Monte Carlo generator for a variety of Glauber-like models (the wounded-nucleon model, binary collisions model, mixed model, model with hot spots). These models describe the early stages of relativistic heavy-ion collisions, in particular the spatial distribution of the transverse energy deposition which ultimately leads to production of particles from the interaction region. The original geometric distribution of sources in the transverse plane can be superimposed with a statistical distribution simulating the dispersion in the generated transverse energy in each individual collision. The program generates inter alia the fixed-axes (standard) and variable-axes (participant) two-dimensional profiles of the density of sources in the transverse plane and their azimuthal Fourier components. These profiles can be used in further analysis of physical phenomena, such as the jet quenching, event-by-event hydrodynamics, or analysis of the elliptic flow and its fluctuations. Characteristics of the event (multiplicities, eccentricities, Fourier coefficients, etc.) are stored in a ROOT file and can be analyzed off-line. In particular, event-by-event studies can be carried out in a simple way. A number of ROOT scripts is provided for that purpose. Supplied variants of the code can also be used for the proton-nucleus and deuteron-nucleus collisions. Program summary: Program title: GLISSANDO Catalogue identifier: AEBS_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 4452 No. of bytes in distributed program, including test data, etc.: 34 766 Distribution format: tar.gz Programming language: C++ Computer: any computer with a C++ compiler and the ROOT environment [R. Brun, et al., ROOT Users Guide 5.16, CERN, 2007, http://root.cern.ch]
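    To make the wounded-nucleon picture concrete, here is a deliberately simplified Monte Carlo sketch in Python (not the GLISSANDO code itself, which is distributed as C++/ROOT): nucleon positions are sampled from a Woods-Saxon density, two nucleons are wounded when their transverse distance squared is below sigma_NN/pi, and the participant eccentricity is computed from the wounded nucleons. All parameter values are illustrative.

      import numpy as np

      def sample_nucleus(A=208, R=6.62, a=0.546, rng=np.random):
          """Sample A nucleon positions (fm) from a Woods-Saxon density by rejection."""
          pos = []
          while len(pos) < A:
              r = rng.uniform(0.0, 2.0 * R)
              if rng.uniform() < r**2 / (1.0 + np.exp((r - R) / a)) / R**2:
                  cos_t = rng.uniform(-1.0, 1.0)
                  phi = rng.uniform(0.0, 2.0 * np.pi)
                  sin_t = np.sqrt(1.0 - cos_t**2)
                  pos.append([r * sin_t * np.cos(phi), r * sin_t * np.sin(phi), r * cos_t])
          return np.array(pos)

      def wounded_nucleons(b=7.0, sigma_nn=6.4, rng=np.random):
          """Transverse positions of wounded nucleons for impact parameter b (fm)."""
          d2_max = sigma_nn / np.pi                      # interaction distance squared (fm^2)
          A = sample_nucleus(rng=rng) + [b / 2.0, 0.0, 0.0]
          B = sample_nucleus(rng=rng) - [b / 2.0, 0.0, 0.0]
          dx = A[:, None, 0] - B[None, :, 0]
          dy = A[:, None, 1] - B[None, :, 1]
          hit = dx**2 + dy**2 < d2_max
          return np.vstack([A[hit.any(axis=1), :2], B[hit.any(axis=0), :2]])

      w = wounded_nucleons()
      x, y = w[:, 0] - w[:, 0].mean(), w[:, 1] - w[:, 1].mean()
      eps2 = np.hypot((y**2 - x**2).mean(), (2 * x * y).mean()) / (x**2 + y**2).mean()
      print(f"N_wounded = {len(w)}, participant eccentricity = {eps2:.3f}")

    GLISSANDO adds, on top of this basic geometry, the statistical smearing of the deposited transverse energy, the mixed and hot-spot variants, and the ROOT-based event storage described above.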

  3. Of people, particles and prejudice

    NASA Astrophysics Data System (ADS)

    Jackson, Penny; Greene, Anne; Mears, Matt; Spacecadet1; Green, Christian; Hunt, Devin J.; Berglyd Olsen, Veronica K.; Ilya, Komarov; Pierpont, Elaine; Gillman, Matthew

    2016-05-01

    In reply to Louise Mayor's feature article “Where people and particles collide”, about the experiences of researchers at CERN who are lesbian, gay, bisexual or transgender (LGBT), efforts to make LGBT CERN an officially recognized club, and incidents where posters advertising the club have been torn down or defaced (March pp31-36, http://ow.ly/YVP2Z).

  4. The Secret Chambers in the Chephren Pyramid

    ERIC Educational Resources Information Center

    Gutowski, Bartosz; Józwiak, Witold; Joos, Markus; Kempa, Janusz; Komorowska, Kamila; Krakowski, Kamil; Pijus, Ewa; Szymczak, Kamil; Trojanowska, Malgorzata

    2018-01-01

    In 2016, we (seven high school students from a school in Plock, Poland) participated in the CERN Beamline for Schools competition. Together with our team coach, Mr. Janusz Kempa, we submitted a proposal to CERN that was selected as one of two winning proposals that year. This paper describes our experiment from the early days of brainstorming to…

  5. Lead Ions and Coulomb's Law at the LHC (CERN)

    ERIC Educational Resources Information Center

    Cid-Vidal, Xabier; Cid, Ramon

    2018-01-01

    Although for most of the time the Large Hadron Collider (LHC) at CERN collides protons, for around one month every year lead ions are collided, to expand the diversity of the LHC research programme. Furthermore, in an effort not originally foreseen, proton-lead collisions are also taking place, with results of high interest to the physics…

  6. From strangeness enhancement to quark-gluon plasma discovery

    NASA Astrophysics Data System (ADS)

    Koch, Peter; Müller, Berndt; Rafelski, Johann

    2017-11-01

    This is a short survey of signatures and characteristics of the quark-gluon plasma in the light of experimental results that have been obtained over the past three decades. In particular, we present an in-depth discussion of the strangeness observable, including a chronology of the experimental effort to detect QGP at CERN-SPS, BNL-RHIC, and CERN-LHC.

  7. Ceremony 25th birthday Cern

    ScienceCinema

    None

    2018-05-18

    Celebration of CERN's 25th birthday with a speech by L. Van Hove and J.B. Adams, musical interludes by Ms. Mey and her colleagues (starting with Beethoven). The general managers then proceed with the presentation of souvenirs to members of the personnel who have 25 years of service in the organization. A gesture of recognition is also given to Zwerner.

  8. Committees

    NASA Astrophysics Data System (ADS)

    2004-10-01

    Fritz Caspers (CERN, Switzerland), Michel Chanel (CERN, Switzerland), Håkan Danared (MSL, Sweden), Bernhard Franzke (GSI, Germany), Manfred Grieser (MPI für Kernphysik, Germany), Dieter Habs (LMU München, Germany), Jeffrey Hangst (University of Aarhus, Denmark), Takeshi Katayama (RIKEN/Univ. Tokyo, Japan), H.-Jürgen Kluge (GSI, Germany), Shyh-Yuan Lee (Indiana University, USA), Rudolf Maier (FZ Jülich, Germany), John Marriner (FNAL, USA), Igor Meshkov (JINR, Russia), Dieter Möhl (CERN, Switzerland), Vasily Parkhomchuk (BINP, Russia), Robert Pollock (Indiana University), Dieter Prasuhn (FZ Jülich, Germany), Dag Reistad (TSL, Sweden), John Schiffer (ANL, USA), Andrew Sessler (LBNL, USA), Alexander Skrinsky (BINP, Russia), Markus Steck (GSI, Germany), Jie Wei (BNL, USA), Andreas Wolf (MPI für Kernphysik, Germany), Hongwei Zhao (IMP, People's Rep. of China).

  9. Across Europe to CERN: Taking students on the ultimate physics experience

    NASA Astrophysics Data System (ADS)

    Wheeler, Sam

    2018-05-01

    In 2013, I was an Einstein Fellow with the U.S. Department of Energy and I was asked by a colleague, working in a senator's office, if I would join him in a meeting with a physicist to "translate" the science into something more understandable. That meeting turned out to be a wonderful opportunity I would never have otherwise had. During the meeting I met Michael Tuts, a physicist who was working on project ATLAS at CERN. Afterwards, I walked with him out of the Senate office building to Union Station and, in parting, he gave me his card and told me that if I were in Geneva he could help me get a tour of CERN and the LHC.

  10. User and group storage management at the CMS CERN T2 centre

    NASA Astrophysics Data System (ADS)

    Cerminara, G.; Franzoni, G.; Pfeiffer, A.

    2015-12-01

    A wide range of detector commissioning, calibration and data analysis tasks is carried out by CMS using dedicated storage resources available at the CMS CERN Tier-2 centre. Relying on the functionalities of the EOS disk-only storage technology, the optimal exploitation of the CMS user/group resources has required the introduction of policies for data access management, data protection, cleanup campaigns based on access pattern, and long-term tape archival. The resource management has been organised around the definition of working groups and the delegation of each group's composition to an identified responsible person. In this paper we illustrate the user/group storage management, and the development and operational experience at the CMS CERN Tier-2 centre in the 2012-2015 period.
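    The access-pattern-driven cleanup campaigns mentioned above can be pictured with a small, purely illustrative sketch (not the CMS tooling): it walks a namespace dump, flags files that have not been read for a configurable number of days, and groups the candidates by working group so that each responsible person can review them. The dump format assumed here is a hypothetical CSV with path, group, size and last-access-time columns.

      import csv
      import time
      from collections import defaultdict

      STALE_DAYS = 180        # illustrative threshold for "not accessed recently"
      NOW = time.time()

      def stale_candidates(dump_csv):
          """Group stale files by working group from a namespace dump (hypothetical CSV layout)."""
          candidates = defaultdict(list)
          with open(dump_csv, newline="") as fh:
              for row in csv.DictReader(fh):
                  idle_days = (NOW - float(row["atime"])) / 86400.0
                  if idle_days > STALE_DAYS:
                      candidates[row["group"]].append((row["path"], int(row["size_bytes"])))
          return candidates

      for group, files in stale_candidates("namespace_dump.csv").items():
          total_gb = sum(size for _, size in files) / 1e9
          print(f"{group}: {len(files)} files, {total_gb:.1f} GB eligible for cleanup review")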

  11. [CERN-MEDICIS (Medical Isotopes Collected from ISOLDE): a new facility].

    PubMed

    Viertl, David; Buchegger, Franz; Prior, John O; Forni, Michel; Morel, Philippe; Ratib, Osman; Bühler, Léo H; Stora, Thierry

    2015-06-17

    CERN-MEDICIS is a facility dedicated to research and development in life science and medical applications. The research platform was inaugurated in October 2014 and will produce an increasing range of innovative isotopes using the proton beam of ISOLDE for fundamental studies in cancer research, for new imaging and therapy protocols in cell and animal models and for preclinical trials, possibly extended to specific early-phase clinical studies (phase 0) up to phase I trials. CERN, the University Hospital of Geneva (HUG), the University Hospital of Lausanne (CHUV) and the Swiss Institute for Experimental Cancer Research (ISREC) at the Swiss Federal Institute of Technology in Lausanne (EPFL), which currently support the project, will benefit from the initial production, which will then be extended to other centers.

  12. Estimation of land photosynthetically active radiation in clear sky using MODIS atmosphere and land products

    NASA Astrophysics Data System (ADS)

    Xie, Xiaoping; Gao, Wei; Gao, Zhiqiang

    2008-08-01

    Photosynthetically active radiation (PAR) is an essential parameter in vegetation growth models and soil carbon sequestration models. A method is presented with which instantaneous PAR can be calculated with high accuracy from Moderate Resolution Imaging Spectroradiometer (MODIS) atmosphere and land products. The method is based on a simplification of the general radiative transfer equation, which considers five major processes attenuating solar radiation: Rayleigh scattering, absorption by ozone, absorption by water vapor, aerosol scattering, and multiple reflection between the surface and the atmosphere. A comparison of 108 retrieved results with field-measured PAR at the Yucheng station of the Chinese Ecosystem Research Network (CERN) in 2006 gives an r-square of 0.855, indicating that the computed results reproduce the actual PAR well.
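    The clear-sky scheme described above amounts to multiplying the extraterrestrial PAR by one transmittance per attenuation process. A minimal sketch of that bookkeeping is shown below; the numbers and the simple multiple-reflection factor are placeholders, not the paper's MODIS-derived parameterisations.

      import math

      PAR_TOA = 600.0   # extraterrestrial PAR, W m^-2 (illustrative value)

      def clear_sky_par(sza_deg, t_rayleigh, t_ozone, t_water, t_aerosol,
                        albedo_surface=0.2, albedo_atmosphere=0.1):
          """Instantaneous clear-sky PAR as a product of process transmittances.

          sza_deg is the solar zenith angle in degrees; the t_* arguments are
          transmittances (0..1) that the paper derives from MODIS atmosphere
          products.  The final factor crudely accounts for multiple reflection
          between the surface and the atmosphere.
          """
          mu0 = math.cos(math.radians(sza_deg))
          beam = PAR_TOA * mu0 * t_rayleigh * t_ozone * t_water * t_aerosol
          return beam / (1.0 - albedo_surface * albedo_atmosphere)

      print(f"PAR = {clear_sky_par(30.0, 0.93, 0.98, 0.95, 0.90):.1f} W m^-2")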

  13. LHCNet: Wide Area Networking and Collaborative Systems for HEP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, H. B.

    2007-08-20

    This proposal presents the status and progress in 2006-7, and the technical and financial plans for 2008-2010 for the US LHCNet transatlantic network supporting U.S. participation in the LHC physics program. US LHCNet provides transatlantic connections of the Tier1 computing facilities at Fermilab and Brookhaven with the Tier0 and Tier1 facilities at CERN as well as Tier1s elsewhere in Europe and Asia. Together with ESnet, Internet2, the GEANT pan-European network, and NSF's UltraLight project, US LHCNet also supports connections between the Tier2 centers (where most of the analysis of the data will take place, starting this year) and the Tier1s as needed.

  14. Asymmetric B-factory note

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calderon, M.

    Three main issues gave purpose to our visit to CERN, ESRF and DESY: to assess the current thinking at CERN on whether Eta, the gas desorption coefficient, would continue to decrease with continued beam cleaning; to determine if the time between NEG reconditioning could be extended; and to acquire knowledge of the basic fabrication processes and techniques for producing beam vacuum chambers of copper.

  15. The Proton Synchrotron (PS): At the Core of the CERN Accelerators

    NASA Astrophysics Data System (ADS)

    Cundy, Donald; Gilardoni, Simone

    The following sections are included: * Introduction * Extraction: Getting the Beam to Leave the Accelerator * Acceleration and Bunch Gymnastics * Boosting PS Beam Intensity * Capacitive Energy Storage Replaces Flywheel * Taking the Neutrinos by the Horns * OMEGA: Towards the Electronic Bubble Chamber * ISOLDE: Targeting a New Era in Nuclear Physics * The CERN n_TOF Facility: Catching Neutrons on the Fly * References

  16. Taking Energy to the Physics Classroom from the Large Hadron Collider at CERN

    ERIC Educational Resources Information Center

    Cid, Xabier; Cid, Ramon

    2009-01-01

    In 2008, the greatest experiment in history began. When in full operation, the Large Hadron Collider (LHC) at CERN will generate the greatest amount of information that has ever been produced in an experiment before. It will also reveal some of the most fundamental secrets of nature. Despite the enormous amount of information available on this…

  17. The Higgs Boson: Is the End in Sight?

    ERIC Educational Resources Information Center

    Lincoln, Don

    2012-01-01

    This summer, perhaps while you were lounging around the pool in the blistering heat, the blogosphere was buzzing about data taken at the Large Hadron Collider at CERN. The buzz reached a crescendo in the first week of July when both Fermilab and CERN announced the results of their searches for the Higgs boson. Hard data confronted a theory nearly…

  18. The kaon identification system in the NA62 experiment at CERN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, A.

    2015-07-01

    The main goal of the NA62 experiment at CERN is to measure the branching ratio of the ultra-rare K{sup +} → π{sup +} ν ν-bar decay with 10% accuracy. NA62 will use a 750 MHz high-energy un-separated charged hadron beam, with kaons corresponding to ∼6% of the beam, and a kaon decay-in-flight technique. The positive identification of kaons is performed with a differential Cherenkov detector (CEDAR), filled with nitrogen gas and placed in the incoming beam. To withstand the kaon rate (45 MHz average) and meet the performance required in NA62, the Cherenkov detector has been upgraded (KTAG) with new photon detectors, readout, mechanics and cooling systems. The KTAG provides fast identification of kaons with an efficiency of at least 95% and precise time information with a resolution below 100 ps. A half-equipped KTAG detector was commissioned during a technical run at CERN in 2012, while the fully equipped detector, its readout and front-end were commissioned during a pilot run at CERN in October 2014. The measured time resolution and efficiency are within the required performance. (authors)

  19. COSMO 09

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    This year's edition of the annual Cosmo International Conference on Particle Physics and Cosmology -- Cosmo09 -- will be hosted by the CERN Theory Group from Monday September 7 till Friday September 11, 2009. The conference will take place at CERN, Geneva (Switzerland). The Cosmo series is one of the major venues of interaction between cosmologists and particle physicists. In the exciting LHC era, the Conference will be devoted to the modern interfaces between Fundamental and Phenomenological Particle Physics and Physical Cosmology and Astronomy. The Conference will be followed by the CERN TH Institute Particle Cosmology which will take place from Monday September 14 till Friday September 18, 2009. The CERN-TH Institutes are visitor programs intended to bring together scientists with similar interests and to promote scientific collaborations. If you wish to participate, please register on the Institute web page. Link to last editions: COSMO 07 (U. of Sussex), COSMO 08 (U. of Wisconsin) List of plenary speakers: Gianfranco Bertone, Pierre Binetruy, Francois Bouchet, Juerg Diemand, Jonathan Feng, Gregory Gabadadze, Francis Halzen, Steen Hannestad, Will Kinney, Johannes Knapp, Hiranya Peiris, Will Percival, Syksy Rasanen, Alexandre Refregier, Pierre Salati, Roman Scoccimarro, Michael Schubnell, Christian Spiering, Neil Spooner, Andrew Tolley, Matteo Viel. The plenary program is available on-line.

  20. Lecture archiving on a larger scale at the University of Michigan and CERN

    NASA Astrophysics Data System (ADS)

    Herr, Jeremy; Lougheed, Robert; Neal, Homer A.

    2010-04-01

    The ATLAS Collaboratory Project at the University of Michigan has been a leader in the area of collaborative tools since 1999. Its activities include the development of standards, software and hardware tools for lecture archiving, and making recommendations for videoconferencing and remote teaching facilities. Starting in 2006 our group became involved in classroom recordings, and in early 2008 we spawned CARMA, a University-wide recording service. This service uses a new portable recording system that we developed. Capture, archiving and dissemination of rich multimedia content from lectures, tutorials and classes are increasingly widespread activities among universities and research institutes. A growing array of related commercial and open source technologies is becoming available, with several new products introduced in the last couple of years. As the result of a new close partnership between U-M and CERN IT, a market survey of these products was conducted and a summary of the results is presented here. It is informing an ambitious effort in 2009 to equip many CERN rooms with automated lecture archiving systems, on a much larger scale than before. This new technology is being integrated with CERN's existing webcast, CDS, and Indico applications.

  1. COSMO 09

    ScienceCinema

    None

    2018-02-13

    This year's edition of the annual Cosmo International Conference on Particle Physics and Cosmology -- Cosmo09 -- will be hosted by the CERN Theory Group from Monday September 7 till Friday September 11, 2009. The conference will take place at CERN, Geneva (Switzerland). The Cosmo series is one of the major venues of interaction between cosmologists and particle physicists. In the exciting LHC era, the Conference will be devoted to the modern interfaces between Fundamental and Phenomenological Particle Physics and Physical Cosmology and Astronomy. The Conference will be followed by the CERN TH Institute Particle Cosmology which will take place from Monday September 14 till Friday September 18, 2009. The CERN-TH Institutes are visitor programs intended to bring together scientists with similar interests and to promote scientific collaborations. If you wish to participate, please register on the Institute web page. Link to last editions: COSMO 07 (U. of Sussex), COSMO 08 (U. of Wisconsin) List of plenary speakers: Gianfranco Bertone, Pierre Binetruy, Francois Bouchet, Juerg Diemand, Jonathan Feng, Gregory Gabadadze, Francis Halzen, Steen Hannestad, Will Kinney, Johannes Knapp, Hiranya Peiris, Will Percival, Syksy Rasanen, Alexandre Refregier, Pierre Salati, Roman Scoccimarro, Michael Schubnell, Christian Spiering, Neil Spooner, Andrew Tolley, Matteo Viel. The plenary program is available on-line.

  2. DataBase on Demand

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.

    2012-12-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the centralised Oracle-based database services. The Database on Demand (DBoD) empowers the user to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g. presently the open community version of MySQL and a single-instance Oracle database server. This article describes the technology approach taken to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.

  3. 65th birthday Jack Steinberger

    ScienceCinema

    None

    2017-12-09

    Laudation for Jack Steinberger, born on 25 May 1921, on the occasion of his 65th birthday and his official retirement, in recognition of his invaluable collaboration with CERN. Nevertheless, his main activity will continue as before with his research at CERN. Several speakers take the floor (e.g. E. Picasso) to congratulate him and pay him tribute.

  4. History of Cern

    ScienceCinema

    None

    2017-12-09

    Ceremony on the occasion of the publication of the first volume of the book on the history of CERN, attended by several people who played an important role in this European organisation, whose success owes much to the spirit of the founding members, a spirit that is and will remain essential.

  5. Investigating the Inverse Square Law with the Timepix Hybrid Silicon Pixel Detector: A CERN [at] School Demonstration Experiment

    ERIC Educational Resources Information Center

    Whyntie, T.; Parker, B.

    2013-01-01

    The Timepix hybrid silicon pixel detector has been used to investigate the inverse square law of radiation from a point source as a demonstration of the CERN [at] school detector kit capabilities. The experiment described uses a Timepix detector to detect the gamma rays emitted by an [superscript 241]Am radioactive source at a number of different…

  6. DIRAC in Large Particle Physics Experiments

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Tsaregorodtsev, A.; Arrabito, L.; Sailer, A.; Hara, T.; Zhang, X.; Consortium, DIRAC

    2017-10-01

    The DIRAC project is developing interware to build and operate distributed computing systems. It provides a development framework and a rich set of services for both Workload and Data Management tasks of large scientific communities. A number of High Energy Physics and Astrophysics collaborations have adopted DIRAC as the base for their computing models. DIRAC was initially developed for the LHCb experiment at LHC, CERN. Later, the Belle II, BES III and CTA experiments as well as the linear collider detector collaborations started using DIRAC for their computing systems. Some of the experiments built their DIRAC-based systems from scratch, others migrated from previous solutions, ad-hoc or based on different middlewares. Adaptation of DIRAC for a particular experiment was enabled through the creation of extensions to meet their specific requirements. Each experiment has a heterogeneous set of computing and storage resources at its disposal that were aggregated through DIRAC into a coherent pool. Users from different experiments can interact with the system in different ways depending on their specific tasks, expertise level and previous experience, using command line tools, Python APIs or Web Portals. In this contribution we will summarize the experience of using DIRAC in particle physics collaborations. The problems of migration to DIRAC from previous systems and their solutions will be presented. An overview of specific DIRAC extensions will be given. We hope that this review will be useful for experiments considering an update, or for those designing their computing models.
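    For a flavour of the Python API mentioned above, a minimal job-submission sketch is shown below. It follows the pattern of the publicly documented DIRAC Job and Dirac interfaces; treat the exact module paths and method signatures as assumptions to be checked against the DIRAC release actually in use.

      # Minimal DIRAC job submission sketch (API usage assumed from the public DIRAC documentation).
      from DIRAC.Core.Base import Script
      Script.parseCommandLine()                     # initialise the DIRAC client environment

      from DIRAC.Interfaces.API.Dirac import Dirac
      from DIRAC.Interfaces.API.Job import Job

      job = Job()
      job.setName("hello_dirac")
      job.setExecutable("/bin/echo", arguments="Hello from DIRAC")

      result = Dirac().submitJob(job)
      if result["OK"]:
          print("Submitted job with ID", result["Value"])
      else:
          print("Submission failed:", result["Message"])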

  7. Radiation protection challenges in the management of radioactive waste from high-energy accelerators.

    PubMed

    Ulrici, Luisa; Algoet, Yvon; Bruno, Luca; Magistris, Matteo

    2015-04-01

    The European Laboratory for Particle Physics (CERN) has operated high-energy accelerators for fundamental physics research for nearly 60 y. The side-product of this activity is the radioactive waste, which is mainly generated as a result of preventive and corrective maintenance, upgrading activities and the dismantling of experiments or accelerator facilities. Prior to treatment and disposal, it is common practice to temporarily store radioactive waste on CERN's premises and it is a legal requirement that these storage facilities are safe and secure. Waste treatment typically includes sorting, segregation, volume and size reduction and packaging, which will depend on the type of component, its chemical composition, residual activity and possible surface contamination. At CERN, these activities are performed in a dedicated waste treatment centre under the supervision of the Radiation Protection Group. This paper gives an overview of the radiation protection challenges in the conception of a temporary storage and treatment centre for radioactive waste in an accelerator facility, based on the experience gained at CERN. The CERN approach consists of the classification of waste items into 'families' with similar radiological and physical-chemical properties. This classification allows the use of specific, family-dependent techniques for radiological characterisation and treatment, which are simultaneously efficient and compliant with best practices in radiation protection. The storage was planned on the basis of radiological and other possible hazards such as toxicity, pollution and fire load. Examples are given of technical choices for the treatment and radiological characterisation of selected waste families, which could be of interest to other accelerator facilities. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.

    2014-06-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and with minimal downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.

  9. Air liquide 1.8 K refrigeration units for CERN LHC project

    NASA Astrophysics Data System (ADS)

    Hilbert, Benoît; Gistau-Baguer, Guy M.; Caillaud, Aurélie

    2002-05-01

    The Large Hadron Collider (LHC) will be CERN's next research instrument for high energy physics. This 27 km long circular accelerator will make intensive use of superconducting magnets, operated below 2.0 K. It will thus require high capacity refrigeration below 2.0 K [1, 2]. Coupled to a refrigerator providing 18 kW equivalent at 4.5 K [3], these systems will be able to absorb a cryogenic power of 2.4 kW at 1.8 K in nominal conditions. Air Liquide has designed one pre-series Cold Compressor System (CCS) for CERN, preceding three more (among eight in total located around the machine). These systems, making use of cryogenic centrifugal compressors in a series arrangement coupled to room temperature screw compressors, are presented. Key component characteristics are given.

  10. Upgrade of the cryogenic CERN RF test facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pirotte, O.; Benda, V.; Brunner, O.

    2014-01-29

    With the large number of superconducting radiofrequency (RF) cryomodules to be tested for the former LEP and the present LHC accelerator, an RF test facility was erected in the early 1990s in the largest cryogenic test facility at CERN, located at Point 18. This facility consisted of four vertical test stands for single cavities and originally one and then two horizontal test benches for RF cryomodules operating at 4.5 K in saturated helium. CERN is presently working on the upgrade of its accelerator infrastructure, which requires new superconducting cavities operating below 2 K in saturated superfluid helium. Consequently, the RF test facility has been renewed in order to allow efficient cavity and cryomodule tests in superfluid helium and to improve its thermal performance. The new RF test facility is described and its performance is presented.

  11. Wolfgang Kummer at CERN

    NASA Astrophysics Data System (ADS)

    Schopper, Herwig

    Wolfgang Kummer was not only a great theorist but also a man with a noble spirit and extensive education, based on a fascinating long-term Austrian cultural tradition. As an experimentalist I am not sufficiently knowledgeable to evaluate his contributions to theoretical physics - this will certainly be done by more competent scientists. Nevertheless I admired him for not only being attached to fundamental and abstract problems like quantum field theory, quantum gravity or black holes, but for his interest in down-to-earth questions like electron-proton scattering or the toponium mass. I got to know Wolfgang Kummer very well and came to appreciate his human qualities during his long attachment to CERN, in particular when he served as president of the CERN Council, the highest decision-making authority of this international research centre, from 1985 to 1987, a period falling within my term as Director-General…

  12. Database on Demand: insight how to build your own DBaaS

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio

    2015-12-01

    At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. The Database on Demand empowers the user to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently, offerings from three major RDBMS (relational database management system) vendors are available. In this article we present the current status of the service after almost three years of operation, some insight into our redesigned software engineering, and the near-future evolution of the service.

  13. The beam test of muon detector parameters for the SHiP experiment at CERN

    NASA Astrophysics Data System (ADS)

    Likhacheva, V. L.; Kudenko, Yu. G.; Mefodiev, A. V.; Mineev, O. V.; Khotyantsev, A. N.

    2018-01-01

    Scintillation detectors based on extruded plastics have been tested in a 10 GeV/c beam at CERN. The scintillation signal was read out using Kuraray Y11 optical wavelength-shifting fibers and Hamamatsu MPPC micropixel avalanche photodiodes. The light yield was scanned along and across the detectors. The time resolution was determined by fitting the rise of the digitized MPPC pulse, among other methods.

  14. Determining the structure of Higgs couplings at the CERN Large Hadron Collider.

    PubMed

    Plehn, Tilman; Rainwater, David; Zeppenfeld, Dieter

    2002-02-04

    Higgs boson production via weak boson fusion at the CERN Large Hadron Collider has the capability to determine the dominant CP nature of a Higgs boson, via the tensor structure of its coupling to weak bosons. This information is contained in the azimuthal angle distribution of the two outgoing forward tagging jets. The technique is independent of both the Higgs boson mass and the observed decay channel.

  15. Commissioning results of CERN HIE-ISOLDE and INFN ALPI cryogenic control systems

    NASA Astrophysics Data System (ADS)

    Inglese, V.; Pezzetti, M.; Calore, A.; Modanese, P.; Pengo, R.

    2017-02-01

    The cryogenic systems of both accelerators, namely HIE ISOLDE (High Intensity and Energy Isotope Separator On Line DEvice) at CERN and ALPI (Acceleratore Lineare Per Ioni) at LNL, have been refurbished. HIE ISOLDE is a major upgrade of the existing ISOLDE facilities, which required the construction of a superconducting linear accelerator consisting of six cryomodules, each containing five superconductive RF cavities and superconducting solenoids. The ALPI linear accelerator, similar to HIE ISOLDE, is located at Legnaro National Laboratories (LNL) and became operational in the early 1990s. It is composed of 74 superconducting RF cavities, assembled inside 22 cryostats. The new control systems are equipped with PLCs, developed on the CERN UNICOS framework, including Schneider and Siemens PLCs and various fieldbuses (Profibus DP and PA, WorldFIP). The control systems were developed in synergy between CERN and LNL in order to build, effectively and with an optimized use of resources, control systems that enhance ease of operation, maintainability, and long-term availability. This paper describes (i) the cryogenic systems, with special focus on the design of the control systems hardware and software, (ii) the strategy adopted in order to achieve a synergic approach, and (iii) the commissioning results after the cool-down to 4.5 K of the cryomodules.

  16. Oklahoma Center for High Energy Physics (OCHEP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nandi, S; Strauss, M J; Snow, J

    2012-02-29

    The DOE EPSCoR implementation grant, with the support from the State of Oklahoma and from the three universities, Oklahoma State University, University of Oklahoma and Langston University, resulted in the establishment of the Oklahoma Center for High Energy Physics (OCHEP) in 2004. Currently, OCHEP continues to flourish as a vibrant hub for research in experimental and theoretical particle physics and an educational center in the State of Oklahoma. All goals of the original proposal were successfully accomplished. These include the foundation of a new experimental particle physics group at OSU, the establishment of a Tier 2 computing facility for the Large Hadron Collider (LHC) and Tevatron data analysis at OU, and the organization of a vital particle physics research center in Oklahoma based on resources of the three universities. OSU has hired two tenure-track faculty members with initial support from the grant funds. Now both positions are supported through the OSU budget. This new HEP Experimental Group at OSU has established itself as a full member of the Fermilab D0 Collaboration and LHC ATLAS Experiment and has secured external funds from the DOE and the NSF. These funds currently support 2 graduate students, 1 postdoctoral fellow, and 1 part-time engineer. The grant initiated creation of a Tier 2 computing facility at OU as part of the Southwest Tier 2 facility, and a permanent Research Scientist was hired at OU to maintain and run the facility. Permanent support for this position has now been provided through the OU university budget. OCHEP represents a successful model of cooperation between several universities, providing the establishment of a critical mass of manpower, computing and hardware resources. This led to increasing Oklahoma's impact in all areas of HEP: theory, experiment, and computation. The Center personnel are involved in cutting-edge research in experimental, theoretical, and computational aspects of High Energy Physics, with research areas ranging from the search for new phenomena at the Fermilab Tevatron and the CERN Large Hadron Collider to theoretical modeling, computer simulation, detector development and testing, and physics analysis. OCHEP faculty members participating in the D0 collaboration at the Fermilab Tevatron and in the ATLAS collaboration at the CERN LHC have made a major impact on the Standard Model (SM) Higgs boson search, top quark studies, B physics studies, and measurements of Quantum Chromodynamics (QCD) phenomena. The OCHEP Grid computing facility consists of a large computer cluster which is playing a major role in data analysis and Monte Carlo production for both the D0 and ATLAS experiments. Theoretical efforts are devoted to new ideas in Higgs boson physics, extra dimensions, neutrino masses and oscillations, Grand Unified Theories, supersymmetric models, dark matter, and nonperturbative quantum field theory. Theory members are making major contributions to the understanding of phenomena being explored at the Tevatron and the LHC. They have proposed new models for Higgs bosons, and have suggested new signals for extra dimensions and for the search for supersymmetric particles. During the seven-year period when OCHEP was partially funded through the DOE EPSCoR implementation grant, OCHEP members published over 500 refereed journal articles and made over 200 invited presentations at major conferences.
The Center is also involved in education and outreach activities by offering summer research programs for high school teachers and college students, and organizing summer workshops for high school teachers, sometimes coordinating with the Quarknet programs at OSU and OU. The details of the Center can be found at http://ochep.phy.okstate.edu.

  17. Integration of cloud-based storage in BES III computing environment

    NASA Astrophysics Data System (ADS)

    Wang, L.; Hernandez, F.; Deng, Z.

    2014-06-01

    We present an on-going work that aims to evaluate the suitability of cloud-based storage as a supplement to the Lustre file system for storing experimental data for the BES III physics experiment and as a backend for storing files belonging to individual members of the collaboration. In particular, we discuss our findings regarding the support of cloud-based storage in the software stack of the experiment. We report on our development work that improves the support of CERN's ROOT data analysis framework and allows efficient remote access to data through several cloud storage protocols. We also present our efforts providing the experiment with efficient command line tools for navigating and interacting with cloud storage-based data repositories both from interactive sessions and grid jobs.
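    The remote access mentioned above can be exercised from ROOT directly, since TFile::Open dispatches on the URL scheme (local path, root://, http(s)://, and, with the appropriate plugins, cloud storage endpoints). A tiny PyROOT sketch follows; the URL and the tree name are purely hypothetical.

      import ROOT

      # TFile.Open selects the I/O plugin from the URL scheme; this endpoint is hypothetical.
      url = "root://some-gateway.example.org//bes3/user/sample.root"
      f = ROOT.TFile.Open(url)
      if f and not f.IsZombie():
          f.ls()                        # list the objects stored in the remote file
          tree = f.Get("events")        # hypothetical tree name
          if tree:
              print("entries:", tree.GetEntries())
          f.Close()
      else:
          print("could not open", url)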

  18. Controlling front-end electronics boards using commercial solutions

    NASA Astrophysics Data System (ADS)

    Beneyton, R.; Gaspar, C.; Jost, B.; Schmeling, S.

    2002-04-01

    LHCb is a dedicated B-physics experiment under construction at CERN's Large Hadron Collider (LHC) accelerator. This paper will describe the novel approach LHCb is taking toward controlling and monitoring of electronics boards. Instead of using the bus in a crate to exercise control over the boards, we use credit-card sized personal computers (CCPCs) connected via Ethernet to cheap control PCs. The CCPCs will provide simple parallel, I2C, and JTAG buses toward the electronics board. Each board will be equipped with a CCPC and, hence, will be completely independently controlled. The advantages of this scheme versus the traditional bus-based scheme will be described. Also, the integration of the controls of the electronics boards into a commercial supervisory control and data acquisition (SCADA) system will be shown.

  19. Lattice QCD at finite temperature and density from Taylor expansion

    NASA Astrophysics Data System (ADS)

    Steinbrecher, Patrick

    2017-01-01

    In the first part, I present an overview of recent Lattice QCD simulations at finite temperature and density. In particular, we discuss fluctuations of conserved charges: baryon number, electric charge and strangeness. These can be obtained from Taylor expanding the QCD pressure as a function of corresponding chemical potentials. Our simulations were performed using quark masses corresponding to physical pion mass of about 140 MeV and allow a direct comparison to experimental data from ultra-relativistic heavy ion beams at hadron colliders such as the Relativistic Heavy Ion Collider at Brookhaven National Laboratory and the Large Hadron Collider at CERN. In the second part, we discuss computational challenges for current and future exascale Lattice simulations with a focus on new silicon developments from Intel and NVIDIA.
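    The Taylor expansion referred to above is the standard expansion of the QCD pressure in the chemical potentials of the conserved charges; in the usual notation (a generic statement of the method, not a quotation from this proceedings),

      \frac{P}{T^4} = \frac{1}{VT^3}\ln Z
        = \sum_{i,j,k} \frac{1}{i!\,j!\,k!}\,\chi^{BQS}_{ijk}
          \left(\frac{\mu_B}{T}\right)^{i}\left(\frac{\mu_Q}{T}\right)^{j}\left(\frac{\mu_S}{T}\right)^{k},
      \qquad
      \chi^{BQS}_{ijk} = \left.\frac{\partial^{\,i+j+k}(P/T^4)}
          {\partial(\mu_B/T)^{i}\,\partial(\mu_Q/T)^{j}\,\partial(\mu_S/T)^{k}}\right|_{\vec{\mu}=0},

    where the generalised susceptibilities chi^{BQS}_{ijk} are evaluated on the lattice at zero chemical potential and are directly related to the fluctuations of baryon number, electric charge and strangeness that are compared with heavy-ion data.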

  20. Optimising LAN access to grid enabled storage elements

    NASA Astrophysics Data System (ADS)

    Stewart, G. A.; Cowan, G. A.; Dunne, B.; Elwell, A.; Millar, A. P.

    2008-07-01

    When operational, the Large Hadron Collider experiments at CERN will collect tens of petabytes of physics data per year. The worldwide LHC computing grid (WLCG) will distribute this data to over two hundred Tier-1 and Tier-2 computing centres, enabling particle physicists around the globe to access the data for analysis. Although different middleware solutions exist for effective management of storage systems at collaborating institutes, the patterns of access envisaged for Tier-2s fall into two distinct categories. The first involves bulk transfer of data between different Grid storage elements using protocols such as GridFTP. This data movement will principally involve writing ESD and AOD files into Tier-2 storage. Secondly, once datasets are stored at a Tier-2, physics analysis jobs will read the data from the local SE. Such jobs require a POSIX-like interface to the storage so that individual physics events can be extracted. In this paper we consider the performance of POSIX-like access to files held in Disk Pool Manager (DPM) storage elements, a popular lightweight SRM storage manager from EGEE.

  1. Theoretical and Computational Investigation of High-Brightness Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Chiping

    Theoretical and computational investigations of adiabatic thermal beams have been carried out in parameter regimes relevant to the development of advanced high-brightness, high-power accelerators for high-energy physics research and for various applications such as light sources. Most accelerator applications require high-brightness beams. This is true for high-energy accelerators such as linear colliders. It is also true for energy recovery linacs (ERLs) and free electron lasers (FELs) such as x-ray free electron lasers (XFELs). The breakthroughs and highlights in our research in the period from February 1, 2013 to November 30, 2013 were: a) completion of a preliminary theoretical and computational study of adiabatic thermal Child-Langmuir flow (Mok, 2013); and b) presentation of an invited paper entitled "Adiabatic Thermal Beams in a Periodic Focusing Field" at the Space Charge 2013 Workshop, CERN, April 16-19, 2013 (Chen, 2013). In this report, an introductory background for the research project is provided. Basic theory of adiabatic thermal Child-Langmuir flow is reviewed. Results of simulation studies of adiabatic thermal Child-Langmuir flows are discussed.
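    For orientation, the classical (cold-beam, planar-diode) Child-Langmuir law that the adiabatic thermal analysis generalises gives the space-charge-limited current density

      J_{\mathrm{CL}} = \frac{4\,\epsilon_0}{9}\sqrt{\frac{2e}{m}}\,\frac{V^{3/2}}{d^{2}},

    where V is the gap voltage, d the gap spacing and e/m the charge-to-mass ratio of the emitted particles; the adiabatic thermal treatment summarised in the report extends this space-charge-limited result to beams with finite temperature.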

  2. Distributed analysis in ATLAS

    NASA Astrophysics Data System (ADS)

    Dewhurst, A.; Legger, F.

    2015-12-01

    The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are running daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We report on the impact such changes have on the DA infrastructure, describe the new DA components, and include recent performance measurements.

  3. About Separation of Hadron and Electromagnetic Cascades in the Pamela Calorimeter

    NASA Astrophysics Data System (ADS)

    Stozhkov, Yuri I.; Basili, A.; Bencardino, R.; Casolino, M.; de Pascale, M. P.; Furano, G.; Menicucci, A.; Minori, M.; Morselli, A.; Picozza, P.; Sparvoli, R.; Wischnewski, R.; Bakaldin, A.; Galper, A. M.; Koldashov, S. V.; Korotkov, M. G.; Mikhailov, V. V.; Voronov, S. A.; Yurkin, Y. T.; Adriani, O.; Bonechi, L.; Bongi, M.; Papini, P.; Ricciarini, S. B.; Spillantini, P.; Straulino, S.; Taccetti, F.; Vannuccini, E.; Castellini, G.; Boezio, M.; Bonvicini, M.; Mocchiutti, E.; Schiavon, P.; Vacchi, A.; Zampa, G.; Zampa, N.; Carlson, P.; Lund, J.; Lundquist, J.; Orsi, S.; Pearce, M.; Barbarino, G. C.; Campana, D.; Osteria, G.; Rossi, G.; Russo, S.; Boscherini, M.; Mennh, W.; Simonh, M.; Bongiorno, L.; Ricci, M.; Ambriola, M.; Bellotti, R.; Cafagna, F.; Circella, M.; de Marzo, C.; Giglietto, N.; Mirizzi, N.; Romita, M.; Spinelli, P.; Bogomolov, E.; Krutkov, S.; Vasiljev, G.; Bazilevskaya, G. A.; Kvashnin, A. N.; Logachev, V. I.; Makhmutov, V. S.; Maksumov, O. S.; Stozhkov, Yu. I.; Mitchell, J. W.; Streitmatter, R. E.; Stochaj, S. J.

    Results of the calibration of the PAMELA instrument at the CERN facilities are discussed. In September 2003, the calibration of the Neutron Detector together with the Calorimeter was performed with CERN beams of electrons and protons with energies of 20 - 180 GeV. The implementation of the Neutron Detector increases the rejection factor of hadrons against electrons by about a factor of ten. The results of the calibration are in agreement with calculations.

  4. DAMPE prototype and its beam test results at CERN

    NASA Astrophysics Data System (ADS)

    Wu, Jian; Hu, Yiming; Chang, Jin

    The first Chinese high-energy cosmic particle detector (DAMPE) aims to detect electrons and gamma rays in the range between 5 GeV and 10 TeV in space. A prototype of this detector was built and tested using both cosmic muons and a test beam at CERN. The results show the energy and spatial resolution as well as strong separation power between electrons and protons. The detector structure is illustrated as well.

  5. Measurement of the inclusive jet cross section at the CERN pp collider

    NASA Astrophysics Data System (ADS)

    Arnison, G.; Albrow, M. G.; Allkofer, O. C.; Astbury, A.; Aubert, B.; Bacci, C.; Batley, J. R.; Bauer, G.; Bettini, A.; Bézaguet, A.; Bock, R. K.; Bos, K.; Buckley, E.; Bunn, J.; Busetto, G.; Catz, P.; Cennini, P.; Centro, S.; Ceradini, F.; Ciapetti, G.; Cittolin, S.; Clarke, D.; Cline, D.; Cochet, C.; Colas, J.; Colas, P.; Corden, M.; Cox, G.; Dallman, D.; Dau, D.; Debeer, M.; Debrion, J. P.; Degiorgi, M.; della Negra, M.; Demoulin, M.; Denby, B.; Denegri, D.; Diciaccio, A.; Dobrzynski, L.; Dorenbosch, J.; Dowell, J. D.; Duchovni, E.; Edgecock, R.; Eggert, K.; Eisenhandler, E.; Ellis, N.; Erhard, P.; Faissner, H.; Fince Keeler, M.; Flynn, P.; Fontaine, G.; Frey, R.; Frühwirth, R.; Garvey, J.; Gee, D.; Geer, S.; Ghesquière, C.; Ghez, P.; Ghio, F.; Giacomelli, P.; Gibson, W. R.; Giraud-Héraud, Y.; Givernaud, A.; Gonidec, A.; Goodman, M.; Grassmann, H.; Grayer, G.; Guryn, W.; Hansl-Kozanecka, T.; Haynes, W.; Haywood, S. J.; Hoffmann, H.; Holthuizen, D. J.; Homer, R. J.; Homer, R. J.; Honma, A.; Jank, W.; Jimack, M.; Jorat, G.; Kalmus, P. I. P.; Karimäri, V.; Keeler, R.; Kenyon, I.; Kernan, A.; Kienzle, W.; Kinnunen, R.; Kozanecki, W.; Kroll, J.; Kryn, D.; Kyberd, P.; Lacava, F.; Laugier, J. P.; Lees, J. P.; Leuchs, R.; Levegrun, S.; Lévêque, A.; Levi, M.; Linglin, D.; Locci, E.; Long, K.; Markiewicz, T.; Markytan, M.; Martin, T.; Maurin, F.; McMahon, T.; Mendiburu, J.-P.; Meneguzzo, A.; Meyer, O.; Meyer, T.; Minard, M.-N.; Mohammadi, M.; Morgan, K.; Moricca, M.; Moser, H.; Mours, B.; Muller, Th.; Nandi, A.; Naumann, L.; Norton, A.; Paoluzi, L.; Pascoli, D.; Pauss, F.; Perault, C.; Piano Mortari, G.; Pietarinen, E.; Pigot, C.; Pimiä, M.; Pitman, D.; Placci, A.; Porte, J.-P.; Radermacher, E.; Ransdell, J.; Redelberger, T.; Reithler, H.; Revol, J. P.; Richman, J.; Rijssenbeek, M.; Rohlf, J.; Rossi, P.; Roberts, C.; Ruhm, W.; Rubbia, C.; Sajot, G.; Salvini, G.; Sass, J.; Sadoulet, B.; Samyn, D.; Savoy-Navarro, A.; Schinzel, D.; Schwartz, A.; Scott, W.; Scott, W.; Shah, T. P.; Sheer, I.; Siotis, I.; Smith, D.; Sobie, R.; Sphicas, P.; Strauss, J.; Streets, J.; Stubenrauch, C.; Summers, D.; Sumorok, K.; Szonczo, F.; Tao, C.; Ten Have, I.; Thompson, G.; Tscheslog, E.; Tuominiemi, J.; van Eijk, B.; Verecchia, P.; Vialle, J. P.; Virdee, T. S.; von der Schmitt, H.; von Schlippe, W.; Vrana, J.; Vuillemin, V.; Wahl, H. D.; Watkins, P.; Wilke, R.; Wilson, J.; Wingerter, I.; Wimpenny, S. J.; Wulz, C.-E.; Wyatt, T.; Yvert, M.; Zacharov, I.; Zaganidis, N.; Zanello, L.; Zotto, P.

    1986-05-01

    The inclusive jet cross section has been measured in the UA1 experiment at the CERN pp Collider at centre-of-mass energies √s = 546 GeV and √s = 630 GeV. The cross sections are found to be consistent with QCD predictions. The observed change in the cross section with the centre-of-mass energy √s is accounted for in terms of xT scaling.
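    For readers unfamiliar with the term, xT scaling refers to writing the invariant cross section in the standard form

      E\,\frac{d^{3}\sigma}{dp^{3}} = \frac{1}{p_T^{\,n}}\,F(x_T), \qquad x_T = \frac{2p_T}{\sqrt{s}},

    so that measurements taken at different centre-of-mass energies can be compared at fixed x_T through the exponent n (this is the textbook definition, not a formula quoted from the paper).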

  6. Highlights from High Energy Neutrino Experiments at CERN

    NASA Astrophysics Data System (ADS)

    Schlatter, W.-D.

    2015-07-01

    Experiments with high energy neutrino beams at CERN provided early quantitative tests of the Standard Model. This article describes results from studies of the nucleon quark structure and of the weak current, together with the precise measurement of the weak mixing angle. These results set a new benchmark for tests of the electroweak model. In addition, the measurements of the nucleon structure functions in deep inelastic neutrino scattering allowed first quantitative tests of QCD.

  7. PARTICLE PHYSICS: CERN Collider Glimpses Supersymmetry--Maybe.

    PubMed

    Seife, C

    2000-07-14

    Last week, particle physicists at the CERN laboratory in Switzerland announced that by smashing together matter and antimatter in four experiments, they detected an unexpected effect in the sprays of particles that ensued. The anomaly is subtle, and physicists caution that it might still be a statistical fluke. If confirmed, however, it could mark the long-sought discovery of a whole zoo of new particles--and the end of a long-standing model of particle physics.

  8. The management of large cabling campaigns during the Long Shutdown 1 of LHC

    NASA Astrophysics Data System (ADS)

    Meroli, S.; Machado, S.; Formenti, F.; Frans, M.; Guillaume, J. C.; Ricci, D.

    2014-03-01

    The Large Hadron Collider at CERN entered its first 18-month-long shutdown period in February 2013. During this period the entire CERN accelerator complex will undergo major consolidation and upgrade works, preparing the machines for LHC operation at nominal energy (7 TeV/beam). One of the most challenging activities concerns the cabling infrastructure (copper and optical fibre cables) serving the CERN data acquisition, networking and control systems. About 1000 kilometres of cables, distributed in different machine areas, will be installed, representing an investment of about 15 MCHF. This implies an extraordinary challenge in terms of project management, including resource and activity planning, work execution and quality control. The preparation phase of this project started well before its implementation, by defining technical solutions and setting financial plans for staff recruitment and material supply. Enhanced task coordination was further implemented by deploying selected competences to form a central support team.

  9. CERN@school: demonstrating physics with the Timepix detector

    NASA Astrophysics Data System (ADS)

    Whyntie, T.; Bithray, H.; Cook, J.; Coupe, A.; Eddy, D.; Fickling, R. L.; McKenna, J.; Parker, B.; Paul, A.; Shearer, N.

    2015-10-01

    This article shows how the Timepix hybrid silicon pixel detector, developed by the Medipix2 Collaboration, can be used by students and teachers alike to demonstrate some key aspects of any well-rounded physics curriculum with CERN@school. After an overview of the programme, the detector's capabilities for measuring and visualising ionising radiation are examined. The classification of clusters - groups of adjacent pixels - is discussed with respect to identifying the different types of particles. Three demonstration experiments - background radiation measurements, radiation profiles and the attenuation of radiation - are described; these can be used as part of lessons or as inspiration for independent research projects. Results for exemplar data-sets are presented for reference, as well as details of ongoing research projects inspired by these experiments. Interested readers are encouraged to join the CERN@school Collaboration and so contribute to achieving the programme's aim of inspiring the next generation of scientists and engineers.
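    At its simplest, the cluster classification mentioned above amounts to grouping adjacent hit pixels and inspecting the size and shape of each group: small dots for gammas and X-rays, heavy rounded blobs for alphas, long thin tracks for betas and muons. The toy sketch below illustrates the idea; the thresholds are chosen purely for demonstration and are not the CERN@school calibration.

      import numpy as np
      from scipy import ndimage

      def classify_clusters(frame):
          """Toy classification of Timepix-style clusters by size and extent.

          frame: 2-D array of per-pixel counts (256x256 for a Timepix chip).
          """
          labels, n = ndimage.label(frame > 0)          # group adjacent hit pixels
          kinds = []
          for i in range(1, n + 1):
              ys, xs = np.where(labels == i)
              size = len(xs)
              extent = max(xs.ptp(), ys.ptp()) + 1      # bounding-box side length
              if size <= 2:
                  kinds.append("gamma/X-ray (small dot)")
              elif size / float(extent) > 2.0:
                  kinds.append("alpha (heavy blob)")
              else:
                  kinds.append("beta/muon (track)")
          return kinds

      frame = np.zeros((256, 256), dtype=int)
      frame[10, 10] = 1                                 # a single-pixel dot
      frame[100:104, 100:104] = 5                       # a compact 4x4 blob
      frame[200, 50:80] = 1                             # a long straight track
      print(classify_clusters(frame))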

  10. CERN's approach to public outreach

    NASA Astrophysics Data System (ADS)

    Landua, Rolf

    2016-03-01

    CERN's communication goes beyond publishing scientific results. Education and outreach are equally important ways of communicating with the general public, and in particular with the young generation. Over the last decade, CERN has significantly increased its efforts to accommodate the very large interest of the general public (about 300,000 visit requests per year), by ramping up its capacity for guided tours from 25,000 to more than 100,000 visitors per year, by creating six new state-of-the-art exhibitions on-site, by building and operating a modern physics laboratory for school teachers and students, and by showing several traveling exhibitions in about 10 countries per year. The offer for school teachers has also been expanded, to 35-40 weeks of teacher courses with more than 1000 participants from more than 50 countries per year. The talk will give an overview of these and related activities.

  11. ALICE HLT Cluster operation during ALICE Run 2

    NASA Astrophysics Data System (ADS)

    Lehrbach, J.; Krzewicki, M.; Rohr, D.; Engel, H.; Gomez Ramirez, A.; Lindenstruth, V.; Berzano, D.; ALICE Collaboration

    2017-10-01

    ALICE (A Large Ion Collider Experiment) is one of the four major detectors located at the LHC at CERN, focusing on the study of heavy-ion collisions. The ALICE High Level Trigger (HLT) is a compute cluster which reconstructs the events and compresses the data in real time. The data compression performed by the HLT is a vital part of data taking, especially during the heavy-ion runs, in order to be able to store the data; the reliability of the whole cluster is therefore an important matter. To guarantee a consistent state among all compute nodes of the HLT cluster we have automated the operation as much as possible. For automatic deployment of the nodes we use Foreman with locally mirrored repositories, and for configuration management of the nodes we use Puppet. Important parameters of the nodes, such as temperatures, network traffic and CPU load, are monitored with Zabbix. During periods without beam the HLT cluster is used for tests and as one of the WLCG Grid sites to compute offline jobs in order to maximize the usage of our cluster. To prevent interference with normal HLT operations we separate the virtual machines running the Grid jobs from the normal HLT operation via virtual networks (VLANs). In this paper we give an overview of the ALICE HLT operation in 2016.
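
    As a concrete illustration of the kind of monitoring described above, the following Python sketch flags cluster nodes whose metrics exceed simple thresholds. The input format, metric names and threshold values are assumptions made for this example (the actual HLT cluster uses Zabbix for monitoring); this is not the ALICE HLT operations code.

      # Minimal sketch: flag nodes whose monitored metrics breach a threshold.
      # Metric names, thresholds and the input format are assumptions.
      THRESHOLDS = {"temperature_c": 80.0, "cpu_load": 0.95, "net_mbps": 9000.0}

      def unhealthy_nodes(metrics_by_node):
          """Return {node: [violated metrics]} for nodes breaching any threshold."""
          report = {}
          for node, metrics in metrics_by_node.items():
              violations = [name for name, limit in THRESHOLDS.items()
                            if metrics.get(name, 0.0) > limit]
              if violations:
                  report[node] = violations
          return report

      sample = {
          "cn01": {"temperature_c": 62.0, "cpu_load": 0.40, "net_mbps": 1200.0},
          "cn02": {"temperature_c": 85.5, "cpu_load": 0.99, "net_mbps": 300.0},
      }
      print(unhealthy_nodes(sample))   # {'cn02': ['temperature_c', 'cpu_load']}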

  12. Computer-aided design of liposomal drugs: In silico prediction and experimental validation of drug candidates for liposomal remote loading.

    PubMed

    Cern, Ahuva; Barenholz, Yechezkel; Tropsha, Alexander; Goldblum, Amiram

    2014-01-10

    Previously we have developed and statistically validated Quantitative Structure Property Relationship (QSPR) models that correlate drugs' structural, physical and chemical properties as well as experimental conditions with the relative efficiency of remote loading of drugs into liposomes (Cern et al., J. Control. Release 160 (2012) 147-157). Herein, these models have been used to virtually screen a large drug database to identify novel candidate molecules for liposomal drug delivery. Computational hits were considered for experimental validation based on their predicted remote loading efficiency as well as additional considerations such as availability, recommended dose and relevance to the disease. Three compounds were selected for experimental testing which were confirmed to be correctly classified by our previously reported QSPR models developed with Iterative Stochastic Elimination (ISE) and k-Nearest Neighbors (kNN) approaches. In addition, 10 new molecules with known liposome remote loading efficiency that were not used by us in QSPR model development were identified in the published literature and employed as an additional model validation set. The external accuracy of the models was found to be as high as 82% or 92%, depending on the model. This study presents the first successful application of QSPR models for the computer-model-driven design of liposomal drugs. © 2013.
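
    To make the kNN approach mentioned in the abstract concrete, the sketch below trains a k-nearest-neighbours classifier with scikit-learn on synthetic data. The descriptor matrix, class labels and choice of k are placeholders for illustration only; they are not the authors' actual QSPR descriptors, training set or model settings.

      # Sketch of a kNN classifier of the kind referred to in the abstract.
      # All data below are synthetic placeholders, not the published QSPR model.
      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(0)
      X_train = rng.normal(size=(60, 4))        # 60 drugs x 4 molecular descriptors
      y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)  # 1 = loads well

      model = KNeighborsClassifier(n_neighbors=5)
      model.fit(X_train, y_train)

      X_candidates = rng.normal(size=(3, 4))    # descriptors of screened candidates
      print(model.predict(X_candidates))        # predicted loading class per candidate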

  13. Computer-aided design of liposomal drugs: in silico prediction and experimental validation of drug candidates for liposomal remote loading

    PubMed Central

    Cern, Ahuva; Barenholz, Yechezkel; Tropsha, Alexander; Goldblum, Amiram

    2014-01-01

    Previously we have developed and statistically validated Quantitative Structure Property Relationship (QSPR) models that correlate drugs’ structural, physical and chemical properties as well as experimental conditions with the relative efficiency of remote loading of drugs into liposomes (Cern et al., J. Control. Release 160 (2012) 147-157). Herein, these models have been used to virtually screen a large drug database to identify novel candidate molecules for liposomal drug delivery. Computational hits were considered for experimental validation based on their predicted remote loading efficiency as well as additional considerations such as availability, recommended dose and relevance to the disease. Three compounds were selected for experimental testing which were confirmed to be correctly classified by our previously reported QSPR models developed with Iterative Stochastic Elimination (ISE) and k-nearest neighbors (kNN) approaches. In addition, 10 new molecules with known liposome remote loading efficiency that were not used in QSPR model development were identified in the published literature and employed as an additional model validation set. The external accuracy of the models was found to be as high as 82% or 92%, depending on the model. This study presents the first successful application of QSPR models for the computer-model-driven design of liposomal drugs. PMID:24184343

  14. PREFACE: 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011)

    NASA Astrophysics Data System (ADS)

    Teodorescu, Liliana; Britton, David; Glover, Nigel; Heinrich, Gudrun; Lauret, Jérôme; Naumann, Axel; Speer, Thomas; Teixeira-Dias, Pedro

    2012-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011), which took place on 5-7 September 2011 at Brunel University, UK. The workshop series, which began in 1990 in Lyon, France, brings together computer science researchers and practitioners, and researchers from particle physics and related fields, in order to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. It is a forum for the exchange of ideas among the fields, exploring and promoting cutting-edge computing, data analysis and theoretical calculation techniques in fundamental physics research. This year's edition of the workshop brought together over 100 participants from all over the world. 14 invited speakers presented key topics on computing ecosystems, cloud computing, multivariate data analysis, symbolic and automatic theoretical calculations as well as computing and data analysis challenges in astrophysics, bioinformatics and musicology. Over 80 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. Panel and round-table discussions on data management and multivariate data analysis uncovered new ideas and collaboration opportunities in the respective areas. This edition of ACAT was generously sponsored by the Science and Technology Facilities Council (STFC), the Institute for Particle Physics Phenomenology (IPPP) at Durham University, Brookhaven National Laboratory in the USA, and Dell. We would like to thank all the participants of the workshop for the high level of their scientific contributions and for their enthusiastic participation in all its activities, which were, ultimately, the key factors in the success of the workshop. Further information on ACAT 2011 can be found at http://acat2011.cern.ch Dr Liliana Teodorescu, Brunel University. The PDF also contains details of the workshop's committees and sponsors.

  15. International Workshop on Linear Colliders 2010

    ScienceCinema

    Lebrun, Ph.

    2018-06-20

    IWLC2010 - International Workshop on Linear Colliders 2010. ECFA-CLIC-ILC joint meeting: Monday 18 October - Friday 22 October 2010. Venue: CERN and CICG (International Conference Centre Geneva, Switzerland). This year, the International Workshop on Linear Colliders organized by the European Committee for Future Accelerators (ECFA) will study the physics, detectors and accelerator complex of a linear collider, covering both CLIC and ILC options. IWLC2010 is hosted by CERN.

  16. CERN: A global project

    NASA Astrophysics Data System (ADS)

    Voss, Rüdiger

    2017-07-01

    In the most important shift of paradigm of its membership rules in 60 years, CERN in 2010 introduced a policy of “Geographical Enlargement” which for the first time opened the door for membership of non-European States in the Organization. This short article reviews briefly the history of CERN’s membership rules, discusses the rationale behind the new policy, its relationship with the emerging global roadmap of particle physics, and gives a short overview of the status of the enlargement process.

  17. International Workshop on Linear Colliders 2010

    ScienceCinema

    Yamada, Sakue

    2018-05-24

    IWLC2010 - International Workshop on Linear Colliders 2010. ECFA-CLIC-ILC joint meeting: Monday 18 October - Friday 22 October 2010. Venue: CERN and CICG (International Conference Centre Geneva, Switzerland). This year, the International Workshop on Linear Colliders organized by the European Committee for Future Accelerators (ECFA) will study the physics, detectors and accelerator complex of a linear collider, covering both CLIC and ILC options. IWLC2010 is hosted by CERN.

  18. Performance of a liquid argon time projection chamber exposed to the CERN West Area Neutrino Facility neutrino beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arneodo, F.; Cavanna, F.; Mitri, I. De

    2006-12-01

    We present the results of the first exposure of a Liquid Argon TPC to a multi-GeV neutrino beam. The data have been collected with a 50 liters ICARUS-like chamber located between the CHORUS and NOMAD experiments at the CERN West Area Neutrino Facility (WANF). We discuss both the instrumental performance of the detector and its capability to identify and reconstruct low-multiplicity neutrino interactions.

  19. Upper limits of the proton magnetic form factor in the time-like region from p̄p → e+e- at the CERN-ISR

    NASA Astrophysics Data System (ADS)

    Baglin, C.; Baird, S.; Bassompierre, G.; Borreani, G.; Brient, J. C.; Broll, C.; Brom, J. M.; Bugge, L.; Buran, T.; Burq, J. P.; Bussière, A.; Buzzo, A.; Cester, R.; Chemarin, M.; Chevallier, M.; Escoubes, B.; Fay, J.; Ferroni, S.; Gracco, V.; Guillaud, J. P.; Khan-Aronsen, E.; Kirsebom, K.; Ille, B.; Lambert, M.; Leistam, L.; Lundby, A.; Macri, M.; Marchetto, F.; Mattera, L.; Menichetti, E.; Mouellic, B.; Pastrone, N.; Petrillo, L.; Pia, M. G.; Poulet, M.; Pozzo, A.; Rinaudo, G.; Santroni, A.; Severi, M.; Skjevling, G.; Stapnes, S.; Stugu, B.; Tomasini, F.; Valbusa, U.

    1985-11-01

    From the measurement of e+e- pairs from the reaction p̄p → e+e- at the CERN-ISR, using an antiproton beam and a hydrogen jet target, we derived upper limits for the proton magnetic form factor in the time-like region at Q² ≈ 8.9 (GeV/c)² and Q² ≈ 12.5 (GeV/c)².

  20. Diffractive Higgs boson production at the Fermilab Tevatron and the CERN Large Hadron Collider.

    PubMed

    Enberg, R; Ingelman, G; Kissavos, A; Tîmneanu, N

    2002-08-19

    Improved possibilities to find the Higgs boson in diffractive events, having less hadronic activity, depend on whether the cross section is large enough. Based on the soft color interaction models that successfully describe diffractive hard scattering at DESY HERA and the Fermilab Tevatron, we find that only a few diffractive Higgs events may be produced at the Tevatron, but we predict a substantial rate at the CERN Large Hadron Collider.

  1. Internet end-to-end performance monitoring for the High Energy Nuclear and Particle Physics community

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, W.

    2000-02-22

    Modern High Energy Nuclear and Particle Physics (HENP) experiments at laboratories around the world present a significant challenge to wide area networks. Petabytes (10^15 bytes) or exabytes (10^18 bytes) of data will be generated during the lifetime of an experiment. Much of this data will be distributed via the Internet to the experiment's collaborators at universities and institutes throughout the world for analysis. In order to assess the feasibility of the computing goals of these and future experiments, the HENP networking community is actively monitoring performance across a large part of the Internet used by its collaborators. Since 1995, the pingER project has been collecting data on ping packet loss and round-trip times. As of January 2000, there are 28 monitoring sites in 15 countries gathering data on over 2,000 end-to-end pairs. HENP labs such as SLAC, Fermilab and CERN are using Advanced Network's Surveyor project and monitoring performance from the one-way delay of UDP packets. More recently several HENP sites have become involved with NLANR's active measurement program (AMP). In addition, SLAC and CERN are part of the RIPE test-traffic project and SLAC is home to a NIMI machine. This large end-to-end performance monitoring infrastructure allows the HENP networking community to chart long-term trends and closely examine short-term glitches across a wide range of networks and connections. The different methodologies provide opportunities to compare results based on different protocols and statistical samples. Understanding agreement and discrepancies between results provides particular insight into the nature of the network. This paper highlights the practical side of monitoring by reviewing the special needs of High Energy Nuclear and Particle Physics experiments and provides an overview of the experience of measuring performance across a large number of interconnected networks throughout the world with various methodologies. In particular, results from each project are compared and disagreements are analyzed. The goal is to address issues for improving the understanding, gathering and analysis of accurate monitoring data, but the outlook for the computing goals of HENP is also examined.
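
    For readers unfamiliar with this style of measurement, the Python sketch below samples round-trip times and loss to a remote host. Real pingER measurements use ICMP echo ("ping"), which requires raw-socket privileges, so this simplified illustration times TCP handshakes instead; the host, port, sample count and timeout are arbitrary choices, and the code is not part of the pingER tool set.

      # Simplified RTT/loss sampler in the spirit of pingER. Uses TCP handshake
      # timing rather than ICMP echo (which needs raw-socket privileges).
      import socket
      import time

      def sample_rtts(host, port=80, samples=10, timeout=2.0):
          """Return (loss_fraction, list of successful round-trip times in ms)."""
          rtts, lost = [], 0
          for _ in range(samples):
              start = time.monotonic()
              try:
                  with socket.create_connection((host, port), timeout=timeout):
                      rtts.append((time.monotonic() - start) * 1000.0)
              except OSError:
                  lost += 1
          return lost / samples, rtts

      loss, rtts = sample_rtts("www.cern.ch")
      if rtts:
          print(f"loss={loss:.0%}  min/avg/max = {min(rtts):.1f}/"
                f"{sum(rtts) / len(rtts):.1f}/{max(rtts):.1f} ms")
      else:
          print("all probes failed")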

  2. Integrating new Storage Technologies into EOS

    NASA Astrophysics Data System (ADS)

    Peters, Andreas J.; van der Ster, Dan C.; Rocha, Joaquim; Lensing, Paul

    2015-12-01

    The EOS[1] storage software was designed to cover CERN disk-only storage use cases in the medium term, trading scalability against latency. To cover and prepare for long-term requirements, the CERN IT data and storage services group (DSS) is actively conducting R&D and making open source contributions to experiment with a next-generation storage software based on CEPH[3] and ethernet-enabled disk drives. CEPH provides a scale-out object storage system, RADOS, together with various optional high-level services such as an S3 gateway, RADOS block devices and a POSIX-compliant file system, CephFS. The acquisition of CEPH by Red Hat underlines the promising role of CEPH as the open source storage platform of the future. CERN IT is running a CEPH service in the context of OpenStack on a moderate scale of 1 PB of replicated storage. Building a 100+ PB storage system based on CEPH will require software and hardware tuning, and it is crucial to demonstrate the feasibility and to iron out bottlenecks and blocking issues beforehand. The main idea behind this R&D is to leverage and contribute to existing building blocks in the CEPH storage stack and to implement a few CERN-specific requirements in a thin, customisable storage layer. A second research topic is the integration of ethernet-enabled disks. This paper introduces various ongoing open source developments, their status and applicability.
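
    As a small illustration of the RADOS object interface that this R&D builds on, the sketch below writes and reads one object through the python-rados bindings. It assumes a reachable Ceph cluster, a readable ceph.conf and an existing pool called "testpool" (all assumptions made for the example); it is not CERN's EOS/CEPH integration code.

      # Minimal librados example via the python-rados bindings. The cluster
      # configuration path and pool name are assumptions for this sketch.
      import rados

      cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
      cluster.connect()
      try:
          ioctx = cluster.open_ioctx("testpool")     # pool name is an assumption
          try:
              ioctx.write_full("hello-object", b"stored via librados")
              print(ioctx.read("hello-object"))      # b'stored via librados'
          finally:
              ioctx.close()
      finally:
          cluster.shutdown()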

  3. A possible biomedical facility at the European Organization for Nuclear Research (CERN)

    PubMed Central

    Dosanjh, M; Myers, S

    2013-01-01

    A well-attended meeting, called “Brainstorming discussion for a possible biomedical facility at CERN”, was held by the European Organization for Nuclear Research (CERN) at the European Laboratory for Particle Physics on 25 June 2012. This was concerned with adapting an existing, but little used, 78-m circumference CERN synchrotron to deliver a wide range of ion species, preferably from protons to at least neon ions, with beam specifications that match existing clinical facilities. The potential extensive research portfolio discussed included beam ballistics in humanoid phantoms, advanced dosimetry, remote imaging techniques and technical developments in beam delivery, including gantry design. In addition, a modern laboratory for biomedical characterisation of these beams would allow important radiobiological studies, such as relative biological effectiveness, in a dedicated facility with standardisation of experimental conditions and biological end points. A control photon and electron beam would be required nearby for relative biological effectiveness comparisons. Research beam time availability would far exceed that at other facilities throughout the world. This would allow more rapid progress in several biomedical areas, such as in charged hadron therapy of cancer, radioisotope production and radioprotection. The ethos of CERN, in terms of open access, peer-reviewed projects and governance has been so successful for High Energy Physics that application of the same to biomedicine would attract high-quality research, with possible contributions from Europe and beyond, along with potential new funding streams. PMID:23549990

  4. High fidelity 3-dimensional models of beam-electron cloud interactions in circular accelerators

    NASA Astrophysics Data System (ADS)

    Feiz Zarrin Ghalam, Ali

    Electron cloud is a low-density electron population created inside the vacuum chamber of circular machines with positively charged beams. The electron cloud limits the peak current of the beam and degrades the beam quality through luminosity degradation, emittance growth and head-tail or bunch-to-bunch instabilities. The adverse effects of the electron cloud on long-term beam dynamics become more and more important as beams go to higher and higher energies. This problem has become a major concern in the design of many future circular machines, such as the Large Hadron Collider (LHC) under construction at the European Organization for Nuclear Research (CERN). Because of the importance of the problem, several simulation models have been developed to model the long-term beam-electron cloud interaction. These models are based on the "single-kick approximation", in which the electron cloud is assumed to be concentrated in one thin slab around the ring. While this model is efficient in terms of computational cost, it does not reflect the real physical situation, since the forces exerted by the electron cloud on the beam are non-linear, contrary to the model's assumption. To address this limitation of existing codes, a new model is developed in this thesis to model the beam-electron cloud interaction continuously. The code is derived from a 3-D parallel Particle-In-Cell (PIC) model (QuickPIC) originally used for plasma wakefield acceleration research. To adapt the original model to the environment of circular machines, the betatron and synchrotron equations of motion have been added to the code, and the effects of chromaticity and lattice structure have been included. QuickPIC is then benchmarked against HEAD-TAIL, a code based on the single-kick approximation, for the transverse spot size of the beam in the CERN LHC. The growth predicted by QuickPIC is less than the one predicted by HEAD-TAIL. The code is then used to investigate the effect of electron cloud image charges on the long-term beam dynamics, particularly on the transverse tune shift of the beam in the CERN Super Proton Synchrotron (SPS) ring. The force from the electron cloud image charges on the beam cancels the force due to the cloud compression formed on the beam axis, and the tune shift is therefore mainly due to the uniform electron cloud density. (Abstract shortened by UMI.)

  5. Measurements and FLUKA Simulations of Bismuth, Aluminium and Indium Activation at the upgraded CERN Shielding Benchmark Facility (CSBF)

    NASA Astrophysics Data System (ADS)

    Iliopoulou, E.; Bamidis, P.; Brugger, M.; Froeschl, R.; Infantino, A.; Kajimoto, T.; Nakao, N.; Roesler, S.; Sanami, T.; Siountas, A.; Yashima, H.

    2018-06-01

    The CERN High energy AcceleRator Mixed field (CHARM) facility is situated in the CERN Proton Synchrotron (PS) East Experimental Area. The facility receives a pulsed proton beam from the CERN PS with a beam momentum of 24 GeV/c, with 5·10^11 protons per pulse, a pulse length of 350 ms and a maximum average beam intensity of 6.7·10^10 protons per second. The extracted proton beam impacts on a cylindrical copper target. The shielding of the CHARM facility includes the CERN Shielding Benchmark Facility (CSBF), situated laterally above the target, which allows deep shielding penetration benchmark studies of various shielding materials. This facility was significantly upgraded during the extended technical stop at the beginning of 2016. It now consists of 40 cm of cast iron shielding, a 200 cm long removable sample holder concrete block with 3 inserts for activation samples, and a material test location that is used for the measurement of the attenuation length of different shielding materials as well as for sample activation at different thicknesses of the shielding materials. Activation samples of bismuth, aluminium and indium were placed in the CSBF in September 2016 to characterize the upgraded version of the CSBF. Monte Carlo simulations with the FLUKA code have been performed to estimate the specific production yields of bismuth isotopes (206Bi, 205Bi, 204Bi, 203Bi, 202Bi, 201Bi) from 209Bi, of 24Na from 27Al and of 115mIn from 115In for these samples. The production yields estimated by the FLUKA Monte Carlo simulations are compared to the production yields obtained from γ-spectroscopy measurements of the samples, taking the beam intensity profile into account. The agreement between the FLUKA predictions and the γ-spectroscopy measurements for the production yields is at the level of a factor of 2.
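
    Comparing simulated production yields with γ-spectroscopy measurements typically requires correcting for activity build-up during irradiation and decay before counting. The Python sketch below shows the standard textbook form of that correction; the half-life, production rate and times are placeholder values, and this is not necessarily the exact analysis procedure used by the authors.

      # Textbook activation build-up/decay correction (illustrative values only).
      import math

      def activity_after_cooling(production_rate, half_life, t_irr, t_cool):
          """Activity [Bq] after producing 'production_rate' atoms/s for t_irr
          seconds and then letting the sample cool for t_cool seconds."""
          lam = math.log(2.0) / half_life
          return production_rate * (1.0 - math.exp(-lam * t_irr)) * math.exp(-lam * t_cool)

      # Example: 1e4 atoms/s, a half-life of roughly 6.24 days (about that of
      # 206Bi), 8 h of irradiation and 2 days of cooling before counting.
      print(activity_after_cooling(1e4, 6.24 * 86400, 8 * 3600, 2 * 86400))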

  6. Two-particle correlations in azimuthal angle and pseudorapidity in inelastic p + p interactions at the CERN Super Proton Synchrotron

    DOE PAGES

    Aduszkiewicz, A.; Ali, Y.; Andronov, E.; ...

    2017-01-30

    Results on two-particle Δη-Δφ correlations in inelastic p + p interactions at 20, 31, 40, 80, and 158 GeV/c are presented. The measurements were performed using the large-acceptance NA61/SHINE hadron spectrometer at the CERN Super Proton Synchrotron. The data show structures which can be attributed mainly to effects of resonance decays, momentum conservation, and quantum statistics. Furthermore, the results are compared with the EPOS and UrQMD models.

  7. News UK public libraries offer walk-in access to research Atoms for Peace? The Atomic Weapons Establishment and UK universities Students present their research to academics: CERN@school Science in a suitcase: Marvin and Milo visit Ethiopia Inspiring telescopes A day for everyone teaching physics 2014 Forthcoming Events

    NASA Astrophysics Data System (ADS)

    2014-05-01

    UK public libraries offer walk-in access to research Atoms for Peace? The Atomic Weapons Establishment and UK universities Students present their research to academics: CERN@school Science in a suitcase: Marvin and Milo visit Ethiopia Inspiring telescopes A day for everyone teaching physics 2014 Forthcoming Events

  8. Overview of LHC physics results at ICHEP

    ScienceCinema

    Mangano, Michelangelo

    2018-06-20

    This month's LHC physics day will review the physics results presented by the LHC experiments at the 2010 ICHEP in Paris. The experimental presentations will be preceded by the bi-weekly LHC accelerator status report. The meeting will be broadcast via EVO. For those attending, information on accommodation, access to CERN and laptop registration is available from http://cern.ch/lpcc/visits

  9. CERN at 60: giant magnet journeys through Geneva

    NASA Astrophysics Data System (ADS)

    Banks, Michael

    2014-07-01

    More than 30,000 people descended onto Geneva's harbour last month to celebrate the bicentenary of the city's integration into Switzerland with a parade through the city. Joining the 1200 participants at the Genève200 celebrations were staff from the CERN particle-physics lab, which is located on the outskirts of Geneva, who paraded a superconducting dipole magnet - similar to the thousands used in the Large Hadron Collider - through the city's narrow streets on a 20 m lorry.

  10. Astronomie, écologie et poésie par Hubert Reeves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-09-21

    Hubert Reeves. The astrophysicist gives a lecture and then talks with the writer François Bon about "Astronomy, ecology and poetry". For more information: http://outreach.web.cern.ch/outreach/FR/evenements/conferences.html. Seating is limited; booking is required at the CERN Reception: +41 22 767 76 76. The evening will be broadcast live on the web: http://webcast.cern.ch/

  11. Retirement Kjell Johnsen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2007-12-05

    On the occasion of his 65th birthday, several speakers (including the ambassador of Norway) thank Kjell Johnsen, born in Norway in June 1921, for his 34 years of service at CERN and retrace his life and work. K. Johnsen took part in the first studies on the accelerators of the future physics laboratory and was also the father and first director of the CERN Accelerator School (CAS).

  12. News Conference: Physics brings the community together Training: CERN trains physics teachers Education: World conference fosters physics collaborations Lecture: Physics education live at ASE Prize: Physics teacher wins first Moore medal Festival: European presidents patronize Science on Stage festival Videoconference: Videoconference brings Durban closer to the classroom

    NASA Astrophysics Data System (ADS)

    2012-03-01

    Conference: Physics brings the community together Training: CERN trains physics teachers Education: World conference fosters physics collaborations Lecture: Physics education live at ASE Prize: Physics teacher wins first Moore medal Festival: European presidents patronize Science on Stage festival Videoconference: Videoconference brings Durban closer to the classroom

  13. News Festival: Science on stage deadline approaches Conference: Welsh conference attracts teachers Data: New phase of CERN openlab tackles exascale IT challenges for science Meeting: German Physical Society holds its physics education spring meeting Conference: Association offers golden opportunity in Norway Competition: So what's the right answer then?

    NASA Astrophysics Data System (ADS)

    2012-07-01

    Festival: Science on stage deadline approaches Conference: Welsh conference attracts teachers Data: New phase of CERN openlab tackles exascale IT challenges for science Meeting: German Physical Society holds its physics education spring meeting Conference: Association offers golden opportunity in Norway Competition: So what's the right answer then?

  14. Overview of LHC physics results at ICHEP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2011-02-25

    This month's LHC physics day will review the physics results presented by the LHC experiments at the 2010 ICHEP in Paris. The experimental presentations will be preceded by the bi-weekly LHC accelerator status report. The meeting will be broadcast via EVO. For those attending, information on accommodation, access to CERN and laptop registration is available from http://cern.ch/lpcc/visits

  15. Measurement of the antiproton-nucleus annihilation cross-section at low energy

    NASA Astrophysics Data System (ADS)

    Aghai-Khozani, H.; Bianconi, A.; Corradini, M.; Hayano, R.; Hori, M.; Leali, M.; Lodi Rizzini, E.; Mascagna, V.; Murakami, Y.; Prest, M.; Vallazza, E.; Venturelli, L.; Yamada, H.

    2018-02-01

    Systematic measurements of the annihilation cross sections of low-energy antinucleons were performed at CERN in the 1980s and 1990s. However, the antiproton data on medium-heavy and heavy nuclear targets are scarce. The ASACUSA Collaboration at CERN has measured the antiproton annihilation cross section on carbon at 5.3 MeV: the value is (1.73 ± 0.25) barn. The result is compared with the antineutron experimental data and with theoretical predictions.

  16. High Energy Electron Detection with ATIC

    NASA Technical Reports Server (NTRS)

    Chang, J.; Schmidt, W. K. H.; Adams, James H., Jr.; Ahn, H.; Ampe, J.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    The ATIC (Advanced Thin Ionization Calorimeter) balloon-borne ionization calorimeter is well suited to record and identify high energy cosmic ray electrons. The instrument was exposed to high-energy beams at the CERN H2 beam line in September 1999. We have simulated the performance of the instrument and compare the simulations with actual high energy electron exposures at the CERN accelerator. Simulations and measurements do not agree exactly in detail, but overall the simulations have predicted the actual measured behavior quite well.

  17. Optical fibres in the radiation environment of CERN

    NASA Astrophysics Data System (ADS)

    Guillermain, E.

    2017-11-01

    CERN, the European Organization for Nuclear Research (in Geneva, Switzerland), is home to a complex scientific instrument: the 27-kilometre Large Hadron Collider (LHC) collides beams of high-energy particles at close to the speed of light. Optical fibres are widely used at CERN, both in surface areas (e.g. for inter-building IT networks) and in the accelerator complex underground (e.g. for cryogenics, vacuum and safety systems). Optical fibres in the accelerator are exposed to mixed radiation fields (mainly composed of protons, pions, neutrons and other hadrons, gamma rays and electrons), with dose rates depending on the particular installation zone, and with radiation levels often significantly higher than those encountered in space. In the LHC and its injector chain, radiation levels range from relatively low annual doses of a few Gy up to hundreds of kGy. Optical fibres suffer from Radiation Induced Attenuation (RIA, expressed in dB per unit length), which affects light transmission and depends on the irradiation conditions (e.g. dose rate, total dose, temperature). In the CERN accelerator complex, the failure of an optical link can affect the proper functioning of control or monitoring systems and cause an interruption of accelerator operation. The qualification of optical fibres for installation in critical radiation areas is therefore crucial. Thus, all optical fibre types installed in radiation areas at CERN are subject to laboratory irradiation tests, in order to evaluate their RIA at different total doses and dose rates. This allows the selection of the appropriate optical fibre type (conventional or radiation resistant) compliant with the requirements of each installation. Irradiation tests are performed in collaboration with Fraunhofer INT (irradiation facilities and expert team in Euskirchen, Germany). Conventional off-the-shelf optical fibres can be installed for optical links exposed to low radiation levels (i.e. annual doses typically below a few kGy). Nevertheless, the conventional optical fibres must be carefully qualified, as a spread in RIA of a factor of 10 is observed among optical fibres of different types and dopants. In higher radiation areas, special radiation-resistant optical fibres are installed. For total doses above 1 kGy, the RIA of these special optical fibres is at least 10 times lower than that of conventional optical fibres under the same irradiation conditions. 2400 km of these special radiation-resistant optical fibres were recently procured at CERN. As part of this procurement process, a quality assurance plan including the irradiation testing of all 65 produced batches was set up. This presentation will review the selection process of the appropriate optical fibre types to be installed in the radiation environment of CERN. The methodology for choosing the irradiation parameters for the laboratory tests will be discussed, together with an overview of the RIA of different optical fibre types under several irradiation conditions.
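
    Since RIA is quoted in dB per unit length, a first-order qualification check is simply whether the induced loss over the installed fibre length still fits within the optical power margin of the link. The short Python sketch below performs that arithmetic; the RIA value, fibre length and power margin are placeholder numbers, not CERN qualification data.

      # Link-budget arithmetic based on the RIA definition in the abstract.
      # All numerical values are placeholders for illustration.
      def radiation_induced_loss(ria_db_per_km, length_km):
          """Total extra attenuation of a fibre link after irradiation, in dB."""
          return ria_db_per_km * length_km

      def link_still_works(ria_db_per_km, length_km, power_margin_db):
          """True if the optical power margin still covers the induced loss."""
          return radiation_induced_loss(ria_db_per_km, length_km) <= power_margin_db

      # Example: 0.5 km of radiation-resistant fibre with an assumed RIA of
      # 3 dB/km at the accumulated dose, against a 6 dB power margin.
      print(link_still_works(ria_db_per_km=3.0, length_km=0.5, power_margin_db=6.0))  # True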

  18. Measuring mumbo jumbo: A preliminary quantification of the use of jargon in science communication.

    PubMed

    Sharon, Aviv J; Baram-Tsabari, Ayelet

    2014-07-01

    Leaders of the scientific community encourage scientists to learn effective science communication, including honing the skill to discuss science with little professional jargon. However, avoiding jargon is not trivial for scientists for several reasons, and this demands special attention in teaching and evaluation. Despite this, no standard measurement for the use of scientific jargon in speech has been developed to date. Here a standard yardstick for the use of scientific jargon in spoken texts, using a computational linguistics approach, is proposed. Analyzed transcripts included academic speech, scientific TEDTalks, and communication about the discovery of a Higgs-like boson at CERN. Findings suggest that scientists use less jargon in communication with a general audience than in communication with peers, but not always less obscure jargon. These findings may lay the groundwork for evaluating the use of jargon.

  19. A multi-port 10GbE PCIe NIC featuring UDP offload and GPUDirect capabilities.

    NASA Astrophysics Data System (ADS)

    Ammendola, Roberto; Biagioni, Andrea; Frezza, Ottorino; Lamanna, Gianluca; Lo Cicero, Francesca; Lonardo, Alessandro; Martinelli, Michele; Stanislao Paolucci, Pier; Pastorelli, Elena; Pontisso, Luca; Rossetti, Davide; Simula, Francesco; Sozzi, Marco; Tosoratto, Laura; Vicini, Piero

    2015-12-01

    NaNet-10 is a four-port 10GbE PCIe Network Interface Card designed for low-latency real-time operations with GPU systems. To this end the design includes a UDP offload module, for fast and clock-cycle-deterministic handling of the transport layer protocol, plus a GPUDirect P2P/RDMA engine for low-latency communication with NVIDIA Tesla GPU devices. A dedicated module (Multi-Stream) can optionally process input UDP streams before the data are delivered through PCIe DMA to their destination devices, re-organizing data from different streams so that the subsequent computation is optimized. NaNet-10 is going to be integrated in the NA62 experiment at CERN in order to assess the suitability of GPGPU systems as real-time triggers; results and lessons learned while performing this activity will be reported herein.

  20. The third level trigger and output event unit of the UA1 data-acquisition system

    NASA Astrophysics Data System (ADS)

    Cittolin, S.; Demoulin, M.; Fucci, A.; Haynes, W.; Martin, B.; Porte, J. P.; Sphicas, P.

    1989-12-01

    The upgraded UA1 experiment utilizes twelve 3081/E emulators for its third-level trigger system. The system is interfaced to VME and is controlled by 68000 microprocessor VME boards on the input and output. The output controller communicates with an IBM 9375 mainframe via the CERN-IBM developed VICI interface. The events selected by the emulators are output on IBM-3480 cassettes. The user interface to this system is based on a series of Macintosh personal computers connected to the VME bus. These Macs are also used for developing software for the emulators and for monitoring the entire system. The same configuration has also been used for offline event reconstruction. A description of the system, together with details of both the online and offline modes of operation and an evaluation of its performance, is presented.

  1. LHC, le Big Bang en éprouvette

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Our understanding of the Universe is changing... Science bar - open to all. Debate moderated by Marie-Odile Montchicourt, journalist at France Info. Event held by videoconference between the Globe of Science and Innovation, the Baloard bar in Montpellier and the Maison des Métallos in Paris. Speakers at CERN: Philippe Charpentier and Daniel Froideveaux, physicists at CERN. Speakers in Paris: Vincent Bontemps, philosopher and researcher at CEA; Jacques Arnould, philosopher, historian of science and theologian; Jean-Jacques Beineix, film director, producer and screenwriter. Speakers in Montpellier (LPTA): André Neveu, theoretical physicist and research director at CNRS; Gilbert Moultaka, theoretical physicist and CNRS research scientist. Partners: CERN, CEA, IN2P3, Université MPL2 (LPTA). Organized as part of the Fête de la science 2008.

  2. Disk storage at CERN

    NASA Astrophysics Data System (ADS)

    Mascetti, L.; Cano, E.; Chan, B.; Espinal, X.; Fiorot, A.; González Labrador, H.; Iven, J.; Lamanna, M.; Lo Presti, G.; Mościcki, JT; Peters, AJ; Ponce, S.; Rousseau, H.; van der Ster, D.

    2015-12-01

    CERN IT DSS operates the main storage resources for data taking and physics analysis, mainly via three systems: AFS, CASTOR and EOS. The total usable space available on disk for users is about 100 PB (with relative ratios 1:20:120). EOS actively uses the two CERN Tier-0 centres (Meyrin and Wigner) with a 50:50 ratio. IT DSS also provides sizeable on-demand resources for IT services, most notably OpenStack and NFS-based clients: this is provided by a Ceph infrastructure (3 PB) and a few proprietary servers (NetApp). We will describe our operational experience and recent changes to these systems, with special emphasis on the present usage for LHC data taking and the convergence to commodity hardware (nodes with 200 TB each, with optional SSDs) shared across all services. We also describe our experience in coupling commodity and home-grown solutions (e.g. CERNBox integration in EOS, Ceph disk pools for AFS, CASTOR and NFS) and finally the future evolution of these systems for WLCG and beyond.

  3. First test of BNL electron beam ion source with high current density electron beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pikin, Alexander, E-mail: pikin@bnl.gov; Alessi, James G.; Beebe, Edward N.

    A new electron gun with electrostatic compression has been installed at the Electron Beam Ion Source (EBIS) Test Stand at BNL. This is a collaborative effort by the BNL and CERN teams with the common goal of studying an EBIS with an electron beam current up to 10 A, a current density up to 10,000 A/cm² and an energy of more than 50 keV. Intense and pure beams of heavy, highly charged ions with mass-to-charge ratio < 4.5 are requested by many heavy-ion research facilities, including the NASA Space Radiation Laboratory (NSRL) at BNL and HIE-ISOLDE at CERN. With a multiampere electron gun, the EBIS should be capable of delivering highly charged ions both for RHIC facility applications at BNL and for ISOLDE experiments at CERN. Details of the electron gun simulations and design, and of the Test EBIS electrostatic and magnetostatic structures with the new electron gun, are presented. The experimental results of the electron beam transmission are given.

  4. Protocols for Scholarly Communication

    NASA Astrophysics Data System (ADS)

    Pepe, A.; Yeomans, J.

    2007-10-01

    CERN, the European Organization for Nuclear Research, has operated an institutional preprint repository for more than 10 years. The repository contains over 850,000 records of which more than 450,000 are full-text OA preprints, mostly in the field of particle physics, and it is integrated with the library's holdings of books, conference proceedings, journals and other grey literature. In order to encourage effective propagation and open access to scholarly material, CERN is implementing a range of innovative library services into its document repository: automatic keywording, reference extraction, collaborative management tools and bibliometric tools. Some of these services, such as user reviewing and automatic metadata extraction, could make up an interesting testbed for future publishing solutions and certainly provide an exciting environment for e-science possibilities. The future protocol for scientific communication should guide authors naturally towards OA publication, and CERN wants to help reach a full open access publishing environment for the particle physics community and related sciences in the next few years.

  5. First experimental evidence of hydrodynamic tunneling of ultra-relativistic protons in extended solid copper target at the CERN HiRadMat facility

    NASA Astrophysics Data System (ADS)

    Schmidt, R.; Blanco Sancho, J.; Burkart, F.; Grenier, D.; Wollmann, D.; Tahir, N. A.; Shutov, A.; Piriz, A. R.

    2014-08-01

    A novel experiment has been performed at the CERN HiRadMat test facility to study the impact of the 440 GeV proton beam generated by the Super Proton Synchrotron on extended solid copper cylindrical targets. Substantial hydrodynamic tunneling of the protons in the target material has been observed that leads to significant lengthening of the projectile range, which confirms our previous theoretical predictions [N. A. Tahir et al., Phys. Rev. Spec. Top.-Accel. Beams 15, 051003 (2012)]. Simulation results show very good agreement with the experimental measurements. These results have very important implications on the machine protection design for powerful machines like the Large Hadron Collider (LHC), the future High Luminosity LHC, and the proposed huge 80 km circumference Future Circular Collider, which is currently being discussed at CERN. Another very interesting outcome of this work is that one may also study the field of High Energy Density Physics at this test facility.

  6. First experience with carbon stripping foils for the 160 MeV H- injection into the CERN PSB

    NASA Astrophysics Data System (ADS)

    Weterings, Wim; Bracco, Chiara; Jorat, Louise; Noulibos, Remy; van Trappen, Pieter

    2018-05-01

    A 160 MeV H- beam will be delivered from the new CERN linear accelerator (Linac4) to the Proton Synchrotron Booster (PSB), using an H- charge-exchange injection system. A 200 µg/cm² carbon stripping foil will convert H- into protons by stripping off the electrons. The H- charge-exchange injection principle will be used for the first time in the CERN accelerator complex and involves many challenges. In order to gain experience with the foil changing mechanism and the very fragile foils, a stripping foil test stand was installed in the Linac4 transfer line in 2016, prior to the installation in the PSB. In addition, parts of the future PSB injection equipment are also temporarily installed in the Linac4 transfer line for tests with a 160 MeV H- commissioning proton beam. This paper describes the foil changing mechanism and control system, summarizes the practical experience of gluing and handling these foils and reports on the first results with beam.

  7. Chicago Ebola Response Network (CERN): A Citywide Cross-hospital Collaborative for Infectious Disease Preparedness.

    PubMed

    Lateef, Omar; Hota, Bala; Landon, Emily; Kociolek, Larry K; Morita, Julie; Black, Stephanie; Noskin, Gary; Kelleher, Michael; Curell, Krista; Galat, Amy; Ansell, David; Segreti, John; Weber, Stephen G

    2015-11-15

    The 2014-2015 Ebola virus disease (EVD) epidemic and international public health emergency has been referred to as a "black swan" event, or an event that is unlikely, hard to predict, and highly impactful once it occurs. The Chicago Ebola Response Network (CERN) was formed in response to EVD and is capable of receiving and managing new cases of EVD, while also laying the foundation for a public health network that can anticipate, manage, and prevent the next black swan public health event. By sharing expertise, risk, and resources among 4 major academic centers, Chicago created a sustainable network to respond to the latest in a series of public health emergencies. In this respect, CERN is a roadmap for how a region can prepare to respond to public health emergencies, thereby preventing negative impacts through planning and implementation. © The Author 2015. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved.

  8. x509-free access to WLCG resources

    NASA Astrophysics Data System (ADS)

    Short, H.; Manzi, A.; De Notaris, V.; Keeble, O.; Kiryanov, A.; Mikkonen, H.; Tedesco, P.; Wartel, R.

    2017-10-01

    Access to WLCG resources is authenticated using an x509 and PKI infrastructure. Even though HEP users have always been exposed to certificates directly, the development of modern web applications by the LHC experiments calls for simplified authentication processes that keep the underlying software unmodified. In this work we show a solution whose goal is to provide access to WLCG resources using the credentials of the user's home organisation, without the need for user-acquired x509 certificates. In particular, we focus on identity providers within eduGAIN, which interconnects research and education organisations worldwide and enables the trustworthy exchange of identity-related information. eduGAIN has been integrated at CERN into the SSO infrastructure so that users can authenticate without the need for a CERN account. This solution achieves x509-free access to Grid resources with the help of two services: an STS and an online CA. The STS (Security Token Service) allows credential translation from the SAML2 format used by identity federations to the VOMS-enabled x509 format used by most of the Grid. The IOTA CA (Identifier-Only Trust Assurance Certification Authority) is responsible for the automatic issuing of short-lived x509 certificates. The IOTA CA deployed at CERN has been accepted by EUGridPMA as the CERN LCG IOTA CA, included in the IGTF trust anchor distribution and installed by the sites in WLCG. We also describe the first pilot projects which are integrating the solution.

  9. CMS Connect

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B.; Gardner, R., Jr.; Hurtado Anampa, K.; Jayatilaka, B.; Aftab Khan, F.; Lannon, K.; Larson, K.; Letts, J.; Marra Da Silva, J.; Mascheroni, M.; Mason, D.; Perez-Calero Yzquierdo, A.; Tiradani, A.

    2017-10-01

    The CMS experiment collects and analyzes large amounts of data coming from high energy particle collisions produced by the Large Hadron Collider (LHC) at CERN. This involves a huge amount of real and simulated data processing that needs to be handled in batch-oriented platforms. The CMS Global Pool of computing resources provides more than 100K dedicated CPU cores, and another 50K to 100K CPU cores from opportunistic resources, for these kinds of tasks. Even though production and event-processing analysis workflows are already managed by existing tools, there is still a lack of support for submitting the final-stage, condor-like analysis jobs familiar to Tier-3 or local computing facility users into these distributed resources in a friendly way that is integrated with other CMS services. CMS Connect is a set of computing tools and services designed to augment existing services in the CMS physics community, focusing on this kind of condor analysis job. It is based on the CI-Connect platform developed by the Open Science Grid and uses the CMS GlideInWMS infrastructure to transparently plug CMS global grid resources into a virtual pool accessed via a single submission machine. This paper describes the specific developments and deployment of CMS Connect beyond the CI-Connect platform in order to integrate the service with CMS-specific needs, including site-specific submission, accounting of jobs and automated reporting to standard CMS monitoring resources in a way that is effortless for its users.
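
    To give a flavour of the condor-style job submission that CMS Connect wraps for its users, the sketch below queues a single job through the HTCondor Python bindings. It assumes a recent version of the bindings and access to a local schedd; the executable, arguments and file names are placeholders, and this is not the CMS Connect service code.

      # Illustrative HTCondor job submission via the Python bindings (assumes a
      # recent htcondor module and a reachable schedd; values are placeholders).
      import htcondor

      job = htcondor.Submit({
          "executable": "/bin/echo",            # placeholder analysis executable
          "arguments":  "hello from the pool",
          "output":     "job.out",
          "error":      "job.err",
          "log":        "job.log",
      })

      schedd = htcondor.Schedd()                # schedd of the submission machine
      result = schedd.submit(job, count=1)      # queue one instance of the job
      print("submitted cluster", result.cluster())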

  10. Common Readout Unit (CRU) - A new readout architecture for the ALICE experiment

    NASA Astrophysics Data System (ADS)

    Mitra, J.; Khan, S. A.; Mukherjee, S.; Paul, R.

    2016-03-01

    The ALICE experiment at the CERN Large Hadron Collider (LHC) is presently undergoing a major upgrade in order to fully exploit the scientific potential of the upcoming high luminosity run, scheduled to start in the year 2021. The high interaction rate and the large event size will result in an experimental data flow of about 1 TB/s from the detectors, which needs to be processed before being sent to the online computing system and data storage. This processing is done in a dedicated Common Readout Unit (CRU), proposed for data aggregation, trigger and timing distribution and control moderation. It acts as a common interface between the sub-detector electronic systems, the computing system and the trigger processors. The interface links include GBT, TTC-PON and PCIe. GBT (Gigabit Transceiver) is used for detector data payload transmission and provides a fixed-latency path for trigger distribution between the CRU and the detector readout electronics. TTC-PON (Timing, Trigger and Control via Passive Optical Network) is employed for time-multiplexed trigger distribution between the CRU and the Central Trigger Processor (CTP). PCIe (Peripheral Component Interconnect Express) is the high-speed serial computer expansion bus standard used for bulk data transport between CRU boards and processors. In this article, we give an overview of the CRU architecture in ALICE and discuss the different interfaces, along with the firmware design and implementation of the CRU on the LHCb PCIe40 board.

  11. News Music: Here comes science that rocks Student trip: Two views of the future of CERN Classroom: Researchers can motivate pupils Appointment: AstraZeneca trust appoints new director Multimedia: Physics Education comes to YouTube Competition: Students compete in European Union Science Olympiad 2010 Physics roadshow: Pupils see wonders of physics

    NASA Astrophysics Data System (ADS)

    2010-07-01

    Music: Here comes science that rocks Student trip: Two views of the future of CERN Classroom: Researchers can motivate pupils Appointment: AstraZeneca trust appoints new director Multimedia: Physics Education comes to YouTube Competition: Students compete in European Union Science Olympiad 2010 Physics roadshow: Pupils see wonders of physics

  12. AMS data production facilities at science operations center at CERN

    NASA Astrophysics Data System (ADS)

    Choutko, V.; Egorov, A.; Eline, A.; Shan, B.

    2017-10-01

    The Alpha Magnetic Spectrometer (AMS) is a high energy physics experiment on board the International Space Station (ISS). This paper presents the hardware and software facilities of the Science Operation Center (SOC) at CERN. Data production is built around a production server - a scalable distributed service which links together a set of different programming modules for science data transformation and reconstruction. The server has the capacity to manage 1000 parallel job producers, i.e. up to 32K logical processors. A monitoring and management tool with a production GUI is also described.

  13. Ceremony 25th birthday Cern

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2006-05-08

    Celebration of CERN's 25th anniversary (day by day), with speeches by L. Van Hove and J. B. Adams and musical interludes offered by Mme Mey and her colleagues (opening with the first movement of L. van Beethoven's Piano Quartet No. 3). The Directors-General present a commemorative gift to the members of personnel with 25 years of service in the Organization. A tribute is also paid to the interpreter Mme Zwerner.

  14. Experience in running relational databases on clustered storage

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, Ruben; Potocky, Miroslav

    2015-12-01

    For the past eight years, the CERN IT Database group has based its backend storage on a NAS (Network-Attached Storage) architecture, providing database access via the NFS (Network File System) protocol. In the last two and a half years, our storage has evolved from a scale-up architecture to a scale-out one. This paper describes our setup and a set of functionalities providing key features to other services such as Database on Demand [1] or the CERN Oracle backup and recovery service. It also outlines a possible evolution that storage for databases could follow.

  15. The ISOLDE LEGO® robot: building interest in frontier research

    NASA Astrophysics Data System (ADS)

    Elias Cocolios, Thomas; Lynch, Kara M.; Nichols, Emma

    2017-07-01

    An outreach programme centred around nuclear physics making use of a LEGO® Mindstorm® kit is presented. It consists of a presentation given by trained undergraduate students as science ambassadors followed by a workshop where the target audience programs the LEGO® Mindstorm® robots to familiarise themselves with the concepts in an interactive and exciting way. This programme has been coupled with the CERN-ISOLDE 50th anniversary and the launch of the CERN-MEDICIS facility in Geneva, Switzerland. The modular aspect of the programme readily allows its application to other topics.

  16. Neutron-induced fission cross section measurement of 233U, 241Am and 243Am in the energy range 0.5 MeV < En < 20 MeV at n_TOF at CERN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belloni, F.; Milazzo, P. M.; Calviani, M.

    2012-01-01

    Neutron-induced fission cross section measurements of 233U, 243Am and 241Am relative to 235U have been carried out at the neutron time-of-flight facility n_TOF at CERN. A fast ionization chamber has been employed. All samples were located in the same detector; therefore the studied elements and the reference 235U target are subject to the same neutron beam.

  17. The CERN-EU high-energy Reference Field (CERF) facility: applications and latest developments

    NASA Astrophysics Data System (ADS)

    Silari, Marco; Pozzi, Fabio

    2017-09-01

    The CERF facility at CERN provides an almost unique high-energy workplace reference radiation field for the calibration and testing of radiation protection instrumentation employed at high-energy accelerator facilities and for aircraft and space dosimetry. This paper describes the main features of the facility and supplies a non-exhaustive list of recent (since 2005) applications for which CERF has been used. Upgrade work, started in 2015 to provide the scientific and industrial communities with a state-of-the-art reference facility, is also discussed.

  18. Windows Terminal Servers Orchestration

    NASA Astrophysics Data System (ADS)

    Bukowiec, Sebastian; Gaspar, Ricardo; Smith, Tim

    2017-10-01

    Windows Terminal Servers provide application gateways for various parts of the CERN accelerator complex, used by hundreds of CERN users every day. The combination of new tools such as Puppet, HAProxy and the Microsoft System Center suite enables automation of provisioning workflows to provide a terminal server infrastructure that can scale up and down in an automated manner. The orchestration not only reduces the time and effort necessary to deploy new instances, but also facilitates operations such as patching, analysis and recreation of compromised nodes, as well as catering for workload peaks.

  19. Astronomie, écologie et poésie par Hubert Reeves

    ScienceCinema

    None

    2017-12-09

    Hubert Reeves. The astrophysicist gives a lecture and then talks with the writer François Bon about "Astronomy, ecology and poetry". For more information: http://outreach.web.cern.ch/outreach/FR/evenements/conferences.html. Seating is limited; booking is required at the CERN Reception: +41 22 767 76 76. The evening will be broadcast live on the web: http://webcast.cern.ch/

  20. H4DAQ: a modern and versatile data-acquisition package for calorimeter prototypes test-beams

    NASA Astrophysics Data System (ADS)

    Marini, A. C.

    2018-02-01

    The upgrade of the particle detectors for the HL-LHC or for future colliders requires an extensive program of tests to qualify different detector prototypes with dedicated test beams. A common data-acquisition system, H4DAQ, was developed for the H4 test beam line at the North Area of the CERN SPS in 2014 and it has since been adopted in various applications for the CMS experiment and the AIDA project. Several calorimeter prototypes and precision timing detectors have used the system from 2014 to 2017. H4DAQ has proven to be a versatile application and has been ported to many other beam test environments. H4DAQ is fast, simple, modular and can be configured to support various kinds of setup. The functionality of the DAQ core software is split into three configurable finite state machines: data readout, run control, and event builder. The distribution of information and data between the various computers is performed using ZeroMQ (0MQ) sockets. Plugins are available to read different types of hardware, including VME crates with many types of boards, PADE boards, custom front-end boards and beam instrumentation devices. The raw data are saved as ROOT files, using the CERN C++ ROOT libraries. A graphical user interface, based on the Python GTK libraries, is used to operate H4DAQ, and an integrated data quality monitoring (DQM) application, written in C++, allows fast processing of the events for quick feedback to the user. As the 0MQ libraries are also available for the National Instruments LabVIEW environment, it can easily be integrated within H4DAQ applications.
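
    The 0MQ message passing mentioned above can be illustrated with a few lines of pyzmq. The sketch below runs a publisher and a subscriber in one process and exchanges a single status message; the endpoint, topic and payload are placeholders, and this is not the H4DAQ implementation.

      # Minimal pyzmq publish/subscribe example (placeholder endpoint and data).
      import threading
      import time
      import zmq

      ENDPOINT = "tcp://127.0.0.1:5556"

      def publisher(ctx):
          pub = ctx.socket(zmq.PUB)
          pub.bind(ENDPOINT)
          time.sleep(0.2)                       # let the subscription propagate
          pub.send_multipart([b"status", b"run 42: event builder ready"])
          pub.close()

      ctx = zmq.Context()
      sub = ctx.socket(zmq.SUB)
      sub.connect(ENDPOINT)
      sub.setsockopt(zmq.SUBSCRIBE, b"status")  # listen to one topic only

      threading.Thread(target=publisher, args=(ctx,)).start()
      topic, payload = sub.recv_multipart()
      print(topic.decode(), "->", payload.decode())
      sub.close()
      ctx.term()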

  1. A Security Monitoring Framework For Virtualization Based HEP Infrastructures

    NASA Astrophysics Data System (ADS)

    Gomez Ramirez, A.; Martinez Pedreira, M.; Grigoras, C.; Betev, L.; Lara, C.; Kebschull, U.; ALICE Collaboration

    2017-10-01

    High Energy Physics (HEP) distributed computing infrastructures require automatic tools to monitor, analyze and react to potential security incidents. These tools should collect and inspect data such as resource consumption, logs and sequence of system calls for detecting anomalies that indicate the presence of a malicious agent. They should also be able to perform automated reactions to attacks without administrator intervention. We describe a novel framework that accomplishes these requirements, with a proof of concept implementation for the ALICE experiment at CERN. We show how we achieve a fully virtualized environment that improves the security by isolating services and Jobs without a significant performance impact. We also describe a collected dataset for Machine Learning based Intrusion Prevention and Detection Systems on Grid computing. This dataset is composed of resource consumption measurements (such as CPU, RAM and network traffic), logfiles from operating system services, and system call data collected from production Jobs running in an ALICE Grid test site and a big set of malware samples. This malware set was collected from security research sites. Based on this dataset, we will proceed to develop Machine Learning algorithms able to detect malicious Jobs.
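
    Purely as an illustration of how such a dataset could feed an intrusion-detection classifier (and not the collaboration's actual pipeline), the Python sketch below trains a scikit-learn model on synthetic per-job resource-consumption features; the feature names and generated numbers are assumptions.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import classification_report
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      # Synthetic per-job features: [cpu_fraction, ram_mb, net_kbps, syscalls_per_s]
      benign = rng.normal([0.8, 1800, 50, 400], [0.1, 200, 20, 80], size=(500, 4))
      malicious = rng.normal([0.95, 900, 900, 2500], [0.05, 300, 300, 500], size=(50, 4))

      X = np.vstack([benign, malicious])
      y = np.array([0] * len(benign) + [1] * len(malicious))   # 1 = suspected malicious job

      X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
      model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
      print(classification_report(y_test, model.predict(X_test)))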

  2. "Sci-Tech - Couldn't be without it !"

    NASA Astrophysics Data System (ADS)

    2002-03-01

    Launch of a Major European Outreach Programme. Seven of Europe's leading Research Organizations [1] launch a joint outreach programme for the European Science and Technology Week at the Technopolis Museum in Brussels on 22 March. Their aim is to show Europeans how today's society couldn't be without fundamental research. Could you imagine life without mobile phones, cars, CD players, TV, refrigerators, computers, the internet and the World Wide Web, antibiotics, vitamins, anaesthetics, vaccination, heating, pampers, nylon stockings, glue, bar codes, metal detectors, contact lenses, modems, laser printers, digital cameras, gameboys, play stations...? Technology is everywhere and used by everyone in today's society, but how many Europeans suspect that without studies on the structure of the atom, lasers would not exist, and neither would CD players? Most do not realise that most things they couldn't be without have required years of fundamental research. To fill this knowledge gap, the leading Research Organizations in Europe [1], with the support of the research directorate of the European Commission, have joined forces to inform Europeans how technology couldn't be without science, and how science can no longer progress without technology. The project is called... Sci-Tech - Couldn't be without it! Sci-Tech - Couldn't be without it! invites Europeans to vote online in a survey to identify the top ten technologies they can't live without. It will show them through a dynamic and entertaining Web space where these top technologies really come from, and it will reveal their intimate links with research. Teaching kits will be developed to explain to students how their favourite gadgets actually work, and how a career in science can contribute to inventions that future generations couldn't be without. The results of the survey will be presented as a series of quiz shows live on the Internet during the Science Week, from 4 to 10 November. Sci-Tech - Couldn't be without it! will be launched on Friday 22 March at 18:30 at the Technopolis Science Museum in Brussels, coinciding with the official inauguration of CERN's travelling exhibition "E=mc² - When energy becomes matter". The exhibition will stay at Technopolis until 21 July. CERN Director General, Luciano Maiani, and European Commissioner for Research, Philippe Busquin, will open the event with speeches underlining the importance of joining efforts for science education and outreach in Europe. A tour of the exhibition and a demonstration of the project's hot site for cool science will follow, and the event will be brought to a close with a "Science in the Pub" discussion on the subject of modern physics and philosophy, complete with musical intermezzo and buffet.
    * Access the Couldn't be without it! online voting and web resources at: www.cern.ch/sci-tech
    * Confirm your presence at the Technopolis event before Wed. March 20 by fax to: +32-(0)15-34 20 10
    * To reach Technopolis take exit 10 (Mechelen-Zuid) on motorway E19 (Bruxelles-Anvers).
    * For more information on the exhibition, contact Veronique de Man: veronique@technopolis.be; Tel. +32-15-34 2020
    * For more information on Couldn't be without it! contact the executive coordinator: monica.de.pasquale@cern.ch; Tel. +41 22 767 3586
    Note [1]: CERN, the European Organisation for Nuclear Research; ESA, the European Space Agency; ESO, the European Southern Observatory; EMBL, the European Molecular Biology Laboratory; EFDA, the European Fusion Development Agreement; ESRF, the European Synchrotron Radiation Facility; and ILL, the Institut Laue-Langevin. These organizations have recently teamed up to form EIROFORUM (cf. ESO PR 12/01), whose Working Group on Outreach and Education is working with the European Union to provide a bridge between the organisations, the European Union and the citizens of Europe. The activities of the Working Group also contribute to the creation of the European Research Area.

  3. Antiproton Trapping for Advanced Space Propulsion Applications

    NASA Technical Reports Server (NTRS)

    Smith, Gerald A.

    1998-01-01

    The Summary of Research parallels the Statement of Work (Appendix I) submitted with the proposal, and funded effective Feb. 1, 1997 for one year. A proposal was submitted to CERN in October, 1996 to carry out an experiment on the synthesis and study of fundamental properties of atomic antihydrogen. Since confined atomic antihydrogen is potentially the most powerful and elegant source of propulsion energy known, its confinement and properties are of great interest to the space propulsion community. Appendix II includes an article published in the technical magazine Compressed Air, June 1997, which describes CERN antiproton facilities, and ATHENA. During the period of this grant, Prof. Michael Holzscheiter served as spokesman for ATHENA and, in collaboration with Prof. Gerald Smith, worked on the development of the antiproton confinement trap, which is an important part of the ATHENA experiment. Appendix III includes a progress report submitted to CERN on March 12, 1997 concerning development of the ATHENA detector. Section 4.1 reviews technical responsibilities within the ATHENA collaboration, including the Antiproton System, headed by Prof. Holzscheiter. The collaboration was advised (see Appendix IV) on June 13, 1997 that the CERN Research Board had approved ATHENA for operation at the new Antiproton Decelerator (AD), presently under construction. First antiproton beams are expected to be delivered to experiments in about one year. Progress toward assembly of the ATHENA detector and initial testing expected in 1999 has been excellent. Appendix V includes a copy of the minutes of the most recently documented collaboration meeting held at CERN on October 24, 1997, which provides more information on development of systems, including the antiproton trapping apparatus. On February 10, 1998 Prof. Smith gave a 3-hour lecture on the Physics of Antimatter, as part of the Physics for the Third Millennium Lecture Series held at MSFC. Included in Appendix VI are notes and graphs presented on the ATHENA experiment. A portable antiproton trap has been under development. The goal is to store and transport antiprotons from a production site, such as Fermilab near Chicago, to a distant site, such as Huntsville, AL, thus demonstrating the portability of antiprotons.

  4. [The Big Data Game : On the Ludic Constitution of the Collaborative Production of Knowledge in High-Energy Physics at CERN].

    PubMed

    Dippel, Anne

    2017-12-01

    This article looks at how games and play contribute to the big data-driven production of knowledge in High-Energy Physics, with a particular focus on the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN), where the author has been conducting anthropological fieldwork since 2014. The ludic (playful) aspect of knowledge production is analyzed here in three different dimensions: the Symbolic, the Ontological, and the Epistemic. The first one points towards CERN as a place where a cosmological game of probability is played with the help of Monte-Carlo simulations. The second one can be seen in the agonistic infrastructures of competing experimental collaborations. The third dimension unfolds in ludic platforms, such as online Challenges and citizen science games, which contribute to the development of machine learning algorithms, whose function is necessary in order to process the huge amount of data gathered from experimental events. Following Clifford Geertz, CERN itself is characterized as a site of deep play, a concept that contributes to understanding wider social and cultural orders through the analysis of ludic collective phenomena. The article also engages with Peter Galison's idea of the trading zone, proposing to comprehend it in the age of big data as a Playground. Thus the author hopes to contribute to a wider discussion in the historiographical and social study of science and technology, as well as in cultural anthropology, by recognizing the ludic in science as a central element of understanding collaborative knowledge production.

  5. Comparison of electromagnetic and hadronic models generated using Geant 4 with antiproton dose measured in CERN.

    PubMed

    Tavakoli, Mohammad Bagher; Reiazi, Reza; Mohammadi, Mohammad Mehdi; Jabbari, Keyvan

    2015-01-01

    After the idea of antiproton cancer treatment was proposed in 1984, many experiments were launched to investigate the physical and radiobiological properties of the antiproton, which stem from its annihilation reactions. One of these experiments was carried out at the European Organization for Nuclear Research (CERN) using the Antiproton Decelerator. The ultimate goal of this experiment was to assess the dosimetric and radiobiological properties of antiproton beams in order to estimate the suitability of antiprotons for radiotherapy. One difficulty was the long-term unavailability of the antiproton beam at CERN, so verified Monte Carlo codes able to simulate the antiproton depth dose would be useful. Among the available simulation codes, Geant4 provides acceptable flexibility and extensibility, which have progressively led to the development of novel Geant4 applications in several research domains, especially the modeling of the biological effects of ionizing radiation at the sub-cellular scale. In this study, the depth dose corresponding to the CERN antiproton beam energy was calculated with Geant4 using all the standard physics lists currently available and benchmarked for other use cases. Overall, none of the standard physics lists was able to reproduce the antiproton percentage depth dose. Although the results with some models were promising, the Bragg peak level remained a point of concern for our study. It is concluded that the Bertini model with high-precision neutron tracking (QGSP_BERT_HP) best matches the experimental data, though it is also the slowest of the physics lists for event simulation.

  6. A New Event Builder for CMS Run II

    NASA Astrophysics Data System (ADS)

    Albertsson, K.; Andre, J.-M.; Andronidis, A.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; Nunez-Barranco-Fernandez, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.

    2015-12-01

    The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high-level trigger (HLT) farm. The DAQ system has been redesigned during the LHC shutdown in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbps Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbps Infiniband FDR CLOS network has been chosen for the event builder. This paper discusses the software design, protocols, and optimizations for exploiting the hardware capabilities. We present performance measurements from small-scale prototypes and from the full-scale production system.

  7. A new event builder for CMS Run II

    DOE PAGES

    Albertsson, K.; Andre, J-M; Andronidis, A.; ...

    2015-12-23

    The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high-level trigger (HLT) farm. The DAQ system has been redesigned during the LHC shutdown in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbps Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbps Infiniband FDR CLOS network has been chosen for the event builder. This paper discusses the software design, protocols, and optimizations for exploiting the hardware capabilities. In conclusion, we present performance measurements from small-scale prototypes and from the full-scale production system.

  8. Basic concepts and architectural details of the Delphi trigger system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bocci, V.; Booth, P.S.L.; Bozzo, M.

    1995-08-01

    Delphi (DEtector with Lepton, Photon and Hadron Identification) is one of the four experiments of the LEP (Large Electron Positron) collider at CERN. The detector is laid out to provide nearly 4π coverage for charged particle tracking, electromagnetic and hadronic calorimetry, and extended particle identification. The trigger system consists of four levels. The first two are synchronous with the BCO (Beam Cross Over) and rely on hardwired control units, while the last two are performed asynchronously with respect to the BCO and are driven by the Delphi host computers. The aim of this paper is to give a comprehensive global view of the trigger system architecture, presenting in detail the first two levels, their various hardware components and the latest modifications introduced in order to improve their performance and make the whole software user interface more user-friendly.

  9. Software Management for the NOνA Experiment

    NASA Astrophysics Data System (ADS)

    Davies, G. S.; Davies, J. P.; C Group; Rebel, B.; Sachdev, K.; Zirnstein, J.

    2015-12-01

    The NOvA software (NOνASoft) is written in C++ and built on the Fermilab Computing Division's art framework that uses the ROOT analysis software. NOνASoft makes use of more than 50 external software packages, is developed by more than 50 developers and is used by more than 100 physicists from over 30 universities and laboratories on 3 continents. The software builds are handled by Fermilab's custom version of Software Release Tools (SRT), a UNIX based software management system for large, collaborative projects that is used by several experiments at Fermilab. The system provides software version control with SVN configured in a client-server mode and is based on the code originally developed by the BaBar collaboration. In this paper, we present efforts towards distributing the NOvA software via the CernVM File System distributed file system. We will also describe our recent work to use a CMake build system and Jenkins, the open source continuous integration system, for NOνASoft.

  10. Analysis of relativistic nucleus-nucleus interactions in emulsion chambers

    NASA Technical Reports Server (NTRS)

    Mcguire, Stephen C.

    1987-01-01

    The development of a computer-assisted method is reported for the determination of the angular distribution data for secondary particles produced in relativistic nucleus-nucleus collisions in emulsions. The method is applied to emulsion detectors that were placed in a constant, uniform magnetic field and exposed to beams of 60 and 200 GeV/nucleon O-16 ions at the Super Proton Synchrotron (SPS) of the European Organization for Nuclear Research (CERN). Linear regression analysis is used to determine the azimuthal and polar emission angles from measured track coordinate data. The software, written in BASIC, is designed to be machine independent, and adaptable to an automated system for acquiring the track coordinates. The fitting algorithm is deterministic, and takes into account the experimental uncertainty in the measured points. Further, a procedure for using the track data to estimate the linear momenta of the charged particles observed in the detectors is included.
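
    A minimal sketch of this kind of fit, under the assumption of straight toy tracks: a weighted least-squares line is fitted to the measured points in each projection and the polar and azimuthal angles are taken from the resulting direction vector (for tracks that curve appreciably in the magnetic field, a higher-order or helix fit would be needed). The coordinates and uncertainties below are made up for illustration.

      import numpy as np

      def track_angles(z, x, y, sigma=1.0):
          """Fit x(z) and y(z) with weighted straight lines; return (theta, phi) in radians."""
          w = np.full_like(z, 1.0 / sigma)              # equal measurement uncertainties assumed
          ax, _ = np.polyfit(z, x, 1, w=w)              # slope dx/dz
          ay, _ = np.polyfit(z, y, 1, w=w)              # slope dy/dz
          direction = np.array([ax, ay, 1.0])
          direction /= np.linalg.norm(direction)
          theta = np.arccos(direction[2])               # polar angle w.r.t. the z (beam) axis
          phi = np.arctan2(direction[1], direction[0])  # azimuthal angle
          return theta, phi

      # Toy track: small slopes plus measurement scatter.
      z = np.linspace(0.0, 10.0, 8)
      x = 0.05 * z + np.random.normal(0, 0.01, z.size)
      y = -0.02 * z + np.random.normal(0, 0.01, z.size)
      print(track_angles(z, x, y, sigma=0.01))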

  11. SHAREv2: fluctuations and a comprehensive treatment of decay feed-down

    NASA Astrophysics Data System (ADS)

    Torrieri, G.; Jeon, S.; Letessier, J.; Rafelski, J.

    2006-11-01

    This is the user's manual for SHARE version 2. SHARE [G. Torrieri, S. Steinke, W. Broniowski, W. Florkowski, J. Letessier, J. Rafelski, Comput. Phys. Comm. 167 (2005) 229] (Statistical Hadronization with Resonances) is a collection of programs designed for the statistical analysis of particle production in relativistic heavy-ion collisions. While the structure of the program remains similar to v1.x, v2 provides several new features such as evaluation of statistical fluctuations of particle yields, and a greater versatility, in particular regarding decay feed-down and input/output structure. This article describes all the new features, with emphasis on statistical fluctuations. Program summary: Title of program: SHAREv2; Catalogue identifier: ADVD_v2_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVD_v2_0; Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland; Computer: PC, Pentium III, 512 MB RAM (not hardware dependent); Operating system: Linux (RedHat 6.1, 7.2, FEDORA, etc.; not system dependent); Programming language: FORTRAN77; Size of the package: 167 KB directory, without libraries (see http://wwwasdoc.web.cern.ch/wwwasdoc/minuit/minmain.html, http://wwwasd.web.cern.ch/wwwasd/cernlib.html for details on library requirements); Number of lines in distributed program, including test data, etc.: 26 101; Number of bytes in distributed program, including test data, etc.: 170 346; Distribution format: tar.gzip file; Computer: Any computer with an f77 compiler. Nature of the physical problem: Event-by-event fluctuations have been recognized to be the physical observable capable of constraining particle production models. Therefore, consideration of event-by-event fluctuations is required for a decisive falsification or constraining of (variants of) particle production models based on (grand-, micro-) canonical statistical mechanics phase space, the so-called statistical hadronization models (SHM). As in the case of particle yields, to properly compare model calculations to data it is necessary to consistently take into account resonance decays. However, event-by-event fluctuations are more sensitive than particle yields to experimental acceptance issues, and a range of techniques needs to be implemented to extract 'physical' fluctuations from an experimental event-by-event measurement. Method of solving the problem: The techniques used within the SHARE suite of programs [G. Torrieri, S. Steinke, W. Broniowski, W. Florkowski, J. Letessier, J. Rafelski, Comput. Phys. Comm. 167 (2005) 229; SHAREv1] are updated and extended to fluctuations. A full particle data-table, decay tree, and set of experimental feed-down coefficients are provided. Unlike SHAREv1.x, experimental acceptance feed-down coefficients can be entered for any resonance decay. SHAREv2 can calculate yields, fluctuations, and bulk properties of the fireball from provided thermal parameters; alternatively, parameters can be obtained from fits to experimental data, via the MINUIT fitting algorithm [F. James, M. Roos, Comput. Phys. Comm. 10 (1975) 343]. Fits can also be analyzed for significance, parameter and data point sensitivity. Averages and fluctuations at freeze-out of both the stable particles and the hadronic resonances are set according to a statistical prescription, calculated via a series of Bessel functions, using CERN library programs. We also have the option of including finite particle widths of the resonances.
    A χ² minimization algorithm, also from the CERN library programs, is used to perform and analyze the fit. Please see SHAREv1 for more details on these. Purpose: The vast amount of high quality soft hadron production data, from experiments running at the SPS, RHIC, in the past at the AGS, and in the near future at the LHC, offers the opportunity for statistical particle production model falsification. This task has turned out to be difficult when considering solely particle yields addressed in the context of SHAREv1.x. For this reason physical conditions at freeze-out remain contested. Inclusion in the analysis of event-by-event fluctuations appears to resolve this issue. Similarly, a thorough analysis including both fluctuations and average multiplicities gives a way to explore the presence and strength of interactions following hadronization (when hadrons form), ending with thermal freeze-out (when all interactions cease). SHAREv2 with fluctuations will also help determine which statistical ensemble (if any), e.g., canonical or grand-canonical, is more physically appropriate for analyzing a given system. Together with resonances, fluctuations can also be used for a direct estimate of the extent to which the system re-interacts between chemical and thermal freeze-out. We hope and expect that SHAREv2 will contribute to deciding if any of the statistical hadronization model variants has a genuine physical connection to hadron particle production. Computation time survey: We encounter, in the FORTRAN version, computation times of up to seconds for the evaluation of particle yields. These rise by up to a factor of 300 in the process of minimization and by a further factor of a few when χ²/N profiles and contours with chemical non-equilibrium are requested. Summary of new features (w.r.t. SHAREv1.x): Fluctuations: in addition to particle yields, ratios and bulk quantities, SHAREv2 can calculate, fit and analyze statistical fluctuations of particles and particle ratios. Decays: SHAREv2 has the flexibility to account for any experimental method of allowing for decay feed-downs to the particle yields. Charm flavor: charmed particles have been added to the decay tree, allowing as an option the study of statistical hadronization of J/ψ, χ, D, etc. Quark chemistry: chemical non-equilibrium yields for both u and d flavors, as opposed to generically light quarks q, are considered; η-η′ mixing, etc., are properly dealt with, and chemical non-equilibrium can be studied for each flavor separately. Misc: many new commands and features have been introduced and added to the basic user interface. For example, it is possible to study combinations of particles and their ratios. It is also possible to combine all the input files into one file. SHARE compatibility and manual: This write-up is an update and extension of SHAREv1. The user should consult SHAREv1 regarding the principles of the user interface and for all particle-yield-related physics and program instructions, other than the parameter additions and minor changes described here. SHAREv2 is downward compatible for the changes of the user interface, offering the user of SHAREv1 computer-generated revised input files compatible with SHAREv2.
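
    To give a flavour of the kind of χ² fit such a package performs, here is a deliberately simplified Python sketch that minimizes a χ² between toy measured yields and a toy thermal-like yield model. The model, numbers and use of scipy are illustrative assumptions, not the SHARE formulae or code.

      import numpy as np
      from scipy.optimize import minimize

      # Toy "data": measured particle yields and uncertainties for a few species.
      masses = np.array([0.140, 0.494, 0.938])   # GeV (pion, kaon, proton), for illustration
      yields = np.array([300.0, 45.0, 28.0])
      errors = np.array([15.0, 4.0, 3.0])

      def model_yields(params):
          """Toy thermal-like yield model: N * exp(-m/T). Not the actual SHARE model."""
          norm, temperature = params
          return norm * np.exp(-masses / temperature)

      def chi2(params):
          residuals = (model_yields(params) - yields) / errors
          return np.sum(residuals**2)

      fit = minimize(chi2, x0=[1000.0, 0.160], method="Nelder-Mead")
      norm, temperature = fit.x
      print(f"chi2/ndf = {fit.fun / (len(yields) - 2):.2f}, T = {temperature*1000:.0f} MeV")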

  12. Monitoring Evolution at CERN

    NASA Astrophysics Data System (ADS)

    Andrade, P.; Fiorini, B.; Murphy, S.; Pigueiras, L.; Santos, M.

    2015-12-01

    Over the past two years, the operation of the CERN Data Centres went through significant changes with the introduction of new mechanisms for hardware procurement, new services for cloud provisioning and configuration management, among other improvements. These changes resulted in an increase of resources being operated in a more dynamic environment. Today, the CERN Data Centres provide over 11000 multi-core processor servers, 130 PB disk servers, 100 PB tape robots, and 150 high performance tape drives. To cope with these developments, an evolution of the data centre monitoring tools was also required. This modernisation was based on a number of guiding rules: sustain the increase of resources, adapt to the new dynamic nature of the data centres, make monitoring data easier to share, give more flexibility to Service Managers on how they publish and consume monitoring metrics and logs, establish a common repository of monitoring data, optimise the handling of monitoring notifications, and replace the previous toolset by new open source technologies with large adoption and community support. This contribution describes how these improvements were delivered, presents the architecture and technologies of the new monitoring tools, and reviews the experience of their production deployment.

  13. COMMITTEES: SQM 2007 - International Conference On Strangeness In Quark Matter SQM 2007 - International Conference On Strangeness In Quark Matter

    NASA Astrophysics Data System (ADS)

    2008-04-01

    Local Organising Committee: Ivan Králik (IEP SAS, Košice); Vojtěch Petráček (Czech Technical University, Prague); Ján Pišút (Comenius University, Bratislava); Emanuele Quercigh (CERN); Karel Šafařík (CERN), Co-chair; Ladislav Šándor (IEP SAS, Košice), Co-chair; Boris Tomášik (Matej Bel University, Banská Bystrica); Jozef Urbán (UPJŠ Košice). International Advisory Committee: Jörg Aichelin, Nantes; Federico Antinori, Padova; Tamás Biró, Budapest; Peter Braun-Munzinger, GSI; Jean Cleymans, Cape Town; László Csernai, Bergen; Timothy Hallman, BNL; Huan Zhong Huang, UCLA; Sonja Kabana, Nantes; Roy A Lacey, Stony Brook; Carlos Lourenço, CERN; Yu-Gang Ma, Shanghai; Jes Madsen, Aarhus; Yasuo Miake, Tsukuba; Berndt Müller, Duke; Grazyna Odyniec, LBNL; Helmut Oeschler, Darmstadt; Jan Rafelski, Arizona; Hans Georg Ritter, LBNL; Jack Sandweiss, Yale; George S F Stephans, MIT; Horst Stöcker, Frankfurt; Thomas Ullrich, BNL; Orlando Villalobos-Baillie, Birmingham; William A Zajc, Columbia.

  14. First experimental evidence of hydrodynamic tunneling of ultra–relativistic protons in extended solid copper target at the CERN HiRadMat facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, R.; Grenier, D.; Wollmann, D.

    2014-08-15

    A novel experiment has been performed at the CERN HiRadMat test facility to study the impact of the 440 GeV proton beam generated by the Super Proton Synchrotron on extended solid copper cylindrical targets. Substantial hydrodynamic tunneling of the protons in the target material has been observed that leads to significant lengthening of the projectile range, which confirms our previous theoretical predictions [N. A. Tahir et al., Phys. Rev. Spec. Top.-Accel. Beams 15, 051003 (2012)]. Simulation results show very good agreement with the experimental measurements. These results have very important implications on the machine protection design for powerful machines like the Large Hadron Collider (LHC), the future High Luminosity LHC, and the proposed huge 80 km circumference Future Circular Collider, which is currently being discussed at CERN. Another very interesting outcome of this work is that one may also study the field of High Energy Density Physics at this test facility.

  15. LHC, le Big Bang en éprouvette

    ScienceCinema

    None

    2017-12-09

    Our understanding of the Universe is changing... Bar des Sciences - open to the general public. A debate moderated by Marie-Odile Montchicourt, journalist at France Info. The event is held by videoconference between the Globe of Science and Innovation, the bar Le Baloard in Montpellier and the Maison des Métallos in Paris. Speakers at CERN: Philippe Charpentier and Daniel Froideveaux, physicists at CERN. Speakers in Paris: Vincent Bontemps, philosopher and researcher at CEA; Jacques Arnould, philosopher, historian of science and theologian; Jean-Jacques Beineix, film director, producer and screenwriter. Speakers in Montpellier (LPTA): André Neveu, theoretical physicist and research director at CNRS; Gilbert Moultaka, theoretical physicist and research scientist at CNRS. Partners: CERN, CEA, IN2P3, Université MPL2 (LPTA). Organized as part of the Fête de la science 2008.

  16. Laser resonance ionization spectroscopy on lutetium for the MEDICIS project

    NASA Astrophysics Data System (ADS)

    Gadelshin, V.; Cocolios, T.; Fedoseev, V.; Heinke, R.; Kieck, T.; Marsh, B.; Naubereit, P.; Rothe, S.; Stora, T.; Studer, D.; Van Duppen, P.; Wendt, K.

    2017-11-01

    The MEDICIS-PROMED Innovative Training Network under the Horizon 2020 EU program aims to establish a network of early stage researchers, involving scientific exchange and active cooperation between leading European research institutions, universities, hospitals, and industry. Its primary scientific goal is to provide and test novel radioisotopes for nuclear medical imaging and radionuclide therapy. Within a closely linked project at CERN, a dedicated electromagnetic mass separator system is presently under installation for production of innovative radiopharmaceutical isotopes at the new CERN-MEDICIS laboratory, directly adjacent to the existing CERN-ISOLDE radioactive ion beam facility. It is planned to implement a resonance ionization laser ion source (RILIS) to ensure high efficiency and unrivaled purity in the production of radioactive ions. To provide a highly efficient ionization process, identification and characterization of a specific multi-step laser ionization scheme for each individual element with isotopes of interest is required. The element lutetium is of primary relevance, and was therefore considered as the first candidate. Three two-step excitation schemes for lutetium atoms are presented in this work, and spectroscopic results are compared with the data of other authors.

  17. Outsourcing strategy and tendering methodology for the operation and maintenance of CERN’s cryogenic facilities

    NASA Astrophysics Data System (ADS)

    Serio, L.; Bremer, J.; Claudet, S.; Delikaris, D.; Ferlin, G.; Ferrand, F.; Pezzetti, M.; Pirotte, O.

    2017-12-01

    CERN operates and maintains the world's largest cryogenic infrastructure, ranging from ageing but well maintained installations feeding detectors, test facilities and general services, to the state-of-the-art cryogenic system serving the flagship LHC machine complex. A study was conducted and a methodology proposed to outsource to industry the operation and maintenance of the whole cryogenic infrastructure. The cryogenic installations coupled to non-LHC detectors, test facilities and general services infrastructure have been fully outsourced for operation and maintenance on the basis of performance obligations. The contractor is responsible for the operational performance of the installations based on a yearly operation schedule provided by CERN. The maintenance of the cryogenic system serving the LHC machine and its detectors has been outsourced on the basis of task-oriented obligations, monitored by key performance indicators. The CERN operation team, with the support of the contractor's operation team, remains responsible for the operational strategy and performance. We report the analysis, strategy, definition of the requirements and technical specifications, as well as the achieved technical and economic performance after one year of operation.

  18. Novel approaches for inspiring students and electrifying the public

    NASA Astrophysics Data System (ADS)

    Lidström, Suzy; Read, Alex; Parke, Stephen; Allen, Roland; Goldfarb, Steven; Mehlhase, Sascha; Ekelöf, Tord; Walker, Alan

    2014-03-01

    We will briefly summarize a wide variety of innovative approaches for inspiring students and stimulating broad public interest in fundamental physics research, as exemplified by recent activities related to the Higgs boson discovery and Higgs-Englert Nobel Prize on behalf of the Swedish Academy, CERN, Fermilab, and the Niels Bohr Institute. Personal interactions with the scientists themselves can be particularly electrifying, and these were encouraged by the wearing of ``Higgs Boson? Ask Me!'' badges, which will be made available to those attending this talk. At CERN, activities include Virtual Visits, (Google) Hangout with CERN, initiatives to grab attention (LEGO models, music videos, art programs, pins, etc.), substantive communication (lab visits and events, museum exhibits, traveling exhibits, local visits, Masterclasses, etc.), and educational activities (summer student programs, semester abroad programs, internships, graduate programs, etc.). For serious students and their teachers, or scientists in other areas, tutorial articles are appropriate. These are most effective if they also incorporate innovative approaches - for example, attractive figures that immediately illustrate the concepts, analogies that will resonate with the reader, and a broadening of perspective. Physica Scripta, Royal Swedish Academy of Sciences.

  19. Enhancing moral agency: clinical ethics residency for nurses.

    PubMed

    Robinson, Ellen M; Lee, Susan M; Zollfrank, Angelika; Jurchak, Martha; Frost, Debra; Grace, Pamela

    2014-09-01

    One antidote to moral distress is stronger moral agency-that is, an enhanced ability to act to bring about change. The Clinical Ethics Residency for Nurses (CERN), an educational program developed and run in two large northeastern academic medical centers with funding from the Health Resources and Services Administration, was intended to strengthen nurses' moral agency. Drawing on Improving Competencies in Clinical Ethics Consultation: An Education Guide, by the American Society for Bioethics and Humanities, and on the goals of the nursing profession, CERN sought to change attitudes, increase knowledge, and develop skills to act on one's knowledge. One of the key insights the faculty members brought to the design of this program is that knowledge of clinical ethics is not enough to develop moral agency. In addition to lecture-style classes, CERN employed a variety of methods based in adult learning theory, such as active application of ethics knowledge to patient scenarios in classroom discussion, simulation, and the clinical practicum. Overwhelmingly, the feedback from the participants (sixty-seven over three years of the program) indicated that CERN achieved transformative learning. © 2014 by The Hastings Center.

  20. Techniques for hazard analysis and their use at CERN.

    PubMed

    Nuttall, C; Schönbacher, H

    2001-01-01

    CERN, the European Organisation for Nuclear Research, is situated near Geneva and has its accelerators and experimental facilities astride the Swiss and French frontiers, attracting physicists from all over the world to this unique laboratory. The main accelerator is situated in a 27 km underground ring and the experiments take place in huge underground caverns in order to detect the fragments resulting from the collision of subatomic particles at speeds approaching that of light. These detectors contain many hundreds of tons of flammable materials, mainly plastics in cables and structural components, flammable gases in the detectors themselves, and cryogenic fluids such as helium and argon. The experiments consume large amounts of electrical power, and the dangers involved have necessitated the use of analytical techniques to identify the hazards and quantify the risks to personnel and the infrastructure. The techniques described in the paper have been developed in the process industries, where they have been shown to be of great value. They have been successfully applied to CERN industrial and experimental installations and, in some cases, have been instrumental in changing the philosophy of the experimentalists and their detectors.

  1. Evaluation results of xTCA equipment for HEP experiments at CERN

    NASA Astrophysics Data System (ADS)

    Di Cosmo, M.; Bobillier, V.; Haas, S.; Joos, M.; Mico, S.; Vasey, F.; Vichoudis, P.

    2013-12-01

    The MicroTCA and AdvancedTCA industry standards are candidate modular electronic platforms for the upgrade of the current generation of high energy physics experiments. The PH-ESE group at CERN launched in 2011 the xTCA evaluation project with the aim of performing technical evaluations and eventually providing support for commercially available components. Devices from different vendors have been acquired and evaluated, and interoperability tests have been performed. This paper presents the test procedures and facilities that have been developed and focuses on the evaluation results, including electrical, thermal and interoperability aspects.

  2. ALICE inner tracking system readout electronics prototype testing with the CERN "Giga Bit Transceiver''

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schambach, Joachim; Rossewij, M. J.; Sielewicz, K. M.

    The ALICE Collaboration is preparing a major detector upgrade for the LHC Run 3, which includes the construction of a new silicon pixel based Inner Tracking System (ITS). The ITS readout system consists of 192 readout boards to control the sensors and their power system, receive triggers, and deliver sensor data to the DAQ. To prototype various aspects of this readout system, an FPGA based carrier board and an associated FMC daughter card containing the CERN Gigabit Transceiver (GBT) chipset have been developed. Furthermore, this contribution describes laboratory and radiation testing results with this prototype board set.

  3. Retirement Kjell Johnsen

    ScienceCinema

    None

    2017-12-09

    On the occasion of his 65th birthday, several speakers (including the ambassador of Norway) thank Kjell Johnsen, born in June 1921 in Norway, for his 34 years of service at CERN and retrace his life and work. K. Johnsen took part in the first studies of accelerators for the future physics laboratory and was also the father and first director of the CERN Accelerator School (CAS).

  4. ALICE inner tracking system readout electronics prototype testing with the CERN "Giga Bit Transceiver''

    DOE PAGES

    Schambach, Joachim; Rossewij, M. J.; Sielewicz, K. M.; ...

    2016-12-28

    The ALICE Collaboration is preparing a major detector upgrade for the LHC Run 3, which includes the construction of a new silicon pixel based Inner Tracking System (ITS). The ITS readout system consists of 192 readout boards to control the sensors and their power system, receive triggers, and deliver sensor data to the DAQ. To prototype various aspects of this readout system, an FPGA based carrier board and an associated FMC daughter card containing the CERN Gigabit Transceiver (GBT) chipset have been developed. Furthermore, this contribution describes laboratory and radiation testing results with this prototype board set.

  5. Anomalous single production of the fourth generation quarks at the CERN LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciftci, R.

    Possible anomalous single production of the fourth standard model generation up- and down-type quarks at the CERN Large Hadron Collider is studied. Namely, pp → u₄(d₄)X with the subsequent u₄ → bW⁺ process followed by the leptonic decay of the W boson, and the d₄ → bγ (and its H.c.) decay channel, are considered. Signatures of these processes and corresponding standard model backgrounds are discussed in detail. Discovery limits for the quark mass and achievable values of the anomalous coupling strength are determined.

  6. ALICE inner tracking system readout electronics prototype testing with the CERN ``Giga Bit Transceiver''

    NASA Astrophysics Data System (ADS)

    Schambach, J.; Rossewij, M. J.; Sielewicz, K. M.; Aglieri Rinella, G.; Bonora, M.; Ferencei, J.; Giubilato, P.; Vanat, T.

    2016-12-01

    The ALICE Collaboration is preparing a major detector upgrade for the LHC Run 3, which includes the construction of a new silicon pixel based Inner Tracking System (ITS). The ITS readout system consists of 192 readout boards to control the sensors and their power system, receive triggers, and deliver sensor data to the DAQ. To prototype various aspects of this readout system, an FPGA based carrier board and an associated FMC daughter card containing the CERN Gigabit Transceiver (GBT) chipset have been developed. This contribution describes laboratory and radiation testing results with this prototype board set.

  7. W production at large transverse momentum at the CERN Large Hadron Collider.

    PubMed

    Gonsalves, Richard J; Kidonakis, Nikolaos; Sabio Vera, Agustín

    2005-11-25

    We study the production of W bosons at large transverse momentum in pp collisions at the CERN Large Hadron Collider. We calculate the complete next-to-leading order (NLO) corrections to the differential cross section. We find that the NLO corrections provide a large increase to the cross section but, surprisingly, do not reduce the scale dependence relative to leading order (LO). We also calculate next-to-next-to-leading-order (NNLO) soft-gluon corrections and find that, although they are small, they significantly reduce the scale dependence thus providing a more stable result.

  8. Lower limit on dark matter production at the CERN Large Hadron Collider.

    PubMed

    Feng, Jonathan L; Su, Shufang; Takayama, Fumihiro

    2006-04-21

    We evaluate the prospects for finding evidence of dark matter production at the CERN Large Hadron Collider. We consider weakly interacting massive particles (WIMPs) and superWIMPs and characterize their properties through model-independent parametrizations. The observed relic density then implies lower bounds on dark matter production rates as functions of a few parameters. For WIMPs, the resulting signal is indistinguishable from background. For superWIMPs, however, this analysis implies significant production of metastable charged particles. For natural parameters, these rates may far exceed Drell-Yan cross sections and yield spectacular signals.

  9. New radiation protection calibration facility at CERN.

    PubMed

    Brugger, Markus; Carbonez, Pierre; Pozzi, Fabio; Silari, Marco; Vincke, Helmut

    2014-10-01

    The CERN radiation protection group has designed a new state-of-the-art calibration laboratory to replace the present facility, which is >20 y old. The new laboratory, presently under construction, will be equipped with neutron and gamma sources, as well as an X-ray generator and a beta irradiator. The present work describes the project to design the facility, including the facility placement criteria, the 'point-zero' measurements and the shielding study performed via FLUKA Monte Carlo simulations. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.

    2015-05-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10²) sites, O(10⁵) cores, O(10⁸) jobs per year, O(10³) users, and the ATLAS data volume is O(10¹⁷) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. We will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.
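
    As a purely schematic illustration of the brokering idea behind such a workload management system (not PanDA's actual algorithm or API), the Python sketch below assigns queued jobs to the least-loaded resource that satisfies their requirements; the site names, capacities and job fields are invented for the example.

      # Toy broker: dispatch each job to the least-loaded site that can run it.
      sites = {
          "GRID_SITE_A": {"cores_free": 800,  "supports_gpu": False},
          "CLOUD_B":     {"cores_free": 200,  "supports_gpu": True},
          "HPC_LCF_C":   {"cores_free": 5000, "supports_gpu": True},
      }

      jobs = [
          {"id": 1, "cores": 8,   "needs_gpu": False},
          {"id": 2, "cores": 512, "needs_gpu": True},
          {"id": 3, "cores": 16,  "needs_gpu": False},
      ]

      def broker(job):
          """Pick the matching site with the most free cores; None if nothing fits."""
          candidates = [(name, info) for name, info in sites.items()
                        if info["cores_free"] >= job["cores"]
                        and (info["supports_gpu"] or not job["needs_gpu"])]
          if not candidates:
              return None
          name, info = max(candidates, key=lambda c: c[1]["cores_free"])
          info["cores_free"] -= job["cores"]   # book the resources for this job
          return name

      for job in jobs:
          print("job", job["id"], "->", broker(job))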

  11. ENLIGHT: European network for Light ion hadron therapy.

    PubMed

    Dosanjh, Manjit; Amaldi, Ugo; Mayer, Ramona; Poetter, Richard

    2018-04-03

    The European Network for Light Ion Hadron Therapy (ENLIGHT) was established in 2002 following various European particle therapy network initiatives during the 1980s and 1990s (e.g. EORTC task group, EULIMA/PIMMS accelerator design). ENLIGHT started its work on major topics related to hadron therapy (HT), such as patient selection, clinical trials, technology, radiobiology, imaging and health economics. It was initiated through CERN and ESTRO and dealt with various disciplines such as (medical) physics and engineering, radiation biology and radiation oncology. ENLIGHT was funded until 2005 through the EC FP5 programme. A regular annual meeting structure was started in 2002 and continues until today bringing together the various disciplines and projects and institutions in the field of HT at different European places for regular exchange of information on best practices and research and development. Starting in 2006 ENLIGHT coordination was continued through CERN in collaboration with ESTRO and other partners involved in HT. Major projects within the EC FP7 programme (2008-2014) were launched for R&D and transnational access (ULICE, ENVISION) and education and training networks (Marie Curie ITNs: PARTNER, ENTERVISION). These projects were instrumental for the strengthening of the field of hadron therapy. With the start of 4 European carbon ion and proton centres and the upcoming numerous European proton therapy centres, the future scope of ENLIGHT will focus on strengthening current and developing European particle therapy research, multidisciplinary education and training and general R&D in technology and biology with annual meetings and a continuously strong CERN support. Collaboration with the European Particle Therapy Network (EPTN) and other similar networks will be pursued. Copyright © 2018 CERN. Published by Elsevier B.V. All rights reserved.

  12. Security in the CernVM File System and the Frontier Distributed Database Caching System

    NASA Astrophysics Data System (ADS)

    Dykstra, D.; Blomer, J.

    2014-06-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
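
    The content-integrity idea common to both systems can be illustrated in a few lines of Python: a downloaded blob is accepted only if its secure hash matches the value published in a (digitally signed) catalogue. The hash algorithm and catalogue layout below are simplified assumptions; the real CVMFS catalogues and Frontier responses involve additional signature checks not shown here.

      import hashlib

      # Simplified "catalogue": maps a path to the expected content hash.
      # In the real systems the catalogue itself is protected by a digital signature.
      catalogue = {
          "/sw/app/v1/setup.sh": hashlib.sha256(b"export APP_DIR=/sw/app/v1\n").hexdigest(),
      }

      def verify(path, blob):
          """Accept a downloaded blob only if its hash matches the catalogue entry."""
          expected = catalogue.get(path)
          if expected is None:
              raise KeyError(f"{path} not listed in catalogue")
          if hashlib.sha256(blob).hexdigest() != expected:
              raise ValueError(f"integrity check failed for {path}")
          return blob

      print(verify("/sw/app/v1/setup.sh", b"export APP_DIR=/sw/app/v1\n"))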

  13. Effects of bulk viscosity and hadronic rescattering in heavy ion collisions at energies available at the BNL Relativistic Heavy Ion Collider and at the CERN Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Ryu, Sangwook; Paquet, Jean-François; Shen, Chun; Denicol, Gabriel; Schenke, Björn; Jeon, Sangyong; Gale, Charles

    2018-03-01

    We describe ultrarelativistic heavy ion collisions at the BNL Relativistic Heavy Ion Collider and the CERN Large Hadron Collider with a hybrid model using the IP-Glasma model for the earliest stage and viscous hydrodynamics and microscopic transport for the later stages of the collision. We demonstrate that within this framework the bulk viscosity of the plasma plays an important role in describing the experimentally observed radial flow and azimuthal anisotropy simultaneously. We further investigate the dependence of observables on the temperature below which we employ the microscopic transport description.

  14. Optimising the Active Muon Shield for the SHiP Experiment at CERN

    NASA Astrophysics Data System (ADS)

    Baranov, A.; Burnaev, E.; Derkach, D.; Filatov, A.; Klyuchnikov, N.; Lantwin, O.; Ratnikov, F.; Ustyuzhanin, A.; Zaitsev, A.

    2017-12-01

    The SHiP experiment is designed to search for very weakly interacting particles beyond the Standard Model which are produced in a 400 GeV/c proton beam dump at the CERN SPS. The critical challenge for this experiment is to keep the Standard Model background level negligible. In the beam dump, around 10¹¹ muons will be produced per second. The muon rate in the spectrometer has to be reduced by at least four orders of magnitude to avoid muon-induced backgrounds. It is demonstrated that a new, improved active muon shield may be used to magnetically deflect the muons out of the acceptance of the spectrometer.

  15. A Bonner Sphere Spectrometer with extended response matrix

    NASA Astrophysics Data System (ADS)

    Birattari, C.; Dimovasili, E.; Mitaroff, A.; Silari, M.

    2010-08-01

    This paper describes the design, calibration and applications at high-energy accelerators of an extended-range Bonner Sphere neutron Spectrometer (BSS). The BSS was designed with the FLUKA Monte Carlo code, investigating several combinations of materials and diameters of the moderators for the high-energy channels. The system was calibrated at PTB in Braunschweig, Germany, using monoenergetic neutron beams in the energy range 144 keV-19 MeV. It was subsequently tested with Am-Be source neutrons and in the simulated workplace neutron field at CERF (the CERN-EU high-energy reference field facility). Since 2002, it has been employed for neutron spectral measurements around CERN accelerators.
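
    A toy illustration of how a response matrix is used to unfold a spectrum from the sphere readings, with a made-up 4-sphere, 3-bin response matrix and plain non-negative least squares standing in for the dedicated unfolding codes normally used with such spectrometers:

      import numpy as np
      from scipy.optimize import nnls

      # Toy response matrix R[i, j]: counts in sphere i per unit fluence in energy bin j.
      # Values are invented for illustration; a real BSS response comes from FLUKA calculations.
      R = np.array([
          [0.90, 0.30, 0.05],   # bare / small sphere: mostly low-energy neutrons
          [0.40, 0.80, 0.20],   # medium sphere
          [0.10, 0.50, 0.70],   # large sphere
          [0.02, 0.20, 0.85],   # extended-range sphere: high-energy channel
      ])

      true_spectrum = np.array([10.0, 5.0, 2.0])   # fluence per energy bin (toy)
      readings = R @ true_spectrum                 # ideal sphere readings

      unfolded, residual = nnls(R, readings)       # non-negative least-squares unfolding
      print("unfolded spectrum:", np.round(unfolded, 2), "residual:", round(residual, 3))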

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fernandez T, Arturo

    The use of the sophisticated and large underground detectors at CERN for cosmic ray studies has been considered by several groups, e.g. the UA1, LEP and LHC detectors. They offer the opportunity to provide a large sensitive area with magnetic analysis, which allows a precise determination of the direction of cosmic ray muons as well as their momentum up to the order of some TeV. The aim of this article is to review the observation of high energy cosmic ray muons using precise spectrometers at CERN, mainly the LEP detectors, as well as the possibility of improving those measurements with the LHC apparatus, giving special emphasis to the ACORDE-ALICE cosmic ray physics program.

  17. HST at CERN an Amazing Adventure

    NASA Astrophysics Data System (ADS)

    Restivo, Evelyn

    2009-04-01

    The High School Teacher Program (HST) at the European Organization for Nuclear Research, CERN, in Geneva, Switzerland was initiated in 1998 by a group of scientists, as a multicultural international program designed to introduce high school physics teachers to high-energy physics. The goal of the program is to provide experiences and materials that will help teachers lead their students to a better understanding of the physical world. Interacting with physics teachers from around the world leads to new approaches for dealing with educational issues that all teachers encounter. The program includes a variety of tours, a series of lectures and classroom activities about the physics expected from the Large Hadron Collider.

  18. The FLUKA Code: An Overview

    NASA Technical Reports Server (NTRS)

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.; hide

    2006-01-01

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including, but not limited to, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code physics models with particular emphasis on the hadronic and nuclear sector.

  19. Mechanical qualification of the support structure for MQXF, the Nb₃Sn low-β quadrupole for the high luminosity LHC

    DOE PAGES

    Juchno, M.; Ambrosio, G.; Anerella, M.; ...

    2016-01-26

    Within the scope of the High Luminosity LHC project, the collaboration between CERN and U.S. LARP is developing new low-β quadrupoles using the Nb₃Sn superconducting technology for the upgrade of the LHC interaction regions. The magnet support structure of the first short model was designed and two units were fabricated and tested at CERN and at LBNL. The structure provides the preload to the collars-coils subassembly by an arrangement of outer aluminum shells pre-tensioned with water-pressurized bladders. For the mechanical qualification of the structure and the assembly procedure, superconducting coils were replaced with solid aluminum "dummy coils", the structure was preloaded at room temperature, and then cooled down to 77 K. The mechanical behavior of the magnet structure was monitored with the use of strain gauges installed on the aluminum shells, the dummy coils and the axial preload system. This paper reports on the outcome of the assembly and the cool-down tests with dummy coils, which were performed at CERN and at LBNL, and presents the strain gauge measurements compared to the 3D finite element model predictions.

  20. Hadron-collider limits on new electroweak interactions from the heterotic string

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    del Aguila, F.; Moreno, J.M.; Quiros, M.

    1990-01-01

    We evaluate the Z′ → l⁺l⁻ cross section at present and future hadron colliders, for the minimal (E₆) extended electroweak models inspired by superstrings (including renormalization effects on new gauge couplings and new mixing angles). Popular models are discussed for comparison. Analytical expressions for the bounds on the mass of a new gauge boson, M_Z′, as a function of the bound on the ratio R ≡ σ(Z′)B(Z′ → l⁺l⁻)/σ(Z)B(Z → l⁺l⁻), are given for the CERN Spp̄S, Fermilab Tevatron, Serpukhov UNK, CERN Large Hadron Collider, and Superconducting Super Collider for the different models. In particular, the M_Z′ bounds from the present R limit at CERN, as well as from the eventually available R limits at Fermilab and at the future hadron colliders (after three months of running at the expected luminosity), are given explicitly.

  1. Hadron Collider Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Incandela, J.R.

    2000-03-07

    Experiments are being prepared at the Fermilab Tevatron and the CERN Large Hadron Collider that promise to deliver extraordinary insights into the nature of spontaneous symmetry breaking, and the role of supersymmetry in the universe. This article reviews the goals, challenges, and designs of these experiments. The first hadron collider, the ISR at CERN, had to overcome two initial obstacles. The first was low luminosity, which steadily improved over time. The second was the broad angular spread of interesting events. In this regard Maurice Jacob noted (1): The answer is ... sophisticated detectors covering at least the whole central region (45° ≤ θ ≤ 135°) and full azimuth. This statement, while obvious today, reflects the major revelation of the ISR period that hadrons have partonic substructure. The result was an unexpectedly strong hadronic yield at large transverse momentum (p_T). Partly because of this, the ISR missed the discovery of the J/ψ and later missed the Υ. The ISR era was therefore somewhat less auspicious than it might have been. It did however make important contributions in areas such as jet production and charm excitation and it paved the way for the SPS collider, also at CERN.

  2. A Simulation of the Front End Signal Digitization for the ATLAS Muon Spectrometer thin RPC trigger upgrade project

    NASA Astrophysics Data System (ADS)

    Meng, Xiangting; Chapman, John; Levin, Daniel; Dai, Tiesheng; Zhu, Junjie; Zhou, Bing; Um Atlas Group Team

    2016-03-01

    The ATLAS Muon Spectrometer Phase-I (and Phase-II) upgrade includes the BIS78 muon trigger detector project: two sets of eight very thin Resistive Plate Chambers (tRPCs) combined with small Monitored Drift Tube (MDT) chambers in the pseudorapidity region 1 < |η| < 1.3. The tRPCs will be comprised of a triplet of readout layers in each of the eta and azimuthal (phi) coordinates, with about 400 readout strips per layer. The anticipated hit rate is 100-200 kHz per strip. Digitization of the strip signals will be done by 32-channel CERN HPTDC chips. The HPTDC is a highly configurable ASIC designed by the CERN Microelectronics group. It can work in both trigger and trigger-less modes, and can be read out in parallel or serially. For Phase-I operation, a stringent latency requirement of 43 bunch crossings (1075 ns) is imposed. The latency budget for the front-end digitization must be kept to a minimal value, ideally less than 350 ns. We conducted detailed HPTDC latency simulations using the Behavioral Verilog code from the CERN group. We will report the results of these simulations run for the anticipated detector operating environment and for various HPTDC configurations.
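
    The latency figures quoted above fit together once the nominal LHC bunch spacing of 25 ns is assumed (the only quantity below that is not stated in the abstract):

        43 \times 25\,\mathrm{ns} = 1075\,\mathrm{ns}, \qquad 1075\,\mathrm{ns} - 350\,\mathrm{ns} = 725\,\mathrm{ns},

    i.e. roughly 725 ns of the Phase-I budget would remain for everything downstream of the front-end digitization.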

  3. Analysis of SEL on Commercial SRAM Memories and Mixed-Field Characterization of a Latchup Detection Circuit for LEO Space Applications

    NASA Astrophysics Data System (ADS)

    Secondo, R.; Alía, R. Garcia; Peronnard, P.; Brugger, M.; Masi, A.; Danzeca, S.; Merlenghi, A.; Vaillé, J.-R.; Dusseau, L.

    2017-08-01

    A single event latchup (SEL) experiment based on commercial static random access memory (SRAM) memories has recently been proposed in the framework of the European Organization for Nuclear Research (CERN) Latchup Experiment and Student Satellite nanosatellite low Earth orbit (LEO) space mission. SEL characterization of three commercial SRAM memories has been carried out at the Paul Scherrer Institut (PSI) facility, using monoenergetic focused proton beams and different acquisition setups. The best target candidate was selected and a circuit for SEL detection has been proposed and tested at CERN, in the CERN High Energy AcceleRator Mixed-field facility (CHARM). Measurements were carried out at test locations representative of the LEO environment, thus providing a full characterization of the SRAM cross sections, together with the analysis of the single-event effects and total ionizing dose of the latchup detection circuit in relation to the particle spectra expected during the mission. The setups used for SEL monitoring are described, and details of the proposed circuit components and topology are presented. Experimental results obtained at both the PSI and CHARM facilities are discussed.

  4. Computer Tensor Codes to Design the Warp Drive

    NASA Astrophysics Data System (ADS)

    Maccone, C.

    To address problems in Breakthrough Propulsion Physics (BPP) and design the Warp Drive one needs sheer computing capabilities. This is because General Relativity (GR) and Quantum Field Theory (QFT) are so mathematically sophisticated that the amount of analytical calculations is prohibitive and one can hardly do all of them by hand. In this paper we make a comparative review of the main tensor calculus capabilities of the three most advanced and commercially available “symbolic manipulator” codes. We also point out that currently one faces such a variety of different conventions in tensor calculus that it is difficult or impossible to compare results obtained by different scholars in GR and QFT. Mathematical physicists, experimental physicists and engineers have each their own way of customizing tensors, especially by using different metric signatures, different metric determinant signs, different definitions of the basic Riemann and Ricci tensors, and by adopting different systems of physical units. This chaos greatly hampers progress toward the design of the Warp Drive. It is thus suggested that NASA would be a suitable organization to establish standards in symbolic tensor calculus and anyone working in BPP should adopt these standards. Alternatively other institutions, like CERN in Europe, might consider the challenge of starting the preliminary implementation of a Universal Tensor Code to design the Warp Drive.

  5. A programming framework for data streaming on the Xeon Phi

    NASA Astrophysics Data System (ADS)

    Chapeland, S.; ALICE Collaboration

    2017-10-01

    ALICE (A Large Ion Collider Experiment) is the dedicated heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). After the second long shut-down of the LHC, the ALICE detector will be upgraded to cope with an interaction rate of 50 kHz in Pb-Pb collisions, producing in the online computing system (O2) a sustained throughput of 3.4 TB/s. This data will be processed on the fly so that the stream to permanent storage does not exceed 90 GB/s peak, the raw data being discarded. In the context of assessing different computing platforms for the O2 system, we have developed a framework for the Intel Xeon Phi processors (MIC). It provides the components to build a processing pipeline streaming the data from the PC memory to a pool of permanent threads running on the MIC, and back to the host after processing. It is based on explicit offloading mechanisms (data transfer, asynchronous tasks) and basic building blocks (FIFOs, memory pools, C++11 threads). The user only needs to implement the processing method to be run on the MIC. We present in this paper the architecture, implementation, and performance of this system.
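
    A minimal, host-only sketch of the pipeline pattern described above may help make the architecture concrete: thread-safe FIFOs hand data blocks to a pool of persistent worker threads, which call a user-supplied processing method and push the results back. All names here (BoundedFifo, process_block) are illustrative assumptions; the actual O2 framework code and the Xeon Phi offload mechanics are not reproduced.

    // Sketch of a FIFO-fed worker pool, assuming a user-supplied process_block().
    // Illustrative only; not the ALICE O2 framework or its MIC offload layer.
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    // Thread-safe bounded FIFO used to hand data blocks between pipeline stages.
    template <typename T>
    class BoundedFifo {
    public:
      explicit BoundedFifo(size_t capacity) : capacity_(capacity) {}
      void push(T item) {
        std::unique_lock<std::mutex> lk(m_);
        not_full_.wait(lk, [&] { return q_.size() < capacity_ || closed_; });
        if (closed_) return;                    // dropped after shutdown
        q_.push(std::move(item));
        not_empty_.notify_one();
      }
      bool pop(T& out) {
        std::unique_lock<std::mutex> lk(m_);
        not_empty_.wait(lk, [&] { return !q_.empty() || closed_; });
        if (q_.empty()) return false;           // closed and fully drained
        out = std::move(q_.front());
        q_.pop();
        not_full_.notify_one();
        return true;
      }
      void close() {
        std::lock_guard<std::mutex> lk(m_);
        closed_ = true;
        not_empty_.notify_all();
        not_full_.notify_all();
      }
    private:
      std::queue<T> q_;
      size_t capacity_;
      bool closed_ = false;
      std::mutex m_;
      std::condition_variable not_empty_, not_full_;
    };

    // Stand-in for the user-provided processing method: here it just sums a block.
    static double process_block(const std::vector<float>& block) {
      double s = 0.0;
      for (float v : block) s += v;
      return s;
    }

    int main() {
      BoundedFifo<std::vector<float>> input(8);
      BoundedFifo<double> output(16);

      // Pool of persistent worker threads, standing in for the threads kept alive on the coprocessor.
      std::vector<std::thread> workers;
      for (int i = 0; i < 4; ++i)
        workers.emplace_back([&] {
          std::vector<float> block;
          while (input.pop(block)) output.push(process_block(block));
        });

      // Producer: stream data blocks into the pipeline, then signal end of data.
      for (int i = 0; i < 16; ++i) input.push(std::vector<float>(1024, 1.0f));
      input.close();
      for (auto& w : workers) w.join();
      output.close();

      double result = 0.0;
      while (output.pop(result)) std::printf("block sum = %.0f\n", result);
      return 0;
    }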

  6. Helix Nebula - the Science Cloud: a public-private partnership to build a multidisciplinary cloud platform for data intensive science

    NASA Astrophysics Data System (ADS)

    Jones, Bob; Casu, Francesco

    2013-04-01

    The feasibility of using commercial cloud services for scientific research is of great interest to research organisations such as CERN, ESA and EMBL, to the suppliers of cloud-based services and to the national and European funding agencies. Through the Helix Nebula - the Science Cloud [1] initiative and with the support of the European Commission, these stakeholders are driving a two-year pilot phase during which procurement processes and governance issues for a framework of public/private partnership will be appraised. Three initial flagship use cases from high energy physics, molecular biology and earth-observation are being used to validate the approach, enable a cost-benefit analysis to be undertaken and prepare the next stage of the Science Cloud Strategic Plan [2] to be developed and approved. The power of Helix Nebula lies in a shared set of services for initially 3 very different sciences, each supporting a global community and thus building a common e-Science platform. Of particular relevance is the ESA-sponsored flagship application SuperSites Exploitation Platform (SSEP [3]) that offers the global geo-hazard community a common platform for the correlation and processing of observation data for supersites monitoring. The US-NSF Earth Cube [4] and Ocean Observatory Initiative [5] (OOI) are taking a similar approach for data intensive science. The work of Helix Nebula and its recent architecture model [6] has shown that it is technically feasible to allow publicly funded infrastructures, such as EGI [7] and GEANT [8], to interoperate with commercial cloud services. Such hybrid systems are in the interest of the existing users of publicly funded infrastructures and funding agencies because they will provide "freedom of choice" over the type of computing resources to be consumed and the manner in which they can be obtained. But to offer such freedom of choice across a spectrum of suppliers, various issues such as intellectual property, legal responsibility, service quality agreements and related issues need to be addressed. Investigating these issues is one of the goals of the Helix Nebula initiative. The next generation of researchers will put aside the historical categorisation of research as a neatly defined set of disciplines and integrate the data from different sources and instruments into complex models that are as applicable to earth observation or biomedicine as they are to high-energy physics. This aggregation of datasets and development of new models will accelerate scientific development but will only be possible if the issues of data intensive science described above are addressed. The culture of science has the possibility to develop with the availability of Helix Nebula as a "Science Cloud" because: • Large scale datasets from many disciplines will be accessible • Scientists and others will be able to develop and contribute open source tools to expand the set of services available • Collaboration of scientists will take place around the on-demand availability of data, tools and services • Cross-domain research will advance at a faster pace due to the availability of a common platform. References: 1 http://www.helix-nebula.eu/ 2 http://cdsweb.cern.ch/record/1374172/files/CERN-OPEN-2011-036.pdf 3 http://www.helix-nebula.eu/index.php/helix-nebula-use-cases/uc3.html 4 http://www.nsf.gov/geo/earthcube/ 5 http://www.oceanobservatories.org/ 6 http://cdsweb.cern.ch/record/1478364/files/HelixNebula-NOTE-2012-001.pdf 7 http://www.egi.eu/ 8 http://www.geant.net/

  7. Status of the DIRAC Project

    NASA Astrophysics Data System (ADS)

    Casajus, A.; Ciba, K.; Fernandez, V.; Graciani, R.; Hamar, V.; Mendez, V.; Poss, S.; Sapunov, M.; Stagni, F.; Tsaregorodtsev, A.; Ubeda, M.

    2012-12-01

    The DIRAC Project was initiated to provide a data processing system for the LHCb Experiment at CERN. It provides all the necessary functionality and performance to satisfy the current and projected future requirements of the LHCb Computing Model. A considerable restructuring of the DIRAC software was undertaken in order to turn it into a general purpose framework for building distributed computing systems that can be used by various user communities in High Energy Physics and other scientific application domains. The CLIC and ILC-SID detector projects started to use DIRAC for their data production system. The Belle Collaboration at KEK, Japan, has adopted the Computing Model based on the DIRAC system for its second phase starting in 2015. The CTA Collaboration uses DIRAC for the data analysis tasks. A large number of other experiments are starting to use DIRAC or are evaluating this solution for their data processing tasks. DIRAC services are included as part of the production infrastructure of the GISELA Latin America grid. Similar services are provided for the users of the France-Grilles and IBERGrid National Grid Initiatives in France and Spain respectively. The new communities using DIRAC have started to provide important contributions to its functionality. Recent additions include support for Amazon EC2 computing resources as well as other cloud management systems; a versatile File Replica Catalog with file metadata capabilities; and support for running MPI jobs in the pilot-based Workload Management System. Integration with existing application Web Portals, like WS-PGRADE, is demonstrated. In this paper we describe the current status of the DIRAC Project, recent developments of its framework and functionality, as well as the status of the rapidly evolving community of DIRAC users.

  8. EDITORIAL: Lectures from the European RTN Winter School on Strings, Supergravity and Gauge Theories, CERN, 21-25 January 2008

    NASA Astrophysics Data System (ADS)

    Derendinger, J.-P.; Orlando, D.; Uranga, A.

    2008-11-01

    This special issue is devoted to the proceedings of the conference 'RTN Winter School on Strings, Supergravity and Gauge Theories', which took place at CERN, the European Organization for Nuclear Research, in Geneva, Switzerland, on 21-25 January 2008. This event was organized in the framework of the European Mobility Research and Training Network entitled 'Constituents, Fundamental Forces and Symmetries of the Universe'. It is part of a yearly series of scientific schools, which represents what is by now a well established tradition. The previous ones have been held at SISSA, in Trieste, Italy, in February 2005 and at CERN in January 2006. The next one will again take place at CERN, in February 2009. The school was primarily meant for young doctoral students and postdoctoral researchers working in the area of string theory. It consisted of several general lectures of four hours each, whose notes are published in the present proceedings, and five working group discussion sessions, focused on specific topics of the network research program. It was attended by approximately 250 participants. The topics of the lectures were chosen to provide an introduction to some of the areas of recent progress, and to the open problems, in string theory. One of the most active areas in string theory in recent years is the AdS/CFT or gauge/gravity correspondence, which proposes the complete equivalence of string theory on (asymptotically) anti-de Sitter spacetimes with gauge theories. The duality relates the weak coupling regime of one system to the strongly coupled regime of the other, and is therefore very non-trivial to test beyond the supersymmetry-protected BPS sector. One of the key ideas to quantitatively match several quantities on both sides is the use of integrability, both on the gauge theory and the string side. The lecture notes by Nick Dorey provide a pedagogical introduction to the fascinating topic of integrability in AdS/CFT. On the string theory side, progress has been limited by the difficulties of quantizing the worldsheet theory in the presence of RR backgrounds. There is increasing hope that these difficulties can be overcome, using the pure spinor formulation of string theory. The lectures by Yaron Oz overview the present status of this proposal. The gauge/gravity correspondence is already leading to important insights into questions of quantum gravity, like the entropy of black holes and its interpretation in terms of microstates. These questions can be addressed in string theory, for certain classes of supersymmetric black holes. The lectures by Vijay Balasubramanian, Jan de Boer, Sheer El-Showk and Ilies Messamah review recent progress in this direction. Throughout the years, formal developments in string theory have systematically led to improved understanding of how it may relate to nature. In this respect, the lectures by Henning Samtleben describe how the formal developments on gauged supergravities can be used to describe compactification vacua in string theory, and their implications for moduli stabilization and supersymmetry breaking. Indeed, softly broken supersymmetry is one of the leading proposals to describe particle physics at the TeV energy range, as described in the lectures by Gian Giudice (not covered in this issue). This connection with TeV scale physics is most appropriate and timely, given that this energy range will shortly become experimentally accessible in the LHC at CERN.
The conference was financially supported by the European Commission under contract MRTN-CT-2004-005104 and by CERN. It was jointly organized by the Physics Institute of the University of Neuchâtel and the Theory Unit of the Physics Division of CERN. It is a great pleasure for us to warmly thank the Theory Unit of CERN for its very kind hospitality and for the high quality of the assistance and the infrastructure that it has provided. We also acknowledge helpful administrative assistance from the Physics Institute of the University of Neuchâtel. Special thanks also go to Denis Frank, for his very valuable help in preparing the conference web pages.

  9. CMS results in the Combined Computing Readiness Challenge CCRC'08

    NASA Astrophysics Data System (ADS)

    Bonacorsi, D.; Bauerdick, L.; CMS Collaboration

    2009-12-01

    During February and May 2008, CMS participated in the Combined Computing Readiness Challenge (CCRC'08) together with all other LHC experiments. The purpose of this worldwide exercise was to check the readiness of the computing infrastructure for LHC data taking. Another set of major CMS tests called the Computing, Software and Analysis challenge (CSA'08) - as well as CMS cosmic runs - were also running at the same time: CCRC augmented the load on computing with additional tests to validate and stress-test all CMS computing workflows at full data taking scale, also extending this to the global WLCG community. CMS exercised most aspects of the CMS computing model, with very comprehensive tests. During May 2008, CMS moved more than 3.6 Petabytes among more than 300 links in the complex Grid topology. CMS demonstrated that it is able to safely move data out of CERN to the Tier-1 sites, sustaining more than 600 MB/s as a daily average for more than seven days in a row, with enough headroom and with hourly peaks of up to 1.7 GB/s. CMS ran hundreds of simultaneous jobs at each Tier-1 site, re-reconstructing and skimming hundreds of millions of events. After re-reconstruction the fresh AOD (Analysis Object Data) has to be synchronized between Tier-1 centers: CMS demonstrated that the required inter-Tier-1 transfers are achievable within a few days. CMS also showed that skimmed analysis data sets can be transferred to Tier-2 sites for analysis at sufficient rate, regionally as well as inter-regionally, achieving all goals in about 90% of >200 links. Simultaneously, CMS also ran a large Tier-2 analysis exercise, where realistic analysis jobs were submitted to a large set of Tier-2 sites by a large number of people to produce a chaotic workload across the systems, with more than 400 analysis users in May. Taken all together, CMS routinely achieved submissions of 100k jobs/day, with peaks up to 200k jobs/day. The results achieved in CCRC'08 - focusing on the distributed workflows - are presented and discussed.
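
    As a rough consistency check of the transfer figures quoted above (using only numbers stated in the abstract):

        0.6\,\mathrm{GB/s} \times 7 \times 86400\,\mathrm{s} \approx 0.36\,\mathrm{PB}, \qquad \frac{3.6\,\mathrm{PB}}{31 \times 86400\,\mathrm{s}} \approx 1.3\,\mathrm{GB/s},

    so the week of sustained CERN-to-Tier-1 export accounts for roughly a tenth of the May data volume, and moving 3.6 PB within the month corresponds to an average of about 1.3 GB/s aggregated over all links.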

  10. Final Scientific EFNUDAT Workshop

    ScienceCinema

    None

    2018-05-23

    The Final Scientific EFNUDAT Workshop - organized by the CERN/EN-STI group on behalf of n_TOF Collaboration - will be held at CERN, Geneva (Switzerland) from 30 August to 2 September 2010 inclusive. EFNUDAT website: http://www.efnudat.eu. Topics of interest include: Data evaluation, Cross section measurements, Experimental techniques, Uncertainties and covariances, Fission properties, and Current and future facilities. International Advisory Committee: C. Barreau (CENBG, France), T. Belgya (IKI KFKI, Hungary), E. Gonzalez (CIEMAT, Spain), F. Gunsing (CEA, France), F.-J. Hambsch (IRMM, Belgium), A. Junghans (FZD, Germany), R. Nolte (PTB, Germany), S. Pomp (TSL UU, Sweden). Workshop Organizing Committee: Enrico Chiaveri (Chairman), Marco Calviani, Samuel Andriamonje, Eric Berthoumieux, Carlos Guerrero, Roberto Losito, Vasilis Vlachoudis. Workshop Assistant: Geraldine Jean

  11. Final Scientific EFNUDAT Workshop

    ScienceCinema

    None

    2018-06-20

    The Final Scientific EFNUDAT Workshop - organized by the CERN/EN-STI group on behalf of n_TOF Collaboration - will be held at CERN, Geneva (Switzerland) from 30 August to 2 September 2010 inclusive. EFNUDAT website: http://www.efnudat.eu. Topics of interest include: Data evaluation, Cross section measurements, Experimental techniques, Uncertainties and covariances, Fission properties, and Current and future facilities. International Advisory Committee: C. Barreau (CENBG, France), T. Belgya (IKI KFKI, Hungary), E. Gonzalez (CIEMAT, Spain), F. Gunsing (CEA, France), F.-J. Hambsch (IRMM, Belgium), A. Junghans (FZD, Germany), R. Nolte (PTB, Germany), S. Pomp (TSL UU, Sweden). Workshop Organizing Committee: Enrico Chiaveri (Chairman), Marco Calviani, Samuel Andriamonje, Eric Berthoumieux, Carlos Guerrero, Roberto Losito, Vasilis Vlachoudis. Workshop Assistant: Geraldine Jean

  12. Final Scientific EFNUDAT Workshop

    ScienceCinema

    Garbil, Roger

    2018-04-16

    The Final Scientific EFNUDAT Workshop - organized by the CERN/EN-STI group on behalf of n_TOF Collaboration - will be held at CERN, Geneva (Switzerland) from 30 August to 2 September 2010 inclusive. EFNUDAT website: http://www.efnudat.eu. Topics of interest include: Data evaluation; Cross section measurements; Experimental techniques; Uncertainties and covariances; Fission properties; Current and future facilities. International Advisory Committee: C. Barreau (CENBG, France); T. Belgya (IKI KFKI, Hungary); E. Gonzalez (CIEMAT, Spain); F. Gunsing (CEA, France); F.-J. Hambsch (IRMM, Belgium); A. Junghans (FZD, Germany); R. Nolte (PTB, Germany); S. Pomp (TSL UU, Sweden). Workshop Organizing Committee: Enrico Chiaveri (Chairman); Marco Calviani; Samuel Andriamonje; Eric Berthoumieux; Carlos Guerrero; Roberto Losito; Vasilis Vlachoudis. Workshop Assistant: Geraldine Jean

  13. Final Scientific EFNUDAT Workshop

    ScienceCinema

    Lantz, Mattias; Neudecker, Denise

    2018-05-25

    Part 5 of The Final Scientific EFNUDAT Workshop - organized by the CERN/EN-STI group on behalf of n_TOF Collaboration - will be held at CERN, Geneva (Switzerland) from 30 August to 2 September 2010 inclusive. EFNUDAT website: http://www.efnudat.eu. Topics of interest include: Data evaluation, Cross section measurements, Experimental techniques, Uncertainties and covariances, Fission properties, and Current and future facilities. International Advisory Committee: C. Barreau (CENBG, France), T. Belgya (IKI KFKI, Hungary), E. Gonzalez (CIEMAT, Spain), F. Gunsing (CEA, France), F.-J. Hambsch (IRMM, Belgium), A. Junghans (FZD, Germany), R. Nolte (PTB, Germany), S. Pomp (TSL UU, Sweden). Workshop Organizing Committee: Enrico Chiaveri (Chairman), Marco Calviani, Samuel Andriamonje, Eric Berthoumieux, Carlos Guerrero, Roberto Losito, Vasilis Vlachoudis. Workshop Assistant: Geraldine Jean

  14. Final Scientific EFNUDAT Workshop

    ScienceCinema

    Wilson, J.N.

    2018-05-24

    Part 7 of The Final Scientific EFNUDAT Workshop - organized by the CERN/EN-STI group on behalf of n_TOF Collaboration - will be held at CERN, Geneva (Switzerland) from 30 August to 2 September 2010 inclusive. EFNUDAT website: http://www.efnudat.eu. Topics of interest include: Data evaluation; Cross section measurements; Experimental techniques; Uncertainties and covariances; Fission properties; Current and future facilities. International Advisory Committee: C. Barreau (CENBG, France); T. Belgya (IKI KFKI, Hungary); E. Gonzalez (CIEMAT, Spain); F. Gunsing (CEA, France); F.-J. Hambsch (IRMM, Belgium); A. Junghans (FZD, Germany); R. Nolte (PTB, Germany); S. Pomp (TSL UU, Sweden). Workshop Organizing Committee: Enrico Chiaveri (Chairman); Marco Calviani; Samuel Andriamonje; Eric Berthoumieux; Carlos Guerrero; Roberto Losito; Vasilis Vlachoudis. Workshop Assistant: Geraldine Jean.

  15. CERN experience and strategy for the maintenance of cryogenic plants and distribution systems

    NASA Astrophysics Data System (ADS)

    Serio, L.; Bremer, J.; Claudet, S.; Delikaris, D.; Ferlin, G.; Pezzetti, M.; Pirotte, O.; Tavian, L.; Wagner, U.

    2015-12-01

    CERN operates and maintains the world's largest cryogenic infrastructure, ranging from ageing installations feeding detectors, test facilities and general services, to the state-of-the-art cryogenic system serving the flagship LHC machine complex. After several years of exploitation of a wide range of cryogenic installations, and in particular following the recent two-year major shutdown to maintain and consolidate the LHC machine, we have analysed and reviewed the maintenance activities to implement an efficient and reliable exploitation of the installations. We report the results, statistics and lessons learned from the maintenance activities performed, in particular the required consolidations and major overhauls, as well as the organization, management and methodologies implemented.

  16. Effects of bulk viscosity and hadronic rescattering in heavy ion collisions at energies available at the BNL Relativistic Heavy Ion Collider and at the CERN Large Hadron Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryu, Sangwook; Paquet, Jean-Francois; Shen, Chun

    Here, we describe ultrarelativistic heavy ion collisions at the BNL Relativistic Heavy Ion Collider and the CERN Large Hadron Collider with a hybrid model using the IP-Glasma model for the earliest stage and viscous hydrodynamics and microscopic transport for the later stages of the collision. We demonstrate that within this framework the bulk viscosity of the plasma plays an important role in describing the experimentally observed radial flow and azimuthal anisotropy simultaneously. Finally, we further investigate the dependence of observables on the temperature below which we employ the microscopic transport description.

  17. PanDA for COMPASS at JINR

    NASA Astrophysics Data System (ADS)

    Petrosyan, A. Sh.

    2016-09-01

    PanDA (Production and Distributed Analysis System) is a workload management system, widely used for data processing at experiments at the Large Hadron Collider and elsewhere. COMPASS is a high-energy physics experiment at the Super Proton Synchrotron. Data processing for COMPASS runs locally at CERN, on lxbatch, with the data itself stored in CASTOR. In 2014 an idea arose to start running COMPASS production through PanDA. Such a transformation of the experiment's data processing will allow the COMPASS community to use not only CERN resources but also Grid resources worldwide. During the spring and summer of 2015, installation, validation and migration work was performed at JINR. Details and results of this process are presented in this paper.

  18. Ian Hinchliffe Answers Your Higgs Boson Questions

    ScienceCinema

    Hinchliffe, Ian

    2017-12-09

    Ian Hinchliffe, of the Berkeley Lab contingent with the ATLAS experiment at CERN, answers many of your questions about the Higgs boson. Ian invited viewers to send in questions about the Higgs via email, Twitter, Facebook, or YouTube in an "Ask a Scientist" video posted July 3: http://youtu.be/xhuA3wCg06s CERN's July 4 announcement that the ATLAS and CMS experiments at the Large Hadron Collider have discovered a particle "consistent with the Higgs boson" has raised questions about what scientists have found and what still remains to be found -- and what it all means. If you have suggestions for future "Ask a Scientist" videos, post them below or send ideas to askascientist@lbl.gov

  19. Studies for the electro-magnetic calorimeter SplitCal for the SHiP experiment at CERN with shower direction reconstruction capability

    NASA Astrophysics Data System (ADS)

    Bonivento, Walter M.

    2018-02-01

    This paper describes the basic ideas and the first simulation results of a new electro-magnetic calorimeter concept, named SplitCal, aimed at optimising the measurement of photon direction in a fixed-target experiment configuration, with high photon detection efficiency. This calorimeter was designed for the invariant mass reconstruction of axion-like particles decaying into two photons in the mass range 200 MeV to 1 GeV for the proposed proton beam dump experiment SHiP at CERN. Preliminary results indicate that angular resolutions better than those obtained by past experiments can be achieved with this design. An implementation of this concept with real technologies is under study.

  20. Adam: a Unix Desktop Application Manager

    NASA Astrophysics Data System (ADS)

    LiÉBana, M.; Marquina, M.; Ramos, R.

    ADAM stands for Affordable Desktop Application Manager. It is a GUI developed at CERN with the aim of easing access to applications. The motivation to develop ADAM came from the unavailability of environments like COSE/CDE and their heavy resource consumption. ADAM has proven to be user friendly: new users are able to customize it to their needs in a few minutes. Groups of users may share through ADAM a common application environment. ADAM also integrates the Unix and PC worlds: PC users can access Unix applications in the same way as their usual Windows applications. This paper describes all the ADAM features, how they are used at CERN Public Services, and the future plans for ADAM.

  1. Effects of bulk viscosity and hadronic rescattering in heavy ion collisions at energies available at the BNL Relativistic Heavy Ion Collider and at the CERN Large Hadron Collider

    DOE PAGES

    Ryu, Sangwook; Paquet, Jean-Francois; Shen, Chun; ...

    2018-03-15

    Here, we describe ultrarelativistic heavy ion collisions at the BNL Relativistic Heavy Ion Collider and the CERN Large Hadron Collider with a hybrid model using the IP-Glasma model for the earliest stage and viscous hydrodynamics and microscopic transport for the later stages of the collision. We demonstrate that within this framework the bulk viscosity of the plasma plays an important role in describing the experimentally observed radial flow and azimuthal anisotropy simultaneously. Finally, we further investigate the dependence of observables on the temperature below which we employ the microscopic transport description.

  2. Accelerating hydrodynamic description of pseudorapidity density and the initial energy density in p+p, Cu+Cu, Au+Au, and Pb+Pb collisions at energies available at the BNL Relativistic Heavy Ion Collider and the CERN Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Ze-Fang, Jiang; Chun-Bin, Yang; Csanád, Máté; Csörgő, Tamás

    2018-06-01

    A known class of analytic, exact, accelerating solutions of perfect relativistic hydrodynamics with longitudinal acceleration is utilized to describe results on the pseudorapidity distributions for different collision systems. These results include dN/dη measured in p+p, Cu+Cu, Au+Au, and Pb+Pb collisions at the BNL Relativistic Heavy Ion Collider and the CERN Large Hadron Collider, in a broad centrality range. Going beyond the traditional Bjorken model, from the accelerating hydrodynamic description we determine the initial energy density and other thermodynamic quantities in those collisions.
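
    For orientation, the traditional Bjorken estimate that these accelerating solutions generalize reads, schematically,

        \varepsilon_{\mathrm{Bj}} = \frac{1}{\tau_0\, A_\perp}\, \frac{dE_T}{dy},

    with τ0 the formation time, A⊥ the transverse overlap area of the colliding systems and dE_T/dy the transverse-energy density per unit rapidity; the corrections for longitudinal acceleration are derived in the paper itself and are not reproduced here.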

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valerio-Lizarraga, Cristhian A., E-mail: cristhian.alfonso.valerio.lizarraga@cern.ch; Departamento de Investigación en Física, Universidad de Sonora, Hermosillo; Lallement, Jean-Baptiste

    The space charge effect of low energy, unbunched ion beams can be compensated by the trapping of ions or electrons into the beam potential. This has been studied for the 45 keV negative hydrogen ion beam in the CERN Linac4 Low Energy Beam Transport using the package IBSimu [T. Kalvas et al., Rev. Sci. Instrum. 81, 02B703 (2010)], which allows particle trajectories to be calculated including space charge. The results of the beam simulations will be compared to emittance measurements of an H− beam at the CERN Linac4 3 MeV test stand, where the injection of hydrogen gas directly into the beam transport region has been used to modify the space charge compensation degree.

  4. The beam and detector of the NA62 experiment at CERN

    NASA Astrophysics Data System (ADS)

    Cortina Gil, E.; Martín Albarrán, E.; Minucci, E.; Nüssle, G.; Padolski, S.; Petrov, P.; Szilasi, N.; Velghe, B.; Georgiev, G.; Kozhuharov, V.; Litov, L.; Husek, T.; Kampf, K.; Zamkovsky, M.; Aliberti, R.; Geib, K. H.; Khoriauli, G.; Kleinknecht, K.; Kunze, J.; Lomidze, D.; Marchevski, R.; Peruzzo, L.; Vormstein, M.; Wanke, R.; Winhart, A.; Bolognesi, M.; Carassiti, V.; Chiozzi, S.; Cotta Ramusino, A.; Gianoli, A.; Malaguti, R.; Dalpiaz, P.; Fiorini, M.; Gamberini, E.; Neri, I.; Norton, A.; Petrucci, F.; Statera, M.; Wahl, H.; Bucci, F.; Ciaranfi, R.; Lenti, M.; Maletta, F.; Volpe, R.; Bizzeti, A.; Cassese, A.; Iacopini, E.; Antonelli, A.; Capitolo, E.; Capoccia, C.; Cecchetti, A.; Corradi, G.; Fascianelli, V.; Gonnella, F.; Lamanna, G.; Lenci, R.; Mannocchi, G.; Martellotti, S.; Moulson, M.; Paglia, C.; Raggi, M.; Russo, V.; Santoni, M.; Spadaro, T.; Tagnani, D.; Valeri, S.; Vassilieva, T.; Cassese, F.; Roscilli, L.; Ambrosino, F.; Capussela, T.; Di Filippo, D.; Massarotti, P.; Mirra, M.; Napolitano, M.; Saracino, G.; Barbanera, M.; Cenci, P.; Checcucci, B.; Duk, V.; Farnesini, L.; Gersabeck, E.; Lupi, M.; Papi, A.; Pepe, M.; Piccini, M.; Scolieri, G.; Aisa, D.; Anzivino, G.; Bizzarri, M.; Campeggi, C.; Imbergamo, E.; Piluso, A.; Santoni, C.; Berretta, L.; Bianucci, S.; Burato, A.; Cerri, C.; Fantechi, R.; Galeotti, S.; Magazzu', G.; Minuti, M.; Orsini, A.; Petragnani, G.; Pontisso, L.; Raffaelli, F.; Spinella, F.; Collazuol, G.; Mannelli, I.; Avanzini, C.; Costantini, F.; Di Lella, L.; Doble, N.; Giorgi, M.; Giudici, S.; Pedreschi, E.; Piandani, R.; Pierazzini, G.; Pinzino, J.; Sozzi, M.; Zaccarelli, L.; Biagioni, A.; Leonardi, E.; Lonardo, A.; Valente, P.; Vicini, P.; D'Agostini, G.; Ammendola, R.; Bonaiuto, V.; De Simone, N.; Federici, L.; Fucci, A.; Paoluzzi, G.; Salamon, A.; Salina, G.; Sargeni, F.; Biino, C.; Dellacasa, G.; Garbolino, S.; Marchetto, F.; Martoiu, S.; Mazza, G.; Rivetti, A.; Arcidiacono, R.; Bloch-Devaux, B.; Boretto, M.; Iacobuzio, L.; Menichetti, E.; Soldi, D.; Engelfried, J.; Estrada-Tristan, N.; Bragadireanu, A. M.; Hutanu, O. E.; Azorskiy, N.; Elsha, V.; Enik, T.; Falaleev, V.; Glonti, L.; Gusakov, Y.; Kakurin, S.; Kekelidze, V.; Kilchakovskaya, S.; Kislov, E.; Kolesnikov, A.; Madigozhin, D.; Misheva, M.; Movchan, S.; Polenkevich, I.; Potrebenikov, Y.; Samsonov, V.; Shkarovskiy, S.; Sotnikov, S.; Tarasova, L.; Zaytseva, M.; Zinchenko, A.; Bolotov, V.; Fedotov, S.; Gushin, E.; Khotjantsev, A.; Khudyakov, A.; Kleimenova, A.; Kudenko, Yu.; Shaikhiev, A.; Gorin, A.; Kholodenko, S.; Kurshetsov, V.; Obraztsov, V.; Ostankov, A.; Rykalin, V.; Semenov, V.; Sugonyaev, V.; Yushchenko, O.; Bician, L.; Blazek, T.; Cerny, V.; Koval, M.; Lietava, R.; Aglieri Rinella, G.; Arroyo Garcia, J.; Balev, S.; Battistin, M.; Bendotti, J.; Bergsma, F.; Bonacini, S.; Butin, F.; Ceccucci, A.; Chiggiato, P.; Danielsson, H.; Degrange, J.; Dixon, N.; Döbrich, B.; Farthouat, P.; Gatignon, L.; Golonka, P.; Girod, S.; Goncalves Martins De Oliveira, A.; Guida, R.; Hahn, F.; Harrouch, E.; Hatch, M.; Jarron, P.; Jamet, O.; Jenninger, B.; Kaplon, J.; Kluge, A.; Lehmann-Miotto, G.; Lichard, P.; Maire, G.; Mapelli, A.; Morant, J.; Morel, M.; Noël, J.; Noy, M.; Palladino, V.; Pardons, A.; Perez-Gomez, F.; Perktold, L.; Perrin-Terrin, M.; Petagna, P.; Poltorak, K.; Riedler, P.; Romagnoli, G.; Ruggiero, G.; Rutter, T.; Rouet, J.; Ryjov, V.; Saputi, A.; Schneider, T.; Stefanini, G.; Theis, C.; Tiuraniemi, S.; Vareia Rodriguez, F.; Venditti, S.; Vergain, M.; Vincke, H.; Wertelaers, P.; Brunetti, M. 
B.; Edwards, S.; Goudzovski, E.; Hallgren, B.; Krivda, M.; Lazzeroni, C.; Lurkin, N.; Munday, D.; Newson, F.; Parkinson, C.; Pyatt, S.; Romano, A.; Serghi, X.; Sergi, A.; Staley, R.; Sturgess, A.; Heath, H.; Page, R.; Angelucci, B.; Britton, D.; Protopopescu, D.; Skillicorn, I.; Cooke, P.; Dainton, J. B.; Fry, J. R.; Fulton, L.; Hutchcroft, D.; Jones, E.; Jones, T.; Massri, K.; Maurice, E.; McCormick, K.; Sutcliffe, P.; Wrona, B.; Conovaloff, A.; Cooper, P.; Coward, D.; Rubin, P.; Winston, R.

    2017-05-01

    NA62 is a fixed-target experiment at the CERN SPS dedicated to measurements of rare kaon decays. Such measurements, like the branching fraction of the K+ → π+ ν ν̄ decay, have the potential to bring significant insights into new physics processes when comparison is made with precise theoretical predictions. For this purpose, innovative techniques have been developed, in particular, in the domain of low-mass tracking devices. Detector construction spanned several years from 2009 to 2014. The collaboration started detector commissioning in 2014 and will collect data until the end of 2018. The beam line and detector components are described together with their early performance obtained from 2014 and 2015 data.

  5. Final Scientific EFNUDAT Workshop

    ScienceCinema

    None

    2018-05-24

    The Final Scientific EFNUDAT Workshop - organized by the CERN/EN-STI group on behalf of n_TOF Collaboration - will be held at CERN, Geneva (Switzerland) from 30 August to 2 September 2010 inclusive. EFNUDAT website: http://www.efnudat.eu. Topics of interest include: Data evaluation; Cross section measurements; Experimental techniques; Uncertainties and covariances; Fission properties; Current and future facilities. International Advisory Committee: C. Barreau (CENBG, France); T. Belgya (IKI KFKI, Hungary); E. Gonzalez (CIEMAT, Spain); F. Gunsing (CEA, France); F.-J. Hambsch (IRMM, Belgium); A. Junghans (FZD, Germany); R. Nolte (PTB, Germany); S. Pomp (TSL UU, Sweden). Workshop Organizing Committee: Enrico Chiaveri (Chairman); Marco Calviani; Samuel Andriamonje; Eric Berthoumieux; Carlos Guerrero; Roberto Losito; Vasilis Vlachoudis. Workshop Assistant: Geraldine Jean

  6. Development of radiation tolerant components for the Quench Protection System at CERN

    NASA Astrophysics Data System (ADS)

    Bitterling, O.; Denz, R.; Steckert, J.; Uznanski, S.

    2016-01-01

    This paper describes the results of irradiation campaigns with the high resolution Analog to Digital Converter (ADC) ADS1281. This ADC will be used as part of a revised quench detection circuit for the 600 A corrector magnets at the CERN Large Hadron Collider (LHC). To verify the radiation tolerance of the ADC, an irradiation campaign using a proton beam with doses up to 3.4 kGy was conducted. The resulting data and an analysis of the observed failure modes are discussed in this paper. Several mitigation measures are described that reduce the error rate to levels acceptable for operation as part of the LHC QPS.

  7. Search for the 1P1 charmonium state in p̄p annihilations at the CERN Intersecting Storage Rings

    NASA Astrophysics Data System (ADS)

    Baglin, C.; Baird, S.; Bassompierre, G.; Borreani, G.; Brient, J.-C.; Broll, C.; Brom, J.-M.; Bugge, L.; Buran, T.; Burq, J.-P.; Bussière, A.; Buzzo, A.; Cester, R.; Chemarin, M.; Chevallier, M.; Escoubes, B.; Fay, J.; Ferroni, S.; Gracco, V.; Guillaud, J.-P.; Khan-Aronsen, E.; Kirsebom, K.; Kylling, A.; Ille, B.; Lambert, M.; Leistam, L.; Lundby, A.; Macri, M.; Marchetto, F.; Menichetti, E.; Mörch, Ch.; Mouellic, B.; Olsen, D.; Pastrone, N.; Petrillo, L.; Pia, M. G.; Poole, J.; Poulet, M.; Rinaudo, G.; Santroni, A.; Severi, M.; Skjevling, G.; Stapnes, S.; Stugu, B.; R704 Collaboration

    1986-04-01

    This experiment has been performed at the CERN Intersecting Storage Rings to study the direct formation of charmonium states in antiproton-proton annihilations. The experimental program has partly been devoted to an inclusive scan for p̄p → J/ψ + X in the range 3520-3530 MeV/c². A cluster of five events has been observed in a narrow energy band, centred on the centre of gravity of the 3PJ states where the 1P1 is expected to be. When interpreted as a new resonance, these data yield a mass m = 3525.4±0.8 MeV/c².

  8. The accuracy of the ATLAS muon X-ray tomograph

    NASA Astrophysics Data System (ADS)

    Avramidou, R.; Berbiers, J.; Boudineau, C.; Dechelette, C.; Drakoulakos, D.; Fabjan, C.; Grau, S.; Gschwendtner, E.; Maugain, J.-M.; Rieder, H.; Rangod, S.; Rohrbach, F.; Sbrissa, E.; Sedykh, E.; Sedykh, I.; Smirnov, Y.; Vertogradov, L.; Vichou, I.

    2003-01-01

    A gigantic detector, the ATLAS project, is under construction at CERN for particle physics research at the Large Hadron Collider, which is to be ready by 2006. An X-ray tomograph has been developed, designed and constructed at CERN in order to control the mechanical quality of the ATLAS muon chambers. We reached a measurement accuracy of 2 μm systematic and 2 μm statistical uncertainty in the horizontal and vertical directions over a working area of 220 cm (horizontal) × 60 cm (vertical). Here we describe in detail the approach chosen to achieve such good accuracy. Key measurement results are presented to cross-check this precision.

  9. Brightness and uniformity measurements of plastic scintillator tiles at the CERN H2 test beam

    NASA Astrophysics Data System (ADS)

    Chatrchyan, S.; Sirunyan, A. M.; Tumasyan, A.; Litomin, A.; Mossolov, V.; Shumeiko, N.; Van De Klundert, M.; Van Haevermaet, H.; Van Mechelen, P.; Van Spilbeeck, A.; Alves, G. A.; Aldá Júnior, W. L.; Hensel, C.; Carvalho, W.; Chinellato, J.; De Oliveira Martins, C.; Matos Figueiredo, D.; Mora Herrera, C.; Nogima, H.; Prado Da Silva, W. L.; Tonelli Manganote, E. J.; Vilela Pereira, A.; Finger, M.; Finger, M., Jr.; Kveton, A.; Tomsa, J.; Adamov, G.; Tsamalaidze, Z.; Behrens, U.; Borras, K.; Campbell, A.; Costanza, F.; Gunnellini, P.; Lobanov, A.; Melzer-Pellmann, I.-A.; Muhl, C.; Roland, B.; Sahin, M.; Saxena, P.; Hegde, V.; Kothekar, K.; Pandey, S.; Sharma, S.; Beri, S. B.; Bhawandeep, B.; Chawla, R.; Kalsi, A.; Kaur, A.; Kaur, M.; Walia, G.; Bhattacharya, S.; Ghosh, S.; Nandan, S.; Purohit, A.; Sharan, M.; Banerjee, S.; Bhattacharya, S.; Chatterjee, S.; Das, P.; Guchait, M.; Jain, S.; Kumar, S.; Maity, M.; Majumder, G.; Mazumdar, K.; Patil, M.; Sarkar, T.; Juodagalvis, A.; Afanasiev, S.; Bunin, P.; Ershov, Y.; Golutvin, I.; Malakhov, A.; Moisenz, P.; Smirnov, V.; Zarubin, A.; Chadeeva, M.; Chistov, R.; Danilov, M.; Popova, E.; Rusinov, V.; Andreev, Yu.; Dermenev, A.; Karneyeu, A.; Krasnikov, N.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Pozdnyakov, I.; Safronov, G.; Toms, M.; Zhokin, A.; Baskakov, A.; Belyaev, A.; Boos, E.; Dubinin, M.; Dudko, L.; Ershov, A.; Gribushin, A.; Kaminskiy, A.; Klyukhin, V.; Kodolova, O.; Lokhtin, I.; Miagkov, I.; Obraztsov, S.; Petrushanko, S.; Savrin, V.; Snigirev, A.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Leonidov, A.; Terkulov, A.; Bitioukov, S.; Elumakhov, D.; Kalinin, A.; Krychkine, V.; Mandrik, P.; Petrov, V.; Ryutin, R.; Sobol, A.; Troshin, S.; Volkov, A.; Sekmen, S.; Medvedeva, T.; Rumerio, P.; Adiguzel, A.; Bakirci, N.; Boran, F.; Cerci, S.; Damarseckin, S.; Demiroglu, Z. S.; Dölek, F.; Dozen, C.; Dumanoglu, I.; Eskut, E.; Girgis, S.; Gokbulut, G.; Guler, Y.; Hos, I.; Kangal, E. E.; Kara, O.; Kayis Topaksu, A.; Işik, C.; Kiminsu, U.; Oglakci, M.; Onengut, G.; Ozdemir, K.; Ozturk, S.; Polatoz, A.; Sunar Cerci, D.; Tali, B.; Tok, U. G.; Topakli, H.; Turkcapar, S.; Zorbakir, I. S.; Zorbilmez, C.; Bilin, B.; Isildak, B.; Karapinar, G.; Murat Guler, A.; Ocalan, K.; Yalvac, M.; Zeyrek, M.; Atakisi, I. O.; Gülmez, E.; Kaya, M.; Kaya, O.; Koseyan, O. K.; Ozcelik, O.; Ozkorucuklu, S.; Tekten, S.; Yetkin, E. A.; Yetkin, T.; Cankocak, K.; Sen, S.; Boyarintsev, A.; Grynyov, B.; Levchuk, L.; Popov, V.; Sorokin, P.; Flacher, H.; Borzou, A.; Call, K.; Dittmann, J.; Hatakeyama, K.; Liu, H.; Pastika, N.; Buccilli, A.; Cooper, S. I.; Henderson, C.; West, C.; Arcaro, D.; Gastler, D.; Hazen, E.; Rohlf, J.; Sulak, L.; Wu, S.; Zou, D.; Hakala, J.; Heintz, U.; Kwok, K. H. M.; Laird, E.; Landsberg, G.; Mao, Z.; Yu, D. R.; Gary, J. W.; Ghiasi Shirazi, S. M.; Lacroix, F.; Long, O. R.; Wei, H.; Bhandari, R.; Heller, R.; Stuart, D.; Yoo, J. H.; Chen, Y.; Duarte, J.; Lawhorn, J. M.; Nguyen, T.; Spiropulu, M.; Winn, D.; Abdullin, S.; Apresyan, A.; Apyan, A.; Banerjee, S.; Chlebana, F.; Freeman, J.; Green, D.; Hare, D.; Hirschauer, J.; Joshi, U.; Lincoln, D.; Los, S.; Pedro, K.; Spalding, W. J.; Strobbe, N.; Tkaczyk, S.; Whitbeck, A.; Linn, S.; Markowitz, P.; Martinez, G.; Bertoldi, M.; Hagopian, S.; Hagopian, V.; Kolberg, T.; Baarmand, M. M.; Noonan, D.; Roy, T.; Yumiceva, F.; Bilki, B.; Clarida, W.; Debbins, P.; Dilsiz, K.; Durgut, S.; Gandrajula, R. 
P.; Haytmyradov, M.; Khristenko, V.; Merlo, J.-P.; Mermerkaya, H.; Mestvirishvili, A.; Miller, M.; Moeller, A.; Nachtman, J.; Ogul, H.; Onel, Y.; Ozok, F.; Penzo, A.; Schmidt, I.; Snyder, C.; Southwick, D.; Tiras, E.; Yi, K.; Al-bataineh, A.; Bowen, J.; Castle, J.; McBrayer, W.; Murray, M.; Wang, Q.; Kaadze, K.; Maravin, Y.; Mohammadi, A.; Saini, L. K.; Baden, A.; Belloni, A.; Calderon, J. D.; Eno, S. C.; Feng, Y. B.; Ferraioli, C.; Grassi, T.; Hadley, N. J.; Jeng, G.-Y.; Kellogg, R. G.; Kunkle, J.; Mignerey, A.; Ricci-Tam, F.; Shin, Y. H.; Skuja, A.; Yang, Z. S.; Yao, Y.; Brandt, S.; D'Alfonso, M.; Hu, M.; Klute, M.; Niu, X.; Chatterjee, R. M.; Evans, A.; Frahm, E.; Kubota, Y.; Lesko, Z.; Mans, J.; Ruckstuhl, N.; Heering, A.; Karmgard, D. J.; Musienko, Y.; Ruchti, R.; Wayne, M.; Benaglia, A. D.; Mei, K.; Tully, C.; Bodek, A.; de Barbaro, P.; Galanti, M.; Garcia-Bellido, A.; Khukhunaishvili, A.; Lo, K. H.; Vishnevskiy, D.; Zielinski, M.; Agapitos, A.; Amouzegar, M.; Chou, J. P.; Hughes, E.; Saka, H.; Sheffield, D.; Akchurin, N.; Damgov, J.; De Guio, F.; Dudero, P. R.; Faulkner, J.; Gurpinar, E.; Kunori, S.; Lamichhane, K.; Lee, S. W.; Libeiro, T.; Mengke, T.; Muthumuni, S.; Undleeb, S.; Volobouev, I.; Wang, Z.; Goadhouse, S.; Hirosky, R.; Wang, Y.

    2018-01-01

    We study the light output, light collection efficiency and signal timing of a variety of organic scintillators that are being considered for the upgrade of the hadronic calorimeter of the CMS detector. The experimental data are collected at the H2 test-beam area at CERN, using a 150 GeV muon beam. In particular, we investigate the usage of over-doped and green-emitting plastic scintillators, two solutions that have not been extensively considered. We present a study of the energy distribution in plastic-scintillator tiles, the hit efficiency as a function of the hit position, and a study of the signal timing for blue and green scintillators.

  10. Real-time track-less Cherenkov ring fitting trigger system based on Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cretaro, P.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Gianoli, A.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Piccini, M.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.

    2017-12-01

    The parallel computing power of commercial Graphics Processing Units (GPUs) is exploited to perform real-time ring fitting at the lowest trigger level using information coming from the Ring Imaging Cherenkov (RICH) detector of the NA62 experiment at CERN. To this purpose, direct GPU communication with a custom FPGA-based board has been used to reduce the data transmission latency. The GPU-based trigger system is currently integrated in the experimental setup of the RICH detector of the NA62 experiment, in order to reconstruct ring-shaped hit patterns. The ring-fitting algorithm running on GPU is fed with raw RICH data only, with no information coming from other detectors, and is able to provide more complex trigger primitives with respect to the simple photodetector hit multiplicity, resulting in a higher selection efficiency. The performance of the system for multi-ring Cherenkov online reconstruction obtained during the NA62 physics run is presented.
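
    The GPU trigger code itself is not reproduced here; as a concrete illustration of the kind of fit such a system builds on, the sketch below performs a single-ring, CPU-only algebraic least-squares circle fit over hit (x, y) positions. The function names and the test geometry are invented for the example and are not taken from NA62 software.

    // Algebraic (Kasa-style) least-squares circle fit over hit positions.
    // Illustrative CPU sketch only; not the NA62 GPU multi-ring trigger code.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Circle { double x, y, r; };

    // Fit x^2 + y^2 + A x + B y + C = 0 in the least-squares sense, then
    // convert (A, B, C) into centre (-A/2, -B/2) and radius sqrt(A^2/4 + B^2/4 - C).
    Circle fit_circle(const std::vector<double>& xs, const std::vector<double>& ys) {
      double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0, sxz = 0, syz = 0, sz = 0;
      const double n = static_cast<double>(xs.size());
      for (size_t i = 0; i < xs.size(); ++i) {
        const double x = xs[i], y = ys[i], z = x * x + y * y;
        sx += x;  sy += y;  sxx += x * x;  syy += y * y;  sxy += x * y;
        sxz += x * z;  syz += y * z;  sz += z;
      }
      // 3x3 determinant helper for Cramer's rule on the normal equations.
      auto det3 = [](double a11, double a12, double a13,
                     double a21, double a22, double a23,
                     double a31, double a32, double a33) {
        return a11 * (a22 * a33 - a23 * a32)
             - a12 * (a21 * a33 - a23 * a31)
             + a13 * (a21 * a32 - a22 * a31);
      };
      const double d = det3(sxx, sxy, sx, sxy, syy, sy, sx, sy, n);
      const double A = det3(-sxz, sxy, sx, -syz, syy, sy, -sz, sy, n) / d;
      const double B = det3(sxx, -sxz, sx, sxy, -syz, sy, sx, -sz, n) / d;
      const double C = det3(sxx, sxy, -sxz, sxy, syy, -syz, sx, sy, -sz) / d;
      Circle out;
      out.x = -A / 2.0;
      out.y = -B / 2.0;
      out.r = std::sqrt(out.x * out.x + out.y * out.y - C);
      return out;
    }

    int main() {
      // Hits placed on a circle of centre (3, -2) and radius 5 (no noise).
      const double pi = std::acos(-1.0);
      std::vector<double> xs, ys;
      for (int k = 0; k < 20; ++k) {
        const double phi = 2.0 * pi * k / 20.0;
        xs.push_back(3.0 + 5.0 * std::cos(phi));
        ys.push_back(-2.0 + 5.0 * std::sin(phi));
      }
      const Circle c = fit_circle(xs, ys);
      std::printf("centre = (%.3f, %.3f), radius = %.3f\n", c.x, c.y, c.r);
      return 0;
    }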

  11. Scaling Agile Infrastructure to People

    NASA Astrophysics Data System (ADS)

    Jones, B.; McCance, G.; Traylen, S.; Barrientos Arias, N.

    2015-12-01

    When CERN migrated its infrastructure away from homegrown fabric management tools to emerging industry-standard open-source solutions, the immediate technical challenges and motivation were clear. The move to a multi-site Cloud Computing model meant that the tool chains that were growing around this ecosystem would be a good choice; the challenge was to leverage them. The use of open-source tools brings challenges other than merely how to deploy them. Homegrown software, for all the deficiencies identified at the outset of the project, has the benefit of growing with the organization. This paper will examine what challenges there were in adapting open-source tools to the needs of the organization, particularly in the areas of multi-group development and security. Additionally, the increase in scale of the plant required changes to how Change Management was organized and managed. Continuous Integration techniques are used in order to manage the rate of change across multiple groups, and the tools and workflow for this will be examined.

  12. Hadronic production of the P-wave excited B_c states (B_{cJ,L=1}*)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, C.-H.; Institute of Theoretical Physics, Chinese Academy of Sciences, P.O. Box 2735, Beijing 100080; Wang, J.-X.

    2004-12-01

    Adopting the complete α_s^4 approach of the perturbative QCD and the updated parton distribution functions, we have estimated the hadronic production of the P-wave excited B_c states (B_{cJ,L=1}*). In the estimate, special care has been paid to the dependence of the production amplitude on the derivative of the wave function at origin which is obtained by the potential model. For experimental references, main theoretical uncertainties are discussed, and the total cross section as well as the distributions of the production with reasonable cuts at the energies of Tevatron and CERN LHC are computed and presented properly. The results show that the P-wave production may contribute to the B_c-meson production indirectly by a factor of about 0.5 of the direct production, and according to the estimated cross section, it is further worthwhile to study the possibility of observing the P-wave production itself experimentally.

  13. Development of a modular test system for the silicon sensor R&D of the ATLAS Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, H.; Benoit, M.; Chen, H.

    High Voltage CMOS sensors are a promising technology for tracking detectors in collider experiments. Extensive R&D studies are being carried out by the ATLAS Collaboration for a possible use of HV-CMOS in the High Luminosity LHC upgrade of the Inner Tracker detector. CaRIBOu (Control and Readout Itk BOard) is a modular test system developed to test silicon-based detectors. It currently includes five custom-designed boards, a Xilinx ZC706 development board, a FELIX (Front-End LInk eXchange) PCIe card and a host computer. A software program has been developed in Python to control the CaRIBOu hardware. CaRIBOu has been used in the testbeam of the HV-CMOS sensor AMS180v4 at CERN. Preliminary results have shown that the test system is very versatile. Further development is ongoing to adapt it to different sensors and to make it available to various lab test stands.

  14. Development of a modular test system for the silicon sensor R&D of the ATLAS Upgrade

    DOE PAGES

    Liu, H.; Benoit, M.; Chen, H.; ...

    2017-01-11

    High Voltage CMOS sensors are a promising technology for tracking detectors in collider experiments. Extensive R&D studies are being carried out by the ATLAS Collaboration for a possible use of HV-CMOS in the High Luminosity LHC upgrade of the Inner Tracker detector. CaRIBOu (Control and Readout Itk BOard) is a modular test system developed to test silicon-based detectors. It currently includes five custom-designed boards, a Xilinx ZC706 development board, a FELIX (Front-End LInk eXchange) PCIe card and a host computer. A software program has been developed in Python to control the CaRIBOu hardware. CaRIBOu has been used in the testbeam of the HV-CMOS sensor AMS180v4 at CERN. Preliminary results have shown that the test system is very versatile. Further development is ongoing to adapt it to different sensors and to make it available to various lab test stands.

  15. The readout chain for the P̄ANDA MVD strip detector

    NASA Astrophysics Data System (ADS)

    Schnell, R.; Brinkmann, K.-Th.; Di Pietro, V.; Kleines, H.; Goerres, A.; Riccardi, A.; Rivetti, A.; Rolo, M. D.; Sohlbach, H.; Zaunick, H.-G.

    2015-02-01

    The P̄ANDA (antiProton ANnihilation at DArmstadt) experiment will study the strong interaction in annihilation reactions between an antiproton beam and a stationary gas jet target. The detector will comprise different sub-detectors for tracking, particle identification and calorimetry. The Micro-Vertex Detector (MVD), as the innermost part of the tracking system, will allow precise tracking and detection of secondary vertices. For the readout of the double-sided silicon strip sensors a custom-made ASIC is being developed, employing the Time-over-Threshold (ToT) technique for digitization and utilizing time-to-digital converters (TDCs) to provide a high-precision time stamp of the hit. A custom-made Module Data Concentrator ASIC (MDC) will multiplex the data of all front-ends of one sensor towards the CERN-developed GBT chip set (GigaBit Transceiver). The MicroTCA-based MVD Multiplexer Board (MMB) at the off-detector site will receive and concentrate the data from the GBT links and transfer it to FPGA-based compute nodes for global event building.
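
    As an illustration of the Time-over-Threshold idea mentioned above, the sketch below digitizes a simple triangular pulse model (linear rise, then linear fall): the leading-edge threshold crossing provides the time stamp, and the width above threshold grows monotonically with the pulse height. The pulse shape and every parameter are assumptions made for the example; this is not the PANDA front-end ASIC.

    // Time-over-Threshold digitization for an assumed triangular pulse shape.
    // Illustrative sketch only; not the PANDA strip front-end implementation.
    #include <cstdio>

    struct ToTHit {
      double t_lead;   // leading-edge threshold crossing (hit time stamp)
      double t_trail;  // trailing-edge threshold crossing
      double tot;      // time over threshold, monotonic in the pulse height
    };

    // amplitude: peak pulse height; t_rise / t_fall: rise and fall times;
    // threshold: comparator level. Returns false if the pulse never crosses it.
    bool digitize_tot(double amplitude, double t_rise, double t_fall,
                      double threshold, ToTHit& hit) {
      if (amplitude <= threshold) return false;
      // Linear rise from 0 to amplitude over t_rise, then linear fall over t_fall.
      hit.t_lead  = t_rise * threshold / amplitude;
      hit.t_trail = t_rise + t_fall * (1.0 - threshold / amplitude);
      hit.tot     = hit.t_trail - hit.t_lead;
      return true;
    }

    int main() {
      ToTHit hit;
      if (digitize_tot(/*amplitude=*/40.0, /*t_rise=*/20.0, /*t_fall=*/80.0,
                       /*threshold=*/5.0, hit))
        std::printf("t_lead = %.2f ns, ToT = %.2f ns\n", hit.t_lead, hit.tot);
      return 0;
    }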

  16. NaNet: a configurable NIC bridging the gap between HPC and real-time HEP GPU computing

    NASA Astrophysics Data System (ADS)

    Lonardo, A.; Ameli, F.; Ammendola, R.; Biagioni, A.; Cotta Ramusino, A.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Pontisso, L.; Rossetti, D.; Simeone, F.; Simula, F.; Sozzi, M.; Tosoratto, L.; Vicini, P.

    2015-04-01

    NaNet is an FPGA-based PCIe Network Interface Card (NIC) design with GPUDirect and Remote Direct Memory Access (RDMA) capabilities featuring a configurable and extensible set of network channels. The design currently supports standard channels (GbE 1000BASE-T and 10GbE 10GBase-R) as well as custom channels (34 Gbps APElink and 2.5 Gbps deterministic-latency KM3link), but its modularity allows for straightforward inclusion of other link technologies. The GPUDirect feature combined with a transport layer offload module and a data stream processing stage makes NaNet a low-latency NIC suitable for real-time GPU processing. In this paper we describe the NaNet architecture and its performance, illustrating two of its use cases: the GPU-based low-level trigger for the RICH detector in the NA62 experiment at CERN and the on-/off-shore data transport system for the KM3NeT-IT underwater neutrino telescope.

  17. Connecting Restricted, High-Availability, or Low-Latency Resources to a Seamless Global Pool for CMS

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B.; Hufnagel, D.; Hurtado Anampa, K.; Jayatilaka, B.; Khan, F.; Larson, K.; Letts, J.; Mascheroni, M.; Mohapatra, A.; Marra Da Silva, J.; Mason, D.; Perez-Calero Yzquierdo, A.; Piperov, S.; Tiradani, A.; Verguilov, V.; CMS Collaboration

    2017-10-01

    The connection of diverse and sometimes non-Grid-enabled resource types to the CMS Global Pool, which is based on HTCondor and glideinWMS, has been a major goal of CMS. These resources range in type from a high-availability, low-latency facility at CERN for urgent calibration studies, called the CAF, to a local user facility at the Fermilab LPC, allocation-based computing resources at NERSC and SDSC, opportunistic resources provided through the Open Science Grid, commercial clouds, and others, as well as access to opportunistic cycles on the CMS High Level Trigger farm. In addition, we have provided the capability to give priority to local users on resources beyond the WLCG pledges at CMS sites. Many of the solutions employed to bring these diverse resource types into the Global Pool have common elements, while some are very specific to a particular project. This paper details some of the strategies and solutions used to access these resources through the Global Pool in a seamless manner.

  18. A new approach to characterize very-low-level radioactive waste produced at hadron accelerators.

    PubMed

    Zaffora, Biagio; Magistris, Matteo; Chevalier, Jean-Pierre; Luccioni, Catherine; Saporta, Gilbert; Ulrici, Luisa

    2017-04-01

    Radioactive waste is produced as a consequence of preventive and corrective maintenance during the operation of high-energy particle accelerators, or of associated dismantling campaigns. Its radiological characterization must be performed to ensure appropriate disposal in dedicated facilities. The radiological characterization of waste includes the establishment of the list of produced radionuclides, called the "radionuclide inventory", and the estimation of their activity. The present paper describes the process adopted at CERN to characterize very-low-level radioactive waste, with a focus on activated metals. The characterization method consists of measuring and estimating the activity of the produced radionuclides either by experimental methods or by statistical and numerical approaches. We adapted the so-called Scaling Factor (SF) and Correlation Factor (CF) techniques to the needs of hadron accelerators, and applied them to very-low-level metallic waste produced at CERN. For each type of metal we calculated the radionuclide inventory and identified the radionuclides that contribute most to hazard factors. The methodology proposed is of general validity, can be extended to other activated materials and can be used for the characterization of waste produced in particle accelerators and research centres where the activation mechanisms are comparable to the ones occurring at CERN. Copyright © 2017 Elsevier Ltd. All rights reserved.
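    The Scaling Factor technique mentioned here relates the activity of a difficult-to-measure radionuclide to that of an easy-to-measure "key" nuclide through a factor derived from items where both activities were determined. The sketch below is a generic illustration with invented numbers, not the CERN procedure; it uses the geometric mean of the activity ratios, one common way of defining the scaling factor.

      import math

      # Paired activities (Bq/g) from a hypothetical sample of characterized items:
      # (easy-to-measure key nuclide, e.g. Co-60; difficult-to-measure nuclide, e.g. Ni-63)
      pairs = [(120.0, 15.0), (80.0, 9.5), (200.0, 26.0), (55.0, 6.0)]

      # Scaling factor as the geometric mean of the activity ratios
      ratios = [dtm / key for key, dtm in pairs]
      sf = math.exp(sum(math.log(r) for r in ratios) / len(ratios))

      # Estimate the difficult-to-measure activity of a new waste item from a
      # measurement of the key nuclide alone (e.g. by gamma spectrometry).
      key_activity_new_item = 95.0          # Bq/g, measured
      dtm_estimate = sf * key_activity_new_item
      print(f"SF = {sf:.3f}, estimated activity = {dtm_estimate:.1f} Bq/g")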

  19. Controlled longitudinal emittance blow-up using band-limited phase noise in CERN PSB

    NASA Astrophysics Data System (ADS)

    Quartullo, D.; Shaposhnikova, E.; Timko, H.

    2017-07-01

    Controlled longitudinal emittance blow-up (from 1 eVs to 1.4 eVs) for LHC beams in the CERN PS Booster is currently achieved using sinusoidal phase modulation of a dedicated high-harmonic RF system. In 2021, after the LHC Injectors Upgrade, beams of 3 eVs should be extracted to the PS. Even if the current method may satisfy the new requirements, it relies on improvements of the low-power-level RF. In this paper another blow-up method was considered: the injection of band-limited phase noise into the main RF system (h = 1), never tried in the PSB but already used in the CERN SPS and LHC under different conditions (longer cycles). This technique, which lowers the peak line density and therefore the impact of intensity effects in the PSB and the PS, can also be complementary to the present method. The longitudinal space charge, dominant in the PSB, causes significant synchrotron frequency shifts with intensity, and its effect should be taken into account. Another complication arises from the interaction of the phase loop with the injected noise, since both act on the RF phase. All these elements were studied in simulations of the PSB cycle with the BLonD code, and the required blow-up was achieved.
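    Band-limited RF phase noise of the kind described here is commonly generated by filtering white noise so that its power is confined to a chosen band of synchrotron frequencies. The sketch below is a generic frequency-domain construction with arbitrary band edges and amplitude; it is not the BLonD implementation.

      import numpy as np

      def band_limited_noise(n_turns, f_rev_hz, f_low_hz, f_high_hz, rms_rad, seed=0):
          """Generate a phase-noise sequence (rad), one value per turn, whose power
          is confined to the band [f_low_hz, f_high_hz]."""
          rng = np.random.default_rng(seed)
          spectrum = rng.standard_normal(n_turns) + 1j * rng.standard_normal(n_turns)
          freqs = np.fft.fftfreq(n_turns, d=1.0 / f_rev_hz)
          mask = (np.abs(freqs) >= f_low_hz) & (np.abs(freqs) <= f_high_hz)
          spectrum[~mask] = 0.0                      # keep only the chosen band
          noise = np.fft.ifft(spectrum).real
          return noise * (rms_rad / np.std(noise))   # scale to the requested rms

      # Hypothetical numbers: 1 MHz revolution frequency, noise covering a band
      # around a ~1 kHz synchrotron frequency, 0.05 rad rms.
      phase_noise = band_limited_noise(n_turns=100_000, f_rev_hz=1.0e6,
                                       f_low_hz=800.0, f_high_hz=1200.0, rms_rad=0.05)
      print(phase_noise[:5])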

  20. New vertical cryostat for the high field superconducting magnet test station at CERN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vande Craen, A.; Atieh, S.; Bajko, M.

    2014-01-29

    In the framework of the R&D program for new superconducting magnets for the Large Hadron Collider accelerator upgrades, CERN is building a new vertical test station to test high-field superconducting magnets of unprecedentedly large size. This facility will allow testing of magnets by vertical insertion in a pressurized liquid helium bath, cooled to a controlled temperature between 4.2 K and 1.9 K. The dimensions of the cryostat will allow testing magnets of up to 2.5 m in length with a maximum diameter of 1.5 m and a mass of 15 tons. To allow for faster insertion and removal of the magnets and to reduce the risk of helium leaks, all cryogenic supply lines are foreseen to remain permanently connected to the cryostat. A specifically designed 100 W heat exchanger is integrated in the cryostat helium vessel for a controlled cooling of the magnet from 4.2 K down to 1.9 K in a 3 m³ helium bath. This paper describes the cryostat and its main functions, focusing on features specifically developed for this project. The status of the construction and the plans for assembly and installation at CERN are also presented.

  1. Common HEP UNIX Environment

    NASA Astrophysics Data System (ADS)

    Taddei, Arnaud

    After it was decided to design a common user environment for UNIX platforms among HEP laboratories, a joint project between DESY and CERN was started. The project consists of two phases: 1. Provide a common user environment at shell level; 2. Provide a common user environment at graphical level (X11). Phase 1 is in production at DESY and at CERN, as well as at Pisa and RAL. It has been developed around the scripts originally designed at DESY Zeuthen, improved and extended during a two-month project at CERN with a contribution from DESY Hamburg. It consists of a set of files which customize the environment for the 6 main shells (sh, csh, ksh, bash, tcsh, zsh) on the main platforms (AIX, HP-UX, IRIX, SunOS, Solaris 2, OSF/1, ULTRIX, etc.), and it is divided into several "sociological" levels: HEP, site, machine, cluster, group of users and user, some of which are optional. The second phase is under design and a first proposal has been published. A first version of phase 2 already exists for AIX and Solaris, and it should be available for all other platforms by the time of the conference. This is a major collective effort between several HEP laboratories involved in the HEPiX-scripts and HEPiX-X11 working groups.
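    The abstract describes customization files layered at several levels (HEP, site, machine, cluster, group, user), with the more specific levels refining the more general ones and some levels optional. The sketch below only illustrates that layering idea; the level names come from the abstract, while the settings and their values are invented.

      # Resolve an environment by merging settings level by level; later (more
      # specific) levels override earlier (more general) ones, and any level may
      # be absent, mirroring the "optional levels" mentioned in the abstract.
      LEVELS = ["HEP", "site", "machine", "cluster", "group", "user"]

      settings_by_level = {                      # invented example content
          "HEP":  {"EDITOR": "vi", "PRINTER": "default"},
          "site": {"PRINTER": "bldg40-laser"},
          "user": {"EDITOR": "emacs"},
      }

      def resolve_environment(levels, settings):
          env = {}
          for level in levels:                   # general -> specific
              env.update(settings.get(level, {}))
          return env

      print(resolve_environment(LEVELS, settings_by_level))
      # {'EDITOR': 'emacs', 'PRINTER': 'bldg40-laser'}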

  2. Virtuality and efficiency - overcoming past antinomy in the remote collaboration experience

    NASA Astrophysics Data System (ADS)

    Fernandes, Joao; Bjorkli, Knut; Clavo, David Martin; Baron, Thomas

    2010-04-01

    Several recent initiatives have been put in place by the CERN IT Department to improve the user experience in remote dispersed meetings and in remote collaboration at large in the LHC communities worldwide. We will present an analysis of the factors which historically limited the efficiency of remote dispersed meetings and describe the consequent actions which were undertaken at CERN to overcome these limitations. After giving a status update on the equipment available at CERN to enable virtual sessions and on the various collaborative tools currently offered to users, we will focus on the evolution of this market: how the new technological trends (among others, HD videoconferencing, Telepresence, Unified Communications) can positively impact the user experience, and how to make the best use of them. Finally, by projecting ourselves into the future, we will give some hints as to how to answer the difficult question of selecting the next generation of collaborative tools: which set of tools among the various offers (systems like Vidyo H264 SVC, next-generation EVO, groupware offers, standard H323 systems, etc.) is best suited for our environment, and how to unify this set for the common user. This will finally allow us to definitively overcome the past antinomy between virtuality and efficiency.

  3. Raymond Stora's obituary

    NASA Astrophysics Data System (ADS)

    Becchi, C.

    2015-10-01

    On Monday, July 20, 2015 Raymond Stora passed away; although he was seriously ill, his death was unexpected, the result of a sudden heart attack. Raymond was born on September 18, 1930. He had been sick for many months, yet continued to go to CERN where he was able to discuss the problems in physics and mathematics that interested him. In fact, his last publication (recorded on SPIRES) carries the date of December 2014, just before he contracted pneumonia, which dramatically reduced his mobility and hence the possibility of going to CERN. Still, this last project revived Raymond's interest in algebraic curves, and he spent a large part of his last months at home reading papers and books on this subject. In 2013, despite the large amount of time that his various therapies required, Raymond made a fundamental contribution to a difficult problem on renormalization in configuration space based on the subtle technical properties of homogeneous distributions. His knowledge of physics and, in particular, of quantum field theory, as well as of many fields of mathematics was so well known that many members of and visitors to CERN frequently asked Raymond for advice and assistance, which he gave with great enthusiasm and in the most gracious way. Ivan Todorov, commenting on Raymond's death, noted that we must remember Raymond's remarkable qualities, which were both human and scientific.

  4. DNS load balancing in the CERN cloud

    NASA Astrophysics Data System (ADS)

    Reguero Naredo, Ignacio; Lobato Pardavila, Lorena

    2017-10-01

    Load balancing is one of the technologies enabling the deployment of large-scale applications on cloud resources. A DNS Load Balancer Daemon (LBD) has been developed at CERN as a cost-effective way to balance applications that accept DNS timing dynamics and do not require persistence. It currently serves over 450 load-balanced aliases with two small VMs acting as master and slave. The aliases are mapped to DNS subdomains, which are managed with DDNS according to a load metric collected from the alias member nodes with SNMP. During the last years, several improvements were brought to the software, for instance: support for IPv6, parallelization of the status requests, implementation of the client in Python to allow for multiple aliases with differentiated states on the same machine, and support for application state. The configuration of the Load Balancer is currently managed by a Puppet type, which discovers the alias member nodes and gets the alias definitions from the Ermis REST service. The Aiermis self-service GUI for the management of the LB aliases has been produced; it is based on the Ermis service above, which implements a form of Load Balancing as a Service (LBaaS). The Ermis REST API has authorisation based on Foreman hostgroups. The CERN DNS LBD is open-source software under the Apache 2 license.
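    The daemon described here periodically collects a load metric from the member nodes of each alias and updates the DNS records to point at the best candidates. The sketch below shows only that selection step, with invented host names and metric values; it is not the CERN LBD code, and the use of a negative metric to exclude a node is a hypothetical convention for this example.

      def pick_best_members(metrics, n_best=2):
          """Return the n_best healthy hosts with the lowest load metric.

          metrics : dict host -> load metric; a negative value (hypothetical
                    convention here) means the host must be excluded.
          """
          healthy = {h: m for h, m in metrics.items() if m >= 0}
          return sorted(healthy, key=healthy.get)[:n_best]

      # Invented example: metric gathered from the member nodes (e.g. via SNMP polling)
      alias_members = {"node1.example.org": 12.0,
                       "node2.example.org": 3.5,
                       "node3.example.org": -1.0,   # excluded (e.g. in maintenance)
                       "node4.example.org": 7.2}

      best = pick_best_members(alias_members)
      print(best)   # these hosts would then be published via a dynamic DNS update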

  5. Feasibility study for a biomedical experimental facility based on LEIR at CERN.

    PubMed

    Abler, Daniel; Garonna, Adriano; Carli, Christian; Dosanjh, Manjit; Peach, Ken

    2013-07-01

    In light of the recent European developments in ion beam therapy, there is a strong interest from the biomedical research community to have more access to clinically relevant beams. Beamtime for pre-clinical studies is currently very limited and a new dedicated facility would allow extensive research into the radiobiological mechanisms of ion beam radiation and the development of more refined techniques of dosimetry and imaging. This basic research would support the current clinical efforts of the new treatment centres in Europe (for example HIT, CNAO and MedAustron). This paper presents first investigations on the feasibility of an experimental biomedical facility based on the CERN Low Energy Ion Ring LEIR accelerator. Such a new facility could provide beams of light ions (from protons to neon ions) in a collaborative and cost-effective way, since it would rely partly on CERN's competences and infrastructure. The main technical challenges linked to the implementation of a slow extraction scheme for LEIR and to the design of the experimental beamlines are described and first solutions presented. These include introducing new extraction septa into one of the straight sections of the synchrotron, changing the power supply configuration of the magnets, and designing a new horizontal beamline suitable for clinical beam energies, and a low-energy vertical beamline for particular radiobiological experiments.

  6. Feasibility study for a biomedical experimental facility based on LEIR at CERN

    PubMed Central

    Abler, Daniel; Garonna, Adriano; Carli, Christian; Dosanjh, Manjit; Peach, Ken

    2013-01-01

    In light of the recent European developments in ion beam therapy, there is a strong interest from the biomedical research community to have more access to clinically relevant beams. Beamtime for pre-clinical studies is currently very limited and a new dedicated facility would allow extensive research into the radiobiological mechanisms of ion beam radiation and the development of more refined techniques of dosimetry and imaging. This basic research would support the current clinical efforts of the new treatment centres in Europe (for example HIT, CNAO and MedAustron). This paper presents first investigations on the feasibility of an experimental biomedical facility based on the CERN Low Energy Ion Ring LEIR accelerator. Such a new facility could provide beams of light ions (from protons to neon ions) in a collaborative and cost-effective way, since it would rely partly on CERN's competences and infrastructure. The main technical challenges linked to the implementation of a slow extraction scheme for LEIR and to the design of the experimental beamlines are described and first solutions presented. These include introducing new extraction septa into one of the straight sections of the synchrotron, changing the power supply configuration of the magnets, and designing a new horizontal beamline suitable for clinical beam energies, and a low-energy vertical beamline for particular radiobiological experiments. PMID:23824122

  7. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE PAGES

    Klimentov, A.; Buncic, P.; De, K.; ...

    2015-05-22

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited with the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. Finally, we will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.

  8. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klimentov, A.; Buncic, P.; De, K.

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited with the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. Finally, we will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.

  9. Challenges in coupling LTER with environmental assessments: An insight from potential and reality of the Chinese Ecological Research Network in servicing environment assessments.

    PubMed

    Xia, Shaoxia; Liu, Yu; Yu, Xiubo; Fu, Bojie

    2018-08-15

    Environmental assessments estimate, evaluate and predict the consequences of natural processes and human activities on the environment. Long-term ecosystem observation and research networks (LTERs) are potentially valuable infrastructure to support environmental assessments. However, very few environmental assessments have successfully incorporated them. In this study, we try to reveal the current status of coupling LTERs with environmental assessments and look at the challenges involved in improving this coupling, by exploring the role that the Chinese Ecological Research Network (CERN), China's LTER, currently plays in regional environmental assessments. A review of official protocols and standards, regional assessments and CERN research related to ecosystems and the environment shows that there is great potential for coupling CERN with environmental assessments. However, in practice, CERN does not currently play the expected role. Remote sensing and irregular inventory data are still the main data sources currently used in regional assessments. Several causes led to the present situation: (1) insufficient cross-site research and failure to scale up site-level variables to the regional scale; (2) data barriers resulting from incompatible protocols and low data usability due to lack of data assimilation and scaling; and (3) absence of indicators relevant to human activities in existing monitoring protocols. For these reasons, enhancing cross-site monitoring and research, data assimilation and scaling up are critical steps required to improve coupling of LTER with environmental assessments. Site-focused long-term monitoring should be combined with wide-scale ground surveys and remote sensing to establish an effective connection between different environmental monitoring platforms for regional assessments. It is also necessary to revise the current monitoring protocols to include human activities and their impacts on the ecosystem, or to change the LTERs into Long-Term Socio-Ecological Research (LTSER) networks. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Impact of detector simulation in particle physics collider experiments

    NASA Astrophysics Data System (ADS)

    Daniel Elvira, V.

    2017-06-01

    Through the last three decades, accurate simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics (HEP) experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Organization for Nuclear Research (CERN) Large Hadron Collider (LHC) was a determining factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the precision of the physics results and publication turnaround, from data-taking to submission. It also presents estimates of the cost and economic impact of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data with increasingly complex detectors, heavily taxing the performance of simulation and reconstruction software. Consequently, exploring solutions to speed up simulation and reconstruction software to satisfy the growing demand for computing resources in a time of flat budgets is a matter that deserves immediate attention. The article ends with a short discussion of the potential solutions that are being considered, based on leveraging core-count growth in multicore machines, using new-generation coprocessors, and re-engineering HEP code for concurrency and parallel computing.

  11. O8.10 A MODEL FOR RESEARCH INITIATIVES FOR RARE CANCERS: THE COLLABORATIVE EPENDYMOMA RESEARCH NETWORK (CERN)

    PubMed Central

    Armstrong, T.S.; Aldape, K.; Gajjar, A.; Haynes, C.; Hirakawa, D.; Gilbertson, R.; Gilbert, M.R.

    2014-01-01

    Ependymoma represents less than 5% of adult central nervous system (CNS) tumors and a higher percentage of pediatric CNS tumors, but it remains an orphan disease. The majority of the laboratory-based research and clinical trials have been conducted in the pediatric setting, a reflection of the relative incidence and funding opportunities. CERN, created in 2006, was designed to establish a collaborative effort between laboratory and clinical research and pediatric and adult investigators. The organization of CERN is based on integration and collaboration among five projects. Project 1 contains the clinical trials network encompassing both adult and pediatric centers. This group has completed 2 clinical trials with more underway. Project 2 is focused on molecular classification of human ependymoma tumor tissues and also contains the tumor repository, which has now collected over 600 fully clinically annotated CNS ependymomas from adults and children. Project 3 is focused on drug discovery, utilizing robust laboratory models of ependymoma to perform high-throughput screening of drug libraries, then taking promising agents through extensive preclinical testing including monitoring of drug delivery to the tumor using state-of-the-art microdialysis. Project 4 contains the basic research efforts evaluating the molecular pathogenesis of ependymoma and has successfully translated these findings by generating the first mouse models of ependymoma, which are employed in preclinical drug development in Project 3. Project 5 studies patient outcomes, including the incorporation of these measures in the clinical trials. This project also contains an online Ependymoma Outcomes survey, collecting data on the consequences of the disease and its treatment. These projects have been highly successful and collaborative. For example, the serial measurement of symptom burden (Project 5) has greatly contributed to the evaluation of treatment efficacy of a clinical trial (Project 1), and investigators from Project 2 are evaluating potential predictive markers from tumor tissue from the same clinical trial. Results from genomic and molecular discoveries generated by Project 4 were evaluated using the clinical material from the Tumor Registry (Project 2). Agents identified from the high-throughput screening in Project 3 are being used to create novel clinical trials (Project 1). As a complementary effort, CERN's community outreach efforts provide a major gateway to patients, families, caregivers and healthcare providers, contributing to greater awareness of ependymoma and supporting clinical trial accrual in Project 1. In summary, CERN has successfully created a collaborative, multi-national integrated effort combining pediatric- and adult-focused investigators spanning from basic science to patient outcome measures. This research paradigm may be an effective approach for other rare cancers.

  12. NEWS: A trip to CERN

    NASA Astrophysics Data System (ADS)

    Ellison, A. D.

    2000-07-01

    Two years ago John Kinchin and I were lucky enough to attend the Goldsmith's particle physics course. As well as many interesting lectures and activities, this course included a visit to CERN. To most physics teachers CERN is Mecca, a hallowed place where gods manipulate and manufacture matter. The experience of being there was even better. Alison Wright was an enthusiastic and very knowledgeable host who ensured the visit went smoothly and we all learned a lot. While we were there, John and I discussed the possibility of bringing a party of A-level students to see real physics in action. In February of this year we managed it. 33 students from two schools, Boston Grammar School and Northampton School for Boys, and four staff left England and caught the 2 am ferry to France. Many hours and a few 'short cuts' later we arrived at our hotel in St Genis, not far from CERN. The first day was spent sight-seeing in Lausanne and Geneva. The Olympic museum in Lausanne is well worth a visit. Unfortunately, the famous fountain in Geneva was turned off, but then you can't have everything. The following morning we turned up at CERN late due to the coach's brakes being iced up! We were met once again by Alison Wright who forgave us and introduced the visit by giving an excellent talk on CERN, its background and its reason for existing. At this point we met another member of our Goldsmith's course and his students so we joined forces once again. We then piled back into the coach to re-cross the border and visit ALEPH. ALEPH is a monster of a detector 150 m below ground. We divided into four groups, each with a very able and knowledgeable guide, and toured the site. The size and scale of the detector are awesome and the students were suitably impressed. We repeated the speed of sound experiment of two years ago at the bottom of a 150 m concrete shaft (320 m s⁻¹), posed for a group photo in front of the detector (figure 1) and returned to the main site for lunch in the canteen. Over lunch we mixed with physicists of many different nationalities and backgrounds. In the afternoon we visited Microcosm, the CERN visitors centre, the LEP control room and also the SPS. Here the students learned new applications for much of the physics of standing waves and resonance that they had been taught in the classroom. Later that night, we visited a bowling alley where momentum and collision theory were put into practice. The following morning we returned to CERN and visited the large magnet testing facility. Here again physics was brought to life. We saw superconducting magnets being assembled and tested and the students gained a real appreciation of the problems and principles involved. The afternoon was rounded off by a visit to a science museum in Geneva - well worth a visit, as some of us still use some of the apparatus on display. Friday was our last full day so we visited Chamonix in the northern Alps. In the morning, we ascended the Aiguille de Midi - by cable car. Twenty minutes and 3842 m later we emerged into 50 km h⁻¹ winds and a temperature of -10 °C, not counting the -10 °C wind chill factor. A crisp packet provided an unusual demonstration of the effects of air pressure (figure 2). The views from the summit were very spectacular though a few people experienced mild altitude sickness. That afternoon the party went to the Mer de Glace.
Being inside a 3 million year-old structure moving down a mountain at 3 cm per day was an interesting experience, as was a tot of whisky with 3 million year-old water. Once again the local scenery was very photogenic and the click and whirr of cameras was a constant background noise. Saturday morning saw an early start for the long drive home. Most students - and some staff - took the opportunity to catch up on their sleep. Thanks are due to many people without whom the trip would never have taken place. Anne Craige, Stuart Williams, Christine Sutton and Andrew Morrison of PPARC, but most especially Alison Wright of CERN and John Kinchin of Boston Grammar School who did all the hard work and organization. The week gave students a unique chance to see the principles of physics being applied in many different ways and I am sure this has reinforced their knowledge and understanding. Some students also took the opportunity to practise their language skills. The only remaining question is: what next? I'll have to think about it in the summer when I have some slack time. Hmm, SLAC, that gives me an idea....

  13. Physics in the Spotlight

    NASA Astrophysics Data System (ADS)

    2000-10-01

    CERN, ESA and ESO Put Physics On Stage [1] Summary: Can you imagine how much physics is in a simple match of ping-pong, in throwing a boomerang, or in a musical concert? Physics is all around us and governs our lives. The World-Wide Web and mobile communication are only two examples of technologies that have rapidly found their way from science into everyday life. But who is going to maintain these technologies and develop new ones in the future? Probably not young Europeans, as recent surveys show a frightening decline of interest in physics and technology among Europe's citizens, especially schoolchildren. Fewer and fewer young people enrol in physics courses at university. The project "Physics on Stage" tackles this problem head on. An international festival of 400 physics educators from 22 European countries [2] gathers at CERN in Geneva from 6 to 10 November to show how fascinating and entertaining physics can be. In this week-long event, innovative methods of teaching physics and demonstrations of the fun that lies in physics are presented in a fair, in 10 spectacular performances and in presentations. Workshops on 14 key themes will give the delegates - teachers, professors, artists and other physics educators - the chance to discuss and come up with solutions for the worrying situation of disenchantment with science in Europe. The European Science and Technology Week 2000 event "Physics on Stage" is a joint project organised by the European Organisation for Nuclear Research (CERN), the European Space Agency (ESA) and the European Southern Observatory (ESO), Europe's leading physics research organisations. This is the first time that these three organisations have worked together in such close collaboration to catalyse a change in attitude towards science and technology education. Physics on Stage is funded in part by the European Commission and takes place as an event in the European Science and Technology Week 2000, an initiative of the EC to raise public awareness of science and technology. Other partners are the European Physical Society (EPS) and the European Association for Astronomy Education (EAAE). European Commissioner Busquin to Visit Physics On Stage: On Thursday, November 9, Philippe Busquin, Commissioner for Research, European Commission, Prof. Luciano Maiani, Director-General of CERN, Antonio Rodota, Director-General of ESA, Dr. Catherine Cesarsky, Director-General of ESO, and Dr. Achilleas Mitsos, Director-General of the Research DG in the European Commission, will participate in the activities of the Physics on Stage Festival. On this occasion, Commissioner Busquin will address conference delegates and the media on the importance of science and of innovative science and technology education. The Festival: Each of the more than 400 delegates of the festival has been selected during the course of the year by committees in each of the 22 countries for outstanding projects promoting science. For example, a group of Irish physics teachers and their students will give a concert on instruments made exclusively of plumbing material, explaining the physics of sound at the same time. A professional theatre company from Switzerland stages a play on antimatter. And two young Germans invite spectators to their interactive physics show, where they juggle, eat fire and perform stunning physics experiments on stage. The colourful centrepiece of this week is the Physics Fair.
Every country has its own stands where delegates show their projects, programmes or experiments and gain inspiration from the exhibits from other countries. Physics on Stage is a unique event. Nothing like it has ever happened in terms of international exchange, international collaboration and state-of-the-art science and technology education methods. The Nobel prizewinners of 2030 are at school today. What ideas can Europe's teachers put forward to boost their interest in science? An invitation to the media: We invite journalists to take part in this politically and visually interesting event. We expect many useful results from this exchange of experience; there will be a large choice of potential interview partners and, of course, countless images and impressions. Please fill in the form below and fax it back to CERN at +41 22 7850247. Go to the webpage http://www.cern.ch/pos to find out all about the Physics on Stage Festival at CERN. The main "Physics on Stage" web address is: http://www.estec.esa.nl/outreach/pos. There is also a Physics On Stage webpage at ESO. Notes: [1] This is a joint Press Release by the European Organization for Nuclear Research (CERN), the European Space Agency (ESA) and the European Southern Observatory (ESO). [2] The 22 countries are the member countries of at least one of the participating organisations or the European Union: Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Luxembourg, the Netherlands, Norway, Poland, Portugal, the Slovak Republic, Spain, Sweden, Switzerland, United Kingdom.

  14. Muon Bundles as a Sign of Strangelets from the Universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kankiewicz, P.; Rybczyński, M.; Włodarczyk, Z.

    Recently, the CERN ALICE experiment observed muon bundles of very high multiplicities in its dedicated cosmic ray (CR) run, thereby confirming similar findings from the LEP era at CERN (in the CosmoLEP project). Originally, it was argued that they apparently stem from primary CRs with heavy masses. We propose an alternative possibility, arguing that muon bundles of the highest multiplicity are produced by strangelets, hypothetical stable lumps of strange quark matter infiltrating our universe. We also address the possibility of additionally deducing their directionality, which could be of astrophysical interest. Significant evidence for anisotropy of the arrival directions of the observed high-multiplicity muon bundles is found. The estimated directionality suggests their possible extragalactic provenance.

  15. Path to AWAKE: Evolution of the concept

    DOE PAGES

    Caldwell, A.; Adli, E.; Amorim, L.; ...

    2016-01-02

    This study describes the conceptual steps in reaching the design of the AWAKE experiment currently under construction at CERN. We start with an introduction to plasma wakefield acceleration and the motivation for using proton drivers. We then describe the self-modulation instability, a key to an early realization of the concept. This is followed by the historical development of the experimental design, where the critical issues that arose and their solutions are described. We conclude with the design of the experiment as it is being realized at CERN and some words on the future outlook. A summary of the AWAKE design and construction status as presented at this conference is given in Gschwendtner et al. [1].

  16. The beam and detector of the NA62 experiment at CERN

    DOE PAGES

    Gil, E. Cortina; Albarrán, E. Martín; Minucci, E.; ...

    2017-05-31

    NA62 is a fixed-target experiment at the CERN SPS dedicated to measurements of rare kaon decays. Such measurements, like the branching fraction of the K+ → π+ ν ν̄ decay, have the potential to bring significant insights into new physics processes when compared with precise theoretical predictions. For this purpose, innovative techniques have been developed, in particular in the domain of low-mass tracking devices. Detector construction spanned several years, from 2009 to 2014. The collaboration started detector commissioning in 2014 and will collect data until the end of 2018. The beam line and detector components are described together with their early performance obtained from 2014 and 2015 data.

  17. Ian Hinchliffe Answers Your Higgs Boson Questions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hinchliffe, Ian

    Ian Hinchliffe, a physicist working with the ATLAS experiment at CERN, answers many of your questions about the Higgs boson. Ian invited viewers to send in questions about the Higgs via email, Twitter, Facebook, or YouTube in an "Ask a Scientist" video posted July 3: http://youtu.be/xhuA3wCg06s. CERN's July 4 announcement that the ATLAS and CMS experiments at the Large Hadron Collider have discovered a particle "consistent with the Higgs boson" has raised questions about what scientists have found and what still remains to be found -- and what it all means. If you have suggestions for future "Ask a Scientist" videos, post them below or send ideas to askascientist@lbl.gov.

  18. Study of muon-induced neutron production using accelerator muon beam at CERN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakajima, Y.; Lin, C. J.; Ochoa-Ricoux, J. P.

    2015-08-17

    Cosmogenic muon-induced neutrons are one of the most problematic backgrounds for various underground experiments searching for rare events. In order to accurately understand such backgrounds, experimental data with high statistics and well-controlled systematics are essential. We performed a test experiment to measure the muon-induced neutron production yield and energy spectrum using a high-energy accelerator muon beam at CERN. We successfully observed neutrons from 160 GeV/c muon interactions on lead, and measured kinetic energy distributions for various production angles. Work towards the evaluation of the absolute neutron production yield is underway. This work also demonstrates that the setup is feasible for a future large-scale experiment for a more comprehensive study of muon-induced neutron production.

  19. Final Scientific EFNUDAT Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garbil, Roger

    2010-11-09

    The Final Scientific EFNUDAT Workshop - organized by the CERN/EN-STI group on behalf of the n_TOF Collaboration - will be held at CERN, Geneva (Switzerland) from 30 August to 2 September 2010 inclusive. EFNUDAT website: http://www.efnudat.eu. Topics of interest include: Data evaluation; Cross section measurements; Experimental techniques; Uncertainties and covariances; Fission properties; Current and future facilities. International Advisory Committee: C. Barreau (CENBG, France); T. Belgya (IKI KFKI, Hungary); E. Gonzalez (CIEMAT, Spain); F. Gunsing (CEA, France); F.-J. Hambsch (IRMM, Belgium); A. Junghans (FZD, Germany); R. Nolte (PTB, Germany); S. Pomp (TSL UU, Sweden). Workshop Organizing Committee: Enrico Chiaveri (Chairman); Marco Calviani; Samuel Andriamonje; Eric Berthoumieux; Carlos Guerrero; Roberto Losito; Vasilis Vlachoudis. Workshop Assistant: Geraldine Jean.

  20. Final Scientific EFNUDAT Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, J.N.

    2010-11-09

    Part 7 of the Final Scientific EFNUDAT Workshop - organized by the CERN/EN-STI group on behalf of the n_TOF Collaboration - will be held at CERN, Geneva (Switzerland) from 30 August to 2 September 2010 inclusive. EFNUDAT website: http://www.efnudat.eu. Topics of interest include: Data evaluation; Cross section measurements; Experimental techniques; Uncertainties and covariances; Fission properties; Current and future facilities. International Advisory Committee: C. Barreau (CENBG, France); T. Belgya (IKI KFKI, Hungary); E. Gonzalez (CIEMAT, Spain); F. Gunsing (CEA, France); F.-J. Hambsch (IRMM, Belgium); A. Junghans (FZD, Germany); R. Nolte (PTB, Germany); S. Pomp (TSL UU, Sweden). Workshop Organizing Committee: Enrico Chiaveri (Chairman); Marco Calviani; Samuel Andriamonje; Eric Berthoumieux; Carlos Guerrero; Roberto Losito; Vasilis Vlachoudis. Workshop Assistant: Geraldine Jean.

  1. Final Scientific EFNUDAT Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lantz, Mattias; Neudecker, Denise

    2010-11-09

    Part 5 of the Final Scientific EFNUDAT Workshop - organized by the CERN/EN-STI group on behalf of the n_TOF Collaboration - will be held at CERN, Geneva (Switzerland) from 30 August to 2 September 2010 inclusive. EFNUDAT website: http://www.efnudat.eu. Topics of interest include: Data evaluation; Cross section measurements; Experimental techniques; Uncertainties and covariances; Fission properties; Current and future facilities. International Advisory Committee: C. Barreau (CENBG, France); T. Belgya (IKI KFKI, Hungary); E. Gonzalez (CIEMAT, Spain); F. Gunsing (CEA, France); F.-J. Hambsch (IRMM, Belgium); A. Junghans (FZD, Germany); R. Nolte (PTB, Germany); S. Pomp (TSL UU, Sweden). Workshop Organizing Committee: Enrico Chiaveri (Chairman); Marco Calviani; Samuel Andriamonje; Eric Berthoumieux; Carlos Guerrero; Roberto Losito; Vasilis Vlachoudis. Workshop Assistant: Geraldine Jean.

  2. Final Scientific EFNUDAT Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlachoudis, Vasilis

    2010-11-09

    Part 8 of the Final Scientific EFNUDAT Workshop - organized by the CERN/EN-STI group on behalf of the n_TOF Collaboration - will be held at CERN, Geneva (Switzerland) from 30 August to 2 September 2010 inclusive. EFNUDAT website: http://www.efnudat.eu. Topics of interest include: Data evaluation; Cross section measurements; Experimental techniques; Uncertainties and covariances; Fission properties; Current and future facilities. International Advisory Committee: C. Barreau (CENBG, France); T. Belgya (IKI KFKI, Hungary); E. Gonzalez (CIEMAT, Spain); F. Gunsing (CEA, France); F.-J. Hambsch (IRMM, Belgium); A. Junghans (FZD, Germany); R. Nolte (PTB, Germany); S. Pomp (TSL UU, Sweden). Workshop Organizing Committee: Enrico Chiaveri (Chairman); Marco Calviani; Samuel Andriamonje; Eric Berthoumieux; Carlos Guerrero; Roberto Losito; Vasilis Vlachoudis. Workshop Assistant: Geraldine Jean.

  3. Final Scientific EFNUDAT Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-11-09

    The Final Scientific EFNUDAT Workshop - organized by the CERN/EN-STI group on behalf of the n_TOF Collaboration - will be held at CERN, Geneva (Switzerland) from 30 August to 2 September 2010 inclusive. EFNUDAT website: http://www.efnudat.eu. Topics of interest include: Data evaluation; Cross section measurements; Experimental techniques; Uncertainties and covariances; Fission properties; Current and future facilities. International Advisory Committee: C. Barreau (CENBG, France); T. Belgya (IKI KFKI, Hungary); E. Gonzalez (CIEMAT, Spain); F. Gunsing (CEA, France); F.-J. Hambsch (IRMM, Belgium); A. Junghans (FZD, Germany); R. Nolte (PTB, Germany); S. Pomp (TSL UU, Sweden). Workshop Organizing Committee: Enrico Chiaveri (Chairman); Marco Calviani; Samuel Andriamonje; Eric Berthoumieux; Carlos Guerrero; Roberto Losito; Vasilis Vlachoudis. Workshop Assistant: Geraldine Jean.

  4. Final Scientific EFNUDAT Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-11-09

    The Final Scientific EFNUDAT Workshop - organized by the CERN/EN-STI group on behalf of the n_TOF Collaboration - will be held at CERN, Geneva (Switzerland) from 30 August to 2 September 2010 inclusive. EFNUDAT website: http://www.efnudat.eu. Topics of interest include: Data evaluation; Cross section measurements; Experimental techniques; Uncertainties and covariances; Fission properties; Current and future facilities. International Advisory Committee: C. Barreau (CENBG, France); T. Belgya (IKI KFKI, Hungary); E. Gonzalez (CIEMAT, Spain); F. Gunsing (CEA, France); F.-J. Hambsch (IRMM, Belgium); A. Junghans (FZD, Germany); R. Nolte (PTB, Germany); S. Pomp (TSL UU, Sweden). Workshop Organizing Committee: Enrico Chiaveri (Chairman); Marco Calviani; Samuel Andriamonje; Eric Berthoumieux; Carlos Guerrero; Roberto Losito; Vasilis Vlachoudis. Workshop Assistant: Geraldine Jean.

  5. Final Scientific EFNUDAT Workshop

    ScienceCinema

    None

    2017-12-09

    The Final Scientific EFNUDAT Workshop - organized by the CERN/EN-STI group on behalf of the n_TOF Collaboration - will be held at CERN, Geneva (Switzerland) from 30 August to 2 September 2010 inclusive. EFNUDAT website: http://www.efnudat.eu. Topics of interest include: Data evaluation; Cross section measurements; Experimental techniques; Uncertainties and covariances; Fission properties; Current and future facilities. International Advisory Committee: C. Barreau (CENBG, France); T. Belgya (IKI KFKI, Hungary); E. Gonzalez (CIEMAT, Spain); F. Gunsing (CEA, France); F.-J. Hambsch (IRMM, Belgium); A. Junghans (FZD, Germany); R. Nolte (PTB, Germany); S. Pomp (TSL UU, Sweden). Workshop Organizing Committee: Enrico Chiaveri (Chairman); Marco Calviani; Samuel Andriamonje; Eric Berthoumieux; Carlos Guerrero; Roberto Losito; Vasilis Vlachoudis. Workshop Assistant: Géraldine Jean.

  6. Angular distributions for high-mass jet pairs and a limit on the energy scale of compositeness for quarks from the CERN pp¯ collider

    NASA Astrophysics Data System (ADS)

    Arnison, G.; Albajar, C.; Albrow, M. G.; Allkofer, O. C.; Astbury, A.; Aubert, B.; Axon, T.; Bacci, C.; Bacon, T.; Batley, J. R.; Bauer, G.; Bellinger, J.; Bettini, A.; Bézaguet, A.; Bock, R. K.; Bos, K.; Buckley, E.; Busetto, G.; Catz, P.; Cennini, P.; Centro, S.; Ceradini, F.; Ciapetti, G.; Cittolin, S.; Clarke, D.; Cline, D.; Cochet, C.; Colas, J.; Colas, P.; Corden, M.; Coughlan, J. A.; Cox, G.; Dau, D.; Debeer, M.; Debrion, J. P.; Degiorgi, M.; Della Negra, M.; Demoulin, M.; Denby, B.; Denegri, D.; Diciaccio, A.; Dobrzynski, L.; Dorenbosch, J.; Dowell, J. D.; Duchovni, E.; Edgecock, R.; Eggert, K.; Eisenhandler, E.; Ellis, N.; Erhard, P.; Faissner, H.; Keeler, M. Fincke; Flynn, P.; Fontaine, G.; Frey, R.; Frühwirth, R.; Garvey, J.; Gee, D.; Geer, S.; Ghesquière, C.; Ghez, P.; Ghio, F.; Giacomelli, P.; Gibson, W. R.; Giraud-Héraud, Y.; Givernaud, A.; Gonidec, A.; Goodman, M.; Grassmann, H.; Grayer, G.; Guryn, W.; Hansl-Kozanecka, T.; Haynes, W.; Haywood, S. J.; Hoffmann, H.; Holthuizen, D. J.; Homer, R. J.; Honma, A.; Ikeda, M.; Jank, W.; Jimack, M.; Jorat, G.; Kalmus, P. I. P.; Karimäki, V.; Keeler, R.; Kenyon, I.; Kernan, A.; Kienzle, W.; Kinnunen, R.; Kozanecki, W.; Krammer, M.; Kroll, J.; Kryn, D.; Kyberd, P.; Lacava, F.; Laugier, J. P.; Lees, J. P.; Leuchs, R.; Levegrun, S.; Lévêque, A.; Levi, M.; Linglin, D.; Locci, E.; Long, K.; Markiewicz, T.; Markytan, M.; Martin, T.; Maurin, G.; McMahon, T.; Mendiburu, J.-P.; Meneguzzo, A.; Meyer, O.; Meyer, T.; Minard, M.-N.; Mohammad, M.; Morgan, K.; Moricca, M.; Moser, H.; Mours, B.; Muller, Th.; Nandi, A.; Naumann, L.; Norton, A.; Pascoli, D.; Pauss, F.; Perault, C.; Petrolo, E.; Mortari, G. Piano; Pietarinen, E.; Pigot, C.; Pimiä, M.; Pitman, D.; Placci, A.; Porte, J.-P.; Radermacher, E.; Ransdell, J.; Redelberger, T.; Reithler, H.; Revol, J. P.; Richman, J.; Rijssenbeek, M.; Robinson, D.; Rohlf, J.; Rossi, P.; Ruhm, W.; Rubbia, C.; Sajot, G.; Salvini, G.; Sass, J.; Sadoulet, B.; Samyn, D.; Savoy-Navarro, A.; Schinzel, D.; Schwartz, A.; Scott, W.; Shah, T. P.; Sheer, I.; Siotis, I.; Smith, D.; Sobie, R.; Sphicas, P.; Strauss, J.; Streets, J.; Stubenrauch, C.; Summers, D.; Sumorok, K.; Szoncso, F.; Tao, C.; Taurok, A.; Have, I. Ten; Tether, S.; Thompson, G.; Tscheslog, E.; Tuominiemi, J.; Van Eijk, B.; Verecchia, P.; Vialle, J. P.; Villasenor, L.; Virdee, T. S.; Von der Schmitt, H.; Von Schlippe, W.; Vrana, J.; Vuillemin, V.; Wahl, H. D.; Watkins, P.; Wildish, A.; Wilke, R.; Wilson, J.; Wingerter, I.; Wimpenny, S. J.; Wulz, C. E.; Wyatt, T.; Yvert, M.; Zaccardelli, C.; Zacharov, I.; Zaganidis, N.; Zanello, L.; Zotto, P.; UA1 Collaboration

    1986-09-01

    Angular distributions of high-mass jet pairs (180 < m2J < 350 GeV) have been measured in the UA1 experiment at the CERN pp̄ Collider (√s = 630 GeV). We show that the angular distributions are independent of the subprocess centre-of-mass (CM) energy over this range, and use the data to put constraints on the definition of the Q² scale. The distribution for the very-high-mass jet pairs (240 < m2J < 300 GeV) has also been used to obtain a lower limit on the energy scale Λc of compositeness of quarks. We find Λc > 415 GeV at the 95% confidence level.
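    For context, dijet angular analyses of this type are commonly expressed in terms of the variable χ, which makes the nearly Rutherford-like QCD prediction approximately flat and confines a possible quark-compositeness (contact-interaction) signal to small χ. The definitions below are standard conventions, not expressions quoted from this record.

      \[
        \chi \;=\; \frac{1+\cos\theta^{*}}{1-\cos\theta^{*}} \;=\; e^{\,|y_{1}-y_{2}|},
        \qquad
        \hat{s} \;=\; m_{2J}^{2},
      \]

    where \(\theta^{*}\) is the parton scattering angle and \(y_{1,2}\) are the jet rapidities in the dijet rest frame; QCD alone yields an approximately flat \(\mathrm{d}N/\mathrm{d}\chi\), while a contact interaction of scale \(\Lambda_c\) produces an excess at small \(\chi\) (large scattering angles).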

  7. Brightness and uniformity measurements of plastic scintillator tiles at the CERN H2 test beam

    DOE PAGES

    Chatrchyan, S.; Sirunyan, A. M.; Tumasyan, A.; ...

    2018-01-05

    Here, we study the light output, light collection efficiency and signal timing of a variety of organic scintillators that are being considered for the upgrade of the hadronic calorimeter of the CMS detector. The experimental data are collected at the H2 test-beam area at CERN, using a 150 GeV muon beam. In particular, we investigate the usage of over-doped and green-emitting plastic scintillators, two solutions that have not been extensively considered. We present a study of the energy distribution in plastic-scintillator tiles, the hit efficiency as a function of the hit position, and a study of the signal timing for blue and green scintillators.

  8. Brightness and uniformity measurements of plastic scintillator tiles at the CERN H2 test beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatrchyan, S.; Sirunyan, A. M.; Tumasyan, A.

    Here, we study the light output, light collection efficiency and signal timing of a variety of organic scintillators that are being considered for the upgrade of the hadronic calorimeter of the CMS detector. The experimental data are collected at the H2 test-beam area at CERN, using a 150 GeV muon beam. In particular, we investigate the usage of over-doped and green-emitting plastic scintillators, two solutions that have not been extensively considered. We present a study of the energy distribution in plastic-scintillator tiles, the hit efficiency as a function of the hit position, and a study of the signal timing for blue and green scintillators.

  9. The beam and detector of the NA62 experiment at CERN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gil, E. Cortina; Albarrán, E. Martín; Minucci, E.

    NA62 is a fixed-target experiment at the CERN SPS dedicated to measurements of rare kaon decays. Such measurements, like the branching fraction of the K+ → π+ ν ν̄ decay, have the potential to bring significant insights into new physics processes when compared with precise theoretical predictions. For this purpose, innovative techniques have been developed, in particular in the domain of low-mass tracking devices. Detector construction spanned several years, from 2009 to 2014. The collaboration started detector commissioning in 2014 and will collect data until the end of 2018. The beam line and detector components are described together with their early performance obtained from 2014 and 2015 data.

  10. Prospects for K+ →π+ ν ν ‾ observation at CERN in NA62

    NASA Astrophysics Data System (ADS)

    Khoriauli, G.; Aglieri Rinella, G.; Aliberti, R.; Ambrosino, F.; Angelucci, B.; Antonelli, A.; Anzivino, G.; Arcidiacono, R.; Azhinenko, I.; Balev, S.; Bendotti, J.; Biagioni, A.; Biino, C.; Bizzeti, A.; Blazek, T.; Blik, A.; Bloch-Devaux, B.; Bolotov, V.; Bonaiuto, V.; Bragadireanu, M.; Britton, D.; Britvich, G.; Bucci, F.; Butin, F.; Capitolo, E.; Capoccia, C.; Capussela, T.; Carassiti, V.; Cartiglia, N.; Cassese, A.; Catinaccio, A.; Cecchetti, A.; Ceccucci, A.; Cenci, P.; Cerny, V.; Cerri, C.; Checcucci, B.; Chikilev, O.; Ciaranfi, R.; Collazuol, G.; Conovaloff, A.; Cooke, P.; Cooper, P.; Corradi, G.; Cortina Gil, E.; Costantini, F.; Cotta Ramusino, A.; Coward, D.; D'Agostini, G.; Dainton, J.; Dalpiaz, P.; Danielsson, H.; Degrange, J.; De Simone, N.; Di Filippo, D.; Di Lella, L.; Dixon, N.; Doble, N.; Duk, V.; Elsha, V.; Engelfried, J.; Enik, T.; Falaleev, V.; Fantechi, R.; Fascianelli, V.; Federici, L.; Fiorini, M.; Fry, J.; Fucci, A.; Fulton, L.; Gallorini, S.; Gamberini, E.; Gatignon, L.; Georgiev, G.; Gianoli, A.; Giorgi, M.; Giudici, S.; Glonti, L.; Goncalves Martins, A.; Gonnella, F.; Goudzovski, E.; Guida, R.; Gushchin, E.; Hahn, F.; Hallgren, B.; Heath, H.; Herman, F.; Hutchcroft, D.; Iacopini, E.; Imbergamo, E.; Jamet, O.; Jarron, P.; Kampf, K.; Kaplon, J.; Karjavin, V.; Kekelidze, V.; Kholodenko, S.; Khoriauli, G.; Khudyakov, A.; Kiryushin, Yu.; Kleinknecht, K.; Kluge, A.; Koval, M.; Kozhuharov, V.; Krivda, M.; Kudenko, Y.; Kunze, J.; Lamanna, G.; Lazzeroni, C.; Lenci, R.; Lenti, M.; Leonardi, E.; Lichard, P.; Lietava, R.; Litov, L.; Lomidze, D.; Lonardo, A.; Lurkin, N.; Madigozhin, D.; Maire, G.; Makarov, A.; Mandeiro, C.; Mannelli, I.; Mannocchi, G.; Mapelli, A.; Marchetto, F.; Marchevski, R.; Martellotti, S.; Massarotti, P.; Massri, K.; Matak, P.; Maurice, E.; Menichetti, E.; Mila, G.; Minucci, E.; Mirra, M.; Misheva, M.; Molokanova, N.; Morant, J.; Morel, M.; Moulson, M.; Movchan, S.; Munday, D.; Napolitano, M.; Neri, I.; Newson, F.; Norton, A.; Noy, M.; Nuessle, G.; Obraztsov, V.; Ostankov, A.; Padolski, S.; Page, R.; Palladino, V.; Pardons, A.; Parkinson, C.; Pedreschi, E.; Pepe, M.; Perez Gomez, F.; Perrin-Terrin, M.; Peruzzo, L.; Petrov, P.; Petrucci, F.; Piandani, R.; Piccini, M.; Pietreanu, D.; Pinzino, J.; Pivanti, M.; Polenkevich, I.; Popov, I.; Potrebenikov, Yu.; Protopopescu, D.; Raffaelli, F.; Raggi, M.; Riedler, P.; Romano, A.; Rubin, P.; Ruggiero, G.; Russo, V.; Ryjov, V.; Salamon, A.; Salina, G.; Samsonov, V.; Santoni, C.; Santovetti, E.; Saracino, G.; Sargeni, F.; Schifano, S.; Semenov, V.; Sergi, A.; Serra, M.; Shkarovskiy, S.; Soldi, D.; Sotnikov, A.; Sougonyaev, V.; Sozzi, M.; Spadaro, T.; Spinella, F.; Staley, R.; Statera, M.; Sutcliffe, P.; Szilasi, N.; Tagnani, D.; Valdata-Nappi, M.; Valente, P.; Vasile, M.; Vassilieva, T.; Velghe, B.; Veltri, M.; Venditti, S.; Volpe, R.; Vormstein, M.; Wahl, H.; Wanke, R.; Wertelaers, P.; Winhart, A.; Winston, R.; Wrona, B.; Yushchenko, O.; Zamkovsky, M.; Zinchenko, A.; NA62 Collaboration

    2016-01-01

    The main physics goal of the NA62 experiment at CERN is to precisely measure the branching ratio of the rare kaon decay K+ → π+ ν ν̄. This decay is strongly suppressed in the Standard Model; on the other hand, its branching ratio is calculated with high accuracy. NA62 is designed to measure the K+ → π+ ν ν̄ decay rate with an uncertainty better than 10%. The measurement can serve as a probe of new physics phenomena that could alter the decay rate. The NA62 experiment was successfully launched in October 2014. The theoretical framework, the NA62 detector, and preliminary results are reviewed in this article.
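
    The quoted goal of better than 10% uncertainty is, to leading order, a counting-statistics requirement. The short Python sketch below illustrates that back-of-the-envelope estimate; the function name and the event counts are hypothetical illustrations, not NA62 numbers.

      import math

      def relative_stat_uncertainty(n_signal, n_background=0.0):
          """Relative statistical uncertainty of a counting measurement,
          assuming Poisson fluctuations of the observed events and a
          perfectly known background (illustrative assumptions only)."""
          n_observed = n_signal + n_background
          return math.sqrt(n_observed) / n_signal

      # Hypothetical example: ~100 signal events over ~10 background events
      # give a statistical uncertainty of roughly 10%, the order of
      # precision the abstract quotes for the branching-ratio measurement.
      print(f"{relative_stat_uncertainty(100, 10):.1%}")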

  11. A Tony Thomas-Inspired Guide to INSPIRE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Connell, Heath B. (Fermilab)

    2010-04-01

    The SPIRES database was created in the late 1960s to catalogue the high energy physics preprints received by the SLAC Library. In the early 1990s it became the first database on the web and the first website outside of Europe. Although indispensable to the HEP community, its aging software infrastructure is becoming a serious liability. In a joint project involving CERN, DESY, Fermilab and SLAC, a new database, INSPIRE, is being created to replace SPIRES using CERN's modern, open-source Invenio database software. INSPIRE will maintain the content and functionality of SPIRES plus many new features. I describe this evolution from the birth of SPIRES to the current day, noting that the career of Tony Thomas spans this timeline.

  12. How can we turn a science exhibition into a really successful outreach activity?

    NASA Astrophysics Data System (ADS)

    Farrona, A. M. M.; Vilar, R.

    2016-04-01

    In April 2013, a CERN exhibition was shown in Santander: "The largest scientific instrument ever built". Several activities were organised around the exhibition: guided tours for children, young people and adults, workshops, film screenings, and more. In this way, the exhibition was visited by more than two thousand people. We must keep in mind that Santander is a small city whose population does not usually take part in outreach activities. With this contribution, we want to show how it is possible to take full advantage of science exhibitions. The exhibition made it possible to present the Large Hadron Collider experiment at CERN to a large part of the Santander population, and to awaken their interest in and enthusiasm for science.

  13. Protonium production in ATHENA

    NASA Astrophysics Data System (ADS)

    Venturelli, L.; Amoretti, M.; Amsler, C.; Bonomi, G.; Carraro, C.; Cesar, C. L.; Charlton, M.; Doser, M.; Fontana, A.; Funakoshi, R.; Genova, P.; Hayano, R. S.; Jørgensen, L. V.; Kellerbauer, A.; Lagomarsino, V.; Landua, R.; Rizzini, E. Lodi; Macrì, M.; Madsen, N.; Manuzio, G.; Mitchard, D.; Montagna, P.; Posada, L. G.; Pruys, H.; Regenfus, C.; Rotondi, A.; Testera, G.; van der Werf, D. P.; Variola, A.; Yamazaki, Y.; Zurlo, N.; Athena Collaboration

    2007-08-01

    The ATHENA experiment at CERN, after producing cold antihydrogen atoms for the first time in 2002, has synthesised protonium atoms in vacuum at very low energies. Protonium, i.e. the antiproton-proton bound system, is of interest for testing fundamental physical theories. In the nested Penning trap of the ATHENA apparatus, protonium has been produced as a result of a chemical reaction between an antiproton and the simplest matter molecule, H2+. The protonium atoms thus formed have kinetic energies in the range 40-700 meV and are metastable, with mean lifetimes of the order of 1 μs. Our result shows that it will be possible to start measurements on protonium at low-energy antiproton facilities, such as the AD at CERN or FLAIR at GSI.

  14. Towards the high-accuracy determination of the 238U fission cross section at the threshold region at CERN - n_TOF

    NASA Astrophysics Data System (ADS)

    Diakaki, M.; Audouin, L.; Berthoumieux, E.; Calviani, M.; Colonna, N.; Dupont, E.; Duran, I.; Gunsing, F.; Leal-Cidoncha, E.; Le Naour, C.; Leong, L. S.; Mastromarco, M.; Paradela, C.; Tarrio, D.; Tassan-Got, L.; Aerts, G.; Altstadt, S.; Alvarez, H.; Alvarez-Velarde, F.; Andriamonje, S.; Andrzejewski, J.; Badurek, G.; Barbagallo, M.; Baumann, P.; Becares, V.; Becvar, F.; Belloni, F.; Berthier, B.; Billowes, J.; Boccone, V.; Bosnar, D.; Brugger, M.; Calvino, F.; Cano-Ott, D.; Capote, R.; Carrapiço, C.; Cennini, P.; Cerutti, F.; Chiaveri, E.; Chin, M.; Cortes, G.; Cortes-Giraldo, M. A.; Cosentino, L.; Couture, A.; Cox, J.; David, S.; Dillmann, I.; Domingo-Pardo, C.; Dressler, R.; Dridi, W.; Eleftheriadis, C.; Embid-Segura, M.; Ferrant, L.; Ferrari, A.; Finocchiaro, P.; Fraval, K.; Fujii, K.; Furman, W.; Ganesan, S.; Garcia, A. R.; Giubrone, G.; Gomez-Hornillos, M. B.; Goncalves, I. F.; Gonzalez-Romero, E.; Goverdovski, A.; Gramegna, F.; Griesmayer, E.; Guerrero, C.; Gurusamy, P.; Haight, R.; Heil, M.; Heinitz, S.; Igashira, M.; Isaev, S.; Jenkins, D. G.; Jericha, E.; Kadi, Y.; Kaeppeler, F.; Karadimos, D.; Karamanis, D.; Kerveno, M.; Ketlerov, V.; Kivel, N.; Kokkoris, M.; Konovalov, V.; Krticka, M.; Kroll, J.; Lampoudis, C.; Langer, C.; Lederer, C.; Leeb, H.; Lo Meo, S.; Losito, R.; Lozano, M.; Manousos, A.; Marganiec, J.; Martinez, T.; Marrone, S.; Massimi, C.; Mastinu, P.; Mendoza, E.; Mengoni, A.; Milazzo, P. M.; Mingrone, F.; Mirea, M.; Mondelaers, W.; Moreau, C.; Mosconi, M.; Musumarra, A.; O'Brien, S.; Pancin, J.; Patronis, N.; Pavlik, A.; Pavlopoulos, P.; Perkowski, J.; Perrot, L.; Pigni, M. T.; Plag, R.; Plompen, A.; Plukis, L.; Poch, A.; Pretel, C.; Praena, J.; Quesada, J.; Rauscher, T.; Reifarth, R.; Riego, A.; Roman, F.; Rudolf, G.; Rubbia, C.; Rullhusen, P.; Salgado, J.; Santos, C.; Sarchiapone, L.; Sarmento, R.; Saxena, A.; Schillebeeckx, P.; Schmidt, S.; Schumann, D.; Stephan, C.; Tagliente, G.; Tain, J. L.; Tavora, L.; Terlizzi, R.; Tsinganis, A.; Valenta, S.; Vannini, G.; Variale, V.; Vaz, P.; Ventura, A.; Versaci, R.; Vermeulen, M. J.; Villamarin, D.; Vincente, M. C.; Vlachoudis, V.; Vlastou, R.; Voss, F.; Wallner, A.; Walter, S.; Ware, T.; Weigand, M.; Weiß, C.; Wiesher, M.; Wisshak, K.; Wright, T.; Zugec, P.

    2016-03-01

    The 238U fission cross section is an international standard above 2 MeV, where the fission plateau starts. However, due to its importance in fission reactors, this cross section should also be known very accurately in the threshold region below 2 MeV. The 238U fission cross section has been measured relative to the 235U fission cross section at CERN - n_TOF with different detection systems. These datasets have been collected and suitably combined to increase the counting statistics in the threshold region from about 300 keV up to 3 MeV. The results are compared with other experimental data, evaluated libraries, and the IAEA standards.
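
    As a rough illustration of how independent datasets can be "suitably combined", the sketch below uses a plain inverse-variance weighted average under simplifying assumptions (uncorrelated, Gaussian uncertainties). It is a generic recipe with hypothetical numbers, not the actual n_TOF combination procedure.

      import numpy as np

      def inverse_variance_average(values, errors):
          """Combine independent measurements of the same quantity with an
          inverse-variance weighted average (assumes uncorrelated Gaussian
          uncertainties; a generic recipe, not the n_TOF procedure)."""
          values = np.asarray(values, dtype=float)
          weights = 1.0 / np.asarray(errors, dtype=float) ** 2
          mean = np.sum(weights * values) / np.sum(weights)
          error = 1.0 / np.sqrt(np.sum(weights))
          return mean, error

      # Hypothetical 238U/235U cross-section ratios in one energy bin,
      # measured with two different detection systems.
      print(inverse_variance_average([0.032, 0.030], [0.002, 0.003]))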

  15. The high Beta cryo-modules and the associated cryogenic system for the HIE-ISOLDE upgrade at CERN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delruelle, N.; Leclercq, Y.; Pirotte, O.

    2014-01-29

    The major upgrade of the energy and intensity of the existing ISOLDE and REX-ISOLDE radioactive ion beam facilities at CERN requires the replacement of most of the existing ISOLDE post-acceleration equipment by a superconducting linac based on quarter-wave resonators housed together with superconducting solenoids in a series of four high-β and two low-β cryo-modules. As well as providing optimum conditions for physics, the cryo-modules need to function under stringent vacuum and cryogenic conditions. We present the detailed design and expected cryogenic performance of the high-β cryo-module together with the cryogenic supply and distribution system destined to service the complete superconducting linac.

  16. Proton enhancement at large pT at the CERN Large Hadron Collider without structure in associated-particle distribution.

    PubMed

    Hwa, Rudolph C; Yang, C B

    2006-07-28

    The production of pions and protons in the pT range between 10 and 20 GeV/c in Pb+Pb collisions at the CERN LHC is studied in the recombination model. It is shown that the dominant mechanism for hadronization is the recombination of shower partons from neighboring jets when the jet density is high. Protons are more copiously produced than pions in that pT range because the coalescing partons can have lower momentum fractions, but no thermal partons are involved. The proton-to-pion ratio can be as high as 20. When such high-pT hadrons are used as trigger particles, there will not be any associated particles that are not in the background.

  17. Neutron capture cross section measurement of 151Sm at the CERN neutron time of flight facility (n_TOF).

    PubMed

    Abbondanno, U; Aerts, G; Alvarez-Velarde, F; Alvarez-Pol, H; Andriamonje, S; Andrzejewski, J; Badurek, G; Baumann, P; Becvár, F; Benlliure, J; Berthoumieux, E; Calviño, F; Cano-Ott, D; Capote, R; Cennini, P; Chepel, V; Chiaveri, E; Colonna, N; Cortes, G; Cortina, D; Couture, A; Cox, J; Dababneh, S; Dahlfors, M; David, S; Dolfini, R; Domingo-Pardo, C; Duran, I; Embid-Segura, M; Ferrant, L; Ferrari, A; Ferreira-Marques, R; Frais-Koelbl, H; Furman, W; Goncalves, I; Gallino, R; Gonzalez-Romero, E; Goverdovski, A; Gramegna, F; Griesmayer, E; Gunsing, F; Haas, B; Haight, R; Heil, M; Herrera-Martinez, A; Isaev, S; Jericha, E; Käppeler, F; Kadi, Y; Karadimos, D; Kerveno, M; Ketlerov, V; Koehler, P; Konovalov, V; Krticka, M; Lamboudis, C; Leeb, H; Lindote, A; Lopes, I; Lozano, M; Lukic, S; Marganiec, J; Marrone, S; Martinez-Val, J; Mastinu, P; Mengoni, A; Milazzo, P M; Molina-Coballes, A; Moreau, C; Mosconi, M; Neves, F; Oberhummer, H; O'Brien, S; Pancin, J; Papaevangelou, T; Paradela, C; Pavlik, A; Pavlopoulos, P; Perlado, J M; Perrot, L; Pignatari, M; Plag, R; Plompen, A; Plukis, A; Poch, A; Policarpo, A; Pretel, C; Quesada, J; Raman, S; Rapp, W; Rauscher, T; Reifarth, R; Rosetti, M; Rubbia, C; Rudolf, G; Rullhusen, P; Salgado, J; Soares, J C; Stephan, C; Tagliente, G; Tain, J; Tassan-Got, L; Tavora, L; Terlizzi, R; Vannini, G; Vaz, P; Ventura, A; Villamarin, D; Vincente, M C; Vlachoudis, V; Voss, F; Wendler, H; Wiescher, M; Wisshak, K

    2004-10-15

    The 151Sm(n,γ)152Sm cross section has been measured at the spallation neutron facility n_TOF at CERN in the energy range from 1 eV to 1 MeV. The new facility combines excellent resolution in neutron time-of-flight, low repetition rates, and an unsurpassed instantaneous luminosity, resulting in rather favorable signal/background ratios. The 151Sm cross section is of importance for characterizing neutron capture nucleosynthesis in asymptotic giant branch stars. At a thermal energy of kT = 30 keV the Maxwellian-averaged cross section of this unstable isotope (t1/2 = 93 yr) was determined to be 3100 ± 160 mb, significantly larger than theoretical predictions.
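
    For reference, the Maxwellian-averaged cross section quoted at kT = 30 keV follows the definition conventionally used in s-process studies (a textbook expression added here for context, not taken from the abstract):

      \langle \sigma \rangle_{kT}
        = \frac{2}{\sqrt{\pi}}\,\frac{1}{(kT)^{2}}
          \int_{0}^{\infty} \sigma(E)\, E\, e^{-E/kT}\, \mathrm{d}E

    i.e. the energy-dependent cross section σ(E) averaged over a Maxwell-Boltzmann neutron spectrum of temperature kT.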

  18. Black holes in many dimensions at the CERN Large Hadron Collider: testing critical string theory.

    PubMed

    Hewett, JoAnne L; Lillie, Ben; Rizzo, Thomas G

    2005-12-31

    We consider black hole production at the CERN Large Hadron Collider (LHC) in a generic scenario with many extra dimensions where the standard model fields are confined to a brane. With approximately 20 dimensions, the hierarchy problem is shown to be naturally solved without the need for large compactification radii. We find that in such a scenario the properties of black holes can be used to determine the number of extra dimensions. In particular, we demonstrate that measurements of the decay distributions of such black holes at the LHC can determine with high confidence whether the number of extra dimensions is significantly larger than 6 or 7, and thus can probe one of the critical properties of string theory compactifications.
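
    For context, the statement about solving the hierarchy problem without large compactification radii rests on the standard large-extra-dimension relation below (quoted up to geometry-dependent factors of order unity; not part of the original abstract). With n flat extra dimensions of common size R and fundamental gravity scale M_*,

      M_{\mathrm{Pl}}^{2} \sim M_{*}^{\,n+2}\, R^{\,n}

    so increasing n allows M_* to remain near the TeV scale with a progressively smaller compactification radius R.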

  19. Measurement of the radiative capture cross section of the s-process branching points 204Tl and 171Tm at the n_TOF facility (CERN)

    NASA Astrophysics Data System (ADS)

    Casanovas, A.; Domingo-Pardo, C.; Guerrero, C.; Lerendegui-Marco, J.; Calviño, F.; Tarifeño-Saldivia, A.; Dressler, R.; Heinitz, S.; Kivel, N.; Quesada, J. M.; Schumann, D.; Aberle, O.; Alcayne, V.; Andrzejewski, J.; Audouin, L.; Bécares, V.; Bacak, M.; Barbagallo, M.; Bečvář, F.; Bellia, G.; Berthoumieux, E.; Billowes, J.; Bosnar, D.; Brown, A.; Busso, M.; Caamaño, M.; Caballero-Ontanaya, L.; Calviani, M.; Cano-Ott, D.; Cerutti, F.; Chen, Y. H.; Chiaveri, E.; Colonna, N.; Cortés, G.; Cortés-Giraldo, M. A.; Cosentino, L.; Cristallo, S.; Damone, L. A.; Diakaki, M.; Dietz, M.; Dupont, E.; Durán, I.; Eleme, Z.; Fernández-Domínguez, B.; Ferrari, A.; Ferreira, P.; Finocchiaro, P.; Furman, V.; Göbel, K.; Gawlik, A.; Gilardoni, S.; Glodariu, T.; Gonçalves, I. F.; González-Romero, E.; Gunsing, F.; Heyse, J.; Jenkins, D. G.; Käppeler, F.; Kadi, Y.; Katabuchi, T.; Kimura, A.; Kokkoris, M.; Kopatch, Y.; Krtička, M.; Kurtulgil, D.; Ladarescu, I.; Lederer-Woods, C.; Meo, S. Lo; Lonsdale, S. J.; Macina, D.; Martínez, T.; Masi, A.; Massimi, C.; Mastinu, P.; Mastromarco, M.; Matteucci, F.; Maugeri, E. A.; Mazzone, A.; Mendoza, E.; Mengoni, A.; Michalopoulou, V.; Milazzo, P. M.; Mingrone, F.; Musumarra, A.; Negret, A.; Nolte, R.; Ogállar, F.; Oprea, A.; Patronis, N.; Pavlik, A.; Perkowski, J.; Persanti, L.; Porras, I.; Praena, J.; Radeck, D.; Ramos, D.; Rauscher, T.; Reifarth, R.; Rochman, D.; Sabaté-Gilarte, M.; Saxena, A.; Schillebeeckx, P.; Simone, S.; Smith, A. G.; Sosnin, N. V.; Stamatopoulos, A.; Tagliente, G.; Tain, J. L.; Talip, T.; Tassan-Got, L.; Tsinganis, A.; Ulrich, J.; Valenta, S.; Vannini, G.; Variale, V.; Vaz, P.; Ventura, A.; Vlachoudis, V.; Vlastou, R.; Wallner, A.; Woods, P. J.; Wright, T.; Žugec, P.; Köster, U.

    2018-05-01

    The neutron capture cross section of some unstable nuclei is especially relevant for s-process nucleosynthesis studies. This quantity is crucial for determining the local abundance pattern, which can yield valuable information about the s-process stellar environment. In this work we describe the neutron capture (n,γ) measurement on two of these nuclei of interest, 204Tl and 171Tm, from target production to the final measurement, performed successfully at the n_TOF facility at CERN in 2014 and 2015. Preliminary results from the ongoing experimental data analysis will also be shown. These results include the first-ever experimental observation of capture resonances for these two nuclei.
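
    The connection between such capture cross sections and the local abundance pattern is commonly expressed through the classical branching factor at an s-process branch point (a standard relation included here for context, not drawn from the abstract):

      f_{\beta} = \frac{\lambda_{\beta}}{\lambda_{\beta} + \lambda_{n}},
      \qquad
      \lambda_{n} = n_{n}\, v_{T}\, \langle \sigma \rangle_{kT}

    where λ_β is the β-decay rate of the branch-point nucleus, n_n the neutron density, v_T the thermal velocity, and ⟨σ⟩_kT the Maxwellian-averaged capture cross section of the kind measured here.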

  20. Search for Invisible Decays of Sub-GeV Dark Photons in Missing-Energy Events at the CERN SPS.

    PubMed

    Banerjee, D; Burtsev, V; Cooke, D; Crivelli, P; Depero, E; Dermenev, A V; Donskov, S V; Dubinin, F; Dusaev, R R; Emmenegger, S; Fabich, A; Frolov, V N; Gardikiotis, A; Gninenko, S N; Hösgen, M; Kachanov, V A; Karneyeu, A E; Ketzer, B; Kirpichnikov, D V; Kirsanov, M M; Kovalenko, S G; Kramarenko, V A; Kravchuk, L V; Krasnikov, N V; Kuleshov, S V; Lyubovitskij, V E; Lysan, V; Matveev, V A; Mikhailov, Yu V; Myalkovskiy, V V; Peshekhonov, V D; Peshekhonov, D V; Petuhov, O; Polyakov, V A; Radics, B; Rubbia, A; Samoylenko, V D; Tikhomirov, V O; Tlisov, D A; Toropin, A N; Trifonov, A Yu; Vasilishin, B; Vasquez Arenas, G; Ulloa, P; Zhukov, K; Zioutas, K

    2017-01-06

    We report on a direct search for sub-GeV dark photons (A′), which might be produced in the reaction e⁻Z → e⁻ZA′ via kinetic mixing with photons by 100 GeV electrons incident on an active target in the NA64 experiment at the CERN SPS. The dark photons would decay invisibly into dark matter particles, resulting in events with large missing energy. No evidence for such decays was found with 2.75 × 10⁹ electrons on target. We set new limits on the γ-A′ mixing strength and exclude the invisible A′ with a mass ≲ 100 MeV as an explanation of the muon g−2 anomaly.
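
    For context, the "mixing strength" being constrained is the kinetic-mixing parameter ε, conventionally written (conventions differ by factors involving the weak mixing angle; this form is not quoted from the abstract) as

      \mathcal{L} \supset -\frac{\epsilon}{2}\, F_{\mu\nu} F'^{\mu\nu}

    so that the A′ production rate in e⁻Z → e⁻ZA′ scales as ε², which is why the null result translates directly into an upper limit on ε as a function of the A′ mass.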
