Challenges to Software/Computing for Experimentation at the LHC
NASA Astrophysics Data System (ADS)
Banerjee, Sunanda
The demands of future high energy physics experiments on software and computing have led the experiments to plan the related activities as a full-fledged project and to investigate new methodologies and languages to meet the challenges. The paths taken by the four LHC experiments ALICE, ATLAS, CMS and LHCb are coherently put together in an LHC-wide framework based on Grid technology. The current status and understanding are broadly outlined.
ACTS: from ATLAS software towards a common track reconstruction software
NASA Astrophysics Data System (ADS)
Gumpert, C.; Salzburger, A.; Kiehn, M.; Hrdinka, J.; Calace, N.; ATLAS Collaboration
2017-10-01
Reconstruction of charged particles’ trajectories is a crucial task for most particle physics experiments. The high instantaneous luminosity achieved at the LHC leads to a high number of proton-proton collisions per bunch crossing, which has put the track reconstruction software of the LHC experiments through a thorough test. Preserving track reconstruction performance under increasingly difficult experimental conditions, while keeping the usage of computational resources at a reasonable level, is an inherent problem for many HEP experiments. Exploiting concurrent algorithms and using multivariate techniques for track identification are the primary strategies to achieve that goal. Starting from current ATLAS software, the ACTS project aims to encapsulate track reconstruction software into a generic, framework- and experiment-independent software package. It provides a set of high-level algorithms and data structures for performing track reconstruction tasks as well as fast track simulation. The software is developed with special emphasis on thread-safety to support parallel execution of the code and data structures are optimised for vectorisation to speed up linear algebra operations. The implementation is agnostic to the details of the detection technologies and magnetic field configuration which makes it applicable to many different experiments.
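The thread-safety emphasis described above can be illustrated in a toy way: a pure propagation function with no shared mutable state can safely be mapped over many tracks concurrently. The planar helical-motion model, field value, step sizes, and track parameters below are invented for illustration and are unrelated to the actual ACTS API (which is C++).

```python
# A minimal sketch of stateless track propagation run concurrently.
# All numbers are toy units (charge q = 1, unit mass); this is not ACTS code.

from concurrent.futures import ThreadPoolExecutor

def propagate(track, n_steps=100, dt=0.01, bz=2.0):
    """Advance (x, y, px, py) in a uniform B field along z using Euler steps."""
    x, y, px, py = track
    for _ in range(n_steps):
        # Planar Lorentz force: dp/dt = q v x B, evaluated with old momenta.
        px, py = px + py * bz * dt, py - px * bz * dt
        x, y = x + px * dt, y + py * dt
    return x, y, px, py

# Because propagate() touches no shared state, mapping it over tracks from
# several threads gives exactly the same result as a sequential loop.
tracks = [(0.0, 0.0, 1.0, 0.1 * i) for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    states = list(pool.map(propagate, tracks))
```

The key design point mirrored here is that thread safety comes from the function owning all of its state, so no locks are needed.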
Developments in the ATLAS Tracking Software ahead of LHC Run 2
NASA Astrophysics Data System (ADS)
Styles, Nicholas; Bellomo, Massimiliano; Salzburger, Andreas; ATLAS Collaboration
2015-05-01
After a hugely successful first run, the Large Hadron Collider (LHC) is currently in a shut-down period, during which essential maintenance and upgrades are being performed on the accelerator. The ATLAS experiment, one of the four large LHC experiments, has also used this period for consolidation and further development of the detector and of its software framework, ahead of the new challenges that will be brought by the increased centre-of-mass energy and instantaneous luminosity in the next run period. This is of particular relevance for the ATLAS Tracking software, responsible for reconstructing the trajectory of charged particles through the detector, which faces a steep increase in CPU consumption due to the additional combinatorics of the high-multiplicity environment. The steps taken to mitigate this increase and stay within the available computing resources while maintaining the excellent performance of the tracking software in terms of the information provided to the physics analyses will be presented. Particular focus will be given to changes to the Event Data Model, replacement of the maths library, and adoption of a new persistent output format. The resulting CPU profiling results will be discussed, as well as the performance of the algorithms for physics processes under the expected conditions for the next LHC run.
Simulation of the MoEDAL experiment
NASA Astrophysics Data System (ADS)
King, Matthew; MoEDAL Collaboration
2016-04-01
The MoEDAL experiment (Monopole and Exotics Detector at the LHC) is designed to search directly for magnetic monopoles and other highly ionising stable or meta-stable particles at the LHC. The MoEDAL detector comprises an array of plastic track detectors and aluminium trapping volumes around the P8 intersection region, opposite the LHCb detector. TimePix devices are also installed for monitoring of the experiment. As MoEDAL mostly employs passive detectors, the software development focusses on particle simulation rather than digitisation or reconstruction. Here we present the current status of the MoEDAL simulation software, in particular the development of a material description of the detector and simulations of monopole production and propagation at MoEDAL.
An efficient, modular and simple tape archiving solution for LHC Run-3
NASA Astrophysics Data System (ADS)
Murray, S.; Bahyl, V.; Cancio, G.; Cano, E.; Kotlyar, V.; Kruse, D. F.; Leduc, J.
2017-10-01
The IT Storage group at CERN develops the software responsible for archiving to tape the custodial copy of the physics data generated by the LHC experiments. Physics Run 3 will start in 2021 and will introduce two major challenges for which the tape archive software must be evolved. Firstly, the software will need to make more efficient use of tape drives in order to sustain the predicted data rate of 150 petabytes per year, as opposed to the current 50 petabytes per year. Secondly, the software will need to be seamlessly integrated with EOS, which has become the de facto disk storage system provided by the IT Storage group for physics data. The tape storage software for LHC physics Run 3 is code-named CTA (the CERN Tape Archive). This paper describes how CTA will introduce a pre-emptive drive scheduler to use tape drives more efficiently, will encapsulate all tape software into a single module that will sit behind one or more EOS systems, and will be simpler by dropping support for obsolete backwards compatibility.
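The pre-emptive drive scheduling mentioned above can be sketched with a toy priority-queue model. The class, the numeric priorities, and the displace-the-lowest-priority-mount rule are invented for illustration and do not reflect CTA's actual policy, which must also account for mount/unmount latency and tape draining.

```python
# Toy pre-emptive scheduler: a fixed pool of drives serves a priority queue
# of archive/retrieve requests; a high-priority request can displace the
# lowest-priority running one. Not CTA code; an illustrative sketch only.

import heapq

class DriveScheduler:
    def __init__(self, n_drives):
        self.free = list(range(n_drives))   # idle drive ids
        self.queue = []                     # (-priority, seq, name) min-heap
        self.running = {}                   # drive id -> (priority, name)
        self._seq = 0                       # FIFO tie-breaker for equal priority

    def submit(self, name, priority):
        heapq.heappush(self.queue, (-priority, self._seq, name))
        self._seq += 1
        self._dispatch()

    def finish(self, drive):
        del self.running[drive]
        self.free.append(drive)
        self._dispatch()

    def _dispatch(self):
        # Fill idle drives with the highest-priority queued requests.
        while self.queue and self.free:
            neg_prio, _, name = heapq.heappop(self.queue)
            self.running[self.free.pop()] = (-neg_prio, name)
        # Pre-empt: displace the lowest-priority running job while a queued
        # one outranks it (a real system would first drain the mounted tape).
        while self.queue and self.running:
            drive, (lo_prio, lo_name) = min(self.running.items(),
                                            key=lambda kv: kv[1][0])
            top_prio = -self.queue[0][0]
            if top_prio <= lo_prio:
                break
            _, _, name = heapq.heappop(self.queue)
            self.running[drive] = (top_prio, name)
            heapq.heappush(self.queue, (-lo_prio, self._seq, lo_name))
            self._seq += 1
```

For example, with a single drive, a priority-5 retrieve submitted after a priority-1 archive takes over the drive and the archive is requeued until the drive is released.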
Federated software defined network operations for LHC experiments
NASA Astrophysics Data System (ADS)
Kim, Dongkyun; Byeon, Okhwan; Cho, Kihyeon
2013-09-01
The most well-known high-energy physics collaboration, the Large Hadron Collider (LHC), which is based on e-Science, has been facing several challenges presented by its extraordinary instruments in terms of the generation, distribution, and analysis of large amounts of scientific data. Currently, data distribution issues are being addressed by adopting an advanced Internet technology called software-defined networking (SDN). Stable SDN operations and management are required to keep the federated LHC data distribution networks reliable. Therefore, in this paper, an SDN operation architecture based on the distributed virtual network operations center (DvNOC) is proposed to enable LHC researchers to assume full control of their own global end-to-end data dissemination. This may achieve enhanced data delivery performance based on data traffic offloading with delay variation. The evaluation results indicate that the overall end-to-end data delivery performance can be improved over multi-domain SDN environments based on the proposed federated SDN/DvNOC operation framework.
Implementation of an object oriented track reconstruction model into multiple LHC experiments*
NASA Astrophysics Data System (ADS)
Gaines, Irwin; Gonzalez, Saul; Qian, Sijin
2001-10-01
An Object Oriented (OO) model (Gaines et al., 1996; 1997; Gaines and Qian, 1998; 1999) for track reconstruction by the Kalman filtering method has been designed for high energy physics experiments at high luminosity hadron colliders. The model has been coded in the C++ programming language and has been successfully implemented into the OO computing environments of both the CMS (1994) and ATLAS (1994) experiments at the future Large Hadron Collider (LHC) at CERN. We shall report: how the OO model was adapted, with largely the same code, to different scenarios and serves different reconstruction aims in different experiments (i.e. the level-2 trigger software for ATLAS and the offline software for CMS); how the OO model has been incorporated into different OO environments with a similar integration structure (demonstrating the ease of re-use of OO programs); the OO model's performance, including execution time, memory usage, track-finding efficiency and ghost rate; and additional physics performance based on use of the OO tracking model. We shall also mention the experience and lessons learned from the implementation of the OO model into the general OO software framework of the experiments. In summary, our experience shows that OO technology makes software development and integration straightforward and convenient; this may be particularly beneficial for non-computer-professional physicists.
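As a rough illustration of the Kalman filtering method named above, here is a minimal sequential filter for a straight-line track in one projection. The two-parameter state, the noise values, and the hit data are invented for demonstration and are far simpler than the C++ model the abstract describes.

```python
# Minimal 1D Kalman filter for a straight-line track (position + slope).
# Hits are (z, x) pairs; measurement and process noise values are invented.

def kalman_track_fit(hits, meas_var=0.01, process_var=1e-6):
    """Sequentially filter (z, x) hits; return the final (x, slope) estimate."""
    x, s = 0.0, 0.0                      # state: position and slope
    P = [[1e6, 0.0], [0.0, 1e6]]         # large covariance = uninformative prior
    z_prev = hits[0][0]
    for z, meas in hits:
        dz = z - z_prev
        z_prev = z
        # Predict: propagate state and covariance along dz (F = [[1, dz], [0, 1]]).
        x = x + s * dz
        p00 = P[0][0] + dz * (P[0][1] + P[1][0]) + dz * dz * P[1][1] + process_var
        p01 = P[0][1] + dz * P[1][1]
        p10 = P[1][0] + dz * P[1][1]
        p11 = P[1][1] + process_var
        # Update with a measurement of x only (H = [1, 0]).
        S = p00 + meas_var               # innovation variance
        k0, k1 = p00 / S, p10 / S        # Kalman gain
        r = meas - x                     # residual
        x += k0 * r
        s += k1 * r
        P = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
    return x, s
```

Feeding hits that lie exactly on x = 1 + 0.5 z, the filter converges to slope 0.5 after the second hit, mirroring how each detector layer refines the track estimate.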
NASA Astrophysics Data System (ADS)
Varela Rodriguez, F.
2011-12-01
The control system of each of the four major Experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise, identify errors and troubleshoot such a large system. Although monitoring of the performance of the Linux computers and their processes has been available since the first versions of the tool, only recently has the software package been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the Experiments and has already proven very effective in optimizing the running systems and detecting misbehaving processes or nodes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trentadue, R.; Clemencic, M.; Dykstra, D.
The LCG Persistency Framework consists of three software packages (CORAL, COOL and POOL) that address the data access requirements of the LHC experiments in several different areas. The project is the result of the collaboration between the CERN IT Department and the three experiments (ATLAS, CMS and LHCb) that are using some or all of the Persistency Framework components to access their data. POOL is a hybrid technology store for C++ objects, using a mixture of streaming and relational technologies to implement both object persistency and object metadata catalogs and collections. CORAL is an abstraction layer with an SQL-free API for accessing data stored using relational database technologies. COOL provides specific software components and tools for the handling of the time variation and versioning of the experiment conditions data. This presentation reports on the status and outlook in each of the three sub-projects at the time of the CHEP2012 conference, reviewing the usage of each package in the three LHC experiments.
MonALISA, an agent-based monitoring and control system for the LHC experiments
NASA Astrophysics Data System (ADS)
Balcas, J.; Kcira, D.; Mughal, A.; Newman, H.; Spiropulu, M.; Vlimant, J. R.
2017-10-01
MonALISA, which stands for Monitoring Agents using a Large Integrated Services Architecture, has been developed over the last fifteen years by the California Institute of Technology (Caltech) and its partners with the support of the software and computing program of the CMS and ALICE experiments at the Large Hadron Collider (LHC). The framework is based on a Dynamic Distributed Service Architecture and is able to provide complete system monitoring, performance metrics of applications, jobs or services, system control and global optimization services for complex systems. A short overview and status of MonALISA is given in this paper.
Development, Validation and Integration of the ATLAS Trigger System Software in Run 2
NASA Astrophysics Data System (ADS)
Keyes, Robert; ATLAS Collaboration
2017-10-01
The trigger system of the ATLAS detector at the LHC is a combination of hardware, firmware, and software, associated to various sub-detectors that must seamlessly cooperate in order to select one collision of interest out of every 40,000 delivered by the LHC every millisecond. These proceedings discuss the challenges, organization and work flow of the ongoing trigger software development, validation, and deployment. The goal of this development is to ensure that the most up-to-date algorithms are used to optimize the performance of the experiment. The goal of the validation is to ensure the reliability and predictability of the software performance. Integration tests are carried out to ensure that the software deployed to the online trigger farm during data-taking runs as desired. Trigger software is validated by emulating online conditions using a benchmark run and mimicking the reconstruction that occurs during normal data-taking. This exercise is computationally demanding and thus runs on the ATLAS high performance computing grid with high priority. Performance metrics ranging from low-level memory and CPU requirements, to distributions and efficiencies of high-level physics quantities are visualized and validated by a range of experts. This is a multifaceted critical task that ties together many aspects of the experimental effort and thus directly influences the overall performance of the ATLAS experiment.
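The metric-validation step described above boils down to comparing distributions produced by a test release against a reference. A minimal sketch, assuming a per-bin relative tolerance; the histogram contents and the 5% threshold are invented:

```python
# Compare a metric histogram from a test software release against a
# reference run and flag bins with large relative deviations. The real
# validation compares many such distributions across releases.

def validate(reference, test, tolerance=0.05):
    """Return indices of bins whose relative deviation exceeds tolerance."""
    flagged = []
    for i, (ref, new) in enumerate(zip(reference, test)):
        # Skip empty reference bins to avoid division by zero.
        if ref and abs(new - ref) / ref > tolerance:
            flagged.append(i)
    return flagged
```

A bin that drifts by 10% would be flagged for expert review, while a 1% fluctuation passes silently.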
Multicore job scheduling in the Worldwide LHC Computing Grid
NASA Astrophysics Data System (ADS)
Forti, A.; Pérez-Calero Yzquierdo, A.; Hartmann, T.; Alef, M.; Lahiff, A.; Templon, J.; Dal Pra, S.; Gila, M.; Skipsey, S.; Acosta-Silva, C.; Filipcic, A.; Walker, R.; Walker, C. J.; Traynor, D.; Gadrat, S.
2015-12-01
After the successful first run of the LHC, data taking is scheduled to restart in Summer 2015 with experimental conditions leading to increased data volumes and event complexity. In order to process the data generated in such a scenario and exploit the multicore architectures of current CPUs, the LHC experiments have developed parallelized software for data reconstruction and simulation. However, a good fraction of their computing effort is still expected to be executed as single-core tasks. Therefore, jobs with diverse resource requirements will be distributed across the Worldwide LHC Computing Grid (WLCG), making workload scheduling a complex problem in itself. In response to this challenge, the WLCG Multicore Deployment Task Force has been created in order to coordinate the joint effort from experiments and WLCG sites. The main objective is to ensure the convergence of approaches from the different LHC Virtual Organizations (VOs) to make the best use of the shared resources in order to satisfy their new computing needs, minimizing any inefficiency originated from the scheduling mechanisms, and without imposing unnecessary complexities in the way sites manage their resources. This paper describes the activities and progress of the Task Force related to the aforementioned topics, including experiences from key sites on how to best use different batch system technologies, the evolution of workload submission tools by the experiments and the knowledge gained from scale tests of the different proposed job submission strategies.
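The packing problem behind mixed single-core and multicore scheduling can be illustrated with a first-fit toy. The node size, the job mix, and the first-fit policy below are invented for illustration; production batch systems use far richer policies (draining, backfilling, fairshare).

```python
# First-fit placement of jobs with mixed core counts onto identical nodes.
# A job that fits nowhere is queued, illustrating why multicore jobs can
# starve on a grid busy with single-core work.

def schedule(jobs, n_nodes, cores_per_node=8):
    """Place each (name, cores) job on the first node with room, else queue it."""
    free = [cores_per_node] * n_nodes
    placed, queued = {}, []
    for name, cores in jobs:
        for node, avail in enumerate(free):
            if avail >= cores:
                free[node] -= cores
                placed[name] = node
                break
        else:
            queued.append(name)   # no node currently has enough free cores
    return placed, queued, free
```

With two 8-core nodes, an 8-core reconstruction job fills one node, eight single-core simulation jobs fill the other, and a second 8-core job must wait until a whole node drains, which is exactly the inefficiency the Task Force aims to minimize.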
LCG Persistency Framework (CORAL, COOL, POOL): Status and Outlook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valassi, A.; /CERN; Clemencic, M.
2012-04-19
The Persistency Framework consists of three software packages (CORAL, COOL and POOL) addressing the data access requirements of the LHC experiments in different areas. It is the result of the collaboration between the CERN IT Department and the three experiments (ATLAS, CMS and LHCb) that use this software to access their data. POOL is a hybrid technology store for C++ objects, metadata catalogs and collections. CORAL is a relational database abstraction layer with an SQL-free API. COOL provides specific software tools and components for the handling of conditions data. This paper reports on the status and outlook of the project and reviews in detail the usage of each package in the three experiments.
Deployment and Operational Experiences with CernVM-FS at the GridKa Tier-1 Center
NASA Astrophysics Data System (ADS)
Alef, Manfred; Jäger, Axel; Petzold, Andreas; Verstege, Bernhard
2012-12-01
In 2012 the GridKa Tier-1 computing center hosts 130 kHS06 of computing resources, 14 PB of disk space and 17 PB of tape space. These resources are shared among the four LHC VOs and a number of national and international VOs from high energy physics and other sciences. CernVM-FS has been deployed at GridKa to supplement the existing NFS-based system for accessing VO software on the worker nodes. It provides a solution tailored to the requirements of the LHC VOs. We will focus on the first operational experiences and on the monitoring of CernVM-FS on the worker nodes and the squid caches.
Large Scale Software Building with CMake in ATLAS
NASA Astrophysics Data System (ADS)
Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration
2017-10-01
The offline software of the ATLAS experiment at the Large Hadron Collider (LHC) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector’s trigger system to select LHC collision events during data taking. The ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows, many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications also require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the above mentioned software packages. This also makes it possible to develop and test new and modified packages on top of existing releases. The system also allows one to detect and execute partial rebuilds of the release based on single package changes. The build system makes use of CPack for building RPM packages out of the software releases, and CTest for running unit and integration tests. We report on the migration and integration of the ATLAS software to CMake and show working examples of this large scale project in production.
A Roadmap for HEP Software and Computing R&D for the 2020s
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alves, Antonio Augusto, Jr; et al.
Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer volumes of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.
The CMS electron and photon trigger for the LHC Run 2
NASA Astrophysics Data System (ADS)
Dezoort, Gage; Xia, Fan
2017-01-01
The CMS experiment implements a sophisticated two-level triggering system composed of Level-1, instrumented by custom-designed hardware boards, and a software High-Level Trigger. A new Level-1 trigger architecture with improved performance is now being used to maintain the thresholds that were used in LHC Run 1 under the more challenging luminosity conditions experienced during Run 2. The upgrades to the calorimetry trigger will be described along with performance data. The algorithms for the selection of final states with electrons and photons, both for precision measurements and for searches of new physics beyond the Standard Model, will be described in detail.
RICH upgrade in LHCb experiment
NASA Astrophysics Data System (ADS)
Pistone, A.; LHCb RICH Collaboration
2017-01-01
The LHCb experiment is dedicated to precision measurements of CP violation and rare decays of B hadrons at the Large Hadron Collider (LHC) at CERN (Geneva). The second long shutdown of the LHC is currently scheduled to begin in 2019. During this period the LHCb experiment with all its sub-detectors will be upgraded in order to run at an instantaneous luminosity of 2 × 10^{33} cm^{-2} s^{-1}, about a factor of 5 higher than the current luminosity, and to read out data at a rate of 40 MHz into a flexible software-based trigger. The Ring Imaging CHerenkov (RICH) system will require new photon detectors and modifications to the optics of the upstream detector. Tests of the prototype of the smallest constituent of the new RICH system have been performed during testbeam sessions at the North Area test beam facility at CERN in recent years.
Final Report: High Energy Physics at the Energy Frontier at Louisiana Tech
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sawyer, Lee; Wobisch, Markus; Greenwood, Zeno D.
The Louisiana Tech University High Energy Physics group has developed a research program aimed at experimentally testing the Standard Model of particle physics and searching for new phenomena through a focused set of analyses in collaboration with the ATLAS experiment at the Large Hadron Collider (LHC) at the CERN laboratory in Geneva. This research program includes involvement in the current operation and maintenance of the ATLAS experiment and full involvement in Phase 1 and Phase 2 upgrades in preparation for future high luminosity (HL-LHC) operation of the LHC. Our focus is solely on the ATLAS experiment at the LHC, with some related detector development and software efforts. We have established important service roles on ATLAS in five major areas: Triggers, especially jet triggers; Data Quality monitoring; grid computing; GPU applications for upgrades; and radiation testing for upgrades. Our physics research is focused on multijet measurements and top quark physics in final states containing tau leptons, which we propose to extend into related searches for new phenomena. Focusing on closely related topics in the jet and top analyses and coordinating these analyses in our group has led to high efficiency and increased visibility inside the ATLAS collaboration and beyond. Based on our work in the DØ experiment in Run II of the Fermilab Tevatron Collider, Louisiana Tech has developed a reputation as one of the leading institutions pursuing jet physics studies. Currently we are applying this expertise to the ATLAS experiment, with several multijet analyses in progress.
Software engineering techniques and CASE tools in RD13
NASA Astrophysics Data System (ADS)
Buono, S.; Gaponenko, I.; Jones, R.; Khodabandeh, A.; Mapelli, L.; Mornacchi, G.; Prigent, D.; Sanchez-Corral, E.; Skiadelli, M.; Toppers, A.; Duval, P. Y.; Ferrato, D.; Le Van Suu, A.; Qian, Z.; Rondot, C.; Ambrosini, G.; Fumagalli, G.; Polesello, G.; Aguer, M.; Huet, M.
1994-12-01
The RD13 project was approved in April 1991 for the development of a scalable data-taking system suitable for hosting various LHC studies. One of its goals is the exploitation of software engineering techniques, in order to indicate their overall suitability for data acquisition (DAQ), software design and implementation. This paper describes how such techniques have been applied to the development of components of the RD13 DAQ used in test-beam runs at CERN. We describe our experience with the Artifex CASE tool and its associated methodology. The issues raised when code generated by a CASE tool has to be integrated into an existing environment are also discussed.
The ATLAS Data Acquisition System: from Run 1 to Run 2
NASA Astrophysics Data System (ADS)
Panduro Vazquez, William; ATLAS Collaboration
2016-04-01
The experience gained during the first period of very successful data taking of the ATLAS experiment (Run 1) has inspired a number of ideas for improvement of the Data Acquisition (DAQ) system that are being put in place during the so-called Long Shutdown 1 of the Large Hadron Collider (LHC), in 2013/14. We have updated the data-flow architecture, rewritten an important fraction of the software and replaced hardware, profiting from state of the art technologies. This paper summarizes the main changes that have been applied to the ATLAS DAQ system and highlights the expected performance and functional improvements that will be available for the LHC Run 2. Particular emphasis will be put on explaining the reasons for our architectural and technical choices, as well as on the simulation and testing approach used to validate this system.
Advanced Operating System Technologies
NASA Astrophysics Data System (ADS)
Cittolin, Sergio; Riccardi, Fabio; Vascotto, Sandro
In this paper we describe an R&D effort to define an OS architecture suitable for the requirements of the Data Acquisition and Control of an LHC experiment. Large distributed computing systems are foreseen to be the core part of the DAQ and Control system of the future LHC experiments. Networks of thousands of processors, handling dataflows of several gigabytes per second, with very strict timing constraints (microseconds), will become a common experience in the following years. Problems like distributed scheduling, real-time communication protocols, failure tolerance, distributed monitoring and debugging will have to be faced. A solid software infrastructure will be required to manage this very complicated environment; at this moment neither does CERN have the necessary expertise to build it, nor does any similar commercial implementation exist. Fortunately these problems are not unique to particle and high energy physics experiments, and the current research work in the distributed systems field, especially in the distributed operating systems area, is trying to address many of the above mentioned issues. The world that we are going to face in the next ten years will be quite different and surely much more interconnected than the one we see now. Very ambitious projects exist, planning to link towns, nations and the world in a single "Data Highway". Teleconferencing, Video on Demand, Distributed Multimedia Applications are just a few examples of the very demanding tasks to which the computer industry is committing itself. These projects are triggering a great research effort in the distributed, real-time micro-kernel based operating systems field and in the software engineering areas.
The purpose of our group is to collect the outcome of these different research efforts, and to establish a working environment where the different ideas and techniques can be tested, evaluated and possibly extended, to address the requirements of a DAQ and Control System suitable for LHC. Our work started in the second half of 1994, with a research agreement between CERN and Chorus Systemes (France), world leader in micro-kernel OS technology. The Chorus OS is targeted to distributed real-time applications, and it can very efficiently support different "OS personalities" in the same environment, like POSIX, UNIX, and a CORBA compliant distributed object architecture. Projects are being set up to verify the suitability of our work for LHC applications: we are building a scaled-down prototype of the DAQ system foreseen for the CMS experiment at LHC, where we will directly test our protocols and where we will be able to make measurements and benchmarks, guiding our development and allowing us to build an analytical model of the system, suitable for simulation and large scale verification.
PC as Physics Computer for LHC?
NASA Astrophysics Data System (ADS)
Jarp, Sverre; Simmins, Antony; Tang, Hong; Yaari, R.
In the last five years, we have seen RISC workstations take over the computing scene that was once controlled by mainframes and supercomputers. In this paper we will argue that the same phenomenon might happen again. We describe a project of the Physics Data Processing group of CERN's CN division, active since March this year, in which ordinary desktop PCs running Windows (NT and 3.11) have been used to create an environment for running large LHC batch jobs (initially the DICE simulation job of ATLAS). The problems encountered in porting both the CERN library and the specific ATLAS codes are described, together with some encouraging benchmark results when compared to existing RISC workstations in use by the ATLAS collaboration. The issues of establishing the batch environment (batch monitor, staging software, etc.) are also covered. Finally, a quick extrapolation of the commodity computing power available in the future is touched upon to indicate what kind of cost envelope could be sufficient for the simulation farms required by the LHC experiments.
Open access to high-level data and analysis tools in the CMS experiment at the LHC
Calderon, A.; Colling, D.; Huffman, A.; ...
2015-12-23
The CMS experiment, in recognition of its commitment to data preservation and open access as well as to education and outreach, has made its first public release of high-level data under the CC0 waiver: up to half of the proton-proton collision data (by volume) at 7 TeV from 2010 in CMS Analysis Object Data format. CMS has prepared, in collaboration with CERN and the other LHC experiments, an open-data web portal based on Invenio. The portal provides access to CMS public data as well as to analysis tools and documentation for the public. The tools include an event display and a histogram application that run in the browser. In addition, a virtual machine containing a CMS software environment along with XRootD access to the data is available; within the virtual machine the public can analyse CMS data, and example code is provided. We describe the accompanying tools and documentation and discuss the first experiences of data use.
Volunteer Clouds and Citizen Cyberscience for LHC Physics
NASA Astrophysics Data System (ADS)
Aguado Sanchez, Carlos; Blomer, Jakob; Buncic, Predrag; Chen, Gang; Ellis, John; Garcia Quintas, David; Harutyunyan, Artem; Grey, Francois; Lombrana Gonzalez, Daniel; Marquina, Miguel; Mato, Pere; Rantala, Jarno; Schulz, Holger; Segal, Ben; Sharma, Archana; Skands, Peter; Weir, David; Wu, Jie; Wu, Wenjing; Yadav, Rohit
2011-12-01
Computing for the LHC, and for HEP more generally, is traditionally viewed as requiring specialized infrastructure and software environments, and therefore not compatible with the recent trend in "volunteer computing", where volunteers supply free processing time on ordinary PCs and laptops via standard Internet connections. In this paper, we demonstrate that with the use of virtual machine technology, at least some standard LHC computing tasks can be tackled with volunteer computing resources. Specifically, by presenting volunteer computing resources to HEP scientists as a "volunteer cloud", essentially identical to a Grid or dedicated cluster from a job submission perspective, LHC simulations can be processed effectively. This article outlines both the technical steps required for such a solution and the implications for LHC computing as well as for LHC public outreach and for participation by scientists from developing regions in LHC research.
Trigger Menu-aware Monitoring for the ATLAS experiment
NASA Astrophysics Data System (ADS)
Hoad, Xanthe; ATLAS Collaboration
2017-10-01
We present a “trigger menu-aware” monitoring system designed for the Run-2 data-taking of the ATLAS experiment at the LHC. Unlike in Run-1, where a change in the trigger menu had to be matched by the installation of a new software release at Tier-0, the new monitoring system aims to simplify the ATLAS operational workflows. This is achieved by integrating monitoring updates in a quick and flexible manner via an Oracle DB interface. We present the design and the implementation of the menu-aware monitoring, along with lessons from the operational experience of the new system with the 2016 collision data.
The CMS High-Level Trigger and Trigger Menus
NASA Astrophysics Data System (ADS)
Avetisyan, Aram
2008-04-01
The CMS experiment is one of the two general-purpose experiments due to start operation soon at the Large Hadron Collider (LHC). The LHC will collide protons at a centre-of-mass energy of 14 TeV, with a bunch-crossing rate of 40 MHz. The online event selection for the CMS experiment is carried out in two distinct stages. At Level-1, the trigger electronics reduces the 40 MHz collision rate to provide up to 100 kHz of interesting events, based on objects found using its calorimeter and muon subsystems. The High Level Trigger (HLT), which runs in the Filter Farm of the CMS experiment, is a set of sophisticated software tools operating in a real-time environment to make a further selection and archive a few hundred Hz of interesting events. The coherent tuning of the HLT algorithms to accommodate multiple physics channels is a key issue for CMS, one that defines the reach of the experiment's physics program. In this presentation we discuss the strategies and trigger configuration developed for the startup physics program of the CMS experiment, up to a luminosity of 10^31 cm^-2 s^-1. Emphasis is given to the full trigger menus, including physics and calibration triggers.
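The staged rate reduction quoted in this abstract can be sanity-checked with simple arithmetic. The per-stage rejection factors below are derived from the quoted rates, with the HLT output ("a few hundred Hz") taken as an assumed 300 Hz:

```python
# Trigger rate arithmetic for a two-stage selection, using the rates
# quoted in the abstract. The 300 Hz HLT archive rate is an assumed
# stand-in for "a few hundred Hz".
bunch_crossing_rate_hz = 40e6   # LHC bunch-crossing rate
level1_output_hz = 100e3        # maximum Level-1 accept rate
hlt_output_hz = 300.0           # assumed HLT archive rate

l1_rejection = bunch_crossing_rate_hz / level1_output_hz
hlt_rejection = level1_output_hz / hlt_output_hz
total_rejection = bunch_crossing_rate_hz / hlt_output_hz

print(f"L1 rejection:    {l1_rejection:.0f}x")     # 400x
print(f"HLT rejection:   {hlt_rejection:.0f}x")    # 333x
print(f"Total rejection: {total_rejection:.0f}x")  # 133333x
```

Only about one in 130 000 bunch crossings survives to storage under these assumptions, which is why the menu composition so strongly shapes the physics reach.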
Design of an AdvancedTCA board management controller (IPMC)
NASA Astrophysics Data System (ADS)
Mendez, J.; Bobillier, V.; Haas, S.; Joos, M.; Mico, S.; Vasey, F.
2017-03-01
The AdvancedTCA (ATCA) standard has been selected as the hardware platform for the upgrade of the back-end electronics of the CMS and ATLAS experiments at the Large Hadron Collider (LHC). In this context, the Electronic Systems for Experiments group at CERN is running a project to evaluate, specify, design and support xTCA equipment. As part of this project, an Intelligent Platform Management Controller (IPMC) for ATCA blades, based on a commercial solution, has been designed for use on existing and future ATCA blades. This paper reports on the status of this project, presenting the hardware and software developments.
The ATLAS Simulation Infrastructure
Aad, G.; Abbott, B.; Abdallah, J.; ...
2010-09-25
The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions, through packages simulating the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. In this paper, that infrastructure is discussed, including that supporting the detector description, interfacing the event generation, and combining the GEANT4 simulation of the response of the individual detectors. Also described are the tools allowing the software validation, performance testing, and the validation of the simulated output against known physics processes.
A new ATLAS muon CSC readout system with system on chip technology on ATCA platform
Claus, R.
2015-10-23
The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfiguration Cluster Element (RCE) concept for high-bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series, with a processor-centric architecture combining an ARM processor embedded in FPGA fabric, high-speed I/O resources and auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf through software waveform feature extraction to output 32 S-links. The full system was installed in Sept. 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.
A new ATLAS muon CSC readout system with system on chip technology on ATCA platform
NASA Astrophysics Data System (ADS)
Claus, R.; ATLAS Collaboration
2016-07-01
The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfiguration Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf through software waveform feature extraction to output 32 S-links. The full system was installed in Sept. 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.
Modelling and measurements of bunch profiles at the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papadopoulou, S.; Antoniou, F.; Argyropoulos, T.
The bunch profiles in the LHC are often observed to be non-Gaussian, both at Flat Bottom (FB) and Flat Top (FT) energies. Especially at FT, an evolution of the tail population in time is observed. In this respect, the Monte-Carlo Software for IBS and Radiation effects (SIRE) is used to track different types of beam distributions. The impact of the distribution shape on the evolution of bunch characteristics is studied. The results are compared with observations from the LHC Run 2 data.
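Non-Gaussian bunch profiles of the kind described here are commonly modelled as a Gaussian core plus a wider component. A toy Monte Carlo (all parameters hypothetical, not from the paper) showing how such a mixture inflates the tail population beyond 3 sigma of the core:

```python
import math
import random

random.seed(42)

def sample_profile(n, tail_fraction=0.1, tail_width=2.5):
    """Toy longitudinal profile: Gaussian core (sigma=1) plus a wider
    Gaussian tail component. Parameters are purely illustrative."""
    out = []
    for _ in range(n):
        sigma = tail_width if random.random() < tail_fraction else 1.0
        out.append(random.gauss(0.0, sigma))
    return out

n = 100_000
mixture = sample_profile(n)
beyond_3sigma = sum(abs(x) > 3.0 for x in mixture) / n

# Two-sided tail probability beyond 3 sigma for a pure Gaussian (~0.27%).
pure_gauss = math.erfc(3.0 / math.sqrt(2))

print(beyond_3sigma, pure_gauss)  # the mixture tail is markedly larger
```

Tracking how this tail fraction evolves in time is exactly the kind of comparison made between SIRE simulations and the Run 2 profile measurements.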
First use of LHC Run 3 Conditions Database infrastructure for auxiliary data files in ATLAS
NASA Astrophysics Data System (ADS)
Aperio Bella, L.; Barberis, D.; Buttinger, W.; Formica, A.; Gallas, E. J.; Rinaldi, L.; Rybkin, G.; ATLAS Collaboration
2017-10-01
Processing of the large amount of data produced by the ATLAS experiment requires fast and reliable access to what we call Auxiliary Data Files (ADF). These files, produced by the Combined Performance, Trigger and Physics groups, contain conditions, calibrations, and other derived data used by the ATLAS software. In ATLAS, this data has thus far, for historical reasons, been collected and accessed outside the ATLAS Conditions Database infrastructure and related software. For this reason, along with the fact that ADF are effectively read by the software as binary objects, this class of data appears ideal for testing the proposed Run 3 conditions data infrastructure now in development. This paper describes this implementation as well as the lessons learned in exploring and refining the new infrastructure, with the potential for deployment during Run 2.
Multi-Threaded Algorithms for GPGPU in the ATLAS High Level Trigger
NASA Astrophysics Data System (ADS)
Conde Muíño, P.; ATLAS Collaboration
2017-10-01
General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located at the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with Level-1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPUs. The High Level Trigger reduces the trigger rate from the 100 kHz Level-1 acceptance rate to 1.5 kHz for recording, requiring an average per-event processing time of ∼250 ms for this task. The selection in the High Level Trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significant challenge that will grow with future LHC upgrades. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further to 7.5 times the design value in 2026 following LHC and ATLAS upgrades. Corresponding improvements in the speed of the reconstruction code will be needed to provide the required trigger selection power within affordable computing resources. Key factors determining the potential benefit of including GPGPUs as part of the HLT processor farm are: the relative speed of the CPU and GPGPU algorithm implementations; the relative execution times of the GPGPU algorithms and the serial code remaining on the CPU; the number of GPGPUs required; and the relative financial cost of the selected GPGPUs. We give a brief overview of the algorithms implemented and present new measurements that compare the performance of various configurations exploiting GPGPU cards.
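The "key factors" listed at the end of this abstract amount to an Amdahl's-law style cost model: the serial code left on the CPU bounds the achievable gain. A minimal sketch with made-up timings (the 100/150 ms split, the 10x GPU speedup and the 5 ms transfer cost are hypothetical, not from the paper):

```python
def hlt_event_time(serial_ms, offloadable_ms, gpu_speedup, transfer_ms=0.0):
    """Per-event processing time when the offloadable portion runs on a GPGPU.

    serial_ms: CPU-only code that cannot be offloaded
    offloadable_ms: CPU time of the algorithms with GPU implementations
    gpu_speedup: speedup of the GPU implementation over the CPU one
    transfer_ms: host<->device transfer overhead per event
    """
    return serial_ms + offloadable_ms / gpu_speedup + transfer_ms

# Hypothetical budget: 250 ms per event on CPU, 150 ms of it offloadable.
cpu_only = hlt_event_time(100.0, 150.0, gpu_speedup=1.0)
with_gpu = hlt_event_time(100.0, 150.0, gpu_speedup=10.0, transfer_ms=5.0)

print(cpu_only, with_gpu)  # 250.0 120.0
```

Even with a 10x faster GPU kernel, the event only gets about 2x faster here, because the 100 ms of serial code dominates; that is why the fraction of code ported matters as much as raw kernel speedup.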
The CMS High Level Trigger System: Experience and Future Development
NASA Astrophysics Data System (ADS)
Bauer, G.; Behrens, U.; Bowen, M.; Branson, J.; Bukowiec, S.; Cittolin, S.; Coarasa, J. A.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Flossdorf, A.; Gigi, D.; Glege, F.; Gomez-Reino, R.; Hartl, C.; Hegeman, J.; Holzner, A.; Hwong, Y. L.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Polese, G.; Racz, A.; Raginel, O.; Sakulin, H.; Sani, M.; Schwick, C.; Shpakov, D.; Simon, S.; Spataru, A. C.; Sumorok, K.
2012-12-01
The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first-level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ), and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of order a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the collider run of 2010/2011 is reported. The current architecture of the CMS HLT, and its integration with the CMS reconstruction framework and the CMS DAQ, are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, is discussed.
A New Event Builder for CMS Run II
NASA Astrophysics Data System (ADS)
Albertsson, K.; Andre, J.-M.; Andronidis, A.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; Nunez-Barranco-Fernandez, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.
2015-12-01
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high-level trigger (HLT) farm. The DAQ system has been redesigned during the LHC shutdown in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbps Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbps Infiniband FDR CLOS network has been chosen for the event builder. This paper discusses the software design, protocols, and optimizations for exploiting the hardware capabilities. We present performance measurements from small-scale prototypes and from the full-scale production system.
A new event builder for CMS Run II
Albertsson, K.; Andre, J-M; Andronidis, A.; ...
2015-12-23
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high-level trigger (HLT) farm. The DAQ system has been redesigned during the LHC shutdown in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbps Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbps Infiniband FDR CLOS network has been chosen for the event builder. This paper discusses the software design, protocols, and optimizations for exploiting the hardware capabilities. We present performance measurements from small-scale prototypes and from the full-scale production system.
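As a consistency check on the figures quoted in both versions of this abstract, the 100 kHz event rate and 100 GB/s aggregate throughput together imply an average event size of about 1 MB:

```python
# Implied average event size from the quoted rate and throughput
# (decimal GB; both numbers are taken from the abstract).
event_rate_hz = 100e3      # Level-1 accept rate handled by the event builder
throughput_bps = 100e9     # aggregate event-building throughput, 100 GB/s

avg_event_size_bytes = throughput_bps / event_rate_hz
print(f"average event size = {avg_event_size_bytes / 1e6:.0f} MB")  # 1 MB
```

That megabyte-scale event, delivered every 10 microseconds on average, is what drives the choice of 56 Gbps Infiniband for the event-builder fabric.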
Tracking at High Level Trigger in CMS
NASA Astrophysics Data System (ADS)
Tosi, M.
2016-04-01
The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of the experiments. A reduction of several orders of magnitude in the event rate is needed to reach values compatible with detector readout, offline storage and analysis capability. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking, the maximum reconstruction time at HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Track reconstruction algorithms are widely used in the HLT, for the reconstruction of physics objects as well as in the identification of b-jets and in lepton isolation. Reconstructed tracks are also used to distinguish the primary vertex, which identifies the hard interaction process, from the pileup ones. This task is particularly important in the LHC environment given the large number of interactions per bunch crossing: on average 25 in 2012, and expected to be around 40 in Run II. We will present the performance of the HLT tracking algorithms, discussing their impact on the CMS physics program, as well as new developments made towards the next data taking in 2015.
Impact of detector simulation in particle physics collider experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elvira, V. Daniel
Through the last three decades, precise simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Center of Nuclear Research (CERN) Large Hadron Collider (LHC) was a determinant factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the accuracy of the physics results and publication turnaround, from data-taking to submission. It also presents the economic impact and cost of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data, taxing heavily the performance of simulation and reconstruction software for increasingly complex detectors. Consequently, it becomes urgent to find solutions to speed up simulation software in order to cope with the increased demand in a time of flat budgets. The study ends with a short discussion on the potential solutions that are being explored, by leveraging core count growth in multicore machines, using new generation coprocessors, and re-engineering of HEP code for concurrency and parallel computing.
Impact of detector simulation in particle physics collider experiments
Elvira, V. Daniel
2017-06-01
Through the last three decades, precise simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Center of Nuclear Research (CERN) Large Hadron Collider (LHC) was a determinant factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the accuracy of the physics results and publication turnaround, from data-taking to submission. It also presents the economic impact and cost of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data, taxing heavily the performance of simulation and reconstruction software for increasingly complex detectors. Consequently, it becomes urgent to find solutions to speed up simulation software in order to cope with the increased demand in a time of flat budgets. The study ends with a short discussion on the potential solutions that are being explored, by leveraging core count growth in multicore machines, using new generation coprocessors, and re-engineering of HEP code for concurrency and parallel computing.
Impact of detector simulation in particle physics collider experiments
NASA Astrophysics Data System (ADS)
Daniel Elvira, V.
2017-06-01
Through the last three decades, accurate simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics (HEP) experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Center of Nuclear Research (CERN) Large Hadron Collider (LHC) was a determinant factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the precision of the physics results and publication turnaround, from data-taking to submission. It also presents estimates of the cost and economic impact of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data with increasingly complex detectors, taxing heavily the performance of simulation and reconstruction software. Consequently, exploring solutions to speed up simulation and reconstruction software to satisfy the growing demand of computing resources in a time of flat budgets is a matter that deserves immediate attention. The article ends with a short discussion on the potential solutions that are being considered, based on leveraging core count growth in multicore machines, using new generation coprocessors, and re-engineering HEP code for concurrency and parallel computing.
NASA Astrophysics Data System (ADS)
Bianco, M.; Martoiu, S.; Sidiropoulou, O.; Zibell, A.
2015-12-01
A Micromegas (MM) quadruplet prototype with an active area of 0.5 m², which adopts the general design foreseen for the upgrade of the innermost forward muon tracking systems (Small Wheels) of the ATLAS detector in 2018-2019, has been built at CERN and is going to be tested in the ATLAS cavern environment during the LHC Run-II period 2015-2017. The integration of this prototype detector into the ATLAS data acquisition system using custom ATCA equipment is presented. An ATLAS-compatible Read Out Driver (ROD) based on the Scalable Readout System (SRS), the Scalable Readout Unit (SRU), will be used in order to transmit the data, after generating valid event fragments, to the high-level Read Out System (ROS). The SRU will be synchronized with the LHC bunch-crossing clock (40.08 MHz) and will receive the Level-1 trigger signals from the Central Trigger Processor (CTP) through the TTCrx receiver ASIC. The configuration of the system will be driven directly from the ATLAS Run Control System. Using the ATLAS TDAQ Software, a dedicated Micromegas segment has been implemented in order to include the detector inside the main ATLAS DAQ partition. A full set of tests, covering both hardware and software aspects, is presented.
Data Reprocessing on Worldwide Distributed Systems
NASA Astrophysics Data System (ADS)
Wicke, Daniel
The DØ experiment faces many challenges in terms of enabling access to large datasets for physicists on four continents. The strategy for solving these problems on worldwide distributed computing clusters is presented. Since the beginning of Run II of the Tevatron (March 2001), all Monte-Carlo simulations for the experiment have been produced at remote systems. For data analysis, a system of regional analysis centers (RACs) was established which supplies the associated institutes with the data. This structure, which is similar to the tiered structure foreseen for the LHC, was used in Fall 2003 to reprocess all DØ data with a much improved version of the reconstruction software. This makes DØ the first running experiment to have implemented and operated all important computing tasks of a high energy physics experiment on systems distributed worldwide.
CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valassi, A.; Bartoldus, R.
The CORAL software is widely used at CERN by the LHC experiments to access the data they store in relational databases, such as Oracle. Two new components have recently been added to implement a model involving a middle-tier 'CORAL server' deployed close to the database and a tree of 'CORAL server proxies', providing data caching and multiplexing, deployed close to the client. A first implementation of the two new components, released in summer 2009, is now deployed in the ATLAS online system to read the data needed by the High Level Trigger, allowing the configuration of a farm of several thousand processes. This paper reviews the architecture of the software, its development status and its usage in ATLAS.
The evolution of the Trigger and Data Acquisition System in the ATLAS experiment
NASA Astrophysics Data System (ADS)
Krasznahorkay, A.; Atlas Collaboration
2014-06-01
The ATLAS experiment, aimed at recording the results of LHC proton-proton collisions, is upgrading its Trigger and Data Acquisition (TDAQ) system during the current LHC first long shutdown. The purpose of the upgrade is to add robustness and flexibility to the selection and the conveyance of the physics data, simplify the maintenance of the infrastructure, exploit new technologies and, overall, make ATLAS data-taking capable of dealing with increasing event rates. The TDAQ system used to date is organised in a three-level selection scheme, including a hardware-based first-level trigger and second- and third-level triggers implemented as separate software systems distributed on separate, commodity hardware nodes. While this architecture was successfully operated well beyond the original design goals, the accumulated experience stimulated interest in exploring possible evolutions. The hardware of the TDAQ system will also be upgraded by introducing new elements to it. For the high-level trigger, the current plan is to deploy a single homogeneous system, which merges the execution of the second and third trigger levels, still kept separate, on a single hardware node. Prototyping efforts have already demonstrated many benefits of the simplified design. In this paper we report on the design and the development status of this new system.
Evolution of the ATLAS Software Framework towards Concurrency
NASA Astrophysics Data System (ADS)
Jones, R. W. L.; Stewart, G. A.; Leggett, C.; Wynne, B. M.
2015-05-01
The ATLAS experiment has successfully used its Gaudi/Athena software framework for data taking and analysis during the first LHC run, with billions of events successfully processed. However, the design of Gaudi/Athena dates from the early 2000s, and both the framework and the physics code have been written using a single-threaded, serial design. This programming model has increasing difficulty in exploiting the potential of current CPUs, which offer their best performance only when taking full advantage of multiple cores and wide vector registers. Future CPU evolution will intensify this trend, with core counts increasing and memory per core falling. Maximising performance per watt will be a key metric, so all of these cores must be used as efficiently as possible. In order to address the deficiencies of the current framework, ATLAS has embarked upon two projects: first, a practical demonstration of the use of multi-threading in our reconstruction software, using the GaudiHive framework; second, an exercise to gather requirements for an updated framework, going back to the first principles of how event processing occurs. In this paper we report on both these aspects of our work. For the GaudiHive-based demonstrators, we discuss what changes were necessary in order to allow the serially designed ATLAS code to run, both to the framework and to the tools and algorithms used. We report on the general lessons learned about the code patterns that had been employed in the software and on which patterns were identified as particularly problematic for multi-threading. These lessons were fed into our considerations of a new framework and we present preliminary conclusions on this work. In particular, we identify areas where the framework can be simplified in order to aid the implementation of a concurrent event processing scheme. Finally, we discuss the practical difficulties involved in migrating a large established code base to a multi-threaded framework and how this can be achieved for LHC Run 3.
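GaudiHive itself is a C++ task-scheduling framework, so the following is only a schematic analogy in Python. It illustrates the inter-event parallelism discussed above: several events are reconstructed in flight at once, which is only safe when the algorithms hold no mutable shared state:

```python
from concurrent.futures import ThreadPoolExecutor

def reconstruct(event):
    # Stand-in for an algorithm chain. To run like this, real
    # reconstruction code must be thread-safe: it reads its input event
    # and writes only its own output, touching no global mutable state.
    return {"id": event["id"], "ntracks": len(event["hits"]) // 3}

# Toy input: 8 events, each with 9 hits (names and sizes are made up).
events = [{"id": i, "hits": list(range(9))} for i in range(8)]

# Inter-event parallelism: up to 4 events processed concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(reconstruct, events))

print(results[0])  # {'id': 0, 'ntracks': 3}
```

The code patterns flagged as problematic in the paper (singletons, caches, non-const access to shared services) are exactly the ones that would break the no-shared-mutable-state assumption this sketch relies on.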
Progress in Machine Learning Studies for the CMS Computing Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo
Here, computing systems for LHC experiments were developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for LHC experiments and of its recent evolutions can be found elsewhere, it is worth mentioning here the scale of the computing resources needed to fulfill the needs of the LHC experiments in Run-1 and Run-2 so far.
Progress in Machine Learning Studies for the CMS Computing Infrastructure
Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo; ...
2017-12-06
Here, computing systems for LHC experiments were developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for LHC experiments and of its recent evolutions can be found elsewhere, it is worth mentioning here the scale of the computing resources needed to fulfill the needs of the LHC experiments in Run-1 and Run-2 so far.
A practical approach to virtualization in HEP
NASA Astrophysics Data System (ADS)
Buncic, P.; Aguado Sánchez, C.; Blomer, J.; Harutyunyan, A.; Mudrinic, M.
2011-01-01
In an attempt to solve the problem of processing data coming from the LHC experiments at CERN at a rate of 15 PB per year, the High Energy Physics (HEP) community has for almost a decade focused its efforts on the development of the Worldwide LHC Computing Grid. This generated large interest and expectations, promising to revolutionize computing. Meanwhile, having initially taken part in the Grid standardization process, industry has moved in a different direction and started promoting the Cloud Computing paradigm, which aims to solve problems on a similar scale and in an equally seamless way as was expected in the idealized Grid approach. A key enabling technology behind Cloud Computing is server virtualization. In early 2008, an R&D project was established in the PH-SFT group at CERN to investigate how virtualization technology could be used to improve and simplify the daily interaction of physicists with experiment software frameworks and the Grid infrastructure. In this article we first briefly compare the Grid and Cloud Computing paradigms and then summarize the results of the R&D activity, pointing out where and how virtualization technology could be effectively used in our field in order to maximize practical benefits whilst avoiding potential pitfalls.
NASA Astrophysics Data System (ADS)
Balcas, J.; Hendricks, T. W.; Kcira, D.; Mughal, A.; Newman, H.; Spiropulu, M.; Vlimant, J. R.
2017-10-01
The SDN Next Generation Integrated Architecture (SDN-NGenIA) project addresses some of the key challenges facing the present and next generations of science programs in HEP, astrophysics, and other fields, whose potential discoveries depend on their ability to distribute, process and analyze globally distributed Petascale to Exascale datasets. The SDN-NGenIA system under development by Caltech and partner HEP and network teams focuses on the coordinated use of network, computing and storage infrastructures. It builds on the experience gained in previous and recently completed projects that use dynamic circuits with bandwidth guarantees to support major network flows, as demonstrated across the LHC Open Network Environment [1] and in large-scale demonstrations over the last three years, and recently integrated with the PhEDEx and Asynchronous Stage Out data management applications of the CMS experiment at the Large Hadron Collider. In addition to the general program goal of supporting the network needs of the LHC and other science programs with similar requirements, a recent focus is the use of the Leadership HPC facility at Argonne National Lab (ALCF) for data-intensive applications.
The ATLAS Data Acquisition System in LHC Run 2
NASA Astrophysics Data System (ADS)
Panduro Vazquez, William; ATLAS Collaboration
2017-10-01
The LHC has been providing pp collisions with record luminosity and energy since the start of Run 2 in 2015. The Trigger and Data Acquisition system of the ATLAS experiment has been upgraded to deal with the increased performance required by this new operational mode. The dataflow system and associated network infrastructure have been reshaped in order to benefit from technological progress and to maximize the flexibility and efficiency of the data selection process. The new design is radically different from the previous implementation both in terms of architecture and performance, with the previous two-level structure merged into a single processing farm, performing incremental data collection and analysis. In addition, logical farm slicing, with each slice managed by a dedicated supervisor, has been dropped in favour of global management by a single farm master operating at 100 kHz. This farm master has also been integrated with a new software-based Region of Interest builder, replacing the previous VMEbus-based system. Finally, the Readout system has been completely refitted with new higher performance, lower footprint server machines housing a new custom front-end interface card. Here we will cover the overall design of the system, along with performance results from the start-up phase of LHC Run 2.
Physics opportunities with a fixed target experiment at the LHC (AFTER@LHC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadjidakis, Cynthia; Anselmino, Mauro; Arnaldi, R.
By extracting the beam with a bent crystal or by using an internal gas target, the multi-TeV proton and lead LHC beams allow one to perform the most energetic fixed-target experiments ever (AFTER@LHC) and to study p+p and p+A collisions at √s_NN = 115 GeV and Pb+p and Pb+A collisions at √s_NN = 72 GeV. Such studies would address open questions in the domain of the nucleon and nucleus partonic structure at high x, the quark-gluon plasma and, by using longitudinally or transversally polarised targets, spin physics. In this paper, we discuss the physics opportunities of a fixed-target experiment at the LHC and we report on the possible technical implementations of a high-luminosity experiment. We finally present feasibility studies for Drell-Yan, open heavy-flavour and quarkonium production, with an emphasis on high-x and spin physics.
A TTC upgrade proposal using bidirectional 10G-PON FTTH technology
NASA Astrophysics Data System (ADS)
Kolotouros, D. M.; Baron, S.; Soos, C.; Vasey, F.
2015-04-01
A new generation FPGA-based Timing-Trigger and Control (TTC) system based on emerging Passive Optical Network (PON) technology is being proposed to replace the existing off-detector TTC system used by the LHC experiments. High split ratio, dynamic software partitioning, low and deterministic latency, as well as low jitter are required. Exploiting the latest available technologies allows delivering higher capacity together with bidirectionality, a feature absent from the legacy TTC system. This article focuses on the features and capabilities of the latest TTC-PON prototype based on 10G-PON FTTH components along with some metrics characterizing its performance.
Readout and Trigger for the AFP Detector at the ATLAS Experiment at LHC
NASA Astrophysics Data System (ADS)
Korcyl, K.; Kocian, M.; Lopez Paz, I.; Avoni, G.
2017-10-01
The ATLAS Forward Proton (AFP) is a new detector system in ATLAS that allows the study of events with protons scattered at very small angles. The final design assumes four stations, at distances of 205 and 217 m from the ATLAS interaction point on both sides of the detector, exploiting Roman Pot technology. In 2016 two stations in one arm were installed; installation of the other two is planned for 2017. This article describes details of the installed hardware, firmware and software leading to full integration with the ATLAS central trigger and data acquisition systems.
The LHCb trigger and its upgrade
NASA Astrophysics Data System (ADS)
Dziurda, A.; LHCb Trigger Group
2016-07-01
The current LHCb trigger system consists of a hardware level, which reduces the LHC inelastic collision rate of 30 MHz to 1 MHz, at which the entire detector is read out. In a second level, implemented in a farm of 20k parallel-processing CPUs, the event rate is reduced to about 5 kHz. We review the performance of the LHCb trigger system during Run I of the LHC. Special attention is given to the use of multivariate analyses in the High Level Trigger. The major bottleneck for hadronic decays is the hardware trigger. LHCb plans a major upgrade of the detector and DAQ system in the LHC shutdown of 2018, enabling a purely software-based trigger to process the full 30 MHz of inelastic collisions delivered by the LHC. We demonstrate that the planned architecture will be able to meet this challenge.
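The staged rate reduction described above can be sketched with simple arithmetic. This is an illustration only, not LHCb software; it assumes the Run-I hardware-level output rate of 1 MHz, at which the detector was read out.

```python
# Illustrative arithmetic using the rates quoted in the abstract (30 MHz
# input, ~5 kHz to storage) plus an assumed 1 MHz hardware-level output.
def rejection_factor(input_rate_hz, output_rate_hz):
    """Overall factor by which a trigger stage reduces the event rate."""
    return input_rate_hz / output_rate_hz

hardware = rejection_factor(30e6, 1e6)   # hardware level: 30 MHz -> 1 MHz
hlt = rejection_factor(1e6, 5e3)         # software HLT: 1 MHz -> ~5 kHz
overall = hardware * hlt                 # combined rejection: 6000.0
```

Only roughly one inelastic collision in several thousand survives to storage, which is why moving the full 30 MHz into a software trigger is such a demanding upgrade.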
Experiential learning in high energy physics: a survey of students at the LHC
NASA Astrophysics Data System (ADS)
Camporesi, Tiziano; Catalano, Gelsomina; Florio, Massimo; Giffoni, Francesco
2017-03-01
More than 36 000 students and post-docs will be involved until 2025 in research at the Large Hadron Collider (LHC), mainly through international collaborations. To what extent do they value the skills acquired? Do students expect that their learning experience will have an impact on their professional future? Drawing on earlier literature on experiential learning, we have designed a survey of current and former students at the LHC. To quantitatively measure the students’ perceptions, we compare the salary expectations of current students with the assessment of those now employed in different jobs. Survey data are analysed by ordered logistic regression models, which allow multivariate statistical analyses with limited dependent variables. Results suggest that experiential learning at the LHC positively correlates with both current and former students’ salary expectations. Those already employed clearly confirm the expectations of current students. At least two not mutually exclusive explanations underlie the results. First, the training at the LHC is perceived to provide students with valuable skills, which in turn affect salary expectations; secondly, the LHC research experience per se may act as a signal in the labour market. Respondents put a price tag on their learning experience: an ‘LHC salary premium’ ranging from 5% to 12% compared with what they would have expected for their career without such an experience at CERN.
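The ordered logistic regression mentioned above models an ordinal outcome (e.g. banded salary expectations) through cumulative logistic probabilities. A minimal self-contained sketch, with a single covariate and hypothetical coefficients (the paper's actual model and estimates are not reproduced here):

```python
import math

def ordered_logit_probs(x, beta, cutpoints):
    """Category probabilities for a one-covariate ordered logit:
    P(y <= k | x) = logistic(c_k - beta * x), with K-1 cutpoints
    yielding K ordered categories."""
    def logistic(z):
        return 1.0 / (1.0 + math.exp(-z))
    cdf = [logistic(c - beta * x) for c in cutpoints] + [1.0]
    # Differences of adjacent cumulative probabilities give category probs.
    return [cdf[0]] + [cdf[k] - cdf[k - 1] for k in range(1, len(cdf))]

# Hypothetical example: 3 salary-expectation bands, beta and cutpoints
# chosen purely for illustration.
p = ordered_logit_probs(x=1.0, beta=0.4, cutpoints=[-0.5, 1.0])
```

A positive beta shifts probability mass toward the higher categories as x grows, which is the sense in which "experiential learning positively correlates with salary expectations" in such a model.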
NASA Astrophysics Data System (ADS)
Belyaev, A.; Berezhnaya, A.; Betev, L.; Buncic, P.; De, K.; Drizhuk, D.; Klimentov, A.; Lazin, Y.; Lyalin, I.; Mashinistov, R.; Novikov, A.; Oleynik, D.; Polyakov, A.; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tkachenko, I.; Yasnopolskiy, L.
2015-12-01
The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by the higher LHC energies from April 2015 (LHC Run 2). The need for simulation, data processing and analysis would overwhelm the expected capacity of the grid computing facilities deployed by the Worldwide LHC Computing Grid (WLCG). To meet this challenge, the integration of opportunistic resources into the LHC computing model is highly important. The Tier-1 facility at the Kurchatov Institute (NRC-KI) in Moscow is part of the WLCG and will process, simulate and store up to 10% of the total data obtained from the ALICE, ATLAS and LHCb experiments. In addition, the Kurchatov Institute has supercomputers with a peak performance of 0.12 PFLOPS. Delegating even a fraction of these supercomputing resources to LHC computing would notably increase the total capacity. In 2014, the development of a portal combining the Tier-1 and a supercomputer at the Kurchatov Institute was started to provide common interfaces and storage. The portal will be used not only for HENP experiments, but also by other data- and compute-intensive sciences, such as biology with genome sequencing analysis and astrophysics with cosmic-ray analysis and the search for antimatter and dark matter.
A new ATLAS muon CSC readout system with system on chip technology on ATCA platform
NASA Astrophysics Data System (ADS)
Bartoldus, R.; Claus, R.; Garelli, N.; Herbst, R. T.; Huffer, M.; Iakovidis, G.; Iordanidou, K.; Kwan, K.; Kocian, M.; Lankford, A. J.; Moschovakos, P.; Nelson, A.; Ntekas, K.; Ruckman, L.; Russell, J.; Schernau, M.; Schlenker, S.; Su, D.; Valderanis, C.; Wittgen, M.; Yildiz, S. C.
2016-01-01
The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run-2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunication Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources. Together with auxiliary memories, all these components form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for high speed input and output fiberoptic links and TTC allowed the full system of 320 input links from the 32 chambers to be processed by 6 COBs in one ATCA shelf. The full system was installed in September 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning for LHC Run 2.
Physics perspectives with AFTER@LHC (A Fixed Target ExpeRiment at LHC)
NASA Astrophysics Data System (ADS)
Massacrier, L.; Anselmino, M.; Arnaldi, R.; Brodsky, S. J.; Chambert, V.; Da Silva, C.; Didelez, J. P.; Echevarria, M. G.; Ferreiro, E. G.; Fleuret, F.; Gao, Y.; Genolini, B.; Hadjidakis, C.; Hřivnáčová, I.; Kikola, D.; Klein, A.; Kurepin, A.; Kusina, A.; Lansberg, J. P.; Lorcé, C.; Lyonnet, F.; Martinez, G.; Nass, A.; Pisano, C.; Robbe, P.; Schienbein, I.; Schlegel, M.; Scomparin, E.; Seixas, J.; Shao, H. S.; Signori, A.; Steffens, E.; Szymanowski, L.; Topilskaya, N.; Trzeciak, B.; Uggerhøj, U. I.; Uras, A.; Ulrich, R.; Wagner, J.; Yamanaka, N.; Yang, Z.
2018-02-01
AFTER@LHC is an ambitious fixed-target project aiming to address open questions in the domain of proton and neutron spins, the Quark Gluon Plasma and high-x physics, at the highest energy ever reached in the fixed-target mode. Indeed, thanks to the highly energetic 7 TeV proton and 2.76 A TeV lead LHC beams, center-of-mass energies as large as √s_NN = 115 GeV in pp/pA and √s_NN = 72 GeV in AA can be reached, corresponding to an uncharted energy domain between SPS and RHIC. We report two main ways of performing fixed-target collisions at the LHC, both allowing for the usage of one of the existing LHC experiments. In these proceedings, after discussing the projected luminosities considered for one year of data taking at the LHC, we present a selection of projections for light- and heavy-flavour production.
The LHCf experiment at the LHC: Physics Goals and Status
NASA Astrophysics Data System (ADS)
Tricomi, A.; Adriani, O.; Bonechi, L.; Bongi, M.; Castellini, G.; D'Alessandro, R.; Faus, A.; Fukui, K.; Haguenauer, M.; Itow, Y.; Kasahara, K.; Macina, D.; Mase, T.; Masuda, K.; Matsubara, Y.; Menjo, H.; Mizuishi, M.; Muraki, Y.; Papini, P.; Perrot, A. L.; Ricciarini, S.; Sako, T.; Shimizu, Y.; Taki, K.; Tamura, T.; Torii, S.; Turner, W. C.; Velasco, J.; Viciani, A.; Yoshida, K.
2009-12-01
The LHCf experiment is the smallest of the six experiments installed at the Large Hadron Collider (LHC). While the general-purpose detectors have been designed mainly to answer the open questions of elementary particle physics, LHCf has been designed as an experiment at the LHC fully devoted to astroparticle physics. Indeed, thanks to the excellent performance of its double-arm calorimeters, LHCf will be able to measure the flux of neutral particles produced in p-p collisions at the LHC in the very forward region, thus providing invaluable help in the calibration of the air-shower Monte Carlo codes currently used for modelling cosmic-ray interactions in the Earth's atmosphere. Depending on the LHC machine schedule, LHCf will take data in an energy range from 900 GeV up to 14 TeV in the centre-of-mass system (equivalent to 10^17 eV in the laboratory frame), thus covering one of the most interesting and debated regions of the cosmic-ray spectrum, the region around and beyond the "knee".
CERN Computing in Commercial Clouds
NASA Astrophysics Data System (ADS)
Cordeiro, C.; Field, L.; Garrido Bear, B.; Giordano, D.; Jones, B.; Keeble, O.; Manzi, A.; Martelli, E.; McCance, G.; Moreno-García, D.; Traylen, S.
2017-10-01
By the end of 2016, more than 10 million core-hours of computing resources had been delivered by several commercial cloud providers to the four LHC experiments to run their production workloads, from simulation to full-chain processing. In this paper we describe the experience gained at CERN in procuring and exploiting commercial cloud resources for the computing needs of the LHC experiments. The mechanisms used for provisioning, monitoring, accounting, alarming and benchmarking are discussed, as well as the involvement of the LHC collaborations in managing the workflows of the experiments within a multicloud environment.
NASA Astrophysics Data System (ADS)
Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.
2015-12-01
Cloud resources nowadays contribute an essential share of the resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteer computers (e.g. LHC@Home 2.0). In either case, experiments need to prepare a virtual machine image that provides the execution environment for the physics application at hand. Since version 3, CernVM is a minimal and versatile virtual machine image capable of booting different operating systems. The image is less than 20 megabytes in size; the actual operating system is delivered on demand by the CernVM File System. CernVM 3 has matured from a prototype into a production environment. It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer computers, and as a container for the historic Scientific Linux 5 and Scientific Linux 4 based software environments in the course of the long-term data preservation efforts of the ALICE, CMS, and ALEPH experiments. We present experience and lessons learned from the use of CernVM at scale, and provide an outlook on upcoming developments, including support for Scientific Linux 7, the use of container virtualization such as provided by Docker, and the streamlining of virtual machine contextualization towards the cloud-init industry standard.
Simulation of the cabling process for Rutherford cables: An advanced finite element model
NASA Astrophysics Data System (ADS)
Cabanes, J.; Garlasche, M.; Bordini, B.; Dallocchio, A.
2016-12-01
In all existing large particle accelerators (Tevatron, HERA, RHIC, LHC) the main superconducting magnets are based on Rutherford cables, which are characterized by having: strands fully transposed with respect to the magnetic field, a significant compaction that assures a large engineering critical current density and a geometry that allows efficient winding of the coils. The Nb3Sn magnets developed in the framework of the HL-LHC project for improving the luminosity of the Large Hadron Collider (LHC) are also based on Rutherford cables. Due to the characteristics of Nb3Sn wires, the cabling process has become a crucial step in the magnet manufacturing. During cabling the wires experience large plastic deformations that strongly modify the geometrical dimensions of the sub-elements constituting the superconducting strand. These deformations are particularly severe on the cable edges and can result in a significant reduction of the cable critical current as well as of the Residual Resistivity Ratio (RRR) of the stabilizing copper. In order to understand the main parameters that rule the cabling process and their impact on the cable performance, CERN has developed a 3D Finite Element (FE) model based on the LS-Dyna® software that simulates the whole cabling process. In the paper the model is presented together with a comparison between experimental and numerical results for a copper cable produced at CERN.
High-throughput landslide modelling using computational grids
NASA Astrophysics Data System (ADS)
Wallace, M.; Metson, S.; Holcombe, L.; Anderson, M.; Newbold, D.; Brook, N.
2012-04-01
Landslides are an increasing problem in developing countries. Multiple landslides can be triggered by heavy rainfall, resulting in loss of life, homes and critical infrastructure. Through computer simulation of individual slopes it is possible to predict the causes, timing and magnitude of landslides and estimate the potential physical impact. Geographical scientists at the University of Bristol have developed software that integrates a physically-based slope hydrology and stability model (CHASM) with an econometric model (QUESTA) in order to predict landslide risk over time. These models allow multiple scenarios to be evaluated for each slope, accounting for data uncertainties, different engineering interventions, risk management approaches and rainfall patterns. Individual scenarios can be computationally intensive, however each scenario is independent and so multiple scenarios can be executed in parallel. As more simulations are carried out the overhead involved in managing input and output data becomes significant. This is a greater problem if multiple slopes are considered concurrently, as is required both for landslide research and for effective disaster planning at national levels. There are two critical factors in this context: generated data volumes can be in the order of tens of terabytes, and greater numbers of simulations result in long total runtimes. Users of such models, in both the research community and in developing countries, need to develop a means for handling the generation and submission of landslide modelling experiments, and the storage and analysis of the resulting datasets. Additionally, governments in developing countries typically lack the necessary computing resources and infrastructure. Consequently, knowledge that could be gained by aggregating simulation results from many different scenarios across many different slopes remains hidden within the data.
To address these data and workload management issues, University of Bristol particle physicists and geographical scientists are collaborating to develop methods for providing simple and effective access to landslide models and associated simulation data. Particle physicists have valuable experience in dealing with data complexity and management due to the scale of data generated by particle accelerators such as the Large Hadron Collider (LHC). The LHC generates tens of petabytes of data every year, which is stored and analysed using the Worldwide LHC Computing Grid (WLCG). Tools and concepts from the WLCG are being used to drive the development of a Software-as-a-Service (SaaS) platform to provide access to hosted landslide simulation software and data. It contains advanced data management features and allows landslide simulations to be run on the WLCG, dramatically reducing simulation runtimes through parallel execution. The simulations are accessed using a web page through which users can enter and browse input data, submit jobs and visualise results. Replication of the data ensures a local copy can be accessed should a connection to the platform be unavailable. The platform does not know the details of the simulation software it runs, so it is possible to use it to run alternative models at similar scales. This creates the opportunity for activities such as model sensitivity analysis and performance comparison at scales that are impractical using standalone software.
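Because each (slope, rainfall) scenario is independent, the workload fans out naturally. A minimal sketch of that fan-out pattern, with a stub standing in for the external CHASM/QUESTA models and a thread pool standing in for grid job submission (both assumptions, purely for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def run_scenario(params):
    """Stub for one independent slope-stability run; the real models are
    external programs. The failure criterion here is hypothetical."""
    slope_id, rainfall_mm = params
    return {"slope": slope_id, "rainfall_mm": rainfall_mm,
            "failed": rainfall_mm > 120}

# Every scenario is independent, so all can run concurrently; on the
# WLCG each call would instead be submitted as a separate grid job.
scenarios = [(s, r) for s in range(3) for r in (80, 150)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_scenario, scenarios))
```

Aggregating the returned records across many slopes is exactly the analysis step that the SaaS platform's data management layer is meant to make tractable.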
NASA Astrophysics Data System (ADS)
Malik, S.; Shipsey, I.; Cavanaugh, R.; Bloom, K.; Chan, Kai-Feng; D'Hondt, J.; Klima, B.; Narain, M.; Palla, F.; Rolandi, G.; Schörner-Sadenius, T.
2014-06-01
To impart hands-on training in physics analysis, the CMS experiment initiated the CMS Data Analysis School (CMSDAS). It was born over three years ago at the LPC (LHC Physics Centre) at Fermilab and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data taking, the nature of the training also evolved to include more analysis tools, software tutorials and physics analysis. This effort, epitomized as CMSDAS, has proven to be key for new and young physicists to jump-start and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS schools around the globe, CMS is trying to engage the collaboration in its discovery potential and maximize physics output. As a bigger goal, CMS is striving to nurture and increase the engagement of its myriad talents in the development of physics, service work, upgrades, the education of those new to CMS, and the career development of younger members. An extension of the concept to dedicated software and hardware schools is also planned, keeping in mind the ensuing upgrade phase.
Evolution of user analysis on the grid in ATLAS
NASA Astrophysics Data System (ADS)
Dewhurst, A.; Legger, F.; ATLAS Collaboration
2017-10-01
More than one thousand physicists analyse data collected by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN through 150 computing facilities around the world. Efficient distributed analysis requires optimal resource usage and the interplay of several factors: robust grid and software infrastructures, and system capability to adapt to different workloads. The continuous automatic validation of grid sites and the user support provided by a dedicated team of expert shifters have been proven to provide a solid distributed analysis system for ATLAS users. Typical user workflows on the grid, and their associated metrics, are discussed. Measurements of user job performance and typical requirements are also shown.
Elastic extension of a local analysis facility on external clouds for the LHC experiments
NASA Astrophysics Data System (ADS)
Ciaschini, V.; Codispoti, G.; Rinaldi, L.; Aiftimiei, D. C.; Bonacorsi, D.; Calligola, P.; Dal Pra, S.; De Girolamo, D.; Di Maria, R.; Grandi, C.; Michelotto, D.; Panella, M.; Taneja, S.; Semeria, F.
2017-10-01
The computing infrastructures serving the LHC experiments have been designed to cope at most with the average amount of data recorded. Usage peaks, as already observed in Run I, may however generate large backlogs, delaying the completion of data reconstruction and ultimately the availability of data for physics analysis. In order to cope with production peaks, the LHC experiments are exploring the opportunity to access Cloud resources provided by external partners or commercial providers. In this work we present a proof of concept of the elastic extension of a local analysis facility, specifically the Bologna Tier-3 Grid site, onto an external OpenStack infrastructure, for the LHC experiments hosted at the site. We focus on Cloud bursting of the Grid site using DynFarm, a newly designed tool that allows the dynamic registration of new worker nodes with LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools while at the same time serving as an extension of the farm for local usage.
HL-LHC and HE-LHC Upgrade Plans and Opportunities for US Participation
NASA Astrophysics Data System (ADS)
Apollinari, Giorgio
2017-01-01
The US HEP community has identified the exploitation of physics opportunities at the High Luminosity-LHC (HL-LHC) as the highest near-term priority. Thanks to multi-year R&D programs, US National Laboratories and Universities have taken the leadership in the development of technical solutions to increase the LHC luminosity, enabling the HL-LHC Project and uniquely positioning this country to make critical contributions to the LHC luminosity upgrade. This talk will describe the shaping of the US Program to contribute in the next decade to HL-LHC through newly developed technologies such as Nb3Sn focusing magnets or superconducting crab cavities. The experience gained through the execution of the HL-LHC Project in the US will constitute a pool of knowledge and capabilities allowing further developments in the future. Opportunities for US participations in proposed hadron colliders, such as a possible High Energy-LHC (HE-LHC), will be described as well.
The Machine / Job Features Mechanism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alef, M.; Cass, T.; Keijser, J. J.
Within the HEPiX virtualization group and the Worldwide LHC Computing Grid’s Machine/Job Features Task Force, a mechanism has been developed which provides access to detailed information about the current host and the current job to the job itself. This allows user payloads to access meta information, independent of the current batch system or virtual machine model. The information can be accessed either locally via the filesystem on a worker node, or remotely via HTTP(S) from a webserver. This paper describes the final version of the specification from 2016, which was published as an HEP Software Foundation technical note, and the design of the implementations of this version for batch and virtual machine platforms. We discuss early experiences with these implementations and how they can be exploited by experiment frameworks.
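In the filesystem variant of the mechanism, each feature is a small file whose content is its value, and the feature directories are advertised through the MACHINEFEATURES and JOBFEATURES environment variables. A minimal payload-side sketch (the key names used below, such as hs06, follow the published specification, but treat them as assumptions):

```python
import os

def read_feature(base_dir, key):
    """Read one machine/job feature: each key is a small file in the
    features directory whose content is the value."""
    try:
        with open(os.path.join(base_dir, key)) as f:
            return f.read().strip()
    except OSError:
        return None  # this feature is not provided on the resource

# The batch system or VM advertises the directories via environment
# variables; payloads must tolerate either being absent.
machine_dir = os.environ.get("MACHINEFEATURES")
job_dir = os.environ.get("JOBFEATURES")
if machine_dir:
    print("host HS06 power:", read_feature(machine_dir, "hs06"))
if job_dir:
    print("wall limit (s):", read_feature(job_dir, "wall_limit_secs"))
```

The same keys are served over HTTP(S) in the remote variant, so a framework can wrap both access paths behind one helper like this.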
The Particle Physics Playground website: tutorials and activities using real experimental data
NASA Astrophysics Data System (ADS)
Bellis, Matthew; CMS Collaboration
2016-03-01
The CERN Open Data Portal provides access to data from the LHC experiments to anyone with the time and inclination to learn the analysis procedures. The CMS experiment has made a significant amount of data available in essentially the same format the collaboration itself uses, along with software tools and a virtual environment in which to run those tools. These same data have also been mined for educational exercises that range from very simple .csv files that can be analyzed in a spreadsheet to more sophisticated formats that use ROOT, a dominant software package in experimental particle physics but one not used as much in the general computing community. This talk will present the Particle Physics Playground website (http://particle-physics-playground.github.io/), a project that uses data from the CMS experiment, as well as the older CLEO experiment, in tutorials and exercises aimed at high school and undergraduate students and other science enthusiasts. The data are stored as text files and users are provided with starter Python/Jupyter notebook programs and accessor functions which can be modified to perform fairly high-level analyses. The status of the project, success stories, and future plans for the website will be presented. This work was supported in part by NSF Grant PHY-1307562.
Building analytical platform with Big Data solutions for log files of PanDA infrastructure
NASA Astrophysics Data System (ADS)
Alekseev, A. A.; Barreiro Megino, F. G.; Klimentov, A. A.; Korchuganova, T. A.; Maendo, T.; Padolski, S. V.
2018-05-01
The paper describes the implementation of a high-performance system for the processing and analysis of log files for the PanDA infrastructure of the ATLAS experiment at the Large Hadron Collider (LHC), which is responsible for the workload management of around 2M daily jobs across the Worldwide LHC Computing Grid. The solution is based on the ELK technology stack, which includes several components: Filebeat, Logstash, Elasticsearch (ES), and Kibana. Filebeat collects data from the logs; Logstash processes the data and exports it to Elasticsearch; ES provides centralized data storage; and the accumulated data can be viewed using Kibana. These components were integrated with the PanDA infrastructure and replaced previous log processing systems for increased scalability and usability. The authors describe all the components and their configuration tuning for the current tasks, report the scale of the actual system, and give several real-life examples of how this centralized log processing and storage service benefits daily operations.
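Once logs are centralized in Elasticsearch, operational questions become structured queries. A hedged sketch of building one such query body: the field names (pandaid, loglevel, @timestamp) are illustrative placeholders, not the actual PanDA index mapping.

```python
import json

def build_job_error_query(panda_id, since="now-1h"):
    """Build an Elasticsearch Query DSL body selecting ERROR-level log
    lines of one job since a given time (field names are hypothetical)."""
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"pandaid": panda_id}},
                    {"term": {"loglevel": "ERROR"}},
                    {"range": {"@timestamp": {"gte": since}}},
                ]
            }
        },
        "sort": [{"@timestamp": "asc"}],
    }

# Serialized payload as it would be POSTed to an index's _search endpoint.
payload = json.dumps(build_job_error_query(4242))
```

Kibana issues essentially the same Query DSL under the hood, which is what makes the ELK combination convenient for ad-hoc operational debugging.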
NASA Astrophysics Data System (ADS)
Belyaev, Alexander; Cacciapaglia, Giacomo; Ivanov, Igor P.; Rojas-Abatte, Felipe; Thomas, Marc
2018-02-01
The inert two-Higgs-doublet model (i2HDM) is a theoretically well-motivated example of a minimal consistent dark matter (DM) model which provides monojet, mono-Z, mono-Higgs, and vector-boson-fusion + ETmiss signatures at the LHC, complemented by signals in direct and indirect DM search experiments. In this paper we have performed a detailed analysis of the constraints in the full five-dimensional parameter space of the i2HDM, coming from perturbativity, unitarity, electroweak precision data, Higgs data from the LHC, the DM relic density, direct/indirect DM detection, and the LHC monojet analysis, as well as the implications of experimental LHC studies on disappearing charged tracks, relevant to the high-DM-mass region. We demonstrate the complementarity of the above constraints and present projections for future LHC data and direct DM detection experiments to probe further the i2HDM parameter space. The model is implemented in the CalcHEP and micrOMEGAs packages, which are publicly available at the HEPMDB database, and is ready for further exploration in the context of the LHC, the relic density, and DM direct detection.
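A parameter-space scan of this kind amounts to sampling points and keeping those that survive every constraint. The following is a deliberately toy sketch of that workflow: the criteria and thresholds are invented for illustration and bear no relation to the actual i2HDM constraint chain computed with CalcHEP/micrOMEGAs.

```python
import math
import random

def passes_toy_constraints(m_dm_gev, coupling):
    """Toy stand-in for a real constraint chain (perturbativity, relic
    density, direct detection, ...); thresholds are illustrative only."""
    perturbative = abs(coupling) < 4 * math.pi
    # Toy relic-density criterion: heavy DM with a tiny coupling would
    # annihilate too slowly and "overclose" the universe in this sketch.
    not_overabundant = not (m_dm_gev > 500 and abs(coupling) < 0.01)
    return perturbative and not_overabundant

# Uniformly sample a 2D slice of the parameter space and filter it.
random.seed(7)
points = [(random.uniform(50, 1000), random.uniform(-2, 2))
          for _ in range(1000)]
allowed = [p for p in points if passes_toy_constraints(*p)]
```

The complementarity emphasized in the abstract shows up in such scans as different constraints carving away different regions of the sampled space.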
Colling, D.; Britton, D.; Gordon, J.; Lloyd, S.; Doyle, A.; Gronbech, P.; Coles, J.; Sansum, A.; Patrick, G.; Jones, R.; Middleton, R.; Kelsey, D.; Cass, A.; Geddes, N.; Clark, P.; Barnby, L.
2013-01-01
The Large Hadron Collider (LHC) is one of the greatest scientific endeavours to date. The construction of the collider itself and the experiments that collect data from it represent a huge investment, both financially and in terms of human effort, in our hope to understand the way the Universe works at a deeper level. Yet the volumes of data produced are so large that they cannot be analysed at any single computing centre. Instead, the experiments have all adopted distributed computing models based on the LHC Computing Grid. Without the correct functioning of this grid infrastructure the experiments would not be able to understand the data that they have collected. Within the UK, the Grid infrastructure needed by the experiments is provided by the GridPP project. We report on the operations, performance and contributions made to the experiments by the GridPP project during the years of 2010 and 2011—the first two significant years of the running of the LHC. PMID:23230163
CMS Centres Worldwide - a New Collaborative Infrastructure
NASA Astrophysics Data System (ADS)
Taylor, Lucas
2011-12-01
The CMS Experiment at the LHC has established a network of more than fifty inter-connected "CMS Centres" at CERN and in institutes in the Americas, Asia, Australasia, and Europe. These facilities are used for CMS detector and computing grid operations, remote shifts, data quality monitoring and analysis, as well as education and outreach. We present the computing, software, collaborative tools, and videoconferencing systems used in these centres. These include permanently running "telepresence" video links (hardware-based H.323, EVO and Vidyo), Webcasts, and generic Web tools such as CMS-TV for broadcasting live monitoring and outreach information. Being Web-based and experiment-independent, these systems could easily be extended to other organizations. We describe the experiences of using CMS Centres Worldwide in CMS data-taking operations as well as for major media events followed by several hundred TV channels, radio stations, and many more press journalists simultaneously around the world.
Implementation of the ATLAS trigger within the multi-threaded software framework AthenaMT
NASA Astrophysics Data System (ADS)
Wynne, Ben; ATLAS Collaboration
2017-10-01
We present an implementation of the ATLAS High Level Trigger, HLT, that provides parallel execution of trigger algorithms within the ATLAS multithreaded software framework, AthenaMT. This development will enable the ATLAS HLT to meet future challenges due to the evolution of computing hardware and upgrades of the Large Hadron Collider, LHC, and ATLAS Detector. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further, to up to 7.5 times the design value, in 2026 following LHC and ATLAS upgrades. This includes an upgrade of the ATLAS trigger architecture that will result in an increase in the HLT input rate by a factor of 4 to 10 compared to the current maximum rate of 100 kHz. The current ATLAS multiprocess framework, AthenaMP, manages a number of processes that each execute algorithms sequentially for different events. AthenaMT will provide a fully multi-threaded environment that will additionally enable concurrent execution of algorithms within an event. This has the potential to significantly reduce the memory footprint on future manycore devices. An additional benefit of the HLT implementation within AthenaMT is that it facilitates the integration of offline code into the HLT. The trigger must retain high rejection in the face of increasing numbers of pileup collisions. This will be achieved by greater use of offline algorithms that are designed to maximize the discrimination of signal from background. Therefore a unification of the HLT and offline reconstruction software environment is required. This has been achieved while at the same time retaining important HLT-specific optimisations that minimize the computation performed to reach a trigger decision. Such optimizations include early event rejection and reconstruction within restricted geometrical regions. We report on an HLT prototype in which the need for HLT-specific components has been reduced to a minimum. 
Promising results have been obtained with a prototype that includes the key elements of trigger functionality including regional reconstruction and early event rejection. We report on the first experience of migrating trigger selections to this new framework and present the next steps towards a full implementation of the ATLAS trigger.
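The step from AthenaMP's process-level parallelism to AthenaMT's intra-event concurrency, described above, can be illustrated schematically: two reconstruction "algorithms" with no mutual data dependency run concurrently on the same event. This is a toy sketch using Python's standard thread pool, not ATLAS framework code; the event structure and algorithm names are invented.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy "event": an invented structure, not an ATLAS data model.
event = {"calo_cells": [1.2, 3.4, 0.7], "track_hits": [5, 9, 14]}

def calo_clustering(evt):
    # Independent algorithm 1: sum calorimeter cell energies.
    return sum(evt["calo_cells"])

def track_counting(evt):
    # Independent algorithm 2: count hits in the tracker.
    return len(evt["track_hits"])

# AthenaMT-style intra-event concurrency: algorithms without mutual
# data dependencies may be scheduled in parallel within one event,
# rather than strictly sequentially as in AthenaMP.
with ThreadPoolExecutor(max_workers=2) as pool:
    calo_future = pool.submit(calo_clustering, event)
    track_future = pool.submit(track_counting, event)
    results = {"calo_energy": calo_future.result(),
               "n_hits": track_future.result()}

print(results)
```

Sharing one event in memory across threads, instead of duplicating it per process, is what yields the reduced memory footprint on many-core hardware that the abstract highlights.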
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boveia, Antonio; Buchmueller, Oliver; Busoni, Giorgio
2016-03-14
This document summarises the proposal of the LHC Dark Matter Working Group on how to present LHC results on s-channel simplified dark matter models and to compare them to direct (indirect) detection experiments.
Lead ions and Coulomb’s Law at the LHC (CERN)
NASA Astrophysics Data System (ADS)
Cid-Vidal, Xabier; Cid, Ramon
2018-03-01
Although for most of the time the Large Hadron Collider (LHC) at CERN collides protons, for around one month every year lead ions are collided, to expand the diversity of the LHC research programme. Furthermore, in an effort not originally foreseen, proton-lead collisions are also taking place, with results of high interest to the physics community. All the large experiments of the LHC have now joined the heavy-ion programme, including the LHCb experiment, which was not at first expected to be part of it. The aim of this article is to introduce a few simple physical calculations relating to some electrical phenomena that occur when lead-ion bunches are running in the LHC, using Coulomb’s Law, to be taken to the secondary school classroom to help students understand some important physical concepts.
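A classroom estimate of the kind the article proposes can be carried out directly from Coulomb's law, F = k q1 q2 / r^2. The example below computes the repulsion between two fully stripped lead ions (Pb 82+); the 1 nm separation is an assumption chosen for illustration, not a figure from the article.

```python
# Coulomb repulsion between two fully stripped lead ions (Pb 82+).
K = 8.988e9           # Coulomb constant, N m^2 / C^2
E_CHARGE = 1.602e-19  # elementary charge, C
Z_LEAD = 82           # charge number of a fully ionised lead nucleus

def coulomb_force(z1: int, z2: int, r: float) -> float:
    """Magnitude of the Coulomb force (in N) between two point charges
    z1*e and z2*e separated by r metres."""
    return K * (z1 * E_CHARGE) * (z2 * E_CHARGE) / r**2

# Illustrative separation of 1 nm (an assumed value for the example).
force = coulomb_force(Z_LEAD, Z_LEAD, 1e-9)
print(f"{force:.3e} N")
```

Scaling r up or down by a factor of ten changes the force by a factor of one hundred, which is exactly the kind of inverse-square reasoning the article aims to bring into the secondary school classroom.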
The long journey to the Higgs boson and beyond at the LHC: Emphasis on ATLAS
NASA Astrophysics Data System (ADS)
Jenni, Peter
2016-09-01
The journey in search for the Higgs boson with the ATLAS and CMS experiments at the Large Hadron Collider (LHC) at CERN started more than two decades ago, and the first discussions motivating the LHC project dream date back even further, into the 1980s. This article will recall some of these early historical considerations, mention some of the LHC machine milestones and achievements, focus on the unique ATLAS superconducting magnet system as an example of a technological challenge, give an account of the physics results so far, leading up to and featuring particularly the Higgs boson results, and finally sketch prospects for the future. With its emphasis on the ATLAS experiment it is complementary to the preceding article by Tejinder S. Virdee, which focused on the CMS experiment.
The Long Journey to the Higgs Boson and Beyond at the LHC Part II: Emphasis on ATLAS
NASA Astrophysics Data System (ADS)
Jenni, Peter
The journey in search for the Higgs boson with the ATLAS and CMS experiments at the Large Hadron Collider (LHC) at CERN started more than two decades ago, and the first discussions motivating the LHC project dream date back even further, into the 1980s. This article will recall some of these early historical considerations, mention some of the LHC machine milestones and achievements, focus on the unique ATLAS superconducting magnet system as an example of a technological challenge, give an account of the physics results so far, leading up to and featuring particularly the Higgs boson results, and finally sketch prospects for the future. With its emphasis on the ATLAS experiment it is complementary to the preceding article by Tejinder S. Virdee, which focused on the CMS experiment.
Managing operational documentation in the ALICE Detector Control System
NASA Astrophysics Data System (ADS)
Lechman, M.; Augustinus, A.; Bond, P.; Chochula, P.; Kurepin, A.; Pinazza, O.; Rosinsky, P.
2012-12-01
ALICE (A Large Ion Collider Experiment) is one of the big LHC (Large Hadron Collider) experiments at CERN in Geneva, Switzerland. The experiment is composed of 18 sub-detectors controlled by an integrated Detector Control System (DCS) that is implemented using the commercial SCADA package PVSSII. The DCS includes over 1200 network devices and over 1,000,000 monitored parameters, as well as numerous custom-made software components prepared by over 100 developers from all around the world. This complex system is controlled by a single operator via a central user interface. One of the operator's main tasks is recovery from anomalies and errors that may occur during operation. Therefore, clear, complete and easily accessible documentation is essential to guide the shifter through the expert interfaces of the different subsystems. This paper describes the management of the operational documentation in ALICE using a generic repository that is built on a relational database and is integrated with the control system. The experience gained and the conclusions drawn from the project are also presented.
Next Generation Workload Management and Analysis System for Big Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
De, Kaushik
We report on the activities and accomplishments of a four-year project (a three-year grant followed by a one-year no-cost extension) to develop a next generation workload management system for Big Data. The new system is based on the highly successful PanDA software developed for High Energy Physics (HEP) in 2005. PanDA is used by the ATLAS experiment at the Large Hadron Collider (LHC), and the AMS experiment at the space station. The program of work described here was carried out by two teams of developers working collaboratively at Brookhaven National Laboratory (BNL) and the University of Texas at Arlington (UTA). These teams worked closely with the original PanDA team; for the sake of clarity, the work of the next generation team will be referred to as the BigPanDA project. Their work has led to the adoption of BigPanDA by the COMPASS experiment at CERN, and many other experiments and science projects worldwide.
The development of diamond tracking detectors for the LHC
NASA Astrophysics Data System (ADS)
Adam, W.; Berdermann, E.; Bergonzo, P.; de Boer, W.; Bogani, F.; Borchi, E.; Brambilla, A.; Bruzzi, M.; Colledani, C.; Conway, J.; D'Angelo, P.; Dabrowski, W.; Delpierre, P.; Doroshenko, J.; Dulinski, W.; van Eijk, B.; Fallou, A.; Fischer, P.; Fizzotti, F.; Furetta, C.; Gan, K. K.; Ghodbane, N.; Grigoriev, E.; Hallewell, G.; Han, S.; Hartjes, F.; Hrubec, J.; Husson, D.; Kagan, H.; Kaplon, J.; Karl, C.; Kass, R.; Keil, M.; Knöpfle, K. T.; Koeth, T.; Krammer, M.; Logiudice, A.; Lu, R.; mac Lynne, L.; Manfredotti, C.; Marshall, R. D.; Meier, D.; Menichelli, D.; Meuser, S.; Mishina, M.; Moroni, L.; Noomen, J.; Oh, A.; Perera, L.; Pernegger, H.; Pernicka, M.; Polesello, P.; Potenza, R.; Riester, J. L.; Roe, S.; Rudge, A.; Sala, S.; Sampietro, M.; Schnetzer, S.; Sciortino, S.; Stelzer, H.; Stone, R.; Sutera, C.; Trischuk, W.; Tromson, D.; Tuve, C.; Vincenzo, B.; Weilhammer, P.; Wermes, N.; Wetstein, M.; Zeuner, W.; Zoeller, M.; RD42 Collaboration
2003-11-01
Chemical vapor deposition diamond has been discussed extensively as an alternative sensor material for use very close to the interaction region of the LHC, where extreme radiation conditions exist. During the last few years diamond devices have been manufactured and tested with LHC electronics with the goal of creating a detector usable by all LHC experiments. Extensive progress has been made on diamond quality, on the development of diamond trackers and on radiation hardness studies. Adapting the technology to the LHC-specific requirements is now underway. In this paper we present the recent progress achieved.
ATLAS jet trigger update for the LHC run II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delgado, A. T.
The CERN Large Hadron Collider is the biggest and most powerful particle collider ever built. It produces up to 40 million proton-proton collisions per second at unprecedented energies to explore the fundamental laws and properties of Nature. The ATLAS experiment is one of the detectors that analyses and records these collisions. It generates dozens of GB/s of data that have to be reduced before they can be permanently stored; the event selection is made by the ATLAS trigger system, which reduces the data volume by a factor of 10^5. The trigger system has to be highly configurable in order to adapt to changing running conditions and maximize the physics output whilst keeping the output rate under control. A particularly interesting pattern generated during collisions consists of a collimated spray of particles, known as a hadronic jet. To retain the interesting jets and efficiently reject the overwhelming background, optimal jet energy resolution is needed, so the jet trigger software requires CPU-intensive reconstruction algorithms. In order to reduce the resources needed for the reconstruction step, a partial detector readout scheme was developed, which effectively suppresses the low-activity regions of the calorimeter. In this paper we describe the overall ATLAS trigger software, and the jet trigger in particular, along with the improvements made to the system. We then focus on detailed studies of the algorithm timing and the performance impact of the full and partial calorimeter readout schemes. We conclude with an outlook on the jet trigger plans for the next LHC data-taking period.
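The factor-of-10^5 reduction quoted above can be made concrete with a back-of-the-envelope rate calculation. The split of the rejection between the hardware Level-1 trigger and the software HLT below is an illustrative assumption, not the actual ATLAS configuration.

```python
# Back-of-the-envelope trigger rate reduction: 40 MHz of collisions in.
input_rate_hz = 40e6

# Illustrative per-level rejection factors (an assumed split, not the
# real ATLAS numbers): Level-1 in hardware, then the software HLT.
level1_rejection = 400
hlt_rejection = 250

l1_output = input_rate_hz / level1_rejection   # rate passed to the HLT
final_output = l1_output / hlt_rejection       # rate written to storage

total_reduction = input_rate_hz / final_output
print(f"L1 out: {l1_output:.0f} Hz, final: {final_output:.0f} Hz, "
      f"reduction: {total_reduction:.0e}")
```

With these assumed factors, a few hundred events per second survive out of 40 million, which is why the per-event CPU cost of the jet reconstruction algorithms matters so much at the HLT stage.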
Data Quality Monitoring System for New GEM Muon Detectors for the CMS Experiment Upgrade
NASA Astrophysics Data System (ADS)
King, Robert; CMS Muon Group Team
2017-01-01
Gas Electron Multiplier (GEM) detectors are novel detectors designed to improve the muon trigger and tracking performance of the CMS experiment for the high-luminosity upgrade of the LHC. Partial installation of GEM detectors is planned during the 2016-2017 technical stop. Before the GEM system is installed underground, its data acquisition (DAQ) electronics must be thoroughly tested. The DAQ system includes several commercial and custom-built electronic boards running custom firmware. The front-end electronics are radiation-hard and communicate via optical fibers. The data quality monitoring (DQM) software framework has been designed to provide online verification of the integrity of the data produced by the detector electronics, and to promptly identify potential hardware or firmware malfunctions in the system. Local hit reconstruction and clustering algorithms allow quality control of the data produced by each GEM chamber. Once the new detectors are installed, the DQM will monitor the stability and performance of the system during normal data-taking operations. We discuss the design of the DQM system, the software being developed to read out and process the detector data, and the methods used to identify and report hardware and firmware malfunctions of the system.
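The local reconstruction step mentioned above (hits clustered per chamber) can be sketched with a minimal 1D strip-clustering routine: adjacent fired strips are merged into clusters, whose centroids a DQM histogram might then monitor. The strip numbering and data layout are illustrative assumptions, not the GEM DAQ format.

```python
def cluster_strips(fired_strips):
    """Group adjacent fired strip numbers into clusters.

    A cluster is a maximal run of consecutive strip indices; its
    position is taken as the mean strip number (centroid).
    """
    clusters = []
    current = []
    for strip in sorted(fired_strips):
        if current and strip != current[-1] + 1:
            # Gap found: close the current cluster, start a new one.
            clusters.append(current)
            current = []
        current.append(strip)
    if current:
        clusters.append(current)
    # Report (centroid, size) per cluster, as a DQM plot might.
    return [(sum(c) / len(c), len(c)) for c in clusters]

# Strips 3,4,5 form one cluster; 10 and 11 another; 20 is a singleton.
print(cluster_strips([4, 3, 5, 11, 10, 20]))
```

Tracking the cluster size and centroid distributions over time is one simple way such a framework can flag a dead or noisy front-end channel.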
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malik, S.; Shipsey, I.; Cavanaugh, R.
To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born over three years ago at the LPC (LHC Physics Centre) at Fermilab and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data-taking mode, the nature of the earlier training also evolved to include more analysis tools, software tutorials and physics analysis. This effort, epitomized by CMSDAS, has proven to be key for new and young physicists to jump-start their contributions to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS events around the globe, CMS is trying to engage the collaboration in its discovery potential and maximize physics output. As a bigger goal, CMS is striving to nurture and increase the engagement of its myriad talents in the development of physics, service, upgrades, the education of those new to CMS and the career development of younger members. An extension of the concept to dedicated software and hardware schools is also planned, keeping in mind the ensuing upgrade phase.
The operation of the LHC accelerator complex (2/2)
Redaelli, Stefano
2018-05-23
These lectures give an overview of what happens when the LHC is in running mode. They are aimed at students working on the LHC experiments, but all those who are curious about what happens behind the scenes of the LHC are welcome. You will learn all you always wanted to know about the LHC, and never had the courage to ask! The only pre-requisite is a basic, college-level knowledge of electromagnetism and of the principles used to steer charged beams. Topics covered include, among others: the description of the injector chain, from the generation of the protons to the delivery of bunches to the LHC; the steps required to accelerate the beams in the LHC, bring them into collision, and control the luminosity at the interaction points; and the monitoring tools available to the LHC operators, with an explanation of the various plots and panels that can be found on the LHC web pages.
The operation of the LHC accelerator complex (1/2)
Redaelli, Stefano
2018-05-23
These lectures give an overview of what happens when the LHC is in running mode. They are aimed at students working on the LHC experiments, but all those who are curious about what happens behind the scenes of the LHC are welcome. You will learn all you always wanted to know about the LHC, and never had the courage to ask! The only pre-requisite is a basic, college-level knowledge of electromagnetism and of the principles used to steer charged beams. Topics covered include, among others: the description of the injector chain, from the generation of the protons to the delivery of bunches to the LHC; the steps required to accelerate the beams in the LHC, bring them into collision, and control the luminosity at the interaction points; and the monitoring tools available to the LHC operators, with an explanation of the various plots and panels that can be found on the LHC web pages.
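The luminosity control discussed in these lectures follows from the standard relation L = f_rev * n_b * N^2 / (4 pi sigma_x sigma_y) for head-on collisions of round, equally populated Gaussian beams. The numbers below are the nominal LHC design parameters, used here as a textbook illustration rather than figures taken from the lectures.

```python
import math

# Nominal LHC design parameters (textbook values).
f_rev = 11245        # revolution frequency, Hz
n_bunches = 2808     # bunches per beam
n_protons = 1.15e11  # protons per bunch
sigma_cm = 16.7e-4   # transverse beam size at the IP, cm (16.7 um)

def luminosity(f, nb, N, sx, sy):
    """Instantaneous luminosity for head-on collisions of round,
    equally populated Gaussian beams, in cm^-2 s^-1."""
    return f * nb * N**2 / (4 * math.pi * sx * sy)

L = luminosity(f_rev, n_bunches, n_protons, sigma_cm, sigma_cm)
print(f"L ~ {L:.2e} cm^-2 s^-1")
```

The result comes out close to the LHC design value of 10^34 cm^-2 s^-1, and the formula makes clear the operators' levers: bunch intensity and number enter quadratically and linearly, while squeezing the beam size at the interaction point raises the luminosity inversely.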
NASA Astrophysics Data System (ADS)
2014-05-01
WE RECOMMEND
Level 3 Extended Project Student Guide - a non-specialist, generally useful and nicely put together guide to project work
ASE Guide to Research in Science Education - few words wasted in this handy introduction and reference
The Science of Starlight - slow but steady DVD covers useful ground
SPARKvue - impressive software now available as an app
WORTH A LOOK
My Inventions and Other Writings - science, engineering, autobiography, visions and psychic phenomena mixed in a strange but revealing concoction
The Geek Manifesto: Why Science Matters - more enthusiasm than science, but a good motivator and interesting
A Big Ball of Fire: Your questions about the Sun answered - free iTunes download made by and for students goes down well
APPS
Collider visualises LHC experiments ... Science Museum app enhances school trips ... useful information for the Cambridge Science Festival
A hardware fast tracker for the ATLAS trigger
NASA Astrophysics Data System (ADS)
Asbah, Nedaa
2016-09-01
The trigger system of the ATLAS experiment is designed to reduce the event rate from the LHC nominal bunch crossing rate of 40 MHz to about 1 kHz, at the design luminosity of 10^34 cm^-2 s^-1. After a successful period of data taking from 2010 to early 2013, the LHC restarted with much higher instantaneous luminosity. This increases the load on the High Level Trigger system, the second stage of the selection, which is based on software algorithms. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals. The Fast TracKer (FTK) is part of the ATLAS trigger upgrade project. It is a hardware processor that will provide, for every Level-1-accepted event (at up to 100 kHz) and within 100 microseconds, full tracking information for tracks with momentum as low as 1 GeV. Providing fast, extensive access to tracking information, with resolution comparable to the offline reconstruction, FTK will help in the precise detection of primary and secondary vertices, ensuring robust selections and improving the trigger performance. FTK exploits hardware technologies with massive parallelism, combining Associative Memory ASICs, FPGAs and high-speed communication links.
Readiness of the ATLAS liquid argon calorimeter for LHC collisions
NASA Astrophysics Data System (ADS)
Aad, G.; Abbott, B.; Abdallah, J.; Abdelalim, A. A.; Abdesselam, A.; Abdinov, O.; Abi, B.; Abolins, M.; Abramowicz, H.; Abreu, H.; Acharya, B. S.; Adams, D. L.; Addy, T. N.; Adelman, J.; Adorisio, C.; Adragna, P.; Adye, T.; Aefsky, S.; Aguilar-Saavedra, J. A.; Aharrouche, M.; Ahlen, S. P.; Ahles, F.; Ahmad, A.; Ahmed, H.; Ahsan, M.; Aielli, G.; Akdogan, T.; Åkesson, T. P. A.; Akimoto, G.; Akimov, A. V.; Aktas, A.; Alam, M. S.; Alam, M. A.; Albert, J.; Albrand, S.; Aleksa, M.; Aleksandrov, I. N.; Alessandria, F.; Alexa, C.; Alexander, G.; Alexandre, G.; Alexopoulos, T.; Alhroob, M.; Aliev, M.; Alimonti, G.; Alison, J.; Aliyev, M.; Allport, P. P.; Allwood-Spiers, S. E.; Almond, J.; Aloisio, A.; Alon, R.; Alonso, A.; Alviggi, M. G.; Amako, K.; Amelung, C.; Ammosov, V. V.; Amorim, A.; Amorós, G.; Amram, N.; Anastopoulos, C.; Andeen, T.; Anders, C. F.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Anduaga, X. S.; Angerami, A.; Anghinolfi, F.; Anjos, N.; Antonaki, A.; Antonelli, M.; Antonelli, S.; Antunovic, B.; Anulli, F.; Aoun, S.; Arabidze, G.; Aracena, I.; Arai, Y.; Arce, A. T. H.; Archambault, J. P.; Arfaoui, S.; Arguin, J.-F.; Argyropoulos, T.; Arik, E.; Arik, M.; Armbruster, A. J.; Arnaez, O.; Arnault, C.; Artamonov, A.; Arutinov, D.; Asai, M.; Asai, S.; Asfandiyarov, R.; Ask, S.; Åsman, B.; Asner, D.; Asquith, L.; Assamagan, K.; Astbury, A.; Astvatsatourov, A.; Atoian, G.; Auerbach, B.; Auge, E.; Augsten, K.; Aurousseau, M.; Austin, N.; Avolio, G.; Avramidou, R.; Axen, D.; Ay, C.; Azuelos, G.; Azuma, Y.; Baak, M. A.; Baccaglioni, G.; Bacci, C.; Bach, A.; Bachacou, H.; Bachas, K.; Backes, M.; Badescu, E.; Bagnaia, P.; Bai, Y.; Bailey, D. C.; Bain, T.; Baines, J. T.; Baker, O. K.; Baker, M. D.; Dos Santos Pedrosa, F. Baltasar; Banas, E.; Banerjee, P.; Banerjee, S.; Banfi, D.; Bangert, A.; Bansal, V.; Baranov, S. P.; Baranov, S.; Barashkou, A.; Barber, T.; Barberio, E. L.; Barberis, D.; Barbero, M.; Bardin, D. 
Y.; Barillari, T.; Barisonzi, M.; Barklow, T.; Barlow, N.; Barnett, B. M.; Barnett, R. M.; Baron, S.; Baroncelli, A.; Barr, A. J.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Barrillon, P.; Barros, N.; Bartoldus, R.; Bartsch, D.; Bastos, J.; Bates, R. L.; Bathe, S.; Batkova, L.; Batley, J. R.; Battaglia, A.; Battistin, M.; Bauer, F.; Bawa, H. S.; Bazalova, M.; Beare, B.; Beau, T.; Beauchemin, P. H.; Beccherle, R.; Becerici, N.; Bechtle, P.; Beck, G. A.; Beck, H. P.; Beckingham, M.; Becks, K. H.; Bedajanek, I.; Beddall, A. J.; Beddall, A.; Bednár, P.; Bednyakov, V. A.; Bee, C.; Begel, M.; Behar Harpaz, S.; Behera, P. K.; Beimforde, M.; Belanger-Champagne, C.; Bell, P. J.; Bell, W. H.; Bella, G.; Bellagamba, L.; Bellina, F.; Bellomo, M.; Belloni, A.; Belotskiy, K.; Beltramello, O.; Ben Ami, S.; Benary, O.; Benchekroun, D.; Bendel, M.; Benedict, B. H.; Benekos, N.; Benhammou, Y.; Benincasa, G. P.; Benjamin, D. P.; Benoit, M.; Bensinger, J. R.; Benslama, K.; Bentvelsen, S.; Beretta, M.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Berghaus, F.; Berglund, E.; Beringer, J.; Bernardet, K.; Bernat, P.; Bernhard, R.; Bernius, C.; Berry, T.; Bertin, A.; Besson, N.; Bethke, S.; Bianchi, R. M.; Bianco, M.; Biebel, O.; Biesiada, J.; Biglietti, M.; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Biscarat, C.; Bitenc, U.; Black, K. M.; Blair, R. E.; Blanchard, J.-B.; Blanchot, G.; Blocker, C.; Blocki, J.; Blondel, A.; Blum, W.; Blumenschein, U.; Bobbink, G. J.; Bocci, A.; Boehler, M.; Boek, J.; Boelaert, N.; Böser, S.; Bogaerts, J. A.; Bogouch, A.; Bohm, C.; Bohm, J.; Boisvert, V.; Bold, T.; Boldea, V.; Boldyrev, A.; Bondarenko, V. G.; Bondioli, M.; Boonekamp, M.; Booth, J. R. A.; Bordoni, S.; Borer, C.; Borisov, A.; Borissov, G.; Borjanovic, I.; Borroni, S.; Bos, K.; Boscherini, D.; Bosman, M.; Bosteels, M.; Boterenbrood, H.; Bouchami, J.; Boudreau, J.; Bouhova-Thacker, E. V.; Boulahouache, C.; Bourdarios, C.; Boyd, J.; Boyko, I. 
R.; Bozovic-Jelisavcic, I.; Bracinik, J.; Braem, A.; Branchini, P.; Brandenburg, G. W.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Braun, H. M.; Brelier, B.; Bremer, J.; Brenner, R.; Bressler, S.; Breton, D.; Brett, N. D.; Britton, D.; Brochu, F. M.; Brock, I.; Brock, R.; Brodbeck, T. J.; Brodet, E.; Broggi, F.; Bromberg, C.; Brooijmans, G.; Brooks, W. K.; Brown, G.; Brubaker, E.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruneliere, R.; Brunet, S.; Bruni, A.; Bruni, G.; Bruschi, M.; Buanes, T.; Bucci, F.; Buchanan, J.; Buchholz, P.; Buckley, A. G.; Budagov, I. A.; Budick, B.; Büscher, V.; Bugge, L.; Bulekov, O.; Bunse, M.; Buran, T.; Burckhart, H.; Burdin, S.; Burgess, T.; Burke, S.; Busato, E.; Bussey, P.; Buszello, C. P.; Butin, F.; Butler, B.; Butler, J. M.; Buttar, C. M.; Butterworth, J. M.; Byatt, T.; Caballero, J.; Cabrera Urbán, S.; Caforio, D.; Cakir, O.; Calafiura, P.; Calderini, G.; Calfayan, P.; Calkins, R.; Caloba, L. P.; Caloi, R.; Calvet, D.; Camarri, P.; Cambiaghi, M.; Cameron, D.; Campabadal Segura, F.; Campana, S.; Campanelli, M.; Canale, V.; Canelli, F.; Canepa, A.; Cantero, J.; Capasso, L.; Capeans Garrido, M. D. M.; Caprini, I.; Caprini, M.; Capua, M.; Caputo, R.; Caracinha, D.; Caramarcu, C.; Cardarelli, R.; Carli, T.; Carlino, G.; Carminati, L.; Caron, B.; Caron, S.; Carrillo Montoya, G. D.; Carron Montero, S.; Carter, A. A.; Carter, J. R.; Carvalho, J.; Casadei, D.; Casado, M. P.; Cascella, M.; Caso, C.; Castaneda Hernadez, A. M.; Castaneda-Miranda, E.; Castillo Gimenez, V.; Castro, N.; Cataldi, G.; Catinaccio, A.; Catmore, J. R.; Cattai, A.; Cattani, G.; Caughron, S.; Cauz, D.; Cavalleri, P.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Ceradini, F.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cetin, S. A.; Cevenini, F.; Chafaq, A.; Chakraborty, D.; Chan, K.; Chapman, J. D.; Chapman, J. W.; Chareyre, E.; Charlton, D. G.; Chavda, V.; Cheatham, S.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. 
A.; Chen, H.; Chen, S.; Chen, T.; Chen, X.; Cheng, S.; Cheplakov, A.; Chepurnov, V. F.; Cherkaoui El Moursli, R.; Tcherniatine, V.; Chesneanu, D.; Cheu, E.; Cheung, S. L.; Chevalier, L.; Chevallier, F.; Chiarella, V.; Chiefari, G.; Chikovani, L.; Childers, J. T.; Chilingarov, A.; Chiodini, G.; Chizhov, M.; Choudalakis, G.; Chouridou, S.; Chren, D.; Christidi, I. A.; Christov, A.; Chromek-Burckhart, D.; Chu, M. L.; Chudoba, J.; Ciapetti, G.; Ciftci, A. K.; Ciftci, R.; Cinca, D.; Cindro, V.; Ciobotaru, M. D.; Ciocca, C.; Ciocio, A.; Cirilli, M.; Citterio, M.; Clark, A.; Cleland, W.; Clemens, J. C.; Clement, B.; Clement, C.; Clements, D.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Coelli, S.; Coggeshall, J.; Cogneras, E.; Cojocaru, C. D.; Colas, J.; Cole, B.; Colijn, A. P.; Collard, C.; Collins, N. J.; Collins-Tooth, C.; Collot, J.; Colon, G.; Coluccia, R.; Conde Muiño, P.; Coniavitis, E.; Consonni, M.; Constantinescu, S.; Conta, C.; Conventi, F.; Cook, J.; Cooke, M.; Cooper, B. D.; Cooper-Sarkar, A. M.; Cooper-Smith, N. J.; Copic, K.; Cornelissen, T.; Corradi, M.; Corriveau, F.; Corso-Radu, A.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M. J.; Costanzo, D.; Costin, T.; Côté, D.; Coura Torres, R.; Courneyea, L.; Cowan, G.; Cowden, C.; Cox, B. E.; Cranmer, K.; Cranshaw, J.; Cristinziani, M.; Crosetti, G.; Crupi, R.; Crépé-Renaudin, S.; Cuenca Almenar, C.; Cuhadar Donszelmann, T.; Curatolo, M.; Curtis, C. J.; Cwetanski, P.; Czyczula, Z.; D'Auria, S.; D'Onofrio, M.; D'Orazio, A.; da Silva, P. V. M.; da Via, C.; Dabrowski, W.; Dai, T.; Dallapiccola, C.; Dallison, S. J.; Daly, C. H.; Dam, M.; Danielsson, H. O.; Dannheim, D.; Dao, V.; Darbo, G.; Darlea, G. L.; Davey, W.; Davidek, T.; Davidson, N.; Davidson, R.; Davison, A. R.; Dawson, I.; Dawson, J. W.; Daya, R. K.; de, K.; de Asmundis, R.; de Castro, S.; de Castro Faria Salgado, P. 
E.; de Cecco, S.; de Graat, J.; de Groot, N.; de Jong, P.; de La Cruz-Burelo, E.; de La Taille, C.; de Mora, L.; de Oliveira Branco, M.; de Pedis, D.; de Salvo, A.; de Sanctis, U.; de Santo, A.; de Vivie de Regie, J. B.; de Zorzi, G.; Dean, S.; Deberg, H.; Dedes, G.; Dedovich, D. V.; Defay, P. O.; Degenhardt, J.; Dehchar, M.; Del Papa, C.; Del Peso, J.; Del Prete, T.; Dell'Acqua, A.; Dell'Asta, L.; Della Pietra, M.; Della Volpe, D.; Delmastro, M.; Delruelle, N.; Delsart, P. A.; Deluca, C.; Demers, S.; Demichev, M.; Demirkoz, B.; Deng, J.; Deng, W.; Denisov, S. P.; Dennis, C.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Deviveiros, P. O.; Dewhurst, A.; Dewilde, B.; Dhaliwal, S.; Dhullipudi, R.; di Ciaccio, A.; di Ciaccio, L.; di Domenico, A.; di Girolamo, A.; di Girolamo, B.; di Luise, S.; di Mattia, A.; di Nardo, R.; di Simone, A.; di Sipio, R.; Diaz, M. A.; Diblen, F.; Diehl, E. B.; Dietrich, J.; Diglio, S.; Dindar Yagci, K.; Dingfelder, D. J.; Dionisi, C.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djilkibaev, R.; Djobava, T.; Do Vale, M. A. B.; Do Valle Wemans, A.; Dobbs, M.; Dobos, D.; Dobson, E.; Dobson, M.; Dodd, J.; Dogan, O. B.; Doherty, T.; Doi, Y.; Dolejsi, J.; Dolenc, I.; Dolezal, Z.; Dolgoshein, B. A.; Dohmae, T.; Donega, M.; Donini, J.; Dopke, J.; Doria, A.; Dos Anjos, A.; Dotti, A.; Dova, M. T.; Doxiadis, A.; Doyle, A. T.; Drasal, Z.; Driouichi, C.; Dris, M.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Dudarev, A.; Dudziak, F.; Dührssen, M.; Duflot, L.; Dufour, M.-A.; Dunford, M.; Duperrin, A.; Duran Yildiz, H.; Dushkin, A.; Duxfield, R.; Dwuznik, M.; Düren, M.; Ebenstein, W. L.; Ebke, J.; Eckert, S.; Eckweiler, S.; Edmonds, K.; Edwards, C. A.; Eerola, P.; Egorov, K.; Ehrenfeld, W.; Ehrich, T.; Eifert, T.; Eigen, G.; Einsweiler, K.; Eisenhandler, E.; Ekelof, T.; El Kacimi, M.; Ellert, M.; Elles, S.; Ellinghaus, F.; Ellis, K.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Ely, R.; Emeliyanov, D.; Engelmann, R.; Engl, A.; Epp, B.; Eppig, A.; Epshteyn, V. 
S.; Ereditato, A.; Eriksson, D.; Ermoline, I.; Ernst, J.; Ernst, M.; Ernwein, J.; Errede, D.; Errede, S.; Ertel, E.; Escalier, M.; Escobar, C.; Espinal Curull, X.; Esposito, B.; Etienne, F.; Etienvre, A. I.; Etzion, E.; Evans, H.; Fabbri, L.; Fabre, C.; Faccioli, P.; Facius, K.; Fakhrutdinov, R. M.; Falciano, S.; Falou, A. C.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farley, J.; Farooque, T.; Farrington, S. M.; Farthouat, P.; Fassi, F.; Fassnacht, P.; Fassouliotis, D.; Fatholahzadeh, B.; Fayard, L.; Fayette, F.; Febbraro, R.; Federic, P.; Fedin, O. L.; Fedorko, I.; Fedorko, W.; Feligioni, L.; Felzmann, C. U.; Feng, C.; Feng, E. J.; Fenyuk, A. B.; Ferencei, J.; Ferland, J.; Fernandes, B.; Fernando, W.; Ferrag, S.; Ferrando, J.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferrer, A.; Ferrer, M. L.; Ferrere, D.; Ferretti, C.; Fiascaris, M.; Fiedler, F.; Filipčič, A.; Filippas, A.; Filthaut, F.; Fincke-Keeler, M.; Fiolhais, M. C. N.; Fiorini, L.; Firan, A.; Fischer, G.; Fisher, M. J.; Flechl, M.; Fleck, I.; Fleckner, J.; Fleischmann, P.; Fleischmann, S.; Flick, T.; Flores Castillo, L. R.; Flowerdew, M. J.; Föhlisch, F.; Fokitis, M.; Fonseca Martin, T.; Forbush, D. A.; Formica, A.; Forti, A.; Fortin, D.; Foster, J. M.; Fournier, D.; Foussat, A.; Fowler, A. J.; Fowler, K.; Fox, H.; Francavilla, P.; Franchino, S.; Francis, D.; Franklin, M.; Franz, S.; Fraternali, M.; Fratina, S.; Freestone, J.; French, S. T.; Froeschl, R.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Fullana Torregrosa, E.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gadfort, T.; Gadomski, S.; Gagliardi, G.; Gagnon, P.; Galea, C.; Gallas, E. J.; Gallas, M. V.; Gallop, B. J.; Gallus, P.; Galyaev, E.; Gan, K. K.; Gao, Y. S.; Gaponenko, A.; Garcia-Sciveres, M.; García, C.; García Navarro, J. E.; Gardner, R. W.; Garelli, N.; Garitaonandia, H.; Garonne, V.; Gatti, C.; Gaudio, G.; Gaumer, O.; Gauzzi, P.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gayde, J.-C.; Gazis, E. N.; Ge, P.; Gee, C. N. 
P.; Geich-Gimbel, Ch.; Gellerstedt, K.; Gemme, C.; Genest, M. H.; Gentile, S.; Georgatos, F.; George, S.; Gerlach, P.; Gershon, A.; Geweniger, C.; Ghazlane, H.; Ghez, P.; Ghodbane, N.; Giacobbe, B.; Giagu, S.; Giakoumopoulou, V.; Giangiobbe, V.; Gianotti, F.; Gibbard, B.; Gibson, A.; Gibson, S. M.; Gilbert, L. M.; Gilchriese, M.; Gilewsky, V.; Gillberg, D.; Gillman, A. R.; Gingrich, D. M.; Ginzburg, J.; Giokaris, N.; Giordani, M. P.; Giordano, R.; Giovannini, P.; Giraud, P. F.; Girtler, P.; Giugni, D.; Giusti, P.; Gjelsten, B. K.; Gladilin, L. K.; Glasman, C.; Glazov, A.; Glitza, K. W.; Glonti, G. L.; Godfrey, J.; Godlewski, J.; Goebel, M.; Göpfert, T.; Goeringer, C.; Gössling, C.; Göttfert, T.; Goggi, V.; Goldfarb, S.; Goldin, D.; Golling, T.; Gollub, N. P.; Gomes, A.; Gomez Fajardo, L. S.; Gonçalo, R.; Gonella, L.; Gong, C.; González de La Hoz, S.; Gonzalez Silva, M. L.; Gonzalez-Sevilla, S.; Goodson, J. J.; Goossens, L.; Gorbounov, P. A.; Gordon, H. A.; Gorelov, I.; Gorfine, G.; Gorini, B.; Gorini, E.; Gorišek, A.; Gornicki, E.; Goryachev, S. V.; Goryachev, V. N.; Gosdzik, B.; Gosselink, M.; Gostkin, M. I.; Gough Eschrich, I.; Gouighri, M.; Goujdami, D.; Goulette, M. P.; Goussiou, A. G.; Goy, C.; Grabowska-Bold, I.; Grafström, P.; Grahn, K.-J.; Granado Cardoso, L.; Grancagnolo, F.; Grancagnolo, S.; Grassi, V.; Gratchev, V.; Grau, N.; Gray, H. M.; Gray, J. A.; Graziani, E.; Green, B.; Greenshaw, T.; Greenwood, Z. D.; Gregor, I. M.; Grenier, P.; Griesmayer, E.; Griffiths, J.; Grigalashvili, N.; Grillo, A. A.; Grimm, K.; Grinstein, S.; Grishkevich, Y. V.; Groer, L. S.; Grognuz, J.; Groh, M.; Groll, M.; Gross, E.; Grosse-Knetter, J.; Groth-Jensen, J.; Grybel, K.; Guarino, V. J.; Guicheney, C.; Guida, A.; Guillemin, T.; Guler, H.; Gunther, J.; Guo, B.; Gupta, A.; Gusakov, Y.; Gutierrez, A.; Gutierrez, P.; Guttman, N.; Gutzwiller, O.; Guyot, C.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haas, S.; Haber, C.; Hackenburg, R.; Hadavand, H. K.; Hadley, D. 
R.; Haefner, P.; Härtel, R.; Hajduk, Z.; Hakobyan, H.; Haller, J.; Hamacher, K.; Hamilton, A.; Hamilton, S.; Han, H.; Han, L.; Hanagaki, K.; Hance, M.; Handel, C.; Hanke, P.; Hansen, J. R.; Hansen, J. B.; Hansen, J. D.; Hansen, P. H.; Hansl-Kozanecka, T.; Hansson, P.; Hara, K.; Hare, G. A.; Harenberg, T.; Harrington, R. D.; Harris, O. B.; Harris, O. M.; Harrison, K.; Hartert, J.; Hartjes, F.; Haruyama, T.; Harvey, A.; Hasegawa, S.; Hasegawa, Y.; Hashemi, K.; Hassani, S.; Hatch, M.; Haug, F.; Haug, S.; Hauschild, M.; Hauser, R.; Havranek, M.; Hawkes, C. M.; Hawkings, R. J.; Hawkins, D.; Hayakawa, T.; Hayward, H. S.; Haywood, S. J.; He, M.; Head, S. J.; Hedberg, V.; Heelan, L.; Heim, S.; Heinemann, B.; Heisterkamp, S.; Helary, L.; Heller, M.; Hellman, S.; Helsens, C.; Hemperek, T.; Henderson, R. C. W.; Henke, M.; Henrichs, A.; Correia, A. M. Henriques; Henrot-Versille, S.; Hensel, C.; Henß, T.; Hershenhorn, A. D.; Herten, G.; Hertenberger, R.; Hervas, L.; Hessey, N. P.; Hidvegi, A.; Higón-Rodriguez, E.; Hill, D.; Hill, J. C.; Hiller, K. H.; Hillier, S. J.; Hinchliffe, I.; Hirose, M.; Hirsch, F.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoffman, J.; Hoffmann, D.; Hohlfeld, M.; Holmgren, S. O.; Holy, T.; Holzbauer, J. L.; Homma, Y.; Homola, P.; Horazdovsky, T.; Hori, T.; Horn, C.; Horner, S.; Horvat, S.; Hostachy, J.-Y.; Hou, S.; Houlden, M. A.; Hoummada, A.; Howe, T.; Hrivnac, J.; Hryn'ova, T.; Hsu, P. J.; Hsu, S.-C.; Huang, G. S.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Hughes, E. W.; Hughes, G.; Hughes-Jones, R. 
E.; Hurst, P.; Hurwitz, M.; Husemann, U.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Idarraga, J.; Iengo, P.; Igonkina, O.; Ikegami, Y.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Ilyushenka, Y.; Imori, M.; Ince, T.; Ioannou, P.; Iodice, M.; Irles Quiles, A.; Ishikawa, A.; Ishino, M.; Ishmukhametov, R.; Isobe, T.; Issakov, V.; Issever, C.; Istin, S.; Itoh, Y.; Ivashin, A. V.; Iwanski, W.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jackson, J. N.; Jackson, P.; Jaekel, M.; Jahoda, M.; Jain, V.; Jakobs, K.; Jakobsen, S.; Jakubek, J.; Jana, D.; Jansen, E.; Jantsch, A.; Janus, M.; Jared, R. C.; Jarlskog, G.; Jarron, P.; Jeanty, L.; Jelen, K.; Jen-La Plante, I.; Jenni, P.; Jez, P.; Jézéquel, S.; Ji, W.; Jia, J.; Jiang, Y.; Jimenez Belenguer, M.; Jin, G.; Jin, S.; Jinnouchi, O.; Joffe, D.; Johansen, M.; Johansson, K. E.; Johansson, P.; Johnert, S.; Johns, K. A.; Jon-And, K.; Jones, G.; Jones, R. W. L.; Jones, T. W.; Jones, T. J.; Jonsson, O.; Joos, D.; Joram, C.; Jorge, P. M.; Juranek, V.; Jussel, P.; Kabachenko, V. V.; Kabana, S.; Kaci, M.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagan, M.; Kaiser, S.; Kajomovitz, E.; Kalinovskaya, L. V.; Kalinowski, A.; Kama, S.; Kanaya, N.; Kaneda, M.; Kantserov, V. A.; Kanzaki, J.; Kaplan, B.; Kapliy, A.; Kaplon, J.; Karagounis, M.; Karagoz Unel, M.; Kartvelishvili, V.; Karyukhin, A. N.; Kashif, L.; Kasmi, A.; Kass, R. D.; Kastanas, A.; Kastoryano, M.; Kataoka, M.; Kataoka, Y.; Katsoufis, E.; Katzy, J.; Kaushik, V.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kayl, M. S.; Kayumov, F.; Kazanin, V. A.; Kazarinov, M. Y.; Kazi, S. I.; Keates, J. R.; Keeler, R.; Keener, P. T.; Kehoe, R.; Keil, M.; Kekelidze, G. D.; Kelly, M.; Kennedy, J.; Kenyon, M.; Kepka, O.; Kerschen, N.; Kerševan, B. P.; Kersten, S.; Kessoku, K.; Khakzad, M.; Khalil-Zada, F.; Khandanyan, H.; Khanov, A.; Kharchenko, D.; Khodinov, A.; Kholodenko, A. 
G.; Khomich, A.; Khoriauli, G.; Khovanskiy, N.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kilvington, G.; Kim, H.; Kim, M. S.; Kim, P. C.; Kim, S. H.; Kind, O.; Kind, P.; King, B. T.; Kirk, J.; Kirsch, G. P.; Kirsch, L. E.; Kiryunin, A. E.; Kisielewska, D.; Kittelmann, T.; Kiyamura, H.; Kladiva, E.; Klein, M.; Klein, U.; Kleinknecht, K.; Klemetti, M.; Klier, A.; Klimentov, A.; Klingenberg, R.; Klinkby, E. B.; Klioutchnikova, T.; Klok, P. F.; Klous, S.; Kluge, E.-E.; Kluge, T.; Kluit, P.; Klute, M.; Kluth, S.; Knecht, N. S.; Kneringer, E.; Ko, B. R.; Kobayashi, T.; Kobel, M.; Koblitz, B.; Kocian, M.; Kocnar, A.; Kodys, P.; Köneke, K.; König, A. C.; Köpke, L.; Koetsveld, F.; Koevesarki, P.; Koffas, T.; Koffeman, E.; Kohn, F.; Kohout, Z.; Kohriki, T.; Kokott, T.; Kolanoski, H.; Kolesnikov, V.; Koletsou, I.; Koll, J.; Kollar, D.; Kolos, S.; Kolya, S. D.; Komar, A. A.; Komaragiri, J. R.; Kondo, T.; Kono, T.; Kononov, A. I.; Konoplich, R.; Konovalov, S. P.; Konstantinidis, N.; Koperny, S.; Korcyl, K.; Kordas, K.; Koreshev, V.; Korn, A.; Korolkov, I.; Korolkova, E. V.; Korotkov, V. A.; Kortner, O.; Kostka, P.; Kostyukhin, V. V.; Kotamäki, M. J.; Kotov, S.; Kotov, V. M.; Kotov, K. Y.; Koupilova, Z.; Kourkoumelis, C.; Koutsman, A.; Kowalewski, R.; Kowalski, H.; Kowalski, T. Z.; Kozanecki, W.; Kozhin, A. S.; Kral, V.; Kramarenko, V. A.; Kramberger, G.; Krasny, M. W.; Krasznahorkay, A.; Kreisel, A.; Krejci, F.; Krepouri, A.; Kretzschmar, J.; Krieger, P.; Krobath, G.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Krumshteyn, Z. V.; Kubota, T.; Kuehn, S.; Kugel, A.; Kuhl, T.; Kuhn, D.; Kukhtin, V.; Kulchitsky, Y.; Kuleshov, S.; Kummer, C.; Kuna, M.; Kupco, A.; Kurashige, H.; Kurata, M.; Kurchaninov, L. L.; Kurochkin, Y. A.; Kus, V.; Kuykendall, W.; Kuznetsova, E.; Kvasnicka, O.; Kwee, R.; La Rosa, M.; La Rotonda, L.; Labarga, L.; Labbe, J.; Lacasta, C.; Lacava, F.; Lacker, H.; Lacour, D.; Lacuesta, V. 
R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Lamanna, M.; Lampen, C. L.; Lampl, W.; Lancon, E.; Landgraf, U.; Landon, M. P. J.; Lane, J. L.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Lanza, A.; Laplace, S.; Lapoire, C.; Laporte, J. F.; Lari, T.; Larionov, A. V.; Larner, A.; Lasseur, C.; Lassnig, M.; Laurelli, P.; Lavrijsen, W.; Laycock, P.; Lazarev, A. B.; Lazzaro, A.; Le Dortz, O.; Le Guirriec, E.; Le Maner, C.; Le Menedeu, E.; Le Vine, M.; Leahu, M.; Lebedev, A.; Lebel, C.; Lecompte, T.; Ledroit-Guillon, F.; Lee, H.; Lee, J. S. H.; Lee, S. C.; Lefebvre, M.; Legendre, M.; Legeyt, B. C.; Legger, F.; Leggett, C.; Lehmacher, M.; Lehmann Miotto, G.; Lei, X.; Leitner, R.; Lelas, D.; Lellouch, D.; Lellouch, J.; Leltchouk, M.; Lendermann, V.; Leney, K. J. C.; Lenz, T.; Lenzen, G.; Lenzi, B.; Leonhardt, K.; Leroy, C.; Lessard, J.-R.; Lester, C. G.; Leung Fook Cheong, A.; Levêque, J.; Levin, D.; Levinson, L. J.; Levitski, M. S.; Levonian, S.; Lewandowska, M.; Leyton, M.; Li, H.; Li, J.; Li, S.; Li, X.; Liang, Z.; Liang, Z.; Liberti, B.; Lichard, P.; Lichtnecker, M.; Lie, K.; Liebig, W.; Liko, D.; Lilley, J. N.; Lim, H.; Limosani, A.; Limper, M.; Lin, S. C.; Lindsay, S. W.; Linhart, V.; Linnemann, J. T.; Liolios, A.; Lipeles, E.; Lipinsky, L.; Lipniacka, A.; Liss, T. M.; Lissauer, D.; Litke, A. M.; Liu, C.; Liu, D.; Liu, H.; Liu, J. B.; Liu, M.; Liu, S.; Liu, T.; Liu, Y.; Livan, M.; Lleres, A.; Lloyd, S. L.; Lobodzinska, E.; Loch, P.; Lockman, W. S.; Lockwitz, S.; Loddenkoetter, T.; Loebinger, F. K.; Loginov, A.; Loh, C. W.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Loken, J.; Lopes, L.; Lopez Mateos, D.; Losada, M.; Loscutoff, P.; Losty, M. J.; Lou, X.; Lounis, A.; Loureiro, K. F.; Lovas, L.; Love, J.; Love, P.; Lowe, A. J.; Lu, F.; Lu, J.; Lubatti, H. 
J.; Luci, C.; Lucotte, A.; Ludwig, A.; Ludwig, D.; Ludwig, I.; Ludwig, J.; Luehring, F.; Luisa, L.; Lumb, D.; Luminari, L.; Lund, E.; Lund-Jensen, B.; Lundberg, B.; Lundberg, J.; Lundquist, J.; Lutz, G.; Lynn, D.; Lys, J.; Lytken, E.; Ma, H.; Ma, L. L.; Maccarrone, G.; Macchiolo, A.; Maček, B.; Miguens, J. Machado; Mackeprang, R.; Madaras, R. J.; Mader, W. F.; Maenner, R.; Maeno, T.; Mättig, P.; Mättig, S.; Magalhaes Martins, P. J.; Magradze, E.; Magrath, C. A.; Mahalalel, Y.; Mahboubi, K.; Mahmood, A.; Mahout, G.; Maiani, C.; Maidantchik, C.; Maio, A.; Majewski, S.; Makida, Y.; Makouski, M.; Makovec, N.; Malecki, Pa.; Malecki, P.; Maleev, V. P.; Malek, F.; Mallik, U.; Malon, D.; Maltezos, S.; Malyshev, V.; Malyukov, S.; Mambelli, M.; Mameghani, R.; Mamuzic, J.; Manabe, A.; Mandelli, L.; Mandić, I.; Mandrysch, R.; Maneira, J.; Mangeard, P. S.; Manjavidze, I. D.; Manousakis-Katsikakis, A.; Mansoulie, B.; Mapelli, A.; Mapelli, L.; March, L.; Marchand, J. F.; Marchese, F.; Marcisovsky, M.; Marino, C. P.; Marques, C. N.; Marroquim, F.; Marshall, R.; Marshall, Z.; Martens, F. K.; Marti I Garcia, S.; Martin, A. J.; Martin, A. J.; Martin, B.; Martin, B.; Martin, F. F.; Martin, J. P.; Martin, T. A.; Martin Dit Latour, B.; Martinez, M.; Martinez Outschoorn, V.; Martini, A.; Martynenko, V.; Martyniuk, A. C.; Maruyama, T.; Marzano, F.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Massaro, G.; Massol, N.; Mastroberardino, A.; Masubuchi, T.; Mathes, M.; Matricon, P.; Matsumoto, H.; Matsunaga, H.; Matsushita, T.; Mattravers, C.; Maxfield, S. J.; May, E. N.; Mayne, A.; Mazini, R.; Mazur, M.; Mazzanti, M.; Mazzanti, P.; Mc Donald, J.; Mc Kee, S. P.; McCarn, A.; McCarthy, R. L.; McCubbin, N. A.; McFarlane, K. W.; McGlone, H.; McHedlidze, G.; McLaren, R. A.; McMahon, S. J.; McMahon, T. R.; McPherson, R. A.; Meade, A.; Mechnich, J.; Mechtel, M.; Medinnis, M.; Meera-Lebbai, R.; Meguro, T. 
M.; Mehdiyev, R.; Mehlhase, S.; Mehta, A.; Meier, K.; Meirose, B.; Melamed-Katz, A.; Mellado Garcia, B. R.; Meng, Z.; Menke, S.; Meoni, E.; Merkl, D.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Messina, A. M.; Messmer, I.; Metcalfe, J.; Mete, A. S.; Meyer, J.-P.; Meyer, J.; Meyer, T. C.; Meyer, W. T.; Miao, J.; Micu, L.; Middleton, R. P.; Migas, S.; Mijović, L.; Mikenberg, G.; Mikuž, M.; Miller, D. W.; Mills, W. J.; Mills, C. M.; Milov, A.; Milstead, D. A.; Minaenko, A. A.; Miñano, M.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Mir, L. M.; Mirabelli, G.; Misawa, S.; Miscetti, S.; Misiejuk, A.; Mitrevski, J.; Mitsou, V. A.; Miyagawa, P. S.; Mjörnmark, J. U.; Mladenov, D.; Moa, T.; Mockett, P.; Moed, S.; Moeller, V.; Mönig, K.; Möser, N.; Mohn, B.; Mohr, W.; Mohrdieck-Möck, S.; Moles-Valls, R.; Molina-Perez, J.; Moloney, G.; Monk, J.; Monnier, E.; Montesano, S.; Monticelli, F.; Moore, R. W.; Herrera, C. Mora; Moraes, A.; Morais, A.; Morel, J.; Morello, G.; Moreno, D.; Moreno Llácer, M.; Morettini, P.; Morii, M.; Morley, A. K.; Mornacchi, G.; Morozov, S. V.; Morris, J. D.; Moser, H. G.; Mosidze, M.; Moss, J.; Mount, R.; Mountricha, E.; Mouraviev, S. V.; Moyse, E. J. W.; Mudrinic, M.; Mueller, F.; Mueller, J.; Mueller, K.; Müller, T. A.; Muenstermann, D.; Muir, A.; Murillo Garcia, R.; Murray, W. J.; Mussche, I.; Musto, E.; Myagkov, A. G.; Myska, M.; Nadal, J.; Nagai, K.; Nagano, K.; Nagasaka, Y.; Nairz, A. M.; Nakamura, K.; Nakano, I.; Nakatsuka, H.; Nanava, G.; Napier, A.; Nash, M.; Nation, N. R.; Nattermann, T.; Naumann, T.; Navarro, G.; Nderitu, S. K.; Neal, H. A.; Nebot, E.; Nechaeva, P.; Negri, A.; Negri, G.; Nelson, A.; Nelson, T. K.; Nemecek, S.; Nemethy, P.; Nepomuceno, A. A.; Nessi, M.; Neubauer, M. S.; Neusiedl, A.; Neves, R. N.; Nevski, P.; Newcomer, F. M.; Nicholson, C.; Nickerson, R. 
B.; Nicolaidou, R.; Nicolas, L.; Nicoletti, G.; Niedercorn, F.; Nielsen, J.; Nikiforov, A.; Nikolaev, K.; Nikolic-Audit, I.; Nikolopoulos, K.; Nilsen, H.; Nilsson, P.; Nisati, A.; Nishiyama, T.; Nisius, R.; Nodulman, L.; Nomachi, M.; Nomidis, I.; Nomoto, H.; Nordberg, M.; Nordkvist, B.; Notz, D.; Novakova, J.; Nozaki, M.; Nožička, M.; Nugent, I. M.; Nuncio-Quiroz, A.-E.; Nunes Hanninger, G.; Nunnemann, T.; Nurse, E.; O'Neil, D. C.; O'Shea, V.; Oakham, F. G.; Oberlack, H.; Ochi, A.; Oda, S.; Odaka, S.; Odier, J.; Odino, G. A.; Ogren, H.; Oh, S. H.; Ohm, C. C.; Ohshima, T.; Ohshita, H.; Ohsugi, T.; Okada, S.; Okawa, H.; Okumura, Y.; Olcese, M.; Olchevski, A. G.; Oliveira, M.; Oliveira Damazio, D.; Oliver, J.; Oliver Garcia, E.; Olivito, D.; Olszewski, A.; Olszowska, J.; Omachi, C.; Onofre, A.; Onyisi, P. U. E.; Oram, C. J.; Ordonez, G.; Oreglia, M. J.; Oren, Y.; Orestano, D.; Orlov, I.; Oropeza Barrera, C.; Orr, R. S.; Ortega, E. O.; Osculati, B.; Osuna, C.; Otec, R.; P Ottersbach, J.; Ould-Saada, F.; Ouraou, A.; Ouyang, Q.; Owen, M.; Owen, S.; Ozcan, V. E.; Ozone, K.; Ozturk, N.; Pacheco Pages, A.; Padhi, S.; Padilla Aranda, C.; Paganis, E.; Pahl, C.; Paige, F.; Pajchel, K.; Pal, A.; Palestini, S.; Pallin, D.; Palma, A.; Palmer, J. D.; Pan, Y. B.; Panagiotopoulou, E.; Panes, B.; Panikashvili, N.; Panitkin, S.; Pantea, D.; Panuskova, M.; Paolone, V.; Papadopoulou, Th. D.; Park, S. J.; Park, W.; Parker, M. A.; Parker, S. I.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pasqualucci, E.; Passardi, G.; Passeri, A.; Pastore, F.; Pastore, Fr.; Pásztor, G.; Pataraia, S.; Pater, J. R.; Patricelli, S.; Patwa, A.; Pauly, T.; Peak, L. S.; Pecsy, M.; Pedraza Morales, M. I.; Peleganchuk, S. V.; Peng, H.; Penson, A.; Penwell, J.; Perantoni, M.; Perez, K.; Perez Codina, E.; Pérez García-Estañ, M. T.; Perez Reale, V.; Perini, L.; Pernegger, H.; Perrino, R.; Perrodo, P.; Persembe, S.; Perus, P.; Peshekhonov, V. D.; Petersen, B. A.; Petersen, J.; Petersen, T. 
C.; Petit, E.; Petridou, C.; Petrolo, E.; Petrucci, F.; Petschull, D.; Petteni, M.; Pezoa, R.; Pfeifer, B.; Phan, A.; Phillips, A. W.; Piacquadio, G.; Piccinini, M.; Piegaia, R.; Pilcher, J. E.; Pilkington, A. D.; Pina, J.; Pinamonti, M.; Pinfold, J. L.; Ping, J.; Pinto, B.; Pirotte, O.; Pizio, C.; Placakyte, R.; Plamondon, M.; Plano, W. G.; Pleier, M.-A.; Poblaguev, A.; Poddar, S.; Podlyski, F.; Poffenberger, P.; Poggioli, L.; Pohl, M.; Polci, F.; Polesello, G.; Policicchio, A.; Polini, A.; Poll, J.; Polychronakos, V.; Pomarede, D. M.; Pomeroy, D.; Pommès, K.; Pontecorvo, L.; Pope, B. G.; Popovic, D. S.; Poppleton, A.; Popule, J.; Portell Bueso, X.; Porter, R.; Pospelov, G. E.; Pospichal, P.; Pospisil, S.; Potekhin, M.; Potrap, I. N.; Potter, C. J.; Potter, C. T.; Potter, K. P.; Poulard, G.; Poveda, J.; Prabhu, R.; Pralavorio, P.; Prasad, S.; Pravahan, R.; Preda, T.; Pretzl, K.; Pribyl, L.; Price, D.; Price, L. E.; Prichard, P. M.; Prieur, D.; Primavera, M.; Prokofiev, K.; Prokoshin, F.; Protopopescu, S.; Proudfoot, J.; Prudent, X.; Przysiezniak, H.; Psoroulas, S.; Ptacek, E.; Puigdengoles, C.; Purdham, J.; Purohit, M.; Puzo, P.; Pylypchenko, Y.; Qi, M.; Qian, J.; Qian, W.; Qian, Z.; Qin, Z.; Qing, D.; Quadt, A.; Quarrie, D. R.; Quayle, W. B.; Quinonez, F.; Raas, M.; Radeka, V.; Radescu, V.; Radics, B.; Rador, T.; Ragusa, F.; Rahal, G.; Rahimi, A. M.; Rahm, D.; Rajagopalan, S.; Rammes, M.; Ratoff, P. N.; Rauscher, F.; Rauter, E.; Raymond, M.; Read, A. L.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Reinherz-Aronis, E.; Reinsch, A.; Reisinger, I.; Reljic, D.; Rembser, C.; Ren, Z. L.; Renkel, P.; Rescia, S.; Rescigno, M.; Resconi, S.; Resende, B.; Reznicek, P.; Rezvani, R.; Richards, A.; Richards, R. A.; Richter, D.; Richter, R.; Richter-Was, E.; Ridel, M.; Rieke, S.; Rijpstra, M.; Rijssenbeek, M.; Rimoldi, A.; Rinaldi, L.; Rios, R. R.; Riu, I.; Rivoltella, G.; Rizatdinova, F.; Rizvi, E. R.; Roa Romero, D. A.; Robertson, S. 
H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, M.; Robson, A.; Rocha de Lima, J. G.; Roda, C.; Rodriguez, D.; Rodriguez Garcia, Y.; Roe, S.; Røhne, O.; Rojo, V.; Rolli, S.; Romaniouk, A.; Romanov, V. M.; Romeo, G.; Romero Maltrana, D.; Roos, L.; Ros, E.; Rosati, S.; Rosenbaum, G. A.; Rosenberg, E. I.; Rosselet, L.; Rossi, L. P.; Rotaru, M.; Rothberg, J.; Rottländer, I.; Rousseau, D.; Royon, C. R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Ruckert, B.; Ruckstuhl, N.; Rud, V. I.; Rudolph, G.; Rühr, F.; Ruggieri, F.; Ruiz-Martinez, A.; Rumyantsev, L.; Rusakovich, N. A.; Rutherfoord, J. P.; Ruwiedel, C.; Ruzicka, P.; Ryabov, Y. F.; Ryadovikov, V.; Ryan, P.; Rybkin, G.; Rzaeva, S.; Saavedra, A. F.; Sadrozinski, H. F.-W.; Sadykov, R.; Sakamoto, H.; Salamanna, G.; Salamon, A.; Saleem, M.; Salihagic, D.; Salnikov, A.; Salt, J.; Salvachua Ferrando, B. M.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sampsonidis, D.; Samset, B. H.; Sanchis Lozano, M. A.; Sandaker, H.; Sander, H. G.; Sanders, M. P.; Sandhoff, M.; Sandstroem, R.; Sandvoss, S.; Sankey, D. P. C.; Sanny, B.; Sansoni, A.; Santamarina Rios, C.; Santi, L.; Santoni, C.; Santonico, R.; Santos, D.; Santos, J.; Saraiva, J. G.; Sarangi, T.; Sarkisyan-Grinbaum, E.; Sarri, F.; Sasaki, O.; Sasaki, T.; Sasao, N.; Satsounkevitch, I.; Sauvage, G.; Savard, P.; Savine, A. Y.; Savinov, V.; Sawyer, L.; Saxon, D. H.; Says, L. P.; Sbarra, C.; Sbrizzi, A.; Scannicchio, D. A.; Schaarschmidt, J.; Schacht, P.; Schäfer, U.; Schaetzel, S.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Schamov, A. G.; Schegelsky, V. A.; Scheirich, D.; Schernau, M.; Scherzer, M. I.; Schiavi, C.; Schieck, J.; Schioppa, M.; Schlenker, S.; Schlereth, J. L.; Schmid, P.; Schmidt, M. P.; Schmieden, K.; Schmitt, C.; Schmitz, M.; Schott, M.; Schouten, D.; Schovancova, J.; Schram, M.; Schreiner, A.; Schroeder, C.; Schroer, N.; Schroers, M.; Schuler, G.; Schultes, J.; Schultz-Coulon, H.-C.; Schumacher, J.; Schumacher, M.; Schumm, B. 
A.; Schune, Ph.; Schwanenberger, C.; Schwartzman, A.; Schwemling, Ph.; Schwienhorst, R.; Schwierz, R.; Schwindling, J.; Scott, W. G.; Searcy, J.; Sedykh, E.; Segura, E.; Seidel, S. C.; Seiden, A.; Seifert, F.; Seixas, J. M.; Sekhniaidze, G.; Seliverstov, D. M.; Sellden, B.; Seman, M.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Seuster, R.; Severini, H.; Sevior, M. E.; Sfyrla, A.; Shamim, M.; Shan, L. Y.; Shank, J. T.; Shao, Q. T.; Shapiro, M.; Shatalov, P. B.; Shaver, L.; Shaw, C.; Shaw, K.; Sherman, D.; Sherwood, P.; Shibata, A.; Shimojima, M.; Shin, T.; Shmeleva, A.; Shochet, M. J.; Shupe, M. A.; Sicho, P.; Sidoti, A.; Siebel, A.; Siegert, F.; Siegrist, J.; Sijacki, Dj.; Silbert, O.; Silva, J.; Silver, Y.; Silverstein, D.; Silverstein, S. B.; Simak, V.; Simic, Lj.; Simion, S.; Simmons, B.; Simonyan, M.; Sinervo, P.; Sinev, N. B.; Sipica, V.; Siragusa, G.; Sisakyan, A. N.; Sivoklokov, S. Yu.; Sjoelin, J.; Sjursen, T. B.; Skubic, P.; Skvorodnev, N.; Slater, M.; Slavicek, T.; Sliwa, K.; Sloper, J.; Sluka, T.; Smakhtin, V.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, B. C.; Smith, D.; Smith, K. M.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snow, S. W.; Snow, J.; Snuverink, J.; Snyder, S.; Soares, M.; Sobie, R.; Sodomka, J.; Soffer, A.; Solans, C. A.; Solar, M.; Solfaroli Camillocci, E.; Solodkov, A. A.; Solovyanov, O. V.; Soluk, R.; Sondericker, J.; Sopko, V.; Sopko, B.; Sosebee, M.; Sosnovtsev, V. V.; Sospedra Suay, L.; Soukharev, A.; Spagnolo, S.; Spanò, F.; Speckmayer, P.; Spencer, E.; Spighi, R.; Spigo, G.; Spila, F.; Spiwoks, R.; Spousta, M.; Spreitzer, T.; Spurlock, B.; Denis, R. D. St.; Stahl, T.; Stamen, R.; Stancu, S. N.; Stanecka, E.; Stanek, R. W.; Stanescu, C.; Stapnes, S.; Starchenko, E. A.; Stark, J.; Staroba, P.; Starovoitov, P.; Stastny, J.; Staude, A.; Stavina, P.; Stavropoulos, G.; Steinbach, P.; Steinberg, P.; Stekl, I.; Stelzer, B.; Stelzer, H. 
J.; Stelzer-Chilton, O.; Stenzel, H.; Stevenson, K.; Stewart, G.; Stockton, M. C.; Stoerig, K.; Stoicea, G.; Stonjek, S.; Strachota, P.; Stradling, A.; Straessner, A.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Strong, J. A.; Stroynowski, R.; Strube, J.; Stugu, B.; Stumer, I.; Soh, D. A.; Su, D.; Suchkov, S. I.; Sugaya, Y.; Sugimoto, T.; Suhr, C.; Suk, M.; Sulin, V. V.; Sultansoy, S.; Sumida, T.; Sun, X.; Sundermann, J. E.; Suruliz, K.; Sushkov, S.; Susinno, G.; Sutton, M. R.; Suzuki, T.; Suzuki, Y.; Sviridov, Yu. M.; Sykora, I.; Sykora, T.; Szymocha, T.; Sánchez, J.; Ta, D.; Tackmann, K.; Taffard, A.; Tafirout, R.; Taga, A.; Takahashi, Y.; Takai, H.; Takashima, R.; Takeda, H.; Takeshita, T.; Talby, M.; Talyshev, A.; Tamsett, M. C.; Tanaka, J.; Tanaka, R.; Tanaka, S.; Tanaka, S.; Tappern, G. P.; Tapprogge, S.; Tardif, D.; Tarem, S.; Tarrade, F.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tassi, E.; Taylor, C.; Taylor, F. E.; Taylor, G. N.; Taylor, R. P.; Taylor, W.; Teixeira-Dias, P.; Ten Kate, H.; Teng, P. K.; Terada, S.; Terashi, K.; Terron, J.; Terwort, M.; Testa, M.; Teuscher, R. J.; Tevlin, C. M.; Thadome, J.; Thananuwong, R.; Thioye, M.; Thoma, S.; Thomas, J. P.; Thomas, T. L.; Thompson, E. N.; Thompson, P. D.; Thompson, P. D.; Thompson, R. J.; Thompson, A. S.; Thomson, E.; Thun, R. P.; Tic, T.; Tikhomirov, V. O.; Tikhonov, Y. A.; Timmermans, C. J. W. P.; Tipton, P.; Tique Aires Viegas, F. J.; Tisserant, S.; Tobias, J.; Toczek, B.; Todorov, T.; Todorova-Nova, S.; Toggerson, B.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tollefson, K.; Tomasek, L.; Tomasek, M.; Tomasz, F.; Tomoto, M.; Tompkins, D.; Tompkins, L.; Toms, K.; Tong, G.; Tonoyan, A.; Topfel, C.; Topilin, N. D.; Torrence, E.; Torró Pastor, E.; Toth, J.; Touchard, F.; Tovey, D. R.; Tovey, S. N.; Trefzger, T.; Tremblet, L.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Trinh, T. N.; Tripiana, M. 
F.; Triplett, N.; Trivedi, A.; Trocmé, B.; Troncon, C.; Trzupek, A.; Tsarouchas, C.; Tseng, J. C.-L.; Tsiafis, I.; Tsiakiris, M.; Tsiareshka, P. V.; Tsionou, D.; Tsipolitis, G.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsukerman, I. I.; Tsulaia, V.; Tsung, J.-W.; Tsuno, S.; Tsybychev, D.; Turala, M.; Turecek, D.; Turk Cakir, I.; Turlay, E.; Tuts, P. M.; Twomey, M. S.; Tylmad, M.; Tyndel, M.; Tzanakos, G.; Uchida, K.; Ueda, I.; Uhlenbrock, M.; Uhrmacher, M.; Ukegawa, F.; Unal, G.; Underwood, D. G.; Undrus, A.; Unel, G.; Unno, Y.; Urbaniec, D.; Urkovsky, E.; Urquijo, P.; Urrejola, P.; Usai, G.; Uslenghi, M.; Vacavant, L.; Vacek, V.; Vachon, B.; Vahsen, S.; Valenta, J.; Valente, P.; Valentinetti, S.; Valkar, S.; Valladolid Gallego, E.; Vallecorsa, S.; Valls Ferrer, J. A.; van Berg, R.; van der Graaf, H.; van der Kraaij, E.; van der Poel, E.; van der Ster, D.; van Eldik, N.; van Gemmeren, P.; van Kesteren, Z.; van Vulpen, I.; Vandelli, W.; Vandoni, G.; Vaniachine, A.; Vankov, P.; Vannucci, F.; Varela Rodriguez, F.; Vari, R.; Varnes, E. W.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vasilyeva, L.; Vassilakopoulos, V. I.; Vazeille, F.; Vegni, G.; Veillet, J. J.; Vellidis, C.; Veloso, F.; Veness, R.; Veneziano, S.; Ventura, A.; Ventura, D.; Venturi, M.; Venturi, N.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J. C.; Vetterli, M. C.; Vichou, I.; Vickey, T.; Viehhauser, G. H. A.; Villa, M.; Villani, E. G.; Villaplana Perez, M.; Villate, J.; Vilucchi, E.; Vincter, M. G.; Vinek, E.; Vinogradov, V. B.; Viret, S.; Virzi, J.; Vitale, A.; Vitells, O. V.; Vivarelli, I.; Vives Vaques, F.; Vlachos, S.; Vlasak, M.; Vlasov, N.; Vogt, H.; Vokac, P.; Volpi, M.; Volpini, G.; von der Schmitt, H.; von Loeben, J.; von Radziewski, H.; von Toerne, E.; Vorobel, V.; Vorobiev, A. P.; Vorwerk, V.; Vos, M.; Voss, R.; Voss, T. T.; Vossebeld, J. 
H.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vu Anh, T.; Vudragovic, D.; Vuillermet, R.; Vukotic, I.; Wagner, P.; Wahlen, H.; Walbersloh, J.; Walder, J.; Walker, R.; Walkowiak, W.; Wall, R.; Wang, C.; Wang, H.; Wang, J.; Wang, J. C.; Wang, S. M.; Ward, C. P.; Warsinsky, M.; Wastie, R.; Watkins, P. M.; Watson, A. T.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, A. T.; Waugh, B. M.; Webel, M.; Weber, J.; Weber, M. D.; Weber, M.; Weber, M. S.; Weber, P.; Weidberg, A. R.; Weingarten, J.; Weiser, C.; Wellenstein, H.; Wells, P. S.; Wen, M.; Wenaus, T.; Wendler, S.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, P.; Werth, M.; Werthenbach, U.; Wessels, M.; Whalen, K.; Wheeler-Ellis, S. J.; Whitaker, S. P.; White, A.; White, M. J.; White, S.; Whiteson, D.; Whittington, D.; Wicek, F.; Wicke, D.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik, L. A. M.; Wildauer, A.; Wildt, M. A.; Wilhelm, I.; Wilkens, H. G.; Williams, E.; Williams, H. H.; Willis, W.; Willocq, S.; Wilson, J. A.; Wilson, M. G.; Wilson, A.; Wingerter-Seez, I.; Winklmeier, F.; Wittgen, M.; Wolter, M. W.; Wolters, H.; Wosiek, B. K.; Wotschack, J.; Woudstra, M. J.; Wraight, K.; Wright, C.; Wright, D.; Wrona, B.; Wu, S. L.; Wu, X.; Wulf, E.; Xella, S.; Xie, S.; Xie, Y.; Xu, D.; Xu, N.; Yamada, M.; Yamamoto, A.; Yamamoto, S.; Yamamura, T.; Yamanaka, K.; Yamaoka, J.; Yamazaki, T.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, U. K.; Yang, Y.; Yang, Z.; Yao, W.-M.; Yao, Y.; Yasu, Y.; Ye, J.; Ye, S.; Yilmaz, M.; Yoosoofmiya, R.; Yorita, K.; Yoshida, R.; Young, C.; Youssef, S. P.; Yu, D.; Yu, J.; Yu, M.; Yu, X.; Yuan, J.; Yuan, L.; Yurkewicz, A.; Zaidan, R.; Zaitsev, A. M.; Zajacova, Z.; Zambrano, V.; Zanello, L.; Zarzhitsky, P.; Zaytsev, A.; Zeitnitz, C.; Zeller, M.; Zema, P. 
F.; Zemla, A.; Zendler, C.; Zenin, O.; Zenis, T.; Zenonos, Z.; Zenz, S.; Zerwas, D.; Zevi Della Porta, G.; Zhan, Z.; Zhang, H.; Zhang, J.; Zhang, Q.; Zhang, X.; Zhao, L.; Zhao, T.; Zhao, Z.; Zhemchugov, A.; Zheng, S.; Zhong, J.; Zhou, B.; Zhou, N.; Zhou, Y.; Zhu, C. G.; Zhu, H.; Zhu, Y.; Zhuang, X.; Zhuravlov, V.; Zilka, B.; Zimmermann, R.; Zimmermann, S.; Zimmermann, S.; Ziolkowski, M.; Zitoun, R.; Živković, L.; Zmouchko, V. V.; Zobernig, G.; Zoccoli, A.; Zur Nedden, M.; Zutshi, V.
2010-12-01
The ATLAS liquid argon calorimeter has been operating continuously since August 2006. At that time, only part of the calorimeter was read out, but since the beginning of 2008, all calorimeter cells have been connected to the ATLAS readout system in preparation for LHC collisions. This paper gives an overview of the liquid argon calorimeter performance measured in situ with random triggers, calibration data, cosmic muons, and LHC beam splash events. Results on the detector operation, timing performance, electronics noise, and gain stability are presented. High-energy deposits from radiative cosmic muons and beam splash events allow a check of the intrinsic constant term of the energy resolution. The uniformity of the electromagnetic barrel calorimeter response along η (averaged over φ) is measured at the percent level using minimum-ionizing cosmic muons. Finally, studies of electromagnetic showers from radiative muons have been used to cross-check the Monte Carlo simulation. The performance results obtained using the ATLAS readout, data acquisition, and reconstruction software indicate that the liquid argon calorimeter is well prepared for collisions at the dawn of the LHC era.
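For context, the "intrinsic constant term" referred to above is the term c in the standard parameterization of calorimeter energy resolution (a textbook form, not quoted from this abstract):

```latex
\frac{\sigma_E}{E} \;=\; \frac{a}{\sqrt{E}} \,\oplus\, \frac{b}{E} \,\oplus\, c
```

Here a is the stochastic (sampling) term, b the noise term, c the constant term, and ⊕ denotes addition in quadrature. High-energy deposits such as beam splash events probe c directly, because the stochastic and noise contributions vanish as E grows.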
Overview of LHC physics results at ICHEP
Mangano, Michelangelo
2018-06-20
This month's LHC physics day will review the physics results presented by the LHC experiments at the 2010 ICHEP in Paris. The experimental presentations will be preceded by the bi-weekly LHC accelerator status report. The meeting will be broadcast via EVO (detailed information will appear at the time of the meeting in the "Video Services" item on the left menu bar). For those attending, information on accommodation, access to CERN and laptop registration is available from http://cern.ch/lpcc/visits
Calibration techniques and strategies for the present and future LHC electromagnetic calorimeters
NASA Astrophysics Data System (ADS)
Aleksa, M.
2018-02-01
This document describes the calibration strategies and techniques applied by the two general-purpose experiments at the LHC, ATLAS and CMS, and discusses their respective strengths and weaknesses from the author's point of view. The resulting performance of both calorimeters is described and compared on the basis of selected physics results. Future upgrade plans for the High-Luminosity LHC (HL-LHC) are briefly introduced, and the planned calibration strategies for the upgraded detectors are shown.
Turning the LHC ring into a new physics search machine
NASA Astrophysics Data System (ADS)
Orava, Risto
2017-03-01
It is proposed to turn the LHC collider ring into an ultimate automatic search engine for new physics in four consecutive phases: (1) searches for heavy particles produced in the Central Exclusive Process (CEP) pp → p + X + p, based on the existing Beam Loss Monitoring (BLM) system of the LHC; (2) a feasibility study of using the LHC ring as a gravitational-wave antenna; (3) extensions of the current BLM system to facilitate precise registration of the exit points of the selected CEP protons from the LHC beam vacuum chamber; (4) integration of the BLM-based event tagging system with the trigger/data-acquisition systems of the LHC experiments to provide an on-line automatic search machine for the physics of tomorrow.
Results of searches for extra spatial dimensions in the CMS experiment at the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shmatov, S. V., E-mail: Sergei.Shmatov@cern.ch
2016-03-15
An overview of basic results of the CMS experiment concerning searches for signals from extra spatial dimensions during the first run of the Large Hadron Collider (LHC), at c.m. proton–proton collision energies of 7 and 8 TeV, is given.
Expected Performance of the ATLAS Experiment - Detector, Trigger and Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aad, G.; Abat, E.; Abbott, B.
2011-11-28
The Large Hadron Collider (LHC) at CERN promises a major step forward in the understanding of the fundamental nature of matter. The ATLAS experiment is a general-purpose detector for the LHC, whose design was guided by the need to accommodate the wide spectrum of possible physics signatures. The major remit of the ATLAS experiment is the exploration of the TeV mass scale, where groundbreaking discoveries are expected. The focus is on the investigation of electroweak symmetry breaking and, linked to this, the search for the Higgs boson, as well as the search for physics beyond the Standard Model. In this report a detailed examination of the expected performance of the ATLAS detector is provided, with a major aim being to investigate the experimental sensitivity to a wide range of measurements and potential observations of new physical processes. An earlier summary of the expected capabilities of ATLAS was compiled in 1999 [1]. A survey of the physics capabilities of the CMS detector was published in [2]. The design of the ATLAS detector has now been finalised, and its construction and installation have been completed [3]. An extensive test-beam programme was undertaken. Furthermore, the simulation and reconstruction software code and frameworks have been completely rewritten. The revisions incorporated reflect improved detector modelling as well as major technical changes to the software technology. A greatly improved understanding of calibration and alignment techniques, and their practical impact on performance, is now in place. The studies reported here are based on full simulations of the ATLAS detector response. A variety of event generators were employed. The simulation and reconstruction of these large event samples thus provided an important operational test of the new ATLAS software system.
In addition, the processing was distributed world-wide over the ATLAS Grid facilities and hence provided an important test of the ATLAS computing system - this is the origin of the expression 'CSC studies' ('computing system commissioning'), which is occasionally referred to in these volumes. The work reported generally assumes that the detector is fully operational, and in this sense represents an idealised detector: establishing the best performance of the ATLAS detector with LHC proton-proton collisions is a challenging task for the future. The results summarised here therefore represent the best estimate of ATLAS capabilities before real operational experience of the full detector with beam. Unless otherwise stated, simulations also do not include the effect of additional interactions in the same or other bunch-crossings, and the effect of neutron background is neglected. The simulations thus correspond to the low-luminosity performance of the ATLAS detector. This report is broadly divided into two parts: first the performance for identification of physics objects is examined in detail, followed by a detailed assessment of the performance of the trigger system. This part is subdivided into chapters surveying the capabilities for charged-particle tracking; electron/photon, muon and tau identification; jet and missing transverse energy reconstruction; b-tagging algorithms and performance; and finally the trigger system performance. In each chapter of the report, there is a further subdivision into shorter notes describing different aspects studied. The second major subdivision of the report addresses physics measurement capabilities and new physics search sensitivities.
Individual chapters in this part discuss ATLAS physics capabilities in Standard Model QCD and electroweak processes, in the top quark sector, in b-physics, in searches for Higgs bosons, in supersymmetry searches, and finally in searches for other new particles predicted in more exotic models.
Physics Goals and Experimental Challenges of the Proton-Proton High-Luminosity Operation of the LHC
NASA Astrophysics Data System (ADS)
Campana, P.; Klute, M.; Wells, P. S.
2016-10-01
The completion of Run 1 of the Large Hadron Collider (LHC) at CERN has seen the discovery of the Higgs boson and an unprecedented number of precise measurements of the Standard Model, and Run 2 has begun to provide the first data at higher energy. The high-luminosity upgrade of the LHC (HL-LHC) and the four experiments (ATLAS, CMS, ALICE, and LHCb) will exploit the full potential of the collider to discover and explore new physics beyond the Standard Model. We review the experimental challenges and the physics opportunities in proton-proton collisions at the HL-LHC.
Abort Gap Cleaning for LHC Run 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uythoven, Jan; Boccardi, Andrea; Bravin, Enrico
2014-07-01
To minimize the beam losses at the moment of an LHC beam dump, the 3 μs long abort gap should contain as few particles as possible. Its population can be minimised by abort gap cleaning using the LHC transverse damper system. The LHC Run 1 experience is briefly recalled and changes foreseen for LHC Run 2 are presented. They include improvements in the observation of the abort gap population and the mechanism to decide whether cleaning is required, changes to the hardware of the transverse dampers to reduce the detrimental effect on the luminosity lifetime, and proposed changes to the applied cleaning algorithms.
P-Type Silicon Strip Sensors for the new CMS Tracker at HL-LHC
NASA Astrophysics Data System (ADS)
Adam, W.; Bergauer, T.; Brondolin, E.; Dragicevic, M.; Friedl, M.; Frühwirth, R.; Hoch, M.; Hrubec, J.; König, A.; Steininger, H.; Waltenberger, W.; Alderweireldt, S.; Beaumont, W.; Janssen, X.; Lauwers, J.; Van Mechelen, P.; Van Remortel, N.; Van Spilbeeck, A.; Beghin, D.; Brun, H.; Clerbaux, B.; Delannoy, H.; De Lentdecker, G.; Fasanella, G.; Favart, L.; Goldouzian, R.; Grebenyuk, A.; Karapostoli, G.; Lenzi, Th.; Léonard, A.; Luetic, J.; Postiau, N.; Seva, T.; Vanlaer, P.; Vannerom, D.; Wang, Q.; Zhang, F.; Abu Zeid, S.; Blekman, F.; De Bruyn, I.; De Clercq, J.; D'Hondt, J.; Deroover, K.; Lowette, S.; Moortgat, S.; Moreels, L.; Python, Q.; Skovpen, K.; Van Mulders, P.; Van Parijs, I.; Bakhshiansohi, H.; Bondu, O.; Brochet, S.; Bruno, G.; Caudron, A.; Delaere, C.; Delcourt, M.; De Visscher, S.; Francois, B.; Giammanco, A.; Jafari, A.; Komm, M.; Krintiras, G.; Lemaitre, V.; Magitteri, A.; Mertens, A.; Michotte, D.; Musich, M.; Piotrzkowski, K.; Quertenmont, L.; Szilasi, N.; Vidal Marono, M.; Wertz, S.; Beliy, N.; Caebergs, T.; Daubie, E.; Hammad, G. H.; Härkönen, J.; Lampén, T.; Luukka, P.; Peltola, T.; Tuominen, E.; Tuovinen, E.; Eerola, P.; Tuuva, T.; Baulieu, G.; Boudoul, G.; Caponetto, L.; Combaret, C.; Contardo, D.; Dupasquier, T.; Gallbit, G.; Lumb, N.; Mirabito, L.; Perries, S.; Vander Donckt, M.; Viret, S.; Agram, J.-L.; Andrea, J.; Bloch, D.; Bonnin, C.; Brom, J.-M.; Chabert, E.; Chanon, N.; Charles, L.; Conte, E.; Fontaine, J.-Ch.; Gross, L.; Hosselet, J.; Jansova, M.; Tromson, D.; Autermann, C.; Feld, L.; Karpinski, W.; Kiesel, K. 
M.; Klein, K.; Lipinski, M.; Ostapchuk, A.; Pierschel, G.; Preuten, M.; Rauch, M.; Schael, S.; Schomakers, C.; Schulz, J.; Schwering, G.; Wlochal, M.; Zhukov, V.; Pistone, C.; Fluegge, G.; Kuensken, A.; Pooth, O.; Stahl, A.; Aldaya, M.; Asawatangtrakuldee, C.; Beernaert, K.; Bertsche, D.; Contreras-Campana, C.; Eckerlin, G.; Eckstein, D.; Eichhorn, T.; Gallo, E.; Garay Garcia, J.; Hansen, K.; Haranko, M.; Harb, A.; Hauk, J.; Keaveney, J.; Kalogeropoulos, A.; Kleinwort, C.; Lohmann, W.; Mankel, R.; Maser, H.; Mittag, G.; Muhl, C.; Mussgiller, A.; Pitzl, D.; Reichelt, O.; Savitskyi, M.; Schuetze, P.; Walsh, R.; Zuber, A.; Biskop, H.; Buhmann, P.; Centis-Vignali, M.; Garutti, E.; Haller, J.; Hoffmann, M.; Lapsien, T.; Matysek, M.; Perieanu, A.; Scharf, Ch.; Schleper, P.; Schmidt, A.; Schwandt, J.; Sonneveld, J.; Steinbrück, G.; Vormwald, B.; Wellhausen, J.; Abbas, M.; Amstutz, C.; Barvich, T.; Barth, Ch.; Boegelspacher, F.; De Boer, W.; Butz, E.; Caselle, M.; Colombo, F.; Dierlamm, A.; Freund, B.; Hartmann, F.; Heindl, S.; Husemann, U.; Kornmayer, A.; Kudella, S.; Muller, Th.; Simonis, H. J.; Steck, P.; Weber, M.; Weiler, Th.; Anagnostou, G.; Asenov, P.; Assiouras, P.; Daskalakis, G.; Kyriakis, A.; Loukas, D.; Paspalaki, L.; Siklér, F.; Veszprémi, V.; Bhardwaj, A.; Dalal, R.; Jain, G.; Ranjan, K.; Bakhshiansohl, H.; Behnamian, H.; Khakzad, M.; Naseri, M.; Cariola, P.; Creanza, D.; De Palma, M.; De Robertis, G.; Fiore, L.; Franco, M.; Loddo, F.; Silvestris, L.; Maggi, G.; Martiradonna, S.; My, S.; Selvaggi, G.; Albergo, S.; Cappello, G.; Chiorboli, M.; Costa, S.; Di Mattia, A.; Giordano, F.; Potenza, R.; Saizu, M. A.; Tricomi, A.; Tuve, C.; Barbagli, G.; Brianzi, M.; Ciaranfi, R.; Ciulli, V.; Civinini, C.; D'Alessandro, R.; Focardi, E.; Latino, G.; Lenzi, P.; Meschini, M.; Paoletti, S.; Russo, L.; Scarlini, E.; Sguazzoni, G.; Strom, D.; Viliani, L.; Ferro, F.; Lo Vetere, M.; Robutti, E.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Malvezzi, S.; Manzoni, R. 
A.; Menasce, D.; Moroni, L.; Pedrini, D.; Azzi, P.; Bacchetta, N.; Bisello, D.; Dall'Osso, M.; Pozzobon, N.; Tosi, M.; De Canio, F.; Gaioni, L.; Manghisoni, M.; Nodari, B.; Riceputi, E.; Re, V.; Traversi, G.; Comotti, D.; Ratti, L.; Alunni Solestizi, L.; Biasini, M.; Bilei, G. M.; Cecchi, C.; Checcucci, B.; Ciangottini, D.; Fanò, L.; Gentsos, C.; Ionica, M.; Leonardi, R.; Manoni, E.; Mantovani, G.; Marconi, S.; Mariani, V.; Menichelli, M.; Modak, A.; Morozzi, A.; Moscatelli, F.; Passeri, D.; Placidi, P.; Postolache, V.; Rossi, A.; Saha, A.; Santocchia, A.; Storchi, L.; Spiga, D.; Androsov, K.; Azzurri, P.; Arezzini, S.; Bagliesi, G.; Basti, A.; Boccali, T.; Borrello, L.; Bosi, F.; Castaldi, R.; Ciampa, A.; Ciocci, M. A.; Dell'Orso, R.; Donato, S.; Fedi, G.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Lomtadze, T.; Magazzu, G.; Martini, L.; Mazzoni, E.; Messineo, A.; Moggi, A.; Morsani, F.; Palla, F.; Palmonari, F.; Raffaelli, F.; Rizzi, A.; Savoy-Navarro, A.; Spagnolo, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Bellan, R.; Costa, M.; Covarelli, R.; Da Rocha Rolo, M.; Demaria, N.; Rivetti, A.; Dellacasa, G.; Mazza, G.; Migliore, E.; Monteil, E.; Pacher, L.; Ravera, F.; Solano, A.; Fernandez, M.; Gomez, G.; Jaramillo Echeverria, R.; Moya, D.; Gonzalez Sanchez, F. J.; Vila, I.; Virto, A. L.; Abbaneo, D.; Ahmed, I.; Albert, E.; Auzinger, G.; Berruti, G.; Bianchi, G.; Blanchot, G.; Bonnaud, J.; Caratelli, A.; Ceresa, D.; Christiansen, J.; Cichy, K.; Daguin, J.; D'Auria, A.; Detraz, S.; Deyrail, D.; Dondelewski, O.; Faccio, F.; Frank, N.; Gadek, T.; Gill, K.; Honma, A.; Hugo, G.; Jara Casas, L. 
M.; Kaplon, J.; Kornmayer, A.; Kottelat, L.; Kovacs, M.; Krammer, M.; Lenoir, P.; Mannelli, M.; Marchioro, A.; Marconi, S.; Mersi, S.; Martina, S.; Michelis, S.; Moll, M.; Onnela, A.; Orfanelli, S.; Pavis, S.; Peisert, A.; Pernot, J.-F.; Petagna, P.; Petrucciani, G.; Postema, H.; Rose, P.; Tropea, P.; Troska, J.; Tsirou, A.; Vasey, F.; Vichoudis, P.; Verlaat, B.; Zwalinski, L.; Bachmair, F.; Becker, R.; di Calafiori, D.; Casal, B.; Berger, P.; Djambazov, L.; Donega, M.; Grab, C.; Hits, D.; Hoss, J.; Kasieczka, G.; Lustermann, W.; Mangano, B.; Marionneau, M.; Martinez Ruiz del Arbol, P.; Masciovecchio, M.; Meinhard, M.; Perozzi, L.; Roeser, U.; Starodumov, A.; Tavolaro, V.; Wallny, R.; Zhu, D.; Amsler, C.; Bösiger, K.; Caminada, L.; Canelli, F.; Chiochia, V.; de Cosa, A.; Galloni, C.; Hreus, T.; Kilminster, B.; Lange, C.; Maier, R.; Ngadiuba, J.; Pinna, D.; Robmann, P.; Taroni, S.; Yang, Y.; Bertl, W.; Deiters, K.; Erdmann, W.; Horisberger, R.; Kaestli, H.-C.; Kotlinski, D.; Langenegger, U.; Meier, B.; Rohe, T.; Streuli, S.; Cussans, D.; Flacher, H.; Goldstein, J.; Grimes, M.; Jacob, J.; Seif El Nasr-Storey, S.; Cole, J.; Hoad, C.; Hobson, P.; Morton, A.; Reid, I. D.; Auzinger, G.; Bainbridge, R.; Dauncey, P.; Hall, G.; James, T.; Magnan, A.-M.; Pesaresi, M.; Raymond, D. M.; Uchida, K.; Garabedian, A.; Heintz, U.; Narain, M.; Nelson, J.; Sagir, S.; Speer, T.; Swanson, J.; Tersegno, D.; Watson-Daniels, J.; Chertok, M.; Conway, J.; Conway, R.; Flores, C.; Lander, R.; Pellett, D.; Ricci-Tam, F.; Squires, M.; Thomson, J.; Yohay, R.; Burt, K.; Ellison, J.; Hanson, G.; Olmedo, M.; Si, W.; Yates, B. R.; Gerosa, R.; Sharma, V.; Vartak, A.; Yagil, A.; Zevi Della Porta, G.; Dutta, V.; Gouskos, L.; Incandela, J.; Kyre, S.; Mullin, S.; Patterson, A.; Qu, H.; White, D.; Dominguez, A.; Bartek, R.; Cumalat, J. P.; Ford, W. T.; Jensen, F.; Johnson, A.; Krohn, M.; Leontsinis, S.; Mulholland, T.; Stenson, K.; Wagner, S. R.; Apresyan, A.; Bolla, G.; Burkett, K.; Butler, J. 
N.; Canepa, A.; Cheung, H. W. K.; Chramowicz, J.; Christian, D.; Cooper, W. E.; Deptuch, G.; Derylo, G.; Gingu, C.; Grünendahl, S.; Hasegawa, S.; Hoff, J.; Howell, J.; Hrycyk, M.; Jindariani, S.; Johnson, M.; Kahlid, F.; Lei, C. M.; Lipton, R.; Lopes De Sá, R.; Liu, T.; Los, S.; Matulik, M.; Merkel, P.; Nahn, S.; Prosser, A.; Rivera, R.; Schneider, B.; Sellberg, G.; Shenai, A.; Spiegel, L.; Tran, N.; Uplegger, L.; Voirin, E.; Berry, D. R.; Chen, X.; Ennesser, L.; Evdokimov, A.; Evdokimov, O.; Gerber, C. E.; Hofman, D. J.; Makauda, S.; Mills, C.; Sandoval Gonzalez, I. D.; Alimena, J.; Antonelli, L. J.; Francis, B.; Hart, A.; Hill, C. S.; Parashar, N.; Stupak, J.; Bortoletto, D.; Bubna, M.; Hinton, N.; Jones, M.; Miller, D. H.; Shi, X.; Tan, P.; Baringer, P.; Bean, A.; Khalil, S.; Kropivnitskaya, A.; Majumder, D.; Wilson, G.; Ivanov, A.; Mendis, R.; Mitchell, T.; Skhirtladze, N.; Taylor, R.; Anderson, I.; Fehling, D.; Gritsan, A.; Maksimovic, P.; Martin, C.; Nash, K.; Osherson, M.; Swartz, M.; Xiao, M.; Bloom, K.; Claes, D. R.; Fangmeier, C.; Gonzalez Suarez, R.; Monroy, J.; Siado, J.; Hahn, K.; Sevova, S.; Sung, K.; Trovato, M.; Bartz, E.; Gershtein, Y.; Halkiadakis, E.; Kyriacou, S.; Lath, A.; Nash, K.; Osherson, M.; Schnetzer, S.; Stone, R.; Walker, M.; Malik, S.; Norberg, S.; Ramirez Vargas, J. E.; Alyari, M.; Dolen, J.; Godshalk, A.; Harrington, C.; Iashvili, I.; Kharchilava, A.; Nguyen, D.; Parker, A.; Rappoccio, S.; Roozbahani, B.; Alexander, J.; Chaves, J.; Chu, J.; Dittmer, S.; McDermott, K.; Mirman, N.; Rinkevicius, A.; Ryd, A.; Salvati, E.; Skinnari, L.; Soffi, L.; Tao, Z.; Thom, J.; Tucker, J.; Zientek, M.; Akgün, B.; Ecklund, K. M.; Kilpatrick, M.; Nussbaum, T.; Zabel, J.; Betchart, B.; Covarelli, R.; Demina, R.; Hindrichs, O.; Petrillo, G.; Eusebi, R.; Osipenkov, I.; Perloff, A.; Ulmer, K. A.
2017-06-01
The upgrade of the LHC to the High-Luminosity LHC (HL-LHC) is expected to increase the LHC design luminosity by an order of magnitude. This will require silicon tracking detectors with a significantly higher radiation hardness. The CMS Tracker Collaboration has conducted an irradiation and measurement campaign to identify suitable silicon sensor materials and strip designs for the future outer tracker at the CMS experiment. Based on these results, the collaboration has chosen to use n-in-p type silicon sensors and focus further investigations on the optimization of that sensor type. This paper describes the main measurement results and conclusions that motivated this decision.
NASA Astrophysics Data System (ADS)
Delle Fratte, C.; Kennedy, J. A.; Kluth, S.; Mazzaferro, L.
2015-12-01
In a grid computing infrastructure, tasks such as continuous upgrades, service installations and software deployments are part of an administrator's daily work. In such an environment, tools to help with the management, provisioning and monitoring of the deployed systems and services have become crucial. As experiments such as those at the LHC increase in scale, the computing infrastructure also becomes larger and more complex. Moreover, today's admins increasingly work within teams that share responsibilities and tasks. Such a scaled-up situation requires tools that not only simplify the workload on administrators but also enable them to work seamlessly in teams. In this paper we present our experience of managing the Max Planck Institute Tier2 using Puppet and Gitolite in a cooperative way to help system administrators in their daily work. In addition to describing the Puppet-Gitolite system, best practices and customizations are also shown.
Successive approximation algorithm for beam-position-monitor-based LHC collimator alignment
NASA Astrophysics Data System (ADS)
Valentino, Gianluca; Nosych, Andriy A.; Bruce, Roderik; Gasior, Marek; Mirarchi, Daniele; Redaelli, Stefano; Salvachua, Belen; Wollmann, Daniel
2014-02-01
Collimators with embedded beam position monitor (BPM) button electrodes will be installed in the Large Hadron Collider (LHC) during the current long shutdown period. For the subsequent operation, BPMs will allow the collimator jaws to be kept centered around the beam orbit. In this manner, a better beam cleaning efficiency and machine protection can be provided at unprecedented higher beam energies and intensities. A collimator alignment algorithm is proposed to center the jaws automatically around the beam. The algorithm is based on successive approximation and takes into account a correction of the nonlinear BPM sensitivity to beam displacement and an asymmetry of the electronic channels processing the BPM electrode signals. A software implementation was tested with a prototype collimator in the Super Proton Synchrotron. This paper presents results of the tests along with some considerations for eventual operation in the LHC.
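The successive-approximation centring described in this abstract can be sketched as follows. The cubic BPM response model, its strength `k`, and the iteration counts below are illustrative assumptions for the sketch, not the actual LHC collimator electronics or the published correction.

```python
# Sketch of successive-approximation jaw centring; the cubic BPM response,
# its strength k and the iteration counts are illustrative assumptions.
def bpm_reading(beam_offset, half_gap, k=0.3):
    """Toy nonlinear BPM response for a beam displaced from the jaw centre."""
    x = beam_offset / half_gap
    return half_gap * (x + k * x**3)

def correct_reading(reading, half_gap, k=0.3, iterations=5):
    """Invert the nonlinearity by successive approximation (fixed point)."""
    x = reading / half_gap
    for _ in range(iterations):
        x = reading / half_gap - k * x**3
    return half_gap * x

def center_jaws(true_beam, jaw_left, jaw_right, steps=8):
    """Iteratively shift both jaws until their centre sits on the beam."""
    for _ in range(steps):
        half_gap = (jaw_right - jaw_left) / 2.0
        centre = (jaw_left + jaw_right) / 2.0
        raw = bpm_reading(true_beam - centre, half_gap)
        offset = correct_reading(raw, half_gap)
        jaw_left += offset
        jaw_right += offset
    return (jaw_left + jaw_right) / 2.0
```

Each outer step moves both jaws by the corrected offset, so the gap stays constant while the centre converges on the beam; the inner fixed-point loop plays the role of the nonlinearity correction mentioned in the abstract.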
Jenni, Peter
2012-02-28
For the past year, experiments at the Large Hadron Collider (LHC) have been exploring physics at the high-energy frontier. Thanks to the superb turn-on of the LHC, a rich harvest of initial physics results has already been obtained by the two general-purpose experiments, A Toroidal LHC Apparatus (ATLAS) and the Compact Muon Solenoid (CMS), which are the subject of this report. The initial data have allowed a test, at the highest collision energies ever reached in a laboratory, of the Standard Model (SM) of elementary particles, as well as early searches Beyond the Standard Model (BSM). Significant results have already been obtained in the search for the Higgs boson, which would establish the postulated electroweak symmetry breaking mechanism in the SM, as well as for BSM physics such as Supersymmetry (SUSY), heavy new particles, quark compositeness and others. The important, and successful, SM physics measurements give confidence that the experiments are in good shape for their journey into the uncharted territory of new physics anticipated at the LHC.
NASA Astrophysics Data System (ADS)
Bonacorsi, D.; Gutsche, O.
The Worldwide LHC Computing Grid (WLCG) project decided in March 2009 to perform scale tests of parts of its overall Grid infrastructure before the start of LHC data taking. The "Scale Test for the Experiment Program" (STEP'09) was performed mainly in June 2009, with further selected tests in September and October 2009, and emphasized the simultaneous testing of the computing systems of all four LHC experiments. CMS tested its Tier-0 tape writing and processing capabilities. The Tier-1 tape systems were stress tested using the complete range of Tier-1 workflows: transfer from Tier-0 and custody of data on tape, processing and subsequent archival, redistribution of datasets amongst all Tier-1 sites, as well as burst transfers of datasets to Tier-2 sites. The Tier-2 analysis capacity was tested using bulk analysis job submissions to backfill normal user activity. In this talk, we report on the tests performed and present their post-mortem analysis.
Federated data storage and management infrastructure
NASA Astrophysics Data System (ADS)
Zarochentsev, A.; Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Hristov, P.
2016-10-01
The Large Hadron Collider (LHC), operating at the international CERN laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate a growth in storage needs by orders of magnitude; this will require new approaches to data storage organization and data handling. In our project we address the fundamental problem of designing an architecture to integrate distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications, and to provide access to data from heterogeneous computing facilities. We have prototyped a federated storage for the Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing, cloud computing and Grid platforms for the ALICE and ATLAS experiments. We present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within national academic facilities for high energy and nuclear physics, as well as for other data-intensive science applications such as bioinformatics.
Top Quark and Higgs Boson Physics at LHC-ATLAS
NASA Astrophysics Data System (ADS)
Tomoto, M.
2013-03-01
One of the main goals of the Large Hadron Collider (LHC) experiments at CERN in Switzerland is to solve the "origin of mass" problem by discovering the Higgs boson and understanding the interaction of the Higgs boson with the elementary particles. ATLAS, one of the LHC experiments, has taken about 5 fb-1 of physics-quality data and published several results with regard to the "origin of mass" since March 2010. This presentation focuses on the latest results on the heaviest elementary particle, the top quark, and on the Higgs boson searches from ATLAS.
LHC@Home: a BOINC-based volunteer computing infrastructure for physics studies at CERN
NASA Astrophysics Data System (ADS)
Barranco, Javier; Cai, Yunhai; Cameron, David; Crouch, Matthew; Maria, Riccardo De; Field, Laurence; Giovannozzi, Massimo; Hermes, Pascal; Høimyr, Nils; Kaltchev, Dobrin; Karastathis, Nikos; Luzzi, Cinzia; Maclean, Ewen; McIntosh, Eric; Mereghetti, Alessio; Molson, James; Nosochkov, Yuri; Pieloni, Tatiana; Reid, Ivan D.; Rivkin, Lenny; Segal, Ben; Sjobak, Kyrre; Skands, Peter; Tambasco, Claudia; Veken, Frederik Van der; Zacharov, Igor
2017-12-01
The LHC@Home BOINC project has provided computing capacity for numerical simulations to researchers at CERN since 2004, and since 2011 it has been expanded with a wider range of applications. The traditional CERN accelerator physics simulation code SixTrack enjoys continuing volunteer support, and thanks to virtualisation a number of applications from the LHC experiment collaborations and particle theory groups have joined the consolidated LHC@Home BOINC project. This paper addresses the challenges related to traditional and virtualized applications in the BOINC environment, and how volunteer computing has been integrated into the overall computing strategy of the laboratory through the consolidated LHC@Home service. Thanks to the computing power provided by volunteers joining LHC@Home, numerous accelerator beam physics studies have been carried out, yielding an improved understanding of charged particle dynamics in the CERN Large Hadron Collider (LHC) and its future upgrades. The main results are highlighted in this paper.
NASA Astrophysics Data System (ADS)
Johnston, William; Ernst, M.; Dart, E.; Tierney, B.
2014-04-01
Today's large-scale science projects involve world-wide collaborations that depend on moving massive amounts of data from an instrument to potentially thousands of computing and storage systems at hundreds of collaborating institutions to accomplish their science. This is true for ATLAS and CMS at the LHC, and it is true for the climate sciences, Belle II at the KEK collider, the genome sciences, the SKA radio telescope, and ITER, the international fusion energy experiment. DOE's Office of Science has been collecting science discipline and instrument requirements for network-based data management and analysis for more than a decade. As a result, certain key issues are seen across essentially all science disciplines that rely on the network for significant data transfer, even if the data quantities are modest compared to projects like the LHC experiments. These issues are what this talk addresses, to wit: 1. Optical signal transport advances enabling 100 Gb/s circuits that span the globe on optical fiber, with each fiber carrying 100 such channels; 2. Network router and switch requirements to support high-speed international data transfer; 3. Data transport (TCP is still the norm) requirements to support high-speed international data transfer (e.g. error-free transmission); 4. Network monitoring and testing techniques and infrastructure to maintain the required error-free operation of the many R&E networks involved in international collaborations; 5. Operating system evolution to support very high-speed network I/O; 6. New network architectures and services in the LAN (campus) and WAN networks to support data-intensive science; 7. Data movement and management techniques and software that can maximize the throughput on the network connections between distributed data handling systems; and 8. New approaches to widely distributed workflow systems that can support the data movement and analysis required by the science.
All of these areas must be addressed to enable large-scale, widely distributed data analysis systems, and the experience of the LHC can be applied to other scientific disciplines. In particular, specific analogies to the SKA will be cited in the talk.
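The importance of error-free transmission for high-speed transfers (issue 3 above) can be made concrete with the well-known Mathis model for single-stream TCP throughput; the path parameters below (MSS, RTT, loss rates) are illustrative assumptions for a long-haul link, not figures from the talk.

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_prob):
    """Mathis et al. model: an upper bound on single-stream TCP throughput,
    rate <= (MSS/RTT) * (C/sqrt(p)) with C ~ sqrt(3/2)."""
    c = math.sqrt(1.5)
    return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_prob))

# Illustrative trans-continental path: 1500-byte MSS, 100 ms RTT.
lossy = mathis_throughput_bps(1500, 0.100, 1e-4)  # one loss per 10^4 packets
clean = mathis_throughput_bps(1500, 0.100, 1e-8)  # near error-free path
```

Because throughput scales as 1/sqrt(p), reducing the loss probability by four orders of magnitude raises the achievable rate a hundredfold, from the Mb/s range to the Gb/s range on the same path, which is why R&E networks invest so heavily in monitoring for error-free operation.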
An experimental research program on chirality at the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markert, Christina
Heavy-ion collisions provide a unique opportunity to investigate the fundamental laws of physics of the strong force. The extreme conditions created by the collisions within a finite volume are akin to the properties of the deconfined partonic state which existed very shortly after the Big Bang and just prior to visible matter formation in the Universe. In this state massless quarks and gluons (partons) are "quasi-free" particles, the so-called Quark Gluon Plasma (QGP). By following the expansion and cooling of this state, we will map out the process of nucleonic matter formation, which occurs during the phase transition. The fundamental properties of this early partonic phase of matter are not well understood, but they are essential for confirming QCD (Quantum Chromodynamics) and the Standard Model. The specific topic, chiral symmetry restoration, has been called "the remaining puzzle of QCD." This puzzle can only be studied in the dense partonic medium generated in heavy-ion collisions. The research objectives of this proposal are the development and application of new analysis strategies to study chirality and the properties of the medium above the QGP phase transition using hadronic resonances detected with the ALICE experiment at the Large Hadron Collider (LHC) at the CERN research laboratory in Switzerland. This grant funded a new effort at the University of Texas at Austin (UT Austin) to investigate the Quark Gluon Plasma (QGP) at the highest possible energy of 2.76 TeV per nucleon pair at the Large Hadron Collider (LHC) at CERN via the ALICE experiment. The findings added to our knowledge of the dynamical evolution and the properties of the hot, dense matter produced in heavy-ion collisions, and provided a deeper understanding of multi-hadron interactions in these extreme nuclear matter systems. Our group also contributed to the hardware and software for the ALICE USA-funded Calorimeter Detector (EMCal).
The LHC research program and its connection to fundamental questions in high energy, nuclear and astrophysics has triggered the imagination of many young students worldwide. The studies also promoted the early involvement of students and young postdocs in a large, multi-national research effort abroad, which provided them with substantial experience and skills prior to choosing their career path. The undergraduate program, in conjunction with the Freshman Research Initiative at UT Austin, allowed the students to complete a research project within the field of Nuclear Physics.
Final Technical Report for ``Paths to Discovery at the LHC : Dark Matter and Track Triggering"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hahn, Kristian
Particle Dark Matter (DM) is perhaps the most compelling and experimentally well-motivated new physics scenario anticipated at the Large Hadron Collider (LHC). The DE-SC0014073 award allowed the PI to define and pursue a path to the discovery of Dark Matter in Run-2 of the LHC with the Compact Muon Solenoid (CMS) experiment. CMS can probe regions of Dark Matter phase space that direct and indirect detection experiments are unable to constrain. The PI's team initiated the exploration of these regions, searching specifically for the associated production of Dark Matter with top quarks. The effort focuses on the high-yield, hadronic decays of W bosons produced in top decay, which provide the highest sensitivity to DM produced through low-mass spin-0 mediators. The group developed identification algorithms that achieve high efficiency and purity in the selection of hadronic top decays, and analysis techniques that provide powerful signal discrimination in Run-2. The ultimate reach of new physics searches with CMS will be established at the high-luminosity LHC (HL-LHC). To fully realize the sensitivity the HL-LHC promises, CMS must minimize the impact of soft, inelastic ("pileup") interactions on the real-time "trigger" system the experiment uses for data refinement. Charged-particle trajectory information ("tracking") will be essential for pileup mitigation at the HL-LHC. The award allowed the PI's team to develop firmware-based data delivery and track fitting algorithms for an unprecedented, real-time tracking trigger to sustain the experiment's sensitivity to new physics in the next decade.
P-Type Silicon Strip Sensors for the new CMS Tracker at HL-LHC
Adam, W.; Bergauer, T.; Brondolin, E.; ...
2017-06-27
The upgrade of the LHC to the High-Luminosity LHC (HL-LHC) is expected to increase the LHC design luminosity by an order of magnitude. This will require silicon tracking detectors with a significantly higher radiation hardness. The CMS Tracker Collaboration has conducted an irradiation and measurement campaign to identify suitable silicon sensor materials and strip designs for the future outer tracker at the CMS experiment. Based on these results, the collaboration has chosen to use n-in-p type silicon sensors and focus further investigations on the optimization of that sensor type. This paper describes the main measurement results and conclusions that motivated this decision.
SUSY searches at the LHC with the ATLAS experiment
D' Onofrio, Monica
2017-12-18
First ATLAS searches for signals of Supersymmetry in proton-proton collisions at the LHC are presented. These searches are performed in various channels containing different lepton and jet multiplicities in the final state; the full data sample recorded in the 2010 LHC run, corresponding to an integrated luminosity of 35 pb-1, has been analysed. The limits on squarks and gluinos are the most stringent to date.
Lansberg, J. P.; Anselmino, M.; Arnaldi, R.; ...
2016-11-19
Here we discuss the potential of AFTER@LHC to measure single-transverse-spin asymmetries in open-charm and bottomonium production. With a HERMES-like hydrogen polarised target, such measurements over a year can reach precisions close to the per cent level. This is particularly remarkable since these analyses can probably not be carried out anywhere else.
The ALICE data quality monitoring system
NASA Astrophysics Data System (ADS)
von Haller, B.; Telesca, A.; Chapeland, S.; Carena, F.; Carena, W.; Chibante Barroso, V.; Costa, F.; Denes, E.; Divià, R.; Fuchs, U.; Simonetti, G.; Soós, C.; Vande Vyvre, P.; ALICE Collaboration
2011-12-01
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). The online Data Quality Monitoring (DQM) is a key element of the Data Acquisition software chain. It provides shifters with precise and complete information to quickly identify and overcome problems, and as a consequence ensures acquisition of high quality data. DQM typically involves the online gathering of data, its analysis by user-defined algorithms, and the visualization of the monitored results. This paper describes the final design of ALICE's DQM framework, called AMORE (Automatic MOnitoRing Environment), as well as its latest and upcoming features, such as the integration with the offline analysis and reconstruction framework, better use of multi-core processors through a parallelization effort, and its interface with the eLogBook. The concurrent collection and analysis of data in an online environment requires the framework to be highly efficient, robust and scalable. We describe what has been implemented to achieve these goals and the procedures we follow to ensure appropriate robustness and performance. We finally review the wide range of uses people make of this framework, from the basic monitoring of a single sub-detector to the most complex ones within the High Level Trigger farm or using the Prompt Reconstruction, and we describe the various ways of accessing the monitoring results. We conclude with our experience, before and after the LHC startup, of monitoring the data quality in a challenging environment.
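The gather-analyse-visualise cycle described above can be illustrated with a minimal plugin loop. The class and function names below are invented for illustration and do not reflect AMORE's actual interfaces.

```python
# Minimal sketch of an online DQM plugin loop; names are illustrative
# and do not correspond to AMORE's real API.
class QualityCheck:
    """A user-defined algorithm applied to each monitored data item."""
    def __init__(self, name, check_fn):
        self.name = name
        self.check_fn = check_fn

    def run(self, item):
        return bool(self.check_fn(item))

def monitor(stream, checks):
    """Gather items from the stream, apply every check, and accumulate a
    summary that a shifter's display could visualise."""
    summary = {c.name: {"pass": 0, "fail": 0} for c in checks}
    for item in stream:
        for c in checks:
            summary[c.name]["pass" if c.run(item) else "fail"] += 1
    return summary
```

A shifter-facing view would then render the accumulated pass/fail counters per check, flagging any check whose failure rate crosses a threshold.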
Introducing concurrency in the Gaudi data processing framework
NASA Astrophysics Data System (ADS)
Clemencic, Marco; Hegner, Benedikt; Mato, Pere; Piparo, Danilo
2014-06-01
In the past, the increasing demand for HEP processing resources could be met by ever-increasing clock frequencies and by distributing the work to more and more physical machines. Limitations on the power consumption of both CPUs and entire data centres are bringing an end to this era of easy scalability. To get the most CPU performance per watt, future hardware will be characterised by less and less memory per processor, by thinner, more specialised and more numerous cores per die, and by rather heterogeneous resources. To fully exploit the potential of the many cores, HEP data processing frameworks need to allow for parallel execution of reconstruction or simulation algorithms on several events simultaneously. We describe our experience in introducing concurrency-related capabilities into Gaudi, a generic data processing software framework currently used by several HEP experiments, including the ATLAS and LHCb experiments at the LHC. After a description of the concurrent framework and the most relevant design choices driving its development, we describe the behaviour of the framework in a more realistic environment, using a subset of the real LHCb reconstruction workflow, and present our strategy and the tools used to validate the physics outcome of the parallel framework against the results of the present, purely sequential LHCb software. We then summarise measurements of the code performance of the multithreaded application in terms of memory and CPU usage.
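The inter-event parallelism described above, running reconstruction on several events simultaneously, can be illustrated with a thread pool. Gaudi's actual scheduler is implemented in C++; the event structure and the `reconstruct` stand-in below are purely illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def reconstruct(event):
    # Stand-in for a chain of reconstruction algorithms; the "tracks"
    # value is a dummy computation, not real physics.
    return {"id": event["id"], "tracks": sum(event["hits"]) % 7}

# A batch of independent events: each can be processed concurrently.
events = [{"id": i, "hits": list(range(i, i + 5))} for i in range(8)]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(reconstruct, events))

# map() returns results in input order even though processing
# happened concurrently across the worker threads.
print([r["id"] for r in results])  # [0, 1, 2, 3, 4, 5, 6, 7]
```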
The deployment of a large scale object store at the RAL Tier-1
NASA Astrophysics Data System (ADS)
Dewhurst, A.; Johnson, I.; Adams, J.; Canning, B.; Vasilakakos, G.; Packer, A.
2017-10-01
Since 2014, the RAL Tier-1 has been working on deploying a Ceph-backed object store, with the aim of replacing Castor for disk-only storage. This new service must be scalable to meet the data demands of the LHC to 2020 and beyond. As well as offering the access protocols the LHC experiments currently use, it must also provide industry-standard access protocols. To keep costs down, the service must use erasure coding rather than replication to ensure data reliability. This paper presents details of the setup of the storage service, which has been named Echo, as well as the experience gained from running it. The RAL Tier-1 has also been developing XrootD and GridFTP plugins for Ceph. Both plugins are built on top of the same libraries that write striped data into Ceph, so data written via one protocol is accessible via the other. In the long term we hope the LHC experiments will migrate to industry-standard protocols, so these plugins will provide only the features needed by the LHC experiments. This paper also reports on the development and testing of these plugins.
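The cost argument for erasure coding over replication comes down to simple arithmetic. The sketch below compares 3-way replication with a hypothetical 8+3 erasure-coding profile; the profile is illustrative, not necessarily Echo's configuration.

```python
def raw_bytes(logical_bytes, k, m):
    """Raw bytes stored for `logical_bytes` of data under a k+m
    erasure-coding profile: k data chunks plus m coding chunks."""
    return logical_bytes * (k + m) / k

one_pb = 10**15
replicated = 3 * one_pb               # 3-way replication, tolerates 2 losses
erasure = raw_bytes(one_pb, 8, 3)     # 8+3 EC, tolerates 3 chunk losses

print(erasure / one_pb)      # 1.375, i.e. only 37.5% storage overhead
print(replicated / erasure)  # replication needs ~2.18x more raw disk
```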
The INFN-CNAF Tier-1 GEMSS Mass Storage System and database facility activity
NASA Astrophysics Data System (ADS)
Ricci, Pier Paolo; Cavalli, Alessandro; Dell'Agnello, Luca; Favaro, Matteo; Gregori, Daniele; Prosperini, Andrea; Pezzi, Michele; Sapunenko, Vladimir; Zizzi, Giovanni; Vagnoni, Vincenzo
2015-05-01
The consolidation of Mass Storage services at the INFN-CNAF Tier-1 Storage department over the last five years has resulted in a reliable, high-performance and moderately easy-to-manage facility that provides data access, archive, backup and database services to several different use cases. At present, the GEMSS Mass Storage System, developed and installed at CNAF and based on an integration of the IBM GPFS parallel filesystem with the Tivoli Storage Manager (TSM) tape management software, is one of the largest hierarchical storage sites in Europe. It provides storage resources for about 12% of LHC data, as well as for data of other non-LHC experiments. Files are accessed using standard SRM Grid services provided by the Storage Resource Manager (StoRM), also developed at CNAF. Data access is also provided through XRootD and HTTP/WebDAV endpoints. Besides these services, an Oracle database facility is in production, characterised by an effective level of parallelism, redundancy and availability. This facility runs databases for storing and accessing relational data objects and for providing database services to the currently active use cases. It takes advantage of several Oracle technologies, such as Real Application Clusters (RAC), Automatic Storage Management (ASM) and the Enterprise Manager centralised management tools, together with other technologies for performance optimisation, ease of management and downtime reduction. The aim of the present paper is to illustrate the state of the art of the INFN-CNAF Tier-1 Storage department infrastructure and software services, and to give a brief outlook on forthcoming projects. A description of the administrative, monitoring and problem-tracking tools that play a primary role in managing the whole storage framework is also given.
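A hierarchical storage manager such as GEMSS migrates data from disk to tape according to occupancy policies. The toy policy below is an illustrative assumption, not GEMSS internals: when disk usage crosses a high-water mark, the least recently accessed files are migrated until a low-water mark is reached.

```python
def migrate(files, capacity, high=0.9, low=0.8):
    """files: list of (name, size, last_access) on the disk pool.
    Returns the names of files chosen for migration to tape.
    Thresholds and LRU ordering are illustrative assumptions."""
    used = sum(size for _, size, _ in files)
    if used <= high * capacity:
        return []                      # below high-water mark: no action
    migrated = []
    for name, size, _ in sorted(files, key=lambda f: f[2]):  # oldest first
        migrated.append(name)
        used -= size
        if used <= low * capacity:     # stop at the low-water mark
            break
    return migrated

files = [("a", 40, 1), ("b", 30, 3), ("c", 25, 2)]
print(migrate(files, capacity=100))  # ['a']: usage drops from 95 to 55
```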
The CREAM-CE: First experiences, results and requirements of the four LHC experiments
NASA Astrophysics Data System (ADS)
Mendez Lorenzo, Patricia; Santinelli, Roberto; Sciaba, Andrea; Thackray, Nick; Shiers, Jamie; Renshall, Harry; Sgaravatto, Massimo; Padhi, Sanjay
2010-04-01
Within the gLite middleware, the LCG-CE currently used by the four LHC experiments is about to be deprecated. The new CREAM-CE service (Computing Resource Execution And Management) has been approved as its replacement. CREAM-CE is a lightweight service created to handle job management operations at the CE level. It can accept requests both via the gLite WMS service and via direct submission for transmission to the local batch system. This flexible duality gives the experiments considerable freedom to adapt the service to their own computing models, but at the same time it requires careful follow-up of the experiments' requirements and tests to ensure that their needs are fulfilled before real data taking. In this paper we present the current testing results of the four LHC experiments concerning this new service. The operations procedures, which have been elaborated together with the experiment support teams, are discussed. Finally, the experiments' requirements and expectations, for both the sites and the service itself, are presented in detail.
The ALICE Experiment at the CERN LHC: Status and First Results
NASA Astrophysics Data System (ADS)
Vercellin, Ermanno
The ALICE experiment is aimed at studying the properties of the hot and dense matter produced in heavy-ion collisions at LHC energies. In the first years of LHC operation the ALICE physics programme will be focused on Pb-Pb and p-p collisions. The latter, on top of their intrinsic interest, will provide the necessary baseline for the heavy-ion data. After its installation and a long commissioning with cosmic rays, in late fall 2009 ALICE participated very successfully in the first LHC run, collecting data in p-p collisions at a centre-of-mass energy of 900 GeV. After a short stop during winter, LHC operations resumed; the machine is now able to accelerate proton beams up to 3.5 TeV, and ALICE has undertaken a data-taking campaign at a centre-of-mass energy of 7 TeV. After an overview of the ALICE physics goals and a short description of the detector layout, the ALICE performance in p-p collisions is presented. The main physics results achieved so far are highlighted, as well as the main aspects of the ongoing data analysis.
Lincoln, Don
2018-01-16
The Large Hadron Collider or LHC is the world's biggest particle accelerator, but it can only get particles moving very quickly. To make measurements, scientists must employ particle detectors. There are four big detectors at the LHC: ALICE, ATLAS, CMS, and LHCb. In this video, Fermilab's Dr. Don Lincoln introduces us to these detectors and gives us an idea of each one's capabilities.
LHCb Build and Deployment Infrastructure for Run 2
NASA Astrophysics Data System (ADS)
Clemencic, M.; Couturier, B.
2015-12-01
After the successful Run 1 of the LHC, the LHCb Core Software team has taken advantage of the long shutdown to consolidate and improve its build and deployment infrastructure. Several of the related projects have already been presented, such as the build system using Jenkins and the LHCb Performance and Regression testing infrastructure. Some components are completely new, like the Software Configuration Database (using the graph database Neo4j) and the new packaged installation using RPM packages. Furthermore, all these parts are integrated to allow easier and quicker releases of the LHCb software stack, thereby reducing the risk of operational errors. Integration and regression tests are also now easier to implement, making it possible to further improve the software checks.
Development of a Next Generation Concurrent Framework for the ATLAS Experiment
NASA Astrophysics Data System (ADS)
Calafiura, P.; Lampl, W.; Leggett, C.; Malon, D.; Stewart, G.; Wynne, B.
2015-12-01
The ATLAS experiment has successfully used its Gaudi/Athena software framework for data taking and analysis during the first LHC run, with billions of events successfully processed. However, the design of Gaudi/Athena dates from the early 2000s, and the framework and the physics code have been written using a single-threaded, serial design. This programming model has increasing difficulty in exploiting the potential of current CPUs, which offer their best performance only when full advantage is taken of multiple cores and wide vector registers. Future CPU evolution will intensify this trend, with core counts increasing and memory per core falling. With current memory consumption for 64-bit ATLAS reconstruction in a high-luminosity environment approaching 4 GB, it will become impossible to fully occupy all cores in a machine without exhausting the available memory. However, since maximising performance per watt will be a key metric, a mechanism must be found to use all cores as efficiently as possible. In this paper we report on our progress with a practical demonstration of the use of multithreading in the ATLAS reconstruction software, using the GaudiHive framework. We have expanded support to Calorimeter, Inner Detector, and Tracking code, discussing the changes, both to the framework and to the tools and algorithms used, that were necessary to allow the serially designed ATLAS code to run. We report on the performance gains and on the general lessons learned about the code patterns that had been employed in the software, identifying the patterns that are particularly problematic for multi-threading. We also present our findings on implementing a hybrid multi-threaded/multi-process framework, to take advantage of the strengths of each type of concurrency while avoiding some of their corresponding limitations.
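The core idea behind a concurrent framework such as GaudiHive is data-flow scheduling: each algorithm declares its inputs and outputs, and it becomes runnable once all of its inputs are present in the event store. A minimal sketch, with invented algorithm and data-object names:

```python
# Each entry maps an algorithm to (inputs, outputs); names are invented.
algs = {
    "Digitize": ((), ("digits",)),
    "Cluster":  (("digits",), ("clusters",)),
    "Track":    (("clusters",), ("tracks",)),
    "CaloReco": (("digits",), ("calo",)),
    "Vertex":   (("tracks", "calo"), ("vertices",)),
}

def schedule(algs):
    """Group algorithms into rounds; every algorithm within a round
    has all its inputs available and could run in parallel."""
    done, store, rounds = set(), set(), []
    while len(done) < len(algs):
        ready = [a for a, (ins, _) in algs.items()
                 if a not in done and all(i in store for i in ins)]
        if not ready:
            raise ValueError("cyclic dependency")
        rounds.append(sorted(ready))
        for a in ready:
            done.add(a)
            store.update(algs[a][1])
    return rounds

print(schedule(algs))
# [['Digitize'], ['CaloReco', 'Cluster'], ['Track'], ['Vertex']]
```

Note that 'CaloReco' and 'Cluster' land in the same round: they depend only on 'digits', so a data-flow scheduler can run them concurrently even within a single event.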
CVD diamond pixel detectors for LHC experiments
NASA Astrophysics Data System (ADS)
Wedenig, R.; Adam, W.; Bauer, C.; Berdermann, E.; Bergonzo, P.; Bogani, F.; Borchi, E.; Brambilla, A.; Bruzzi, M.; Colledani, C.; Conway, J.; Dabrowski, W.; Delpierre, P.; Deneuville, A.; Dulinski, W.; van Eijk, B.; Fallou, A.; Fizzotti, F.; Foulon, F.; Friedl, M.; Gan, K. K.; Gheeraert, E.; Grigoriev, E.; Hallewell, G.; Hall-Wilton, R.; Han, S.; Hartjes, F.; Hrubec, J.; Husson, D.; Kagan, H.; Kania, D.; Kaplon, J.; Karl, C.; Kass, R.; Knöpfle, K. T.; Krammer, M.; Logiudice, A.; Lu, R.; Manfredi, P. F.; Manfredotti, C.; Marshall, R. D.; Meier, D.; Mishina, M.; Oh, A.; Pan, L. S.; Palmieri, V. G.; Pernicka, M.; Peitz, A.; Pirollo, S.; Polesello, P.; Pretzl, K.; Procario, M.; Re, V.; Riester, J. L.; Roe, S.; Roff, D.; Rudge, A.; Runolfsson, O.; Russ, J.; Schnetzer, S.; Sciortino, S.; Speziali, V.; Stelzer, H.; Stone, R.; Suter, B.; Tapper, R. J.; Tesarek, R.; Trawick, M.; Trischuk, W.; Vittone, E.; Wagner, A.; Walsh, A. M.; Weilhammer, P.; White, C.; Zeuner, W.; Ziock, H.; Zoeller, M.; Blanquart, L.; Breugnion, P.; Charles, E.; Ciocio, A.; Clemens, J. C.; Dao, K.; Einsweiler, K.; Fasching, D.; Fischer, P.; Joshi, A.; Keil, M.; Klasen, V.; Kleinfelder, S.; Laugier, D.; Meuser, S.; Milgrome, O.; Mouthuy, T.; Richardson, J.; Sinervo, P.; Treis, J.; Wermes, N.; RD42 Collaboration
1999-08-01
This paper reviews the development of CVD diamond pixel detectors. The preparation of the diamond pixel sensors for bump-bonding to the pixel readout electronics for the LHC and the results from beam tests carried out at CERN are described.
Status of the Space Radiation Monte Carlo Simulation Based on FLUKA and ROOT
NASA Technical Reports Server (NTRS)
Andersen, Victor; Carminati, Federico; Empl, Anton; Ferrari, Alfredo; Pinsky, Lawrence; Sala, Paola; Wilson, Thomas L.
2002-01-01
The NASA-funded project reported on at the first IWSSRR in Arona, which is developing a Monte Carlo simulation program for the space radiation environment based on the FLUKA and ROOT codes, is well into its second year of development, and considerable progress has been made. The general tasks required to achieve the final goals include the addition of heavy-ion interactions to the FLUKA code and the provision of a ROOT-based interface to FLUKA. The most significant progress to date is the incorporation of the DPMJET event-generator code within FLUKA to handle heavy-ion interactions for incident projectile energies greater than 3 GeV/A. The ongoing effort intends to extend the treatment of these interactions down to 10 MeV, and at present two alternative approaches are being explored. The ROOT interface is being pursued in conjunction with the CERN LHC ALICE software team through an adaptation of their existing AliROOT software. As a check on the validity of the code, a simulation of the recent data taken by the ATIC experiment is underway.
Monitoring the CMS strip tracker readout system
NASA Astrophysics Data System (ADS)
Mersi, S.; Bainbridge, R.; Baulieu, G.; Bel, S.; Cole, J.; Cripps, N.; Delaere, C.; Drouhin, F.; Fulcher, J.; Giassi, A.; Gross, L.; Hahn, K.; Mirabito, L.; Nikolic, M.; Tkaczyk, S.; Wingham, M.
2008-07-01
The CMS Silicon Strip Tracker at the LHC comprises a sensitive area of approximately 200 m2 and 10 million readout channels. Its data acquisition system is based around a custom analogue front-end chip. Both the control and the readout of the front-end electronics are performed by off-detector VME boards in the counting room, which digitise the raw event data and perform zero-suppression and formatting. The data acquisition system uses the CMS online software framework to configure, control and monitor the hardware components and to steer the data acquisition. The first data analysis is performed online within the official CMS reconstruction framework, which provides many services, such as distributed analysis, access to geometry and conditions data, and a Data Quality Monitoring tool based on the online physics reconstruction. The data acquisition monitoring of the Strip Tracker uses both the data acquisition and the reconstruction software frameworks in order to provide real-time feedback to shifters on the operational state of the detector, to archive data for later analysis, and possibly to trigger automatic recovery actions in case of errors. Here we review the proposed architecture of the monitoring system, describe its software components, which are already in place, and the various monitoring streams available, and report our experience of operating and monitoring a large-scale system.
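Zero-suppression, as performed by the off-detector boards, keeps only those channels whose pedestal-subtracted signal exceeds a threshold, drastically reducing the data volume from millions of mostly quiet strips. A minimal sketch, with illustrative pedestal and threshold values:

```python
def zero_suppress(raw, pedestals, threshold=5):
    """Return {strip: signal} for strips whose pedestal-subtracted
    ADC value exceeds the threshold. Values here are illustrative."""
    out = {}
    for strip, adc in enumerate(raw):
        signal = adc - pedestals[strip]
        if signal > threshold:
            out[strip] = signal
    return out

raw = [102, 100, 130, 99, 101, 160]          # ADC counts per strip
pedestals = [100, 100, 100, 100, 100, 100]   # per-strip baselines

# Only strips 2 and 5 carry signal above threshold; the rest is noise.
print(zero_suppress(raw, pedestals))  # {2: 30, 5: 60}
```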
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khachatryan, Vardan
This paper describes the CMS trigger system and its performance during Run 1 of the LHC. The trigger system consists of two levels designed to select events of potential physics interest from a GHz (MHz) interaction rate of proton-proton (heavy ion) collisions. The first level of the trigger is implemented in hardware, and selects events containing detector signals consistent with an electron, photon, muon, tau lepton, jet, or missing transverse energy. A programmable menu of up to 128 object-based algorithms is used to select events for subsequent processing. The trigger thresholds are adjusted to the LHC instantaneous luminosity during data taking in order to restrict the output rate to 100 kHz, the upper limit imposed by the CMS readout electronics. The second level, implemented in software, further refines the purity of the output stream, selecting an average rate of 400 Hz for offline event storage. The objectives, strategy and performance of the trigger system during LHC Run 1 are described.
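The rate reduction achieved by the two trigger levels follows directly from the figures in the abstract: a roughly GHz proton-proton interaction rate is cut to 100 kHz by the Level-1 hardware and then to an average of 400 Hz by the software level.

```python
# Back-of-envelope rejection factors using the pp numbers quoted above.
interaction_rate = 1e9   # ~GHz pp interaction rate
l1_output = 1e5          # 100 kHz cap from the CMS readout electronics
hlt_output = 4e2         # 400 Hz average to offline storage

l1_rejection = interaction_rate / l1_output    # hardware level: 10,000x
hlt_rejection = l1_output / hlt_output         # software level: 250x
total = interaction_rate / hlt_output          # overall: 2,500,000x

print(l1_rejection, hlt_rejection, total)  # 10000.0 250.0 2500000.0
```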
NASA Astrophysics Data System (ADS)
Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Asilar, E.; Bergauer, T.; Brandstetter, J.; Brondolin, E.; Dragicevic, M.; Erö, J.; Flechl, M.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Hartl, C.; Hörmann, N.; Hrubec, J.; Jeitler, M.; Knünz, V.; König, A.; Krammer, M.; Krätschmer, I.; Liko, D.; Matsushita, T.; Mikulec, I.; Rabady, D.; Rahbaran, B.; Rohringer, H.; Schieck, J.; Schöfbeck, R.; Strauss, J.; Treberer-Treberspurg, W.; Waltenberger, W.; Wulz, C.-E.; Mossolov, V.; Shumeiko, N.; Suarez Gonzalez, J.; Alderweireldt, S.; Cornelis, T.; De Wolf, E. A.; Janssen, X.; Knutsson, A.; Lauwers, J.; Luyckx, S.; Van De Klundert, M.; Van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; Van Spilbeeck, A.; Abu Zeid, S.; Blekman, F.; D'Hondt, J.; Daci, N.; De Bruyn, I.; Deroover, K.; Heracleous, N.; Keaveney, J.; Lowette, S.; Moreels, L.; Olbrechts, A.; Python, Q.; Strom, D.; Tavernier, S.; Van Doninck, W.; Van Mulders, P.; Van Onsem, G. P.; Van Parijs, I.; Barria, P.; Brun, H.; Caillol, C.; Clerbaux, B.; De Lentdecker, G.; Fasanella, G.; Favart, L.; Grebenyuk, A.; Karapostoli, G.; Lenzi, T.; Léonard, A.; Maerschalk, T.; Marinov, A.; Perniè, L.; Randle-conde, A.; Reis, T.; Seva, T.; Vander Velde, C.; Vanlaer, P.; Yonamine, R.; Zenoni, F.; Zhang, F.; Beernaert, K.; Benucci, L.; Cimmino, A.; Crucy, S.; Dobur, D.; Fagot, A.; Garcia, G.; Gul, M.; Mccartin, J.; Ocampo Rios, A. A.; Poyraz, D.; Ryckbosch, D.; Salva, S.; Sigamani, M.; Strobbe, N.; Tytgat, M.; Van Driessche, W.; Yazgan, E.; Zaganidis, N.; Basegmez, S.; Beluffi, C.; Bondu, O.; Brochet, S.; Bruno, G.; Caudron, A.; Ceard, L.; Da Silveira, G. G.; Delaere, C.; Favart, D.; Forthomme, L.; Giammanco, A.; Hollar, J.; Jafari, A.; Jez, P.; Komm, M.; Lemaitre, V.; Mertens, A.; Musich, M.; Nuttens, C.; Perrini, L.; Pin, A.; Piotrzkowski, K.; Popov, A.; Quertenmont, L.; Selvaggi, M.; Vidal Marono, M.; Beliy, N.; Hammad, G. H.; Aldá Júnior, W. L.; Alves, F. L.; Alves, G. 
A.; Brito, L.; Correa Martins Junior, M.; Hamer, M.; Hensel, C.; Mora Herrera, C.; Moraes, A.; Pol, M. E.; Rebello Teles, P.; Belchior Batista Das Chagas, E.; Carvalho, W.; Chinellato, J.; Custódio, A.; Da Costa, E. M.; Damiao, D. De Jesus; De Oliveira Martins, C.; Fonseca De Souza, S.; Huertas Guativa, L. M.; Malbouisson, H.; Matos Figueiredo, D.; Mundim, L.; Nogima, H.; Prado Da Silva, W. L.; Santoro, A.; Sznajder, A.; Tonelli Manganote, E. J.; Vilela Pereira, A.; Ahuja, S.; Bernardes, C. A.; De Souza Santos, A.; Dogra, S.; Fernandez Perez Tomei, T. R.; Gregores, E. M.; Mercadante, P. G.; Moon, C. S.; Novaes, S. F.; Padula, Sandra S.; Romero Abad, D.; Ruiz Vargas, J. C.; Aleksandrov, A.; Hadjiiska, R.; Iaydjiev, P.; Rodozov, M.; Stoykova, S.; Sultanov, G.; Vutova, M.; Dimitrov, A.; Glushkov, I.; Litov, L.; Pavlov, B.; Petkov, P.; Ahmad, M.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Chen, M.; Cheng, T.; Du, R.; Jiang, C. H.; Plestina, R.; Romeo, F.; Shaheen, S. M.; Spiezia, A.; Tao, J.; Wang, C.; Wang, Z.; Zhang, H.; Asawatangtrakuldee, C.; Ban, Y.; Li, Q.; Liu, S.; Mao, Y.; Qian, S. J.; Wang, D.; Xu, Z.; Avila, C.; Cabrera, A.; Chaparro Sierra, L. F.; Florez, C.; Gomez, J. P.; Gomez Moreno, B.; Sanabria, J. C.; Godinovic, N.; Lelas, D.; Puljak, I.; Ribeiro Cipriano, P. M.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Kadija, K.; Luetic, J.; Micanovic, S.; Sudic, L.; Attikis, A.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. A.; Rykaczewski, H.; Bodlak, M.; Finger, M.; Finger, M., Jr.; Assran, Y.; El Sawy, M.; Elgammal, S.; Ellithi Kamel, A.; Mahmoud, M. 
A.; Calpas, B.; Kadastik, M.; Murumaa, M.; Raidal, M.; Tiko, A.; Veelken, C.; Eerola, P.; Pekkanen, J.; Voutilainen, M.; Härkönen, J.; Karimäki, V.; Kinnunen, R.; Lampén, T.; Lassila-Perini, K.; Lehti, S.; Lindén, T.; Luukka, P.; Mäenpää, T.; Peltola, T.; Tuominen, E.; Tuominiemi, J.; Tuovinen, E.; Wendland, L.; Talvitie, J.; Tuuva, T.; Besancon, M.; Couderc, F.; Dejardin, M.; Denegri, D.; Fabbro, B.; Faure, J. L.; Favaro, C.; Ferri, F.; Ganjour, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Locci, E.; Machet, M.; Malcles, J.; Rander, J.; Rosowsky, A.; Titov, M.; Zghiche, A.; Antropov, I.; Baffioni, S.; Beaudette, F.; Busson, P.; Cadamuro, L.; Chapon, E.; Charlot, C.; Dahms, T.; Davignon, O.; Filipovic, N.; Florent, A.; Granier de Cassagnac, R.; Lisniak, S.; Mastrolorenzo, L.; Miné, P.; Naranjo, I. N.; Nguyen, M.; Ochando, C.; Ortona, G.; Paganini, P.; Pigard, P.; Regnard, S.; Salerno, R.; Sauvan, J. B.; Sirois, Y.; Strebler, T.; Yilmaz, Y.; Zabi, A.; Agram, J.-L.; Andrea, J.; Aubin, A.; Bloch, D.; Brom, J.-M.; Buttignol, M.; Chabert, E. C.; Chanon, N.; Collard, C.; Conte, E.; Coubez, X.; Fontaine, J.-C.; Gelé, D.; Goerlach, U.; Goetzmann, C.; Le Bihan, A.-C.; Merlin, J. A.; Skovpen, K.; Van Hove, P.; Gadrat, S.; Beauceron, S.; Bernet, C.; Boudoul, G.; Bouvier, E.; Carrillo Montoya, C. A.; Chierici, R.; Contardo, D.; Courbon, B.; Depasse, P.; El Mamouni, H.; Fan, J.; Fay, J.; Gascon, S.; Gouzevitch, M.; Ille, B.; Lagarde, F.; Laktineh, I. B.; Lethuillier, M.; Mirabito, L.; Pequegnot, A. L.; Perries, S.; Ruiz Alvarez, J. D.; Sabes, D.; Sgandurra, L.; Sordini, V.; Vander Donckt, M.; Verdier, P.; Viret, S.; Toriashvili, T.; Tsamalaidze, Z.; Autermann, C.; Beranek, S.; Edelhoff, M.; Feld, L.; Heister, A.; Kiesel, M. K.; Klein, K.; Lipinski, M.; Ostapchuk, A.; Preuten, M.; Raupach, F.; Schael, S.; Schulte, J. 
F.; Verlage, T.; Weber, H.; Wittmer, B.; Zhukov, V.; Ata, M.; Brodski, M.; Dietz-Laursonn, E.; Duchardt, D.; Endres, M.; Erdmann, M.; Erdweg, S.; Esch, T.; Fischer, R.; Güth, A.; Hebbeker, T.; Heidemann, C.; Hoepfner, K.; Klingebiel, D.; Knutzen, S.; Kreuzer, P.; Merschmeyer, M.; Meyer, A.; Millet, P.; Olschewski, M.; Padeken, K.; Papacz, P.; Pook, T.; Radziej, M.; Reithler, H.; Rieger, M.; Scheuch, F.; Sonnenschein, L.; Teyssier, D.; Thüer, S.; Cherepanov, V.; Erdogan, Y.; Flügge, G.; Geenen, H.; Geisler, M.; Hoehle, F.; Kargoll, B.; Kress, T.; Kuessel, Y.; Künsken, A.; Lingemann, J.; Nehrkorn, A.; Nowack, A.; Nugent, I. M.; Pistone, C.; Pooth, O.; Stahl, A.; Aldaya Martin, M.; Asin, I.; Bartosik, N.; Behnke, O.; Behrens, U.; Bell, A. J.; Borras, K.; Burgmeier, A.; Campbell, A.; Choudhury, S.; Costanza, F.; Diez Pardos, C.; Dolinska, G.; Dooling, S.; Dorland, T.; Eckerlin, G.; Eckstein, D.; Eichhorn, T.; Flucke, G.; Gallo, E.; Garay Garcia, J.; Geiser, A.; Gizhko, A.; Gunnellini, P.; Hauk, J.; Hempel, M.; Jung, H.; Kalogeropoulos, A.; Karacheban, O.; Kasemann, M.; Katsas, P.; Kieseler, J.; Kleinwort, C.; Korol, I.; Lange, W.; Leonard, J.; Lipka, K.; Lobanov, A.; Lohmann, W.; Mankel, R.; Marfin, I.; Melzer-Pellmann, I.-A.; Meyer, A. B.; Mittag, G.; Mnich, J.; Mussgiller, A.; Naumann-Emme, S.; Nayak, A.; Ntomari, E.; Perrey, H.; Pitzl, D.; Placakyte, R.; Raspereza, A.; Roland, B.; Sahin, M. Ö.; Saxena, P.; Schoerner-Sadenius, T.; Schröder, M.; Seitz, C.; Spannagel, S.; Trippkewitz, K. D.; Walsh, R.; Wissing, C.; Blobel, V.; Centis Vignali, M.; Draeger, A. R.; Erfle, J.; Garutti, E.; Goebel, K.; Gonzalez, D.; Görner, M.; Haller, J.; Hoffmann, M.; Höing, R. 
S.; Junkes, A.; Klanner, R.; Kogler, R.; Kovalchuk, N.; Lapsien, T.; Lenz, T.; Marchesini, I.; Marconi, D.; Meyer, M.; Nowatschin, D.; Ott, J.; Pantaleo, F.; Peiffer, T.; Perieanu, A.; Pietsch, N.; Poehlsen, J.; Rathjens, D.; Sander, C.; Scharf, C.; Schettler, H.; Schleper, P.; Schlieckau, E.; Schmidt, A.; Schwandt, J.; Sola, V.; Stadie, H.; Steinbrück, G.; Tholen, H.; Troendle, D.; Usai, E.; Vanelderen, L.; Vanhoefer, A.; Vormwald, B.; Akbiyik, M.; Barth, C.; Baus, C.; Berger, J.; Böser, C.; Butz, E.; Chwalek, T.; Colombo, F.; De Boer, W.; Descroix, A.; Dierlamm, A.; Fink, S.; Frensch, F.; Friese, R.; Giffels, M.; Gilbert, A.; Haitz, D.; Hartmann, F.; Heindl, S. M.; Husemann, U.; Katkov, I.; Kornmayer, A.; Lobelle Pardo, P.; Maier, B.; Mildner, H.; Mozer, M. U.; Müller, T.; Müller, Th.; Plagge, M.; Quast, G.; Rabbertz, K.; Röcker, S.; Roscher, F.; Sieber, G.; Simonis, H. J.; Stober, F. M.; Ulrich, R.; Wagner-Kuhr, J.; Wayand, S.; Weber, M.; Weiler, T.; Wöhrmann, C.; Wolf, R.; Anagnostou, G.; Daskalakis, G.; Geralis, T.; Giakoumopoulou, V. A.; Kyriakis, A.; Loukas, D.; Psallidas, A.; Topsis-Giotis, I.; Agapitos, A.; Kesisoglou, S.; Panagiotou, A.; Saoulidou, N.; Tziaferi, E.; Evangelou, I.; Flouris, G.; Foudas, C.; Kokkas, P.; Loukas, N.; Manthos, N.; Papadopoulos, I.; Paradas, E.; Strologas, J.; Bencze, G.; Hajdu, C.; Hazi, A.; Hidas, P.; Horvath, D.; Sikler, F.; Veszpremi, V.; Vesztergombi, G.; Zsigmond, A. J.; Beni, N.; Czellar, S.; Karancsi, J.; Molnar, J.; Szillasi, Z.; Bartók, M.; Makovec, A.; Raics, P.; Trocsanyi, Z. L.; Ujvari, B.; Mal, P.; Mandal, K.; Sahoo, D. K.; Sahoo, N.; Swain, S. K.; Bansal, S.; Beri, S. B.; Bhatnagar, V.; Chawla, R.; Gupta, R.; Bhawandeep, U.; Kalsi, A. K.; Kaur, A.; Kaur, M.; Kumar, R.; Mehta, A.; Mittal, M.; Singh, J. B.; Walia, G.; Kumar, Ashok; Bhardwaj, A.; Choudhary, B. C.; Garg, R. 
B.; Kumar, A.; Malhotra, S.; Naimuddin, M.; Nishu, N.; Ranjan, K.; Sharma, R.; Sharma, V.; Bhattacharya, S.; Chatterjee, K.; Dey, S.; Dutta, S.; Jain, Sa.; Majumdar, N.; Modak, A.; Mondal, K.; Mukherjee, S.; Mukhopadhyay, S.; Roy, A.; Roy, D.; Chowdhury, S. Roy; Sarkar, S.; Sharan, M.; Abdulsalam, A.; Chudasama, R.; Dutta, D.; Jha, V.; Kumar, V.; Mohanty, A. K.; Pant, L. M.; Shukla, P.; Topkar, A.; Aziz, T.; Banerjee, S.; Bhowmik, S.; Chatterjee, R. M.; Dewanjee, R. K.; Dugad, S.; Ganguly, S.; Ghosh, S.; Guchait, M.; Gurtu, A.; Kole, G.; Kumar, S.; Mahakud, B.; Maity, M.; Majumder, G.; Mazumdar, K.; Mitra, S.; Mohanty, G. B.; Parida, B.; Sarkar, T.; Sur, N.; Sutar, B.; Wickramage, N.; Chauhan, S.; Dube, S.; Kothekar, K.; Sharma, S.; Bakhshiansohi, H.; Behnamian, H.; Etesami, S. M.; Fahim, A.; Goldouzian, R.; Khakzad, M.; Najafabadi, M. Mohammadi; Naseri, M.; Paktinat Mehdiabadi, S.; Rezaei Hosseinabadi, F.; Safarzadeh, B.; Zeinali, M.; Felcini, M.; Grunewald, M.; Abbrescia, M.; Calabria, C.; Caputo, C.; Colaleo, A.; Creanza, D.; Cristella, L.; De Filippis, N.; De Palma, M.; Fiore, L.; Iaselli, G.; Maggi, G.; Maggi, M.; Miniello, G.; My, S.; Nuzzo, S.; Pompili, A.; Pugliese, G.; Radogna, R.; Ranieri, A.; Selvaggi, G.; Silvestris, L.; Venditti, R.; Verwilligen, P.; Abbiendi, G.; Battilana, C.; Benvenuti, A. C.; Bonacorsi, D.; Braibant-Giacomelli, S.; Brigliadori, L.; Campanini, R.; Capiluppi, P.; Castro, A.; Cavallo, F. R.; Chhibra, S. S.; Codispoti, G.; Cuffiani, M.; Dallavalle, G. M.; Fabbri, F.; Fanfani, A.; Fasanella, D.; Giacomelli, P.; Grandi, C.; Guiducci, L.; Marcellini, S.; Masetti, G.; Montanari, A.; Navarria, F. L.; Perrotta, A.; Rossi, A. M.; Rovelli, T.; Siroli, G. 
P.; Tosi, N.; Travaglini, R.; Cappello, G.; Chiorboli, M.; Costa, S.; Di Mattia, A.; Giordano, F.; Potenza, R.; Tricomi, A.; Tuve, C.; Barbagli, G.; Ciulli, V.; Civinini, C.; D'Alessandro, R.; Focardi, E.; Gonzi, S.; Gori, V.; Lenzi, P.; Meschini, M.; Paoletti, S.; Sguazzoni, G.; Tropiano, A.; Viliani, L.; Benussi, L.; Bianco, S.; Fabbri, F.; Piccolo, D.; Primavera, F.; Calvelli, V.; Ferro, F.; Lo Vetere, M.; Monge, M. R.; Robutti, E.; Tosi, S.; Brianza, L.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Gerosa, R.; Ghezzi, A.; Govoni, P.; Malvezzi, S.; Manzoni, R. A.; Marzocchi, B.; Menasce, D.; Moroni, L.; Paganoni, M.; Pedrini, D.; Ragazzi, S.; Redaelli, N.; Tabarelli de Fatis, T.; Buontempo, S.; Cavallo, N.; Di Guida, S.; Esposito, M.; Fabozzi, F.; Iorio, A. O. M.; Lanza, G.; Lista, L.; Meola, S.; Merola, M.; Paolucci, P.; Sciacca, C.; Thyssen, F.; Bacchetta, N.; Bellato, M.; Benato, L.; Bisello, D.; Boletti, A.; Carlin, R.; Checchia, P.; Dall'Osso, M.; Dosselli, U.; Gasparini, F.; Gasparini, U.; Gozzelino, A.; Lacaprara, S.; Margoni, M.; Meneguzzo, A. T.; Montecassiano, F.; Passaseo, M.; Pazzini, J.; Pegoraro, M.; Pozzobon, N.; Simonetto, F.; Torassa, E.; Tosi, M.; Vanini, S.; Ventura, S.; Zanetti, M.; Zotto, P.; Zucchetta, A.; Zumerle, G.; Braghieri, A.; Magnani, A.; Montagna, P.; Ratti, S. P.; Re, V.; Riccardi, C.; Salvini, P.; Vai, I.; Vitulo, P.; Alunni Solestizi, L.; Biasini, M.; Bilei, G. M.; Ciangottini, D.; Fanò, L.; Lariccia, P.; Mantovani, G.; Menichelli, M.; Saha, A.; Santocchia, A.; Androsov, K.; Azzurri, P.; Bagliesi, G.; Bernardini, J.; Boccali, T.; Castaldi, R.; Ciocci, M. A.; Dell'Orso, R.; Donato, S.; Fedi, G.; Foà, L.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Lomtadze, T.; Martini, L.; Messineo, A.; Palla, F.; Rizzi, A.; Savoy-Navarro, A.; Serban, A. T.; Spagnolo, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. 
G.; Barone, L.; Cavallari, F.; D'imperio, G.; Del Re, D.; Diemoz, M.; Gelli, S.; Jorda, C.; Longo, E.; Margaroli, F.; Meridiani, P.; Organtini, G.; Paramatti, R.; Preiato, F.; Rahatlou, S.; Rovelli, C.; Santanastasio, F.; Traczyk, P.; Amapane, N.; Arcidiacono, R.; Argiro, S.; Arneodo, M.; Bellan, R.; Biino, C.; Cartiglia, N.; Costa, M.; Covarelli, R.; Degano, A.; Demaria, N.; Finco, L.; Kiani, B.; Mariotti, C.; Maselli, S.; Migliore, E.; Monaco, V.; Monteil, E.; Obertino, M. M.; Pacher, L.; Pastrone, N.; Pelliccioni, M.; Pinna Angioni, G. L.; Ravera, F.; Romero, A.; Ruspa, M.; Sacchi, R.; Solano, A.; Staiano, A.; Tamponi, U.; Belforte, S.; Candelise, V.; Casarsa, M.; Cossutti, F.; Della Ricca, G.; Gobbo, B.; La Licata, C.; Marone, M.; Schizzi, A.; Zanetti, A.; Kropivnitskaya, A.; Nam, S. K.; Kim, D. H.; Kim, G. N.; Kim, M. S.; Kong, D. J.; Lee, S.; Oh, Y. D.; Sakharov, A.; Son, D. C.; Brochero Cifuentes, J. A.; Kim, H.; Kim, T. J.; Song, S.; Choi, S.; Go, Y.; Gyun, D.; Hong, B.; Jo, M.; Kim, H.; Kim, Y.; Lee, B.; Lee, K.; Lee, K. S.; Lee, S.; Park, S. K.; Roh, Y.; Yoo, H. D.; Choi, M.; Kim, H.; Kim, J. H.; Lee, J. S. H.; Park, I. C.; Ryu, G.; Ryu, M. S.; Choi, Y.; Goh, J.; Kim, D.; Kwon, E.; Lee, J.; Yu, I.; Dudenas, V.; Juodagalvis, A.; Vaitkus, J.; Ahmed, I.; Ibrahim, Z. A.; Komaragiri, J. R.; Ali, M. A. B. Md; Mohamad Idris, F.; Abdullah, W. A. T. Wan; Yusli, M. N.; Casimiro Linares, E.; Castilla-Valdez, H.; De La Cruz-Burelo, E.; Heredia-De La Cruz, I.; Hernandez-Almada, A.; Lopez-Fernandez, R.; Sanchez-Hernandez, A.; Carrillo Moreno, S.; Vazquez Valencia, F.; Pedraza, I.; Salazar Ibarguen, H. A.; Morelos Pineda, A.; Krofcheck, D.; Butler, P. H.; Ahmad, A.; Ahmad, M.; Hassan, Q.; Hoorani, H. R.; Khan, W. 
A.; Khurshid, T.; Shoaib, M.; Bialkowska, H.; Bluj, M.; Boimska, B.; Frueboes, T.; Górski, M.; Kazana, M.; Nawrocki, K.; Romanowska-Rybinska, K.; Szleper, M.; Zalewski, P.; Brona, G.; Bunkowski, K.; Byszuk, A.; Doroba, K.; Kalinowski, A.; Kierzkowski, K.; Konecki, M.; Krolikowski, J.; Misiura, M.; Oklinski, W.; Olszewski, M.; Pozniak, K.; Walczak, M.; Zabolotny, W.; Bargassa, P.; Silva, C. Beirão Da Cruz E.; Di Francesco, A.; Faccioli, P.; Ferreira Parracho, P. G.; Gallinaro, M.; Leonardo, N.; Lloret Iglesias, L.; Nguyen, F.; Rodrigues Antunes, J.; Seixas, J.; Toldaiev, O.; Vadruccio, D.; Varela, J.; Vischia, P.; Afanasiev, S.; Bunin, P.; Gavrilenko, M.; Golutvin, I.; Gorbunov, I.; Kamenev, A.; Karjavin, V.; Konoplyanikov, V.; Lanev, A.; Malakhov, A.; Matveev, V.; Moisenz, P.; Palichik, V.; Perelygin, V.; Shmatov, S.; Shulha, S.; Skatchkov, N.; Smirnov, V.; Zarubin, A.; Golovtsov, V.; Ivanov, Y.; Kim, V.; Kuznetsova, E.; Levchenko, P.; Murzin, V.; Oreshkin, V.; Smirnov, I.; Sulimov, V.; Uvarov, L.; Vavilov, S.; Vorobyev, A.; Andreev, Yu.; Dermenev, A.; Gninenko, S.; Golubev, N.; Karneyeu, A.; Kirsanov, M.; Krasnikov, N.; Pashenkov, A.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Pozdnyakov, I.; Safronov, G.; Spiridonov, A.; Vlasov, E.; Zhokin, A.; Bylinkin, A.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Leonidov, A.; Mesyats, G.; Rusakov, S. 
V.; Baskakov, A.; Belyaev, A.; Boos, E.; Dubinin, M.; Dudko, L.; Ershov, A.; Gribushin, A.; Kaminskiy, A.; Klyukhin, V.; Kodolova, O.; Lokhtin, I.; Myagkov, I.; Obraztsov, S.; Petrushanko, S.; Savrin, V.; Azhgirey, I.; Bayshev, I.; Bitioukov, S.; Kachanov, V.; Kalinin, A.; Konstantinov, D.; Krychkine, V.; Petrov, V.; Ryutin, R.; Sobol, A.; Tourtchanovitch, L.; Troshin, S.; Tyurin, N.; Uzunian, A.; Volkov, A.; Adzic, P.; Milosevic, J.; Rekovic, V.; Alcaraz Maestre, J.; Calvo, E.; Cerrada, M.; Chamizo Llatas, M.; Colino, N.; De La Cruz, B.; Delgado Peris, A.; Domínguez Vázquez, D.; Escalante Del Valle, A.; Fernandez Bedoya, C.; Fernández Ramos, J. P.; Flix, J.; Fouz, M. C.; Garcia-Abia, P.; Gonzalez Lopez, O.; Goy Lopez, S.; Hernandez, J. M.; Josa, M. I.; Navarro De Martino, E.; Pérez-Calero Yzquierdo, A.; Puerta Pelayo, J.; Quintario Olmeda, A.; Redondo, I.; Romero, L.; Santaolalla, J.; Soares, M. S.; Albajar, C.; de Trocóniz, J. F.; Missiroli, M.; Moran, D.; Cuevas, J.; Fernandez Menendez, J.; Folgueras, S.; Gonzalez Caballero, I.; Palencia Cortezon, E.; Vizan Garcia, J. M.; Cabrillo, I. J.; Calderon, A.; Castiñeiras De Saa, J. R.; De Castro Manzano, P.; Duarte Campderros, J.; Fernandez, M.; Garcia-Ferrero, J.; Gomez, G.; Lopez Virto, A.; Marco, J.; Marco, R.; Martinez Rivero, C.; Matorras, F.; Munoz Sanchez, F. J.; Piedra Gomez, J.; Rodrigo, T.; Rodríguez-Marrero, A. Y.; Ruiz-Jimeno, A.; Scodellaro, L.; Trevisani, N.; Vila, I.; Vilar Cortabitarte, R.; Abbaneo, D.; Auffray, E.; Auzinger, G.; Bachtis, M.; Baillon, P.; Ball, A. H.; Barney, D.; Benaglia, A.; Bendavid, J.; Benhabib, L.; Benitez, J. F.; Berruti, G. 
M.; Bloch, P.; Bocci, A.; Bonato, A.; Botta, C.; Breuker, H.; Camporesi, T.; Castello, R.; Cerminara, G.; D'Alfonso, M.; d'Enterria, D.; Dabrowski, A.; Daponte, V.; David, A.; De Gruttola, M.; De Guio, F.; De Roeck, A.; De Visscher, S.; Di Marco, E.; Dobson, M.; Dordevic, M.; Dorney, B.; du Pree, T.; Dünser, M.; Dupont, N.; Elliott-Peisert, A.; Franzoni, G.; Funk, W.; Gigi, D.; Gill, K.; Giordano, D.; Girone, M.; Glege, F.; Guida, R.; Gundacker, S.; Guthoff, M.; Hammer, J.; Harris, P.; Hegeman, J.; Innocente, V.; Janot, P.; Kirschenmann, H.; Kortelainen, M. J.; Kousouris, K.; Krajczar, K.; Lecoq, P.; Lourenço, C.; Lucchini, M. T.; Magini, N.; Malgeri, L.; Mannelli, M.; Martelli, A.; Masetti, L.; Meijers, F.; Mersi, S.; Meschi, E.; Moortgat, F.; Morovic, S.; Mulders, M.; Nemallapudi, M. V.; Neugebauer, H.; Orfanelli, S.; Orsini, L.; Pape, L.; Perez, E.; Peruzzi, M.; Petrilli, A.; Petrucciani, G.; Pfeiffer, A.; Piparo, D.; Racz, A.; Rolandi, G.; Rovere, M.; Ruan, M.; Sakulin, H.; Schäfer, C.; Schwick, C.; Seidel, M.; Sharma, A.; Silva, P.; Simon, M.; Sphicas, P.; Steggemann, J.; Stieger, B.; Stoye, M.; Takahashi, Y.; Treille, D.; Triossi, A.; Tsirou, A.; Veres, G. I.; Wardle, N.; Wöhri, H. K.; Zagozdzinska, A.; Zeuner, W. D.; Bertl, W.; Deiters, K.; Erdmann, W.; Horisberger, R.; Ingram, Q.; Kaestli, H. C.; Kotlinski, D.; Langenegger, U.; Renker, D.; Rohe, T.; Bachmair, F.; Bäni, L.; Bianchini, L.; Casal, B.; Dissertori, G.; Dittmar, M.; Donegà, M.; Eller, P.; Grab, C.; Heidegger, C.; Hits, D.; Hoss, J.; Kasieczka, G.; Lustermann, W.; Mangano, B.; Marionneau, M.; Martinez Ruiz del Arbol, P.; Masciovecchio, M.; Meister, D.; Micheli, F.; Musella, P.; Nessi-Tedaldi, F.; Pandolfi, F.; Pata, J.; Pauss, F.; Perrozzi, L.; Quittnat, M.; Rossini, M.; Starodumov, A.; Takahashi, M.; Tavolaro, V. R.; Theofilatos, K.; Wallny, R.; Aarrestad, T. K.; Amsler, C.; Caminada, L.; Canelli, M. 
F.; Chiochia, V.; De Cosa, A.; Galloni, C.; Hinzmann, A.; Hreus, T.; Kilminster, B.; Lange, C.; Ngadiuba, J.; Pinna, D.; Robmann, P.; Ronga, F. J.; Salerno, D.; Yang, Y.; Cardaci, M.; Chen, K. H.; Doan, T. H.; Jain, Sh.; Khurana, R.; Konyushikhin, M.; Kuo, C. M.; Lin, W.; Lu, Y. J.; Yu, S. S.; Kumar, Arun; Bartek, R.; Chang, P.; Chang, Y. H.; Chang, Y. W.; Chao, Y.; Chen, K. F.; Chen, P. H.; Dietz, C.; Fiori, F.; Grundler, U.; Hou, W.-S.; Hsiung, Y.; Liu, Y. F.; Lu, R.-S.; Miñano Moya, M.; Petrakou, E.; Tsai, J. f.; Tzeng, Y. M.; Asavapibhop, B.; Kovitanggoon, K.; Singh, G.; Srimanobhas, N.; Suwonjandee, N.; Adiguzel, A.; Bakirci, M. N.; Demiroglu, Z. S.; Dozen, C.; Eskut, E.; Girgis, S.; Gokbulut, G.; Guler, Y.; Gurpinar, E.; Hos, I.; Kangal, E. E.; Onengut, G.; Ozdemir, K.; Polatoz, A.; Sunar Cerci, D.; Tali, B.; Topakli, H.; Vergili, M.; Zorbilmez, C.; Akin, I. V.; Bilin, B.; Bilmis, S.; Isildak, B.; Karapinar, G.; Yalvac, M.; Zeyrek, M.; Gülmez, E.; Kaya, M.; Kaya, O.; Yetkin, E. A.; Yetkin, T.; Cakir, A.; Cankocak, K.; Sen, S.; Vardarlı, F. I.; Grynyov, B.; Levchuk, L.; Sorokin, P.; Aggleton, R.; Ball, F.; Beck, L.; Brooke, J. J.; Clement, E.; Cussans, D.; Flacher, H.; Goldstein, J.; Grimes, M.; Heath, G. P.; Heath, H. F.; Jacob, J.; Kreczko, L.; Lucas, C.; Meng, Z.; Newbold, D. M.; Paramesvaran, S.; Poll, A.; Sakuma, T.; Seif El Nasr-storey, S.; Senkin, S.; Smith, D.; Smith, V. J.; Bell, K. W.; Belyaev, A.; Brew, C.; Brown, R. M.; Calligaris, L.; Cieri, D.; Cockerill, D. J. A.; Coughlan, J. A.; Harder, K.; Harper, S.; Olaiya, E.; Petyt, D.; Shepherd-Themistocleous, C. H.; Thea, A.; Tomalin, I. R.; Williams, T.; Womersley, W. J.; Worm, S. 
D.; Baber, M.; Bainbridge, R.; Buchmuller, O.; Bundock, A.; Burton, D.; Casasso, S.; Citron, M.; Colling, D.; Corpe, L.; Cripps, N.; Dauncey, P.; Davies, G.; De Wit, A.; Della Negra, M.; Dunne, P.; Elwood, A.; Ferguson, W.; Fulcher, J.; Futyan, D.; Hall, G.; Iles, G.; Kenzie, M.; Lane, R.; Lucas, R.; Lyons, L.; Magnan, A.-M.; Malik, S.; Nash, J.; Nikitenko, A.; Pela, J.; Pesaresi, M.; Petridis, K.; Raymond, D. M.; Richards, A.; Rose, A.; Seez, C.; Tapper, A.; Uchida, K.; Vazquez Acosta, M.; Virdee, T.; Zenz, S. C.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Leggat, D.; Leslie, D.; Reid, I. D.; Symonds, P.; Teodorescu, L.; Turner, M.; Borzou, A.; Call, K.; Dittmann, J.; Hatakeyama, K.; Liu, H.; Pastika, N.; Charaf, O.; Cooper, S. I.; Henderson, C.; Rumerio, P.; Arcaro, D.; Avetisyan, A.; Bose, T.; Fantasia, C.; Gastler, D.; Lawson, P.; Rankin, D.; Richardson, C.; Rohlf, J.; St. John, J.; Sulak, L.; Zou, D.; Alimena, J.; Berry, E.; Bhattacharya, S.; Cutts, D.; Dhingra, N.; Ferapontov, A.; Garabedian, A.; Hakala, J.; Heintz, U.; Laird, E.; Landsberg, G.; Mao, Z.; Narain, M.; Piperov, S.; Sagir, S.; Syarif, R.; Breedon, R.; Breto, G.; Calderon De La Barca Sanchez, M.; Chauhan, S.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. T.; Erbacher, R.; Gardner, M.; Ko, W.; Lander, R.; Mulhearn, M.; Pellett, D.; Pilot, J.; Ricci-Tam, F.; Shalhout, S.; Smith, J.; Squires, M.; Stolp, D.; Tripathi, M.; Wilbur, S.; Yohay, R.; Cousins, R.; Everaerts, P.; Farrell, C.; Hauser, J.; Ignatenko, M.; Saltzberg, D.; Takasugi, E.; Valuev, V.; Weber, M.; Burt, K.; Clare, R.; Ellison, J.; Gary, J. W.; Hanson, G.; Heilman, J.; Ivova PANEVA, M.; Jandir, P.; Kennedy, E.; Lacroix, F.; Long, O. R.; Luthra, A.; Malberti, M.; Olmedo Negrete, M.; Shrinivas, A.; Wei, H.; Wimpenny, S.; Yates, B. R.; Branson, J. G.; Cerati, G. B.; Cittolin, S.; D'Agnolo, R. 
T.; Derdzinski, M.; Holzner, A.; Kelley, R.; Klein, D.; Letts, J.; Macneill, I.; Olivito, D.; Padhi, S.; Pieri, M.; Sani, M.; Sharma, V.; Simon, S.; Tadel, M.; Vartak, A.; Wasserbaech, S.; Welke, C.; Würthwein, F.; Yagil, A.; Zevi Della Porta, G.; Bradmiller-Feld, J.; Campagnari, C.; Dishaw, A.; Dutta, V.; Flowers, K.; Sevilla, M. Franco; Geffert, P.; George, C.; Golf, F.; Gouskos, L.; Gran, J.; Incandela, J.; Mccoll, N.; Mullin, S. D.; Richman, J.; Stuart, D.; Suarez, I.; West, C.; Yoo, J.; Anderson, D.; Apresyan, A.; Bornheim, A.; Bunn, J.; Chen, Y.; Duarte, J.; Mott, A.; Newman, H. B.; Pena, C.; Pierini, M.; Spiropulu, M.; Vlimant, J. R.; Xie, S.; Zhu, R. Y.; Andrews, M. B.; Azzolini, V.; Calamba, A.; Carlson, B.; Ferguson, T.; Paulini, M.; Russ, J.; Sun, M.; Vogel, H.; Vorobiev, I.; Cumalat, J. P.; Ford, W. T.; Gaz, A.; Jensen, F.; Johnson, A.; Krohn, M.; Mulholland, T.; Nauenberg, U.; Stenson, K.; Wagner, S. R.; Alexander, J.; Chatterjee, A.; Chaves, J.; Chu, J.; Dittmer, S.; Eggert, N.; Mirman, N.; Kaufman, G. Nicolas; Patterson, J. R.; Rinkevicius, A.; Ryd, A.; Skinnari, L.; Soffi, L.; Sun, W.; Tan, S. M.; Teo, W. D.; Thom, J.; Thompson, J.; Tucker, J.; Weng, Y.; Wittich, P.; Abdullin, S.; Albrow, M.; Anderson, J.; Apollinari, G.; Banerjee, S.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Bolla, G.; Burkett, K.; Butler, J. N.; Cheung, H. W. K.; Chlebana, F.; Cihangir, S.; Elvira, V. D.; Fisk, I.; Freeman, J.; Gottschalk, E.; Gray, L.; Green, D.; Grünendahl, S.; Gutsche, O.; Hanlon, J.; Hare, D.; Harris, R. M.; Hasegawa, S.; Hirschauer, J.; Hu, Z.; Jayatilaka, B.; Jindariani, S.; Johnson, M.; Joshi, U.; Jung, A. W.; Klima, B.; Kreis, B.; Kwan, S.; Lammel, S.; Linacre, J.; Lincoln, D.; Lipton, R.; Liu, T.; Lopes De Sá, R.; Lykken, J.; Maeshima, K.; Marraffino, J. M.; Martinez Outschoorn, V. 
I.; Maruyama, S.; Mason, D.; McBride, P.; Merkel, P.; Mishra, K.; Mrenna, S.; Nahn, S.; Newman-Holmes, C.; O'Dell, V.; Pedro, K.; Prokofyev, O.; Rakness, G.; Sexton-Kennedy, E.; Soha, A.; Spalding, W. J.; Spiegel, L.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vernieri, C.; Verzocchi, M.; Vidal, R.; Weber, H. A.; Whitbeck, A.; Yang, F.; Acosta, D.; Avery, P.; Bortignon, P.; Bourilkov, D.; Carnes, A.; Carver, M.; Curry, D.; Das, S.; Di Giovanni, G. P.; Field, R. D.; Furic, I. K.; Gleyzer, S. V.; Hugon, J.; Konigsberg, J.; Korytov, A.; Low, J. F.; Ma, P.; Matchev, K.; Mei, H.; Milenovic, P.; Mitselmakher, G.; Rank, D.; Rossin, R.; Shchutska, L.; Snowball, M.; Sperka, D.; Terentyev, N.; Thomas, L.; Wang, J.; Wang, S.; Yelton, J.; Hewamanage, S.; Linn, S.; Markowitz, P.; Martinez, G.; Rodriguez, J. L.; Ackert, A.; Adams, J. R.; Adams, T.; Askew, A.; Bochenek, J.; Diamond, B.; Haas, J.; Hagopian, S.; Hagopian, V.; Johnson, K. F.; Khatiwada, A.; Prosper, H.; Weinberg, M.; Baarmand, M. M.; Bhopatkar, V.; Colafranceschi, S.; Hohlmann, M.; Kalakhety, H.; Noonan, D.; Roy, T.; Yumiceva, F.; Adams, M. R.; Apanasevich, L.; Berry, D.; Betts, R. R.; Bucinskaite, I.; Cavanaugh, R.; Evdokimov, O.; Gauthier, L.; Gerber, C. E.; Hofman, D. J.; Kurt, P.; O'Brien, C.; Sandoval Gonzalez, I. D.; Silkworth, C.; Turner, P.; Varelas, N.; Wu, Z.; Zakaria, M.; Bilki, B.; Clarida, W.; Dilsiz, K.; Durgut, S.; Gandrajula, R. P.; Haytmyradov, M.; Khristenko, V.; Merlo, J.-P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Ogul, H.; Onel, Y.; Ozok, F.; Penzo, A.; Snyder, C.; Tiras, E.; Wetzel, J.; Yi, K.; Anderson, I.; Barnett, B. A.; Blumenfeld, B.; Eminizer, N.; Fehling, D.; Feng, L.; Gritsan, A. V.; Maksimovic, P.; Martin, C.; Osherson, M.; Roskes, J.; Sady, A.; Sarica, U.; Swartz, M.; Xiao, M.; Xin, Y.; You, C.; Baringer, P.; Bean, A.; Benelli, G.; Bruner, C.; Kenny, R. 
P., III; Majumder, D.; Malek, M.; Murray, M.; Sanders, S.; Stringer, R.; Wang, Q.; Ivanov, A.; Kaadze, K.; Khalil, S.; Makouski, M.; Maravin, Y.; Mohammadi, A.; Saini, L. K.; Skhirtladze, N.; Toda, S.; Lange, D.; Rebassoo, F.; Wright, D.; Anelli, C.; Baden, A.; Baron, O.; Belloni, A.; Calvert, B.; Eno, S. C.; Ferraioli, C.; Gomez, J. A.; Hadley, N. J.; Jabeen, S.; Kellogg, R. G.; Kolberg, T.; Kunkle, J.; Lu, Y.; Mignerey, A. C.; Shin, Y. H.; Skuja, A.; Tonjes, M. B.; Tonwar, S. C.; Apyan, A.; Barbieri, R.; Baty, A.; Bierwagen, K.; Brandt, S.; Busza, W.; Cali, I. A.; Demiragli, Z.; Di Matteo, L.; Gomez Ceballos, G.; Goncharov, M.; Gulhan, D.; Iiyama, Y.; Innocenti, G. M.; Klute, M.; Kovalskyi, D.; Lai, Y. S.; Lee, Y.-J.; Levin, A.; Luckey, P. D.; Marini, A. C.; Mcginn, C.; Mironov, C.; Narayanan, S.; Niu, X.; Paus, C.; Ralph, D.; Roland, C.; Roland, G.; Salfeld-Nebgen, J.; Stephans, G. S. F.; Sumorok, K.; Varma, M.; Velicanu, D.; Veverka, J.; Wang, J.; Wang, T. W.; Wyslouch, B.; Yang, M.; Zhukova, V.; Dahmes, B.; Evans, A.; Finkel, A.; Gude, A.; Hansen, P.; Kalafut, S.; Kao, S. C.; Klapoetke, K.; Kubota, Y.; Lesko, Z.; Mans, J.; Nourbakhsh, S.; Ruckstuhl, N.; Rusack, R.; Tambe, N.; Turkewitz, J.; Acosta, J. G.; Oliveros, S.; Avdeeva, E.; Bloom, K.; Bose, S.; Claes, D. R.; Dominguez, A.; Fangmeier, C.; Gonzalez Suarez, R.; Kamalieddin, R.; Keller, J.; Knowlton, D.; Kravchenko, I.; Meier, F.; Monroy, J.; Ratnikov, F.; Siado, J. E.; Snow, G. R.; Alyari, M.; Dolen, J.; George, J.; Godshalk, A.; Harrington, C.; Iashvili, I.; Kaisen, J.; Kharchilava, A.; Kumar, A.; Rappoccio, S.; Roozbahani, B.; Alverson, G.; Barberis, E.; Baumgartel, D.; Chasco, M.; Hortiangtham, A.; Massironi, A.; Morse, D. M.; Nash, D.; Orimoto, T.; Teixeira De Lima, R.; Trocino, D.; Wang, R.-J.; Wood, D.; Zhang, J.; Hahn, K. 
A.; Kubik, A.; Mucia, N.; Odell, N.; Pollack, B.; Pozdnyakov, A.; Schmitt, M.; Stoynev, S.; Sung, K.; Trovato, M.; Velasco, M.; Brinkerhoff, A.; Dev, N.; Hildreth, M.; Jessop, C.; Karmgard, D. J.; Kellams, N.; Lannon, K.; Lynch, S.; Marinelli, N.; Meng, F.; Mueller, C.; Musienko, Y.; Pearson, T.; Planer, M.; Reinsvold, A.; Ruchti, R.; Smith, G.; Taroni, S.; Valls, N.; Wayne, M.; Wolf, M.; Woodard, A.; Antonelli, L.; Brinson, J.; Bylsma, B.; Durkin, L. S.; Flowers, S.; Hart, A.; Hill, C.; Hughes, R.; Ji, W.; Kotov, K.; Ling, T. Y.; Liu, B.; Luo, W.; Puigh, D.; Rodenburg, M.; Winer, B. L.; Wulsin, H. W.; Driga, O.; Elmer, P.; Hardenbrook, J.; Hebda, P.; Koay, S. A.; Lujan, P.; Marlow, D.; Medvedeva, T.; Mooney, M.; Olsen, J.; Palmer, C.; Piroué, P.; Saka, H.; Stickland, D.; Tully, C.; Zuranski, A.; Malik, S.; Barnes, V. E.; Benedetti, D.; Bortoletto, D.; Gutay, L.; Jha, M. K.; Jones, M.; Jung, K.; Miller, D. H.; Neumeister, N.; Radburn-Smith, B. C.; Shi, X.; Shipsey, I.; Silvers, D.; Sun, J.; Svyatkovskiy, A.; Wang, F.; Xie, W.; Xu, L.; Parashar, N.; Stupak, J.; Adair, A.; Akgun, B.; Chen, Z.; Ecklund, K. M.; Geurts, F. J. M.; Guilbaud, M.; Li, W.; Michlin, B.; Northup, M.; Padley, B. P.; Redjimi, R.; Roberts, J.; Rorie, J.; Tu, Z.; Zabel, J.; Betchart, B.; Bodek, A.; de Barbaro, P.; Demina, R.; Eshaq, Y.; Ferbel, T.; Galanti, M.; Garcia-Bellido, A.; Han, J.; Harel, A.; Hindrichs, O.; Khukhunaishvili, A.; Petrillo, G.; Tan, P.; Verzetti, M.; Arora, S.; Barker, A.; Chou, J. 
P.; Contreras-Campana, C.; Contreras-Campana, E.; Duggan, D.; Ferencek, D.; Gershtein, Y.; Gray, R.; Halkiadakis, E.; Hidas, D.; Hughes, E.; Kaplan, S.; Kunnawalkam Elayavalli, R.; Lath, A.; Nash, K.; Panwalkar, S.; Park, M.; Salur, S.; Schnetzer, S.; Sheffield, D.; Somalwar, S.; Stone, R.; Thomas, S.; Thomassen, P.; Walker, M.; Foerster, M.; Riley, G.; Rose, K.; Spanier, S.; York, A.; Bouhali, O.; Castaneda Hernandez, A.; Dalchenko, M.; De Mattia, M.; Delgado, A.; Dildick, S.; Eusebi, R.; Gilmore, J.; Kamon, T.; Krutelyov, V.; Mueller, R.; Osipenkov, I.; Pakhotin, Y.; Patel, R.; Perloff, A.; Rose, A.; Safonov, A.; Tatarinov, A.; Ulmer, K. A.; Akchurin, N.; Cowden, C.; Damgov, J.; Dragoiu, C.; Dudero, P. R.; Faulkner, J.; Kunori, S.; Lamichhane, K.; Lee, S. W.; Libeiro, T.; Undleeb, S.; Volobouev, I.; Appelt, E.; Delannoy, A. G.; Greene, S.; Gurrola, A.; Janjam, R.; Johns, W.; Maguire, C.; Mao, Y.; Melo, A.; Ni, H.; Sheldon, P.; Snook, B.; Tuo, S.; Velkovska, J.; Xu, Q.; Arenton, M. W.; Cox, B.; Francis, B.; Goodell, J.; Hirosky, R.; Ledovskoy, A.; Li, H.; Lin, C.; Neu, C.; Sinthuprasith, T.; Sun, X.; Wang, Y.; Wolfe, E.; Wood, J.; Xia, F.; Clarke, C.; Harr, R.; Karchin, P. E.; Kottachchi Kankanamge Don, C.; Lamichhane, P.; Sturdy, J.; Belknap, D. A.; Carlsmith, D.; Cepeda, M.; Dasu, S.; Dodd, L.; Duric, S.; Gomber, B.; Grothe, M.; Hall-Wilton, R.; Herndon, M.; Hervé, A.; Klabbers, P.; Lanaro, A.; Levine, A.; Long, K.; Loveless, R.; Mohapatra, A.; Ojalvo, I.; Perry, T.; Pierro, G. A.; Polese, G.; Ruggles, T.; Sarangi, T.; Savin, A.; Sharma, A.; Smith, N.; Smith, W. H.; Taylor, D.; Woods, N.
2017-01-01
This paper describes the CMS trigger system and its performance during Run 1 of the LHC. The trigger system consists of two levels designed to select events of potential physics interest from a GHz (MHz) interaction rate of proton-proton (heavy ion) collisions. The first level of the trigger is implemented in hardware, and selects events containing detector signals consistent with an electron, photon, muon, τ lepton, jet, or missing transverse energy. A programmable menu of up to 128 object-based algorithms is used to select events for subsequent processing. The trigger thresholds are adjusted to the LHC instantaneous luminosity during data taking in order to restrict the output rate to 100 kHz, the upper limit imposed by the CMS readout electronics. The second level, implemented in software, further refines the purity of the output stream, selecting an average rate of 400 Hz for offline event storage. The objectives, strategy and performance of the trigger system during the LHC Run 1 are described.
Khachatryan, Vardan
2017-01-24
This paper describes the CMS trigger system and its performance during Run 1 of the LHC. The trigger system consists of two levels designed to select events of potential physics interest from a GHz (MHz) interaction rate of proton-proton (heavy ion) collisions. The first level of the trigger is implemented in hardware, and selects events containing detector signals consistent with an electron, photon, muon, tau lepton, jet, or missing transverse energy. A programmable menu of up to 128 object-based algorithms is used to select events for subsequent processing. The trigger thresholds are adjusted to the LHC instantaneous luminosity during data taking in order to restrict the output rate to 100 kHz, the upper limit imposed by the CMS readout electronics. The second level, implemented in software, further refines the purity of the output stream, selecting an average rate of 400 Hz for offline event storage. The objectives, strategy and performance of the trigger system during the LHC Run 1 are described.
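The two-level rate reduction described in the abstract above can be illustrated with simple arithmetic. This is a hedged sketch: the 100 kHz Level-1 limit and 400 Hz average software-trigger output are taken from the abstract, the GHz-scale input rate is the quoted order of magnitude for proton-proton collisions, and the helper name is illustrative rather than any CMS software API.

```python
# Sketch of the two-level CMS Run 1 trigger rate budget described above.
# 100 kHz (Level-1 hardware limit) and 400 Hz (average software-trigger
# output) come from the abstract; the ~GHz input is the quoted pp rate.

def rejection_factor(rate_in_hz: float, rate_out_hz: float) -> float:
    """Events discarded per event kept at one trigger level."""
    return rate_in_hz / rate_out_hz

INPUT_RATE = 1e9      # ~GHz proton-proton interaction rate (order of magnitude)
L1_OUTPUT = 100e3     # Level-1 limit imposed by the CMS readout electronics
HLT_OUTPUT = 400.0    # average software-level rate sent to offline storage

l1_reject = rejection_factor(INPUT_RATE, L1_OUTPUT)
hlt_reject = rejection_factor(L1_OUTPUT, HLT_OUTPUT)
total = rejection_factor(INPUT_RATE, HLT_OUTPUT)

# The two stages compose multiplicatively: ~10^4 x at Level-1 and
# 250x in software give an overall reduction of ~2.5 million.
assert total == l1_reject * hlt_reject
print(f"L1: {l1_reject:.0f}x, HLT: {hlt_reject:.0f}x, total: {total:.0f}x")
```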
NASA Astrophysics Data System (ADS)
Valentino, Gianluca; Baud, Guillaume; Bruce, Roderik; Gasior, Marek; Mereghetti, Alessio; Mirarchi, Daniele; Olexa, Jakub; Redaelli, Stefano; Salvachua, Belen; Valloni, Alessandra; Wenninger, Jorg
2017-08-01
During Long Shutdown 1, 18 Large Hadron Collider (LHC) collimators were replaced with a new design in which beam position monitor (BPM) pick-up buttons are embedded in the collimator jaws. The BPMs provide a direct measurement of the beam orbit at the collimators and can therefore be used to align the collimators more quickly than the standard technique, which relies on feedback from beam losses. Online orbit measurements also allow the operational margins in the collimation hierarchy that are reserved for unknown orbit drifts to be reduced, thereby decreasing the β* and increasing the luminosity reach of the LHC. In this paper, the results from the commissioning of the embedded BPMs in the LHC are presented. The data acquisition and control software architectures are reviewed. A comparison with the standard alignment technique is provided, together with a fill-to-fill analysis of the measured orbit in different machine modes, which will also be used to determine suitable beam interlocks for a tighter collimation hierarchy.
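A button BPM like the ones described above infers the transverse beam position from the imbalance of the signals induced on opposing pick-up electrodes. The following is a minimal sketch of the standard first-order difference-over-sum estimate; the scale factor, function name, and signal values are illustrative assumptions, and real LHC BPM electronics apply detailed calibration and nonlinearity corrections on top of this.

```python
# Illustrative first-order position estimate for a two-button BPM.
# The geometric scale factor k_mm and the signal amplitudes below are
# hypothetical; they are not LHC collimator BPM calibration constants.

def bpm_position_mm(v_left: float, v_right: float, k_mm: float = 10.0) -> float:
    """Beam offset from the difference-over-sum of opposing button signals."""
    total = v_left + v_right
    if total <= 0:
        raise ValueError("no signal on the electrodes")
    return k_mm * (v_right - v_left) / total

# A centered beam induces equal signals, giving zero offset.
print(bpm_position_mm(1.0, 1.0))   # 0.0
# A beam displaced toward the right electrode gives a positive offset.
print(bpm_position_mm(0.8, 1.2))
```

The difference-over-sum form makes the estimate independent of the overall bunch intensity to first order, which is why it is the conventional starting point for pick-up electronics.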
Data acquisition and processing in the ATLAS tile calorimeter phase-II upgrade demonstrator
NASA Astrophysics Data System (ADS)
Valero, A.; Tile Calorimeter System, ATLAS
2017-10-01
The LHC has planned a series of upgrades culminating in the High Luminosity LHC, which will have an average luminosity 5-7 times larger than the nominal Run 2 value. The ATLAS Tile Calorimeter will undergo an upgrade to accommodate the HL-LHC parameters. The TileCal readout electronics will be redesigned, introducing a new readout strategy. A Demonstrator program has been developed to evaluate the new proposed readout architecture and prototypes of all the components. In the Demonstrator, the detector data received in the Tile PreProcessors (PPr) are stored in pipeline buffers and, upon reception of an external trigger signal, the data events are processed, packed and read out in parallel through the legacy ROD system, the new Front-End Link eXchange system and an Ethernet connection for monitoring purposes. This contribution describes in detail the data processing and the hardware, firmware and software components of the TileCal Demonstrator readout system.
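The pipelined readout pattern described above — samples continuously written into a fixed-depth buffer, with a triggered event extracted at a fixed latency in the past — can be sketched as follows. This is a toy model under stated assumptions: the buffer depth, latency, and class name are illustrative and are not TileCal Demonstrator parameters.

```python
# Toy model of a trigger pipeline buffer: one sample is stored per bunch
# crossing, and an external trigger reads back the sample taken a fixed
# number of crossings earlier (the trigger latency). Depth and latency
# values are illustrative, not actual TileCal firmware parameters.

from collections import deque

class PipelineBuffer:
    def __init__(self, depth: int, trigger_latency: int):
        assert trigger_latency < depth, "latency must fit inside the pipeline"
        self.buf = deque(maxlen=depth)   # oldest samples drop off automatically
        self.latency = trigger_latency

    def clock(self, sample) -> None:
        """Store one sample per bunch-crossing clock tick."""
        self.buf.append(sample)

    def on_trigger(self):
        """Return the sample taken `latency` crossings before the trigger."""
        return self.buf[-1 - self.latency]

pipe = PipelineBuffer(depth=8, trigger_latency=3)
for bx in range(20):          # simulate 20 bunch crossings, storing the index
    pipe.clock(bx)
print(pipe.on_trigger())      # sample from 3 crossings ago -> 16
```

The fixed-depth `deque` captures the essential constraint: the trigger decision must arrive before the corresponding sample is overwritten, which is why the trigger latency must be smaller than the pipeline depth.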
Evolution of the ATLAS distributed computing system during the LHC long shutdown
NASA Astrophysics Data System (ADS)
Campana, S.; Atlas Collaboration
2014-06-01
The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Box, D.; Boyd, J.; Di Benedetto, V.
2016-01-01
The FabrIc for Frontier Experiments (FIFE) project is an initiative within the Fermilab Scientific Computing Division designed to steer the computing model for non-LHC Fermilab experiments across multiple physics areas. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying size, needs, and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of solutions for high throughput computing, data management, database access and collaboration management within an experiment. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid compute sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating into experiment computing operations several services including a common job submission service, software and reference data distribution through CVMFS repositories, flexible and robust data transfer clients, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken the leading role in defining the computing model for Fermilab experiments, aided in the design of experiments beyond those hosted at Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.
CP-violation in the two Higgs doublet model: From the LHC to EDMs
NASA Astrophysics Data System (ADS)
Chen, Chien-Yi; Li, Hao-Lin; Ramsey-Musolf, Michael
2018-01-01
We study the prospective sensitivity to CP-violating two Higgs doublet models from the 14 TeV LHC and future electric dipole moment (EDM) experiments. We concentrate on the search for a resonant heavy Higgs that decays to a Z boson and a SM-like Higgs h, leading to the Z(ℓℓ)h(bb̄) final state. The prospective LHC reach is analyzed using the Boosted Decision Tree method. We illustrate the complementarity between the LHC and low-energy EDM measurements and study the dependence of the physics reach on the degree of deviation from the alignment limit. In all cases, we find that there exists a large part of parameter space that is sensitive to both EDMs and LHC searches.
The ALICE experiment at the CERN LHC
NASA Astrophysics Data System (ADS)
ALICE Collaboration; Aamodt, K.; Abrahantes Quintana, A.; Achenbach, R.; Acounis, S.; Adamová, D.; Adler, C.; Aggarwal, M.; Agnese, F.; Aglieri Rinella, G.; Ahammed, Z.; Ahmad, A.; Ahmad, N.; Ahmad, S.; Akindinov, A.; Akishin, P.; Aleksandrov, D.; Alessandro, B.; Alfaro, R.; Alfarone, G.; Alici, A.; Alme, J.; Alt, T.; Altinpinar, S.; Amend, W.; Andrei, C.; Andres, Y.; Andronic, A.; Anelli, G.; Anfreville, M.; Angelov, V.; Anzo, A.; Anson, C.; Anticić, T.; Antonenko, V.; Antonczyk, D.; Antinori, F.; Antinori, S.; Antonioli, P.; Aphecetche, L.; Appelshäuser, H.; Aprodu, V.; Arba, M.; Arcelli, S.; Argentieri, A.; Armesto, N.; Arnaldi, R.; Arefiev, A.; Arsene, I.; Asryan, A.; Augustinus, A.; Awes, T. C.; Äysto, J.; Danish Azmi, M.; Bablock, S.; Badalà, A.; Badyal, S. K.; Baechler, J.; Bagnasco, S.; Bailhache, R.; Bala, R.; Baldisseri, A.; Baldit, A.; Bán, J.; Barbera, R.; Barberis, P.-L.; Barbet, J. M.; Barnäfoldi, G.; Barret, V.; Bartke, J.; Bartos, D.; Basile, M.; Basmanov, V.; Bastid, N.; Batigne, G.; Batyunya, B.; Baudot, J.; Baumann, C.; Bearden, I.; Becker, B.; Belikov, J.; Bellwied, R.; Belmont-Moreno, E.; Belogianni, A.; Belyaev, S.; Benato, A.; Beney, J. L.; Benhabib, L.; Benotto, F.; Beolé, S.; Berceanu, I.; Bercuci, A.; Berdermann, E.; Berdnikov, Y.; Bernard, C.; Berny, R.; Berst, J. D.; Bertelsen, H.; Betev, L.; Bhasin, A.; Baskar, P.; Bhati, A.; Bianchi, N.; Bielčik, J.; Bielčiková, J.; Bimbot, L.; Blanchard, G.; Blanco, F.; Blanco, F.; Blau, D.; Blume, C.; Blyth, S.; Boccioli, M.; Bogdanov, A.; Bøggild, H.; Bogolyubsky, M.; Boldizsár, L.; Bombara, M.; Bombonati, C.; Bondila, M.; Bonnet, D.; Bonvicini, V.; Borel, H.; Borotto, F.; Borshchov, V.; Bortoli, Y.; Borysov, O.; Bose, S.; Bosisio, L.; Botje, M.; Böttger, S.; Bourdaud, G.; Bourrion, O.; Bouvier, S.; Braem, A.; Braun, M.; Braun-Munzinger, P.; Bravina, L.; Bregant, M.; Bruckner, G.; Brun, R.; Bruna, E.; Brunasso, O.; Bruno, G. 
E.; Bucher, D.; Budilov, V.; Budnikov, D.; Buesching, H.; Buncic, P.; Burns, M.; Burachas, S.; Busch, O.; Bushop, J.; Cai, X.; Caines, H.; Calaon, F.; Caldogno, M.; Cali, I.; Camerini, P.; Campagnolo, R.; Campbell, M.; Cao, X.; Capitani, G. P.; Romeo, G. Cara; Cardenas-Montes, M.; Carduner, H.; Carena, F.; Carena, W.; Cariola, P.; Carminati, F.; Casado, J.; Casanova Diaz, A.; Caselle, M.; Castillo Castellanos, J.; Castor, J.; Catanescu, V.; Cattaruzza, E.; Cavazza, D.; Cerello, P.; Ceresa, S.; Černý, V.; Chambert, V.; Chapeland, S.; Charpy, A.; Charrier, D.; Chartoire, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chepurnov, V.; Chernenko, S.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Chochula, P.; Chiavassa, E.; Chibante Barroso, V.; Choi, J.; Christakoglou, P.; Christiansen, P.; Christensen, C.; Chykalov, O. A.; Cicalo, C.; Cifarelli-Strolin, L.; Ciobanu, M.; Cindolo, F.; Cirstoiu, C.; Clausse, O.; Cleymans, J.; Cobanoglu, O.; Coffin, J.-P.; Coli, S.; Colla, A.; Colledani, C.; Combaret, C.; Combet, M.; Comets, M.; Conesa Balbastre, G.; Conesa del Valle, Z.; Contin, G.; Contreras, J.; Cormier, T.; Corsi, F.; Cortese, P.; Costa, F.; Crescio, E.; Crochet, P.; Cuautle, E.; Cussonneau, J.; Dahlinger, M.; Dainese, A.; Dalsgaard, H. H.; Daniel, L.; Das, I.; Das, T.; Dash, A.; Da Silva, R.; Davenport, M.; Daues, H.; DeCaro, A.; de Cataldo, G.; DeCuveland, J.; DeFalco, A.; de Gaspari, M.; de Girolamo, P.; de Groot, J.; DeGruttola, D.; DeHaas, A.; DeMarco, N.; DePasquale, S.; DeRemigis, P.; de Vaux, D.; Decock, G.; Delagrange, H.; DelFranco, M.; Dellacasa, G.; Dell'Olio, C.; Dell'Olio, D.; Deloff, A.; Demanov, V.; Dénes, E.; D'Erasmo, G.; Derkach, D.; Devaux, A.; Di Bari, D.; Di Bartelomen, A.; Di Giglio, C.; Di Liberto, S.; Di Mauro, A.; Di Nezza, P.; Dialinas, M.; Diaz, L.; Díaz Valdes, R.; Dietel, T.; Dima, R.; Ding, H.; Dinca, C.; Divià, R.; Dobretsov, V.; Dobrin, A.; Doenigus, B.; Dobrowolski, T.; Domínguez, I.; Dorn, M.; Drouet, S.; Dubey, A. 
E.; Ducroux, L.; Dumitrache, F.; Dumonteil, E.; Dupieux, P.; Duta, V.; Dutta Majumdar, A.; Dutta Majumdar, M.; Dyhre, Th; Efimov, L.; Efremov, A.; Elia, D.; Emschermann, D.; Engster, C.; Enokizono, A.; Espagnon, B.; Estienne, M.; Evangelista, A.; Evans, D.; Evrard, S.; Fabjan, C. W.; Fabris, D.; Faivre, J.; Falchieri, D.; Fantoni, A.; Farano, R.; Fearick, R.; Fedorov, O.; Fekete, V.; Felea, D.; Feofilov, G.; Férnandez Téllez, A.; Ferretti, A.; Fichera, F.; Filchagin, S.; Filoni, E.; Finck, C.; Fini, R.; Fiore, E. M.; Flierl, D.; Floris, M.; Fodor, Z.; Foka, Y.; Fokin, S.; Force, P.; Formenti, F.; Fragiacomo, E.; Fragkiadakis, M.; Fraissard, D.; Franco, A.; Franco, M.; Frankenfeld, U.; Fratino, U.; Fresneau, S.; Frolov, A.; Fuchs, U.; Fujita, J.; Furget, C.; Furini, M.; Fusco Girard, M.; Gaardhøje, J.-J.; Gabrielli, A.; Gadrat, S.; Gagliardi, M.; Gago, A.; Gaido, L.; Gallas Torreira, A.; Gallio, M.; Gandolfi, E.; Ganoti, P.; Ganti, M.; Garabatos, J.; Garcia Lopez, A.; Garizzo, L.; Gaudichet, L.; Gemme, R.; Germain, M.; Gheata, A.; Gheata, M.; Ghidini, B.; Ghosh, P.; Giolu, G.; Giraudo, G.; Giubellino, P.; Glasow, R.; Glässel, P.; Ferreiro, E. G.; Gonzalez Gutierrez, C.; Gonzales-Trueba, L. H.; Gorbunov, S.; Gorbunov, Y.; Gos, H.; Gosset, J.; Gotovac, S.; Gottschlag, H.; Gottschalk, D.; Grabski, V.; Grassi, T.; Gray, H.; Grebenyuk, O.; Grebieszkow, K.; Gregory, C.; Grigoras, C.; Grion, N.; Grigoriev, V.; Grigoryan, A.; Grigoryan, C.; Grigoryan, S.; Grishuk, Y.; Gros, P.; Grosse-Oetringhaus, J.; Grossiord, J.-Y.; Grosso, R.; Grynyov, B.; Guarnaccia, C.; Guber, F.; Guerin, F.; Guernane, R.; Guerzoni, M.; Guichard, A.; Guida, M.; Guilloux, G.; Gulkanyan, H.; Gulbrandsen, K.; Gunji, T.; Gupta, A.; Gupta, V.; Gustafsson, H.-A.; Gutbrod, H.; Hadjidakis, C.; Haiduc, M.; Hamar, G.; Hamagaki, H.; Hamblen, J.; Hansen, J. C.; Hardy, P.; Hatzifotiadou, D.; Harris, J. 
W.; Hartig, M.; Harutyunyan, A.; Hayrapetyan, A.; Hasch, D.; Hasegan, D.; Hehner, J.; Heine, N.; Heinz, M.; Helstrup, H.; Herghelegiu, A.; Herlant, S.; Herrera Corral, G.; Herrmann, N.; Hetland, K.; Hille, P.; Hinke, H.; Hippolyte, B.; Hoch, M.; Hoebbel, H.; Hoedlmoser, H.; Horaguchi, T.; Horner, M.; Hristov, P.; Hřivnáčová, I.; Hu, S.; Guo, C. Hu; Humanic, T.; Hurtado, A.; Hwang, D. S.; Ianigro, J. C.; Idzik, M.; Igolkin, S.; Ilkaev, R.; Ilkiv, I.; Imhoff, M.; Innocenti, P. G.; Ionescu, E.; Ippolitov, M.; Irfan, M.; Insa, C.; Inuzuka, M.; Ivan, C.; Ivanov, A.; Ivanov, M.; Ivanov, V.; Jacobs, P.; Jacholkowski, A.; Jančurová, L.; Janik, R.; Jasper, M.; Jena, C.; Jirden, L.; Johnson, D. P.; Jones, G. T.; Jorgensen, C.; Jouve, F.; Jovanović, P.; Junique, A.; Jusko, A.; Jung, H.; Jung, W.; Kadija, K.; Kamal, A.; Kamermans, R.; Kapusta, S.; Kaidalov, A.; Kakoyan, V.; Kalcher, S.; Kang, E.; Kapitan, J.; Kaplin, V.; Karadzhev, K.; Karavichev, O.; Karavicheva, T.; Karpechev, E.; Karpio, K.; Kazantsev, A.; Kebschull, U.; Keidel, R.; Mohsin Khan, M.; Khanzadeev, A.; Kharlov, Y.; Kikola, D.; Kileng, B.; Kim, D.; Kim, D. S.; Kim, D. W.; Kim, H. N.; Kim, J. S.; Kim, S.; Kinson, J. B.; Kiprich, S. K.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, T.; Kiworra, V.; Klay, J.; Klein Bösing, C.; Kliemant, M.; Klimov, A.; Klovning, A.; Kluge, A.; Kluit, R.; Kniege, S.; Kolevatov, R.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kornas, E.; Koshurnikov, E.; Kotov, I.; Kour, R.; Kowalski, M.; Kox, S.; Kozlov, K.; Králik, I.; Kramer, F.; Kraus, I.; Kravčáková, A.; Krawutschke, T.; Krivda, M.; Kryshen, E.; Kucheriaev, Y.; Kugler, A.; Kuhn, C.; Kuijer, P.; Kumar, L.; Kumar, N.; Kumpumaeki, P.; Kurepin, A.; Kurepin, A. N.; Kushpil, S.; Kushpil, V.; Kutovsky, M.; Kvaerno, H.; Kweon, M.; Labbé, J.-C.; Lackner, F.; Ladron de Guevara, P.; Lafage, V.; La Rocca, P.; Lamont, M.; Lara, C.; Larsen, D. 
T.; Laurenti, G.; Lazzeroni, C.; LeBornec, Y.; LeBris, N.; LeGailliard, C.; Lebedev, V.; Lecoq, J.; Lee, K. S.; Lee, S. C.; Lefévre, F.; Legrand, I.; Lehmann, T.; Leistam, L.; Lenoir, P.; Lenti, V.; Leon, H.; Monzon, I. Leon; Lévai, P.; Li, Q.; Li, X.; Librizzi, F.; Lietava, R.; Lindegaard, N.; Lindenstruth, V.; Lippmann, C.; Lisa, M.; Listratenko, O. M.; Littel, F.; Liu, Y.; Lo, J.; Lobanov, V.; Loginov, V.; López Noriega, M.; López-Ramírez, R.; López Torres, E.; Lorenzo, P. M.; Løvhøiden, G.; Lu, S.; Ludolphs, W.; Lunardon, M.; Luquin, L.; Lusso, S.; Lutz, J.-R.; Luvisetto, M.; Lyapin, V.; Maevskaya, A.; Magureanu, C.; Mahajan, A.; Majahan, S.; Mahmoud, T.; Mairani, A.; Mahapatra, D.; Makarov, A.; Makhlyueva, I.; Malek, M.; Malkiewicz, T.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manea, C.; Mangotra, L. K.; Maniero, D.; Manko, V.; Manso, F.; Manzari, V.; Mao, Y.; Marcel, A.; Marchini, S.; Mareš, J.; Margagliotti, G. V.; Margotti, A.; Marin, A.; Marin, J.-C.; Marras, D.; Martinengo, P.; Martínez, M. I.; Martinez-Davalos, A.; Martínez Garcia, G.; Martini, S.; Marzari Chiesa, A.; Marzocca, C.; Masciocchi, S.; Masera, M.; Masetti, M.; Maslov, N. I.; Masoni, A.; Massera, F.; Mast, M.; Mastroserio, A.; Matthews, Z. L.; Mayer, B.; Mazza, G.; Mazzaro, M. D.; Mazzoni, A.; Meddi, F.; Meleshko, E.; Menchaca-Rocha, A.; Meneghini, S.; Meoni, M.; Mercado Perez, J.; Mereu, P.; Meunier, O.; Miake, Y.; Michalon, A.; Michinelli, R.; Miftakhov, N.; Mignone, M.; Mikhailov, K.; Milosevic, J.; Minaev, Y.; Minafra, F.; Mischke, A.; Miśkowiec, D.; Mitsyn, V.; Mitu, C.; Mohanty, B.; Moisa, D.; Molnar, L.; Mondal, M.; Mondal, N.; Montaño Zetina, L.; Monteno, M.; Morando, M.; Morel, M.; Moretto, S.; Morhardt, Th; Morsch, A.; Moukhanova, T.; Mucchi, M.; Muccifora, V.; Mudnic, E.; Müller, H.; Müller, W.; Munoz, J.; Mura, D.; Musa, L.; Muraz, J. 
F.; Musso, A.; Nania, R.; Nandi, B.; Nappi, E.; Navach, F.; Navin, S.; Nayak, T.; Nazarenko, S.; Nazarov, G.; Nellen, L.; Nendaz, F.; Nianine, A.; Nicassio, M.; Nielsen, B. S.; Nikolaev, S.; Nikolic, V.; Nikulin, S.; Nikulin, V.; Nilsen, B.; Nitti, M.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noto, F.; Nouais, D.; Nyiri, A.; Nystrand, J.; Odyniec, G.; Oeschler, H.; Oinonen, M.; Oldenburg, M.; Oleks, I.; Olsen, E. K.; Onuchin, V.; Oppedisano, C.; Orsini, F.; Ortiz-Velázquez, A.; Oskamp, C.; Oskarsson, A.; Osmic, F.; Österman, L.; Otterlund, I.; Ovrebekk, G.; Oyama, K.; Pachr, M.; Pagano, P.; Paić, G.; Pajares, C.; Pal, S.; Pal, S.; Pálla, G.; Palmeri, A.; Pancaldi, G.; Panse, R.; Pantaleo, A.; Pappalardo, G. S.; Pastirčák, B.; Pastore, C.; Patarakin, O.; Paticchio, V.; Patimo, G.; Pavlinov, A.; Pawlak, T.; Peitzmann, T.; Pénichot, Y.; Pepato, A.; Pereira, H.; Peresunko, D.; Perez, C.; Perez Griffo, J.; Perini, D.; Perrino, D.; Peryt, W.; Pesci, A.; Peskov, V.; Pestov, Y.; Peters, A. J.; Petráček, V.; Petridis, A.; Petris, M.; Petrov, V.; Petrov, V.; Petrovici, M.; Peyré, J.; Piano, S.; Piccotti, A.; Pichot, P.; Piemonte, C.; Pikna, M.; Pilastrini, R.; Pillot, P.; Pinazza, O.; Pini, B.; Pinsky, L.; Pinto Morais, V.; Pismennaya, V.; Piuz, F.; Platt, R.; Ploskon, M.; Plumeri, S.; Pluta, J.; Pocheptsov, T.; Podesta, P.; Poggio, F.; Poghosyan, M.; Poghosyan, T.; Polák, K.; Polichtchouk, B.; Polozov, P.; Polyakov, V.; Pommeresch, B.; Pompei, F.; Pop, A.; Popescu, S.; Posa, F.; Pospíšil, V.; Potukuchi, B.; Pouthas, J.; Prasad, S.; Preghenella, R.; Prino, F.; Prodan, L.; Prono, G.; Protsenko, M. A.; Pruneau, C. A.; Przybyla, A.; Pshenichnov, I.; Puddu, G.; Pujahari, P.; Pulvirenti, A.; Punin, A.; Punin, V.; Putschke, J.; Quartieri, J.; Quercigh, E.; Rachevskaya, I.; Rachevski, A.; Rademakers, A.; Radomski, S.; Radu, A.; Rak, J.; Ramello, L.; Raniwala, R.; Raniwala, S.; Rasmussen, O. 
B.; Rasson, J.; Razin, V.; Read, K.; Real, J.; Redlich, K.; Reichling, C.; Renard, C.; Renault, G.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Revol, J.-P.; Reygers, K.; Ricaud, H.; Riccati, L.; Ricci, R. A.; Richter, M.; Riedler, P.; Rigalleau, L. M.; Riggi, F.; Riegler, W.; Rindel, E.; Riso, J.; Rivetti, A.; Rizzi, M.; Rizzi, V.; Rodriguez Cahuantzi, M.; Røed, K.; Röhrich, D.; Román-López, S.; Romanato, M.; Romita, R.; Ronchetti, F.; Rosinsky, P.; Rosnet, P.; Rossegger, S.; Rossi, A.; Rostchin, V.; Rotondo, F.; Roukoutakis, F.; Rousseau, S.; Roy, C.; Roy, D.; Roy, P.; Royer, L.; Rubin, G.; Rubio, A.; Rui, R.; Rusanov, I.; Russo, G.; Ruuskanen, V.; Ryabinkin, E.; Rybicki, A.; Sadovsky, S.; Šafařík, K.; Sahoo, R.; Saini, J.; Saiz, P.; Salur, S.; Sambyal, S.; Samsonov, V.; Šándor, L.; Sandoval, A.; Sann, H.; Santiard, J.-C.; Santo, R.; Santoro, R.; Sargsyan, G.; Saturnini, P.; Scapparone, E.; Scarlassara, F.; Schackert, B.; Schiaua, C.; Schicker, R.; Schioler, T.; Schippers, J. D.; Schmidt, C.; Schmidt, H.; Schneider, R.; Schossmaier, K.; Schukraft, J.; Schutz, Y.; Schwarz, K.; Schweda, K.; Schyns, E.; Scioli, G.; Scomparin, E.; Snow, H.; Sedykh, S.; Segato, G.; Sellitto, S.; Semeria, F.; Senyukov, S.; Seppänen, H.; Serci, S.; Serkin, L.; Serra, S.; Sesselmann, T.; Sevcenco, A.; Sgura, I.; Shabratova, G.; Shahoyan, R.; Sharkov, E.; Sharma, S.; Shigaki, K.; Shileev, K.; Shukla, P.; Shurygin, A.; Shurygina, M.; Sibiriak, Y.; Siddi, E.; Siemiarczuk, T.; Sigward, M. 
H.; Silenzi, A.; Silvermyr, D.; Silvestri, R.; Simili, E.; Simion, V.; Simon, R.; Simonetti, L.; Singaraju, R.; Singhal, V.; Sinha, B.; Sinha, T.; Siska, M.; Sitár, B.; Sitta, M.; Skaali, B.; Skowronski, P.; Slodkowski, M.; Smirnov, N.; Smykov, L.; Snellings, R.; Snoeys, W.; Soegaard, C.; Soerensen, J.; Sokolov, O.; Soldatov, A.; Soloviev, A.; Soltveit, H.; Soltz, R.; Sommer, W.; Soos, C.; Soramel, F.; Sorensen, S.; Soyk, D.; Spyropoulou-Stassinaki, M.; Stachel, J.; Staley, F.; Stan, I.; Stavinskiy, A.; Steckert, J.; Stefanini, G.; Stefanek, G.; Steinbeck, T.; Stelzer, H.; Stenlund, E.; Stocco, D.; Stockmeier, M.; Stoicea, G.; Stolpovsky, P.; Strmeň, P.; Stutzmann, J. S.; Su, G.; Sugitate, T.; Šumbera, M.; Suire, C.; Susa, T.; Sushil Kumar, K.; Swoboda, D.; Symons, J.; Szarka, I.; Szostak, A.; Szuba, M.; Szymanski, P.; Tadel, M.; Tagridis, C.; Tan, L.; Tapia Takaki, D.; Taureg, H.; Tauro, A.; Tavlet, M.; Tejeda Munoz, G.; Thäder, J.; Tieulent, R.; Timmer, P.; Tolyhy, T.; Topilskaya, N.; Torcato de Matos, C.; Torii, H.; Toscano, L.; Tosello, F.; Tournaire, A.; Traczyk, T.; Tröger, G.; Tromeur, W.; Truesdale, D.; Trzaska, W.; Tsiledakis, G.; Tsilis, E.; Tsvetkov, A.; Turcato, M.; Turrisi, R.; Tuveri, M.; Tveter, T.; Tydesjo, H.; Tykarski, L.; Tywoniuk, K.; Ugolini, E.; Ullaland, K.; Urbán, J.; Urciuoli, G. M.; Usai, G. L.; Usseglio, M.; Vacchi, A.; Vala, M.; Valiev, F.; Vande Vyvre, P.; Van Den Brink, A.; Van Eijndhoven, N.; Van Der Kolk, N.; van Leeuwen, M.; Vannucci, L.; Vanzetto, S.; Vanuxem, J.-P.; Vargas, M. 
A.; Varma, R.; Vascotto, A.; Vasiliev, A.; Vassiliou, M.; Vasta, P.; Vechernin, V.; Venaruzzo, M.; Vercellin, E.; Vergara, S.; Verhoeven, W.; Veronese, F.; Vetlitskiy, I.; Vernet, R.; Victorov, V.; Vidak, L.; Viesti, G.; Vikhlyantsev, O.; Vilakazi, Z.; Villalobos Baillie, O.; Vinogradov, A.; Vinogradov, L.; Vinogradov, Y.; Virgili, T.; Viyogi, Y.; Vodopianov, A.; Volpe, G.; Vranic, D.; Vrláková, J.; Vulpescu, B.; Wabnitz, C.; Wagner, V.; Wallet, L.; Wan, R.; Wang, Y.; Wang, Y.; Wheadon, R.; Weis, R.; Wen, Q.; Wessels, J.; Westergaard, J.; Wiechula, J.; Wiesenaecker, A.; Wikne, J.; Wilk, A.; Wilk, G.; Williams, C.; Willis, N.; Windelband, B.; Witt, R.; Woehri, H.; Wyllie, K.; Xu, C.; Yang, C.; Yang, H.; Yermia, F.; Yin, Z.; Yin, Z.; Ky, B. Yun; Yushmanov, I.; Yuting, B.; Zabrodin, E.; Zagato, S.; Zagreev, B.; Zaharia, P.; Zalite, A.; Zampa, G.; Zampolli, C.; Zanevskiy, Y.; Zarochentsev, A.; Zaudtke, O.; Závada, P.; Zbroszczyk, H.; Zepeda, A.; Zeter, V.; Zgura, I.; Zhalov, M.; Zhou, D.; Zhou, S.; Zhu, G.; Zichichi, A.; Zinchenko, A.; Zinovjev, G.; Zoccarato, Y.; Zubarev, A.; Zucchini, A.; Zuffa, M.
2008-08-01
ALICE (A Large Ion Collider Experiment) is a general-purpose, heavy-ion detector at the CERN LHC which focuses on QCD, the strong-interaction sector of the Standard Model. It is designed to address the physics of strongly interacting matter and the quark-gluon plasma at extreme values of energy density and temperature in nucleus-nucleus collisions. Besides running with Pb ions, the physics programme includes collisions with lighter ions, lower-energy running and dedicated proton-nucleus runs. ALICE will also take data with proton beams at the top LHC energy to collect reference data for the heavy-ion programme and to address several QCD topics for which ALICE is complementary to the other LHC detectors. The ALICE detector has been built by a collaboration currently including over 1000 physicists and engineers from 105 institutes in 30 countries. Its overall dimensions are 16 × 16 × 26 m³ with a total weight of approximately 10 000 t. The experiment consists of 18 different detector systems, each with its own specific technology choice and design constraints, driven both by the physics requirements and by the experimental conditions expected at the LHC. The most stringent design constraint is to cope with the extreme particle multiplicity anticipated in central Pb-Pb collisions. The different subsystems were optimized to provide high momentum resolution as well as excellent particle identification (PID) over a broad range in momentum, up to the highest multiplicities predicted for the LHC. This will allow for comprehensive studies of hadrons, electrons, muons, and photons produced in the collision of heavy nuclei. Most detector systems are scheduled to be installed and ready for data taking by mid-2008, when the LHC is scheduled to start operation, with the exception of parts of the Photon Spectrometer (PHOS), Transition Radiation Detector (TRD) and Electromagnetic Calorimeter (EMCal). These detectors will be completed for the high-luminosity ion run expected in 2010. 
This paper describes in detail the detector components as installed for the first data taking in the summer of 2008.
Issues Using the Life History Calendar in Disability Research
Scott, Tiffany N.; Harrison, Tracie
2011-01-01
Background: Overall, there is a dearth of research reporting mixed-method data collection procedures using the life history calendar (LHC) within disability research. Objective: This report provides practical knowledge on use of the LHC from the perspective of a mixed-method life history study of mobility impairment situated within a qualitative paradigm. Methods: The method-related literature on the LHC was reviewed along with its epistemological underpinnings. Further, the uses of the LHC in disability research were illustrated using preliminary data from reports of disablement in Mexican American and Non-Hispanic White women with permanent mobility impairment. Results: From our perspective, the LHC was most useful when approached from an interpretive paradigm when gathering data from women of varied ethnic and socioeconomic strata. While we found the LHC the most useful tool currently available for studying disablement over the life course, there were challenges associated with its use. The LHC required extensive interviewer training. In addition, large segments of time were needed for completion, depending on the type of participant responses. Conclusions: Researchers planning to conduct a disability study may find our experience using the LHC valuable for anticipating issues that may arise when the LHC is used in mixed-method research. PMID:22014674
The TOTEM DAQ based on the Scalable Readout System (SRS)
NASA Astrophysics Data System (ADS)
Quinto, Michele; Cafagna, Francesco S.; Fiergolski, Adrian; Radicioni, Emilio
2018-02-01
The TOTEM (TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC) experiment at the LHC has been designed to measure the total proton-proton cross-section and to study elastic and diffractive scattering at LHC energies. In order to cope with the increased machine luminosity and the higher statistics required by the extension of the TOTEM physics programme, approved for the LHC's Run Two phase, the previous VME-based data acquisition system has been replaced with a new one based on the Scalable Readout System. The system features an aggregated data throughput of 2 GB/s towards the online storage system. This makes it possible to sustain a maximum trigger rate of ~24 kHz, to be compared with the 1 kHz rate of the previous system. The trigger rate is further improved by implementing zero-suppression and second-level hardware algorithms in the Scalable Readout System. The new system fulfils the requirements for increased efficiency, providing higher bandwidth and increasing the purity of the recorded data. Moreover, full compatibility has been guaranteed with the legacy front-end hardware, as well as with the DAQ interface of the CMS experiment and with the LHC's Timing, Trigger and Control distribution system. In this contribution we describe in detail the architecture of the full system and its performance as measured during the commissioning phase at the LHC Interaction Point.
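The zero-suppression mentioned above is a standard readout idea: channels whose ADC value does not rise significantly above pedestal are dropped before the event is shipped to storage. The sketch below is purely illustrative and assumes a made-up channel layout and threshold; it is not TOTEM's actual data format or firmware logic.

```python
def zero_suppress(adc_values, pedestals, threshold=5):
    """Keep only (channel, value) pairs significantly above pedestal.

    adc_values: raw ADC counts indexed by channel number.
    pedestals:  per-channel baseline to subtract before the cut.
    threshold:  minimum pedestal-subtracted signal to retain (illustrative).
    """
    suppressed = []
    for channel, value in enumerate(adc_values):
        # Drop channels consistent with noise; keep genuine hits.
        if value - pedestals[channel] > threshold:
            suppressed.append((channel, value))
    return suppressed

# Toy event: only channels 2 and 4 carry signal above pedestal + threshold.
event = [10, 11, 40, 9, 52, 12]
peds = [10, 10, 10, 10, 10, 10]
print(zero_suppress(event, peds))  # → [(2, 40), (4, 52)]
```

In hardware this cut runs per front-end card, which is why it multiplies the sustainable trigger rate: the bandwidth saved scales with the detector occupancy.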
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rossi, Adriana; et al.
Long-range beam-beam (LRBB) interactions can be a source of emittance growth and beam losses in the LHC during physics operation and will become even more relevant with the smaller β* and higher bunch intensities foreseen for the High Luminosity LHC upgrade (HL-LHC), in particular if operated without crab cavities. Both beam losses and emittance growth could be mitigated by compensating the non-linear LRBB kick with a correctly placed current-carrying wire. Such a compensation scheme is currently being studied in the LHC through a demonstration test using current-bearing wires embedded into collimator jaws, installed on either side of the high-luminosity interaction regions. For the HL-LHC two options are considered: a current-bearing wire as for the demonstrator, or electron lenses, as the ideal distance between the particle beam and the compensating current may be too small to allow the use of solid materials. This paper reports on the ongoing activities for both options, covering the progress of the wire-in-jaw collimators, the foreseen LRBB experiments at the LHC, and first considerations for the design of the electron lenses that would ultimately replace material wires for the HL-LHC.
The QuarkNet CMS masterclass: bringing the LHC to students
NASA Astrophysics Data System (ADS)
Cecire, Kenneth; McCauley, Thomas
2016-04-01
QuarkNet is an educational program which brings high school teachers and their students into the particle physics research community. The program supports research experiences and professional development workshops and provides inquiry-oriented investigations, some using real experimental data. The CMS experiment at the LHC has released several thousand proton-proton collision events for use in education and outreach. QuarkNet, in collaboration with CMS, has developed a physics masterclass and e-Lab based on these data. A masterclass is a day-long educational workshop in which high school students travel to nearby universities and research laboratories. There they learn from LHC physicists about the basics of particle physics and detectors. They then perform a simple measurement using LHC data and share their results with other students around the world via videoconference. Since 2011, thousands of students from over 25 countries have participated in the CMS masterclass as organized by QuarkNet and the International Particle Physics Outreach Group (IPPOG). We describe here the masterclass exercise: the physics, the online event display and database preparation behind it, the measurement the students undertake, their results and experiences, and future plans for the exercise.
Experiments and Cycling at the LHC Prototype Half-Cell
NASA Astrophysics Data System (ADS)
Saban, R.; Casas-Cubillos, J.; Coull, L.; Cruikshank, P.; Dahlerup-Petersen, K.; Hilbert, B.; Krainz, G.; Kos, N.; Lebrun, P.; Momal, F.; Misiaen, D.; Parma, V.; Poncet, A.; Riddone, G.; Rijllart, A.; Rodriguez-Mateos, F.; Schmidt, R.; Serio, L.; Wallen, E.; van Weelderen, R.; Williams, L. R.
1997-05-01
The first version of the LHC prototype half-cell has been in operation since February 1995. It consists of one quadrupole and three 10-m twin-aperture dipole magnets which operate at 1.8 K. This experimental set-up has been used to observe and study phenomena which appear when the systems are assembled in one unit and influence one another. The 18-month experimental program has validated the cryogenic system and yielded a number of results on cryogenic instrumentation, magnet protection and vacuum, in particular under non-standard operating conditions. The program was recently complemented by the cycling experiment: it consisted of powering the magnets following the ramp rates which the magnets will experience during an LHC injection. In order to simulate 10 years of routine LHC operation, more than 2000 one-hour cycles were performed, interleaved with provoked quenches. The objective of this experiment was to reveal possible design flaws in the components. The prototype half-cell performed to expectations, showing no sign of failure or fatigue of components for more than 2000 cycles, until one of the dipoles started exhibiting erratic quench behavior.
W.K.H. Panofsky Prize: The Long Journey to the Higgs Boson: CMS
NASA Astrophysics Data System (ADS)
Virdee, Tejinder
2017-01-01
There has been a rich harvest of physics from the experiments at the Large Hadron Collider (LHC). In July 2012, the ground-breaking discovery of the Higgs boson was made by the ATLAS and CMS experiments. This boson is a long-sought particle expected from the mechanism for spontaneous symmetry breaking in the electroweak sector, which provides an explanation of how elementary particles acquire mass. The discovery required experiments of unprecedented capability and complexity. This talk, complementing that of Peter Jenni, will trace the background to the search for the Higgs boson at the LHC, the conception, construction and operation of the CMS experiment, and its subsequent discovery of the boson. The Standard Model (SM) is considered to be a low-energy manifestation of a more complete theory; physics beyond the SM is therefore widely anticipated. Selected CMS results will be presented from the search for physics beyond the SM in Run-2 at 13 TeV at the LHC.
Processing of the WLCG monitoring data using NoSQL
NASA Astrophysics Data System (ADS)
Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.
2014-06-01
The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.
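The kind of workload described above boils down to aggregating very large streams of job-monitoring records, which NoSQL systems typically express as a map-reduce pass. The sketch below illustrates the pattern only: the record fields (`site`, `status`) and the "count finished jobs per site" metric are hypothetical stand-ins, not the Experiment Dashboard schema.

```python
from collections import Counter

def jobs_per_site(records):
    """Map-reduce in miniature: map each finished job to (site, 1),
    then reduce by summing counts per site."""
    counts = Counter()
    for rec in records:
        if rec.get("status") == "finished":
            counts[rec["site"]] += 1  # reduce step: per-site accumulation
    return dict(counts)

# Toy stream of monitoring records (field names are illustrative).
records = [
    {"site": "CERN", "status": "finished"},
    {"site": "FNAL", "status": "running"},
    {"site": "CERN", "status": "finished"},
]
print(jobs_per_site(records))  # → {'CERN': 2}
```

At petabyte scale the same map and reduce steps are distributed across commodity nodes, which is the property that makes NoSQL attractive for this monitoring flow.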
The MoEDAL Experiment at the LHC
NASA Astrophysics Data System (ADS)
Pinfold, James L.
2014-04-01
In 2010 the CERN (European Centre for Particle Physics Research) Research Board unanimously approved MoEDAL, the 7th international experiment at the Large Hadron Collider (LHC), which is designed to search for avatars of new physics signified by highly ionizing particles. The MoEDAL detector is like a giant camera ready to reveal "photographic" evidence for new physics and also to actually trap long-lived new particles for further study. The MoEDAL experiment will significantly expand the horizon for discovery at the LHC in a complementary way. A MoEDAL discovery would have revolutionary implications for our understanding of the microcosm, providing insights into such fundamental questions as: do magnetic monopoles exist; are there extra dimensions or new symmetries of nature; what is the mechanism for the generation of mass; what is the nature of dark matter; and how did the big bang unfurl at the earliest times?
Gaudi Evolution for Future Challenges
NASA Astrophysics Data System (ADS)
Clemencic, M.; Hegner, B.; Leggett, C.
2017-10-01
The LHCb Software Framework Gaudi was initially designed and developed almost twenty years ago, when computing was very different from today. It has also been used by a variety of other experiments, including ATLAS, Daya Bay, GLAST, HARP, LZ, and MINERVA. Although it has been actively developed throughout these years, stability and backward compatibility have been favoured, reducing the possibilities of adopting new techniques such as multithreaded processing. R&D efforts like GaudiHive have, however, shown its potential to cope with the new challenges. With the LHC's second Long Shutdown approaching, and to prepare for the computing challenges posed by the upgrades of the collider and the detectors, now is a perfect moment to review the design of Gaudi and plan future developments of the project. To do this, LHCb, ATLAS and the Future Circular Collider community have joined efforts to bring Gaudi forward and prepare it for the upcoming needs of the experiments. We present here how Gaudi will evolve in the next years, together with the long-term development plans.
BPM CALIBRATION INDEPENDENT LHC OPTICS CORRECTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
CALAGA,R.; TOMAS, R.; GIOVANNOZZI, M.
2007-06-25
The tight mechanical aperture of the LHC imposes severe constraints on both the beta and dispersion beating. Robust techniques to compensate these errors are critical for the operation of high-intensity beams in the LHC. We present simulations using realistic errors from magnet measurements and alignment tolerances in the presence of BPM noise. The correction studies reveal that the use of BPM-calibration-independent and model-independent observables is a key ingredient in accomplishing optics correction. Experiments at RHIC to verify the algorithms for optics correction are also presented.
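Optics corrections of this kind are commonly posed as a linear least-squares problem: given a measured beating vector b at the BPMs and a response matrix R of corrector strengths, find corrector settings x with Rx ≈ -b. The toy below is not the algorithm of this paper, just a minimal normal-equations sketch with an invented 3-BPM, 2-corrector system; real LHC corrections involve hundreds of BPMs and correctors and the calibration-independent observables discussed above.

```python
def solve2(a, b):
    """Solve a 2x2 linear system a·x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - a[0][1] * b[1]) / det,
            (a[0][0] * b[1] - b[0] * a[1][0]) / det]

def correct(R, beating):
    """Least-squares corrector settings x minimising |R·x + beating|.

    Builds the normal equations (RᵀR)·x = Rᵀ·(-beating) for a tall
    response matrix R with 2 correctors (columns)."""
    n = len(R)
    RtR = [[sum(R[k][i] * R[k][j] for k in range(n)) for j in range(2)]
           for i in range(2)]
    Rtb = [sum(R[k][i] * (-beating[k]) for k in range(n)) for i in range(2)]
    return solve2(RtR, Rtb)

# Invented example: 3 BPMs, 2 correctors, measured beta-beating b.
R = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
b = [0.2, -0.1, 0.1]
x = correct(R, b)  # corrector strengths that cancel the beating
print(x)  # → [-0.2, 0.1]
```

With these settings the residual R·x + b vanishes, i.e. the beating is fully corrected in this toy case; with BPM noise the fit is over-determined and only minimises the residual.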
Fikowski, Jill; Marchand, Kirsten; Palis, Heather; Oviedo-Joekes, Eugenia
2014-01-01
Uncovering patterns of drug use and treatment access is essential to improving treatment for opioid dependence. The life history calendar (LHC) could be a valuable instrument for capturing time-sensitive data on lifetime patterns of drug use and addiction treatment. This study describes the methodology applied when collecting data using the LHC in a sample of individuals with long-term opioid dependence and aims to identify specific factors that impact the feasibility of administering the LHC interview. In this study, the LHC allowed important events such as births, intimate relationships, housing, or incarcerations to become reference points for recalling details surrounding drug use and treatment access. The paper concludes that the administration of the LHC was a resource-intensive process and required special attention to interviewer training and experience with the study population. These factors should be considered and integrated into study plans by researchers using the LHC in addiction research.
NASA Astrophysics Data System (ADS)
LHCb Collaboration; Alves, A. Augusto, Jr.; Filho, L. M. Andrade; Barbosa, A. F.; Bediaga, I.; Cernicchiaro, G.; Guerrer, G.; Lima, H. P., Jr.; Machado, A. A.; Magnin, J.; Marujo, F.; de Miranda, J. M.; Reis, A.; Santos, A.; Toledo, A.; Akiba, K.; Amato, S.; de Paula, B.; de Paula, L.; da Silva, T.; Gandelman, M.; Lopes, J. H.; Maréchal, B.; Moraes, D.; Polycarpo, E.; Rodrigues, F.; Ballansat, J.; Bastian, Y.; Boget, D.; DeBonis, I.; Coco, V.; David, P. Y.; Decamp, D.; Delebecque, P.; Drancourt, C.; Dumont-Dayot, N.; Girard, C.; Lieunard, B.; Minard, M. N.; Pietrzyk, B.; Rambure, T.; Rospabe, G.; T'Jampens, S.; Ajaltouni, Z.; Bohner, G.; Bonnefoy, R.; Borras, D.; Carloganu, C.; Chanal, H.; Conte, E.; Cornat, R.; Crouau, M.; Delage, E.; Deschamps, O.; Henrard, P.; Jacquet, P.; Lacan, C.; Laubser, J.; Lecoq, J.; Lefèvre, R.; Magne, M.; Martemiyanov, M.; Mercier, M.-L.; Monteil, S.; Niess, V.; Perret, P.; Reinmuth, G.; Robert, A.; Suchorski, S.; Arnaud, K.; Aslanides, E.; Babel, J.; Benchouk, C.; Cachemiche, J.-P.; Cogan, J.; Derue, F.; Dinkespiler, B.; Duval, P.-Y.; Garonne, V.; Favard, S.; LeGac, R.; Leon, F.; Leroy, O.; Liotard, P.-L.; Marin, F.; Menouni, M.; Ollive, P.; Poss, S.; Roche, A.; Sapunov, M.; Tocco, L.; Viaud, B.; Tsaregorodtsev, A.; Amhis, Y.; Barrand, G.; Barsuk, S.; Beigbeder, C.; Beneyton, R.; Breton, D.; Callot, O.; Charlet, D.; D'Almagne, B.; Duarte, O.; Fulda-Quenzer, F.; Jacholkowska, A.; Jean-Marie, B.; Lefrancois, J.; Machefert, F.; Robbe, P.; Schune, M.-H.; Tocut, V.; Videau, I.; Benayoun, M.; David, P.; DelBuono, L.; Gilles, G.; Domke, M.; Futterschneider, H.; Ilgner, Ch; Kapusta, P.; Kolander, M.; Krause, R.; Lieng, M.; Nedos, M.; Rudloff, K.; Schleich, S.; Schwierz, R.; Spaan, B.; Wacker, K.; Warda, K.; Agari, M.; Bauer, C.; Baumeister, D.; Bulian, N.; Fuchs, H. P.; Fallot-Burghardt, W.; Glebe, T.; Hofmann, W.; Knöpfle, K. T.; Löchner, S.; Ludwig, A.; Maciuc, F.; Sanchez Nieto, F.; Schmelling, M.; Schwingenheuer, B.; Sexauer, E.; Smale, N. 
J.; Trunk, U.; Voss, H.; Albrecht, J.; Bachmann, S.; Blouw, J.; Deissenroth, M.; Deppe, H.; Dreis, H. B.; Eisele, F.; Haas, T.; Hansmann-Menzemer, S.; Hennenberger, S.; Knopf, J.; Moch, M.; Perieanu, A.; Rabenecker, S.; Rausch, A.; Rummel, C.; Rusnyak, R.; Schiller, M.; Stange, U.; Uwer, U.; Walter, M.; Ziegler, R.; Avoni, G.; Balbi, G.; Bonifazi, F.; Bortolotti, D.; Carbone, A.; D'Antone, I.; Galli, D.; Gregori, D.; Lax, I.; Marconi, U.; Peco, G.; Vagnoni, V.; Valenti, G.; Vecchi, S.; Bonivento, W.; Cardini, A.; Cadeddu, S.; DeLeo, V.; Deplano, C.; Furcas, S.; Lai, A.; Oldeman, R.; Raspino, D.; Saitta, B.; Serra, N.; Baldini, W.; Brusa, S.; Chiozzi, S.; Cotta Ramusino, A.; Evangelisti, F.; Franconieri, A.; Germani, S.; Gianoli, A.; Guoming, L.; Landi, L.; Malaguti, R.; Padoan, C.; Pennini, C.; Savriè, M.; Squerzanti, S.; Zhao, T.; Zhu, M.; Bizzeti, A.; Graziani, G.; Lenti, M.; Lenzi, M.; Maletta, F.; Pennazzi, S.; Passaleva, G.; Veltri, M.; Alfonsi, M.; Anelli, M.; Balla, A.; Battisti, A.; Bencivenni, G.; Campana, P.; Carletti, M.; Ciambrone, P.; Corradi, G.; Dané, E.; Di Virgilio, A.; DeSimone, P.; Felici, G.; Forti, C.; Gatta, M.; Lanfranchi, G.; Murtas, F.; Pistilli, M.; Poli Lener, M.; Rosellini, R.; Santoni, M.; Saputi, A.; Sarti, A.; Sciubba, A.; Zossi, A.; Ameri, M.; Cuneo, S.; Fontanelli, F.; Gracco, V.; Miní, G.; Parodi, M.; Petrolini, A.; Sannino, M.; Vinci, A.; Alemi, M.; Arnaboldi, C.; Bellunato, T.; Calvi, M.; Chignoli, F.; DeLucia, A.; Galotta, G.; Mazza, R.; Matteuzzi, C.; Musy, M.; Negri, P.; Perego, D.; Pessina, G.; Auriemma, G.; Bocci, V.; Buccheri, A.; Chiodi, G.; Di Marco, S.; Iacoangeli, F.; Martellotti, G.; Nobrega, R.; Pelosi, A.; Penso, G.; Pinci, D.; Rinaldi, W.; Rossi, A.; Santacesaria, R.; Satriano, C.; Carboni, G.; Iannilli, M.; Massafferri Rodrigues, A.; Messi, R.; Paoluzzi, G.; Sabatino, G.; Santovetti, E.; Satta, A.; Amoraal, J.; van Apeldoorn, G.; Arink, R.; van Bakel, N.; Band, H.; Bauer, Th; Berkien, A.; van Beuzekom, M.; Bos, E.; 
Bron, Ch; Ceelie, L.; Doets, M.; van der Eijk, R.; Fransen, J.-P.; de Groen, P.; Gromov, V.; Hierck, R.; Homma, J.; Hommels, B.; Hoogland, W.; Jans, E.; Jansen, F.; Jansen, L.; Jaspers, M.; Kaan, B.; Koene, B.; Koopstra, J.; Kroes, F.; Kraan, M.; Langedijk, J.; Merk, M.; Mos, S.; Munneke, B.; Palacios, J.; Papadelis, A.; Pellegrino, A.; van Petten, O.; du Pree, T.; Roeland, E.; Ruckstuhl, W.; Schimmel, A.; Schuijlenburg, H.; Sluijk, T.; Spelt, J.; Stolte, J.; Terrier, H.; Tuning, N.; Van Lysebetten, A.; Vankov, P.; Verkooijen, J.; Verlaat, B.; Vink, W.; de Vries, H.; Wiggers, L.; Ybeles Smit, G.; Zaitsev, N.; Zupan, M.; Zwart, A.; van den Brand, J.; Bulten, H. J.; de Jong, M.; Ketel, T.; Klous, S.; Kos, J.; M'charek, B.; Mul, F.; Raven, G.; Simioni, E.; Cheng, J.; Dai, G.; Deng, Z.; Gao, Y.; Gong, G.; Gong, H.; He, J.; Hou, L.; Li, J.; Qian, W.; Shao, B.; Xue, T.; Yang, Z.; Zeng, M.; Muryn, B.; Ciba, K.; Oblakowska-Mucha, A.; Blocki, J.; Galuszka, K.; Hajduk, L.; Michalowski, J.; Natkaniec, Z.; Polok, G.; Stodulski, M.; Witek, M.; Brzozowski, K.; Chlopik, A.; Gawor, P.; Guzik, Z.; Nawrot, A.; Srednicki, A.; Syryczynski, K.; Szczekowski, M.; Anghel, D. V.; Cimpean, A.; Coca, C.; Constantin, F.; Cristian, P.; Dumitru, D. D.; Dumitru, D. T.; Giolu, G.; Kusko, C.; Magureanu, C.; Mihon, Gh; Orlandea, M.; Pavel, C.; Petrescu, R.; Popescu, S.; Preda, T.; Rosca, A.; Rusu, V. L.; Stoica, R.; Stoica, S.; Tarta, P. D.; Filippov, S.; Gavrilov, Yu; Golyshkin, L.; Gushchin, E.; Karavichev, O.; Klubakov, V.; Kravchuk, L.; Kutuzov, V.; Laptev, S.; Popov, S.; Aref'ev, A.; Bobchenko, B.; Dolgoshein, V.; Egorychev, V.; Golutvin, A.; Gushchin, O.; Konoplyannikov, A.; Korolko, I.; Kvaratskheliya, T.; Machikhiliyan, I.; Malyshev, S.; Mayatskaya, E.; Prokudin, M.; Rusinov, D.; Rusinov, V.; Shatalov, P.; Shchutska, L.; Tarkovskiy, E.; Tayduganov, A.; Voronchev, K.; Zhiryakova, O.; Bobrov, A.; Bondar, A.; Eidelman, S.; Kozlinsky, A.; Shekhtman, L.; Beloous, K. S.; Dzhelyadin, R. 
I.; Gelitsky, Yu V.; Gouz, Yu P.; Kachnov, K. G.; Kobelev, A. S.; Matveev, V. D.; Novikov, V. P.; Obraztsov, V. F.; Ostankov, A. P.; Romanovsky, V. I.; Rykalin, V. I.; Soldatov, A. P.; Soldatov, M. M.; Tchernov, E. N.; Yushchenko, O. P.; Bochin, B.; Bondar, N.; Fedorov, O.; Golovtsov, V.; Guets, S.; Kashchuk, A.; Lazarev, V.; Maev, O.; Neustroev, P.; Sagidova, N.; Spiridenkov, E.; Volkov, S.; Vorobyev, An; Vorobyov, A.; Aguilo, E.; Bota, S.; Calvo, M.; Comerma, A.; Cano, X.; Dieguez, A.; Herms, A.; Lopez, E.; Luengo, S.; Garra, J.; Garrido, Ll; Gascon, D.; Gaspar de Valenzuela, A.; Gonzalez, C.; Graciani, R.; Grauges, E.; Perez Calero, A.; Picatoste, E.; Riera, J.; Rosello, M.; Ruiz, H.; Vilasis, X.; Xirgu, X.; Adeva, B.; Cid Vidal, X.; MartÉnez Santos, D.; Esperante Pereira, D.; Fungueiriño Pazos, J. L.; Gallas Torreira, A.; Gómez, C. Lois; Pazos Alvarez, A.; Pérez Trigo, E.; Pló Casasús, M.; Rodriguez Cobo, C.; Rodríguez Pérez, P.; Saborido, J. J.; Seco, M.; Vazquez Regueiro, P.; Bartalini, P.; Bay, A.; Bettler, M.-O.; Blanc, F.; Borel, J.; Carron, B.; Currat, C.; Conti, G.; Dormond, O.; Ermoline, Y.; Fauland, P.; Fernandez, L.; Frei, R.; Gagliardi, G.; Gueissaz, N.; Haefeli, G.; Hicheur, A.; Jacoby, C.; Jalocha, P.; Jimenez-Otero, S.; Hertig, J.-P.; Knecht, M.; Legger, F.; Locatelli, L.; Moser, J.-R.; Needham, M.; Nicolas, L.; Perrin-Giacomin, A.; Perroud, J.-P.; Potterat, C.; Ronga, F.; Schneider, O.; Schietinger, T.; Steele, D.; Studer, L.; Tareb, M.; Tran, M. T.; van Hunen, J.; Vervink, K.; Villa, S.; Zwahlen, N.; Bernet, R.; Büchler, A.; Gassner, J.; Lehner, F.; Sakhelashvili, T.; Salzmann, C.; Sievers, P.; Steiner, S.; Steinkamp, O.; Straumann, U.; van Tilburg, J.; Vollhardt, A.; Volyanskyy, D.; Ziegler, M.; Dovbnya, A.; Ranyuk, Yu; Shapoval, I.; Borisova, M.; Iakovenko, V.; Kyva, V.; Kovalchuk, O.; Okhrimenko, O.; Pugatch, V.; Pylypchenko, Yu; Adinolfi, M.; Brook, N. H.; Head, R. D.; Imong, J. P.; Lessnoff, K. A.; Metlica, F. C. D.; Muir, A. 
J.; Rademacker, J. H.; Solomin, A.; Szczypka, P. M.; Barham, C.; Buszello, C.; Dickens, J.; Gibson, V.; Haines, S.; Harrison, K.; Jones, C. R.; Katvars, S.; Kerzel, U.; Lazzeroni, C.; Li, Y. Y.; Rogers, G.; Storey, J.; Skottowe, H.; Wotton, S. A.; Adye, T. J.; Densham, C. J.; Easo, S.; Franek, B.; Loveridge, P.; Morrow, D.; Morris, J. V.; Nandakumar, R.; Nardulli, J.; Papanestis, A.; Patrick, G. N.; Ricciardi, S.; Woodward, M. L.; Zhang, Z.; Chamonal, R. J. U.; Clark, P. J.; Clarke, P.; Eisenhardt, S.; Gilardi, N.; Khan, A.; Kim, Y. M.; Lambert, R.; Lawrence, J.; Main, A.; McCarron, J.; Mclean, C.; Muheim, F.; Osorio-Oliveros, A. F.; Playfer, S.; Styles, N.; Xie, Y.; Bates, A.; Carson, L.; da Cunha Marinho, F.; Doherty, F.; Eklund, L.; Gersabeck, M.; Haddad, L.; Macgregor, A. A.; Melone, J.; McEwan, F.; Petrie, D. M.; Paterson, S. K.; Parkes, C.; Pickford, A.; Rakotomiaramanana, B.; Rodrigues, E.; Saavedra, A. F.; Soler, F. J. P.; Szumlak, T.; Viret, S.; Allebone, L.; Awunor, O.; Back, J.; Barber, G.; Barnes, C.; Cameron, B.; Clark, D.; Clark, I.; Dornan, P.; Duane, A.; Eames, C.; Egede, U.; Girone, M.; Greenwood, S.; Hallam, R.; Hare, R.; Howard, A.; Jolly, S.; Kasey, V.; Khaleeq, M.; Koppenburg, P.; Miller, D.; Plackett, R.; Price, D.; Reece, W.; Savage, P.; Savidge, T.; Simmons, B.; Vidal-Sitjes, G.; Websdale, D.; Affolder, A.; Anderson, J. S.; Biagi, S. F.; Bowcock, T. J. V.; Carroll, J. L.; Casse, G.; Cooke, P.; Donleavy, S.; Dwyer, L.; Hennessy, K.; Huse, T.; Hutchcroft, D.; Jones, D.; Lockwood, M.; McCubbin, M.; McNulty, R.; Muskett, D.; Noor, A.; Patel, G. D.; Rinnert, K.; Shears, T.; Smith, N. A.; Southern, G.; Stavitski, I.; Sutcliffe, P.; Tobin, M.; Traynor, S. M.; Turner, P.; Whitley, M.; Wormald, M.; Wright, V.; Bibby, J. H.; Brisbane, S.; Brock, M.; Charles, M.; Cioffi, C.; Gligorov, V. V.; Handford, T.; Harnew, N.; Harris, F.; John, M. J. J.; Jones, M.; Libby, J.; Martin, L.; McArthur, I. 
A.; Muresan, R.; Newby, C.; Ottewell, B.; Powell, A.; Rotolo, N.; Senanayake, R. S.; Somerville, L.; Soroko, A.; Spradlin, P.; Sullivan, P.; Stokes-Rees, I.; Topp-Jorgensen, S.; Xing, F.; Wilkinson, G.; Artuso, M.; Belyaev, I.; Blusk, S.; Lefeuvre, G.; Menaa, N.; Menaa-Sia, R.; Mountain, R.; Skwarnicki, T.; Stone, S.; Wang, J. C.; Abadie, L.; Aglieri-Rinella, G.; Albrecht, E.; André, J.; Anelli, G.; Arnaud, N.; Augustinus, A.; Bal, F.; Barandela Pazos, M. C.; Barczyk, A.; Bargiotti, M.; Batista Lopes, J.; Behrendt, O.; Berni, S.; Binko, P.; Bobillier, V.; Braem, A.; Brarda, L.; Buytaert, J.; Camilleri, L.; Cambpell, M.; Castellani, G.; Cataneo, F.; Cattaneo, M.; Chadaj, B.; Charpentier, P.; Cherukuwada, S.; Chesi, E.; Christiansen, J.; Chytracek, R.; Clemencic, M.; Closier, J.; Collins, P.; Colrain, P.; Cooke, O.; Corajod, B.; Corti, G.; D'Ambrosio, C.; Damodaran, B.; David, C.; de Capua, S.; Decreuse, G.; Degaudenzi, H.; Dijkstra, H.; Droulez, J.-P.; Duarte Ramos, D.; Dufey, J. P.; Dumps, R.; Eckstein, D.; Ferro-Luzzi, M.; Fiedler, F.; Filthaut, F.; Flegel, W.; Forty, R.; Fournier, C.; Frank, M.; Frei, C.; Gaidioz, B.; Gaspar, C.; Gayde, J.-C.; Gavillet, P.; Go, A.; Gracia Abril, G.; Graulich, J.-S.; Giudici, P.-A.; Guirao Elias, A.; Guglielmini, P.; Gys, T.; Hahn, F.; Haider, S.; Harvey, J.; Hay, B.; Hernando Morata, J.-A.; Herranz Alvarez, J.; van Herwijnen, E.; Hilke, H. 
J.; von Holtey, G.; Hulsbergen, W.; Jacobsson, R.; Jamet, O.; Joram, C.; Jost, B.; Kanaya, N.; Knaster Refolio, J.; Koestner, S.; Koratzinos, M.; Kristic, R.; Lacarrère, D.; Lasseur, C.; Lastovicka, T.; Laub, M.; Liko, D.; Lippmann, C.; Lindner, R.; Losasso, M.; Maier, A.; Mair, K.; Maley, P.; Mato Vila, P.; Moine, G.; Morant, J.; Moritz, M.; Moscicki, J.; Muecke, M.; Mueller, H.; Nakada, T.; Neufeld, N.; Ocariz, J.; Padilla Aranda, C.; Parzefall, U.; Patel, M.; Pepe-Altarelli, M.; Piedigrossi, D.; Pivk, M.; Pokorski, W.; Ponce, S.; Ranjard, F.; Riegler, W.; Renaud, J.; Roiser, S.; Rossi, A.; Roy, L.; Ruf, T.; Ruffinoni, D.; Saladino, S.; Sambade Varela, A.; Santinelli, R.; Schmelling, S.; Schmidt, B.; Schneider, T.; Schöning, A.; Schopper, A.; Seguinot, J.; Snoeys, W.; Smith, A.; Smith, A. C.; Somogyi, P.; Stoica, R.; Tejessy, W.; Teubert, F.; Thomas, E.; Toledo Alarcon, J.; Ullaland, O.; Valassi, A.; Vannerem, P.; Veness, R.; Wicht, P.; Wiedner, D.; Witzeling, W.; Wright, A.; Wyllie, K.; Ypsilantis, T.
2008-08-01
The LHCb experiment is dedicated to precision measurements of CP violation and rare decays of B hadrons at the Large Hadron Collider (LHC) at CERN (Geneva). The initial configuration and expected performance of the detector and associated systems, as established by test beam measurements and simulation studies, are described.
Explorer : des clés pour mieux comprendre la matière
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellis, Jonathan R.
2011-02-14
Will the LHC upset theories of the infinitely small? Physicists would like the accelerator to shake the standard model. This theory of elementary particles and forces leaves many gray areas. The LHC and its experiments have been designed to shed light on them.
Machine Protection with a 700 MJ Beam
NASA Astrophysics Data System (ADS)
Baer, T.; Schmidt, R.; Wenninger, J.; Wollmann, D.; Zerlauth, M.
After the high-luminosity upgrade of the LHC, the stored energy per proton beam will increase by a factor of two compared to the nominal LHC. Therefore, many damage studies need to be revisited to ensure safe machine operation with the new beam parameters. Furthermore, new accelerator equipment such as crab cavities might introduce new failure modes which are not sufficiently covered by the current machine protection system of the LHC. These failure modes have to be carefully studied and mitigated by new protection systems. Finally, the ambitious goals for integrated luminosity delivered to the experiments during the HL-LHC era require an increase in machine availability without jeopardizing equipment protection.
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2017-01-01
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event-generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event-generation application used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
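The serial-to-parallel adaptation described above follows a common pattern for Monte Carlo codes: give each worker an independent random seed, generate in parallel, and merge the per-worker outputs afterwards. The sketch below is not Alpgen's actual code; it is a minimal Python illustration of the seed-splitting idea, with all function names and the seed scheme invented for illustration.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def generate_events(seed, n_events):
    """Toy stand-in for a serial event generator: each worker gets its
    own seed so the per-worker streams are statistically independent."""
    rng = random.Random(seed)
    # An "event" here is just one sampled invariant-mass-like value.
    return [rng.gauss(91.0, 2.5) for _ in range(n_events)]

def generate_parallel(n_workers, events_per_worker, base_seed=12345):
    # Distinct, well-separated seeds per worker (hypothetical scheme).
    seeds = [base_seed + 1000 * i for i in range(n_workers)]
    # Threads suffice for this toy; a real CPU-bound generator would use
    # separate processes or, on a supercomputer, one rank per worker.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        chunks = pool.map(generate_events, seeds,
                          [events_per_worker] * n_workers)
        # Merging per-worker outputs is the post-processing step.
        return [event for chunk in chunks for event in chunk]
```

Because each worker's stream depends only on its own seed, the merged sample is reproducible and the worker count can scale with the available cores.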
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sirunyan, Albert M; et al.
The CMS muon detector system, muon reconstruction software, and high-level trigger underwent significant changes in 2013-2014 in preparation for running at higher LHC collision energy and instantaneous luminosity. The performance of the modified system is studied using proton-proton collision data at a center-of-mass energy √s = 13 TeV, collected at the LHC in 2015 and 2016. The measured performance parameters, including spatial resolution, efficiency, and timing, are found to meet all design specifications and are well reproduced by simulation. Despite the more challenging running conditions, the modified muon system is found to perform as well as, and in many aspects better than, it did previously.
Search for new neutral gauge bosons with the CMS Experiment at the LHC
NASA Astrophysics Data System (ADS)
Lanyov, Alexander; Shmatov, Sergei; Zhizhin, Ilia
2018-04-01
A search for narrow resonances in dimuon invariant-mass spectra has been performed using 13 fb⁻¹ of data obtained in 2016 from proton-proton collisions at √s = 13 TeV with the CMS experiment at the LHC. No evidence for physics beyond the standard model is found. Limits have been set on the production cross section and on the masses of hypothetical particles that could appear in new-physics scenarios.
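Cross-section limits in searches like the one above ultimately rest on counting statistics. As a hedged illustration (not the CMS analysis, which uses a full shape-based statistical treatment), a simple Poisson counting experiment yields a 95% CL upper limit on the signal yield by solving P(n ≤ n_obs | s + b) = 1 − CL for s:

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for N ~ Poisson(mu)."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def upper_limit(n_obs, background=0.0, cl=0.95, tol=1e-6):
    """Classical upper limit on the signal yield s: the smallest s with
    P(N <= n_obs | s + background) <= 1 - cl, found by bisection."""
    lo, hi = 0.0, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, mid + background) > 1.0 - cl:
            lo = mid   # s still too small: observation not yet excluded
        else:
            hi = mid
    return 0.5 * (lo + hi)

# With zero observed events and no background, the classic result is ~3 events.
print(round(upper_limit(0), 3))  # 2.996
```

Dividing the event-count limit by the product of integrated luminosity and selection efficiency converts it into a cross-section limit.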
Web Proxy Auto Discovery for the WLCG
NASA Astrophysics Data System (ADS)
Dykstra, D.; Blomer, J.; Blumenfeld, B.; De Salvo, A.; Dewhurst, A.; Verguilov, V.
2017-10-01
All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which they direct to the nearest publicly accessible web proxy servers. 
The responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.
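The WPAD scheme described above has the server answer each request with a PAC file, a small JavaScript function `FindProxyForURL` listing proxies in order of preference. A minimal Python sketch of the server-side step is shown below: order a site's registered squids by great-circle distance from the client's geolocated coordinates and emit the PAC text. All proxy host names and coordinates are invented for illustration, and the trailing DIRECT fallback is an assumed policy, not necessarily what the WLCG servers emit.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def make_pac(client_latlon, proxies):
    """proxies: list of (host, port, lat, lon); nearest comes first."""
    ordered = sorted(
        proxies,
        key=lambda p: haversine_km(client_latlon[0], client_latlon[1], p[2], p[3]),
    )
    chain = "; ".join(f"PROXY {host}:{port}" for host, port, _, _ in ordered)
    # DIRECT as a last resort (illustrative policy only).
    return ("function FindProxyForURL(url, host) {\n"
            f'    return "{chain}; DIRECT";\n'
            "}\n")

# Hypothetical squids near Geneva and Chicago, for a client near CERN.
pac = make_pac((46.2, 6.1), [("squid.example.org", 3128, 41.8, -87.6),
                             ("squid.cern.example", 3128, 46.3, 6.05)])
```

The generated text is what a Frontier or CVMFS client would consume to pick its proxy chain.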
Single Event Burnout in DC-DC Converters for the LHC Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Claudio H. Rivetta et al.
High voltage transistors in DC-DC converters are prone to catastrophic Single Event Burnout in the LHC radiation environment. This paper presents a systematic methodology to analyze single event effects sensitivity in converters and proposes solutions based on de-rating input voltage and output current or voltage.
Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation
NASA Astrophysics Data System (ADS)
Anisenkov, A. V.
2018-03-01
In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storing such very large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the Worldwide LHC Computing Grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and providing a high degree of accessibility (hundreds of petabytes). The paper considers the ATLAS Grid Information System (AGIS), used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of the computer centers, and to describe and store all the parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in unifying the description of the computing resources provided by grid sites, supercomputer centers, and cloud computing platforms into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and to integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).
Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi
NASA Astrophysics Data System (ADS)
Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad
2015-05-01
Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and the Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) as solutions for scientific computing applications. We report our experience on software porting, performance, and energy efficiency, and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
Development of a distributed control system for TOTEM experiment using ASIO Boost C++ libraries
NASA Astrophysics Data System (ADS)
Cafagna, F.; Mercadante, A.; Minafra, N.; Quinto, M.; Radicioni, E.
2014-06-01
The main goals of the TOTEM experiment at the LHC are the measurement of the elastic and total p-p cross sections and the study of diffractive dissociation processes. These scientific objectives are achieved using three tracking detectors symmetrically arranged around the interaction point, called IP5. The control system is based on C++ software that gives the user, by means of a graphical interface, direct access to the hardware and handles device configuration. A first release of the software was designed as a monolithic block, with all functionalities merged together. This approach soon showed its limits, mainly poor reusability and maintainability of the source code, evident not only during bug fixing but also when extending functionality or applying other modifications. This led to a radical redesign of the software, now based on dialogue (message passing) among separate building blocks. Thanks to the acquired extensibility, the software gained new features and is now a complete tool with which it is possible not only to configure different devices, interfacing with a large subset of buses such as I2C and VME, but also to perform data acquisition for both calibration and physics runs. Furthermore, the software lets the user set up a series of operations to be executed sequentially in order to handle complex procedures. To achieve maximum flexibility, the program units may be run either as a single process or as separate processes on different PCs which exchange messages over the network, thus allowing remote control of the system. Portability is ensured by the adoption of the ASIO (Asynchronous Input Output) library of Boost, a cross-platform suite of libraries that is a candidate to become part of the C++11 standard. We present the state of the art of this project and outline future perspectives. In particular, we describe the system architecture and the message-passing scheme.
We also report on the results obtained in a first complete test of the software both as a single process and on two PCs.
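The redesign described above replaces a monolith with building blocks that communicate only through messages, so each block can run in one process or be split across PCs. The TOTEM system itself is C++ on Boost.ASIO; the sketch below is a deliberately simplified Python analogue of the same message-passing idea, with every class, block, and message name invented for illustration.

```python
import queue

class Block:
    """A building block: owns an inbox and reacts to typed messages."""
    def __init__(self, name, bus):
        self.name, self.bus = name, bus
        self.inbox = queue.Queue()
        bus.register(self)

    def send(self, dest, msg_type, payload=None):
        self.bus.route(dest, (self.name, msg_type, payload))

    def run_once(self):
        sender, msg_type, payload = self.inbox.get()
        return self.handle(sender, msg_type, payload)

class Bus:
    """In-process router; over a network this would be a socket layer."""
    def __init__(self):
        self.blocks = {}
    def register(self, block):
        self.blocks[block.name] = block
    def route(self, dest, message):
        self.blocks[dest].inbox.put(message)

class ConfigBlock(Block):
    def handle(self, sender, msg_type, payload):
        if msg_type == "configure":
            # Pretend to program a (hypothetical) I2C device.
            self.send(sender, "configured", {"device": payload, "ok": True})

class RunControl(Block):
    def handle(self, sender, msg_type, payload):
        return payload  # surface the reply to the caller

bus = Bus()
cfg = ConfigBlock("config", bus)
rc = RunControl("runcontrol", bus)
rc.send("config", "configure", "i2c:0x40")
cfg.run_once()          # the config block processes the request
result = rc.run_once()  # run control receives the acknowledgement
```

Because blocks only touch their own inbox, swapping the in-process `Bus` for a networked transport leaves the blocks themselves unchanged, which is the reusability argument the abstract makes.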
Three Generations of FPGA DAQ Development for the ATLAS Pixel Detector
NASA Astrophysics Data System (ADS)
Mayer, Joseph A., II
The Large Hadron Collider (LHC) at the European Center for Nuclear Research (CERN) follows a schedule of long physics runs interleaved with periods of inactivity known as Long Shutdowns (LS). During these LS phases, both the LHC and the experiments around its ring undergo maintenance and upgrades. For the LHC, these upgrades improve its ability to produce data for physicists; the more data the LHC can produce, the more opportunities there are for rare events of interest to appear. The experiments upgrade so that they can record the data and ensure such events are not missed. Currently the LHC is in Run 2, having completed the first of three Long Shutdowns. This thesis focuses on the development of Field-Programmable Gate Array (FPGA)-based readout systems spanning three major tasks of the ATLAS Pixel data acquisition (DAQ) system. The evolution of the Pixel DAQ's Readout Driver (ROD) card is presented: starting from improvements made to the new Insertable B-Layer (IBL) ROD design, which was part of the LS1 upgrade, to upgrading the old RODs from Run 1 to help them run more efficiently in Run 2. It also includes the research and development of FPGA-based DAQs and integrated-circuit emulators for the ITk upgrade, which will occur during LS3 in 2025.
Commissioning the cryogenic system of the first LHC sector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Millet, F.; Claudet, S.; Ferlin, G.
2007-12-01
The LHC machine, composed of eight sectors with superconducting magnets and accelerating cavities, requires a complex cryogenic system providing high cooling capacities (18 kW equivalent at 4.5 K and 2.4 kW at 1.8 K per sector, produced in large cold boxes and distributed via 3.3-km cryogenic transfer lines). After individual reception tests of the cryogenic subsystems (cryogen storage, refrigerators, cryogenic transfer lines and distribution boxes) performed since 2000, the commissioning of the cryogenic system of the first LHC sector has been under way since November 2006. After a brief introduction to the LHC cryogenic system and its specificities, the commissioning is reported, detailing the preparation phase (pressure and leak tests, circuit conditioning and flushing), the cool-down sequences including the handling of cryogenic fluids, the magnet powering phase, and finally the warm-up. Preliminary conclusions on the commissioning of the first LHC sector are drawn with a review of the critical points already solved or still pending. The last part of the paper reports on the first operational experience of the LHC cryogenic system in the perspective of the commissioning of the remaining LHC sectors and the beam injection test.
NASA Astrophysics Data System (ADS)
Hennessy, Karol; LHCb VELO Upgrade Collaboration
2017-02-01
The upgrade of the LHCb experiment, scheduled for LHC Run III starting in 2021, will transform the experiment into a trigger-less system reading out the full detector at a 40 MHz event rate. All data reduction algorithms will be executed in a high-level software farm, enabling the detector to run at luminosities of 2×10³³ cm⁻² s⁻¹. The Vertex Locator (VELO) is the silicon vertex detector surrounding the interaction region. The current detector will be replaced with a hybrid pixel system equipped with electronics capable of reading out at 40 MHz. The upgraded VELO will provide fast pattern recognition and track reconstruction to the software trigger. The silicon pixel sensors have 55×55 μm² pitch and are read out by the VeloPix ASIC, from the Timepix/Medipix family. The hottest region will have pixel hit rates of 900 Mhits/s, yielding a total data rate of more than 3 Tbit/s for the upgraded VELO. The detector modules are located in their own vacuum, separated from the beam vacuum by a thin custom-made foil. The foil will be manufactured by milling and possibly thinned further by chemical etching. The material budget will be minimised by the use of evaporative CO2 coolant circulating in microchannels within 400 μm thick silicon substrates. The current status of the VELO upgrade is described and the latest results from the operation of irradiated sensor assemblies are presented.
2017 Topical Workshop on Electronics for Particle Physics
NASA Astrophysics Data System (ADS)
2017-09-01
The workshop will cover all aspects of electronics for particle physics experiments, and accelerator instrumentation of general interest to users. LHC experiments (and their operational experience) will remain a focus of the meeting but a strong emphasis on R&D for future experimentation will be maintained, such as SLHC, CLIC, ILC, neutrino facilities as well as other particle and astroparticle physics experiments. The purpose of the workshop is: To present results and original concepts for electronic research and development relevant to experiments as well as accelerator and beam instrumentation at future facilities; To review the status of electronics for the LHC experiments; To identify and encourage common efforts for the development of electronics; To promote information exchange and collaboration in the relevant engineering and physics communities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goh, Hock-Seng; /UC, Berkeley /LBL, Berkeley; Ibe, Masahiro
2009-06-19
Supersymmetric models with spontaneously broken approximate R-symmetry contain a light spin-0 particle, the R-axion. The properties of this particle can be a powerful probe of the structure of the new physics. In this paper, we discuss the possibilities for R-axion detection at the LHC experiments. It is challenging to observe this light particle in the LHC environment. However, for typical parameter values, in which the mass of the R-axion is a few hundred MeV, we show that these particles can be detected by searching for displaced vertices from R-axion decays.
Status and Trends in Networking at LHC Tier1 Facilities
NASA Astrophysics Data System (ADS)
Bobyshev, A.; DeMar, P.; Grigaliunas, V.; Bigrow, J.; Hoeft, B.; Reymund, A.
2012-12-01
The LHC is entering its fourth year of production operation. Most Tier1 facilities have been in operation for almost a decade, when development and ramp-up efforts are included. LHC's distributed computing model is based on the availability of high-capacity, high-performance network facilities for both WAN and LAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to be on the leading edge of data center networking technology. In this paper, we analyze past and current developments in Tier1 LAN networking, as well as extrapolating where we anticipate networking technology is heading. Our analysis includes examination of the following areas:
• Evolution of Tier1 centers to their current state
• Evolving data center networking models and how they apply to Tier1 centers
• Impact of emerging network technologies (e.g. 10GE-connected hosts, 40GE/100GE links, IPv6) on Tier1 centers
• Trends in WAN data movement and emergence of software-defined WAN network capabilities
• Network virtualization
Readiness of the ATLAS liquid argon calorimeter for LHC collisions
Aad, G.; Abbott, B.; Abdallah, J.; ...
2010-08-20
The ATLAS liquid argon calorimeter has been operating continuously since August 2006. At that time, only part of the calorimeter was read out, but since the beginning of 2008, all calorimeter cells have been connected to the ATLAS readout system in preparation for LHC collisions. This paper gives an overview of the liquid argon calorimeter performance measured in situ with random triggers, calibration data, cosmic muons, and LHC beam splash events. Results on the detector operation, timing performance, electronics noise, and gain stability are presented. High energy deposits from radiative cosmic muons and beam splash events allow a check of the intrinsic constant term of the energy resolution. The uniformity of the electromagnetic barrel calorimeter response along η (averaged over Φ) is measured at the percent level using minimum ionizing cosmic muons. Finally, studies of electromagnetic showers from radiative muons have been used to cross-check the Monte Carlo simulation. The performance results obtained using the ATLAS readout, data acquisition, and reconstruction software indicate that the liquid argon calorimeter is well prepared for collisions at the dawn of the LHC era.
Production and installation of the LHC low-beta triplets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feher, S.; Bossert, R.; DiMarco, J.
2005-09-01
The LHC performance depends critically on the low-β triplets, located on either side of the four interaction points. Each triplet consists of four superconducting quadrupole magnets, which must operate reliably at gradients up to 215 T/m, sustain extremely high heat loads, and have excellent field quality. A collaboration of CERN, Fermilab and KEK was formed in 1996 to design and build the triplet systems, and after nine years of joint effort the production was completed in 2005. We retrace the main events of the project and present the design features and performance of the low-β quadrupoles, built by KEK and Fermilab, as well as of other vital elements of the triplet. The tunnel installation of the first triplet and plans for commissioning in the LHC are also presented. Apart from the excellent technical results, the construction of the LHC low-β triplets has been a highly enriching experience, combining harmoniously the different competences and approaches to engineering in a style reminiscent of high energy physics experiment collaborations, and rarely before achieved in the construction of an accelerator.
The HEP.TrkX Project: deep neural networks for HL-LHC online and offline tracking
Farrell, Steven; Anderson, Dustin; Calafiura, Paolo; ...
2017-08-08
Particle track reconstruction in dense environments such as the detectors of the High Luminosity Large Hadron Collider (HL-LHC) is a challenging pattern recognition problem. Traditional tracking algorithms such as the combinatorial Kalman Filter have been used with great success in LHC experiments for years. However, these state-of-the-art techniques are inherently sequential and scale poorly with the expected increases in detector occupancy in the HL-LHC conditions. The HEP.TrkX project is a pilot project with the aim to identify and develop cross-experiment solutions based on machine learning algorithms for track reconstruction. Machine learning algorithms bring a lot of potential to this problem thanks to their capability to model complex non-linear data dependencies, to learn effective representations of high-dimensional data through training, and to parallelize easily on high-throughput architectures such as GPUs. This contribution will describe our initial explorations into this relatively unexplored idea space. Furthermore, we will discuss the use of recurrent (LSTM) and convolutional neural networks to find and fit tracks in toy detector data.
The HEP.TrkX Project: deep neural networks for HL-LHC online and offline tracking
NASA Astrophysics Data System (ADS)
Farrell, Steven; Anderson, Dustin; Calafiura, Paolo; Cerati, Giuseppe; Gray, Lindsey; Kowalkowski, Jim; Mudigonda, Mayur; Prabhat; Spentzouris, Panagiotis; Spiropoulou, Maria; Tsaris, Aristeidis; Vlimant, Jean-Roch; Zheng, Stephan
2017-08-01
Particle track reconstruction in dense environments such as the detectors of the High Luminosity Large Hadron Collider (HL-LHC) is a challenging pattern recognition problem. Traditional tracking algorithms such as the combinatorial Kalman Filter have been used with great success in LHC experiments for years. However, these state-of-the-art techniques are inherently sequential and scale poorly with the expected increases in detector occupancy in the HL-LHC conditions. The HEP.TrkX project is a pilot project with the aim to identify and develop cross-experiment solutions based on machine learning algorithms for track reconstruction. Machine learning algorithms bring a lot of potential to this problem thanks to their capability to model complex non-linear data dependencies, to learn effective representations of high-dimensional data through training, and to parallelize easily on high-throughput architectures such as GPUs. This contribution will describe our initial explorations into this relatively unexplored idea space. We will discuss the use of recurrent (LSTM) and convolutional neural networks to find and fit tracks in toy detector data.
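The toy-detector studies mentioned above frame track finding as a sequence problem: a recurrent network reads the detector layer by layer and predicts where the next hit should fall, while a convolutional network treats the hit map as an image. As an illustration only (the function, grid dimensions and parameter names below are our own invention, not the HEP.TrkX code), toy data of this kind can be generated as:

```python
import numpy as np

def make_toy_event(n_tracks, n_layers=10, n_pixels=32, seed=0):
    """Generate straight-line toy tracks on a (layers x pixels) grid,
    mimicking the kind of toy detector data used for LSTM/CNN studies."""
    rng = np.random.default_rng(seed)
    event = np.zeros((n_layers, n_pixels), dtype=np.int8)
    targets = []  # per-track list of hit pixel indices, one per layer
    for _ in range(n_tracks):
        x0 = rng.uniform(0, n_pixels - 1)   # entry position on layer 0
        slope = rng.uniform(-1.0, 1.0)      # drift in pixels per layer
        hits = []
        for layer in range(n_layers):
            x = int(round(x0 + slope * layer))
            if 0 <= x < n_pixels:
                event[layer, x] = 1         # binary hit map for a CNN
                hits.append(x)              # hit sequence for an LSTM
        targets.append(hits)
    return event, targets
```

An LSTM track finder would consume `event` one row per time step and be trained to reproduce the sequences in `targets`; a CNN would take `event` as a single-channel image.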
Kikoła, Daniel; Echevarria, Miguel García; Hadjidakis, Cynthia; ...
2017-05-17
Measurement of the Single Transverse-Spin Asymmetry $A_N$ for various quarkonia states and Drell-Yan lepton pairs can shed light on the orbital angular momentum of quarks and gluons, a fundamental ingredient of the spin puzzle of the proton. The AFTER@LHC experiment combines a unique kinematic coverage and the large luminosities of the Large Hadron Collider beams to deliver precise measurements, complementary to the knowledge provided by collider experiments such as RHIC. Here, we report on sensitivity studies for $J/\psi$, $\Upsilon$ and Drell-Yan $A_N$ done using the performance of LHCb-like and ALICE-like detectors, combined with a polarised hydrogen and $^3$He target. In particular, such research will provide new insights into transverse-momentum-dependent parton distribution functions for quarks and gluons and into twist-3 collinear matrix elements in a proton and a neutron.
Test of Relativistic Gravity for Propulsion at the Large Hadron Collider
NASA Astrophysics Data System (ADS)
Felber, Franklin
2010-01-01
A design is presented of a laboratory experiment that could test the suitability of relativistic gravity for propulsion of spacecraft to relativistic speeds. An exact time-dependent solution of Einstein's gravitational field equation confirms that even the weak field of a mass moving at relativistic speeds could serve as a driver to accelerate a much lighter payload from rest to a good fraction of the speed of light. The time-dependent field of ultrarelativistic particles in a collider ring is calculated. An experiment is proposed as the first test of the predictions of general relativity in the ultrarelativistic limit by measuring the repulsive gravitational field of bunches of protons in the Large Hadron Collider (LHC). The estimated 'antigravity beam' signal strength at a resonant detector of each proton bunch is 3 nm/s² for 2 ns during each revolution of the LHC. This experiment can be performed off-line, without interfering with the normal operations of the LHC.
NASA Astrophysics Data System (ADS)
Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.
2016-10-01
The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We describe a project aimed at the integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015.
We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
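The light-weight MPI wrapper idea described above (one serial payload per rank, so that a multi-core worker node runs many independent single-threaded jobs in parallel) can be sketched without any PanDA internals. The function names below are our own; the environment variables are those commonly set by MPI launchers and batch systems (MPICH/Hydra, Open MPI, Slurm):

```python
import os
import subprocess

def rank_from_env():
    """Find this process's rank: MPI launchers and batch systems export
    it under implementation-specific environment variable names."""
    for var in ("PMI_RANK", "OMPI_COMM_WORLD_RANK", "SLURM_PROCID"):
        if var in os.environ:
            return int(os.environ[var])
    return 0  # not under a launcher: behave like rank 0

def run_payload(payloads):
    """Each rank picks one single-threaded command and runs it; launched
    with N ranks per node, N serial jobs execute in parallel."""
    cmd = payloads[rank_from_env() % len(payloads)]
    return subprocess.run(cmd, capture_output=True, text=True).stdout
```

Launched as, say, `mpirun -n 16 python wrapper.py`, sixteen copies of this script would each execute one serial payload, which is the pattern the pilot uses to fill a leadership-class node with single-threaded event generation work.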
Achieving production-level use of HEP software at the Argonne Leadership Computing Facility
NASA Astrophysics Data System (ADS)
Uram, T. D.; Childers, J. T.; LeCompte, T. J.; Papka, M. E.; Benjamin, D.
2015-12-01
HEP's demand for computing resources has grown beyond the capacity of the Grid, and these demands will accelerate with the higher energy and luminosity planned for Run II. Mira, the ten-petaFLOPS supercomputer at the Argonne Leadership Computing Facility, is a potentially significant compute resource for HEP research. Through an award of fifty million hours on Mira, we have delivered millions of events to LHC experiments by establishing the means of marshaling jobs through serial stages on local clusters, and parallel stages on Mira. We are running several HEP applications, including Alpgen, Pythia, Sherpa, and Geant4. Event generators, such as Sherpa, typically have a split workload: a small scale integration phase, and a second, more scalable, event-generation phase. To accommodate this workload on Mira we have developed two Python-based Django applications, Balsam and ARGO. Balsam is a generalized scheduler interface which uses a plugin system for interacting with scheduler software such as HTCondor, Cobalt, and TORQUE. ARGO is a workflow manager that submits jobs to instances of Balsam. Through these mechanisms, the serial and parallel tasks within jobs are executed on the appropriate resources. This approach and its integration with the PanDA production system will be discussed.
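The plugin pattern described for Balsam (one adapter per batch scheduler behind a common interface, with an ARGO-style workflow layer submitting through whichever adapter fits the target resource) can be sketched as follows. The class and method names are illustrative only, not Balsam's actual API:

```python
from abc import ABC, abstractmethod

class SchedulerPlugin(ABC):
    """Minimal sketch of a scheduler plugin interface: one subclass per
    batch system (HTCondor, Cobalt, TORQUE, ...)."""
    @abstractmethod
    def submit(self, script: str) -> str: ...
    @abstractmethod
    def status(self, job_id: str) -> str: ...

class FakeCobalt(SchedulerPlugin):
    """Stand-in backend used purely for illustration; a real plugin would
    shell out to the batch system's submit and query commands."""
    def __init__(self):
        self.jobs = {}
    def submit(self, script):
        job_id = f"job{len(self.jobs)}"
        self.jobs[job_id] = "queued"
        return job_id
    def status(self, job_id):
        return self.jobs[job_id]

def run_workflow(plugin, scripts):
    """Workflow step: submit each job through the configured plugin."""
    return [plugin.submit(s) for s in scripts]
```

The workflow layer never needs to know which batch system sits underneath, which is what lets the same job graph span a local HTCondor cluster and Mira's Cobalt queues.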
Managing a tier-2 computer centre with a private cloud infrastructure
NASA Astrophysics Data System (ADS)
Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara
2014-06-01
In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.
NASA Astrophysics Data System (ADS)
Myrcha, Julian; Trzciński, Tomasz; Rokita, Przemysław
2017-08-01
Analyzing massive amounts of data gathered during many high energy physics experiments, including but not limited to the LHC ALICE detector experiment, requires efficient and intuitive methods of visualisation. One of the possible approaches to that problem is stereoscopic 3D data visualisation. In this paper, we propose several methods that provide high quality data visualisation and we explain how those methods can be applied in virtual reality headsets. The outcome of this work is easily applicable to many real-life applications needed in high energy physics and can be seen as a first step towards using fully immersive virtual reality technologies within the frames of the ALICE experiment.
Searching for long-lived particles: A compact detector for exotics at LHCb
NASA Astrophysics Data System (ADS)
Gligorov, Vladimir V.; Knapen, Simon; Papucci, Michele; Robinson, Dean J.
2018-01-01
We advocate for the construction of a new detector element at the LHCb experiment, designed to search for displaced decays of beyond Standard Model long-lived particles, taking advantage of a large shielded space in the LHCb cavern that is expected to soon become available. We discuss the general features and putative capabilities of such an experiment, as well as its various advantages and complementarities with respect to the existing LHC experiments and proposals such as SHiP and MATHUSLA. For two well-motivated beyond Standard Model benchmark scenarios—Higgs decay to dark photons and B meson decays via a Higgs mixing portal—the reach either complements or exceeds that predicted for other LHC experiments.
PanDA: Exascale Federation of Resources for the ATLAS Experiment at the LHC
NASA Astrophysics Data System (ADS)
Barreiro Megino, Fernando; Caballero Bejar, Jose; De, Kaushik; Hover, John; Klimentov, Alexei; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Petrosyan, Artem; Wenaus, Torre
2016-02-01
After a scheduled maintenance and upgrade period, the world's largest and most powerful machine, the Large Hadron Collider (LHC), is about to enter its second run at unprecedented energies. In order to exploit the scientific potential of the machine, the experiments at the LHC face computational challenges with enormous data volumes that need to be analysed by thousands of physics users and compared to simulated data. Given diverse funding constraints, the computational resources for the LHC have been deployed in a worldwide mesh of data centres, connected to each other through Grid technologies. The PanDA (Production and Distributed Analysis) system was developed in 2005 for the ATLAS experiment on top of this heterogeneous infrastructure to seamlessly integrate the computational resources and give the users the feeling of a single system. Since its origins, PanDA has evolved together with upcoming computing paradigms in and outside HEP, such as changes in the networking model, Cloud Computing and HPC. It is currently running steadily on up to 200 thousand simultaneous cores (limited by the available resources for ATLAS), with up to two million aggregated jobs per day, and processes over an exabyte of data per year. The success of PanDA in ATLAS is triggering the widespread adoption and testing by other experiments. In this contribution we will give an overview of the PanDA components and focus on the new features and upcoming challenges that are relevant to the next decade of distributed computing workload management using PanDA.
Muons in the CMS High Level Trigger System
NASA Astrophysics Data System (ADS)
Verwilligen, Piet; CMS Collaboration
2016-04-01
The trigger systems of LHC detectors play a fundamental role in defining the physics capabilities of the experiments. A reduction of several orders of magnitude in the rate of collected events, with respect to the proton-proton bunch crossing rate generated by the LHC, is mandatory to cope with the limits imposed by the readout and storage system. An accurate and efficient online selection mechanism is thus required to fulfil this task while keeping the acceptance for physics signals as high as possible. The CMS experiment operates using a two-level trigger system. First, a Level-1 Trigger (L1T) system, implemented using custom-designed electronics, reduces the event rate to a limit compatible with the CMS Data Acquisition (DAQ) capabilities. A High Level Trigger System (HLT) follows, aimed at further reducing the rate of collected events finally stored for analysis purposes. The latter consists of a streamlined version of the CMS offline reconstruction software and operates on a computer farm. It runs algorithms optimized to make a trade-off between computational complexity, rate reduction and high selection efficiency. With the computing power available in 2012 the maximum reconstruction time at HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. An efficient selection of muons at HLT, as well as an accurate measurement of their properties, such as transverse momentum and isolation, is fundamental for the CMS physics programme. The performance of the muon HLT for single and double muon triggers achieved in Run I will be presented. Results from new developments, aimed at improving the performance of the algorithms for the harsher scenarios of collisions per event (pile-up) and luminosity expected for Run II, will also be discussed.
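The numbers quoted above allow a back-of-envelope sizing of the HLT farm: at the nominal 100 kHz L1T output rate and about 200 ms of reconstruction per event, roughly 20,000 single-threaded HLT processes must be busy at any moment just to keep up.

```python
# Back-of-envelope HLT farm sizing from the figures quoted in the text.
l1_output_rate_hz = 100_000    # nominal L1T accept rate
hlt_time_per_event_s = 0.200   # ~200 ms per event (2012 computing power)

# Little's law: events in flight = arrival rate * service time, i.e.
# one core per in-flight event for a single-threaded HLT process.
cores_needed = round(l1_output_rate_hz * hlt_time_per_event_s)
print(cores_needed)  # 20000
```

This is why the trade-off between reconstruction quality and per-event CPU time mentioned in the abstract translates directly into farm size and cost.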
Probing high scale physics with top quarks at the Large Hadron Collider
NASA Astrophysics Data System (ADS)
Dong, Zhe
With the Large Hadron Collider (LHC) running at the TeV scale, we expect to find deviations from the Standard Model in the experiments, and to understand their origin. Being the heaviest elementary particle observed so far, with a mass at the electroweak scale, the top quark is a powerful probe for new phenomena of high scale physics at the LHC. Therefore, we concentrate on studying high scale physics phenomena with top quark pair production or decay at the LHC. In this thesis, we study the discovery potential of string resonances decaying to the t/tbar final state, and examine the possibility of observing baryon-number-violating top-quark production or decay, at the LHC. We point out that string resonances for a string scale below 4 TeV can be detected via the t/tbar channel, by reconstructing the center-of-mass frame kinematics of the resonances from either the t/tbar semi-leptonic decay or recent techniques for identifying highly boosted tops. For the study of baryon-number-violating processes, using a model-independent effective approach and focusing on operators with minimal mass dimension, we find that the corresponding effective coefficients could be directly probed at the LHC already with an integrated luminosity of 1 inverse femtobarn at 7 TeV, and further constrained with 30 (100) inverse femtobarns at 7 (14) TeV.
ALICE and "The state of matter" at LHC
Schukraft, Juergen
2018-04-26
Assembly and installation of ALICE, the LHC heavy ion experiment dedicated to the study of matter at extreme temperature and pressure, is nearing completion and the commissioning of the detector is well under way. A good time to look back, to the making of ALICE, and to look forward, to the first physics with proton and heavy ion beams.
Automatic Tools for Enhancing the Collaborative Experience in Large Projects
NASA Astrophysics Data System (ADS)
Bourilkov, D.; Rodriquez, J. L.
2014-06-01
With the explosion of big data in many fields, the efficient management of knowledge about all aspects of the data analysis gains in importance. A key feature of collaboration in large scale projects is keeping a log of what is being done and how: for private use, reuse, and for sharing selected parts with collaborators and peers, often distributed geographically on an increasingly global scale. Even better if the log is automatically created on the fly while the scientist or software developer is working in a habitual way, without the need for extra efforts. This saves time and enables a team to do more with the same resources. The CODESH (COllaborative DEvelopment SHell) and CAVES (Collaborative Analysis Versioning Environment System) projects address this problem in a novel way. They build on the concepts of virtual states and transitions to enhance the collaborative experience by providing automatic persistent virtual logbooks. CAVES is designed for sessions of distributed data analysis using the popular ROOT framework, while CODESH generalizes the approach for any type of work on the command line in typical UNIX shells like bash or tcsh. Repositories of sessions can be configured dynamically to record and make available the knowledge accumulated in the course of a scientific or software endeavor. Access can be controlled to define logbooks of private sessions or sessions shared within or between collaborating groups. A typical use case is building working scalable systems for analysis of Petascale volumes of data as encountered in the LHC experiments. Our approach is general enough to find applications in many fields.
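The virtual logbook idea (commands recorded automatically, each entry chained to the session state that preceded it, so that sessions can be tagged, shared and replayed) can be sketched in a few lines. This is our own illustration of the concept, not CODESH/CAVES code:

```python
import hashlib
import json

class VirtualLogbook:
    """Sketch of an automatic session log in the spirit of CODESH/CAVES:
    every command is recorded together with a short hash of the session
    history up to that point, giving each entry a reproducible context."""
    def __init__(self):
        self.entries = []

    def record(self, command, output):
        # Hash the log-so-far so each entry names the state it extends.
        state = hashlib.sha1(
            json.dumps(self.entries, sort_keys=True).encode()
        ).hexdigest()[:8]
        self.entries.append(
            {"cmd": command, "out": output, "prev_state": state})

    def export(self):
        """Serialize the session for sharing via a repository."""
        return json.dumps(self.entries, indent=2)
```

A shell wrapper would call `record()` transparently after every command, so the log accumulates without any extra effort from the user, which is the core selling point of the approach.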
None
2018-06-26
The LHC official inauguration will take place from 14h00 to 18h00, at Point 18 of the Laboratory, in the presence of the highest representatives from the member states of CERN and representatives from the other communities and authorities of the countries participating in the LHC adventure. 300 members of the international press are also expected, giving a total of 1500 guests. The ceremony will be broadcast live in the Laboratory's main conference rooms, via webcast and satellite TV (Eurovision). The LHC-fest will follow in the evening in the same place. Its purpose is to "thank all the actors (physicists, engineers, technicians and administrators) who took part in the design, construction, implementation and commissioning of this great enterprise." For obvious logistical reasons, it has been necessary to limit the number of invited guests to 3000, to include all members of personnel (blue badge holders), representatives of the LHC experiments and other users, as well as representatives from retired staff and industrial support.
Seesaw at Lhc Through Left-Right Symmetry
NASA Astrophysics Data System (ADS)
Senjanović, Goran
I argue that the LHC may shed light on the nature of neutrino mass through the probe of the seesaw mechanism. The smoking gun signature is lepton number violation through the production of same sign lepton pairs, a collider analogy of the neutrinoless double beta decay. I discuss this in the context of left-right symmetric theories, which led originally to neutrino mass and the seesaw mechanism. A WR gauge boson with a mass in the few TeV region could easily dominate neutrinoless double beta decay, and its discovery at the LHC would have spectacular signatures of parity restoration and lepton number violation. Moreover, the LHC can measure the masses of the right-handed neutrinos and the right-handed leptonic mixing matrix, which could in turn be used to predict the rates for neutrinoless double beta decay and lepton flavor violating processes. The LR scale at the LHC energies offers great hope of observing these low energy processes in the present and upcoming experiments.
NASA Astrophysics Data System (ADS)
Adinolfi, M.; Archilli, F.; Baldini, W.; Baranov, A.; Derkach, D.; Panin, A.; Pearce, A.; Ustyuzhanin, A.
2017-10-01
Data quality monitoring (DQM) is crucial in a high-energy physics experiment to ensure the correct functioning of the experimental apparatus during data taking. DQM at LHCb is carried out in two phases. The first one is performed on-site, in real time, using unprocessed data directly from the LHCb detector, while the second, also performed on-site, requires the reconstruction of the data selected by the LHCb trigger system and occurs later. For the LHC Run II data taking the LHCb collaboration has re-engineered the DQM protocols and the DQM graphical interface, moving the latter to a web-based monitoring system, called Monet, thus allowing researchers to perform the second phase off-site. In order to support the operator's task, Monet is also equipped with an automated, fully configurable alarm system, thus allowing its use not only for DQM purposes, but also to track and assess the quality of LHCb software and simulation over time.
Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi
Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...
2015-05-22
Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. As a result, we report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
Optimizing CMS build infrastructure via Apache Mesos
NASA Astrophysics Data System (ADS)
Abdurachmanov, David; Degano, Alessandro; Elmer, Peter; Eulisse, Giulio; Mendez, David; Muzaffar, Shahzad
2015-12-01
The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. We present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos enabled cluster and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.
Characterising dark matter searches at colliders and direct detection experiments: Vector mediators
Buchmueller, Oliver; Dolan, Matthew J.; Malik, Sarah A.; ...
2015-01-09
We introduce a Minimal Simplified Dark Matter (MSDM) framework to quantitatively characterise dark matter (DM) searches at the LHC. We study two MSDM models where the DM is a Dirac fermion which interacts with a vector and axial-vector mediator. The models are characterised by four parameters: m_DM, M_med, g_DM and g_q, the DM and mediator masses, and the mediator couplings to DM and quarks respectively. The MSDM models accurately capture the full event kinematics, and the dependence on all masses and couplings can be systematically studied. The interpretation of mono-jet searches in this framework can be used to establish an equal-footing comparison with direct detection experiments. For theories with a vector mediator, LHC mono-jet searches possess better sensitivity than direct detection searches for light DM masses (≲5 GeV). For axial-vector mediators, LHC and direct detection searches generally probe orthogonal directions in the parameter space. We explore the projected limits of these searches from the ultimate reach of the LHC and multi-ton xenon direct detection experiments, and find that the complementarity of the searches remains. In conclusion, we provide a comparison of limits in the MSDM and effective field theory (EFT) frameworks to highlight the deficiencies of the EFT framework, particularly when exploring the complementarity of mono-jet and direct detection searches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De, K; Jha, S; Klimentov, A
2016-01-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes.
This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for the ATLAS experiment since September 2015. We will present our current accomplishments with running the PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
A review of advances in pixel detectors for experiments with high rate and radiation
NASA Astrophysics Data System (ADS)
Garcia-Sciveres, Maurice; Wermes, Norbert
2018-06-01
The Large Hadron Collider (LHC) experiments ATLAS and CMS have established hybrid pixel detectors as the instrument of choice for particle tracking and vertexing in high rate and radiation environments, as they operate close to the LHC interaction points. With the High Luminosity LHC (HL-LHC) upgrade now in sight, for which the tracking detectors will be completely replaced, new generations of pixel detectors are being devised. They have to address enormous challenges in terms of data throughput and radiation levels, ionizing and non-ionizing, that harm the sensing and readout parts of pixel detectors alike. Advances in microelectronics and microprocessing technologies now enable large scale detector designs with unprecedented performance in measurement precision (space and time), radiation hard sensors and readout chips, hybridization techniques, lightweight supports, and fully monolithic approaches to meet these challenges. This paper reviews the world-wide effort on these developments.
Federated data storage system prototype for LHC experiments and data intensive science
NASA Astrophysics Data System (ADS)
Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.
2017-10-01
The rapid increase of the data volume from the experiments running at the Large Hadron Collider (LHC) has prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area are working to unite their resources for future productive work, at the same time providing an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. The studies include the development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. The project intends to implement a federated distributed storage for all kinds of operations (read, write, transfer) with access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests, including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns, and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and a reshaping of computing styles, for instance how a bioinformatics program running on supercomputers can read and write data from the federated storage.
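The synthetic tests mentioned above measure how fast data can be written to and read back from the federation. A minimal sketch of such a benchmark is shown below; the in-memory dict standing in for the federation endpoint, and the function and key names, are illustrative assumptions, not the project's actual test harness.

```python
import time

def synthetic_throughput(storage, payload_mb=1, n_files=4):
    """Write then read back n_files payloads against a storage backend.
    Any dict-like mapping of name -> bytes stands in for the federation
    endpoint here (an assumption for illustration).
    Returns (write_MB_per_s, read_MB_per_s)."""
    payload = b"\x00" * (payload_mb * 1024 * 1024)
    t0 = time.perf_counter()
    for i in range(n_files):
        storage[f"bench_{i}"] = payload            # write phase
    t1 = time.perf_counter()
    read_bytes = 0
    for i in range(n_files):
        read_bytes += len(storage[f"bench_{i}"])   # read phase
    t2 = time.perf_counter()
    total_mb = payload_mb * n_files
    return total_mb / max(t1 - t0, 1e-9), total_mb / max(t2 - t1, 1e-9)
```

A real deployment would replace the dict with WAN reads and writes against the federated endpoints and repeat the measurement per access pattern.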
Searching for long-lived particles: A compact detector for exotics at LHCb
Gligorov, Vladimir V.; Knapen, Simon; Papucci, Michele; ...
2018-01-31
We advocate for the construction of a new detector element at the LHCb experiment, designed to search for displaced decays of beyond Standard Model long-lived particles, taking advantage of a large shielded space in the LHCb cavern that is expected to soon become available. We discuss the general features and putative capabilities of such an experiment, as well as its various advantages and complementarities with respect to the existing LHC experiments and proposals such as SHiP and MATHUSLA. For two well-motivated beyond Standard Model benchmark scenarios—Higgs decay to dark photons and B meson decays via a Higgs mixing portal—the reach either complements or exceeds that predicted for other LHC experiments.
The new CMS DAQ system for run-2 of the LHC
Bawej, Tomasz; Behrens, Ulf; Branson, James; ...
2015-05-21
The data acquisition (DAQ) system of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high level trigger (HLT) farm. The HLT farm selects interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013/14. The motivation is twofold: firstly, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime by the time the LHC restarts. Secondly, in order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The new DAQ architecture will take advantage of the latest developments in the computing industry. For data concentration, 10/40 Gb/s Ethernet technologies will be used, as well as an implementation of a reduced TCP/IP in FPGA for reliable transport between custom electronics and commercial computing hardware. A Clos network based on 56 Gb/s FDR Infiniband has been chosen for the event builder with a throughput of ~ 4 Tb/s. The HLT processing is entirely file based. This allows the DAQ and HLT systems to be independent, and the HLT software to be used in the same way as for offline processing. The fully built events are sent to the HLT with 1/10/40 Gb/s Ethernet via network file systems. HLT-accepted events and monitoring metadata are collected hierarchically and stored in a global file system. This paper presents the requirements, technical choices, and performance of the new system.
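The rates quoted in the abstract fix the data-flow arithmetic. A short back-of-envelope check, assuming an average event size of about 1 MB (an assumption implied by the quoted 100 kHz and 100 GB/s figures, not stated explicitly):

```python
# Figures from the text: 100 kHz Level-1 accept rate, ~1 kHz HLT accept rate.
# The ~1 MB average event size is an assumption consistent with 100 GB/s.
l1_rate_hz = 100_000
hlt_accept_hz = 1_000
event_size_bytes = 1_000_000

builder_throughput = l1_rate_hz * event_size_bytes    # bytes/s into the event builder
storage_throughput = hlt_accept_hz * event_size_bytes  # bytes/s to offline storage

print(builder_throughput / 1e9)  # GB/s into the HLT farm
```

The 100:1 ratio between builder and storage throughput is exactly the HLT rejection factor.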
MSSM A-funnel and the galactic center excess: prospects for the LHC and direct detection experiments
Freese, Katherine; López, Alejandro; Shah, Nausheen R.; ...
2016-04-11
The pseudoscalar resonance or “A-funnel” in the Minimal Supersymmetric Standard Model (MSSM) is a widely studied framework for explaining dark matter that can yield interesting indirect detection and collider signals. The well-known Galactic Center excess (GCE) at GeV energies in the gamma-ray spectrum, consistent with annihilation of a ≲ 40 GeV dark matter particle, has more recently been shown to be compatible with significantly heavier masses following reanalysis of the background. For this study, we explore the LHC and direct detection implications of interpreting the GCE in this extended mass window within the MSSM A-funnel framework. We find that compatibility with relic density, signal strength, collider constraints, and Higgs data can be simultaneously achieved with appropriate parameter choices. The compatible regions give very sharp predictions of 200-600 GeV CP-odd/even Higgs bosons at low tan β at the LHC and spin-independent cross sections ≈ 10^-11 pb at direct detection experiments. Finally, regardless of consistency with the GCE, this study serves as a useful template of the strong correlations between indirect, direct, and LHC signatures of the MSSM A-funnel region.
Integration of Panda Workload Management System with supercomputers
NASA Astrophysics Data System (ADS)
De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.
2016-09-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of over 0.3 petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes.
This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
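The MPI-wrapper approach described above amounts to statically partitioning a batch of single-threaded payloads across ranks, so each core on a leadership-class worker node runs its own share. A minimal sketch of that partition logic, with function names of my own choosing rather than PanDA's:

```python
def partition_jobs(jobs, n_ranks):
    """Round-robin assignment of single-threaded payloads to MPI ranks:
    rank r runs jobs[r], jobs[r + n_ranks], jobs[r + 2*n_ranks], ...
    This mimics how a light-weight MPI wrapper can fan a batch of
    independent Grid-style jobs out over the cores of LCF worker nodes."""
    return {rank: jobs[rank::n_ranks] for rank in range(n_ranks)}

# In a real wrapper each rank would query the MPI communicator for its
# rank and size, look up its share, and execute the payloads sequentially.
```

Usage: with mpi4py, rank `comm.Get_rank()` would run `partition_jobs(jobs, comm.Get_size())[comm.Get_rank()]`; no inter-rank communication is needed since the payloads are independent.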
NASA Astrophysics Data System (ADS)
Schmidt, Burkhard
2016-04-01
In the second phase of the LHC physics program, the accelerator will provide an additional integrated luminosity of about 2500/fb over 10 years of operation to the general purpose detectors ATLAS and CMS. This will substantially enlarge the mass reach in the search for new particles and will also greatly extend the potential to study the properties of the Higgs boson discovered at the LHC in 2012. In order to meet the experimental challenges of unprecedented pp luminosity, the experiments will need to address the aging of the present detectors and to improve the ability to isolate and precisely measure the products of the most interesting collisions. The lectures gave an overview of the physics motivation and described the conceptual designs and the expected performance of the upgrades of the four major experiments, ALICE, ATLAS, CMS and LHCb, along with the plans to develop the appropriate experimental techniques and a brief overview of the accelerator upgrade. Only some key points of the upgrade program of the four major experiments are discussed in this report; more information can be found in the references given at the end.
Opportunistic Resource Usage in CMS
NASA Astrophysics Data System (ADS)
Kreuzer, Peter; Hufnagel, Dirk; Dykstra, D.; Gutsche, O.; Tadel, M.; Sfiligoi, I.; Letts, J.; Wuerthwein, F.; McCrea, A.; Bockelman, B.; Fajardo, E.; Linares, L.; Wagner, R.; Konstantinov, P.; Blumenfeld, B.; Bradley, D.; Cms Collaboration
2014-06-01
CMS uses a tiered setup of dedicated computing resources provided by sites distributed over the world and organized in the WLCG. These sites pledge resources to CMS and prepare them specifically for CMS to run the experiment's applications. But more resources are available opportunistically, both on the Grid and in local university and research clusters, which can be used for CMS applications. We will present CMS' strategy to use opportunistic resources and to prepare them dynamically to run CMS applications. CMS is able to run its applications on resources reachable through the Grid or through EC2-compliant cloud interfaces; even resources accessible only through ssh login nodes can be harnessed. All of these usage modes are integrated transparently into the GlideIn WMS submission infrastructure, which is the basis of CMS' opportunistic resource usage strategy. Technologies like Parrot, to mount the software distribution via CVMFS, and xrootd, for access to data and simulation samples via the WAN, are used and will be described. We will summarize the experience with opportunistic resource usage and give an outlook for the restart of LHC data taking in 2015.
Big Data in HEP: A comprehensive use case study
NASA Astrophysics Data System (ADS)
Gutsche, Oliver; Cremonesi, Matteo; Elmer, Peter; Jayatilaka, Bo; Kowalkowski, Jim; Pivarski, Jim; Sehrish, Saba; Mantilla Suárez, Cristina; Svyatkovskiy, Alexey; Tran, Nhan
2017-10-01
Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems collectively called Big Data technologies have emerged to support the analysis of Petabyte and Exabyte datasets in industry. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies use different approaches, promise a fresh look at the analysis of very large datasets, and could potentially reduce the time-to-physics with increased interactivity. In this talk, we present an active LHC Run 2 analysis, searching for dark matter with the CMS detector, as a testbed for Big Data technologies. We directly compare the traditional NTuple-based analysis with an equivalent analysis using Apache Spark on the Hadoop ecosystem and beyond. In both cases, we start the analysis with the official experiment data formats and produce publication physics plots. We will discuss the advantages and disadvantages of each approach and give an outlook on further studies needed.
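The "filter and transform" pattern the abstract refers to can be sketched in a few lines; the events, variable names and cut thresholds below are toy illustrations, not the analysis's actual selection, and in the Spark variant the same cuts would become DataFrame filter/select operations.

```python
# Toy in-memory events; a real NTuple analysis would read these from ROOT
# files, and the Spark version would hold them in a distributed DataFrame.
events = [
    {"met": 250.0, "njets": 3},
    {"met": 80.0,  "njets": 2},
    {"met": 410.0, "njets": 5},
    {"met": 120.0, "njets": 1},
]

# Filter: a selection loosely in the spirit of a jets-plus-missing-energy
# dark-matter search (thresholds are purely illustrative).
selected = [e for e in events if e["met"] > 200.0 and e["njets"] >= 2]

# Transform: project out the quantity to be histogrammed and plotted.
met_to_histogram = [e["met"] for e in selected]
```

Whichever technology executes it, the analysis is this same pipeline applied at Petabyte scale.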
CERN data services for LHC computing
NASA Astrophysics Data System (ADS)
Espinal, X.; Bocchi, E.; Chan, B.; Fiorot, A.; Iven, J.; Lo Presti, G.; Lopez, J.; Gonzalez, H.; Lamanna, M.; Mascetti, L.; Moscicki, J.; Pace, A.; Peters, A.; Ponce, S.; Rousseau, H.; van der Ster, D.
2017-10-01
Dependability, resilience, adaptability and efficiency: growing requirements call for tailored storage services and novel solutions. Unprecedented volumes of data coming from the broad number of experiments at CERN need to be quickly available in a highly scalable way for large-scale processing and data distribution, while in parallel they are routed to tape for long-term archival. These activities are critical for the success of HEP experiments. Nowadays we operate at high incoming throughput (14 GB/s during the 2015 LHC Pb-Pb run and 11 PB in July 2016) and with concurrent complex production workloads. In parallel, our systems provide the platform for continuous user- and experiment-driven workloads for large-scale data analysis, including end-user access and sharing. The storage services at CERN cover the needs of our community: EOS and CASTOR as large-scale storage; CERNBox for end-user access and sharing; Ceph as the data back-end for the CERN OpenStack infrastructure, NFS services and S3 functionality; and AFS for legacy distributed-file-system services. In this paper we summarise the experience in supporting LHC experiments and the transition of our infrastructure from static monolithic systems to flexible components providing a more coherent environment, with pluggable protocols, tuneable QoS, sharing capabilities and fine-grained ACL management, while continuing to guarantee dependable and robust services.
Heavy-ion physics with the ALICE experiment at the CERN Large Hadron Collider.
Schukraft, J
2012-02-28
After close to 20 years of preparation, the dedicated heavy-ion experiment A Large Ion Collider Experiment (ALICE) took first data at the CERN Large Hadron Collider (LHC) accelerator with proton collisions at the end of 2009 and with lead nuclei at the end of 2010. After a short introduction into the physics of ultra-relativistic heavy-ion collisions, this article recalls the main design choices made for the detector and summarizes the initial operation and performance of ALICE. Physics results from this first year of operation concentrate on characterizing the global properties of typical, average collisions, both in proton-proton (pp) and nucleus-nucleus reactions, in the new energy regime of the LHC. The pp results differ, to a varying degree, from most quantum chromodynamics-inspired phenomenological models and provide the input needed to fine tune their parameters. First results from Pb-Pb are broadly consistent with expectations based on lower energy data, indicating that high-density matter created at the LHC, while much hotter and larger, still behaves like a very strongly interacting, almost perfect liquid.
Integration of XRootD into the cloud infrastructure for ALICE data analysis
NASA Astrophysics Data System (ADS)
Kompaniets, Mikhail; Shadura, Oksana; Svirin, Pavlo; Yurchenko, Volodymyr; Zarochentsev, Andrey
2015-12-01
Cloud technologies allow easy load balancing between different tasks and projects. From the viewpoint of data analysis in the ALICE experiment, the cloud makes it possible to deploy software using the CERN Virtual Machine (CernVM) and the CernVM File System (CVMFS), to run different (including outdated) versions of software for long-term data preservation, and to dynamically allocate resources for different computing activities, e.g. a grid site, the ALICE Analysis Facility (AAF), and possible usage by local projects or other LHC experiments. We present a cloud solution for Tier-3 sites based on OpenStack and Ceph distributed storage with an integrated XRootD-based storage element (SE). A key feature of the solution is that Ceph is used both as the backend for the Cinder Block Storage service of OpenStack and as the storage backend for XRootD, with redundancy and availability of data preserved by the Ceph settings. For faster and easier OpenStack deployment, the Packstack solution, based on the Puppet configuration management system, was applied. Ceph installation and configuration operations are structured, converted to Puppet manifests describing node configurations, and integrated into Packstack. This solution can be easily deployed, maintained and used even by small groups with limited computing resources and by small organizations, which often lack IT support. The proposed infrastructure has been tested on two different clouds (SPbSU & BITP) and integrates successfully with the ALICE data analysis model.
Benchmark studies of induced radioactivity produced in LHC materials, Part I: Specific activities.
Brugger, M; Khater, H; Mayer, S; Prinz, A; Roesler, S; Ulrici, L; Vincke, H
2005-01-01
Samples of materials which will be used in the LHC machine for shielding and construction components were irradiated in the stray radiation field of the CERN-EU high-energy reference field facility. After irradiation, the specific activities induced in the various samples were analysed with a high-precision gamma spectrometer at various cooling times, allowing identification of isotopes with a wide range of half-lives. Furthermore, the irradiation experiment was simulated in detail with the FLUKA Monte Carlo code. A comparison of measured and calculated specific activities shows good agreement, supporting the use of FLUKA for estimating the level of induced activity in the LHC.
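Comparing measured and calculated activities at the various cooling times relies on the standard radioactive-decay law; a short helper illustrating it (the function name and units are my own, not from the paper):

```python
import math

def activity_after_cooling(a0, half_life, cooling_time):
    """A(t) = A0 * exp(-ln2 * t / T_half): decay of one induced isotope's
    specific activity during the cooling time between the end of
    irradiation and the gamma-spectrometry measurement.
    a0 is the activity at end of irradiation; half_life and cooling_time
    must be in the same time units."""
    return a0 * math.exp(-math.log(2.0) * cooling_time / half_life)
```

Measuring the same sample at several cooling times thus disentangles isotopes with short half-lives (which decay away quickly) from long-lived ones, which is why a wide range of half-lives could be identified.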
Search for New Phenomena Using W/Z + (b)-Jets Measurements Performed with the ATLAS Detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beauchemin, Pierre-Hugues
2015-06-30
The Project proposed to use data of the ATLAS experiment, obtained during the 2011 and 2012 data-taking campaigns, to pursue studies of the strong interaction (QCD) and to examine promising signatures for new physics. The Project also contains a service component dedicated to a detector development initiative. The objective of the strong interaction studies is to determine how various predictions from the main theory (QCD) compare to the data. Results of a set of measurements developed by the Tufts team indicate that the dominant factor of discrepancy between data and QCD predictions comes from the mis-modeling of low-energy gluon radiation as described by algorithms called parton showers. The discrepancies introduced by parton showers on LHC predictions could even be larger than the effect due to completely new phenomena (dark matter, supersymmetry, etc.) and could thus block further discoveries at the LHC. Some of the results obtained in the course of this Project also specify how QCD predictions must be improved in order to open the possibility of discovering something completely new at the LHC during Run-II. This has been integrated into the Run-II ATLAS physics program. Another objective of the Tufts studies of the strong interaction was to determine how the hypothesis of an intrinsic heavy-quark component of the proton (strange, charm or bottom quarks) could be tested at the LHC. This hypothesis was proposed by theorists 30 years ago and is still controversial. The Tufts team demonstrated that intrinsic charm can be observed, or severely constrained, at the LHC, and determined how the measurement should be performed in order to maximize its sensitivity to such an intrinsic heavy-quark component of the proton. Tufts has also embarked on this measurement, which is in progress, but final results are not yet available. They should shed light on the fundamental structure of the proton.
Determining the nature of dark matter particles, composing about 25% of all the matter in the universe, is one of the most exciting research goals at the LHC. Within this Project, the Tufts team proposed a way to improve over the standard approach used to look for dark matter at the LHC in events involving jets and a large amount of unbalanced energy in the detector (jets+ETmiss). The Tufts team developed a measurement to test these improvements on the data available (the ATLAS 2012 dataset), in order to be ready to apply them to the new Run-II data that will be available at the end of 2015. Preliminary results on the proposed measurement indicate that very high precision can be obtained on results free of detector effects. That will allow for better constraints on dark matter theories and will reduce the huge computing resources otherwise needed to compare dark matter theories to data. Finally, the Tufts team played a leading role in the development and organization of the ETmiss trigger, the detector component needed to collect the data used in dark matter searches and in many other analyses. The team compared the performance of the various algorithms capable of reconstructing the value of ETmiss in each LHC collision event, and developed a strategy to commission these algorithms online. Tufts also contributed to the development of the ETmiss trigger monitoring software. Finally, the PI of this Project acted as co-coordinator of the group of researchers at CERN taking care of the development and operation of this detector component. The ETmiss trigger is now taking data, opening the possibility for the discovery of otherwise undetectable particles at the LHC.
Farina, Marco; Pappadopulo, Duccio; Rompineve, Fabrizio; ...
2017-01-23
Here, we propose a framework in which the QCD axion has an exponentially large coupling to photons, relying on the “clockwork” mechanism. We discuss the impact of present and future axion experiments on the parameter space of the model. In addition to the axion, the model predicts a large number of pseudoscalars which can be light and observable at the LHC. In the most favorable scenario, axion Dark Matter will give a signal in multiple axion detection experiments and the pseudo-scalars will be discovered at the LHC, allowing us to determine most of the parameters of the model.
KASCADE-Grande: Composition studies in the view of the post-LHC hadronic interaction models
NASA Astrophysics Data System (ADS)
Haungs, A.; Apel, W. D.; Arteaga-Velázquez, J. C.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; Pierro, F. Di; Doll, P.; Engel, R.; Fuhrmann, D.; Gherghel-Lascu, A.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Heck, D.; Hörandel, J. R.; Huege, T.; Kampert, K.-H.; Kang, D.; Klages, H. O.; Link, K.; Łuczak, P.; Mathes, H. J.; Mayer, H. J.; Milke, J.; Mitrica, B.; Morello, C.; Oehlschläger, J.; Ostapchenko, S.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Zabierowski, J.
2017-06-01
The KASCADE-Grande experiment has significantly contributed to the current knowledge about the energy spectrum and composition of cosmic rays for energies between the knee and the ankle. Meanwhile, post-LHC versions of the hadronic interaction models are available and used to interpret the entire data set of KASCADE-Grande. In addition, a new, combined analysis of both arrays, KASCADE and Grande, was developed significantly increasing the accuracy of the shower observables. First results of the new analysis with the entire data set of the KASCADE-Grande experiment will be the focus of this contribution.
Commissioning of a CERN Production and Analysis Facility Based on xrootd
NASA Astrophysics Data System (ADS)
Campana, Simone; van der Ster, Daniel C.; Di Girolamo, Alessandro; Peters, Andreas J.; Duellmann, Dirk; Coelho Dos Santos, Miguel; Iven, Jan; Bell, Tim
2011-12-01
The CERN facility hosts the Tier-0 of the four LHC experiments, but as part of WLCG it also offers a platform for production activities and user analysis. The CERN CASTOR storage technology has been extensively tested and utilized for LHC data recording and export to external sites according to the experiments' computing models. On the other hand, to accommodate Grid data processing activities and, more importantly, chaotic user analysis, it was realized that additional functionality was needed, including a different throttling mechanism for file access. This paper describes the xrootd-based CERN production and analysis facility for the ATLAS experiment and in particular the experiment use case and data access scenario, the xrootd redirector setup on top of the CASTOR storage system, the commissioning of the system, and real-life experience with data processing and data analysis.
NASA Astrophysics Data System (ADS)
Grandi, C.; Italiano, A.; Salomoni, D.; Calabrese Melcarne, A. K.
2011-12-01
WNoDeS, an acronym for Worker Nodes on Demand Service, is software developed at CNAF-Tier1, the National Computing Centre of the Italian Institute for Nuclear Physics (INFN) located in Bologna. WNoDeS provides on demand, integrated access to both Grid and Cloud resources through virtualization technologies. Besides the traditional use of computing resources in batch mode, users need to have interactive and local access to a number of systems. WNoDeS can dynamically select these computers instantiating Virtual Machines, according to the requirements (computing, storage and network resources) of users through either the Open Cloud Computing Interface API, or through a web console. An interactive use is usually limited to activities in user space, i.e. where the machine configuration is not modified. In some other instances the activity concerns development and testing of services and thus implies the modification of the system configuration (and, therefore, root-access to the resource). The former use case is a simple extension of the WNoDeS approach, where the resource is provided in interactive mode. The latter implies saving the virtual image at the end of each user session so that it can be presented to the user at subsequent requests. This work describes how the LHC experiments at INFN-Bologna are testing and making use of these dynamically created ad-hoc machines via WNoDeS to support flexible, interactive analysis and software development at the INFN Tier-1 Computing Centre.
Monitoring of computing resource use of active software releases at ATLAS
NASA Astrophysics Data System (ADS)
Limosani, Antonio; ATLAS Collaboration
2017-10-01
The LHC is the world’s most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has the demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier-0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows, from Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed in plots generated using Python visualization libraries and collected into pre-formatted auto-generated Web pages, which allow the ATLAS developer community to track the performance of their algorithms. This information is also channelled to domain leaders and developers through JIRA and via reports given at ATLAS software meetings. Finally, we take a glimpse of the future by reporting on the expected CPU and RAM usage in benchmark workflows associated with the High Luminosity LHC and anticipate the ways performance monitoring will evolve to understand and benchmark future workflows.
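Breaking resource consumption down by software domain, as described above, is essentially an aggregation over per-algorithm monitoring records. A minimal sketch of that step; the record layout and domain names are illustrative assumptions, not PerfMon's actual output format:

```python
from collections import defaultdict

def aggregate_by_domain(samples):
    """samples: iterable of (domain, cpu_seconds, rss_mb) records, as a
    PerfMon-style monitor might emit per algorithm (layout assumed here).
    Returns, per software domain, (total CPU seconds, peak resident memory)."""
    totals = defaultdict(lambda: [0.0, 0.0])
    for domain, cpu, rss in samples:
        totals[domain][0] += cpu                      # CPU time accumulates
        totals[domain][1] = max(totals[domain][1], rss)  # memory is a peak
    return {d: (cpu, peak) for d, (cpu, peak) in totals.items()}
```

Per-domain summaries like this are what the auto-generated web pages and JIRA reports would then render for developers.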
NASA Astrophysics Data System (ADS)
Chlebana, Frank; CMS Collaboration
2017-11-01
The challenges of the High-Luminosity LHC (HL-LHC) are driven by the large number of overlapping proton-proton collisions (pileup) in each bunch-crossing and the extreme radiation dose to detectors at high pseudorapidity. To overcome this challenge CMS is developing an endcap electromagnetic+hadronic sampling calorimeter employing silicon sensors in the electromagnetic and front hadronic sections, comprising over 6 million channels, and highly-segmented plastic scintillators in the rear part of the hadronic section. This High-Granularity Calorimeter (HGCAL) will be the first of its kind used in a colliding beam experiment. Clustering deposits of energy over many cells and layers is a complex and challenging computational task, particularly in the high-pileup environment of HL-LHC. Baseline detector performance results are presented for electromagnetic and hadronic objects, and studies demonstrating the advantages of fine longitudinal and transverse segmentation are explored.
The CMS Tier0 goes cloud and grid for LHC Run 2
Hufnagel, Dirk
2015-12-23
In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put a tremendous stress on our computing systems. Prompt processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system utilizing both the CERN Cloud-based Agile Infrastructure and a significant fraction of the CMS Tier-1 Grid resources. In another big change for LHC Run 2, we will process all data using the multi-threaded framework to deal with the increased event complexity and to ensure efficient use of the resources. Furthermore, this contribution will cover the evolution of the Tier-0 infrastructure and present scale testing results and experiences from the first data taking in 2015.
LHC signals of radiatively-induced neutrino masses and implications for the Zee-Babu model
NASA Astrophysics Data System (ADS)
Alcaide, Julien; Chala, Mikael; Santamaria, Arcadi
2018-04-01
Contrary to the see-saw models, extended Higgs sectors leading to radiatively-induced neutrino masses do require the extra particles to be at the TeV scale. However, these new states have often exotic decays, to which experimental LHC searches performed so far, focused on scalars decaying into pairs of same-sign leptons, are not sensitive. In this paper we show that their experimental signatures can start to be tested with current LHC data if dedicated multi-region analyses correlating different observables are used. We also provide high-accuracy estimations of the complicated Standard Model backgrounds involved. For the case of the Zee-Babu model, we show that regions not yet constrained by neutrino data and low-energy experiments can be already probed, while most of the parameter space could be excluded at the 95% C.L. in a high-luminosity phase of the LHC.
NASA Astrophysics Data System (ADS)
Trzeciak, B.; Da Silva, C.; Ferreiro, E. G.; Hadjidakis, C.; Kikola, D.; Lansberg, J. P.; Massacrier, L.; Seixas, J.; Uras, A.; Yang, Z.
2017-09-01
We outline the case for heavy-ion-physics studies using the multi-TeV lead LHC beams in the fixed-target mode. After a brief contextual reminder, we detail the possible contributions of AFTER@LHC to heavy-ion physics, with a specific emphasis on quarkonia. We then present performance simulations for a selection of observables. These show that Υ(nS), J/ψ and ψ(2S) production in heavy-ion collisions can be studied in new energy and rapidity domains with the LHCb and ALICE detectors. We also discuss the relevance of analysing Drell-Yan pair production in asymmetric nucleus-nucleus collisions to study the factorisation of the nuclear modification of parton densities, and of measuring further quarkonium states to restore their status as golden probes of quark-gluon plasma formation.
NASA Astrophysics Data System (ADS)
Cauchi, Marija; Aberle, O.; Assmann, R. W.; Bertarelli, A.; Carra, F.; Cornelis, K.; Dallocchio, A.; Deboy, D.; Lari, L.; Redaelli, S.; Rossi, A.; Salvachua, B.; Mollicone, P.; Sammut, N.
2014-02-01
The correct functioning of a collimation system is crucial to safely operate highly energetic particle accelerators, such as the Large Hadron Collider (LHC). The requirements to handle high intensity beams can be demanding. In this respect, investigating the consequences of LHC particle beams hitting tertiary collimators (TCTs) in the experimental regions is a fundamental issue for machine protection. An experimental test was designed to investigate the robustness and effects of beam accidents on a fully assembled collimator, based on accident scenarios in the LHC. This experiment, carried out at the CERN High-Radiation to Materials (HiRadMat) facility, involved 440 GeV proton beam impacts of different intensities on the jaws of a horizontal TCT. This paper presents the experimental setup and the preliminary results obtained, together with some first outcomes from visual inspection and a comparison of such results with numerical simulations.
Singlet-triplet fermionic dark matter and LHC phenomenology
NASA Astrophysics Data System (ADS)
Choubey, Sandhya; Khan, Sarif; Mitra, Manimala; Mondal, Subhadeep
2018-04-01
It is well known that for a pure standard model triplet fermionic WIMP-type dark matter (DM) candidate, the relic density is satisfied for a mass around 2 TeV. For such a heavy particle, the production cross-section at the 13 TeV run of the LHC will be very small. Extending the model with a singlet fermion and a triplet scalar, the DM relic density can be satisfied at much lower masses. The lower-mass DM can be copiously produced at the LHC, and hence the model can be tested at colliders. For the present model we have studied the multi-jet (≥ 2j) + missing transverse energy signal and show that it can be detected in the near future at the 13 TeV run of the LHC. We also predict that the present model is testable by earth-based DM direct detection experiments such as XENON1T and, in the future, DARWIN.
A rationale for long-lived quarks and leptons at the LHC: low energy flavour theory
NASA Astrophysics Data System (ADS)
Éboli, O. J. P.; Savoy, C. A.; Funchal, R. Zukanovich
2012-02-01
In the framework of gauged flavour symmetries, new fermions in parity-symmetric representations of the standard model are generically needed to compensate mixed anomalies. The key point is that their masses are also protected by flavour symmetries, and some of them are expected to lie well below the flavour symmetry breaking scale(s), which must occur many orders of magnitude above the electroweak scale to be compatible with the available data from flavour-changing neutral currents and CP violation experiments. We argue that some of these fermions would plausibly get masses within the LHC range. If they are taken to be heavy quarks and leptons, in (bi-)fundamental representations of the standard model symmetries, their mixings with the light ones are strongly constrained to be very small by electroweak precision data. The alternative chosen here is to forbid such mixings exactly by breaking the flavour symmetries down to an exact discrete symmetry, the so-called proton-hexality, originally suggested to avoid proton decay. As a consequence of the large value needed for the flavour breaking scale, these heavy particles are long-lived and well suited to the current and future LHC searches for quasi-stable hadrons and leptons. In fact, the LHC experiments have already started to look for them.
NASA Astrophysics Data System (ADS)
Justus, Christopher
2005-04-01
In this study, we simulated top-antitop (tt-bar) quark events at the Compact Muon Solenoid (CMS), an experiment presently being constructed at the Large Hadron Collider in Geneva, Switzerland. The tt-bar process is an important background for Higgs events. We used a chain of software to simulate and reconstruct processes that will occur inside the detector. CMKIN was used to generate and store Monte Carlo events. OSCAR, a GEANT4-based CMS detector simulator, was used to simulate the CMS detector and how particles would interact with it. Next, we used ORCA to simulate the response of the readout electronics at CMS. Finally, we used the Jet/MET Root maker to create ROOT files of jets and missing energy. We are now using this software analysis chain to complete a systematic study of initial state radiation at hadron colliders. This study is essential because tt-bar is the main background for the Higgs boson and these processes are extremely sensitive to initial state radiation. Results of our initial state radiation study will be presented. We started this study at the new LHC Physics Center (LPC) located at Fermi National Accelerator Laboratory, and we are now completing the study at the University of Rochester.
AthenaMT: upgrading the ATLAS software framework for the many-core world with multi-threading
NASA Astrophysics Data System (ADS)
Leggett, Charles; Baines, John; Bold, Tomasz; Calafiura, Paolo; Farrell, Steven; van Gemmeren, Peter; Malon, David; Ritsch, Elmar; Stewart, Graeme; Snyder, Scott; Tsulaia, Vakhtang; Wynne, Benjamin; ATLAS Collaboration
2017-10-01
ATLAS’s current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single-threaded design has been recognized for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run 2. After concluding a rigorous requirements phase, in which many design components were examined in detail, ATLAS has begun the migration to a new data-flow driven, multi-threaded framework, which enables the simultaneous processing of singleton, thread-unsafe legacy Algorithms, cloned Algorithms that execute concurrently in their own threads with different Event contexts, and fully re-entrant, thread-safe Algorithms. In this paper we report on the process of modifying the framework to safely process multiple concurrent events in different threads, which entails significant changes in the underlying handling of features such as event- and time-dependent data, asynchronous callbacks, metadata, integration with the online High Level Trigger for partial processing in certain regions of interest, concurrent I/O, as well as ensuring thread safety of core services. We also report on upgrading the framework to handle Algorithms that are fully re-entrant.
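The scheduling distinction drawn in the abstract — legacy thread-unsafe Algorithms that must be cloned per thread versus re-entrant Algorithms that one instance can serve from every thread — can be sketched in miniature. This is a hypothetical illustration in Python, not the actual Gaudi/Athena C++ API; all class and function names are invented:

```python
from concurrent.futures import ThreadPoolExecutor
import threading

class ReentrantAlg:
    """Stateless: a single instance can safely execute for many events at once."""
    def execute(self, event):
        return sum(event)          # pure function of the event context

class LegacyAlg:
    """Stateful: mutates member data, so each thread needs a private clone."""
    def __init__(self):
        self.scratch = 0
    def execute(self, event):
        self.scratch = sum(event)  # not safe to share across threads
        return self.scratch

shared = ReentrantAlg()            # one instance shared by all threads
clones = threading.local()         # per-thread storage for legacy clones

def process(event):
    if not hasattr(clones, "alg"):
        clones.alg = LegacyAlg()   # lazily clone for this worker thread
    return shared.execute(event) + clones.alg.execute(event)

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process, [[1, 2], [3, 4], [5, 6]]))
print(results)  # [6, 14, 22]
```

Each result is twice the event sum because both the shared and the cloned algorithm run on every event; the point is only that the re-entrant instance needs no cloning while the legacy one does.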
HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.
2017-10-01
PanDA, the Production and Distributed Analysis workload management system, has been developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available to bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to a distributed computing environment powered by PanDA. To run the pipeline we split the input files into chunks, which are processed separately on different nodes as independent PALEOMIX inputs, and finally merge the output files; this closely resembles how ATLAS processes and simulates its data. We dramatically decreased the total wall time thanks to automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid can reduce the payload execution time for mammoth DNA samples from weeks to days.
Sirunyan, Albert M; et al.
2018-06-19
The CMS muon detector system, muon reconstruction software, and high-level trigger underwent significant changes in 2013-2014 in preparation for running at higher LHC collision energy and instantaneous luminosity. The performance of the modified system is studied using proton-proton collision data at a center-of-mass energy √s = 13 TeV, collected at the LHC in 2015 and 2016. The measured performance parameters, including spatial resolution, efficiency, and timing, are found to meet all design specifications and are well reproduced by simulation. Despite the more challenging running conditions, the modified muon system is found to perform as well as, and in many aspects better than, previously. We dedicate this paper to the memory of Prof. Alberto Benvenuti, whose work was fundamental for the CMS muon detector.
Readiness of the ATLAS Trigger and Data Acquisition system for the first LHC beams
NASA Astrophysics Data System (ADS)
Vandelli, W.; Atlas Tdaq Collaboration
2009-12-01
The ATLAS Trigger and Data Acquisition (TDAQ) system is based on O(2k) processing nodes, interconnected by a multi-layer Gigabit network, and consists of a combination of custom electronics and commercial products. In its final configuration, O(20k) applications will provide the needed capabilities in terms of event selection, data flow, local storage and data monitoring. In preparation for the first LHC beams, many TDAQ sub-systems have already reached their final configuration and roughly one third of the final processing power has been deployed. Therefore, the current system allows for a sensible evaluation of the performance and scaling properties. In this paper we introduce the ATLAS TDAQ system requirements and architecture and we discuss the status of software and hardware components. We moreover present the results of performance measurements validating the system design and providing a figure for the ATLAS data acquisition capabilities in the initial data taking period.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peters, Yvonne
2011-12-01
Since its discovery in 1995 by the CDF and D0 collaborations at the Fermilab Tevatron collider, the top quark has undergone intensive studies. Besides the Tevatron experiments, a top quark factory started operation with the start of the LHC in 2010. It is now possible to measure top quark properties simultaneously at four different experiments, namely ATLAS and CMS at the LHC and CDF and D0 at the Tevatron. With thousands of top quarks collected by each experiment, several top quark properties have been measured precisely, while others are being measured for the first time. In this article, recent measurements of top quark properties from ATLAS, CDF, CMS and D0 are presented, using up to 5.4 fb⁻¹ of integrated luminosity at the Tevatron and 1.1 fb⁻¹ at the LHC. In particular, measurements of the top quark mass, mass difference, forward-backward charge asymmetry, tt-bar spin correlations, the ratio of branching fractions, W helicity, anomalous couplings, color flow and the search for flavor-changing neutral currents are discussed.
Studies of QCD structure in high-energy collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nadolsky, Pavel M.
2016-06-26
"Studies of QCD structure in high-energy collisions" is a research project in theoretical particle physics at Southern Methodist University funded by US DOE Award DE-SC0013681. The award furnished bridge funding for one year (2015/04/15-2016/03/31) between the periods funded by Nadolsky's DOE Early Career Research Award DE-SC0003870 (in 2010-2015) and a DOE grant DE-SC0010129 for the SMU Department of Physics (starting in April 2016). The primary objective of the research is to provide theoretical predictions for Run 2 of the CERN Large Hadron Collider (LHC). The LHC physics program relies on state-of-the-art predictions in the field of quantum chromodynamics. The main effort of our group went into the global analysis of parton distribution functions (PDFs) employed by the bulk of LHC computations. Parton distributions describe the internal structure of protons in ultrarelativistic collisions. A new generation of CTEQ parton distribution functions, CT14, was released in summer 2015 and quickly adopted by the HEP community. The new CT14 parametrizations of PDFs were obtained using benchmarked NNLO calculations and the latest data from LHC and Tevatron experiments. The group developed advanced methods for the PDF analysis and for estimating the uncertainties in LHC predictions associated with the PDFs. We invented and refined a new 'meta-parametrization' technique that streamlines the usage of PDFs in Higgs boson production and numerous other LHC processes by combining PDFs from various groups using multivariate stochastic sampling. In 2015, the PDF4LHC working group recommended that LHC experimental collaborations use 'meta-parametrizations' as a standard technique for computing PDF uncertainties. Finally, to include new QCD processes in the global fits, our group worked on several (N)NNLO calculations.
ATLAS, CMS and new challenges for public communication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, Lucas; Barney, David; Goldfarb, Steven
On 30 March 2010 the first high-energy collisions brought the LHC experiments into the era of research and discovery. Millions of viewers worldwide tuned in to the webcasts and followed the news via Web 2.0 tools, such as blogs, Twitter, and Facebook, with 205,000 unique visitors to CERN's Web site. Media coverage at the experiments and in institutes all over the world yielded more than 2,200 news items including 800 TV broadcasts. We describe the new multimedia communications challenges, due to the massive public interest in the LHC programme, and the corresponding responses of the ATLAS and CMS experiments, in the areas of Web 2.0 tools, multimedia, webcasting, videoconferencing, and collaborative tools. We discuss the strategic convergence of the two experiments' communications services, information systems and public database of outreach material.
The MoEDAL Experiment at the LHC — a New Light on the High Energy Frontier
NASA Astrophysics Data System (ADS)
Pinfold, James L.
2014-01-01
In 2010, the CERN (European Centre for Particle Physics Research) Research Board unanimously approved MoEDAL, the seventh international experiment at the Large Hadron Collider (LHC), which is designed to search for avatars of new physics signified by highly ionizing particles. A MoEDAL discovery would have revolutionary implications for our understanding of the microcosm, providing insights into such fundamental questions as: do magnetic monopoles exist; are there extra dimensions or new symmetries of nature; what is the mechanism for the generation of mass; what is the nature of dark matter; and how did the big bang unfurl at the earliest times?
INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
De, K; Jha, S; Maeno, T
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big-Data-driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow of all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, the LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We describe a project aimed at the integration of the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with lightweight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes.
This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms. We present our current accomplishments in running the PanDA WMS on supercomputers and demonstrate our ability to use PanDA as a portal, independent of the computing facility's infrastructure, for High Energy and Nuclear Physics as well as for other data-intensive science applications such as bioinformatics and astroparticle physics.
What we can expect from the first year of the LHC
NASA Astrophysics Data System (ADS)
Trigger, Isabel
2009-05-01
The ATLAS and CMS experiments at the CERN Large Hadron Collider have been built and commissioned over more than a decade. They are the most complex experiments ever assembled, but were completed in time for the first beams in the LHC in September 2008. The accident which interrupted the LHC startup did not interrupt the commissioning of the detectors with cosmic ray events, and the small amount of single-beam data collected in September was invaluable for timing in the detector. ATLAS and CMS will therefore be unusually well calibrated and understood by the time collision data become available in Fall 2009. The first part of the talk will discuss the expected performance of the detectors (with some bias towards ATLAS). The rest of the talk will discuss physics analyses which should be possible with the first year's running at the LHC. Roughly 100-200 pb⁻¹ at a 10 TeV centre-of-mass energy are needed to match the Tevatron's Standard Model Higgs sensitivity around 160 GeV - if all goes according to plan, the LHC may collect this by Fall 2010. About 100 pb⁻¹ at 10 TeV would match the full Tevatron sample of top quarks; roughly twice as much data would be needed if the run were mainly at 8 TeV. Sensitivity to W' or Z' resonances would match the Tevatron's with less than 100 pb⁻¹ at 8 TeV. Prospects for discovering supersymmetry are even more promising: in some models as little as 10 pb⁻¹ at 8 TeV could yield a 5 σ discovery. The next year is expected to be a critical period in defining the future of high energy physics, as the actual performance of the LHC and its detectors is tested with collision data. Discoveries of physics beyond the Standard Model could potentially be made by the end of the first year's running, especially if the start-up progresses smoothly.
Electromagnetic dipole moments of charged baryons with bent crystals at the LHC
NASA Astrophysics Data System (ADS)
Bagli, E.; Bandiera, L.; Cavoto, G.; Guidi, V.; Henry, L.; Marangotto, D.; Martinez Vidal, F.; Mazzolari, A.; Merli, A.; Neri, N.; Ruiz Vidal, J.
2017-12-01
We propose a unique program of measurements of electric and magnetic dipole moments of charm, beauty and strange charged baryons at the LHC, based on the phenomenon of spin precession of channeled particles in bent crystals. Studies of crystal channeling and spin precession of positively- and negatively-charged particles are presented, along with feasibility studies and expected sensitivities for the proposed experiment using a layout based on the LHCb detector.
Catching Collisions in the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fruguiele, Claudia; Hirschauer, Jim
Now that the Large Hadron Collider has officially turned back on for its second run, within every proton collision could emerge the next new discovery in particle physics. Learn how the detectors on the Compact Muon Solenoid, or CMS, experiment capture and track particles as they are expelled from a collision. Talking us through these collisions are Claudia Fruguiele and Jim Hirschauer of Fermi National Accelerator Laboratory, the largest U.S. institution collaborating on the LHC.
Exclusion of black hole disaster scenarios at the LHC
NASA Astrophysics Data System (ADS)
Koch, Benjamin; Bleicher, Marcus; Stöcker, Horst
2009-02-01
The upcoming high energy experiments at the LHC are one of the most outstanding efforts for a better understanding of nature. It is associated with great hopes in the physics community. But there is also some fear in the public, that the conjectured production of mini black holes might lead to a dangerous chain reaction. In this Letter we summarize the most straightforward arguments that are necessary to rule out such doomsday scenarios.
Test of the wire ageing induced by radiation for the CMS barrel muon chambers
NASA Astrophysics Data System (ADS)
Conti, E.; Gasparini, F.
2001-06-01
We have carried out laboratory tests to measure the ageing of a wire tube due to pollutants outgassed by various materials. The tested materials are those used in the barrel muon drift tubes of the CMS experiment at LHC. An X-ray gun irradiated the test tube to accelerate the ageing process. No ageing effect has been measured for a period equivalent to 10 years of operation at LHC.
Particle production at RHIC and LHC energies
NASA Astrophysics Data System (ADS)
Tawfik, A.; Gamal, E.; Shalaby, A. G.
2015-07-01
The production of pions, kaons and protons was measured in Pb-Pb collisions at a nucleus-nucleus center-of-mass energy √sNN = 2.76 TeV by the ALICE experiment at the Large Hadron Collider (LHC). The particle ratios of these species, compared to the RHIC measurements, are confronted with the hadron resonance gas (HRG) model and with simulations based on the event generators PYTHIA 6.4.21 and HIJING 1.36. It is found that the homogeneous particle-antiparticle ratios (same species) are fully reproduced by the HRG model and partly by PYTHIA 6.4.21 and HIJING 1.36. The mixed kaon-pion and proton-pion ratios measured at RHIC and LHC energies seem to be reproduced by the HRG model. On the other hand, the strange abundances are underestimated by both event generators. This might originate from strangeness suppression in the event generators and/or a possible strangeness enhancement in the experimental data. It is apparent that the values of the kaon-pion ratios are not sensitive to the huge increase of √sNN from 200 GeV (RHIC) to 2760 GeV (LHC). We conclude that the ratios of produced particles at the LHC seem not to depend on the system size.
Electron Lenses for the Large Hadron Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stancari, Giulio; Valishev, Alexander; Bruce, Roderik
Electron lenses are pulsed, magnetically confined electron beams whose current-density profile is shaped to obtain the desired effect on the circulating beam. Electron lenses were used in the Fermilab Tevatron collider for bunch-by-bunch compensation of long-range beam-beam tune shifts, for removal of uncaptured particles in the abort gap, for preliminary experiments on head-on beam-beam compensation, and for the demonstration of halo scraping with hollow electron beams. Electron lenses for beam-beam compensation are being commissioned in RHIC at BNL. Within the US LHC Accelerator Research Program and the European HiLumi LHC Design Study, hollow electron beam collimation was studied as an option to complement the collimation system for the LHC upgrades. This project is moving towards a technical design in 2014, with the goal to build the devices in 2015-2017, after resuming LHC operations and re-assessing needs and requirements at 6.5 TeV. Because of their electric charge and the absence of materials close to the proton beam, electron lenses may also provide an alternative to wires for long-range beam-beam compensation in LHC luminosity upgrade scenarios with small crossing angles.
Less-simplified models of dark matter for direct detection and the LHC
NASA Astrophysics Data System (ADS)
Choudhury, Arghya; Kowalska, Kamila; Roszkowski, Leszek; Sessolo, Enrico Maria; Williams, Andrew J.
2016-04-01
We construct models of dark matter with suppressed spin-independent scattering cross section utilizing the existing simplified model framework. Even simple combinations of simplified models can exhibit interference effects that cause the tree-level contribution to the scattering cross section to vanish, thus demonstrating that direct detection limits on simplified models are not robust when embedded in a more complicated and realistic framework. In general, for fermionic WIMP masses ≳ 10 GeV direct detection limits on the spin-independent scattering cross section are much stronger than those coming from the LHC. However these model combinations, which we call less-simplified models, represent situations where LHC searches become more competitive than direct detection experiments even for moderate dark matter mass. We show that a complementary use of several searches at the LHC can strongly constrain the direct detection blind spots by setting limits on the coupling constants and mediator masses. We derive the strongest limits for combinations of vector + scalar, vector + "squark", and "squark" + scalar mediators, and present the corresponding projections for the LHC at 14 TeV for a number of searches: mono-jet, jets + missing energy, and searches for heavy vector resonances.
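The interference mechanism invoked in that abstract can be illustrated with a toy amplitude: with two mediators, the tree-level spin-independent amplitude is a sum of two terms whose relative sign can make them cancel. The numbers, signs and prefactors below are purely schematic, not the models' actual matrix elements:

```python
# Toy two-mediator amplitude: each mediator contributes g^2/m^2 at tree
# level, and a relative sign between the contributions allows destructive
# interference. All values are illustrative, not fitted parameters.
def si_amplitude(g1, m1, g2, m2):
    return g1**2 / m1**2 - g2**2 / m2**2

# Away from the blind spot the amplitude is finite...
assert si_amplitude(1.0, 1000.0, 0.5, 2000.0) != 0.0

# ...but tuning the second coupling so that g2^2/m2^2 = g1^2/m1^2 makes
# the scattering amplitude vanish exactly, suppressing direct detection
# while LHC production through either mediator remains open.
blind = si_amplitude(1.0, 1000.0, 2.0, 2000.0)
print(blind)  # 0.0
```

This is the sense in which direct detection limits on a single simplified model are "not robust": an experimentally invisible blind spot of the combined model maps onto perfectly visible parameter points of each model taken alone.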
High-precision QCD at hadron colliders:electroweak gauge boson rapidity distributions at NNLO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anastasiou, C.
2004-01-05
We compute the rapidity distributions of W and Z bosons produced at the Tevatron and the LHC through next-to-next-to-leading order in QCD. Our results demonstrate remarkable stability with respect to variations of the factorization and renormalization scales for all values of rapidity accessible in current and future experiments. These processes are therefore "gold-plated": current theoretical knowledge yields QCD predictions accurate to better than one percent. These results strengthen the proposal to use W and Z production to determine parton-parton luminosities and constrain parton distribution functions at the LHC. For example, LHC data should easily be able to distinguish the central parton distribution fit obtained by MRST from that obtained by Alekhin.
Dark Matter and Super Symmetry: Exploring and Explaining the Universe with Simulations at the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutsche, Oliver
The Large Hadron Collider (LHC) at CERN in Geneva, Switzerland, is one of the largest machines on this planet. It is built to smash protons into each other at unprecedented energies to reveal the fundamental constituents of our universe. The four detectors at the LHC record multi-petabyte datasets every year. The scientific analysis of this data requires equally large simulation datasets of the collisions, based on the theory of particle physics, the Standard Model. The goal is to verify the validity of the Standard Model or of theories that extend it, such as Supersymmetry and an explanation of Dark Matter. I will give an overview of the nature of the simulations needed to discover new particles like the Higgs boson in 2012, and review the different areas where simulations are indispensable: from the actual recording of the collisions, to the extraction of scientific results, to the conceptual design of improvements to the LHC and its experiments.
On the search for the electric dipole moment of strange and charm baryons at LHC
NASA Astrophysics Data System (ADS)
Botella, F. J.; Garcia Martin, L. M.; Marangotto, D.; Martinez Vidal, F.; Merli, A.; Neri, N.; Oyanguren, A.; Ruiz Vidal, J.
2017-03-01
Permanent electric dipole moments (EDMs) of fundamental particles provide powerful probes for physics beyond the Standard Model. We propose to search for the EDM of strange and charm baryons at the LHC, extending the ongoing experimental program on the neutron, muon, atoms, molecules and light nuclei. The EDM of strange Λ baryons, selected from weak decays of charm baryons produced in pp collisions at the LHC, can be determined by studying the spin precession in the magnetic field of the detector tracking system. A test of CPT symmetry can be performed by measuring the magnetic dipole moments of Λ and Λ̄ baryons. For short-lived Λ_c^+ and Ξ_c^+ baryons, to be produced in a fixed-target experiment using the 7 TeV LHC beam and channeled in a bent crystal, the spin precession is induced by the intense electromagnetic field between crystal atomic planes. The experimental layout based on the LHCb detector and the expected sensitivities in the coming years are discussed.
Probing top-Z dipole moments at the LHC and ILC
Röntsch, Raoul; Schulze, Markus
2015-08-11
We investigate the weak electric and magnetic dipole moments of top quark-Z boson interactions at the Large Hadron Collider (LHC) and the International Linear Collider (ILC). Their vanishingly small magnitude in the Standard Model makes these couplings ideal for probing New Physics interactions and for exploring the role of top quarks in electroweak symmetry breaking. In our analysis, we consider the production of two top quarks in association with a Z boson at the LHC, and top quark pairs mediated by neutral gauge bosons at the ILC. These processes yield direct sensitivity to top quark-Z boson interactions and complement indirect constraints from electroweak precision data. Our computation is accurate to next-to-leading order in QCD; we include the full decay chain of top quarks and the Z boson, and account for theoretical uncertainties in our constraints. We find that LHC experiments will soon be able to probe weak dipole moments for the first time.
Precision Timing with Silicon Sensors for Use in Calorimetry
NASA Astrophysics Data System (ADS)
Bornheim, A.; Ronzhin, A.; Kim, H.; Bolla, G.; Pena, C.; Xie, S.; Apresyan, A.; Los, S.; Spiropulu, M.; Ramberg, E.
2017-11-01
The high luminosity upgrade of the Large Hadron Collider (HL-LHC) at CERN is expected to provide instantaneous luminosities of 5 × 10^34 cm^-2 s^-1. The high luminosities expected at the HL-LHC will be accompanied by a factor of 5 to 10 more pileup compared with LHC conditions in 2015, causing general confusion for particle identification and event reconstruction. Precision timing makes it possible to extend calorimetric measurements into such a high-density environment by subtracting the energy deposits from pileup interactions. Calorimeters employing silicon as the active component have recently become a popular choice for the HL-LHC and future collider experiments which face very high radiation environments. We present studies of basic calorimetric and precision timing measurements using a prototype composed of a tungsten absorber and a silicon sensor as the active medium. We show that for the bulk of electromagnetic showers induced by electrons in the range of 20 GeV to 30 GeV, we can achieve time resolutions better than 25 ps per single pad sensor.
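As an editorial aside: when several pad sensors sample the same shower, their time stamps can be averaged. Under the simplifying assumption (not stated in the abstract) of independent, identical Gaussian single-sensor resolutions, the combined resolution scales as σ/√N. A minimal sketch:

```python
def combined_resolution(sigma_single_ps, n_sensors):
    """Timing resolution from averaging n sensor measurements.

    Assumes uncorrelated, identical Gaussian single-sensor resolutions,
    so the combined resolution is sigma / sqrt(n). This scaling is an
    illustrative assumption, not a result from the paper.
    """
    return sigma_single_ps / n_sensors ** 0.5

# 25 ps per single pad (as reported), averaged over 4 pads:
combined_resolution(25.0, 4)   # -> 12.5 ps
```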
LHC searches for dark sector showers
NASA Astrophysics Data System (ADS)
Cohen, Timothy; Lisanti, Mariangela; Lou, Hou Keong; Mishra-Sharma, Siddharth
2017-11-01
This paper proposes a new search program for dark sector parton showers at the Large Hadron Collider (LHC). These signatures arise in theories characterized by strong dynamics in a hidden sector, such as Hidden Valley models. A dark parton shower can be composed of both invisible dark matter particles as well as dark sector states that decay to Standard Model particles via a portal. The focus here is on the specific case of 'semi-visible jets', jet-like collider objects where the visible states in the shower are Standard Model hadrons. We present a Simplified Model-like parametrization for the LHC observables and propose targeted search strategies for regions of parameter space that are not covered by existing analyses. Following the 'mono-X' literature, the portal is modeled using either an effective field theoretic contact operator approach or with one of two ultraviolet completions; sensitivity projections are provided for all three cases. We additionally highlight that the LHC has a unique advantage over direct detection experiments in the search for this class of dark matter theories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaboud, Morad; et al.
This paper presents combinations of inclusive and differential measurements of the charge asymmetry ($A_{\mathrm{C}}$) in top quark pair ($\mathrm{t}\overline{\mathrm{t}}$) events with a lepton+jets signature by the ATLAS and CMS Collaborations, using data from LHC proton-proton collisions at centre-of-mass energies of 7 and 8 TeV, corresponding to integrated luminosities of about 5 and 20 fb$^{-1}$ per experiment, respectively. The resulting combined LHC measurements of the inclusive charge asymmetry are $A_{\mathrm{C}}^{\mathrm{LHC7}} = 0.005 \pm 0.007 \text{ (stat)} \pm 0.006 \text{ (syst)}$ at 7 TeV and $A_{\mathrm{C}}^{\mathrm{LHC8}} = 0.0055 \pm 0.0023 \text{ (stat)} \pm 0.0025 \text{ (syst)}$ at 8 TeV. These values, as well as the combination of $A_{\mathrm{C}}$ measurements as a function of the invariant mass of the $\mathrm{t}\overline{\mathrm{t}}$ system at 8 TeV, are consistent with the respective Standard Model predictions.
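For readers who want a single error bar, the quoted statistical and systematic uncertainties can be combined in quadrature, a standard simplification that assumes no stat-syst correlation (the combination papers themselves treat correlations more carefully):

```python
import math

def total_uncertainty(stat, syst):
    """Combine statistical and systematic uncertainties in quadrature,
    assuming they are uncorrelated (an illustrative simplification)."""
    return math.sqrt(stat ** 2 + syst ** 2)

# 8 TeV combination quoted above: A_C = 0.0055 +- 0.0023 (stat) +- 0.0025 (syst)
a_c = 0.0055
sigma_total = total_uncertainty(0.0023, 0.0025)   # roughly 0.0034
```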
MCdevelop - a universal framework for Stochastic Simulations
NASA Astrophysics Data System (ADS)
Slawinska, M.; Jadach, S.
2011-03-01
We present MCdevelop, a universal computer framework for developing and exploiting the wide class of Stochastic Simulations (SS) software. This powerful universal SS software development tool has been derived from a series of scientific projects for precision calculations in high energy physics (HEP), which feature a wide range of functionality in the SS software needed for advanced precision Quantum Field Theory calculations for the past LEP experiments and for the ongoing LHC experiments at CERN, Geneva. MCdevelop is a "spin-off" product of HEP to be exploited in other areas, while it will still serve to develop new SS software for HEP experiments. Typically SS involve independent generation of large sets of random "events", often requiring considerable CPU power. Since SS jobs usually do not share memory, they are easy to parallelize. Efficient development, testing and parallel running of SS software requires a convenient framework for developing source code, deploying and monitoring batch jobs, and merging and analysing results from multiple parallel jobs, even before the production runs have finished. Throughout the years of development of stochastic simulations for HEP, a sophisticated framework featuring all the above-mentioned functionality has been implemented. MCdevelop represents its latest version, written mostly in C++ (GNU compiler gcc). It uses Autotools to build binaries (optionally managed within the KDevelop 3.5.3 Integrated Development Environment (IDE)). It uses the open-source ROOT package for histogramming, graphics and the persistency mechanism for C++ objects. MCdevelop helps to run multiple parallel jobs on any computer cluster with an NQS-type batch system.
Program summary
Program title: MCdevelop
Catalogue identifier: AEHW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 48 136
No. of bytes in distributed program, including test data, etc.: 355 698
Distribution format: tar.gz
Programming language: ANSI C++
Computer: Any computer system or cluster with C++ compiler and UNIX-like operating system.
Operating system: Most UNIX systems, Linux. The application programs were thoroughly tested under Ubuntu 7.04, 8.04 and CERN Scientific Linux 5.
Has the code been vectorised or parallelised?: Tools (scripts) for optional parallelisation on a PC farm are included.
RAM: 500 bytes
Classification: 11.3
External routines: ROOT package version 5.0 or higher (http://root.cern.ch/drupal/).
Nature of problem: Developing any type of stochastic simulation program for high energy physics and other areas.
Solution method: Object Oriented programming in C++ with added persistency mechanism, batch scripts for running on PC farms and Autotools.
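The parallelization pattern the abstract describes, independent jobs with distinct random seeds whose partial histograms are merged, possibly before all jobs finish, can be sketched in a few lines. This is an illustrative toy, not MCdevelop code; `run_job` and the Gaussian observable are invented stand-ins:

```python
import random
from collections import Counter

def run_job(seed, n_events):
    """One independent stochastic-simulation job: generate events, fill a histogram."""
    rng = random.Random(seed)       # distinct seed per job -> independent streams
    hist = Counter()
    for _ in range(n_events):
        x = rng.gauss(0.0, 1.0)     # toy observable; real jobs run full event generators
        hist[round(x)] += 1         # integer-binned histogram
    return hist

def merge(histograms):
    """Merge partial histograms from parallel jobs. The merge is order-independent,
    so partial results can be combined before all production runs have finished."""
    total = Counter()
    for h in histograms:
        total.update(h)
    return total

# Four "batch jobs", run sequentially here for illustration:
partial = [run_job(seed, 10_000) for seed in range(4)]
merged = merge(partial)
```

Because the jobs share no memory, the same merge works whether results arrive from a local loop or from thousands of batch-system jobs.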
NASA Astrophysics Data System (ADS)
Shalaev, V.; Gorbunov, I.; Shmatov, S.
2018-04-01
In this paper we review the results of a measurement of the forward-backward asymmetry of oppositely charged lepton pairs produced via Z/γ* bosons in pp collisions during LHC Run 1 at √s = 8 TeV, with an integrated luminosity of 19.1 fb^-1 (2012). We also present our preliminary results obtained with Monte Carlo samples at √s = 13 TeV.
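The forward-backward asymmetry itself is a simple counting observable, A_FB = (N_F − N_B)/(N_F + N_B); a minimal sketch with invented event counts for illustration:

```python
def forward_backward_asymmetry(n_forward, n_backward):
    """A_FB = (N_F - N_B) / (N_F + N_B) for counted forward/backward lepton pairs."""
    return (n_forward - n_backward) / (n_forward + n_backward)

# Hypothetical counts, purely illustrative:
forward_backward_asymmetry(600, 400)   # -> 0.2
```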
The Information Loss for QCD Matter in Cylindrical Black Holes at LHC
NASA Astrophysics Data System (ADS)
Ghaffary, Tooraj; Pincak, Richard
2018-03-01
In this paper, the information loss is determined for QCD matter in cylindrical black holes at the LHC by extending the Gottesman and Preskill approach to cylindrical black holes and determining the information transfer from the collapsing matter to the outgoing Hawking radiation state for gluons and quarks. It is found that for all gluons and quarks with finite energies, the information from all emission processes experiences some degree of loss.
From many body wee partons dynamics to perfect fluid: a standard model for heavy ion collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venugopalan, R.
2010-07-22
We discuss a standard model of heavy ion collisions that has emerged both from experimental results of the RHIC program and associated theoretical developments. We comment briefly on the impact of early results of the LHC program on this picture. We consider how this standard model of heavy ion collisions could be solidified or falsified in future experiments at RHIC, the LHC and a future Electron-Ion Collider.
Development of a modular test system for the silicon sensor R&D of the ATLAS Upgrade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, H.; Benoit, M.; Chen, H.
High Voltage CMOS sensors are a promising technology for tracking detectors in collider experiments. Extensive R&D studies are being carried out by the ATLAS Collaboration for a possible use of HV-CMOS in the High Luminosity LHC upgrade of the Inner Tracker detector. CaRIBOu (Control and Readout Itk BOard) is a modular test system developed to test silicon-based detectors. It currently includes five custom-designed boards, a Xilinx ZC706 development board, a FELIX (Front-End LInk eXchange) PCIe card and a host computer. A software program has been developed in Python to control the CaRIBOu hardware. CaRIBOu has been used in the testbeam of the HV-CMOS sensor AMS180v4 at CERN. Preliminary results have shown that the test system is very versatile. Further development is ongoing to adapt it to different sensors and to make it available to various lab test stands.
Optimizing CMS build infrastructure via Apache Mesos
Abdurachmanov, David; Degano, Alessandro; Elmer, Peter; ...
2015-12-23
The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general-use open-source code. A critical ingredient in the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. We present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos enabled cluster and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.
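The gain from a dynamically shared pool over statically partitioned workers can be illustrated with a toy greedy scheduler. The job durations, classes and pool sizes below are invented for illustration and do not reflect the actual CMS cluster or the Mesos API:

```python
import heapq

def makespan(durations, n_workers):
    """Greedy list scheduling: give each job to the earliest-free worker,
    and return the time at which the last job finishes."""
    free = [0.0] * n_workers
    heapq.heapify(free)
    finish = 0.0
    for d in durations:
        t = heapq.heappop(free)     # earliest-available worker
        finish = max(finish, t + d)
        heapq.heappush(free, t + d)
    return finish

# Two hypothetical job classes (durations in minutes):
builds = [30, 30, 30, 30]
tests = [5, 5, 5, 5, 5, 5, 5, 5]

# Static partitioning: 2 workers reserved per class; pool sizes sum to 4.
static = max(makespan(builds, 2), makespan(tests, 2))       # 60 minutes
# Dynamic shared pool (Mesos-style): all 4 workers shared, longest jobs first.
dynamic = makespan(sorted(builds + tests, reverse=True), 4)  # 40 minutes
```

With identical total work and worker count, the shared pool finishes sooner because idle test workers can absorb build jobs, the same effect the abstract reports as better resource usage and lower latency.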
NASA Astrophysics Data System (ADS)
Ballarino, A.; Giannelli, S.; Jacquemod, A.; Leclercq, Y.; Ortiz Ferrer, C.; Parma, V.
2017-12-01
The High Luminosity LHC (HL-LHC) is a project aiming to upgrade the Large Hadron Collider (LHC) after 2020-2025 in order to increase the integrated luminosity by about one order of magnitude and extend the operational capabilities until 2035. The upgrade of the focusing triplet insertions for the Atlas and CMS experiments foresees using superconducting magnets operating in a pressurised superfluid helium bath at 1.9 K. The increased radiation levels from the particle debris produced by particle collisions in the experiments require that the power converters are placed in radiation shielded zones located in a service gallery adjacent to the main tunnel. The powering of the magnets from the gallery is achieved by means of MgB2 superconducting cables in a 100-m long flexible cryostat transfer line, actively cooled by 4.5 K to 20 K gaseous helium generated close to the magnets. At the highest temperature end, the helium flow cools the High Temperature Superconducting (HTS) current leads before being recovered at room temperature. At the magnet connection side, a dedicated connection box allows connection to the magnets and a controlled boil-off production of helium for the cooling needs of the powering system. This paper presents the overall concept of the cryostat system from the magnet connection boxes, through the flexible cryostat transfer line, to the connection box of the current leads.
WLCG and IPv6 - The HEPiX IPv6 working group
Campana, S.; Chadwick, K.; Chen, G.; ...
2014-06-11
The HEPiX (http://www.hepix.org) IPv6 Working Group has been investigating the many issues which feed into the decision on the timetable for the use of IPv6 (http://www.ietf.org/rfc/rfc2460.txt) networking protocols in High Energy Physics (HEP) computing, in particular in the Worldwide Large Hadron Collider (LHC) Computing Grid (WLCG). RIPE NCC, the European Regional Internet Registry (RIR), ran out of IPv4 addresses in September 2012. The North and South America RIRs are expected to run out soon. In recent months it has become clearer that some WLCG sites, including CERN, are running short of IPv4 address space, now without the possibility of applying for more. This has increased the urgency for the switch-on of dual-stack IPv4/IPv6 on all outward-facing WLCG services to allow for the eventual support of IPv6-only clients. The activities of the group include the analysis and testing of the readiness for IPv6 and the performance of many required components, including the applications, middleware, management and monitoring tools essential for HEP computing. Many WLCG Tier 1/2 sites are participants in the group's distributed IPv6 testbed and the major LHC experiment collaborations are engaged in the testing. We are constructing a group web/wiki which will contain useful information on the IPv6 readiness of the various software components and a knowledge base (http://hepix-ipv6.web.cern.ch/knowledge-base). This paper describes the work done by the working group and its future plans.
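Whether a service is reachable dual-stack can be probed with the standard `getaddrinfo()` resolver call, which returns one entry per address family published for the name. The helper below merely classifies resolver results; it is an editorial sketch, not part of the working group's tooling:

```python
import socket

def stacks(addrinfo):
    """Classify getaddrinfo() results by the IP protocol versions they cover."""
    families = {entry[0] for entry in addrinfo}  # first tuple field is the address family
    protocols = set()
    if socket.AF_INET in families:
        protocols.add("IPv4")
    if socket.AF_INET6 in families:
        protocols.add("IPv6")
    return protocols

# Live check of a service's published stacks (requires DNS access):
#   stacks(socket.getaddrinfo("www.example.org", 443, proto=socket.IPPROTO_TCP))
```

A dual-stack service returns `{"IPv4", "IPv6"}`; an IPv6-only client can use it only if the set contains "IPv6".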
Big Data in HEP: A comprehensive use case study
Gutsche, Oliver; Cremonesi, Matteo; Elmer, Peter; ...
2017-11-23
Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems, collectively called Big Data technologies, have emerged to support the analysis of Petabyte and Exabyte datasets in industry. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies use different approaches, promise a fresh look at the analysis of very large datasets, and could potentially reduce the time-to-physics with increased interactivity. In this talk, we present an active LHC Run 2 analysis, searching for dark matter with the CMS detector, as a testbed for Big Data technologies. We directly compare the traditional NTuple-based analysis with an equivalent analysis using Apache Spark on the Hadoop ecosystem and beyond. In both cases, we start the analysis with the official experiment data formats and produce publication physics plots. We discuss the advantages and disadvantages of each approach and give an outlook on further studies needed.
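The "filtering and transforming" pattern common to both the NTuple and the Spark approach can be sketched with plain Python on toy events. The field names (`met`, `n_jets`) and the cut value are invented for illustration and are not an experiment data format:

```python
from collections import Counter

# Toy "events", standing in for entries read from an experiment format:
events = [
    {"met": 250.0, "n_jets": 3},
    {"met": 40.0,  "n_jets": 1},
    {"met": 310.0, "n_jets": 4},
    {"met": 120.0, "n_jets": 2},
]

# Filter: keep events passing a (hypothetical) missing-energy cut ...
selected = [e for e in events if e["met"] > 200.0]
# ... then transform: histogram a derived quantity for plotting.
njet_hist = Counter(e["n_jets"] for e in selected)
```

Whether this loop runs over local NTuples or is expressed as dataframe operations distributed by Spark, the analysis logic is the same; the technologies differ in how the iteration is scheduled and scaled.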
Extreme I/O on HPC for HEP using the Burst Buffer at NERSC
NASA Astrophysics Data System (ADS)
Bhimji, Wahid; Bard, Debbie; Burleigh, Kaylan; Daley, Chris; Farrell, Steve; Fasel, Markus; Friesen, Brian; Gerhardt, Lisa; Liu, Jialin; Nugent, Peter; Paul, Dave; Porter, Jeff; Tsulaia, Vakho
2017-10-01
In recent years there has been increasing use of HPC facilities for HEP experiments. This has initially focussed on less I/O intensive workloads such as generator-level or detector simulation. We now demonstrate the efficient running of I/O-heavy analysis workloads on HPC facilities at NERSC, for the ATLAS and ALICE LHC collaborations as well as astronomical image analysis for DESI and BOSS. To do this we exploit a new 900 TB NVRAM-based storage system recently installed at NERSC, termed a Burst Buffer. This is a novel approach to HPC storage that builds on-demand filesystems on all-SSD hardware that is placed on the high-speed network of the new Cori supercomputer. We describe the hardware and software involved in this system, and give an overview of its capabilities, before focusing in detail on how the ATLAS, ALICE and astronomical workflows were adapted to work on this system. We describe these modifications and the resulting performance results, including comparisons to other filesystems. We demonstrate that we can meet the challenging I/O requirements of HEP experiments and scale to many thousands of cores accessing a single shared storage system.
Opportunistic Resource Usage in CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kreuzer, Peter; Hufnagel, Dirk; Dykstra, D.
2014-01-01
CMS is using a tiered setup of dedicated computing resources provided by sites distributed over the world and organized in the WLCG. These sites pledge resources to CMS and prepare them especially for CMS to run the experiment's applications. But there are more resources available opportunistically, both on the Grid and in local university and research clusters, which can be used for CMS applications. We present CMS' strategy to use opportunistic resources and prepare them dynamically to run CMS applications. CMS is able to run its applications on resources that can be reached through the Grid or through EC2-compliant cloud interfaces. Even resources that can be used through ssh login nodes can be harnessed. All of these usage modes are integrated transparently into the GlideIn WMS submission infrastructure, which is the basis of CMS' opportunistic resource usage strategy. Technologies like Parrot, to mount the software distribution via CVMFS, and xrootd, for access to data and simulation samples via the WAN, are used and will be described. We summarize the experience with opportunistic resource usage and give an outlook for the restart of LHC data taking in 2015.
Reach of the high-energy LHC for gluinos and top squarks in SUSY models with light Higgsinos
NASA Astrophysics Data System (ADS)
Baer, Howard; Barger, Vernon; Gainer, James S.; Serce, Hasan; Tata, Xerxes
2017-12-01
We examine the top squark (stop) and gluino reach of the proposed 33 TeV energy upgrade of the Large Hadron Collider (LHC33) in the Minimal Supersymmetric Standard Model (MSSM) with light Higgsinos and relatively heavy electroweak gauginos. In our analysis, we assume that stops decay to Higgsinos via t̃_1 → t Z̃_1, t̃_1 → t Z̃_2, and t̃_1 → b W̃_1 with branching fractions in the ratio 1:1:2 (expected if the decay occurs dominantly via the superpotential Yukawa coupling), while gluinos decay via g̃ → t t̃_1 or via three-body decays to third-generation quarks plus Higgsinos. These decay patterns are motivated by models of natural supersymmetry where Higgsinos are expected to be close in mass to m_Z, but gluinos may be as heavy as 5-6 TeV and stops may have masses up to ~3 TeV. We devise cuts to optimize the signals from stop and gluino pair production at LHC33. We find that experiments at LHC33 should be able to discover stops with > 5σ significance if m_t̃_1 < 2.3 (2.8) [3.2] TeV for an integrated luminosity of 0.3 (1) [3] ab^-1. The corresponding reach for gluinos extends to 5 (5.5) [6] TeV. These results imply that experiments at LHC33 should be able to discover at least one of the stop or gluino pair signals even with an integrated luminosity of 0.3 ab^-1 for natural supersymmetry models with no worse than 3% electroweak fine-tuning, and quite likely both gluinos and stops for an integrated luminosity of 3 ab^-1.
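The assumed 1:1:2 split among the three stop decay channels is easy to mimic in a toy decay sampler (purely illustrative; channel labels are shorthand for the decays above):

```python
import random

CHANNELS = ["t Z1", "t Z2", "b W1"]   # stop decay channels from the abstract
WEIGHTS = [1, 1, 2]                   # assumed branching ratio 1:1:2

def sample_decays(n, seed=0):
    """Draw n stop decays according to the assumed branching fractions."""
    rng = random.Random(seed)
    return rng.choices(CHANNELS, weights=WEIGHTS, k=n)

decays = sample_decays(100_000)
frac_bW = decays.count("b W1") / len(decays)   # expect about 0.5
```

Such per-channel sampling is the elementary ingredient of the signal simulations from which the quoted reach estimates are derived.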
ActiWiz 3 – an overview of the latest developments and their application
NASA Astrophysics Data System (ADS)
Vincke, H.; Theis, C.
2018-06-01
In 2011 the ActiWiz code was developed at CERN in order to optimize the choice of materials for accelerator equipment from a radiological point of view. Since then the code has been extended to allow for calculating complete nuclide inventories and providing evaluations with respect to radiotoxicity, inhalation doses, etc. Until now the software included only pre-defined radiation environments for CERN's high-energy proton accelerators, which were based on FLUKA Monte Carlo calculations. Eventually the decision was taken to invest in a major revamping of the code. Starting with version 3, the software is no longer limited to pre-defined radiation fields: within a few seconds it can also treat arbitrary environments for which fluence spectra are available. This has become possible thanks to the use of ~100 CPU-years' worth of FLUKA Monte Carlo simulations as well as the JEFF cross-section library for neutrons below 20 MeV. The latest code version also allowed for the efficient inclusion of 42 additional radiation environments of the LHC experiments, as well as considerably more flexibility in characterizing waste from CERN's Large Electron Positron collider (LEP). New, fully integrated analysis functionalities, such as automatic evaluation of difficult-to-measure nuclides and rapid assessment of the temporal evolution of quantities like radiotoxicity or dose rates, make the software a powerful characterization tool complementary to general-purpose MC codes like FLUKA. In this paper an overview of the capabilities is given, using recent examples from the domain of waste characterization as well as operational radiation protection.
Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue
Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; ...
2017-10-01
The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments rely more and more on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be adapted quickly to new types of computing resources and new information sources, and allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.
Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey
Here, the Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a modelmore » does not allow to validate topology and configuration information easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are more and more relying on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue) which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources, new information sources, and allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.« less
Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue
NASA Astrophysics Data System (ADS)
Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; Bagliesi, Giuseppe; Belforte, Stephano; Campana, Simone; Dimou, Maria; Flix, Jose; Forti, Alessandra; di Girolamo, A.; Karavakis, Edward; Lammel, Stephan; Litmaath, Maarten; Sciaba, Andrea; Valassi, Andrea
2017-10-01
The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and a detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in the various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are increasingly relying on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be quickly adaptable to new types of computing resources and new information sources, and should allow new data structures to be implemented easily, following the evolution of the computing models and operations of the experiments.
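The core idea of a configuration catalogue such as CRIC, namely ingesting service records from several providers, validating them and answering discovery queries per VO, can be sketched in a few lines. The record fields and validation rules below are invented for illustration and are not CRIC's actual data model or API.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    site: str
    flavour: str                      # e.g. "CE" (compute) or "SE" (storage)
    endpoint: str
    vos: set = field(default_factory=set)

class Catalogue:
    """Toy information catalogue: collect service records from several
    providers, validate them, and answer discovery queries per VO."""
    def __init__(self):
        self.services = []

    def ingest(self, records):
        for r in records:
            if not r.endpoint or not r.vos:   # minimal validation:
                continue                      # reject incomplete records
            self.services.append(r)

    def discover(self, vo, flavour=None):
        return [s for s in self.services
                if vo in s.vos and (flavour is None or s.flavour == flavour)]

cat = Catalogue()
cat.ingest([
    Service("SITE-A", "CE", "ce01.example.org", {"atlas", "cms"}),
    Service("SITE-B", "SE", "se.example.org", {"cms"}),
    Service("SITE-C", "CE", "", {"lhcb"}),    # dropped by validation
])
ce_for_cms = cat.discover("cms", flavour="CE")
```

In the real service, the validated catalogue is exposed through REST APIs rather than in-process queries, but the ingest/validate/discover cycle is the same.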
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brower, Richard C.
This proposal is to develop the software and algorithmic infrastructure needed for the numerical study of quantum chromodynamics (QCD), and of theories that have been proposed to describe physics beyond the Standard Model (BSM) of high energy physics, on current and future computers. This infrastructure will enable users (1) to improve the accuracy of QCD calculations to the point where they no longer limit what can be learned from high-precision experiments that seek to test the Standard Model, and (2) to determine the predictions of BSM theories in order to understand which of them are consistent with the data that will soon be available from the LHC. Work will include the extension and optimization of community codes for the next generation of leadership-class computers, the IBM Blue Gene/Q and the Cray XE/XK, and for the dedicated hardware funded for our field by the Department of Energy. Members of our collaboration at Brookhaven National Laboratory and Columbia University worked on the design of the Blue Gene/Q, and have begun to develop software for it. Under this grant we will build upon their experience to produce high-efficiency production codes for this machine. Cray XE/XK computers with many thousands of GPU accelerators will soon be available, and the dedicated commodity clusters we obtain with DOE funding include growing numbers of GPUs. We will work with our partners in NVIDIA's Emerging Technology group to scale our existing software to thousands of GPUs, and to produce highly efficient production codes for these machines. Work under this grant will also include the development of new algorithms for the effective use of heterogeneous computers, and their integration into our codes. It will include improvements of Krylov solvers and the development of new multigrid methods in collaboration with members of the FASTMath SciDAC Institute, using their HYPRE framework, as well as work on improved symplectic integrators.
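The Krylov solvers mentioned in the proposal are built around iterations like the conjugate gradient method. A minimal dense-matrix version is sketched below purely to illustrate the technique; a lattice QCD code would instead apply a sparse Dirac operator and run in parallel, and the small system here is an invented stand-in.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Minimal conjugate gradient solver for A x = b, with A symmetric
    positive definite. This is the prototype of the Krylov iterations
    used for lattice operators (here A is a plain dense matrix)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                          # residual r = b - A x, with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:              # converged: residual norm small enough
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# Small symmetric positive definite test system
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```

Multigrid methods, also mentioned above, accelerate exactly this kind of iteration by solving coarsened versions of the same system and using them as preconditioners.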
A new experiment-independent mechanism to persistify and serve the detector geometry of ATLAS
NASA Astrophysics Data System (ADS)
Bianchi, Riccardo Maria; Boudreau, Joseph; Vukotic, Ilija
2017-10-01
The complex geometry of the whole detector of the ATLAS experiment at the LHC is currently stored only in custom online databases, from which it is built on-the-fly on request. Accessing the online geometry guarantees access to the latest version of the detector description, but requires the setup of the full ATLAS software framework "Athena", which provides the online services and the tools to retrieve the data from the database. This operation is cumbersome and slows down the applications that need to access the geometry. Moreover, all applications that access the detector geometry must be built and run on the same platform as the ATLAS framework, preventing the usage of the actual detector geometry in stand-alone applications. Here we propose a new mechanism to persistify and serve the geometry of HEP experiments (in software development in general, and in HEP computing in particular, to persistify means to take an object that lives only in memory, for example because it was built on-the-fly while processing the experimental data, serialize it, and store it on disk as a persistent object). The new mechanism is composed of a new file format and the modules to make use of it. The new file format allows the whole detector description to be stored locally in a file, and it is especially optimized to describe large complex detectors with the minimum file size, making use of shared instances and storing compressed representations of geometry transformations. The detector description can then be read back in to fully restore the in-memory geometry tree. Moreover, a dedicated REST API is being designed and developed to serve the geometry in standard exchange formats like JSON, letting users and applications download specific partial geometry information. With this new geometry persistification, a new generation of applications could be developed that use the actual detector geometry while being platform-independent and experiment-independent.
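The shared-instance optimization described above can be illustrated with a toy serializer: each distinct node of the geometry tree is written to a pool exactly once and referenced by index, so a sub-volume placed many times is stored only once, and the sharing is reconstructed on read-back. The node representation below is invented for illustration and is unrelated to the actual ATLAS file format.

```python
import json

def persistify(node, pool=None, index=None):
    """Serialize a geometry tree into a shared-instance pool: each
    distinct node is stored once and referenced by its pool index."""
    if pool is None:
        pool, index = [], {}
    key = id(node)
    if key in index:                  # node already in the pool: reuse it
        return index[key], pool
    entry = {"name": node["name"], "children": []}
    pool.append(entry)
    idx = len(pool) - 1
    index[key] = idx
    entry["children"] = [persistify(c, pool, index)[0] for c in node["children"]]
    return idx, pool

def restore(idx, pool, cache=None):
    """Rebuild the in-memory tree, preserving shared instances."""
    if cache is None:
        cache = {}
    if idx in cache:
        return cache[idx]
    entry = pool[idx]
    node = {"name": entry["name"], "children": []}
    cache[idx] = node
    node["children"] = [restore(c, pool, cache) for c in entry["children"]]
    return node

# A sensor sub-volume placed twice under the same mother volume:
sensor = {"name": "sensor", "children": []}
world = {"name": "world", "children": [sensor, sensor]}
root, pool = persistify(world)
blob = json.dumps(pool)               # what would be written to the file
rebuilt = restore(root, json.loads(blob))
```

Note that the rebuilt tree contains a single sensor object referenced twice, mirroring how shared instances keep the persistent representation compact.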
Assembly Tests of the First Nb3Sn Low-Beta Quadrupole Short Model for the Hi-Lumi LHC
Pan, H.; Felice, H.; Cheng, D. W.; ...
2016-01-18
In preparation for the high-luminosity upgrade of the Large Hadron Collider (LHC), the LHC Accelerator Research Program (LARP) in collaboration with CERN is pursuing the development of MQXF: a 150-mm-aperture high-field Nb3Sn quadrupole magnet. The development phase starts with the fabrication and test of several short models (1.2-m magnetic length) and will continue with the development of several long prototypes. All of them are mechanically supported using a shell-based support structure, which has been extensively demonstrated on several R&D models within LARP. The first short model, MQXFS-AT, has been assembled at LBNL with coils fabricated by LARP and CERN. In our paper, we summarize the assembly process and show how it relies strongly on experience acquired during the LARP 120-mm-aperture HQ magnet series. We also present a comparison between strain gauge data and finite-element model analysis. Finally, we present the implications of the MQXFS-AT experience for the design of the long prototype support structure.
NASA Astrophysics Data System (ADS)
Dewhurst, A.; Legger, F.
2015-12-01
The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are running daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We report on the impact such changes have on the DA infrastructure, describe the new DA components, and include recent performance measurements.
Elastic Extension of a CMS Computing Centre Resources on External Clouds
NASA Astrophysics Data System (ADS)
Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.
2016-10-01
After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may however generate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS - along the lines followed by other LHC experiments - is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the "Cloud Bursting" of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time serve as an extension of the farm for local usage. The amount of allocated resources can thus be elastically adjusted to the needs of the CMS experiment and local users. Moreover, direct access/integration of OpenStack resources to the CMS workload management system is explored. In this paper we present this approach, report on the performance of the on-demand allocated resources, and discuss the lessons learned and the next steps.
EDITORIAL: Metrological Aspects of Accelerator Technology and High Energy Physics Experiments
NASA Astrophysics Data System (ADS)
Romaniuk, Ryszard S.; Pozniak, Krzysztof T.
2007-08-01
The subject of this special feature in Measurement Science and Technology concerns measurement methods, devices and subsystems, both hardware and software aspects, applied in large experiments of high energy physics (HEP) and superconducting RF accelerator technology (SRF). These experiments concern mainly the physics of elementary particles or the building of new machines and detectors. The papers present practical examples of applied solutions in large, contemporary, international research projects such as HERA, LHC, FLASH, XFEL, ILC and others. These machines are unique in their global scale and consist of highly dedicated apparatus. The apparatus is characterized by very large dimensions, a considerable use of resources and a high level of overall technical complexity. They possess a large number of measurement channels (ranging from thousands to over 100 million), are characterized by fast processing of measured data and high measurement accuracies, and work in quite adverse environments. The measurement channels cooperate with a large number of different sensors of momenta, energies, trajectories of elementary particles, electron, proton and photon beam profiles, accelerating fields in resonant cavities, and many others. The provision of high quality measurement systems requires the designers to use only the most up-to-date technical solutions, measurement technologies, components and devices. Research work in these demanding fields is a natural birthplace of new measurement methods, new data processing and acquisition algorithms, and complex, networked measurement system diagnostics and monitoring. These developments are taking place in both hardware and software layers. The chief intention of this special feature is that the papers represent equally some of the most current metrology research problems in HEP and SRF.
The accepted papers have been divided into four topical groups: superconducting cavities (4 papers), low level RF systems (8 papers), ionizing radiation (5 papers) and HEP experiments (8 papers). The editors would like to cordially thank all the authors who accepted our invitation to present their very recent results. A number of authors of the papers in this issue are active in the 6th European Framework Research Programme CARE (Coordinated Accelerator Research in Europe) and ELAN (the European Linear Accelerator Network). Some authors are active in research programs of global extent such as the LHC, ILC and GDE, the Global Design Effort for the International Linear Collider. We would also like to thank personally, as well as on behalf of all the authors, the Editorial Board of Measurement Science and Technology for accepting this very exciting field of contemporary metrology. This field really seems to be a birthplace of a host of new metrological technologies, where the driving force is the incredibly high technical requirements that must soon be fulfilled if we are to build new accelerators for elementary particles and to develop new biological materials and medicines alike. Special thanks are due to Professor R S Jachowicz of Warsaw University of Technology for initiating this issue and for continuous support and advice during our work.
Dark Higgs bosons at the ForwArd Search ExpeRiment
NASA Astrophysics Data System (ADS)
Feng, Jonathan L.; Galon, Iftah; Kling, Felix; Trojanowski, Sebastian
2018-03-01
FASER, ForwArd Search ExpeRiment at the LHC, has been proposed as a small, very far forward detector to discover new, light, weakly-coupled particles. Previous work showed that with a total volume of just ~0.1-1 m3, FASER can discover dark photons in a large swath of currently unconstrained parameter space, extending the discovery reach of the LHC program. Here we explore FASER's discovery prospects for dark Higgs bosons. These scalar particles are an interesting foil for dark photons, as they probe a different renormalizable portal interaction and are produced dominantly through B and K meson decays, rather than pion decays, leading to less collimated signals. Nevertheless, we find that FASER is also a highly sensitive probe of dark Higgs bosons with significant discovery prospects that are comparable to, and complementary to, much larger proposed experiments.
High Energy Physics Research with the CMS Experiment at CERN - Energy Frontier Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanson, Gail G.
The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) near Geneva, Switzerland, is now the highest energy accelerator in the world, colliding protons with protons. On July 4, 2012, the two general-purpose experiments, ATLAS and the Compact Muon Solenoid (CMS) experiment, announced the observation of a particle consistent with the world's most sought-after particle, the Higgs boson, at a mass of about 125 GeV (approximately 125 times the mass of the proton). The Higgs boson is the final missing ingredient of the standard model, in which it is needed to allow most other particles to acquire mass through the mechanism of electroweak symmetry breaking. We are members of the team in the CMS experiment that found evidence for the Higgs boson through its decay to two photons, the most sensitive channel at the LHC. We are proposing to carry out studies to determine whether the new particle has the properties expected for the standard model Higgs boson or whether it is something else. The new particle can still carry out its role in electroweak symmetry breaking but have other properties as well. Most theorists think that a single standard model Higgs boson cannot be the complete solution: there are other particles needed to answer some of the remaining questions, such as the hierarchy problem. The particle that has been observed could be one of several Higgs bosons, for example, or it could be composite. One model of physics beyond the standard model is supersymmetry, in which every ordinary particle has a superpartner with opposite spin properties. In supersymmetric models, there must be at least five Higgs bosons. In the most popular versions of supersymmetry, the lightest supersymmetric particle does not decay and is a candidate for dark matter. This proposal covers the period from June 1, 2013, to March 31, 2016. During this period the LHC will finally reach its design energy, almost twice the energy at which it now runs.
We will be able to study the Higgs boson at the current LHC energy using about three times as much data as were used to make the observation. In 2013 the LHC will shut down to make preparations to run at its design energy in 2015. During the shutdown period, we will be preparing upgrades of the detector to be able to run at the higher rates of proton-proton collisions that will also be possible once the LHC is running at design energy. The upgrade on which we are working, the inner silicon pixel tracker, will be installed in late 2016. Definitive tests of whether the new particle satisfies the properties of the standard model Higgs boson will almost certainly require both the higher energy and the larger amounts of data that can be accumulated at the higher rates. Meanwhile we will use the data taken during 2012 and the higher energy data starting in 2015 to continue to search for beyond-the-standard-model physics such as supersymmetry and heavy neutrinos. We have already made such searches using data taken since the LHC started running. We are discussing with theorists how a 125-GeV Higgs modifies such models. Finding such particles will probably also require the higher energy and larger amounts of data beginning in 2015. The period of this proposal promises to be very exciting, leading to new knowledge of the matter in the Universe.
LeaRN: A Collaborative Learning-Research Network for a WLCG Tier-3 Centre
NASA Astrophysics Data System (ADS)
Pérez Calle, Elio
2011-12-01
The Department of Modern Physics of the University of Science and Technology of China is hosting a Tier-3 centre for the ATLAS experiment. An interdisciplinary team of researchers, engineers and students is devoted to the task of receiving, storing and analysing the scientific data produced by the LHC. In order to achieve the highest performance and to develop a knowledge base shared by all members of the team, the research activities and their coordination are supported by an array of computing systems. These systems have been designed to foster communication, collaboration and coordination among the members of the team, both face-to-face and remotely, and in both synchronous and asynchronous ways. The result is a collaborative learning-research network whose main objectives are awareness (shared knowledge about others' activities, and therefore synergies), articulation (allowing a project to be divided, work units to be assigned and then reintegrated) and adaptation (adapting information technologies to the needs of the group). The main technologies involved are communication tools such as web publishing, revision control and wikis; conferencing tools such as forums, instant messaging and video conferencing; and coordination tools such as time management, project management and social networks. The software toolkit has been deployed by the members of the team and is based on free and open source software.
Integrating multiple scientific computing needs via a Private Cloud infrastructure
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.
2014-06-01
In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines according to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and with minimal downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site, a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bobyshev, A.; DeMar, P.; Grigaliunas, V.
The LHC is entering its fourth year of production operation. Most Tier1 facilities have been in operation for almost a decade, when development and ramp-up efforts are included. The LHC's distributed computing model is based on the availability of high capacity, high performance network facilities for both WAN and LAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to be on the leading edge of data center networking technology. In this paper, we analyze past and current developments in Tier1 LAN networking, as well as extrapolating where we anticipate networking technology is heading. Our analysis will include examination of the following areas: the evolution of Tier1 centers to their current state; evolving data center networking models and how they apply to Tier1 centers; the impact of emerging network technologies (e.g. 10GE-connected hosts, 40GE/100GE links, IPv6) on Tier1 centers; trends in WAN data movement and the emergence of software-defined WAN network capabilities; and network virtualization.
Scrutinizing the alignment limit in two-Higgs-doublet models. II. mH=125 GeV
NASA Astrophysics Data System (ADS)
Bernon, Jérémy; Gunion, John F.; Haber, Howard E.; Jiang, Yun; Kraml, Sabine
2016-02-01
In the alignment limit of a multidoublet Higgs sector, one of the Higgs mass eigenstates aligns in field space with the direction of the scalar field vacuum expectation values, and its couplings approach those of the Standard Model (SM) Higgs boson. We consider CP-conserving two-Higgs-doublet models (2HDMs) of type I and type II near the alignment limit in which the heavier of the two CP-even Higgs bosons, H, is the SM-like state observed with a mass of 125 GeV, and the couplings of H to gauge bosons approach those of the SM. We review the theoretical structure and analyze the phenomenological implications of this particular realization of the alignment limit, where decoupling of the extra states cannot occur given that the lighter CP-even state h must, by definition, have a mass below 125 GeV. For the numerical analysis, we perform scans of the 2HDM parameter space employing the software packages 2HDMC and Lilith, taking into account all relevant pre-LHC constraints, constraints from the measurements of the 125 GeV Higgs signal at the LHC, as well as the most recent limits coming from searches for other Higgs-like states. Implications for Run 2 at the LHC, including expectations for observing the other scalar states, are also discussed.
Streamlining CASTOR to manage the LHC data torrent
NASA Astrophysics Data System (ADS)
Lo Presti, G.; Espinal Curull, X.; Cano, E.; Fiorini, B.; Ieri, A.; Murray, S.; Ponce, S.; Sindrilaru, E.
2014-06-01
This contribution describes the evolution of the main CERN storage system, CASTOR, as it manages the bulk data stream of the LHC and other CERN experiments, achieving over 90 PB of stored data by the end of LHC Run 1. This evolution was marked by the introduction of policies to optimize the tape sub-system throughput, moving towards a cold storage system where data placement is managed by the experiments' production managers. More efficient tape migrations and recalls have been implemented and deployed, where bulk meta-data operations greatly reduce the overhead due to small files. A repack facility is now integrated in the system and has been enhanced to automate the repacking of several tens of petabytes, required in 2014 to prepare for the next LHC run. Finally, the scheduling system has been evolved to integrate the internal monitoring. To efficiently manage the service, a solid monitoring infrastructure is required, capable of analyzing the logs produced by the different components (about 1 kHz of log messages). A new system has been developed and deployed, which uses a transport messaging layer provided by the CERN-IT Agile Infrastructure and exploits technologies including Hadoop and HBase. This enables efficient data mining by making use of MapReduce techniques, as well as real-time data aggregation and visualization. The outlook for the future is also presented: directions and possible evolution are discussed in view of the restart of data taking activities.
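The MapReduce-style log mining mentioned above boils down to a map step that extracts key-value pairs from each log line and a reduce step that aggregates them per key. The toy log format below is invented for illustration; the real system parses CASTOR component logs and runs the two phases distributed on Hadoop rather than in a single process.

```python
from collections import Counter
from itertools import chain

def map_phase(log_line):
    """Map step: emit ((component, severity), 1) pairs from one log line.
    Assumed toy line format: '<timestamp> <component> <severity> <message>'."""
    _, component, severity, _ = log_line.split(" ", 3)
    return [((component, severity), 1)]

def reduce_phase(pairs):
    """Reduce step: sum the counts per (component, severity) key."""
    counts = Counter()
    for key, n in pairs:
        counts[key] += n
    return counts

logs = [
    "t0 tapeserver INFO mount ok",
    "t1 tapeserver ERROR drive fault",
    "t2 stager INFO file staged",
    "t3 tapeserver ERROR drive fault",
]
summary = reduce_phase(chain.from_iterable(map_phase(l) for l in logs))
# summary[("tapeserver", "ERROR")] == 2
```

At ~1 kHz of log messages the same pattern scales by partitioning the map output by key across many reducers, which is exactly what Hadoop automates.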
Flavour physics and the Large Hadron Collider beauty experiment.
Gibson, Valerie
2012-02-28
An exciting new era in flavour physics has just begun with the start of the Large Hadron Collider (LHC). The LHCb (where b stands for beauty) experiment, designed specifically to search for new phenomena in quantum loop processes and to provide a deeper understanding of matter-antimatter asymmetries at the most fundamental level, is producing many new and exciting results. It gives me great pleasure to describe a selected few of the results here, in particular the search for rare B0s → μ+μ- decays and the measurement of the B0s charge-conjugation parity-violating phase, both of which offer high potential for the discovery of new physics at and beyond the LHC energy frontier in the very near future.
The ATLAS Diamond Beam Monitor: Luminosity detector at the LHC
NASA Astrophysics Data System (ADS)
Schaefer, D. M.; ATLAS Collaboration
2016-07-01
After the first three years of the LHC running, the ATLAS experiment extracted its pixel detector system to refurbish and re-position the optical readout drivers and install a new barrel layer of pixels. The experiment has also taken advantage of this access to install a set of beam monitoring telescopes with pixel sensors, four each in the forward and backward regions. These telescopes are based on chemical vapor deposited (CVD) diamond sensors to survive in this high radiation environment without needing extensive cooling. This paper describes the lessons learned in construction and commissioning of the ATLAS Diamond Beam Monitor (DBM). We show results from the construction quality assurance tests and commissioning performance, including results from cosmic ray running in early 2015.
Answering Gauguin's Questions: Where Are We Coming From, Where Are We Going, and What Are We?
Ellis, John [CERN]
2017-12-09
The knowledge of matter revealed by the current reigning theory of particle physics, the so-called Standard Model, still leaves open many basic questions. What is the origin of the matter in the Universe? How does its mass originate? What is the nature of the dark matter that fills the Universe? Are there additional dimensions of space? The Large Hadron Collider (LHC) at the CERN Laboratory in Geneva, Switzerland, where high-energy experiments have now started, will take physics into a new realm of energy and time, and will address these physics analogues of Gauguin's questions. The answers will set the stage for possible future experiments beyond the scope of the LHC.
NASA Astrophysics Data System (ADS)
Petukhov, A. M.; Soldatov, E. Yu
2017-12-01
Separation of the electroweak component from the strong component of associated Zγ production at hadron colliders is a very challenging task due to the identical final states of these processes; the only difference is the origin of the two leading jets. Rectangular cuts on jet kinematic variables from the ATLAS/CMS 8 TeV Zγ experimental analyses were improved using machine learning techniques, and new selection variables were also tested. The expected significance of the separation under LHC experimental conditions in the second data-taking period (Run 2), with 120 fb-1 of data, reaches more than 5σ. A future experimental observation of electroweak Zγ production could also lead to the observation of physics beyond the Standard Model.
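The starting point of such an optimization, before any multivariate technique is applied, is a scan of a rectangular cut on a discriminating variable for the best expected significance. The sketch below uses toy Gaussian samples and invented yields purely for illustration; the actual analysis uses simulated Zγ+jets events and more refined significance estimates.

```python
import math
import random

random.seed(7)

# Toy samples of the dijet invariant mass m_jj (GeV): the electroweak
# Zgamma+2j component tends to larger m_jj than the strong (QCD) one.
# All numbers here are illustrative, not taken from the analysis.
signal = [random.gauss(900.0, 250.0) for _ in range(2000)]       # EW component
background = [random.gauss(350.0, 150.0) for _ in range(20000)]  # QCD component

def significance(cut, sig, bkg, sig_norm=50.0, bkg_norm=500.0):
    """Expected S/sqrt(S+B) for events passing m_jj > cut, with the toy
    samples scaled to hypothetical expected yields sig_norm and bkg_norm."""
    s = sig_norm * sum(m > cut for m in sig) / len(sig)
    b = bkg_norm * sum(m > cut for m in bkg) / len(bkg)
    return s / math.sqrt(s + b) if s + b > 0 else 0.0

# Scan cut values and keep the one maximizing the expected significance
best_cut = max(range(0, 1500, 25),
               key=lambda c: significance(c, signal, background))
```

A multivariate classifier generalizes this scan by combining several such variables into a single discriminant, which is where the gain over rectangular cuts comes from.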
Small-strip Thin Gap Chambers for the muon spectrometer upgrade of the ATLAS experiment
NASA Astrophysics Data System (ADS)
Perez Codina, E.; ATLAS Muon Collaboration
2016-07-01
The ATLAS muon system upgrade to be installed during the LHC long shutdown in 2018/19, the so-called New Small Wheel (NSW), is designed to cope with the increased instantaneous luminosity in LHC Run 3. The small-strip Thin Gap Chambers (sTGC) will provide the NSW with a fast trigger and high precision tracking. The construction protocol has been validated by test beam experiments on a full-size prototype sTGC detector, showing the performance requirements are met. The intrinsic spatial resolution for a single layer has been found to be about 45 μm for a perpendicular incident angle, and the transition region between pads has been measured to be about 4 mm.
NASA Astrophysics Data System (ADS)
Barbier, G.; Cadoux, F.; Clark, A.; Endo, M.; Favre, Y.; Ferrere, D.; Gonzalez-Sevilla, S.; Hanagaki, K.; Hara, K.; Iacobucci, G.; Ikegami, Y.; Jinnouchi, O.; La Marra, D.; Nakamura, K.; Nishimura, R.; Perrin, E.; Seez, W.; Takubo, Y.; Takashima, R.; Terada, S.; Todome, K.; Unno, Y.; Weber, M.
2014-04-01
It is expected that after several years of data-taking, the Large Hadron Collider (LHC) physics programme will be extended to the so-called High-Luminosity LHC, where the instantaneous luminosity will be increased up to 5 × 1034 cm-2 s-1. For the general-purpose ATLAS experiment at the LHC, a complete replacement of its internal tracking detector will be necessary, as the existing detector will not provide the required performance due to the accumulated radiation damage and the increase in the detector occupancy. The baseline layout for the new ATLAS tracker is an all-silicon-based detector, with pixel sensors in the inner layers and silicon micro-strip detectors at intermediate and outer radii. The super-module (SM) is an integration concept proposed for the barrel strip region of the future ATLAS tracker, where double-sided stereo silicon micro-strip modules (DSM) are assembled into a low-mass local support (LS) structure. Mechanical aspects of the proposed LS structure are described.
The performance of the CASTOR calorimeter during LHC Run 2
NASA Astrophysics Data System (ADS)
van de Klundert, Merijn H. F.; CMS Collaboration
2017-11-01
CASTOR is an electromagnetic and hadronic tungsten-quartz sampling Cherenkov calorimeter located at the Compact Muon Solenoid experiment at the Large Hadron Collider. The detector covers pseudorapidities between -6.6 and -5.2. An overview is presented of the various aspects of CASTOR's performance and their relations during LHC Run 2. The equalisation of CASTOR's channels is performed using beam-halo muons. Thereafter, CASTOR's pedestal spectrum is studied. It is shown that noise estimates extracted using a fit give, on average, a 10% lower threshold than statistical estimates. Gain correction factors, which are needed for the intercalibration, are obtained using a statistical method applicable in situ. The results of this method are shown to be reasonably consistent with laboratory measurements. Next, the absolute calibration is discussed, with emphasis on the relation between the scale uncertainty and CASTOR's alignment. It is shown that the alignment's contribution to the systematic uncertainty decreased by over 50% in LHC Run 2 with respect to LHC Run 1. Finally, generalisations of the conclusions to other subsystems and future improvements are discussed.
First experiences with the LHC BLM sanity checks
NASA Astrophysics Data System (ADS)
Emery, J.; Dehning, B.; Effinger, E.; Nordt, A.; Sapinski, M. G.; Zamantzas, C.
2010-12-01
Reliability concerns have driven the design of the Large Hadron Collider (LHC) Beam Loss Monitoring (BLM) system from the early stage of the studies up to the present commissioning and the latest development of diagnostic tools. To protect the system against non-conformities, new ways of automatic checking have been developed and implemented. These checks are regularly and systematically executed by the LHC operation team to ensure that the system status is "as good as new" after each test. The sanity checks are part of this strategy. They test the electrical part of the detectors (ionisation chamber or secondary emission detector), their cable connections to the front-end electronics, the further connections to the back-end electronics, and their ability to request a beam abort. During the installation and in the early commissioning phase, these checks also demonstrated their ability to find non-conformities caused by unexpected failure scenarios. In everyday operation, a non-conformity discovered by this check inhibits any further injections into the LHC until the check confirms the absence of non-conformities.
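The check-then-inhibit logic described above can be sketched as follows; the channel fields, names and pass criteria are hypothetical illustrations, not the actual LHC BLM software:

```python
# Hypothetical sketch of the sanity-check / injection-inhibit logic;
# field names and the example channels are invented for illustration.

def run_sanity_checks(channels):
    """Return the ids of non-conforming channels. Each channel dict
    records the outcome of the electrical tests: detector response seen,
    cable connectivity, and ability to raise a beam-abort request."""
    failures = []
    for ch in channels:
        ok = (ch["hv_modulation_seen"]
              and ch["cable_connected"]
              and ch["abort_request_ok"])
        if not ok:
            failures.append(ch["id"])
    return failures

def injection_permitted(channels):
    # Any non-conformity inhibits further injections until re-checked.
    return len(run_sanity_checks(channels)) == 0

channels = [
    {"id": "BLM.A1", "hv_modulation_seen": True, "cable_connected": True,  "abort_request_ok": True},
    {"id": "BLM.A2", "hv_modulation_seen": True, "cable_connected": False, "abort_request_ok": True},
]
print(run_sanity_checks(channels))
print(injection_permitted(channels))
```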
Test beam studies of silicon timing for use in calorimetry
Apresyan, A.; Bolla, G.; Bornheim, A.; ...
2016-04-12
The high luminosity upgrade of the Large Hadron Collider (HL-LHC) at CERN is expected to provide instantaneous luminosities of 5 × 1034 cm-2 s-1. The high luminosities expected at the HL-LHC will be accompanied by a factor of 5 to 10 more pileup compared with LHC conditions in 2015, causing general confusion for particle identification and event reconstruction. Precision timing allows calorimetric measurements to be extended into such a high-density environment by subtracting the energy deposits from pileup interactions. Calorimeters employing silicon as the active component have recently become a popular choice for the HL-LHC and future collider experiments which face very high radiation environments. In this article, we present studies of basic calorimetric and precision timing measurements using a prototype composed of a tungsten absorber and a silicon sensor as the active medium. Lastly, we show that for the bulk of electromagnetic showers induced by electrons in the range of 20 GeV to 30 GeV, we can achieve time resolutions better than 25 ps per single pad sensor.
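A simple consequence of combining several independent timing measurements is that the resolution of their average improves as 1/√n. A minimal sketch (the four-pad combination is an illustrative assumption, not a result from the paper):

```python
import math

def combined_resolution(sigma_single_ps: float, n_sensors: int) -> float:
    """Time resolution of the unweighted mean of n independent sensors,
    each with single-sensor resolution sigma_single_ps (in ps)."""
    return sigma_single_ps / math.sqrt(n_sensors)

# With the ~25 ps single-pad resolution quoted above, averaging four
# independent pads of equal resolution would (illustratively) give:
print(f"{combined_resolution(25.0, 4):.1f} ps")
```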
Precision Timing with Silicon Sensors for Use in Calorimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bornheim, A.; Ronzhin, A.; Kim, H.
2017-11-27
The high luminosity upgrade of the Large Hadron Collider (HL-LHC) at CERN is expected to provide instantaneous luminosities of 5 × 1034 cm-2 s-1. The high luminosities expected at the HL-LHC will be accompanied by a factor of 5 to 10 more pileup compared with LHC conditions in 2015, causing general confusion for particle identification and event reconstruction. Precision timing allows calorimetric measurements to be extended into such a high-density environment by subtracting the energy deposits from pileup interactions. Calorimeters employing silicon as the active component have recently become a popular choice for the HL-LHC and future collider experiments which face very high radiation environments. We present studies of basic calorimetric and precision timing measurements using a prototype composed of a tungsten absorber and a silicon sensor as the active medium. We show that for the bulk of electromagnetic showers induced by electrons in the range of 20 GeV to 30 GeV, we can achieve time resolutions better than 25 ps per single pad sensor.
CMS Distributed Computing Integration in the LHC sustained operations era
NASA Astrophysics Data System (ADS)
Grandi, C.; Bockelman, B.; Bonacorsi, D.; Fisk, I.; González Caballero, I.; Farina, F.; Hernández, J. M.; Padhi, S.; Sarkar, S.; Sciabà, A.; Sfiligoi, I.; Spiga, F.; Úbeda García, M.; Van Der Ster, D. C.; Zvada, M.
2011-12-01
After many years of preparation the CMS computing system has reached a situation where stability in operations limits the possibility to introduce innovative features. Nevertheless, it is this same need for stability and smooth operations that requires the introduction of features that were considered non-strategic in earlier phases. Examples are: adequate authorization to control and prioritize access to storage and computing resources; improved monitoring to investigate problems and identify bottlenecks in the infrastructure; increased automation to reduce the manpower needed for operations; and an effective process for deploying new releases of the software tools in production. We present the work of the CMS Distributed Computing Integration Activity, which is responsible for providing a liaison between the CMS distributed computing infrastructure and the software providers, both internal and external to CMS. In particular we describe the introduction of new middleware features during the last 18 months as well as the requirements on Grid and Cloud software developers for the future.
Open issues in hadronic interactions for air showers
NASA Astrophysics Data System (ADS)
Pierog, Tanguy
2017-06-01
In detailed air shower simulations, the uncertainty in the prediction of shower observables for different primary particles and energies is currently dominated by differences between hadronic interaction models. With the results of the first run of the LHC, the difference between post-LHC model predictions has been reduced to the same level as experimental uncertainties of cosmic ray experiments. At the same time new types of air shower observables, like the muon production depth, have been measured, adding new constraints on hadronic models. Currently no model is able to consistently reproduce all mass composition measurements possible within the Pierre Auger Observatory for instance. Comparing the different models, and with LHC and cosmic ray data, we will show that the remaining open issues in hadronic interactions in air shower development are now in the pion-air interactions and in nuclear effects.
NASA Astrophysics Data System (ADS)
Allahverdi, Rouzbeh; Dev, P. S. Bhupal; Dutta, Bhaskar
2018-04-01
We study a simple TeV-scale model of baryon number violation which explains the observed proximity of the dark matter and baryon abundances. The model has constraints arising from both low- and high-energy processes and, in particular, predicts a sizable rate for neutron-antineutron (n-n̄) oscillation at low energy and a monojet signal at the LHC. We find an interesting complementarity among the constraints arising from the observed baryon asymmetry, the ratio of dark matter and baryon abundances, the n-n̄ oscillation lifetime and the LHC monojet signal. There are regions in the parameter space where the n-n̄ oscillation lifetime is found to be more constraining than the LHC limits, which illustrates the importance of the next-generation n-n̄ oscillation experiments.
Description and performance of track and primary-vertex reconstruction with the CMS tracker
Chatrchyan, Serguei
2014-10-16
A description is provided of the software algorithms developed for the CMS tracker, both for reconstructing charged-particle trajectories in proton-proton interactions and for using the resulting tracks to estimate the positions of the LHC luminous region and individual primary-interaction vertices. Despite the very hostile environment at the LHC, the performance obtained with these algorithms is found to be excellent. For tt̄ events under typical 2011 pileup conditions, the average track-reconstruction efficiency for promptly-produced charged particles with transverse momenta of pT > 0.9 GeV is 94% for pseudorapidities of |η| < 0.9 and 85% for 0.9 < |η| < 2.5. The inefficiency is caused mainly by hadrons that undergo nuclear interactions in the tracker material. For isolated muons, the corresponding efficiencies are essentially 100%. For isolated muons of pT = 100 GeV emitted at |η| < 1.4, the resolutions are approximately 2.8% in pT, and 10 μm and 30 μm in the transverse and longitudinal impact parameters, respectively. The position resolution achieved for reconstructed primary vertices that correspond to interesting pp collisions is 10-12 μm in each of the three spatial dimensions. The tracking and vertexing software is fast and flexible, and easily adaptable to other functions, such as fast tracking for the trigger, or dedicated tracking for electrons that takes into account bremsstrahlung.
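As a toy illustration of the vertexing step, a primary-vertex z position can be estimated from the tracks' longitudinal impact parameters. The sketch below uses a plain uncertainty-weighted mean; the actual CMS software uses more robust adaptive vertex fitting, and all numbers here are made up:

```python
import math

def vertex_z(track_z0, track_sigma):
    """Weighted-mean estimate of the primary-vertex z position from
    track longitudinal impact parameters (toy stand-in for the
    adaptive vertex fit used in the real reconstruction)."""
    w = [1.0 / s**2 for s in track_sigma]
    z = sum(wi * zi for wi, zi in zip(w, track_z0)) / sum(w)
    sigma = 1.0 / math.sqrt(sum(w))
    return z, sigma

z0  = [0.012, 0.009, 0.015, 0.010]   # cm, toy tracks from one vertex
sig = [0.003, 0.003, 0.005, 0.004]   # cm, per-track z0 uncertainty
z, s = vertex_z(z0, sig)
print(f"vertex z = {z * 1e4:.1f} um +- {s * 1e4:.1f} um")
```

The combined uncertainty is smaller than any single track's, which is why vertex resolutions of order 10 μm are reachable from tracks with tens-of-μm impact-parameter resolution.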
The long journey to the Higgs boson and beyond at the LHC: Emphasis on CMS
NASA Astrophysics Data System (ADS)
Virdee, Tejinder Singh
2016-11-01
Since 2010 there has been a rich harvest of results on standard model physics by the ATLAS and CMS experiments operating at the Large Hadron Collider. In the summer of 2012, a spectacular discovery was made by these experiments of a new, heavy particle. All the subsequently analysed data point strongly to the properties of this particle being those expected for the Higgs boson associated with the Brout-Englert-Higgs mechanism, postulated to explain the spontaneous symmetry breaking in the electroweak sector and thereby how elementary particles acquire mass. This article focuses on the CMS experiment and the technological challenges encountered in its construction, describes some of the physics results obtained so far, including the discovery of the Higgs boson and searches for the widely anticipated new physics beyond the standard model, and peers into the future involving the high-luminosity phase of the LHC. This article is complementary to the one by Peter Jenni that focuses on the ATLAS experiment.
Searching for new physics with three-particle correlations in pp collisions at the LHC
NASA Astrophysics Data System (ADS)
Sanchis-Lozano, Miguel-Angel; Sarkisyan-Grinbaum, Edward K.
2018-06-01
New phenomena involving pseudorapidity and azimuthal correlations among final-state particles in pp collisions at the LHC can hint at the existence of hidden sectors beyond the Standard Model. In this paper we rely on a correlated-cluster picture of multiparticle production, which was shown to account for the ridge effect, to assess the effect of a hidden sector on three-particle correlations, concluding that there is a potential signature of new physics that can be directly tested by experiments using well-known techniques.
Pixel sensors with slim edges and small pitches for the CMS upgrades for HL-LHC
Vernieri, Caterina; Bolla, Gino; Rivera, Ryan; ...
2016-06-07
Here, planar n-in-n silicon detectors with small pitches and slim edges are being investigated for the innermost layers of tracking devices for the foreseen upgrades of the LHC experiments. Sensor prototypes compatible with the CMS readout, fabricated by Sintef, were tested in the laboratory and with a 120 GeV/c proton beam at the Fermilab test beam facility before and after irradiation up to a fluence of 2 × 1015 neq/cm2. Preliminary results of the data analysis are presented.
Real time analysis with the upgraded LHCb trigger in Run III
NASA Astrophysics Data System (ADS)
Szumlak, Tomasz
2017-10-01
The current LHCb trigger system consists of a hardware level, which reduces the LHC bunch-crossing rate of 40 MHz to 1.1 MHz, a rate at which the entire detector is read out. In a second level, implemented in a farm of around 20k parallel-processing CPUs, the event rate is reduced to around 12.5 kHz. The LHCb experiment plans a major upgrade of the detector and DAQ system in the LHC long shutdown II (2018-2019). In this upgrade, a purely software-based trigger system is being developed, and it will have to process the full 30 MHz of bunch crossings with inelastic collisions. LHCb will also receive a factor of 5 increase in the instantaneous luminosity, which further contributes to the challenge of reconstructing and selecting events in real time with the CPU farm. We discuss the plans and progress towards achieving efficient reconstruction and selection with a 30 MHz throughput. Another challenge is to exploit the increased signal rate that results from removing the 1.1 MHz readout bottleneck, combined with the higher instantaneous luminosity. Many charm hadron signals can be recorded at up to 50 times higher rate. LHCb is implementing a new paradigm in the form of real-time data analysis, in which abundant signals are recorded in a reduced event format that can be fed directly to the physics analyses. These data do not need any further offline event reconstruction, which allows a larger fraction of the grid computing resources to be devoted to Monte Carlo productions. We discuss how this real-time analysis model is absolutely critical to the LHCb upgrade, and how it will evolve during Run-II.
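The rate-reduction arithmetic quoted above can be laid out explicitly; the rates are taken from the text, the derived factors are simple ratios:

```python
# Rate reductions in the current LHCb trigger, from the figures above.
bunch_crossing_rate = 40e6    # Hz, LHC bunch-crossing rate
hardware_out        = 1.1e6   # Hz, hardware-level output (full readout rate)
software_out        = 12.5e3  # Hz, software-level (HLT) output

hw_reduction = bunch_crossing_rate / hardware_out
sw_reduction = hardware_out / software_out
print(f"hardware reduction: x{hw_reduction:.0f}")
print(f"software reduction: x{sw_reduction:.0f}")

# After the upgrade, a software-only trigger must process the full
# 30 MHz of bunch crossings with inelastic collisions:
upgrade_factor = 30e6 / hardware_out
print(f"upgrade input rate is x{upgrade_factor:.1f} the current readout rate")
```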
Detecting a heavy neutrino electric dipole moment at the LHC
NASA Astrophysics Data System (ADS)
Sher, Marc; Stevens, Justin R.
2018-02-01
The milliQan Collaboration has proposed to search for millicharged particles by looking for very weakly ionizing tracks in a detector installed in a cavern near the CMS experiment at the LHC. We note that another form of exotica can also yield weakly ionizing tracks. If a heavy neutrino has an electric dipole moment (EDM), then the milliQan experiment may be sensitive to it as well. In particular, writing the general dimension-5 operator for an EDM with a scale of a TeV and a one-loop factor, one finds a potential EDM as high as a few times 10-17 e-cm, and models exist where it is an order of magnitude higher. Redoing the Bethe calculation of ionization energy loss for an EDM, it is found that the milliQan detector is sensitive to EDMs as small as 10-17 e-cm. Using the production cross-section and analyzing the acceptance of the milliQan detector, we find the expected 95% exclusion and 3σ sensitivity over the range of neutrino masses from 5-1000 GeV for integrated luminosities of 300 and 3000 fb-1 at the LHC.
Beam feasibility study of a collimator with in-jaw beam position monitors
NASA Astrophysics Data System (ADS)
Wollmann, Daniel; Nosych, Andriy A.; Valentino, Gianluca; Aberle, Oliver; Aßmann, Ralph W.; Bertarelli, Alessandro; Boccard, Christian; Bruce, Roderik; Burkart, Florian; Calvo, Eva; Cauchi, Marija; Dallocchio, Alessandro; Deboy, Daniel; Gasior, Marek; Jones, Rhodri; Kain, Verena; Lari, Luisella; Redaelli, Stefano; Rossi, Adriana
2014-12-01
At present, the beam-based alignment of the LHC collimators is performed by touching the beam halo with both jaws of each collimator. This method requires dedicated low-intensity fills, which are done infrequently, and makes the procedure time consuming. This limits the operational flexibility, in particular in the case of changes of optics and orbit configuration in the experimental regions. The performance of the LHC collimation system relies on the machine reproducibility and on regular loss maps to validate the settings of the collimator jaws. To overcome these limitations and to allow continuous monitoring of the beam position at the collimators, a design with jaw-integrated Beam Position Monitors (BPMs) was proposed and successfully tested with a prototype (mock-up) collimator in the CERN SPS. Extensive beam experiments allowed the achievable accuracy of the jaw alignment to be determined for single- and multi-turn operation. In this paper, the results of these experiments are discussed. The non-linear response of the BPMs is compared to the predictions from electromagnetic simulations. Finally, the measured alignment accuracy is compared to the one achieved with the present collimators in the LHC.
NASA Astrophysics Data System (ADS)
Acharya, B.; Alexandre, J.; Bendtz, K.; Benes, P.; Bernabéu, J.; Campbell, M.; Cecchini, S.; Chwastowski, J.; Chatterjee, A.; de Montigny, M.; Derendarz, D.; De Roeck, A.; Ellis, J. R.; Fairbairn, M.; Felea, D.; Frank, M.; Frekers, D.; Garcia, C.; Giacomelli, G.; Hasegan, D.; Kalliokoski, M.; Katre, A.; Kim, D.-W.; King, M. G. L.; Kinoshita, K.; Lacarrère, D. H.; Lee, S. C.; Leroy, C.; Lionti, A.; Margiotta, A.; Mauri, N.; Mavromatos, N. E.; Mermod, P.; Milstead, D.; Mitsou, V. A.; Orava, R.; Parker, B.; Pasqualini, L.; Patrizii, L.; Păvălas, G. E.; Pinfold, J. L.; Platkevič, M.; Popa, V.; Pozzato, M.; Pospisil, S.; Rajantie, A.; Sahnoun, Z.; Sakellariadou, M.; Sarkar, S.; Semenoff, G.; Sirri, G.; Sliwa, K.; Soluk, R.; Spurio, M.; Srivastava, Y. N.; Staszewski, R.; Suk, M.; Swain, J.; Tenti, M.; Togo, V.; Trzebinski, M.; Tuszynski, J. A.; Vento, V.; Vives, O.; Vykydal, Z.; Whyntie, T.; Widom, A.; Willems, G.; Yoon, J. H.
2016-08-01
The MoEDAL experiment is designed to search for magnetic monopoles and other highly-ionising particles produced in high-energy collisions at the LHC. The largely passive MoEDAL detector, deployed at Interaction Point 8 on the LHC ring, relies on two dedicated direct detection techniques. The first technique is based on stacks of nuclear-track detectors with a surface area of ~18 m2, sensitive to particle ionisation exceeding a high threshold. These detectors are analysed offline by optical scanning microscopes. The second technique is based on the trapping of charged particles in an array of roughly 800 kg of aluminium samples. These samples are monitored offline for the presence of trapped magnetic charge at a remote superconducting magnetometer facility. We present here the results of a search for magnetic monopoles using a 160 kg prototype MoEDAL trapping detector exposed to 8 TeV proton-proton collisions at the LHC, for an integrated luminosity of 0.75 fb-1. No magnetic charge exceeding 0.5 gD (where gD is the Dirac magnetic charge) is measured in any of the exposed samples, allowing limits to be placed on monopole production in the mass range 100 GeV ≤ m ≤ 3500 GeV. Model-independent cross-section limits are presented in fiducial regions of monopole energy and direction for 1 gD ≤ |g| ≤ 6 gD, and model-dependent cross-section limits are obtained for Drell-Yan pair production of spin-1/2 and spin-0 monopoles for 1 gD ≤ |g| ≤ 4 gD. Under the assumption of Drell-Yan cross sections, mass limits are derived for |g| = 2 gD and |g| = 3 gD for the first time at the LHC, surpassing the results from previous collider experiments.
Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system
NASA Astrophysics Data System (ADS)
Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd
2016-10-01
Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare-metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine.
This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare policies of the cluster. The developed thin integration layer between OpenStack and Moab can be adapted to other batch servers and virtualization systems, making the concept also applicable for other cluster operators. This contribution will report on the concept and implementation of an OpenStack-virtualized cluster used for HEP workflows. While the full cluster will be installed in spring 2016, a test-bed setup with 800 cores has been used to study the overall system performance and dedicated HEP jobs were run in a virtualized environment over many weeks. Furthermore, the dynamic integration of the virtualized worker nodes, depending on the workload at the institute's computing system, will be described.
The First Moment of Azimuthal Anisotropy in Nuclear Collisions from AGS to LHC Energies
Singha, Subhash; Shanmuganathan, Prashanth; Keane, Declan
2016-10-01
We review topics related to the first moment of azimuthal anisotropy (v1), commonly known as directed flow, focusing on both charged particles and identified particles from heavy-ion collisions. Beam energies from the highest available, at the CERN LHC, down to projectile kinetic energies of a few GeV per nucleon, as studied in experiments at the Brookhaven AGS, fall within our scope. We focus on experimental measurements and on theoretical work where direct comparisons with experiment have been emphasized. The physics addressed or potentially addressed by this review topic includes the study of the Quark Gluon Plasma and, more generally, investigation of the Quantum Chromodynamics phase diagram and the equation of state describing the accessible phases.
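The directed-flow observable is the first Fourier moment of the azimuthal distribution relative to the reaction plane, v1 = ⟨cos(φ − ΨRP)⟩. A minimal sketch of extracting v1 from a toy particle sample (the angles, v1 value and sample size are illustrative, not data from any experiment):

```python
import math, random

def directed_flow_v1(phis, psi_rp):
    """First Fourier moment of the azimuthal distribution relative to
    the reaction-plane angle psi_rp: v1 = <cos(phi - psi_rp)>."""
    return sum(math.cos(p - psi_rp) for p in phis) / len(phis)

# Toy sample drawn from dN/dphi ~ 1 + 2*v1*cos(phi - psi) with v1 = 0.1,
# generated by accept-reject; all values here are illustrative.
random.seed(42)
psi, v1_true, sample = 0.3, 0.1, []
while len(sample) < 20000:
    phi = random.uniform(0.0, 2.0 * math.pi)
    # Accept against the distribution's maximum, 1 + 2*v1 = 1.2:
    if random.random() * 1.2 < 1.0 + 2.0 * v1_true * math.cos(phi - psi):
        sample.append(phi)

print(f"extracted v1 = {directed_flow_v1(sample, psi):.3f}")
```

In real analyses ΨRP is itself estimated event by event and the result must be corrected for event-plane resolution; the sketch assumes a known reaction plane.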
The LHCb software and computing upgrade for Run 3: opportunities and challenges
NASA Astrophysics Data System (ADS)
Bozzi, C.; Roiser, S.; LHCb Collaboration
2017-10-01
The LHCb detector will be upgraded for LHC Run 3 and will be read out at 30 MHz, corresponding to the full inelastic collision rate, with major implications for the full software trigger and offline computing. If the current computing model and software framework are kept, the data storage capacity and computing power required to process data at this rate, and to generate and reconstruct equivalent samples of simulated events, will exceed the current capacity by at least one order of magnitude. A redesign of the software framework, including scheduling, the event model, the detector description and the conditions database, is needed to fully exploit the computing power of multi- and many-core architectures and coprocessors. Data processing and the analysis model will also change towards an early streaming of different data types, in order to limit storage resources, with further implications for the data analysis workflows. Fast simulation options will allow a reasonable parameterization of the detector response to be obtained in considerably less computing time. Finally, the upgrade of LHCb will be a good opportunity to review and implement changes in the domains of software design, testing and review, and analysis workflow and preservation. In this contribution, activities and recent results in all the above areas are presented.
Tesla: An application for real-time data analysis in High Energy Physics
NASA Astrophysics Data System (ADS)
Aaij, R.; Amato, S.; Anderlini, L.; Benson, S.; Cattaneo, M.; Clemencic, M.; Couturier, B.; Frank, M.; Gligorov, V. V.; Head, T.; Jones, C.; Komarov, I.; Lupton, O.; Matev, R.; Raven, G.; Sciascia, B.; Skwarnicki, T.; Spradlin, P.; Stahl, S.; Storaci, B.; Vesterinen, M.
2016-11-01
Upgrades to the LHCb computing infrastructure in the first long shutdown of the LHC have allowed for high quality decay information to be calculated by the software trigger making a separate offline event reconstruction unnecessary. Furthermore, the storage space of the triggered candidate is an order of magnitude smaller than the entire raw event that would otherwise need to be persisted. Tesla is an application designed to process the information calculated by the trigger, with the resulting output used to directly perform physics measurements.
Results from the First Beam-Induced Reconstructed Tracks in the LHCb Vertex Locator
NASA Astrophysics Data System (ADS)
Rodrigues, E.
2010-04-01
LHCb is a dedicated experiment at the LHC to study CP violation and rare b decays. The vertex locator (VELO) is a silicon strip detector designed to measure precisely the production and decay vertices of B-mesons. The detector is positioned 8 mm from the LHC beams and will operate in an extremely harsh radiation environment. The VELO consists of two retractable detector halves with 21 silicon micro-strip tracking modules each. A module is composed of two n+-on-n 300 μm thick half-disc sensors with R and Φ micro-strip geometry. The detectors are operated in vacuum and a bi-phase CO2 cooling system is used. The full system has been operated since June 2008 and its commissioning experience is reported. During the LHC synchronization tests in August and September 2008, and in June 2009, the LHCb detectors measured secondary particles produced by the interaction of the LHC primary beam on a beam dump. About 50,000 tracks were reconstructed in the VELO; they were used to derive the relative timing alignment between the sensors and for a first evaluation of the spatial alignment. Using this track sample the VELO has been aligned to an accuracy of 5 μm. A single-hit resolution of 10 μm was obtained at the smallest pitch for tracks of perpendicular incidence. The design and the main components of the detector system are introduced. The commissioning of the detector is reported, and the talk focuses on the results obtained using the first beam-induced reconstructed tracks.
Multiboson interactions at the LHC
Green, D. R.; Meade, P.; Pleier, M. -A.
2017-09-20
This paper covers results on the production of all possible electroweak boson pairs and 2-to-1 vector boson fusion at the CERN Large Hadron Collider (LHC) in proton-proton collisions at center-of-mass energies of 7 and 8 TeV. The data were taken between 2010 and 2012. Limits on anomalous triple gauge couplings (aTGCs) then follow. In addition, data on electroweak triple gauge boson production and 2-to-2 vector boson scattering yield limits on anomalous quartic gauge boson couplings (aQGCs). The LHC hosts two general purpose experiments, ATLAS and CMS, which have both reported limits on aTGCs and aQGCs, which are herein summarized. Finally, the interpretation of these limits in terms of an effective field theory is reviewed, and recommendations are made for testing other types of new physics using multigauge boson production.
5-year operation experience with the 1.8 K refrigeration units of the LHC cryogenic system
NASA Astrophysics Data System (ADS)
Ferlin, G.; Tavian, L.; Claudet, S.; Pezzetti, M.
2015-12-01
Since 2009, the Large Hadron Collider (LHC) has been in operation at CERN. The LHC superconducting magnets, distributed over eight 3.3-km long sectors, are cooled at 1.9 K in pressurized superfluid helium. The nominal operating temperature of 1.9 K is produced by eight 1.8-K refrigeration units based on centrifugal cold compressors (3 or 4 stages, depending on the vendor) combined with warm volumetric screw compressors with sub-atmospheric suction. After about 5 years of continuous operation, we present the results concerning the availability of these refrigeration units for the final user and the impact of the design choices on the recovery time after a system trip. We also present the individual results for each rotating machine in terms of failure origin and Mean Time Between Failures (MTBF), as well as the consolidations and upgrades applied to these refrigeration units.
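Availability figures of the kind discussed here follow from the standard MTBF/MTTR relations; the numbers below are purely illustrative, not the actual CERN statistics:

```python
def mtbf_hours(operating_hours: float, n_failures: int) -> float:
    """Mean Time Between Failures over an observation period."""
    return operating_hours / n_failures

def availability(mtbf: float, mttr: float) -> float:
    """Steady-state availability from MTBF and mean time to repair."""
    return mtbf / (mtbf + mttr)

# Illustrative numbers: four trips over 8000 h of running, with a
# 24 h recovery (cool-down and restart) after each trip.
m = mtbf_hours(8000.0, 4)
print(f"MTBF = {m:.0f} h, availability = {availability(m, 24.0):.3f}")
```

The recovery time after a trip enters only through the MTTR term, which is why the design choices affecting restart time matter as much as the raw failure rate.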
A flippon related singlet at the LHC II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Tianjun; Maxin, James A.; Mayes, Van E.
2016-06-28
Here, we consider the 750 GeV diphoton resonance at the 13 TeV LHC in the ℱ-SU(5) model with a Standard Model (SM) singlet field which couples to TeV-scale vector-like particles, dubbed flippons. This singlet field assumes the role of the 750 GeV resonance, with production via gluon fusion and subsequent decay to a diphoton pair via the vector-like particle loops. We present a numerical analysis showing that the observed 8 TeV and 13 TeV diphoton production cross-sections can be generated in the model space with realistic electric charges and Yukawa couplings for light vector-like masses. We further discuss the experimental viability of light vector-like masses in a General No-Scale ℱ-SU(5) model, offering a few benchmark scenarios in this consistent GUT that can satisfy all experimental constraints imposed by the LHC and other essential experiments.
Transverse momentum distributions of baryons at LHC energies
NASA Astrophysics Data System (ADS)
Bylinkin, A. A.; Piskounova, O. I.
2016-04-01
Transverse momentum spectra of protons and anti-protons from RHIC (√s = 62 and 200 GeV) and LHC experiments (√s = 0.9 and 7 TeV) have been considered. The data are fitted in the low-pT region with a universal formula whose main parameter is the exponential slope. The slope of the low-pT distributions changes with energy. This affects the energy dependence of the average transverse momentum, which behaves approximately as s^0.06, similar to the previously observed behavior of Λ-baryon spectra. In addition, the available data on Λc production from LHCb at √s = 7 TeV were also studied. The estimated average
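The fitting procedure described above can be sketched as follows. The paper's actual universal formula is not reproduced in the abstract, so this illustrative snippet fits only a simple exponential in the low-pT region, with the inverse slope as the main parameter:

```python
import numpy as np
from scipy.optimize import curve_fit

# The paper's actual "universal formula" is not given in the abstract, so this
# sketch uses a plain exponential, dN/dpT = A * exp(-pT / T), where the inverse
# slope T stands in for the slope parameter discussed in the text.
def low_pt_spectrum(pt, amplitude, inv_slope):
    """Exponential low-pT spectrum: A * exp(-pT / T)."""
    return amplitude * np.exp(-pt / inv_slope)

# Synthetic "data": an exponential spectrum with T = 0.30 GeV/c plus 1% noise.
rng = np.random.default_rng(42)
pt = np.linspace(0.1, 1.5, 30)
y = low_pt_spectrum(pt, 100.0, 0.30) * (1.0 + 0.01 * rng.standard_normal(pt.size))

popt, _ = curve_fit(low_pt_spectrum, pt, y, p0=(50.0, 0.5))
print(f"fitted inverse slope T = {popt[1]:.3f} GeV/c")
```

Repeating such a fit at each collision energy and tracking how the extracted slope varies is what yields the energy dependence discussed above.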
Multi-Boson Interactions at the Run 1 LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, Daniel R.; Meade, Patrick; Pleier, Marc-Andre
2016-10-24
This review article covers results on the production of all possible electroweak boson pairs and 2-to-1 vector boson fusion (VBF) at the CERN Large Hadron Collider (LHC) in proton-proton collisions at center-of-mass energies of 7 TeV and 8 TeV. The data were taken between 2010 and 2012. Limits on anomalous triple gauge couplings (aTGCs) then follow. In addition, data on electroweak triple gauge boson production and 2-to-2 vector boson scattering (VBS) yield limits on anomalous quartic gauge boson couplings (aQGCs). The LHC hosts two general-purpose experiments, ATLAS and CMS, which have both reported limits on aTGCs and aQGCs, summarized herein. The interpretation of these limits in terms of an effective field theory (EFT) is reviewed, and recommendations are made for testing other types of new physics using multi-gauge-boson production.
W.K.H. Panofsky Prize: The Long Journey to the Higgs Boson: ATLAS
NASA Astrophysics Data System (ADS)
Jenni, Peter
2017-01-01
The discovery of the Higgs boson announced in July 2012 by ATLAS and CMS was a culminating point of a very long journey in the realization of the LHC project. Building up the experimental programme with this unique high-energy collider, and developing the very sophisticated detectors built and operated by world-wide collaborations, meant a fabulous scientific and human adventure spanning more than three decades. This talk will recall the initial motivation for the project, tracing its history, as well as illustrate some of the many milestones that finally led to the rich harvest of physics so far. The talk will focus on the ATLAS experiment, including new, very recent results from the ongoing 13 TeV Run-2 of the LHC. And this is only the beginning of this fantastic journey into uncharted physics territory with the LHC.
NASA Astrophysics Data System (ADS)
Romano, Annalisa; Boine-Frankenheim, Oliver; Buffat, Xavier; Iadarola, Giovanni; Rumolo, Giovanni
2018-06-01
At the beginning of the 2016 run, an anomalous beam instability was systematically observed at the CERN Large Hadron Collider (LHC). Its main characteristic was that it spontaneously appeared after beams had been stored for several hours in collision at 6.5 TeV to provide data for the experiments, despite large chromaticity values and high strength of the Landau-damping octupole magnet. The instability exhibited several features characteristic of those induced by the electron cloud (EC). Indeed, when LHC operates with 25 ns bunch spacing, an EC builds up in a large fraction of the beam chambers, as revealed by several independent indicators. Numerical simulations have been carried out in order to investigate the role of the EC in the observed instabilities. It has been found that the beam intensity decay is unfavorable for the beam stability when LHC operates in a strong EC regime.
Particle identification with the ALICE Time-Of-Flight detector at the LHC
NASA Astrophysics Data System (ADS)
Alici, A.
2014-12-01
A high-performance particle identification (PID) system is a distinguishing characteristic of the ALICE experiment at the CERN Large Hadron Collider (LHC). Charged particles in the intermediate momentum range are identified in ALICE by the Time-Of-Flight (TOF) detector. The TOF exploits Multi-gap Resistive Plate Chamber (MRPC) technology, capable of an intrinsic time resolution of a few tens of ps with an overall efficiency close to 100% and a large operational plateau. The full system is made of 1593 MRPC chambers with a total area of 141 m2, covering the pseudorapidity interval [-0.9, +0.9] and the full azimuthal angle. The ALICE TOF system has shown very stable operation during the first 3 years of collisions at the LHC. In this paper a summary of the system performance, as well as the main results obtained with collision data, is reported.
HGCAL: A High-Granularity Calorimeter for the Endcaps of CMS at HL-LHC
NASA Astrophysics Data System (ADS)
Ochando, Christophe; CMS Collaboration
2017-11-01
Calorimetry at the High Luminosity LHC (HL-LHC) faces two enormous challenges, particularly in the forward direction: radiation tolerance and unprecedented in-time event pileup. To meet these challenges, the CMS experiment has decided to construct a High Granularity Calorimeter (HGCAL), featuring a previously unrealized transverse and longitudinal segmentation, for both electromagnetic and hadronic compartments. This will facilitate particle-flow-type calorimetry, where the fine structure of showers can be measured and used to enhance particle identification, energy resolution and pileup rejection. The majority of the HGCAL will be based on robust and cost-effective hexagonal silicon sensors with about 1 cm2 or 0.5 cm2 hexagonal cell size, with the final 5 interaction lengths of the hadronic compartment being based on highly segmented plastic scintillator with on-scintillator SiPM readout. We present an overview of the HGCAL project, including the motivation, engineering design, readout concept and simulated performance.
LHCb experience with LFC replication
NASA Astrophysics Data System (ADS)
Bonifazi, F.; Carbone, A.; Perez, E. D.; D'Apice, A.; dell'Agnello, L.; Duellmann, D.; Girone, M.; Re, G. L.; Martelli, B.; Peco, G.; Ricci, P. P.; Sapunenko, V.; Vagnoni, V.; Vitlacil, D.
2008-07-01
Database replication is a key topic in the framework of the LHC Computing Grid to allow processing of data in a distributed environment. In particular, the LHCb computing model relies on the LHC File Catalog, i.e. a database which stores information about files spread across the GRID, their logical names and the physical locations of all the replicas. The LHCb computing model requires the LFC to be replicated at Tier-1s. The LCG 3D project deals with the database replication issue and provides a replication service based on Oracle Streams technology. This paper describes the deployment of the LHC File Catalog replication to the INFN National Center for Telematics and Informatics (CNAF) and to other LHCb Tier-1 sites. We performed stress tests designed to evaluate any delay in the propagation of the streams and the scalability of the system. The tests show the robustness of the replica implementation with performance going much beyond the LHCb requirements.
HGCAL: a High-Granularity Calorimeter for the endcaps of CMS at HL-LHC
NASA Astrophysics Data System (ADS)
Magnan, A.-M.
2017-01-01
Calorimetry at the High Luminosity LHC (HL-LHC) faces two enormous challenges, particularly in the forward direction: radiation tolerance and unprecedented in-time event pileup. To meet these challenges, the CMS experiment has decided to construct a High Granularity Calorimeter (HGCAL), featuring a previously unrealized transverse and longitudinal segmentation, for both electromagnetic and hadronic compartments. This will facilitate particle-flow-type calorimetry, where the fine structure of showers can be measured and used to enhance particle identification, energy resolution and pileup rejection. The majority of the HGCAL will be based on robust and cost-effective hexagonal silicon sensors with about 1 cm2 or 0.5 cm2 hexagonal cell size, with the final five interaction lengths of the hadronic compartment being based on highly segmented plastic scintillator with on-scintillator SiPM readout. We present an overview of the HGCAL project, including the motivation, engineering design, readout/trigger concept and simulated performance.
Learning with the ATLAS Experiment at CERN
ERIC Educational Resources Information Center
Barnett, R. M.; Johansson, K. E.; Kourkoumelis, C.; Long, L.; Pequenao, J.; Reimers, C.; Watkins, P.
2012-01-01
With the start of the LHC, the new particle collider at CERN, the ATLAS experiment is also providing high-energy particle collisions for educational purposes. Several education projects--education scenarios--have been developed and tested on students and teachers in several European countries within the Learning with ATLAS@CERN project. These…
The “Common Solutions” Strategy of the Experiment Support group at CERN for the LHC Experiments
NASA Astrophysics Data System (ADS)
Girone, M.; Andreeva, J.; Barreiro Megino, F. H.; Campana, S.; Cinquilli, M.; Di Girolamo, A.; Dimou, M.; Giordano, D.; Karavakis, E.; Kenyon, M. J.; Kokozkiewicz, L.; Lanciotti, E.; Litmaath, M.; Magini, N.; Negri, G.; Roiser, S.; Saiz, P.; Saiz Santos, M. D.; Schovancova, J.; Sciabà, A.; Spiga, D.; Trentadue, R.; Tuckett, D.; Valassi, A.; Van der Ster, D. C.; Shiers, J. D.
2012-12-01
After two years of LHC data taking, processing and analysis and with numerous changes in computing technology, a number of aspects of the experiments’ computing, as well as WLCG deployment and operations, need to evolve. As part of the activities of the Experiment Support group in CERN's IT department, and reinforced by effort from the EGI-InSPIRE project, we present work aimed at common solutions across all LHC experiments. Such solutions allow us not only to optimize development manpower but also offer lower long-term maintenance and support costs. The main areas cover Distributed Data Management, Data Analysis, Monitoring and the LCG Persistency Framework. Specific tools have been developed including the HammerCloud framework, automated services for data placement, data cleaning and data integrity (such as the data popularity service for CMS, the common Victor cleaning agent for ATLAS and CMS and tools for catalogue/storage consistency), the Dashboard Monitoring framework (job monitoring, data management monitoring, File Transfer monitoring) and the Site Status Board. This talk focuses primarily on the strategic aspects of providing such common solutions and how this relates to the overall goals of long-term sustainability and the relationship to the various WLCG Technical Evolution Groups. The success of the service components has given us confidence in the process, and has developed the trust of the stakeholders. We are now attempting to expand the development of common solutions into the more critical workflows. The first is a feasibility study of common analysis workflow execution elements between ATLAS and CMS. We look forward to additional common development in the future.
NASA Astrophysics Data System (ADS)
Cavallari, Francesca
2015-09-01
The seminar presents an introduction to calorimetry in particle physics. It first explains the purpose of electromagnetic and hadronic calorimeters in particle physics, then focuses on electromagnetic calorimeters and describes the microscopic phenomena that drive the formation of electromagnetic showers. Homogeneous and sampling calorimeters are presented and the energy resolution of both is analyzed. A few examples of past and present electromagnetic calorimeters at particle colliders are presented, with particular attention to the ones employed in the ATLAS and CMS experiments at the LHC, their design constraints, challenges and adopted choices. Both these calorimeters were designed to operate for a minimum of ten years at the LHC, with an instantaneous luminosity of 1×10^34 cm^-2 s^-1 and an integrated luminosity of 500 fb^-1. From 2023 a new program will start: the high-luminosity LHC (HL-LHC), which is expected to provide an instantaneous luminosity of around 5×10^34 cm^-2 s^-1 and integrate a total luminosity of around 3000 fb^-1 in ten years of data taking. The evolution of the CMS and ATLAS calorimeters is assessed and the needed upgrades are presented.
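The energy resolution analysed in such seminars is conventionally parametrised by stochastic, constant and noise terms added in quadrature; a minimal sketch with illustrative coefficients (not the actual ATLAS/CMS values):

```python
import math

# Standard calorimeter energy-resolution parametrisation,
#   sigma/E = a/sqrt(E) (+) b (+) c/E,  with terms added in quadrature,
# where a is the stochastic term, b the constant term and c the noise term.
# The coefficients below are illustrative, not the actual ATLAS/CMS values.
def relative_resolution(energy_gev, a=0.03, b=0.005, c=0.1):
    stochastic = a / math.sqrt(energy_gev)
    noise = c / energy_gev
    return math.sqrt(stochastic ** 2 + b ** 2 + noise ** 2)

# The noise term dominates at low energy, the constant term at high energy.
for e in (1.0, 10.0, 100.0):
    print(f"E = {e:6.1f} GeV  ->  sigma/E = {relative_resolution(e):.4f}")
```

The HL-LHC pileup and radiation conditions mainly degrade the noise and constant terms, which is why the upgrades discussed above matter most at both ends of the energy range.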
Conceptual design of the cryostat for the new high luminosity (HL-LHC) triplet magnets
NASA Astrophysics Data System (ADS)
Ramos, D.; Parma, V.; Moretti, M.; Eymin, C.; Todesco, E.; Van Weelderen, R.; Prin, H.; Berkowitz Zamora, D.
2017-12-01
The High Luminosity LHC (HL-LHC) is a project to upgrade the LHC collider after 2020-2025 to increase the integrated luminosity by about one order of magnitude and extend the physics production until 2035. An upgrade of the focusing triplets insertion system for the ATLAS and CMS experiments is foreseen using superconducting magnets operating in a pressurised superfluid helium bath at 1.9 K. This will require the design and construction of four continuous cryostats, each about sixty meters in length and one meter in diameter, for the final beam focusing quadrupoles, corrector magnets and beam separation dipoles. The design is constrained by the dimensions of the existing tunnel and accessibility restrictions imposing the integration of cryogenic piping inside the cryostat, thus resulting in a very compact integration. As the alignment and position stability of the magnets is crucial for the luminosity performance of the machine, the magnet support system must be carefully designed in order to cope with parasitic forces and thermo-mechanical load cycles. In this paper, we present the conceptual design of the cryostat and discuss the approach to address the stringent and often conflicting requirements of alignment, integration and thermal aspects.
Huang, T.; No, J. M.; Pernié, L.; ...
2017-08-11
Here, we analyze the prospects for resonant di-Higgs production searches at the LHC in the bb̄W+W− (W+ → ℓ+νℓ, W− → ℓ−ν̄ℓ) channel, as a probe of the nature of the electroweak phase transition in Higgs portal extensions of the Standard Model. In order to maximize the sensitivity in this final state, we develop a new algorithm for the reconstruction of the bb̄W+W− invariant mass in the presence of neutrinos from the W decays, building on a technique developed for the reconstruction of resonances decaying to τ+τ− pairs. We show that resonant di-Higgs production in the bb̄W+W− channel could be a competitive probe of the electroweak phase transition already with the data sets to be collected by the CMS and ATLAS experiments in Run 2 of the LHC. The increase in sensitivity with the larger amounts of data accumulated during the high-luminosity LHC phase can be sufficient to enable a potential discovery of resonant di-Higgs production in this channel.
Jet Substructure at the Large Hadron Collider : Experimental Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asquith, Lily; Campanelli, Mario; Delitzsch, Chris
Jet substructure has emerged to play a central role at the Large Hadron Collider (LHC), where it has provided numerous innovative new ways to search for new physics and to probe the Standard Model, particularly in extreme regions of phase space. In this article we focus on a review of the development and use of state-of-the-art jet substructure techniques by the ATLAS and CMS experiments. ALICE and LHCb have been probing fragmentation functions since the start of the LHC and have also recently started studying other jet substructure techniques. It is likely that in the near future all LHC collaborations will make significant use of jet substructure and grooming techniques. Much of the work in this field in recent years has been galvanized by the Boost Workshop Series, which continues to inspire fruitful collaborations between experimentalists and theorists. We hope that this review will prove a useful introduction and reference to experimental aspects of jet substructure at the LHC. A companion overview of recent progress in theory and machine learning approaches is given in arXiv:1709.04464; the complete review will be submitted to Reviews of Modern Physics.
Grid site availability evaluation and monitoring at CMS
Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; Lammel, Stephan; Sciabà, Andrea
2017-10-01
The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from a hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection of and reaction to failures and a more dynamic handling of computing resources. Furthermore, enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.
NASA Astrophysics Data System (ADS)
Garion, C.; Dufay-Chanat, L.; Koettig, T.; Machiocha, W.; Morrone, M.
2015-12-01
The High Luminosity LHC project (HL-LHC) aims at increasing the luminosity (rate of collisions) in the Large Hadron Collider (LHC) experiments by a factor of 10 beyond the original design value (from 300 to 3000 fb^-1). It relies on new superconducting magnets, installed close to the interaction points and equipped with a new beam screen. This component has to ensure the vacuum performance while shielding the cold mass from physics debris and screening the cold bore cryogenic system from beam-induced heating. The beam screen operates in the range 40-60 K whereas the magnet cold bore temperature is 1.9 K. A tungsten-based material is used to absorb the energy of particles. In this paper, measurements of the mechanical and physical properties of this tungsten material at room and cryogenic temperature are shown. In addition, the design and the thermo-mechanical behaviour of the beam screen assembly are presented, including the heat transfer from the tungsten absorbers to the cooling pipes and the supporting system, which has to minimise the heat in-leak into the cold mass. The behaviour during a magnet quench is also presented.
Beam Loss Monitoring for LHC Machine Protection
NASA Astrophysics Data System (ADS)
Holzer, Eva Barbara; Dehning, Bernd; Effnger, Ewald; Emery, Jonathan; Grishin, Viatcheslav; Hajdu, Csaba; Jackson, Stephen; Kurfuerst, Christoph; Marsili, Aurelien; Misiowiec, Marek; Nagel, Markus; Busto, Eduardo Nebot Del; Nordt, Annika; Roderick, Chris; Sapinski, Mariusz; Zamantzas, Christos
The energy stored in the nominal LHC beams is two times 362 MJ, 100 times the energy of the Tevatron. As little as 1 mJ/cm3 deposited energy quenches a magnet at 7 TeV and 1 J/cm3 causes magnet damage. The beam dumps are the only places to safely dispose of this beam. One of the key systems for machine protection is the beam loss monitoring (BLM) system. About 3600 ionization chambers are installed at likely or critical loss locations around the LHC ring. The losses are integrated in 12 time intervals ranging from 40 μs to 84 s and compared to threshold values defined in 32 energy ranges. A beam abort is requested when potentially dangerous losses are detected or when any of the numerous internal system validation tests fails. In addition, loss data are used for machine set-up and operational verifications. The collimation system for example uses the loss data for set-up and regular performance verification. Commissioning and operational experience of the BLM are presented: The machine protection functionality of the BLM system has been fully reliable; the LHC availability has not been compromised by false beam aborts.
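The threshold logic described above can be sketched as follows: losses are integrated over several running time windows and each integral is compared with its own limit. The window lengths and thresholds below are hypothetical stand-ins for the real 12-window, 32-energy-range tables:

```python
# Hedged sketch of the BLM abort logic: losses are integrated over several
# running time windows and each integral is compared with a window-specific
# threshold. The real system uses 12 windows (40 us to 84 s) with thresholds
# defined in 32 energy ranges; the windows and limits below are hypothetical.
def check_beam_abort(samples, windows):
    """samples: chronological loss readings; windows: {n_samples: threshold}."""
    for n, threshold in windows.items():
        if len(samples) >= n and sum(samples[-n:]) > threshold:
            return True  # potentially dangerous loss: request a beam abort
    return False

windows = {1: 5.0, 4: 12.0, 16: 30.0}   # hypothetical thresholds per window
quiet = [0.1] * 16                      # steady low losses: no abort
spike = [0.1] * 15 + [6.0]              # fast loss above the 1-sample limit
print(check_beam_abort(quiet, windows))
print(check_beam_abort(spike, windows))
```

The multi-window scheme is what lets the same monitor catch both a fast spike (short window) and a slow, steady loss that only exceeds the limit when integrated over a long window.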
Exploiting volatile opportunistic computing resources with Lobster
NASA Astrophysics Data System (ADS)
Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas
2015-12-01
Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS-specific software environment is provided via CVMFS and Parrot. Data is handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools has been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.
ATLAS fast physics monitoring: TADA
NASA Astrophysics Data System (ADS)
Sabato, G.; Elsing, M.; Gumpert, C.; Kamioka, S.; Moyse, E.; Nairz, A.; Eifert, T.; ATLAS Collaboration
2017-10-01
The ATLAS experiment at the LHC has been recording data from proton-proton collisions with 13 TeV center-of-mass energy since spring 2015. The collaboration is using a fast physics monitoring framework (TADA) to automatically perform a broad range of fast searches for early signs of new physics and to monitor the data quality across the year, with the full analysis-level calibrations applied to the rapidly growing data. TADA is designed to provide fast feedback directly after the collected data has been fully calibrated and processed at the Tier-0. The system can monitor a large range of physics channels, offline data quality and physics performance quantities. TADA output is available on a website accessible to the whole collaboration. It is updated twice a day with the data from newly processed runs. Hints of potentially interesting physics signals or performance issues identified in this way are reported and followed up by physics or combined performance groups. The note also reports on the technical aspects of TADA: the software structure used to obtain the input TAG files, the framework workflow and structure, and the webpage and its implementation.
The CMS tracker control system
NASA Astrophysics Data System (ADS)
Dierlamm, A.; Dirkes, G. H.; Fahrer, M.; Frey, M.; Hartmann, F.; Masetti, L.; Militaru, O.; Shah, S. Y.; Stringer, R.; Tsirou, A.
2008-07-01
The Tracker Control System (TCS) is a distributed control software to operate about 2000 power supplies for the silicon modules of the CMS Tracker and monitor its environmental sensors. TCS must thus be able to handle about 10^4 power supply parameters, about 10^3 environmental probes from the Programmable Logic Controllers of the Tracker Safety System (TSS), and about 10^5 parameters read via DAQ from the DCUs in all front-end hybrids and from CCUs in all control groups. TCS is built on top of an industrial SCADA program (PVSS) extended with a framework developed at CERN (JCOP) and used by all LHC experiments. The logical partitioning of the detector is reflected in the hierarchical structure of the TCS, where commands move down to the individual hardware devices, while states are reported up to the root, which is interfaced to the broader CMS control system. The system computes and continuously monitors the mean and maximum values of critical parameters and updates the percentage of currently operating hardware. Automatic procedures switch off selected parts of the detector with fine granularity, avoiding widespread TSS intervention.
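The hierarchical command/state scheme described above (commands move down, states are reported up) can be sketched as follows; the node names and state ordering are illustrative assumptions, not the actual TCS finite-state machine:

```python
# Minimal sketch of hierarchical control: commands propagate down to the
# leaves, while each node reports up the "worst" state found below it.
# Node names and the severity ordering are illustrative assumptions only.
SEVERITY = {"ON": 0, "OFF": 1, "ERROR": 2}

class Node:
    def __init__(self, name, children=None, state="ON"):
        self.name, self.children, self.state = name, children or [], state

    def send_command(self, command):
        """Commands move down the tree; here they simply set the leaf states."""
        if not self.children:
            self.state = command
        for child in self.children:
            child.send_command(command)

    def aggregated_state(self):
        """States are reported up: a node takes the worst state of its subtree."""
        states = [c.aggregated_state() for c in self.children] or [self.state]
        return max(states, key=lambda s: SEVERITY[s])

tracker = Node("Tracker", [
    Node("TIB", [Node("PS-1"), Node("PS-2")]),
    Node("TOB", [Node("PS-3")]),
])
tracker.send_command("ON")
tracker.children[1].children[0].state = "ERROR"  # one power supply fails
print(tracker.aggregated_state())  # the failure propagates up to the root
```

This worst-state aggregation is what lets a single failing power supply become visible at the root without the operator inspecting thousands of individual channels.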
Interoperating Cloud-based Virtual Farms
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Colamaria, F.; Colella, D.; Casula, E.; Elia, D.; Franco, A.; Lusso, S.; Luparello, G.; Masera, M.; Miniello, G.; Mura, D.; Piano, S.; Vallero, S.; Venaruzzo, M.; Vino, G.
2015-12-01
The present work aims at optimizing the use of computing resources available at the grid Italian Tier-2 sites of the ALICE experiment at CERN LHC by making them accessible to interactive distributed analysis, thanks to modern solutions based on cloud computing. The scalability and elasticity of the computing resources via dynamic (“on-demand”) provisioning is essentially limited by the size of the computing site, reaching the theoretical optimum only in the asymptotic case of infinite resources. The main challenge of the project is to overcome this limitation by federating different sites through a distributed cloud facility. Storage capacities of the participating sites are seen as a single federated storage area, preventing the need of mirroring data across them: high data access efficiency is guaranteed by location-aware analysis software and storage interfaces, in a transparent way from an end-user perspective. Moreover, the interactive analysis on the federated cloud reduces the execution time with respect to grid batch jobs. The tests of the investigated solutions for both cloud computing and distributed storage on wide area network will be presented.
Intrusion Prevention and Detection in Grid Computing - The ALICE Case
NASA Astrophysics Data System (ADS)
Gomez, Andres; Lara, Camilo; Kebschull, Udo
2015-12-01
Grids allow users flexible on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is the one used by the ALICE experiment at CERN, the European Organization for Nuclear Research. Physicists can submit jobs to process the huge amount of particle-collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges: they are attractive targets for attackers seeking large computational resources. Since users can execute arbitrary code on the worker nodes of the Grid sites, special care must be taken in this environment, and automatic tools to harden and monitor it are required. Currently, there is no integrated solution for this requirement. This paper describes a new security framework that allows execution of job payloads in a sandboxed context. It also allows process-behavior monitoring to detect intrusions, even when new attack methods or zero-day vulnerabilities are exploited, using a Machine Learning approach. We plan to implement the proposed framework as a software prototype that will be tested as a component of the ALICE Grid middleware.
Orthos, an alarm system for the ALICE DAQ operations
NASA Astrophysics Data System (ADS)
Chapeland, Sylvain; Carena, Franco; Carena, Wisla; Chibante Barroso, Vasco; Costa, Filippo; Denes, Ervin; Divia, Roberto; Fuchs, Ulrich; Grigore, Alexandru; Simonetti, Giuseppe; Soos, Csaba; Telesca, Adriana; Vande Vyvre, Pierre; von Haller, Barthelemy
2012-12-01
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The DAQ (Data Acquisition System) facilities handle the data flow from the detectors electronics up to the mass storage. The DAQ system is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches), and controls hundreds of distributed hardware and software components interacting together. This paper presents Orthos, the alarm system used to detect, log, report, and follow-up abnormal situations on the DAQ machines at the experimental area. The main objective of this package is to integrate alarm detection and notification mechanisms with a full-featured issues tracker, in order to prioritize, assign, and fix system failures optimally. This tool relies on a database repository with a logic engine, SQL interfaces to inject or query metrics, and dynamic web pages for user interaction. We describe the system architecture, the technologies used for the implementation, and the integration with existing monitoring tools.
Modeling the Effects of Mirror Misalignment in a Ring Imaging Cherenkov Detector
NASA Astrophysics Data System (ADS)
Hitchcock, Tawanda; Harton, Austin; Garcia, Edmundo
2012-03-01
The Very High Momentum Particle Identification Detector (VHMPID) has been proposed for the ALICE experiment at the Large Hadron Collider (LHC). This detector upgrade is considered necessary to study jet-matter interaction at high energies. The VHMPID identifies charged hadrons in the 5 GeV/c to 25 GeV/c momentum range. The Cherenkov photons emitted in the VHMPID radiator are collected by spherical mirrors and focused onto a photo-detector plane, forming a ring image. The radius of this ring is related to the Cherenkov angle; combined with the particle momentum, this information allows particle identification. A major issue in a RICH detector is that environmental conditions can shift the mirror positions. In addition, chromatic dispersion causes the refractive index to vary, altering the Cherenkov angle. We are modeling a twelve-mirror RICH detector with a commercial optical software package, taking into account the effects of mirror misalignment and chromatic dispersion. This includes quantifying the effects of both rotational and translational mirror misalignment on the initial assembly of the module and, subsequently, on particle identification.
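The ring-radius measurement rests on the Cherenkov relation cos θ_c = 1/(nβ) and, for a focusing mirror of focal length f, r ≈ f·tan θ_c. A small numeric sketch of this identification principle (the refractive index and focal length below are illustrative values, not the VHMPID design parameters):

```python
import math

def ring_radius(p, m, n, f):
    """Cherenkov ring radius (in units of f) for momentum p and mass m in GeV."""
    beta = p / math.sqrt(p * p + m * m)
    if n * beta <= 1.0:
        return None                     # below Cherenkov threshold: no light emitted
    theta_c = math.acos(1.0 / (n * beta))
    return f * math.tan(theta_c)

# Illustrative gas radiator index and mirror focal length (mm), not design values.
n, f = 1.0005, 200.0
r_pi = ring_radius(25.0, 0.1396, n, f)  # pion at 25 GeV/c: larger beta, larger ring
r_K = ring_radius(25.0, 0.4937, n, f)   # kaon at 25 GeV/c: smaller ring
```

At fixed momentum the ring radius separates particle species, which is why a mirror misalignment that distorts the ring directly degrades identification.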
Study of short-lived resonances with the ALICE Experiment at the LHC
NASA Astrophysics Data System (ADS)
Karasu Uysal, Ayben
2012-02-01
The study of short-lived resonances allows the investigation of the collision dynamics and of the properties of the hot and dense medium created in high-energy collisions. Moreover, the analysis of strange resonances addresses the topic of strangeness production. First measurements of the φ(1020), Λ*(1520), K*(892), Ξ*(1530) and doubly charged Δ(1232) resonances in pp collisions at a center-of-mass energy of 7 TeV with the ALICE apparatus at the LHC are presented. Thermal-model predictions of particle ratios in proton-proton collisions are also shown.
Measurements of jet-related observables at the LHC
NASA Astrophysics Data System (ADS)
Kokkas, P.
2015-11-01
During the first years of the LHC operation a large amount of jet data was recorded by the ATLAS and CMS experiments. In this review several measurements of jet-related observables are presented, such as multi-jet rates and cross sections, ratios of jet cross sections, jet shapes and event shape observables. All results presented here are based on jet data collected at a centre-of-mass energy of 7 TeV. Data are compared to various Monte Carlo generators, as well as to theoretical next-to-leading-order calculations allowing a test of perturbative Quantum Chromodynamics in a previously unexplored energy region.
QCD Physics with the CMS Experiment
NASA Astrophysics Data System (ADS)
Cerci, S.
2017-12-01
Jets, the signatures of quarks and gluons in the detector, can be described by Quantum Chromodynamics (QCD) in terms of parton-parton scattering. Jets are abundantly produced at the LHC's high energy scales. Measurements of inclusive jets, dijets and multijets can be used to test perturbative QCD predictions and to constrain parton distribution functions (PDFs), as well as to measure the strong coupling constant αS. The measurements use samples of proton-proton collisions collected with the CMS detector at the LHC at center-of-mass energies of 7, 8 and 13 TeV.
NASA Astrophysics Data System (ADS)
Acharya, S.; Adamová, D.; Adolfsson, J.; Aggarwal, M. M.; AglieriRinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahmad, N.; Ahn, S. U.; Aiola, S.; Akindinov, A.; Alam, S. N.; Alba, J. L. B.; Albuquerque, D. S. D.; Aleksandrov, D.; Alessandro, B.; AlfaroMolina, R.; Alici, A.; Alkin, A.; Alme, J.; Alt, T.; Altenkamper, L.; Altsybeev, I.; Alves GarciaPrado, C.; An, M.; Andrei, C.; Andreou, D.; Andrews, H. A.; Andronic, A.; Anguelov, V.; Anson, C.; Antičić, T.; Antinori, F.; Antonioli, P.; Anwar, R.; Aphecetche, L.; Appelshäuser, H.; Arcelli, S.; Arnaldi, R.; Arnold, O. W.; Arsene, I. C.; Arslandok, M.; Audurier, B.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Badalà, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Baldisseri, A.; Ball, M.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barioglio, L.; Barnaföldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Barth, K.; Bartsch, E.; Basile, M.; Bastid, N.; Basu, S.; Bathen, B.; Batigne, G.; Batista Camejo, A.; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Beltran, L. G. E.; Belyaev, V.; Bencedi, G.; Beole, S.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielčík, J.; Bielčíková, J.; Bilandzic, A.; Biswas, R.; Biswas, S.; Blair, J. T.; Blau, D.; Blume, C.; Boca, G.; Bock, F.; Bogdanov, A.; Boldizsár, L.; Bombara, M.; Bonomi, G.; Bonora, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Botta, E.; Bourjau, C.; Braun-Munzinger, P.; Bregant, M.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buhler, P.; Buncic, P.; Busch, O.; Buthelezi, Z.; Butt, J. B.; Buxton, J. T.; Cabala, J.; Caffarri, D.; Caines, H.; Caliva, A.; Calvo Villar, E.; Camerini, P.; Capon, A. 
A.; Carena, F.; Carena, W.; Carnesecchi, F.; Castillo Castellanos, J.; Castro, A. J.; Casula, E. A. R.; Ceballos Sanchez, C.; Cerello, P.; Chandra, S.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chauvin, A.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Chibante Barroso, V.; Chinellato, D. D.; Cho, S.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Chowdhury, T.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Chung, S. U.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Concas, M.; Conesa Balbastre, G.; Conesa del Valle, Z.; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Corrales Morales, Y.; Cortés Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Costanza, S.; Crkovská, J.; Crochet, P.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danisch, M. C.; Danu, A.; Das, D.; Das, I.; Das, S.; Dash, A.; Dash, S.; De, S.; De Caro, A.; de Cataldo, G.; de Conti, C.; de Cuveland, J.; De Falco, A.; De Gruttola, D.; De Marco, N.; De Pasquale, S.; De Souza, R. D.; Degenhardt, H. F.; Deisting, A.; Deloff, A.; Deplano, C.; Dhankher, P.; Di Bari, D.; Di Mauro, A.; Di Nezza, P.; Di Ruzza, B.; Diakonov, I.; Diaz Corchero, M. A.; Dietel, T.; Dillenseger, P.; Divià, R.; Djuvsland, Ø.; Dobrin, A.; Domenicis Gimenez, D.; Dönigus, B.; Dordic, O.; Doremalen, L. V. V.; Drozhzhova, T.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Duggal, A. K.; Dupieux, P.; Ehlers, R. J.; Elia, D.; Endress, E.; Engel, H.; Epple, E.; Erazmus, B.; Erhardt, F.; Espagnon, B.; Esumi, S.; Eulisse, G.; Eum, J.; Evans, D.; Evdokimov, S.; Fabbietti, L.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Feliciello, A.; Feofilov, G.; Ferencei, J.; FernándezTéllez, A.; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Feuillard, V. J. G.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. 
M.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Francisco, A.; Frankenfeld, U.; Fronze, G. G.; Fuchs, U.; Furget, C.; Furs, A.; Fusco Girard, M.; Gaardhøje, J. J.; Gagliardi, M.; Gago, A. M.; Gajdosova, K.; Gallio, M.; Galvan, C. D.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Garg, K.; Garg, P.; Gargiulo, C.; Gasik, P.; Gauger, E. F.; Gay Ducati, M. B.; Germain, M.; Ghosh, J.; Ghosh, P.; Ghosh, S. K.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glässel, P.; GomézCoral, D. M.; Gomez Ramirez, A.; Gonzalez, A. S.; Gonzalez, V.; González-Zamora, P.; Gorbunov, S.; Görlich, L.; Gotovac, S.; Grabski, V.; Graczykowski, L. K.; Graham, K. L.; Greiner, L.; Grelli, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grion, N.; Gronefeld, J. M.; Grosa, F.; Grosse-Oetringhaus, J. F.; Grosso, R.; Gruber, L.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gunji, T.; Gupta, A.; Gupta, R.; Guzman, I. B.; Haake, R.; Hadjidakis, C.; Hamagaki, H.; Hamar, G.; Hamon, J. C.; Harris, J. W.; Harton, A.; Hasan, H.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Hellbär, E.; Helstrup, H.; Herghelegiu, A.; HerreraCorral, G.; Herrmann, F.; Hess, B. A.; Hetland, K. F.; Hillemanns, H.; Hills, C.; Hippolyte, B.; Hladky, J.; Hohlweger, B.; Horak, D.; Hornung, S.; Hosokawa, R.; Hristov, P.; Hughes, C.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Iga Buitron, S. A.; Ilkaev, R.; Inaba, M.; Ippolitov, M.; Irfan, M.; Isakov, V.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jacak, B.; Jacazio, N.; Jachołkowski, A.; Jacobs, P. M.; Jadhav, M. B.; Jadlovska, S.; Jadlovsky, J.; Jaelani, S.; Jahnke, C.; Jakubowska, M. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Jeric, M.; Jimenez Bustamante, R. T.; Jones, P. G.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kang, J. 
H.; Kaplin, V.; Kar, S.; Karasu Uysal, A.; Karavichev, O.; Karavicheva, T.; Karayan, L.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Ketzer, B.; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Khatun, A.; Khuntia, A.; Kielbowicz, M. M.; Kileng, B.; Kim, D.; Kim, D. W.; Kim, D. J.; Kim, H.; Kim, J. S.; Kim, J.; Kim, M.; Kim, M.; Kim, S.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein, J.; Klein-Bösing, C.; Klewin, S.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobdaj, C.; Kofarago, M.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Konyushikhin, M.; Kopcik, M.; Kour, M.; Kouzinopoulos, C.; Kovalenko, O.; Kovalenko, V.; Kowalski, M.; Koyithatta Meethaleveedu, G.; Králik, I.; Kravčáková, A.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kučera, V.; Kuhn, C.; Kuijer, P. G.; Kumar, A.; Kumar, J.; Kumar, L.; Kumar, S.; Kundu, S.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kushpil, S.; Kweon, M. J.; Kwon, Y.; La Pointe, S. L.; La Rocca, P.; Lagana Fernandes, C.; Lai, Y. S.; Lakomov, I.; Langoy, R.; Lapidus, K.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lavicka, R.; Lazaridis, L.; Lea, R.; Leardini, L.; Lee, S.; Lehas, F.; Lehner, S.; Lehrbach, J.; Lemmon, R. C.; Lenti, V.; Leogrande, E.; León Monzón, I.; Lévai, P.; Li, S.; Li, X.; Lien, J.; Lietava, R.; Lim, B.; Lindal, S.; Lindenstruth, V.; Lindsay, S. W.; Lippmann, C.; Lisa, M. A.; Litichevskyi, V.; Ljunggren, H. M.; Llope, W. J.; Lodato, D. F.; Loenne, P. I.; Loginov, V.; Loizides, C.; Loncar, P.; Lopez, X.; López Torres, E.; Lowe, A.; Luettig, P.; Lunardon, M.; Luparello, G.; Lupi, M.; Lutz, T. H.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manko, V.; Manso, F.; Manzari, V.; Mao, Y.; Marchisone, M.; Mareš, J.; Margagliotti, G. 
V.; Margotti, A.; Margutti, J.; Marín, A.; Markert, C.; Marquard, M.; Martin, N. A.; Martinengo, P.; Martinez, J. A. L.; Martínez, M. I.; Martínez García, G.; Martinez Pedreira, M.; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Masson, E.; Mastroserio, A.; Mathis, A. M.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzilli, M.; Mazzoni, M. A.; Meddi, F.; Melikyan, Y.; Menchaca-Rocha, A.; Meninno, E.; Mercado Pérez, J.; Meres, M.; Mhlanga, S.; Miake, Y.; Mieskolainen, M. M.; Mihaylov, D.; Mihaylov, D. L.; Milano, L.; Milosevic, J.; Mischke, A.; Mishra, A. N.; Miśkowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Mohisin Khan, M.; Montes, E.; Moreira De Godoy, D. A.; Moreno, L. A. P.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Mühlheim, D.; Muhuri, S.; Mukherjee, M.; Mulligan, J. D.; Munhoz, M. G.; Münning, K.; Munzer, R. H.; Murakami, H.; Murray, S.; Musa, L.; Musinsky, J.; Myers, C. J.; Myrcha, J. W.; Naik, B.; Nair, R.; Nandi, B. K.; Nania, R.; Nappi, E.; Narayan, A.; Naru, M. U.; Natal daLuz, H.; Nattrass, C.; Navarro, S. R.; Nayak, K.; Nayak, R.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Negrao De Oliveira, R. A.; Nellen, L.; Nesbo, S. V.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Nobuhiro, A.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noris, J. C. C.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Ohlson, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Oliveira Da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Orava, R.; Oravec, M.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Pachmayer, Y.; Pacik, V.; Pagano, D.; Pagano, P.; Paić, G.; Palni, P.; Pan, J.; Pandey, A. K.; Panebianco, S.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, J.; Parmar, S.; Passfeld, A.; Pathak, S. P.; Paticchio, V.; Patra, R. N.; Paul, B.; Pei, H.; Peitzmann, T.; Peng, X.; Pereira, L. 
G.; Pereira Da Costa, H.; Peresunko, D.; Perez Lezama, E.; Peskov, V.; Pestov, Y.; Petráček, V.; Petrov, V.; Petrovici, M.; Petta, C.; Pezzi, R. P.; Piano, S.; Pikna, M.; Pillot, P.; Pimentel, L. O. D. L.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Płoskoń, M.; Planinic, M.; Pliquett, F.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Poppenborg, H.; Porteboeuf-Houssais, S.; Porter, J.; Pozdniakov, V.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Rami, F.; Rana, D. B.; Raniwala, R.; Raniwala, S.; Räsänen, S. S.; Rascanu, B. T.; Rathee, D.; Ratza, V.; Ravasenga, I.; Read, K. F.; Redlich, K.; Rehman, A.; Reichelt, P.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Reygers, K.; Riabov, V.; Ricci, R. A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rodríguez Cahuantzi, M.; Røed, K.; Rogochaya, E.; Rohr, D.; Röhrich, D.; Rokita, P. S.; Ronchetti, F.; Rosnet, P.; Rossi, A.; Rotondi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Rubio Montero, A. J.; Rueda, O. V.; Rui, R.; Russo, R.; Rustamov, A.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Saarinen, S.; Sadhu, S.; Sadovsky, S.; Šafařík, K.; Saha, S. K.; Sahlmuller, B.; Sahoo, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Sandoval, A.; Sarkar, D.; Sarkar, N.; Sarma, P.; Sas, M. H. P.; Scapparone, E.; Scarlassara, F.; Scharenberg, R. P.; Scheid, H. S.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schmidt, M. O.; Schmidt, M.; Schuchmann, S.; Schukraft, J.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Šefčík, M.; Seger, J. 
E.; Sekiguchi, Y.; Sekihata, D.; Selyuzhenkov, I.; Senosi, K.; Senyukov, S.; Serradilla, E.; Sett, P.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shahoyan, R.; Shaikh, W.; Shangaraev, A.; Sharma, A.; Sharma, A.; Sharma, M.; Sharma, M.; Sharma, N.; Sheikh, A. I.; Shigaki, K.; Shou, Q.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singhal, V.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Song, J.; Song, M.; Soramel, F.; Sorensen, S.; Sozzi, F.; Spiriti, E.; Sputowska, I.; Srivastava, B. K.; Stachel, J.; Stan, I.; Stankus, P.; Stenlund, E.; Stocco, D.; Strmen, P.; Suaide, A. A. P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Suljic, M.; Sultanov, R.; Šumbera, M.; Sumowidagdo, S.; Suzuki, K.; Swain, S.; Szabo, A.; Szarka, I.; Szczepankiewicz, A.; Tabassam, U.; Takahashi, J.; Tambave, G. J.; Tanaka, N.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Muñoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thakur, D.; Thakur, S.; Thomas, D.; Tieulent, R.; Tikhonov, A.; Timmins, A. R.; Toia, A.; Tripathy, S.; Trogolo, S.; Trombetta, G.; Tropp, L.; Trubnikov, V.; Trzaska, W. H.; Trzeciak, B. A.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Umaka, E. N.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vala, M.; Van Der Maarel, J.; Van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vande Vyvre, P.; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vázquez Doce, O.; Vechernin, V.; Veen, A. M.; Velure, A.; Vercellin, E.; Vergara Limón, S.; Vernet, R.; Vértesi, R.; Vickovic, L.; Vigolo, S.; Viinikainen, J.; Vilakazi, Z.; Villalobos Baillie, O.; Villatoro Tello, A.; Vinogradov, A.; Vinogradov, L.; Virgili, T.; Vislavicius, V.; Vodopyanov, A.; Völkl, M. A.; Voloshin, K.; Voloshin, S. 
A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Voscek, D.; Vranic, D.; Vrláková, J.; Wagner, B.; Wagner, J.; Wang, H.; Wang, M.; Watanabe, D.; Watanabe, Y.; Weber, M.; Weber, S. G.; Weiser, D. F.; Wenzel, S. C.; Wessels, J. P.; Westerhoff, U.; Whitehead, A. M.; Wiechula, J.; Wikne, J.; Wilk, G.; Wilkinson, J.; Willems, G. A.; Williams, M. C. S.; Willsher, E.; Windelband, B.; Witt, W. E.; Yalcin, S.; Yamakawa, K.; Yang, P.; Yano, S.; Yin, Z.; Yokoyama, H.; Yoo, I.-K.; Yoon, J. H.; Yurchenko, V.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. C.; Zardoshti, N.; Zarochentsev, A.; Závada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zhalov, M.; Zhang, H.; Zhang, X.; Zhang, Y.; Zhang, C.; Zhang, Z.; Zhao, C.; Zhigareva, N.; Zhou, D.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zhu, J.; Zhu, X.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zinovjev, G.; Zmeskal, J.; Zou, S.
2017-12-01
We present the charged-particle multiplicity distributions over a wide pseudorapidity range (-3.4 < η < 5.0) for pp collisions at √s = 0.9, 7, and 8 TeV at the LHC. Results are based on information from the Silicon Pixel Detector and the Forward Multiplicity Detector of ALICE, extending the pseudorapidity coverage of the earlier publications and the high-multiplicity reach. The measurements are compared to results from the CMS experiment and to PYTHIA, PHOJET and EPOS LHC event generators, as well as IP-Glasma calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savina, M. V., E-mail: savina@cern.ch
2015-06-15
A survey of the results of the Compact Muon Solenoid (CMS) experiment that concern searches for massive Kaluza-Klein graviton excitations and microscopic black holes, quantum black holes, and string balls within models of low-energy multidimensional gravity is presented on behalf of the CMS Collaboration. The analysis in question is performed on the basis of a complete sample of data accumulated for proton-proton collisions at the c.m. energies of 7 and 8 TeV at the Large Hadron Collider (LHC) over the period spanning 2010 and 2012.
Probing New Physics with Jets at the LHC
Harris, Robert
2017-12-09
The Large Hadron Collider at CERN has the potential to make a major discovery as early as 2008 from simple measurements of events with two high energy jets. This talk will present the jet trigger and analysis plans of the CMS collaboration, which were produced at the LHC Physics Center at Fermilab. Plans to search the two jet channel for generic signals of new particles and forces will be discussed. I will present the anticipated sensitivity of the CMS experiment to a variety of models of new physics, including quark compositeness, technicolor, superstrings, extra dimensions and grand unification.
Numerical Analysis of Parasitic Crossing Compensation with Wires in DAΦNE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valishev, A.; Shatilov, D.; Milardi, C.
2015-06-24
Current-bearing wire compensators were successfully used in the 2005-2006 run of the DAΦNE collider to mitigate the detrimental effects of parasitic beam-beam interactions. A marked improvement of the positron beam lifetime was observed in machine operation with the KLOE detector. In view of the possible application of wire beam-beam compensators for the High Luminosity LHC upgrade, we revisit the DAΦNE experiments. We use an improved model of the accelerator with the goal of validating the modern simulation tools and providing valuable input for the LHC upgrade project.
NASA Astrophysics Data System (ADS)
McKee, Shawn; Kissel, Ezra; Meekhof, Benjeman; Swany, Martin; Miller, Charles; Gregorowicz, Michael
2017-10-01
We report on the first year of the OSiRIS project (NSF Award #1541335; UM, IU, MSU and WSU), which is targeting the creation of a distributed Ceph storage infrastructure coupled with software-defined networking to provide high-performance access for well-connected locations on any participating campus. The project's goal is to provide a single scalable, distributed storage infrastructure that allows researchers at each campus to read, write, manage and share data directly from their own computing locations. The NSF CC*DNI DIBBS program, which funded OSiRIS, seeks solutions to the challenges of multi-institutional collaborations involving large amounts of data, and we are exploring the creative use of Ceph and networking to address those challenges. While OSiRIS will eventually serve a broad range of science domains, its first adopter is the LHC ATLAS detector project via the ATLAS Great Lakes Tier-2 (AGLT2), jointly located at the University of Michigan and Michigan State University. Part of our presentation will cover how ATLAS is using the OSiRIS infrastructure and our experience integrating our first user community. The presentation will also review the motivations for and goals of the project, the technical details of the OSiRIS infrastructure, the challenges in providing such an infrastructure, and the technical choices made to address those challenges. We will conclude with our plans for the remaining four years of the project and our vision for what we hope to deliver by the project's end.
Arnold, Jeffrey
2018-05-14
Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87); techniques for improving the accuracy of summation, multiplication and data interchange; compiler options for gcc and icc affecting floating-point operations; and hazards to be avoided. About the speaker: Jeffrey M. Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; for part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.
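A standard example of the summation-accuracy techniques such a talk covers is compensated summation; the sketch below shows the Kahan-Neumaier variant (chosen here as an illustration, not taken from the talk's actual material):

```python
def compensated_sum(xs):
    """Kahan-Neumaier summation: carry rounding errors in a correction term."""
    s = 0.0
    c = 0.0                      # running compensation for lost low-order bits
    for x in xs:
        t = s + x
        if abs(s) >= abs(x):
            c += (s - t) + x     # low-order digits of x were rounded away
        else:
            c += (x - t) + s     # low-order digits of s were rounded away
        s = t
    return s + c

data = [1e16, 1.0, -1e16]
naive = sum(data)                # the 1.0 is absorbed into 1e16 and lost: 0.0
exact = compensated_sum(data)    # the correction term recovers it: 1.0
```

In binary64 arithmetic 1e16 + 1.0 rounds back to 1e16, so the naive left-to-right sum silently drops the middle term; the compensation variable retains it.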
File-based data flow in the CMS Filter Farm
NASA Astrophysics Data System (ADS)
Andre, J.-M.; Andronidis, A.; Bawej, T.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; Nunez-Barranco-Fernandez, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.
2015-12-01
During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small “documents” using the JSON encoding, either by services in the flow of the HLT execution (for rates, etc.) or by watchdog processes. These “files” can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.
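The bookkeeping "documents" can be pictured as tiny JSON files that an aggregation step later merges; a schematic sketch in which the field names are illustrative, not the actual CMS DAQ schema:

```python
import json

# Each HLT process periodically emits a small JSON document with its counters.
doc1 = json.dumps({"process": "hlt-01", "events_in": 1000, "events_accepted": 12})
doc2 = json.dumps({"process": "hlt-02", "events_in": 1400, "events_accepted": 19})

def aggregate(docs):
    """Merge per-process documents into one summary, e.g. for output bookkeeping."""
    total_in = total_acc = 0
    for d in docs:
        rec = json.loads(d)
        total_in += rec["events_in"]
        total_acc += rec["events_accepted"]
    return {"events_in": total_in, "events_accepted": total_acc}

summary = aggregate([doc1, doc2])
```

Because each document is self-describing, the aggregator needs no shared state with the HLT processes, which is the decoupling the file-based design buys.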
Search in 8 TeV proton-proton collisions with the MoEDAL monopole-trapping test array
NASA Astrophysics Data System (ADS)
Pinfold, J.; Soluk, R.; Lacarrère, D.; Katre, A.; Mermod, P.; Bendtz, K.; Milstead, D.
2014-06-01
The magnetic monopole appears in theories of spontaneous gauge symmetry breaking and its existence would explain the quantisation of electric charge. MoEDAL is the latest approved LHC experiment, designed to search directly for monopoles produced in high-energy collisions. It has now taken data for the first time. The MoEDAL detectors are based on two complementary techniques: nuclear-track detectors are sensitive to the high-ionisation signature expected from a monopole, and the magnetic monopole trapper (MMT) relies on the stopping and trapping of monopoles inside an aluminium array which is then analysed with a superconducting magnetometer. The first results obtained with the MoEDAL MMT test array deployed in 2012 are presented. This experiment probes monopoles carrying a multiple of the fundamental unit magnetic charge for the first time at the LHC.
The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi
The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as the central HDFS and Hadoop Spark cluster. The system leverages modern technologies such as a document-oriented database and the Hadoop eco-system to provide the necessary flexibility to reliably process, store, and aggregate O(1M) documents on a daily basis. We describe the data transformation, the short- and long-term storage layers, and the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.
The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC
Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi
2018-03-19
The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as the central HDFS and Hadoop Spark cluster. The system leverages modern technologies such as a document-oriented database and the Hadoop eco-system to provide the necessary flexibility to reliably process, store, and aggregate O(1M) documents on a daily basis. We describe the data transformation, the short- and long-term storage layers, and the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.
Heavy quark energy loss in high multiplicity proton-proton collisions at the LHC.
Vogel, Sascha; Gossiaux, Pol Bernard; Werner, Klaus; Aichelin, Jörg
2011-07-15
One of the most promising probes to study deconfined matter created in high energy nuclear collisions is the energy loss of (heavy) quarks. Experiments at the Relativistic Heavy Ion Collider have shown that even charm and bottom quarks, despite their high mass, experience a remarkable medium suppression in the quark-gluon plasma. In this exploratory investigation we study the energy loss of heavy quarks in high multiplicity proton-proton collisions at LHC energies. Although the colliding systems are smaller than those at the Relativistic Heavy Ion Collider (p+p vs Au+Au), the higher energy might lead to multiplicities comparable to Cu+Cu collisions at the Relativistic Heavy Ion Collider. The interaction of charm quarks with this environment gives rise to a non-negligible suppression of high momentum heavy quarks in elementary collisions.
Operational Experience with the Frontier System in CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter
2012-06-20
The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses Tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB of data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the Tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.
Operational Experience with the Frontier System in CMS
NASA Astrophysics Data System (ADS)
Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter; Du, Ran; Wang, Weizhen
2012-12-01
The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses Tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB of data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the Tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.
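The caching pattern described, where responses are held close to clients so the central database rarely sees repeated queries, reduces to a few lines; a toy sketch under that assumption, not the Frontier/Squid implementation:

```python
import time

class QueryCache:
    """Cache query responses with a time-to-live, shielding the backend database."""
    def __init__(self, backend, ttl=300.0):
        self.backend = backend
        self.ttl = ttl
        self.store = {}          # query -> (expiry_time, response)
        self.backend_hits = 0    # how often the backend actually had to answer

    def get(self, query):
        entry = self.store.get(query)
        if entry and entry[0] > time.time():
            return entry[1]      # fresh cached copy: backend never sees the query
        self.backend_hits += 1
        response = self.backend(query)
        self.store[query] = (time.time() + self.ttl, response)
        return response

cache = QueryCache(lambda q: "payload:" + q)
a = cache.get("alignment/run42")     # first request goes to the backend
b = cache.get("alignment/run42")     # repeat request is served from the cache
```

The TTL plays the role of cache expiry in the real system: conditions data changes rarely, so thousands of worker nodes can share one backend response until it goes stale.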
NASA Astrophysics Data System (ADS)
Hauth, T.; Innocente, V.; Piparo, D.
2012-12-01
The processing of data acquired by the CMS detector at the LHC is carried out with an object-oriented C++ software framework: CMSSW. With the increasing luminosity delivered by the LHC, the treatment of the recorded data requires extraordinarily large computing resources, also in terms of CPU usage. A possible way to cope with this task is to exploit the features offered by the latest microprocessor architectures. Modern CPUs provide several vector units, whose capacity grows steadily with the introduction of new processor generations, and the main vendors offer an increasing number of cores per die, even on consumer hardware. Most recent C++ compilers provide facilities to take advantage of these innovations, either through explicit statements in the program sources or by automatically adapting the generated machine instructions to the available hardware, without the need to modify the existing code base. Programming techniques for implementing reconstruction algorithms and optimised data structures are presented that aim at scalable vectorisation and parallelisation of the calculations; one of their features is the use of new language facilities of the C++11 standard. Portions of the CMSSW framework that have been found especially profitable for the application of vectorisation and multi-threading techniques are illustrated. Specific utility components have been developed to support vectorisation and parallelisation; they can easily become part of a larger common library. To conclude, careful measurements are described, which show the execution speed-ups achieved via vectorised and multi-threaded code in the context of CMSSW.
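The "optimised data structures" mentioned above usually means converting an array-of-structures layout into a structure-of-arrays layout, so that each field lives in one contiguous buffer a vectorising compiler can stream through. The sketch below shows the layout transformation itself (in Python for readability; in CMSSW this would be C++ over contiguous buffers); the field names are illustrative, not CMSSW data members.

```python
# Array-of-structures -> structure-of-arrays transformation. Contiguous
# per-field sequences are what allow a vectorising compiler (or a SIMD
# library) to process several hits per instruction.
import math

# Array-of-structures: one record per hit -- poor for vectorisation,
# because the x values are strided across memory.
hits_aos = [{"x": float(i), "y": 2.0 * i, "z": 0.5 * i} for i in range(8)]

# Structure-of-arrays: one contiguous sequence per coordinate.
hits_soa = {
    "x": [h["x"] for h in hits_aos],
    "y": [h["y"] for h in hits_aos],
    "z": [h["z"] for h in hits_aos],
}

def radii_soa(soa):
    # A single tight loop over parallel arrays: the shape that
    # auto-vectorises when written in C++ over contiguous buffers.
    return [math.sqrt(x * x + y * y) for x, y in zip(soa["x"], soa["y"])]

r = radii_soa(hits_soa)
```

The behaviour is identical in both layouts; only the memory access pattern changes, which is what the compiler's vector units care about.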
Status and commissioning of the CMS experiment
NASA Astrophysics Data System (ADS)
Wulz, C.-E.
2008-05-01
The construction status of the CMS experiment at the Large Hadron Collider and strategies for commissioning the subdetectors, the magnet, the trigger and the data acquisition are described. The first operations of CMS as a unified system, using either cosmic rays or test data, and the planned activities until the startup of the LHC are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balazs, Csaba; Conrad, Jan; Farmer, Ben
2017-10-04
Imaging atmospheric Cherenkov telescopes (IACTs) that are sensitive to potential γ-ray signals from dark matter (DM) annihilation above ~50 GeV will soon be superseded by the Cherenkov Telescope Array (CTA). CTA will have a point-source sensitivity an order of magnitude better than currently operating IACTs and will cover a broad energy range between 20 GeV and 300 TeV. Using effective field theory and simplified models to calculate γ-ray spectra resulting from DM annihilation, we compare the prospects to constrain such models with CTA observations of the Galactic center with current and near-future measurements at the Large Hadron Collider (LHC) and direct detection experiments. For DM annihilations via vector or pseudoscalar couplings, CTA observations will be able to probe DM models out of reach of the LHC and, if DM is coupled to standard fermions by a pseudoscalar particle, beyond the limits of current direct detection experiments.
NASA Astrophysics Data System (ADS)
Borg, M.; Bertarelli, A.; Carra, F.; Gradassi, P.; Guardia-Valenzuela, J.; Guinchard, M.; Izquierdo, G. Arnau; Mollicone, P.; Sacristan-de-Frutos, O.; Sammut, N.
2018-03-01
The CERN Large Hadron Collider is currently being upgraded, through the High Luminosity project, to operate at a stored beam energy of 680 MJ. LHC performance depends on the functionality of the beam collimation systems, which are essential for safe beam cleaning and machine protection. A dedicated beam experiment, the HRMT-23 campaign, was set up at the CERN High Radiation to Materials facility. It investigates the behavior of three collimation jaws with novel composite absorbers made of copper diamond, molybdenum carbide graphite, and carbon-fiber carbon, under accidental scenarios involving direct beam impact on the material. Material characterization is imperative for the design, execution, and analysis of such experiments. This paper presents new data on, and analysis of, the thermostructural characteristics of some of the absorber materials, commissioned within CERN facilities. The characterized elastic properties are then refined through the development and implementation of a mixed numerical-experimental optimization technique.
Intelligent operations of the data acquisition system of the ATLAS experiment at LHC
NASA Astrophysics Data System (ADS)
Anders, G.; Avolio, G.; Lehmann Miotto, G.; Magnoni, L.
2015-05-01
The ATLAS experiment at the Large Hadron Collider at CERN relies on a complex and highly distributed Trigger and Data Acquisition (TDAQ) system to gather and select particle collision data obtained at unprecedented energies and rates. The Run Control (RC) system is the component steering the data acquisition: it starts and stops processes and carries all data-taking elements through well-defined states in a coherent way. Taking into account the lessons learnt during LHC Run 1, the RC was completely re-designed and re-implemented during the LHC Long Shutdown 1 (LS1). As a result of the new design, the RC is assisted by the Central Hint and Information Processor (CHIP) service, which can truly be considered its “brain”. CHIP is an intelligent system able to supervise ATLAS data taking, take operational decisions and handle abnormal conditions. In this paper, the design, implementation and performance of the RC/CHIP system are described. Particular emphasis is put on the way the RC and CHIP cooperate and on the benefits brought by the Complex Event Processing engine. Additionally, some error recovery scenarios are analysed for which the intervention of human experts is now rendered unnecessary.
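The core idea of complex event processing, as used by CHIP, is to correlate events in the operational stream over time windows and trigger actions when a pattern matches. The toy rule below sketches that idea: fire an automated recovery when several crashes of one application pile up in a short window. The event names, thresholds, and action strings are invented for the sketch and do not reflect CHIP's actual rules or API.

```python
# Toy complex-event-processing rule in the spirit of CHIP: watch an
# operational event stream and trigger an automatic recovery action when
# a temporal pattern is detected.
from collections import deque

class CrashStormRule:
    """Fire a recovery action if >= 3 crashes of one app occur within 10 s."""
    def __init__(self, threshold=3, window=10.0):
        self.threshold = threshold
        self.window = window
        self.history = {}   # app name -> deque of recent crash timestamps
        self.actions = []   # recovery actions issued so far

    def on_event(self, timestamp, app, kind):
        if kind != "crash":
            return
        times = self.history.setdefault(app, deque())
        times.append(timestamp)
        # Discard crashes that fell out of the sliding time window.
        while times and timestamp - times[0] > self.window:
            times.popleft()
        if len(times) >= self.threshold:
            self.actions.append((timestamp, f"restart {app}"))
            times.clear()   # do not re-fire on the same burst

rule = CrashStormRule()
events = [(0.0, "ros-1", "crash"), (1.0, "ros-1", "heartbeat"),
          (2.0, "ros-1", "crash"), (4.0, "ros-1", "crash"),
          (20.0, "ros-1", "crash")]
for t, app, kind in events:
    rule.on_event(t, app, kind)
# Three crashes within 10 s trigger one restart at t=4.0; the isolated
# crash at t=20.0 does not fire the rule.
```

A real engine evaluates many such rules concurrently against a high-rate stream, which is what makes expert intervention unnecessary for the recovered scenarios.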
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Kirtiman; Homi Bhabha National Institute, Mumbai; Jana, Sudip
2018-03-29
We consider the collider phenomenology of a simple extension of the Standard Model (SM), consisting of an EW isospin-3/2 scalar, Δ, and a pair of EW isospin-1 vector-like fermions, Σ and Σ̄, responsible for generating tiny neutrino masses via the effective dimension-seven operator. This scalar quadruplet with hypercharge Y = 3 has a plethora of implications for collider experiments. Its signatures are expected to be seen at TeV-scale colliders if the quadruplet masses are not too far above the electroweak symmetry-breaking scale. In this article, we study the phenomenology of the multi-charged quadruplet scalars, in particular the multi-lepton signatures at the Large Hadron Collider (LHC) arising from the production and decays of the triply and doubly charged scalars. We study Drell-Yan (DY) pair production as well as pair production of the charged scalars via photon-photon fusion; for doubly and triply charged scalars, photon fusion contributes significantly at large scalar masses. We also study LHC constraints on the masses of the doubly charged scalars in this model and derive a lower mass limit of 725 GeV on the doubly charged quadruplet scalar.
NASA Astrophysics Data System (ADS)
Tahir, N. A.; Lomonosov, I. V.; Shutov, A.; Udrea, S.; Deutsch, C.; Fortov, V. E.; Gryaznov, V.; Hoffmann, D. H. H.; Jacobi, J.; Kain, V.; Kuster, M.; Ni, P.; Piriz, A. R.; Schmidt, R.; Spiller, P.; Varentsov, D.; Zioutas, K.
2006-04-01
Detailed theoretical studies have shown that the intense heavy-ion beams that will be generated at the future Facility for Antiprotons and Ion Research (FAIR) (Henning 2004 Nucl. Instrum. Methods B 214 211) at Darmstadt will be a very efficient tool for creating high-energy-density (HED) states in matter, including strongly coupled plasmas. In this paper we show, with the help of two-dimensional numerical simulations, the interesting physical states that can be achieved for different beam intensities, using zinc as a test material. Another very interesting experiment that can be performed with the intense heavy-ion beam at FAIR is the low-entropy compression of a test material such as hydrogen enclosed in a cylindrical shell of a high-Z material such as lead or gold. In such an experiment, one can study the problem of hydrogen metallization and the interiors of giant planets. Moreover, we discuss an interesting method to diagnose HED matter like that at the centre of the Sun. We have also carried out simulations to study the damage caused by the full impact of the Large Hadron Collider (LHC) beam on a superconducting magnet. An interesting outcome of this study is that the LHC beam can induce HED states in matter.
Storage element performance optimization for CMS analysis jobs
NASA Astrophysics Data System (ADS)
Behrmann, G.; Dahlblom, J.; Guldmyr, J.; Happonen, K.; Lindén, T.
2012-12-01
Tier-2 computing sites in the Worldwide LHC Computing Grid (WLCG) host CPU resources (Compute Element, CE) and storage resources (Storage Element, SE). The vast amount of data that needs to be processed from the Large Hadron Collider (LHC) experiments requires good and efficient use of the available resources. Good CPU efficiency for the end users' analysis jobs requires that the performance of the storage system scales with the I/O requests from hundreds or even thousands of simultaneous jobs. In this presentation we report on work to improve the SE performance at the Helsinki Institute of Physics (HIP) Tier-2 used for the Compact Muon Solenoid (CMS) experiment at the LHC. Statistics from CMS grid jobs are collected and stored in the CMS Dashboard for further analysis, which allows easy performance monitoring by the sites and by the CMS collaboration. As part of the monitoring framework CMS has used the JobRobot, which sends 100 analysis jobs to each site every four hours; CMS also uses the HammerCloud tool for site monitoring and stress testing, which has since replaced the JobRobot. The performance of the analysis workflow submitted with JobRobot or HammerCloud can be used to track the effects of site configuration changes, since the workflow is kept the same for all sites and for months at a time. The CPU efficiency of the JobRobot jobs at HIP was increased by approximately 50% to more than 90%, by tuning the SE and through improvements in the CMSSW and dCache software. The performance of the CMS analysis jobs improved significantly too. Similar work has been done at other CMS Tier-2 sites, and on average the CPU efficiency for CMSSW jobs increased during 2011. Better monitoring of the SE allows faster detection of problems, so that the performance level can be kept high. The next storage upgrade at HIP consists of SAS disk enclosures, which can be stress tested on demand with HammerCloud workflows to make sure that the I/O performance is good.
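The CPU-efficiency metric used throughout this kind of grid-job monitoring is simply CPU time divided by wall-clock time; a job stalled on storage I/O accumulates wall time but little CPU time. The numbers below are invented to illustrate the roughly 50% relative improvement quoted above (from about 60% to over 90%), not measured HIP values.

```python
# CPU efficiency as used for grid-job monitoring: cpu_time / wall_time.
# The difference between the two is dominated by I/O wait, so tuning the
# storage element raises the efficiency without changing the CPU work done.
def cpu_efficiency(cpu_seconds, wall_seconds):
    """Fraction of wall-clock time the job actually spent computing."""
    return cpu_seconds / wall_seconds

# Same analysis job (one hour of CPU work), before and after SE tuning:
before = cpu_efficiency(cpu_seconds=3600.0, wall_seconds=6000.0)  # 0.60
after = cpu_efficiency(cpu_seconds=3600.0, wall_seconds=3900.0)   # ~0.92
relative_gain = after / before                                    # ~1.54
```

Because the CPU work per job is fixed, a higher efficiency means the same batch slots process more jobs per day, which is why SE tuning pays off across a whole Tier-2.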
NASA Astrophysics Data System (ADS)
Burkart, F.; Schmidt, R.; Raginel, V.; Wollmann, D.; Tahir, N. A.; Shutov, A.; Piriz, A. R.
2015-08-01
In a previous paper [Schmidt et al., Phys. Plasmas 21, 080701 (2014)], we presented the first results of beam-matter interaction experiments carried out at the High Radiation to Materials test facility at CERN. In these experiments, extended cylindrical targets of solid copper were irradiated with a beam of 440 GeV protons delivered by the Super Proton Synchrotron (SPS). The beam comprised a large number of high-intensity proton bunches, each with a length of 0.5 ns and a 50 ns gap between neighboring bunches, while the length of the entire bunch train was about 7 μs. These experiments established the existence of the hydrodynamic tunneling phenomenon for the first time. Detailed numerical simulations of these experiments were also carried out and reported in another paper [Tahir et al., Phys. Rev. E 90, 063112 (2014)]. Excellent agreement was found between the experimental measurements and the simulation results, validating our previous simulations done using the Large Hadron Collider (LHC) beam of 7 TeV protons [Tahir et al., Phys. Rev. Spec. Top.--Accel. Beams 15, 051003 (2012)]. According to these simulations, the range of the full LHC proton beam and the hadronic shower can be increased by more than an order of magnitude due to hydrodynamic tunneling, compared to that of a single proton. This effect is of considerable importance for the design of the machine protection systems of hadron accelerators such as the SPS, the LHC, and the Future Circular Collider. Recently, using metal cutting technology, the targets used in these experiments were dissected into finer pieces for visual and microscopic inspection in order to establish the precise penetration depth of the protons and the corresponding hadronic shower. This, we believe, will help in studying the very important phenomenon of hydrodynamic tunneling in a more quantitative manner. The details of this experimental work, together with a comparison with the numerical simulations, are presented in this paper.
A new Scheme for ATLAS Trigger Simulation using Legacy Code
NASA Astrophysics Data System (ADS)
Galster, Gorm; Stelzer, Joerg; Wiedenmann, Werner
2014-06-01
Analyses at the LHC which search for rare physics processes or determine Standard Model parameters with high precision require accurate simulations of the detector response and the event selection processes. The accurate determination of the trigger response is crucial for the determination of overall selection efficiencies and signal sensitivities. For the generation and the reconstruction of simulated event data, the most recent software releases are usually used to ensure the best agreement between simulated and real data. For the simulation of the trigger selection process, however, ideally the same software release that was deployed when the real data were taken should be used. This potentially requires running software dating many years back. Having a strategy for running old software in a modern environment thus becomes essential when data simulated for past years start to represent a sizable fraction of the total. We examined the requirements and possibilities for such a simulation scheme within the ATLAS software framework and successfully implemented a proof-of-concept simulation chain. One of the greatest challenges was the choice of a data format which promises long-term compatibility with old and new software releases. Over the time periods envisaged, data format incompatibilities are also likely to emerge in databases and other external support services, and software availability may become an issue when, for example, support for the underlying operating system stops. In this paper we present the problems encountered, the solutions developed, and proposals for future development. Some ideas reach beyond the retrospective trigger simulation scheme in ATLAS, as they also touch more general aspects of data preservation.
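The long-term-compatibility concern above is usually addressed by embedding an explicit format version in the data and dispatching on it when reading, so files written by old software remain readable by new software. The sketch below shows that pattern in minimal form using JSON as a stand-in serialization; the field names and version numbers are invented and do not reflect ATLAS's actual event data format.

```python
# Versioned-format reader sketch: dispatch on an explicit format_version
# so that records written by old releases stay readable by new ones.
import json

def write_record(payload, version=2):
    """Current writer: wraps the payload in a versioned envelope."""
    return json.dumps({"format_version": version, "payload": payload})

def read_record(blob):
    """Reader that understands both the legacy and the current layout."""
    doc = json.loads(blob)
    version = doc.get("format_version", 1)  # legacy data carries no tag
    if version == 1:
        # Legacy layout: fields stored at the top level, no envelope.
        return {k: v for k, v in doc.items() if k != "format_version"}
    if version == 2:
        return doc["payload"]
    raise ValueError(f"unsupported format_version {version}")

# A new-format record and a legacy record decode to the same content.
new_blob = write_record({"trigger_mask": 7})
old_blob = '{"trigger_mask": 7}'
```

The design choice is that the reader, not the writer, absorbs the complexity: every release must keep the old branches alive, which is exactly the maintenance burden the abstract describes.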
Testing beam-induced quench levels of LHC superconducting magnets
NASA Astrophysics Data System (ADS)
Auchmann, B.; Baer, T.; Bednarek, M.; Bellodi, G.; Bracco, C.; Bruce, R.; Cerutti, F.; Chetvertkova, V.; Dehning, B.; Granieri, P. P.; Hofle, W.; Holzer, E. B.; Lechner, A.; Nebot Del Busto, E.; Priebe, A.; Redaelli, S.; Salvachua, B.; Sapinski, M.; Schmidt, R.; Shetty, N.; Skordis, E.; Solfaroli, M.; Steckert, J.; Valuch, D.; Verweij, A.; Wenninger, J.; Wollmann, D.; Zerlauth, M.
2015-06-01
In the years 2009-2013 the Large Hadron Collider (LHC) was operated with top beam energies of 3.5 TeV and, from 2012, 4 TeV per proton instead of the nominal 7 TeV, with the currents in the superconducting magnets reduced accordingly. To date only seventeen beam-induced quenches have occurred: eight of them during specially designed quench tests, the others during injection. There has not been a single beam-induced quench during normal collider operation with stored beam. The conditions, however, are expected to become much more challenging after the long LHC shutdown. The magnets will be operating at near-nominal currents, in the presence of high-energy and high-intensity beams with a stored energy of up to 362 MJ per beam. In this paper we summarize our efforts to understand the quench levels of LHC superconducting magnets. We describe beam-loss events and dedicated experiments with beam, as well as the simulation methods used to reproduce the observable signals. The simulated energy deposition in the coils is compared to the quench levels predicted by electrothermal models, thus allowing one to validate and improve the models which are used to set beam-dump thresholds on beam-loss monitors for Run 2.
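The 362 MJ figure quoted above can be checked back-of-envelope from the nominal LHC beam parameters (2808 bunches of 1.15e11 protons at 7 TeV per proton). This is an independent cross-check of the abstract's number, not a calculation taken from the paper.

```python
# Back-of-envelope check of the stored-energy figure: total protons in the
# beam times the kinetic energy per proton, converted from eV to joules.
EV_TO_J = 1.602176634e-19   # exact SI value of the elementary charge

def stored_beam_energy_mj(n_bunches, protons_per_bunch, energy_tev):
    protons = n_bunches * protons_per_bunch
    joules = protons * energy_tev * 1e12 * EV_TO_J
    return joules / 1e6

# Nominal LHC fill: 2808 bunches of 1.15e11 protons at 7 TeV.
e_mj = stored_beam_energy_mj(2808, 1.15e11, 7.0)   # ~362 MJ per beam
```

That a few-hundred-microsecond loss of even a tiny fraction of this energy into a coil can exceed the quench level is what makes the beam-loss-monitor thresholds discussed in the paper so critical.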
Signals of New Physics in the Underlying Event
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harnik, Roni (Stanford U., ITP; SLAC); Wizansky, Tommer
2010-06-11
LHC searches for new physics focus on combinations of hard physics objects. In this work we propose a qualitatively different soft signal for new physics at the LHC: the 'anomalous underlying event'. Every hard LHC event is accompanied by a soft underlying event due to QCD and pile-up effects. Though the underlying event is usually studied for QCD and Monte Carlo tuning purposes, here we propose incorporating an underlying-event analysis in some searches for new physics. An excess of anomalous underlying events may be a smoking-gun signal for particular new physics scenarios, such as 'quirks' or 'hidden valleys', in which large amounts of energy may be emitted by a large multiplicity of soft particles. We discuss possible search strategies for such soft diffuse signals in the tracking systems and calorimetry of the LHC experiments. We present a detailed study of the calorimetric signal in a concrete example, a simple quirk model motivated by folded supersymmetry. In these models the production and radiative decay of highly excited quirk bound states leads to an 'antenna pattern' of soft unclustered energy. Using a dedicated simulation of a toy detector and a 'CMB-like' multipole analysis, we compare the signal to the expected backgrounds.
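The 'CMB-like' multipole analysis amounts to expanding the angular distribution of soft calorimeter energy in harmonic modes and looking for excess power at low multipoles, where an antenna pattern concentrates. The one-dimensional (azimuthal) sketch below injects a quadrupole modulation by hand and recovers it as power at mode m = 2; all numbers are illustrative and this is a simplification of the full spherical analysis in the paper.

```python
# Azimuthal multipole analysis sketch: Fourier power of calorimeter
# energy binned uniformly in phi. An injected cos(2*phi) "antenna
# pattern" shows up as power concentrated at m = 2.
import math

def multipole_power(energies, m):
    """Power |a_m|^2 of azimuthal mode m for uniformly binned energies."""
    n = len(energies)
    re = sum(e * math.cos(m * 2.0 * math.pi * k / n) for k, e in enumerate(energies))
    im = sum(e * math.sin(m * 2.0 * math.pi * k / n) for k, e in enumerate(energies))
    return re * re + im * im

n_bins = 64
# Flat underlying event plus a cos(2*phi) modulation (quadrupole signal).
deposits = [1.0 + 0.5 * math.cos(2.0 * (2.0 * math.pi * k / n_bins))
            for k in range(n_bins)]

powers = {m: multipole_power(deposits, m) for m in range(1, 5)}
# Power concentrates at m = 2; the flat component and other modes vanish
# by the discrete orthogonality of the Fourier basis.
```

Against an isotropic QCD underlying event, such a low-multipole excess is the diffuse analogue of a resonance peak in a hard-object search.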
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hektor, Andi; Marzola, Luca; Institute of Physics, University of Tartu, Ravila 14c, 50411 Tartu
Motivated by the recent indications for a 750 GeV resonance in the di-photon final state at the LHC, in this work we analyse the compatibility of the excess with the broad photon excess detected at the Galactic Centre. Intriguingly, by analysing the parameter space of an effective model where a 750 GeV pseudoscalar particle mediates the interaction between the Standard Model and a scalar dark sector, we prove the compatibility of the two signals. We show, however, that the LHC mono-jet searches and the Fermi-LAT measurements strongly limit the viable parameter space. We also comment on the possible impact of the cosmic antiproton flux measurement by the AMS-02 experiment.
Commissioning of the ATLAS pixel detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
ATLAS Collaboration; Golling, Tobias
2008-09-01
The ATLAS pixel detector is a high-precision silicon tracking device located closest to the LHC interaction point. It belongs to the first generation of its kind in a hadron collider experiment. It will provide crucial pattern recognition information and will largely determine the ability of ATLAS to precisely track particle trajectories and find secondary vertices. It was the last detector to be installed in ATLAS in June 2007, has been fully connected and tested in situ during spring and summer 2008, and is ready for the imminent LHC turn-on. The highlights of the past and future commissioning activities of the ATLAS pixel system are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pankov, A. A., E-mail: pankov@ictp.it; Serenkova, I. A., E-mail: inna.serenkova@cern.ch; Tsytrinov, A. V., E-mail: tsytrin@gstu.by
2015-06-15
Prospects of discovering and identifying effects of extra spatial dimensions in dilepton and diphoton production at the Large Hadron Collider (LHC) are studied. Such effects may be revealed by the characteristic behavior of the invariant-mass distributions of dileptons and diphotons, and their identification can be performed on the basis of an analysis of their angular distributions. The discovery and identification reaches are estimated for the scale parameter M_S of the Kaluza-Klein gravitational towers, which can be determined in experiments devoted to measuring the dilepton and diphoton channels at the LHC.
New developments in CVD diamond for detector applications
NASA Astrophysics Data System (ADS)
Adam, W.; Berdermann, E.; Bergonzo, P.; de Boer, W.; Bogani, F.; Borchi, E.; Brambilla, A.; Bruzzi, M.; Colledani, C.; Conway, J.; D'Angelo, P.; Dabrowski, W.; Delpierre, P.; Dulinski, W.; Doroshenko, J.; van Eijk, B.; Fallou, A.; Fischer, P.; Fizzotti, F.; Furetta, C.; Gan, K. K.; Ghodbane, N.; Grigoriev, E.; Hallewell, G.; Han, S.; Hartjes, F.; Hrubec, J.; Husson, D.; Kagan, H.; Kaplon, J.; Kass, R.; Keil, M.; Knoepfle, K. T.; Koeth, T.; Krammer, M.; Logiudice, A.; Lu, R.; Mac Lynne, L.; Manfredotti, C.; Meier, D.; Menichelli, D.; Meuser, S.; Mishina, M.; Moroni, L.; Noomen, J.; Oh, A.; Pernicka, M.; Perera, L.; Potenza, R.; Riester, J. L.; Roe, S.; Rudge, A.; Sala, S.; Sampietro, M.; Schnetzer, S.; Sciortino, S.; Stelzer, H.; Stone, R.; Sutera, C.; Trischuk, W.; Tromson, D.; Tuve, C.; Vincenzo, B.; Weilhammer, P.; Wermes, N.; Wetstein, M.; Zeuner, W.; Zoeller, M.
Chemical Vapor Deposition (CVD) diamond has been discussed extensively as an alternative sensor material for use very close to the interaction region of the LHC and other machines where extreme radiation conditions exist. During the last seven years the RD42 collaboration has developed diamond detectors and tested them with LHC electronics, with the goal of creating a device usable by experiments. The most recent results of this work are presented. Recently, a new form of CVD diamond has been developed: single-crystal CVD diamond, which resolves many of the issues associated with poly-crystalline CVD material. The first tests of this material are also presented.
Fermilab Heroes of the LHC: Steve Nahn and Vivian O’Dell
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nahn, Steve; O’Dell, Vivian
2017-09-11
The experiments based at the Large Hadron Collider in Switzerland are undergoing a constant series of upgrades. Fermilab scientists Steve Nahn and Vivian O’Dell lead these upgrade efforts in the United States.
NASA Astrophysics Data System (ADS)
Acharya, B.; Alexandre, J.; Baines, S.; Benes, P.; Bergmann, B.; Bernabéu, J.; Branzas, H.; Campbell, M.; Caramete, L.; Cecchini, S.; de Montigny, M.; De Roeck, A.; Ellis, J. R.; Fairbairn, M.; Felea, D.; Flores, J.; Frank, M.; Frekers, D.; Garcia, C.; Hirt, A. M.; Janecek, J.; Kalliokoski, M.; Katre, A.; Kim, D.-W.; Kinoshita, K.; Korzenev, A.; Lacarrère, D. H.; Lee, S. C.; Leroy, C.; Lionti, A.; Mamuzic, J.; Margiotta, A.; Mauri, N.; Mavromatos, N. E.; Mermod, P.; Mitsou, V. A.; Orava, R.; Parker, B.; Pasqualini, L.; Patrizii, L.; Pǎvǎlaş, G. E.; Pinfold, J. L.; Popa, V.; Pozzato, M.; Pospisil, S.; Rajantie, A.; Ruiz de Austri, R.; Sahnoun, Z.; Sakellariadou, M.; Sarkar, S.; Semenoff, G.; Shaa, A.; Sirri, G.; Sliwa, K.; Soluk, R.; Spurio, M.; Srivastava, Y. N.; Suk, M.; Swain, J.; Tenti, M.; Togo, V.; Tuszyński, J. A.; Vento, V.; Vives, O.; Vykydal, Z.; Whyntie, T.; Widom, A.; Willems, G.; Yoon, J. H.; Zgura, I. S.; MoEDAL Collaboration
2017-02-01
MoEDAL is designed to identify new physics in the form of long-lived highly ionizing particles produced in high-energy LHC collisions. Its arrays of plastic nuclear-track detectors and aluminium trapping volumes provide two independent passive detection techniques. We present here the results of a first search for magnetic monopole production in 13 TeV proton-proton collisions using the trapping technique, extending a previous publication with 8 TeV data during LHC Run 1. A total of 222 kg of MoEDAL trapping detector samples was exposed in the forward region and analyzed by searching for induced persistent currents after passage through a superconducting magnetometer. Magnetic charges exceeding half the Dirac charge are excluded in all samples, and limits are placed for the first time on the production of magnetic monopoles in 13 TeV pp collisions. The search probes mass ranges previously inaccessible to collider experiments for up to five times the Dirac charge.
Characterization of irradiated APDs for picosecond time measurements
NASA Astrophysics Data System (ADS)
Centis Vignali, M.; Dalal, R.; Gallinaro, M.; Harrop, B.; Jain, G.; Lu, C.; McClish, M.; McDonald, K. T.; Moll, M.; Newcomer, F. M.; Ugobono, S. Otero; White, S.
2018-01-01
For their operation at the CERN High Luminosity Large Hadron Collider (HL-LHC), the ATLAS and CMS experiments are planning to implement dedicated systems to measure the time of arrival of minimum ionizing particles with an accuracy of about 30 ps. The timing detectors will be subjected to radiation levels corresponding to a 1-MeV neutron equivalent fluence (Φeq) of up to 10¹⁵ cm⁻² for the HL-LHC goal integrated luminosity of 3000 fb⁻¹. In this paper, deep-diffused Avalanche Photo Diodes (APDs) produced by Radiation Monitoring Devices are examined as candidate timing detectors for HL-LHC applications. These APDs are operated at 1.8 kV, resulting in a gain of up to 500. The timing performance of the detectors is evaluated using a pulsed laser. The effects of radiation damage on the current, signal amplitude, noise, and timing performance of the APDs are evaluated using detectors irradiated with neutrons up to Φeq = 10¹⁵ cm⁻².
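The link the abstract draws between gain, noise, and timing follows from the leading-order jitter formula: the time resolution from electronic noise is roughly the noise amplitude divided by the signal slope at the discriminator threshold. The amplitude, noise, and rise-time values below are invented for illustration, not measured APD data.

```python
# Leading-order timing jitter: sigma_t ~ sigma_noise / (dV/dt), assuming a
# linear leading edge so the slope is amplitude / rise time.
def timing_jitter_ps(noise_mv, amplitude_mv, rise_time_ps):
    """Jitter (ps) from voltage noise crossing a threshold on a linear edge."""
    slope_mv_per_ps = amplitude_mv / rise_time_ps
    return noise_mv / slope_mv_per_ps

# Same noise and rise time, different gain: a 4x larger signal amplitude
# improves the jitter by the same factor.
low_gain = timing_jitter_ps(noise_mv=2.0, amplitude_mv=50.0, rise_time_ps=1000.0)
high_gain = timing_jitter_ps(noise_mv=2.0, amplitude_mv=200.0, rise_time_ps=1000.0)
```

This is why internal gain of several hundred, and its degradation with fluence, is central to whether these APDs can hold the ~30 ps target after irradiation.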
Improved performance of the LHCb Outer Tracker in LHC Run 2
NASA Astrophysics Data System (ADS)
d'Argent, P.; Dufour, L.; Grillo, L.; de Vries, J. A.; Ukleja, A.; Aaij, R.; Archilli, F.; Bachmann, S.; Berninghoff, D.; Birnkraut, A.; Blouw, J.; De Cian, M.; Ciezarek, G.; Färber, C.; Demmer, M.; Dettori, F.; Gersabeck, E.; Grabowski, J.; Hulsbergen, W. D.; Khanji, B.; Kolpin, M.; Kucharczyk, M.; Malecki, B. P.; Merk, M.; Mulder, M.; Müller, J.; Mueller, V.; Pellegrino, A.; Pikies, M.; Rachwal, B.; Schmelzer, T.; Spaan, B.; Szczekowski, M.; van Tilburg, J.; Tolk, S.; Tuning, N.; Uwer, U.; Wishahi, J.; Witek, M.
2017-11-01
The LHCb Outer Tracker is a gaseous detector covering an area of 5 × 6 m² with 12 double layers of straw tubes. The performance of the detector is presented based on data from the LHC Run 2 period in 2015 and 2016. Occupancies and operational experience for data collected in pp, pPb and PbPb collisions are described. An updated study of ageing effects is presented, showing no signs of gain deterioration or other radiation damage effects. In addition, several improvements with respect to LHC Run 1 data taking are introduced: a novel real-time calibration of the time alignment of the detector and the alignment of the single monolayers composing the detector modules, improving the drift-time and position resolution of the detector by 20%. Finally, a potential use of the improved resolution for the timing of charged tracks is described, showing the possibility to identify low-momentum hadrons by their time-of-flight.
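The time-of-flight identification mentioned at the end rests on t = L/(βc) with β = p/E: at the same momentum, heavier hadrons are slower and arrive measurably later. The sketch below evaluates this with standard particle masses; the flight distance and momentum are illustrative values, not LHCb geometry constants.

```python
# Time-of-flight separation of hadron species at fixed momentum:
# t = L / (beta * c), beta = p / E, E = sqrt(p^2 + m^2) (natural units, GeV).
import math

C_M_PER_NS = 0.299792458                       # speed of light in m/ns
M_PION, M_KAON, M_PROTON = 0.13957, 0.49368, 0.93827  # masses in GeV

def flight_time_ns(p_gev, mass_gev, length_m):
    energy = math.sqrt(p_gev ** 2 + mass_gev ** 2)
    beta = p_gev / energy
    return length_m / (beta * C_M_PER_NS)

L = 9.0   # illustrative flight distance in metres
p = 1.0   # GeV: a low-momentum track, where the mass effect is largest
dt_pi_k = flight_time_ns(p, M_KAON, L) - flight_time_ns(p, M_PION, L)
dt_pi_p = flight_time_ns(p, M_PROTON, L) - flight_time_ns(p, M_PION, L)
# Kaons and protons trail pions by a few ns at 1 GeV; the separation
# shrinks rapidly as momentum grows, hence "low-momentum hadrons".
```

At these time differences, even a sub-nanosecond track-timing resolution cleanly separates the species, which is the opportunity the improved drift-time resolution opens up.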
Direct and indirect constraints on CP-violating Higgs-quark and Higgs-gluon interactions
Chien, Y. T.; Cirigliano, V.; Dekens, W.; ...
2016-02-01
Here we investigate direct and indirect constraints on the complete set of anomalous CP-violating Higgs couplings to quarks and gluons originating from dimension-6 operators, by studying their signatures at the LHC and in electric dipole moments (EDMs). We also show that existing uncertainties in hadronic and nuclear matrix elements have a significant impact on the interpretation of EDM experiments, and we quantify the improvements needed to fully exploit the power of EDM searches. Currently, the best bounds on the anomalous CP-violating Higgs interactions come from a combination of EDM measurements and the data from LHC Run 1. We argue that Higgs production cross section and branching ratio measurements at LHC Run 2 will not improve the constraints significantly. However, the bounds on the couplings scale roughly linearly with EDM limits, so future theoretical and experimental EDM developments can have a major impact in pinning down the interactions of the Higgs.
Not-so-well-tempered neutralino
NASA Astrophysics Data System (ADS)
Profumo, Stefano; Stefaniak, Tim; Stephenson-Haskins, Laurel
2017-09-01
Light electroweakinos, the neutral and charged fermionic supersymmetric partners of the standard model SU(2)×U(1) gauge bosons and of the two SU(2) Higgs doublets, are an important target for searches for new physics with the Large Hadron Collider (LHC). However, if the lightest neutralino is the dark matter, constraints from direct dark matter detection experiments rule out large swaths of the parameter space accessible to the LHC, including in large part the so-called "well-tempered" neutralinos. We focus on the minimal supersymmetric standard model (MSSM) and explore in detail which regions of parameter space are not excluded by null results from direct dark matter detection, assuming exclusive thermal production of neutralinos in the early universe, and illustrate the complementarity with current and future LHC searches for electroweak gauginos. We consider both bino-Higgsino and bino-wino "not-so-well-tempered" neutralinos, i.e. we include models where the lightest neutralino constitutes only part of the cosmological dark matter, with the consequent suppression of the constraints from direct and indirect dark matter searches.
Direct and indirect constraints on CP-violating Higgs-quark and Higgs-gluon interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chien, Y. T.; Cirigliano, V.; Dekens, W.
Here we investigate direct and indirect constraints on the complete set of anomalous CP-violating Higgs couplings to quarks and gluons originating from dimension-6 operators, by studying their signatures at the LHC and in electric dipole moments (EDMs). We also show that existing uncertainties in hadronic and nuclear matrix elements have a significant impact on the interpretation of EDM experiments, and we quantify the improvements needed to fully exploit the power of EDM searches. Currently, the best bounds on the anomalous CP-violating Higgs interactions come from a combination of EDM measurements and the data from LHC Run 1. We argue that Higgs production cross section and branching ratio measurements at LHC Run 2 will not improve the constraints significantly. However, the bounds on the couplings scale roughly linearly with EDM limits, so that future theoretical and experimental EDM developments can have a major impact in pinning down interactions of the Higgs.
Two-Layer 16 Tesla Cosθ Dipole Design for the FCC
Holik, Eddie Frank; Ambrosio, Giorgio; Apollinari, G.
2018-02-13
The Future Circular Collider (FCC) is a study aimed at exploring the possibility of reaching 100 TeV total collision energy, which would require 16 tesla dipoles. Upon the conclusion of the High Luminosity Upgrade, the US LHC Accelerator Upgrade Project in collaboration with CERN will have extensive Nb3Sn magnet fabrication experience. This experience includes robust Nb3Sn conductor and insulation schemes, 2-layer cos 2θ coil fabrication, and bladder-and-key structure and assembly. By making improvements and modifications to existing technology, the feasibility of a two-layer 16 tesla dipole is investigated. Preliminary designs indicate that fields up to 16.6 tesla are feasible with conductor grading while satisfying the HE-LHC and FCC specifications. Key challenges include accommodating high-aspect-ratio conductor, narrow wedge design, Nb3Sn conductor grading, and especially quench protection of a 16 tesla device.
Beyond the standard model of particle physics.
Virdee, T S
2016-08-28
The Large Hadron Collider (LHC) at CERN and its experiments were conceived to tackle open questions in particle physics. The mechanism of the generation of mass of fundamental particles has been elucidated with the discovery of the Higgs boson. It is clear that the standard model is not the final theory. The open questions still awaiting clues or answers, from the LHC and other experiments, include: What is the composition of dark matter and of dark energy? Why is there more matter than anti-matter? Are there more space dimensions than the familiar three? What is the path to the unification of all the fundamental forces? This talk will discuss the status of, and prospects for, the search for new particles, symmetries and forces in order to address the open questions. This article is part of the themed issue 'Unifying physics and technology in light of Maxwell's equations'. © 2016 The Author(s).
Two-Layer 16 T Cos θ Dipole Design for the FCC
Holik, Eddie Frank; Ambrosio, Giorgio; Apollinari, Giorgio
2018-02-22
Here, the Future Circular Collider (FCC) is a study aimed at exploring the possibility of reaching 100 TeV total collision energy, which would require 16 tesla dipoles. Upon the conclusion of the High Luminosity Upgrade, the US LHC Accelerator Upgrade Project in collaboration with CERN will have extensive Nb3Sn magnet fabrication experience. This experience includes robust Nb3Sn conductor and insulation schemes, 2-layer cos 2θ coil fabrication, and bladder-and-key structure and assembly. By making improvements and modifications to existing technology, the feasibility of a two-layer 16 tesla dipole is investigated. Preliminary designs indicate that fields up to 16.6 tesla are feasible with conductor grading while satisfying the HE-LHC and FCC specifications. Key challenges include accommodating high-aspect-ratio conductor, narrow wedge design, Nb3Sn conductor grading, and especially quench protection of a 16 tesla device.
Web Based Monitoring in the CMS Experiment at CERN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badgett, William; Borrello, Laura; Chakaberia, Irakli
2014-09-03
The Compact Muon Solenoid (CMS) is a large and complex general purpose experiment at the CERN Large Hadron Collider (LHC), built and maintained by many collaborators from around the world. Efficient operation of the detector requires widespread and timely access to a broad range of monitoring and status information. To this end the Web Based Monitoring (WBM) system was developed to present data to users located anywhere from many underlying heterogeneous sources, from real time messaging systems to relational databases. This system provides the power to combine and correlate data in both graphical and tabular formats of interest to the experimenters, including data such as beam conditions, luminosity, trigger rates, detector conditions, and many others, allowing for flexibility on the user side. This paper describes the WBM system architecture and how the system was used during the first major data-taking run of the LHC.
The production deployment of IPv6 on WLCG
NASA Astrophysics Data System (ADS)
Bernier, J.; Campana, S.; Chadwick, K.; Chudoba, J.; Dewhurst, A.; Eliáš, M.; Fayer, S.; Finnern, T.; Grigoras, C.; Hartmann, T.; Hoeft, B.; Idiculla, T.; Kelsey, D. P.; López Muñoz, F.; Macmahon, E.; Martelli, E.; Millar, A. P.; Nandakumar, R.; Ohrenberg, K.; Prelz, F.; Rand, D.; Sciabà, A.; Tigerstedt, U.; Voicu, R.; Walker, C. J.; Wildish, T.
2015-12-01
The world is rapidly running out of IPv4 addresses; the number of IPv6 end systems connected to the internet is increasing; WLCG and the LHC experiments may soon have access to worker nodes and/or virtual machines (VMs) possessing only an IPv6 routable address. The HEPiX IPv6 Working Group has been investigating, testing and planning for dual-stack services on WLCG for several years. Following feedback from our working group, many of the storage technologies in use on WLCG have recently been made IPv6-capable. This paper presents the IPv6 requirements, tests and plans of the LHC experiments together with the tests performed on the group's IPv6 test-bed. This is primarily aimed at IPv6-only worker nodes or VMs accessing several different implementations of a global dual-stack federated storage service. Finally the plans for deployment of production dual-stack WLCG services are presented.
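The dual-stack situation described above can be probed from any worker node with the standard library alone. The following sketch (illustrative, not part of the HEPiX test-bed tooling) groups a service's resolved addresses by IP family; an IPv6-only node can reach only services that publish AF_INET6 entries:

```python
import socket

def routable_addresses(host, port=443):
    """Resolve a host and group its addresses by IP family.

    A dual-stack service should return both AF_INET (IPv4) and
    AF_INET6 (IPv6) entries; an IPv6-only worker node or VM can
    use only the latter.
    """
    families = {"ipv4": [], "ipv6": []}
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
            host, port, proto=socket.IPPROTO_TCP):
        if family == socket.AF_INET:
            families["ipv4"].append(sockaddr[0])
        elif family == socket.AF_INET6:
            families["ipv6"].append(sockaddr[0])
    return families
```

A storage endpoint whose `ipv6` list comes back empty would be invisible to the IPv6-only clients the paper is concerned with.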
Evolution of the ATLAS PanDA workload management system for exascale computational science
NASA Astrophysics Data System (ADS)
Maeno, T.; De, K.; Klimentov, A.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.; Yu, D.; Atlas Collaboration
2014-06-01
An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated at a very large scale the value of automated dynamic brokering of diverse workloads across distributed computing resources. The next generation of PanDA will allow other data-intensive sciences and a wider exascale community employing a variety of computing platforms to benefit from ATLAS' experience and proven tools.
Development of a timing detector for the TOTEM experiment at the LHC
NASA Astrophysics Data System (ADS)
Minafra, Nicola
2017-09-01
The upgrade program of the TOTEM experiment will include the installation of timing detectors inside vertical Roman Pots to allow the reconstruction of the longitudinal vertex position in the presence of event pile-up in high-β* dedicated runs. The small available space inside the Roman Pot, optimized for high-intensity LHC runs, and the required time precision led to the study of a solution using single crystal CVD diamonds. The sensors are read out using fast low-noise front-end electronics developed by the TOTEM Collaboration, achieving a signal-to-noise ratio larger than 20 for MIPs. A prototype was designed, manufactured and tested during a test beam campaign, proving a time precision below 100 ps and an efficiency above 99%. The geometry of the detector has been designed to guarantee uniform occupancy in the expected running conditions while keeping the number of channels below 12. The read-out electronics was developed during an extensive campaign of beam tests dedicated first to the characterization of existing solutions and then to the optimization of the electronics designed within the Collaboration. The detectors were designed to be read out using the SAMPIC chip, a fast sampler designed specifically for picosecond timing measurements with high-rate capabilities; later, a modified version was realized using the HPTDC to achieve the higher trigger rates required for the CT-PPS experiment. The first set of prototypes was successfully installed and tested in the LHC in November 2015; moreover, the detectors modified for CT-PPS have been successfully part of the global CMS data taking since October 2016.
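The quoted figures (S/N > 20, precision below 100 ps) are consistent with the standard leading-edge jitter estimate, in which the timing jitter scales as the signal rise time divided by the signal-to-noise ratio. A minimal sketch; the 2 ns rise time used below is an illustrative assumption, not a number from the abstract:

```python
def timing_resolution(rise_time_ps, snr):
    """Leading-edge discrimination jitter estimate:
    sigma_t ~ t_rise / (S/N)."""
    return rise_time_ps / snr

# With an assumed ~2 ns rise time and S/N = 20, the jitter term
# alone is of order 100 ps.
```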
A New Generation of Networks and Computing Models for High Energy Physics in the LHC Era
NASA Astrophysics Data System (ADS)
Newman, H.
2011-12-01
Wide area networks of increasing end-to-end capacity and capability are vital for every phase of high energy physicists' work. Our bandwidth usage and the typical capacity of the major national backbones and intercontinental links used by our field have progressed by a factor of several hundred times over the past decade. With the opening of the LHC era in 2009-10 and the prospects for discoveries in the upcoming LHC run, the outlook is for a continuation or an acceleration of these trends using next generation networks over the next few years. Responding to the need to rapidly distribute and access datasets of tens to hundreds of terabytes drawn from multi-petabyte data stores, high energy physicists working with network engineers and computer scientists are learning to use long range networks effectively on an increasing scale, and aggregate flows reaching the 100 Gbps range have been observed. The progress of the LHC, and the unprecedented ability of the experiments to produce results rapidly using worldwide distributed data processing and analysis, has sparked major, emerging changes in the LHC Computing Models, which are moving from the classic hierarchical model designed a decade ago to more agile peer-to-peer-like models that make more effective use of the resources at Tier2 and Tier3 sites located throughout the world. A new requirements working group has gauged the needs of Tier2 centers, and charged the LHCOPN group that runs the network interconnecting the LHC Tier1s with designing a new architecture interconnecting the Tier2s. As seen from the perspective of ICFA's Standing Committee on Inter-regional Connectivity (SCIC), the Digital Divide that separates physicists in several regions of the developing world from those in the developed world remains acute, although many countries have made major advances through the rapid installation of modern network infrastructures.
A case in point is Africa, where a new round of undersea cables promises to transform the continent.
R-Axion: A New LHC Physics Signature Involving Muon Pairs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goh, Hock-Seng (UC Berkeley / LBL Berkeley); Ibe, Masahiro
2012-04-12
In a class of models with gauge-mediated supersymmetry breaking, the existence of a light pseudoscalar particle, the R-axion, with a mass in the hundreds-of-MeV range is predicted. The striking feature of such a light R-axion is that it mainly decays into a pair of muons and leaves a displaced vertex inside detectors once it is produced. In this talk, we show how we can search for the R-axion at the coming LHC experiments. One main goal of the LHC experiments is discovering supersymmetry, which has long been anticipated to solve the hierarchy problem. Once the supersymmetric standard model (SSM) is confirmed experimentally, the next question is how the supersymmetry is broken and how the effects of symmetry breaking are mediated to the SSM sector. In most cases, such investigations of 'beyond the SSM physics' rely on arguments based on extrapolations of the observed supersymmetry mass parameters to higher energies. However, there is one class of models of supersymmetry breaking where we can get a direct glimpse of the structure of the hidden sector with the help of the R-symmetry. The R-symmetry plays an important role in rather generic models of spontaneous supersymmetry breaking. At the same time, however, it must be broken in some way in order for the gauginos in the SSM sector to have non-vanishing masses. One possibility for gaugino mass generation is to consider models where the gaugino masses are generated as a result of the explicit breaking of the R-symmetries. Unfortunately, in those models, the R-symmetry leaves little trace for the collider experiments, since the mass of the R-axion is typically heavy and beyond the reach of the LHC experiments. In this talk, instead, we consider a class of models with gauge mediation where the R-symmetry in the hidden/messenger sectors is exact in the limit of the infinite reduced Planck scale, i.e. M_PL → ∞.
In this case, the gaugino masses are generated only after the R-symmetry is broken spontaneously. We also assume that the R-symmetry is respected by the SSM sector as well as by the origin of the higgsino mass μ and the Higgs mass mixing Bμ at the classical level. We call this scenario the minimal R-symmetry breaking scenario.
NASA Astrophysics Data System (ADS)
Avolio, G.; D'Ascanio, M.; Lehmann-Miotto, G.; Soloviev, I.
2017-10-01
The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider at CERN is composed of a large number of distributed hardware and software components (about 3000 computers and more than 25000 applications) which, in a coordinated manner, provide the data-taking functionality of the overall system. During data taking runs, a huge flow of operational data is produced in order to constantly monitor the system and allow proper detection of anomalies or misbehaviours. In the ATLAS trigger and data acquisition system, operational data are archived and made available to applications by the P-BEAST (Persistent Back-End for the Atlas Information System of TDAQ) service, implementing a custom time-series database. The possibility to efficiently visualize both real-time and historical operational data is a great asset facilitating both online identification of problems and post-mortem analysis. This paper will present a web-based solution developed to achieve such a goal: the solution leverages the flexibility of the P-BEAST archiver to retrieve data, and exploits the versatility of the Grafana dashboard builder to offer a very rich user experience. Additionally, particular attention will be given to the way some technical challenges (like the efficient visualization of a huge amount of data and the integration of the P-BEAST data source in Grafana) have been faced and solved.
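One of the challenges named above, efficiently visualizing a huge amount of time-series data, is commonly addressed by downsampling a series server-side before sending it to the dashboard. The sketch below is a generic bucket-averaging illustration; it uses none of the actual P-BEAST or Grafana APIs, and all names are hypothetical:

```python
def downsample(samples, max_points):
    """Bucket-average a list of (timestamp, value) pairs so that at
    most max_points points are shipped to the dashboard, preserving
    the overall shape of the series."""
    if len(samples) <= max_points:
        return list(samples)
    bucket = len(samples) / max_points
    out = []
    for i in range(max_points):
        chunk = samples[int(i * bucket):int((i + 1) * bucket)]
        ts = sum(t for t, _ in chunk) / len(chunk)
        val = sum(v for _, v in chunk) / len(chunk)
        out.append((ts, val))
    return out
```

A query spanning months of operational data can then return a fixed-size payload regardless of the underlying sample rate.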
The CMSSM and NUHM1 after LHC Run 1
Buchmueller, O.; De Roeck, A.; Cavanaugh, R.; ...
2014-06-13
We analyze the impact of data from the full Run 1 of the LHC at 7 and 8 TeV on the CMSSM with μ > 0 and μ < 0 and the NUHM1 with μ > 0, incorporating the constraints imposed by other experiments such as precision electroweak measurements, flavour measurements, the cosmological density of cold dark matter and the direct search for the scattering of dark matter particles in the LUX experiment. We use the following results from the LHC experiments: ATLAS searches for events with missing transverse energy (E_T^miss) accompanied by jets with the full 7 and 8 TeV data, the ATLAS and CMS measurements of the mass of the Higgs boson, the CMS searches for heavy neutral Higgs bosons and a combination of the LHCb and CMS measurements of BR(B_s → μ+μ−) and BR(B_d → μ+μ−). Our results are based on samplings of the parameter spaces of the CMSSM for both μ > 0 and μ < 0 and of the NUHM1 for μ > 0 with 6.8×10^6, 6.2×10^6 and 1.6×10^7 points, respectively, obtained using the MultiNest tool. The impact of the Higgs-mass constraint is assessed using FeynHiggs 2.10.0, which provides an improved prediction for the masses of the MSSM Higgs bosons in the region of heavy squark masses. It yields in general larger values of M_h than previous versions of FeynHiggs, reducing the pressure on the CMSSM and NUHM1. We find that the global χ² functions for the supersymmetric models vary slowly over most of the parameter spaces allowed by the Higgs-mass and E_T^miss searches, with best-fit values that are comparable to the χ²/dof for the best Standard Model fit. As a result, we provide 95% CL lower limits on the masses of various sparticles and assess the prospects for observing them during Run 2 of the LHC.
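The 95% CL lower limits quoted above come from the behaviour of the global χ² function across the sampled parameter space. A deliberately schematic one-dimensional sketch of the procedure (the real analysis profiles a multi-parameter χ² over millions of MultiNest points; the Δχ² = 3.84 threshold is a common one-parameter convention and an assumption here, as are all names and numbers):

```python
def lower_limit_95(masses, chi2, delta=3.84):
    """Scan a chi-squared profile in increasing mass and return the
    first mass at which chi^2 comes within `delta` of its global
    minimum (one-parameter 95% CL convention).  Masses below that
    point are disfavoured at 95% CL."""
    chi2_min = min(chi2)
    for m, c in sorted(zip(masses, chi2)):
        if c <= chi2_min + delta:
            return m
    return None
```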
Taking Energy to the Physics Classroom from the Large Hadron Collider at CERN
ERIC Educational Resources Information Center
Cid, Xabier; Cid, Ramon
2009-01-01
In 2008, the greatest experiment in history began. When in full operation, the Large Hadron Collider (LHC) at CERN will generate the greatest amount of information that has ever been produced in an experiment before. It will also reveal some of the most fundamental secrets of nature. Despite the enormous amount of information available on this…
Forward and small-x QCD physics results from CMS experiment at LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerci, Deniz Sunar, E-mail: deniz.sunar.cerci@cern.ch
2016-03-25
The Compact Muon Solenoid (CMS) is one of the two large, multi-purpose experiments at the Large Hadron Collider (LHC) at CERN. During Run 1 a large pp collision dataset was collected, and the CMS collaboration has explored measurements that shed light on a new era. Forward and small-x quantum chromodynamics (QCD) physics measurements with the CMS experiment cover a wide range of physics subjects. Some of the highlights, in terms of testing very low-x QCD, underlying-event and multiple-interaction characteristics, photon-mediated processes, jets with large rapidity separation at high pseudo-rapidities, and the inelastic proton-proton cross section dominated by diffractive interactions, are presented. Results are compared to Monte Carlo (MC) models with different parameter tunes for the description of the underlying event and to perturbative QCD calculations. The prominent role of multi-parton interactions has been confirmed in the semihard sector, but no clear deviation from standard DGLAP parton evolution due to BFKL effects has been observed. An outlook to the prospects at 13 TeV is given.
Complementarity of dark matter searches in the phenomenological MSSM
Cahill-Rowley, Matthew; Cotta, Randy; Drlica-Wagner, Alex; ...
2015-03-11
As is well known, the search for and eventual identification of dark matter in supersymmetry requires a simultaneous, multipronged approach with important roles played by the LHC as well as both direct and indirect dark matter detection experiments. We examine the capabilities of these approaches in the 19-parameter phenomenological MSSM which provides a general framework for complementarity studies of neutralino dark matter. We summarize the sensitivity of dark matter searches at the 7 and 8 (and eventually 14) TeV LHC, combined with those by Fermi, CTA, IceCube/DeepCore, COUPP, LZ and XENON. The strengths and weaknesses of each of these techniques are examined and contrasted and their interdependent roles in covering the model parameter space are discussed in detail. We find that these approaches explore orthogonal territory and that advances in each are necessary to cover the supersymmetric weakly interacting massive particle parameter space. We also find that different experiments have widely varying sensitivities to the various dark matter annihilation mechanisms, some of which would be completely excluded by null results from these experiments.
Searching for new physics at the frontiers with lattice quantum chromodynamics.
Van de Water, Ruth S
2012-07-01
Numerical lattice-quantum chromodynamics (QCD) simulations, when combined with experimental measurements, allow the determination of fundamental parameters of the particle-physics Standard Model and enable searches for physics beyond the Standard Model. We present the current status of lattice-QCD weak matrix element calculations needed to obtain the elements and phase of the Cabibbo-Kobayashi-Maskawa (CKM) matrix and to test the Standard Model in the quark-flavor sector. We then discuss evidence that may hint at the presence of new physics beyond the Standard Model CKM framework. Finally, we discuss two opportunities where we expect lattice QCD to play a pivotal role in searching for, and possibly discovering, new physics at upcoming high-intensity experiments: rare decays and the muon anomalous magnetic moment. The next several years may witness the discovery of new elementary particles at the Large Hadron Collider (LHC). The interplay between lattice QCD, high-energy experiments at the LHC, and high-intensity experiments will be needed to determine the underlying structure of whatever physics beyond the Standard Model is realized in nature. © 2012 New York Academy of Sciences.
First LHCb measurement with data from the LHC Run 2
NASA Astrophysics Data System (ADS)
Anderlini, L.; Amerio, S.
2017-01-01
LHCb has recently introduced a novel real-time detector alignment and calibration strategy for Run 2. Data collected at the start of each LHC fill are processed in a few minutes and used to update the alignment, while the calibration constants are evaluated for each run of data taking. An increase in the CPU and disk capacity of the event filter farm, combined with improvements to the reconstruction software, allows for efficient, exclusive selections already in the first stage of the High Level Trigger (HLT1), while the second stage, HLT2, performs complete, offline-quality event reconstruction. In Run 2, LHCb will collect the largest data sample of charm mesons ever recorded. Novel data processing and analysis techniques are required to maximise the physics potential of this data sample with the available computing resources, taking into account data preservation constraints. In this write-up, we describe the full analysis chain used to obtain important results analysing the data collected in proton-proton collisions in 2015, such as the J/ψ and open charm production cross-sections, and consider the further steps required to obtain real-time results after the LHCb upgrade.
LEMON - LHC Era Monitoring for Large-Scale Infrastructures
NASA Astrophysics Data System (ADS)
Babik, Marian; Fedorko, Ivan; Hook, Nicholas; Lansdale Thomas, Hector; Lenkes, Daniel; Siket, Miroslav; Waldron, Denis
2011-12-01
At the present time computer centres are facing a massive rise in virtualization and cloud computing, as these solutions bring advantages to service providers and consolidate the computer centre resources. However, as a result the monitoring complexity is increasing. Computer centre management requires not only monitoring servers, network equipment and associated software, but also collecting additional environment and facilities data (e.g. temperature, power consumption, cooling efficiency, etc.) to have a good overview of the infrastructure performance. The LHC Era Monitoring (Lemon) system addresses these requirements for a very large scale infrastructure. The Lemon agent, which collects data on every client and forwards the samples to the central measurement repository, provides a flexible interface that allows rapid development of new sensors. The system also allows reporting on behalf of remote devices such as switches and power supplies. Online and historical data can be visualized via a web-based interface or retrieved via command-line tools. The Lemon Alarm System component can be used to notify the operator about error situations. In this article, an overview of Lemon monitoring is provided together with a description of the CERN LEMON production instance. No direct comparison is made with other monitoring tools.
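The agent-and-sensor model described above can be sketched in a few lines: sensors expose a uniform sampling interface, and the agent polls them and forwards named samples to a repository. This is an illustrative sketch only, not Lemon's actual API; all class and field names are hypothetical:

```python
import time

class Sensor:
    """Uniform sensor interface: each sensor returns one named
    sample when polled by the agent."""
    name = "base"

    def sample(self):
        raise NotImplementedError

class LoadSensor(Sensor):
    name = "load1"

    def sample(self):
        # A real sensor would read e.g. /proc/loadavg; a fixed
        # value stands in here.
        return 0.42

def collect(sensors, repository, timestamp=None):
    """Poll every sensor once and append timestamped samples to the
    (here in-memory) measurement repository."""
    ts = timestamp if timestamp is not None else time.time()
    for s in sensors:
        repository.append({"ts": ts, "metric": s.name, "value": s.sample()})
    return repository
```

Adding a new metric then amounts to writing one small `Sensor` subclass, which is the flexibility the abstract attributes to the agent interface.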
File-Based Data Flow in the CMS Filter Farm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andre, J.M.; et al.
2015-12-23
During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small “documents” using the JSON encoding, by either services in the flow of the HLT execution (for rates etc.) or watchdog processes. These “files” can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.
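The small JSON bookkeeping documents mentioned above lend themselves to simple aggregation: per-process counters can be merged by summation downstream. A minimal sketch of that idea; the field names are hypothetical and do not reflect CMS's actual document schema:

```python
import json

def merge_documents(docs):
    """Aggregate small per-process bookkeeping documents (JSON
    strings holding integer counters) into one summary document,
    summing each counter across producers."""
    totals = {}
    for doc in docs:
        for key, value in json.loads(doc).items():
            totals[key] = totals.get(key, 0) + value
    return totals

# Two hypothetical per-process documents, e.g. one per HLT instance.
docs = [json.dumps({"events_in": 100, "events_out": 40}),
        json.dumps({"events_in": 120, "events_out": 55})]
```

Because each producer writes an independent document, the aggregator needs no coupling to the HLT processes themselves, which is the decoupling the file-based design is after.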
Managing the CMS Data and Monte Carlo Processing during LHC Run 2
NASA Astrophysics Data System (ADS)
Wissing, C.
2017-10-01
In order to cope with the challenges expected during LHC Run 2, CMS put a number of enhancements into the main software packages and the tools used for centrally managed processing. In the presentation we will highlight these improvements, which allow CMS to deal with the increased trigger output rate, the increased pileup and the evolution in computing technology. The overall system aims at high operational flexibility and largely automated procedures. The tight coupling of workflow classes to types of sites has been drastically relaxed. Reliable and high-performing networking between most of the computing sites and the successful deployment of a data federation allow the execution of workflows using remote data access. That required the development of a largely automated system to assign workflows and to handle the necessary pre-staging of data. Another step towards flexibility has been the introduction of one large global HTCondor pool for all types of processing workflows and analysis jobs. Besides classical Grid resources, some opportunistic resources as well as Cloud resources have been integrated into that pool, which gives access to more than 200k CPU cores.
The Resource Manager of the ATLAS Trigger and Data Acquisition System
NASA Astrophysics Data System (ADS)
Aleksandrov, I.; Avolio, G.; Lehmann Miotto, G.; Soloviev, I.
2017-10-01
The Resource Manager is one of the core components of the Data Acquisition system of the ATLAS experiment at the LHC. The Resource Manager marshals the right of applications to access resources which may exist in multiple but limited copies, in order to avoid conflicts due to program faults or operator errors. The access to resources is managed in a manner similar to what a lock manager would do in other software systems. All the available resources and their association to software processes are described in the Data Acquisition configuration database. The Resource Manager is queried about the availability of resources every time an application needs to be started. The Resource Manager’s design is based on a client-server model, hence it consists of two components: the Resource Manager “server” application and the “client” shared library. The Resource Manager server implements all the needed functionalities, while the Resource Manager client library provides remote access to the “server” (i.e., to allocate and free resources, to query about the status of resources). During the LHC’s Long Shutdown period, the Resource Manager’s requirements have been reviewed in light of the experience gained during the LHC’s Run 1. As a consequence, the Resource Manager has undergone a full re-design and re-implementation cycle with the result of a reduction of the code base by 40% with respect to the previous implementation. This contribution will focus on the way the design and the implementation of the Resource Manager could leverage the new features available in the C++11 standard, and how the introduction of external libraries (like Boost multi-index containers) led to a more maintainable system. Additionally, particular attention will be given to the technical solutions adopted to ensure the Resource Manager can sustain the typical request rates of the Data Acquisition system, which is about 30000 requests in a time window of a few seconds coming from more than 1000 clients.
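The core allocation semantics, resources existing in a fixed number of copies and requests failing cleanly instead of conflicting, can be sketched compactly. This is a toy Python illustration, not the ATLAS implementation (which, as noted, is C++); the class and resource names are hypothetical:

```python
class ResourceManager:
    """Toy lock-manager-style allocator: each resource exists in a
    fixed number of copies (as a real system would read from a
    configuration database), and a request for an exhausted
    resource is refused rather than causing a conflict."""

    def __init__(self, copies):
        self.copies = dict(copies)   # resource name -> total copies
        self.holders = {}            # resource name -> set of clients

    def allocate(self, resource, client):
        held = self.holders.setdefault(resource, set())
        if client in held:
            return True              # client already holds a copy
        if len(held) >= self.copies.get(resource, 0):
            return False             # all copies are busy
        held.add(client)
        return True

    def free(self, resource, client):
        self.holders.get(resource, set()).discard(client)
```

A server wrapping such a class behind a remote interface would give the client/server split the abstract describes.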
CERN experience and strategy for the maintenance of cryogenic plants and distribution systems
NASA Astrophysics Data System (ADS)
Serio, L.; Bremer, J.; Claudet, S.; Delikaris, D.; Ferlin, G.; Pezzetti, M.; Pirotte, O.; Tavian, L.; Wagner, U.
2015-12-01
CERN operates and maintains the world's largest cryogenic infrastructure, ranging from ageing installations feeding detectors, test facilities and general services to the state-of-the-art cryogenic system serving the flagship LHC machine complex. After several years of exploitation of a wide range of cryogenic installations, and in particular following the last two years' major shutdown to maintain and consolidate the LHC machine, we have analysed and reviewed the maintenance activities in order to implement an efficient and reliable exploitation of the installations. We report the results, statistics and lessons learned from the maintenance activities performed, in particular the required consolidations and major overhauls, and the organization, management and methodologies implemented.
First Results of an “Artificial Retina” Processor Prototype
Cenci, Riccardo; Bedeschi, Franco; Marino, Pietro; ...
2016-11-15
We report on the performance of a specialized processor capable of reconstructing charged-particle tracks in a realistic LHC silicon tracker detector, at the same speed as the readout and with sub-microsecond latency. The processor is based on an innovative pattern-recognition algorithm, called the "artificial retina algorithm", inspired by the vision system of mammals. A prototype of the processor has been designed, simulated, and implemented on Tel62 boards equipped with high-bandwidth Altera Stratix III FPGA devices. The prototype is the first step towards a real-time track reconstruction device aimed at processing complex events of high-luminosity LHC experiments at the 40 MHz crossing rate.
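The core idea of the retina algorithm can be sketched in a few lines: a grid of track hypotheses (here straight lines y = m·x + q) each accumulates a Gaussian-weighted response from every hit, and local maxima of the response map identify tracks. This is an illustrative toy, not the FPGA implementation; the grid ranges, the width `sigma`, and the single-track event are invented for the example.

```python
import numpy as np

def retina_response(hits, m_grid, q_grid, sigma=0.05):
    """Response of each (slope m, intercept q) cell to a set of hits.
    Each hit contributes exp(-d^2 / 2 sigma^2), where d is its
    residual from the track hypothesis y = m*x + q."""
    m = m_grid[:, None, None]
    q = q_grid[None, :, None]
    x = hits[None, None, :, 0]
    y = hits[None, None, :, 1]
    d = y - (m * x + q)                    # residual, shape (Nm, Nq, Nhits)
    return np.exp(-d**2 / (2 * sigma**2)).sum(axis=2)

# Toy event: hits from a single track y = 0.5*x + 0.2 on 5 layers.
x = np.linspace(0.1, 1.0, 5)
hits = np.stack([x, 0.5 * x + 0.2], axis=1)

m_grid = np.linspace(-1, 1, 41)
q_grid = np.linspace(-1, 1, 41)
R = retina_response(hits, m_grid, q_grid)
im, iq = np.unravel_index(R.argmax(), R.shape)
print(m_grid[im], q_grid[iq])              # peak near (0.5, 0.2)
```

Because every cell's response is independent, the computation maps naturally onto massively parallel hardware such as FPGAs, which is what gives the approach its sub-microsecond latency.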
Parton distributions in the LHC era
NASA Astrophysics Data System (ADS)
Del Debbio, Luigi
2018-03-01
Analyses of LHC (and other!) experiments require robust and statistically accurate determinations of the structure of the proton, encoded in the parton distribution functions (PDFs). The standard description of hadronic processes relies on factorization theorems, which allow a separation of process-dependent short-distance physics from the universal long-distance structure of the proton. Traditionally the PDFs are obtained from fits to experimental data. However, understanding the long-distance properties of hadrons is a nonperturbative problem, and lattice QCD can play a role in providing useful results from first principles. In this talk we compare the different approaches used to determine PDFs, and try to assess the impact of existing, and future, lattice calculations.
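The factorization structure described above, in which a hadronic cross section is the convolution of a long-distance PDF with a short-distance partonic cross section, can be illustrated numerically. Both functional forms below are invented for illustration (real PDFs come from global fits or lattice calculations), and the leading-order formula σ = ∫ dx f(x) σ̂(x) is the simplest single-parton case.

```python
import numpy as np

def f(x):
    """Toy valence-like parton distribution, f(x) ~ x^{-1/2}(1-x)^3."""
    return x**-0.5 * (1 - x)**3

def sigma_hat(x):
    """Toy partonic cross section (illustrative)."""
    return x**2

# sigma = integral over momentum fraction x of f(x) * sigma_hat(x);
# the integrand x^{3/2}(1-x)^3 equals the Beta function B(5/2, 4).
x = np.linspace(1e-4, 1.0, 100001)
sigma = np.trapz(f(x) * sigma_hat(x), x)
print(sigma)                       # ~ 0.0277 = B(5/2, 4)
```

The process dependence lives entirely in `sigma_hat`, while `f` is universal, which is the practical content of the factorization theorems mentioned above.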
Bottom-quark fusion processes at the LHC for probing Z' models and B -meson decay anomalies
NASA Astrophysics Data System (ADS)
Abdullah, Mohammad; Dalchenko, Mykhailo; Dutta, Bhaskar; Eusebi, Ricardo; Huang, Peisi; Kamon, Teruki; Rathjens, Denis; Thompson, Adrian
2018-04-01
We investigate models of a heavy neutral gauge boson Z' coupling mostly to third-generation quarks and second-generation leptons. In this scenario, bottom quarks arising from gluon splitting can fuse into a Z', allowing the LHC to probe it. In the generic framework presented, the anomalies in B-meson decays reported by the LHCb experiment imply a flavor-violating b-s coupling of the featured Z', constraining the lowest possible production cross section. A novel approach searching for a Z'(→ μμ) produced in association with at least one bottom-tagged jet can probe regions of model parameter space to which existing analyses are not sensitive.
New LUX result constrains exotic quark mediators with the vector dark matter
NASA Astrophysics Data System (ADS)
Chen, Chuan-Ren; Li, Ming-Jie
2016-12-01
The scenario of a compressed mass spectrum between a heavy quark and dark matter is a challenge for LHC searches. However, the elastic scattering cross-section between dark matter and nuclei in dark matter direct detection experiments is enhanced when the heavy-quark and dark-matter masses are nearly degenerate. In this paper, we illustrate such a scenario with vector dark matter, using the latest result from LUX 2016. The mass constraints on heavy quarks can be more stringent than the current limits from the LHC, unless the coupling strength is very small. However, a compressed mass spectrum with the allowed tiny coupling strength makes the decay lifetime of the heavy quarks longer than the timescale of QCD hadronization.
Aad, G; Abbott, B; Abdallah, J; Abdelalim, A A; Abdesselam, A; Abdinov, O; Abi, B; Abolins, M; Abramowicz, H; Abreu, H; Acerbi, E; Acharya, B S; Ackers, M; Adams, D L; Addy, T N; Adelman, J; Aderholz, M; Adomeit, S; Adorisio, C; Adragna, P; Adye, T; Aefsky, S; Aguilar-Saavedra, J A; Aharrouche, M; Ahlen, S P; Ahles, F; Ahmad, A; Ahmed, H; Ahsan, M; Aielli, G; Akdogan, T; Akesson, T P A; Akimoto, G; Akimov, A V; Aktas, A; Alam, M S; Alam, M A; Albrand, S; Aleksa, M; Aleksandrov, I N; Aleppo, M; Alessandria, F; Alexa, C; Alexander, G; Alexandre, G; Alexopoulos, T; Alhroob, M; Aliev, M; Alimonti, G; Alison, J; Aliyev, M; Allport, P P; Allwood-Spiers, S E; Almond, J; Aloisio, A; Alon, R; Alonso, A; Alonso, J; Alviggi, M G; Amako, K; Amaral, P; Ambrosio, G; Amelung, C; Ammosov, V V; Amorim, A; Amorós, G; Amram, N; Anastopoulos, C; Andeen, T; Anders, C F; Anderson, K J; Andreazza, A; Andrei, V; Andrieux, M-L; Anduaga, X S; Angerami, A; Anghinolfi, F; Anjos, N; Annovi, A; Antonaki, A; Antonelli, M; Antonelli, S; Antos, J; Antunovic, B; Anulli, F; Aoun, S; Arabidze, G; Aracena, I; Arai, Y; Arce, A T H; Archambault, J P; Arfaoui, S; Arguin, J-F; Argyropoulos, T; Arik, E; Arik, M; Armbruster, A J; Arms, K E; Armstrong, S R; Arnaez, O; Arnault, C; Artamonov, A; Arutinov, D; Asai, M; Asai, S; Asfandiyarov, R; Ask, S; Asman, B; Asner, D; Asquith, L; Assamagan, K; Astbury, A; Astvatsatourov, A; Atoian, G; Aubert, B; Auerbach, B; Auge, E; Augsten, K; Aurousseau, M; Austin, N; Avolio, G; Avramidou, R; Axen, D; Ay, C; Azuelos, G; Azuma, Y; Baak, M A; Baccaglioni, G; Bacci, C; Bach, A M; Bachacou, H; Bachas, K; Bachy, G; Backes, M; Badescu, E; Bagnaia, P; Bai, Y; Bailey, D C; Bain, T; Baines, J T; Baker, O K; Baker, M D; Baker, S; Baltasar Dos Santos Pedrosa, F; Banas, E; Banerjee, P; Banerjee, S; Banfi, D; Bangert, A; Bansal, V; Baranov, S P; Baranov, S; Barashkou, A; Barbaro Galtieri, A; Barber, T; Barberio, E L; Barberis, D; Barbero, M; Bardin, D Y; Barillari, T; Barisonzi, M; 
Barklow, T; Barlow, N; Barnett, B M; Barnett, R M; Baroncelli, A; Barone, M; Barr, A J; Barreiro, F; Barreiro Guimarães da Costa, J; Barrillon, P; Bartoldus, R; Bartsch, D; Bates, R L; Batkova, L; Batley, J R; Battaglia, A; Battistin, M; Battistoni, G; Bauer, F; Bawa, H S; Bazalova, M; Beare, B; Beau, T; Beauchemin, P H; Beccherle, R; Bechtle, P; Beck, G A; Beck, H P; Beckingham, M; Becks, K H; Beddall, A J; Beddall, A; Bednyakov, V A; Bee, C; Begel, M; Behar Harpaz, S; Behera, P K; Beimforde, M; Belanger-Champagne, C; Belhorma, B; Bell, P J; Bell, W H; Bella, G; Bellagamba, L; Bellina, F; Bellomo, G; Bellomo, M; Belloni, A; Belotskiy, K; Beltramello, O; Ben Ami, S; Benary, O; Benchekroun, D; Benchouk, C; Bendel, M; Benedict, B H; Benekos, N; Benhammou, Y; Benincasa, G P; Benjamin, D P; Benoit, M; Bensinger, J R; Benslama, K; Bentvelsen, S; Beretta, M; Berge, D; Bergeaas Kuutmann, E; Berger, N; Berghaus, F; Berglund, E; Beringer, J; Bernardet, K; Bernat, P; Bernhard, R; Bernius, C; Berry, T; Bertin, A; Bertinelli, F; Bertolucci, F; Bertolucci, S; Besana, M I; Besson, N; Bethke, S; Bhimji, W; Bianchi, R M; Bianco, M; Biebel, O; Biesiada, J; Biglietti, M; Bilokon, H; Binder, M; Bindi, M; Binet, S; Bingul, A; Bini, C; Biscarat, C; Bischof, R; Bitenc, U; Black, K M; Blair, R E; Blanchard, J-B; Blanchot, G; Blocker, C; Blocki, J; Blondel, A; Blum, W; Blumenschein, U; Boaretto, C; Bobbink, G J; Bocci, A; Bocian, D; Bock, R; Boddy, C R; Boehler, M; Boek, J; Boelaert, N; Böser, S; Bogaerts, J A; Bogouch, A; Bohm, C; Bohm, J; Boisvert, V; Bold, T; Boldea, V; Bondarenko, V G; Bondioli, M; Boonekamp, M; Boorman, G; Booth, C N; Booth, P; Booth, J R A; Bordoni, S; Borer, C; Borisov, A; Borissov, G; Borjanovic, I; Borroni, S; Bos, K; Boscherini, D; Bosman, M; Boterenbrood, H; Botterill, D; Bouchami, J; Boudreau, J; Bouhova-Thacker, E V; Boulahouache, C; Bourdarios, C; Boveia, A; Boyd, J; Boyko, I R; Bozhko, N I; Bozovic-Jelisavcic, I; Braccini, S; Bracinik, J; Braem, A; 
Brambilla, E; Branchini, P; Brandenburg, G W; Brandt, A; Brandt, G; Brandt, O; Bratzler, U; Brau, B; Brau, J E; Braun, H M; Brelier, B; Bremer, J; Brenner, R; Bressler, S; Breton, D; Brett, N D; Bright-Thomas, P G; Britton, D; Brochu, F M; Brock, I; Brock, R; Brodbeck, T J; Brodet, E; Broggi, F; Bromberg, C; Brooijmans, G; Brooks, W K; Brown, G; Brubaker, E; Bruckman de Renstrom, P A; Bruncko, D; Bruneliere, R; Brunet, S; Bruni, A; Bruni, G; Bruschi, M; Buanes, T; Bucci, F; Buchanan, J; Buchanan, N J; Buchholz, P; Buckingham, R M; Buckley, A G; Budagov, I A; Budick, B; Büscher, V; Bugge, L; Buira-Clark, D; Buis, E J; Bulekov, O; Bunse, M; Buran, T; Burckhart, H; Burdin, S; Burgess, T; Burke, S; Busato, E; Bussey, P; Buszello, C P; Butin, F; Butler, B; Butler, J M; Buttar, C M; Butterworth, J M; Byatt, T; Caballero, J; Cabrera Urbán, S; Caccia, M; Caforio, D; Cakir, O; Calafiura, P; Calderini, G; Calfayan, P; Calkins, R; Caloba, L P; Caloi, R; Calvet, D; Camard, A; Camarri, P; Cambiaghi, M; Cameron, D; Cammin, J; Campana, S; Campanelli, M; Canale, V; Canelli, F; Canepa, A; Cantero, J; Capasso, L; Capeans Garrido, M D M; Caprini, I; Caprini, M; Caprio, M; Capriotti, D; Capua, M; Caputo, R; Caramarcu, C; Cardarelli, R; Carli, T; Carlino, G; Carminati, L; Caron, B; Caron, S; Carpentieri, C; Carrillo Montoya, G D; Carron Montero, S; Carter, A A; Carter, J R; Carvalho, J; Casadei, D; Casado, M P; Cascella, M; Caso, C; Castaneda Hernandez, A M; Castaneda-Miranda, E; Castillo Gimenez, V; Castro, N F; Cataldi, G; Cataneo, F; Catinaccio, A; Catmore, J R; Cattai, A; Cattani, G; Caughron, S; Cauz, D; Cavallari, A; Cavalleri, P; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Cazzato, A; Ceradini, F; Cerna, C; Cerqueira, A S; Cerri, A; Cerrito, L; Cerutti, F; Cervetto, M; Cetin, S A; Cevenini, F; Chafaq, A; Chakraborty, D; Chan, K; Chapman, J D; Chapman, J W; Chareyre, E; Charlton, D G; Chavda, V; Cheatham, S; Chekanov, S; Chekulaev, S V; Chelkov, G A; Chen, H; Chen, L; Chen, S; 
Chen, T; Chen, X; Cheng, S; Cheplakov, A; Chepurnov, V F; Cherkaoui El Moursli, R; Tcherniatine, V; Chesneanu, D; Cheu, E; Cheung, S L; Chevalier, L; Chevallier, F; Chiarella, V; Chiefari, G; Chikovani, L; Childers, J T; Chilingarov, A; Chiodini, G; Chizhov, M V; Choudalakis, G; Chouridou, S; Christidi, I A; Christov, A; Chromek-Burckhart, D; Chu, M L; Chudoba, J; Ciapetti, G; Ciftci, A K; Ciftci, R; Cinca, D; Cindro, V; Ciobotaru, M D; Ciocca, C; Ciocio, A; Cirilli, M; Citterio, M; Clark, A; Clark, P J; Cleland, W; Clemens, J C; Clement, B; Clement, C; Clifft, R W; Coadou, Y; Cobal, M; Coccaro, A; Cochran, J; Coe, P; Coelli, S; Coggeshall, J; Cogneras, E; Cojocaru, C D; Colas, J; Cole, B; Colijn, A P; Collard, C; Collins, N J; Collins-Tooth, C; Collot, J; Colon, G; Coluccia, R; Comune, G; Conde Muiño, P; Coniavitis, E; Conidi, M C; Consonni, M; Constantinescu, S; Conta, C; Conventi, F; Cook, J; Cooke, M; Cooper, B D; Cooper-Sarkar, A M; Cooper-Smith, N J; Copic, K; Cornelissen, T; Corradi, M; Correard, S; Corriveau, F; Corso-Radu, A; Cortes-Gonzalez, A; Cortiana, G; Costa, G; Costa, M J; Costanzo, D; Costin, T; Côté, D; Coura Torres, R; Courneyea, L; Cowan, G; Cowden, C; Cox, B E; Cranmer, K; Cranshaw, J; Cristinziani, M; Crosetti, G; Crupi, R; Crépé-Renaudin, S; Cuenca Almenar, C; Cuhadar Donszelmann, T; Cuneo, S; Curatolo, M; Curtis, C J; Cwetanski, P; Czirr, H; Czyczula, Z; D'Auria, S; D'Onofrio, M; D'Orazio, A; Da Rocha Gesualdi Mello, A; Da Silva, P V M; Da Via, C; Dabrowski, W; Dahlhoff, A; Dai, T; Dallapiccola, C; Dallison, S J; Dalmau, J; Daly, C H; Dam, M; Dameri, M; Danielsson, H O; Dankers, R; Dannheim, D; Dao, V; Darbo, G; Darlea, G L; Daum, C; Dauvergne, J P; Davey, W; Davidek, T; Davidson, N; Davidson, R; Davies, M; Davison, A R; Dawe, E; Dawson, I; Dawson, J W; Daya, R K; De, K; de Asmundis, R; De Castro, S; De Castro Faria Salgado, P E; De Cecco, S; de Graat, J; De Groot, N; de Jong, P; De La Cruz-Burelo, E; De La Taille, C; De Lotto, B; De Mora, 
L; De Nooij, L; De Oliveira Branco, M; De Pedis, D; de Saintignon, P; De Salvo, A; De Sanctis, U; De Santo, A; De Vivie De Regie, J B; De Zorzi, G; Dean, S; Dedes, G; Dedovich, D V; Defay, P O; Degenhardt, J; Dehchar, M; Deile, M; Del Papa, C; Del Peso, J; Del Prete, T; Dell'acqua, A; Dell'asta, L; Della Pietra, M; Della Volpe, D; Delmastro, M; Delpierre, P; Delruelle, N; Delsart, P A; Deluca, C; Demers, S; Demichev, M; Demirkoz, B; Deng, J; Deng, W; Denisov, S P; Dennis, C; Derkaoui, J E; Derue, F; Dervan, P; Desch, K; Deviveiros, P O; Dewhurst, A; Dewilde, B; Dhaliwal, S; Dhullipudi, R; Di Ciaccio, A; Di Ciaccio, L; Di Domenico, A; Di Girolamo, A; Di Girolamo, B; Di Luise, S; Di Mattia, A; Di Nardo, R; Di Simone, A; Di Sipio, R; Diaz, M A; Diaz Gomez, M M; Diblen, F; Diehl, E B; Dietl, H; Dietrich, J; Dietzsch, T A; Diglio, S; Dindar Yagci, K; Dingfelder, J; Dionisi, C; Dita, P; Dita, S; Dittus, F; Djama, F; Djilkibaev, R; Djobava, T; do Vale, M A B; Do Valle Wemans, A; Doan, T K O; Dobbs, M; Dobinson, R; Dobos, D; Dobson, E; Dobson, M; Dodd, J; Dogan, O B; Doglioni, C; Doherty, T; Doi, Y; Dolejsi, J; Dolenc, I; Dolezal, Z; Dolgoshein, B A; Dohmae, T; Donega, M; Donini, J; Dopke, J; Doria, A; Dos Anjos, A; Dosil, M; Dotti, A; Dova, M T; Dowell, J D; Doxiadis, A; Doyle, A T; Drasal, Z; Drees, J; Dressnandt, N; Drevermann, H; Driouichi, C; Dris, M; Drohan, J G; Dubbert, J; Dubbs, T; Dube, S; Duchovni, E; Duckeck, G; Dudarev, A; Dudziak, F; Dührssen, M; Duerdoth, I P; Duflot, L; Dufour, M-A; Dunford, M; Duran Yildiz, H; Dushkin, A; Duxfield, R; Dwuznik, M; Dydak, F; Dzahini, D; Düren, M; Ebenstein, W L; Ebke, J; Eckert, S; Eckweiler, S; Edmonds, K; Edwards, C A; Efthymiopoulos, I; Egorov, K; Ehrenfeld, W; Ehrich, T; Eifert, T; Eigen, G; Einsweiler, K; Eisenhandler, E; Ekelof, T; El Kacimi, M; Ellert, M; Elles, S; Ellinghaus, F; Ellis, K; Ellis, N; Elmsheuser, J; Elsing, M; Ely, R; Emeliyanov, D; Engelmann, R; Engl, A; Epp, B; Eppig, A; Erdmann, J; Ereditato, A; 
Eremin, V; Eriksson, D; Ermoline, I; Ernst, J; Ernst, M; Ernwein, J; Errede, D; Errede, S; Ertel, E; Escalier, M; Escobar, C; Espinal Curull, X; Esposito, B; Etienne, F; Etienvre, A I; Etzion, E; Evans, H; Evdokimov, V N; Fabbri, L; Fabre, C; Facius, K; Fakhrutdinov, R M; Falciano, S; Falou, A C; Fang, Y; Fanti, M; Farbin, A; Farilla, A; Farley, J; Farooque, T; Farrington, S M; Farthouat, P; Fasching, D; Fassnacht, P; Fassouliotis, D; Fatholahzadeh, B; Fayard, L; Fazio, S; Febbraro, R; Federic, P; Fedin, O L; Fedorko, I; Fedorko, W; Fehling-Kaschek, M; Feligioni, L; Felzmann, C U; Feng, C; Feng, E J; Fenyuk, A B; Ferencei, J; Ferguson, D; Ferland, J; Fernandes, B; Fernando, W; Ferrag, S; Ferrando, J; Ferrara, V; Ferrari, A; Ferrari, P; Ferrari, R; Ferrer, A; Ferrer, M L; Ferrere, D; Ferretti, C; Ferretto Parodi, A; Ferro, F; Fiascaris, M; Fiedler, F; Filipčič, A; Filippas, A; Filthaut, F; Fincke-Keeler, M; Fiolhais, M C N; Fiorini, L; Firan, A; Fischer, G; Fischer, P; Fisher, M J; Fisher, S M; Flammer, J; Flechl, M; Fleck, I; Fleckner, J; Fleischmann, P; Fleischmann, S; Flick, T; Flores Castillo, L R; Flowerdew, M J; Föhlisch, F; Fokitis, M; Fonseca Martin, T; Fopma, J; Forbush, D A; Formica, A; Forti, A; Fortin, D; Foster, J M; Fournier, D; Foussat, A; Fowler, A J; Fowler, K; Fox, H; Francavilla, P; Franchino, S; Francis, D; Franklin, M; Franz, S; Fraternali, M; Fratina, S; Freestone, J; French, S T; Froeschl, R; Froidevaux, D; Frost, J A; Fukunaga, C; Fullana Torregrosa, E; Fuster, J; Gabaldon, C; Gabizon, O; Gadfort, T; Gadomski, S; Gagliardi, G; Gagnon, P; Galea, C; Gallas, E J; Gallas, M V; Gallo, V; Gallop, B J; Gallus, P; Galyaev, E; Gan, K K; Gao, Y S; Gapienko, V A; Gaponenko, A; Garcia-Sciveres, M; García, C; García Navarro, J E; Gardner, R W; Garelli, N; Garitaonandia, H; Garonne, V; Garvey, J; Gatti, C; Gaudio, G; Gaumer, O; Gautard, V; Gauzzi, P; Gavrilenko, I L; Gay, C; Gaycken, G; Gayde, J-C; Gazis, E N; Ge, P; Gee, C N P; Geich-Gimbel, Ch; 
Gellerstedt, K; Gemme, C; Genest, M H; Gentile, S; Georgatos, F; George, S; Gerlach, P; Gershon, A; Geweniger, C; Ghazlane, H; Ghez, P; Ghodbane, N; Giacobbe, B; Giagu, S; Giakoumopoulou, V; Giangiobbe, V; Gianotti, F; Gibbard, B; Gibson, A; Gibson, S M; Gieraltowski, G F; Gilbert, L M; Gilchriese, M; Gildemeister, O; Gilewsky, V; Gillberg, D; Gillman, A R; Gingrich, D M; Ginzburg, J; Giokaris, N; Giordani, M P; Giordano, R; Giorgi, F M; Giovannini, P; Giraud, P F; Girtler, P; Giugni, D; Giusti, P; Gjelsten, B K; Gladilin, L K; Glasman, C; Glatzer, J; Glazov, A; Glitza, K W; Glonti, G L; Gnanvo, K G; Godfrey, J; Godlewski, J; Goebel, M; Göpfert, T; Goeringer, C; Gössling, C; Göttfert, T; Goggi, V; Goldfarb, S; Goldin, D; Golling, T; Gollub, N P; Golovnia, S N; Gomes, A; Gomez Fajardo, L S; Gonçalo, R; Gonella, L; Gong, C; Gonidec, A; Gonzalez, S; González de la Hoz, S; Gonzalez Silva, M L; Gonzalez-Pineiro, B; Gonzalez-Sevilla, S; Goodson, J J; Goossens, L; Gorbounov, P A; Gordon, H A; Gorelov, I; Gorfine, G; Gorini, B; Gorini, E; Gorišek, A; Gornicki, E; Gorokhov, S A; Gorski, B T; Goryachev, V N; Gosdzik, B; Gosselink, M; Gostkin, M I; Gouanère, M; Gough Eschrich, I; Gouighri, M; Goujdami, D; Goulette, M P; Goussiou, A G; Goy, C; Grabowska-Bold, I; Grabski, V; Grafström, P; Grah, C; Grahn, K-J; Grancagnolo, F; Grancagnolo, S; Grassi, V; Gratchev, V; Grau, N; Gray, H M; Gray, J A; Graziani, E; Grebenyuk, O G; Green, B; Greenfield, D; Greenshaw, T; Greenwood, Z D; Gregor, I M; Grenier, P; Grewal, A; Griesmayer, E; Griffiths, J; Grigalashvili, N; Grillo, A A; Grimm, K; Grinstein, S; Gris, P L Y; Grishkevich, Y V; Grivaz, J-F; Groer, L S; Grognuz, J; Groh, M; Groll, M; Gross, E; Grosse-Knetter, J; Groth-Jensen, J; Gruwe, M; Grybel, K; Guarino, V J; Guicheney, C; Guida, A; Guillemin, T; Guler, H; Gunther, J; Guo, B; Gupta, A; Gusakov, Y; Gushchin, V N; Gutierrez, A; Gutierrez, P; Guttman, N; Gutzwiller, O; Guyot, C; Gwenlan, C; Gwilliam, C B; Haas, A; Haas, S; Haber, 
C; Haboubi, G; Hackenburg, R; Hadavand, H K; Hadley, D R; Haeberli, C; Haefner, P; Härtel, R; Hahn, F; Haider, S; Hajduk, Z; Hakobyan, H; Haller, J; Hallewell, G D; Hamacher, K; Hamilton, A; Hamilton, S; Han, H; Han, L; Hanagaki, K; Hance, M; Handel, C; Hanke, P; Hansen, C J; Hansen, J R; Hansen, J B; Hansen, J D; Hansen, P H; Hansl-Kozanecka, T; Hansson, P; Hara, K; Hare, G A; Harenberg, T; Harper, R; Harrington, R D; Harris, O M; Harrison, K; Hart, J C; Hartert, J; Hartjes, F; Haruyama, T; Harvey, A; Hasegawa, S; Hasegawa, Y; Hashemi, K; Hassani, S; Hatch, M; Hauff, D; Haug, S; Hauschild, M; Hauser, R; Havranek, M; Hawes, B M; Hawkes, C M; Hawkings, R J; Hawkins, D; Hayakawa, T; Hayward, H S; Haywood, S J; Hazen, E; He, M; Head, S J; Hedberg, V; Heelan, L; Heim, S; Heinemann, B; Heinemann, F E W; Heisterkamp, S; Helary, L; Heldmann, M; Heller, M; Hellman, S; Helsens, C; Hemperek, T; Henderson, R C W; Hendriks, P J; Henke, M; Henrichs, A; Henriques Correia, A M; Henrot-Versille, S; Henry-Couannier, F; Hensel, C; Henss, T; Hernández Jiménez, Y; Hershenhorn, A D; Herten, G; Hertenberger, R; Hervas, L; Hessey, N P; Hidvegi, A; Higón-Rodriguez, E; Hill, D; Hill, J C; Hill, N; Hiller, K H; Hillert, S; Hillier, S J; Hinchliffe, I; Hindson, D; Hines, E; Hirose, M; Hirsch, F; Hirschbuehl, D; Hobbs, J; Hod, N; Hodgkinson, M C; Hodgson, P; Hoecker, A; Hoeferkamp, M R; Hoffman, J; Hoffmann, D; Hohlfeld, M; Holder, M; Hollins, T I; Hollyman, G; Holmes, A; Holmgren, S O; Holy, T; Holzbauer, J L; Homer, R J; Homma, Y; Horazdovsky, T; Horn, C; Horner, S; Horvat, S; Hostachy, J-Y; Hott, T; Hou, S; Houlden, M A; Hoummada, A; Howell, D F; Hrivnac, J; Hruska, I; Hryn'ova, T; Hsu, P J; Hsu, S-C; Huang, G S; Hubacek, Z; Hubaut, F; Huegging, F; Huffman, T B; Hughes, E W; Hughes, G; Hughes-Jones, R E; Huhtinen, M; Hurst, P; Hurwitz, M; Husemann, U; Huseynov, N; Huston, J; Huth, J; Iacobucci, G; Iakovidis, G; Ibbotson, M; Ibragimov, I; Ichimiya, R; Iconomidou-Fayard, L; Idarraga, J; 
Idzik, M; Iengo, P; Igonkina, O; Ikegami, Y; Ikeno, M; Ilchenko, Y; Iliadis, D; Imbault, D; Imhaeuser, M; Imori, M; Ince, T; Inigo-Golfin, J; Ioannou, P; Iodice, M; Ionescu, G; Irles Quiles, A; Ishii, K; Ishikawa, A; Ishino, M; Ishmukhametov, R; Isobe, T; Issakov, V; Issever, C; Istin, S; Itoh, Y; Ivashin, A V; Iwanski, W; Iwasaki, H; Izen, J M; Izzo, V; Jackson, B; Jackson, J N; Jackson, P; Jaekel, M R; Jahoda, M; Jain, V; Jakobs, K; Jakobsen, S; Jakubek, J; Jana, D K; Jankowski, E; Jansen, E; Jantsch, A; Janus, M; Jared, R C; Jarlskog, G; Jeanty, L; Jelen, K; Jen-La Plante, I; Jenni, P; Jeremie, A; Jež, P; Jézéquel, S; Ji, H; Ji, W; Jia, J; Jiang, Y; Jimenez Belenguer, M; Jin, G; Jin, S; Jinnouchi, O; Joffe, D; Johansen, L G; Johansen, M; Johansson, K E; Johansson, P; Johnert, S; Johns, K A; Jon-And, K; Jones, G; Jones, M; Jones, R W L; Jones, T W; Jones, T J; Jonsson, O; Joo, K K; Joos, D; Joram, C; Jorge, P M; Jorgensen, S; Joseph, J; Juranek, V; Jussel, P; Kabachenko, V V; Kabana, S; Kaci, M; Kaczmarska, A; Kadlecik, P; Kado, M; Kagan, H; Kagan, M; Kaiser, S; Kajomovitz, E; Kalinin, S; Kalinovskaya, L V; Kama, S; Kanaya, N; Kaneda, M; Kantserov, V A; Kanzaki, J; Kaplan, B; Kapliy, A; Kaplon, J; Kar, D; Karagounis, M; Karagoz, M; Karnevskiy, M; Karr, K; Kartvelishvili, V; Karyukhin, A N; Kashif, L; Kasmi, A; Kass, R D; Kastanas, A; Kastoryano, M; Kataoka, M; Kataoka, Y; Katsoufis, E; Katzy, J; Kaushik, V; Kawagoe, K; Kawamoto, T; Kawamura, G; Kayl, M S; Kayumov, F; Kazanin, V A; Kazarinov, M Y; Kazi, S I; Keates, J R; Keeler, R; Keener, P T; Kehoe, R; Keil, M; Kekelidze, G D; Kelly, M; Kennedy, J; Kenney, C J; Kenyon, M; Kepka, O; Kerschen, N; Kerševan, B P; Kersten, S; Kessoku, K; Ketterer, C; Khakzad, M; Khalil-Zada, F; Khandanyan, H; Khanov, A; Kharchenko, D; Khodinov, A; Kholodenko, A G; Khomich, A; Khoriauli, G; Khovanskiy, N; Khovanskiy, V; Khramov, E; Khubua, J; Kilvington, G; Kim, H; Kim, M S; Kim, P C; Kim, S H; Kimura, N; Kind, O; Kind, P; King, B T; 
King, M; Kirk, J; Kirsch, G P; Kirsch, L E; Kiryunin, A E; Kisielewska, D; Kisielewski, B; Kittelmann, T; Kiver, A M; Kiyamura, H; Kladiva, E; Klaiber-Lodewigs, J; Klein, M; Klein, U; Kleinknecht, K; Klemetti, M; Klier, A; Klimentov, A; Klingenberg, R; Klinkby, E B; Klioutchnikova, T; Klok, P F; Klous, S; Kluge, E-E; Kluge, T; Kluit, P; Kluth, S; Knecht, N S; Kneringer, E; Knobloch, J; Ko, B R; Kobayashi, T; Kobel, M; Koblitz, B; Kocian, M; Kocnar, A; Kodys, P; Köneke, K; König, A C; Koenig, S; König, S; Köpke, L; Koetsveld, F; Koevesarki, P; Koffas, T; Koffeman, E; Kohn, F; Kohout, Z; Kohriki, T; Koi, T; Kokott, T; Kolachev, G M; Kolanoski, H; Kolesnikov, V; Koletsou, I; Koll, J; Kollar, D; Kollefrath, M; Kolos, S; Kolya, S D; Komar, A A; Komaragiri, J R; Kondo, T; Kono, T; Kononov, A I; Konoplich, R; Konovalov, S P; Konstantinidis, N; Kootz, A; Koperny, S; Kopikov, S V; Korcyl, K; Kordas, K; Koreshev, V; Korn, A; Korol, A; Korolkov, I; Korolkova, E V; Korotkov, V A; Kortner, O; Kostka, P; Kostyukhin, V V; Kotamäki, M J; Kotov, S; Kotov, V M; Kotov, K Y; Kourkoumelis, C; Koutsman, A; Kowalewski, R; Kowalski, H; Kowalski, T Z; Kozanecki, W; Kozhin, A S; Kral, V; Kramarenko, V A; Kramberger, G; Krasel, O; Krasny, M W; Krasznahorkay, A; Kraus, J; Kreisel, A; Krejci, F; Kretzschmar, J; Krieger, N; Krieger, P; Krobath, G; Kroeninger, K; Kroha, H; Kroll, J; Kroseberg, J; Krstic, J; Kruchonak, U; Krüger, H; Krumshteyn, Z V; Kruth, A; Kubota, T; Kuehn, S; Kugel, A; Kuhl, T; Kuhn, D; Kukhtin, V; Kulchitsky, Y; Kuleshov, S; Kummer, C; Kuna, M; Kundu, N; Kunkle, J; Kupco, A; Kurashige, H; Kurata, M; Kurchaninov, L L; Kurochkin, Y A; Kus, V; Kuykendall, W; Kuze, M; Kuzhir, P; Kvasnicka, O; Kwee, R; La Rosa, A; La Rotonda, L; Labarga, L; Labbe, J; Lacasta, C; Lacava, F; Lacker, H; Lacour, D; Lacuesta, V R; Ladygin, E; Lafaye, R; Laforge, B; Lagouri, T; Lai, S; Lamanna, M; Lambacher, M; Lampen, C L; Lampl, W; Lancon, E; Landgraf, U; Landon, M P J; Landsman, H; Lane, J L; Lange, 
C; Lankford, A J; Lanni, F; Lantzsch, K; Lanza, A; Lapin, V V; Laplace, S; Lapoire, C; Laporte, J F; Lari, T; Larionov, A V; Larner, A; Lasseur, C; Lassnig, M; Lau, W; Laurelli, P; Lavorato, A; Lavrijsen, W; Laycock, P; Lazarev, A B; Lazzaro, A; Le Dortz, O; Le Guirriec, E; Le Maner, C; Le Menedeu, E; Le Vine, M; Leahu, M; Lebedev, A; Lebel, C; Lechowski, M; Lecompte, T; Ledroit-Guillon, F; Lee, H; Lee, J S H; Lee, S C; Lefebvre, M; Legendre, M; Leger, A; Legeyt, B C; Legger, F; Leggett, C; Lehmacher, M; Lehmann Miotto, G; Lehto, M; Lei, X; Leitner, R; Lellouch, D; Lellouch, J; Leltchouk, M; Lendermann, V; Leney, K J C; Lenz, T; Lenzen, G; Lenzi, B; Leonhardt, K; Lepidis, J; Leroy, C; Lessard, J-R; Lesser, J; Lester, C G; Leung Fook Cheong, A; Levêque, J; Levin, D; Levinson, L J; Levitski, M S; Lewandowska, M; Leyton, M; Li, H; Li, X; Liang, Z; Liang, Z; Liberti, B; Lichard, P; Lichtnecker, M; Lie, K; Liebig, W; Lifshitz, R; Lilley, J N; Lim, H; Limosani, A; Limper, M; Lin, S C; Linde, F; Linnemann, J T; Lipeles, E; Lipinsky, L; Lipniacka, A; Liss, T M; Lissauer, D; Lister, A; Litke, A M; Liu, C; Liu, D; Liu, H; Liu, J B; Liu, M; Liu, S; Liu, T; Liu, Y; Livan, M; Livermore, S S A; Lleres, A; Lloyd, S L; Lobodzinska, E; Loch, P; Lockman, W S; Lockwitz, S; Loddenkoetter, T; Loebinger, F K; Loginov, A; Loh, C W; Lohse, T; Lohwasser, K; Lokajicek, M; Loken, J; Long, R E; Lopes, L; Lopez Mateos, D; Losada, M; Loscutoff, P; Losty, M J; Lou, X; Lounis, A; Loureiro, K F; Lovas, L; Love, J; Love, P A; Lowe, A J; Lu, F; Lu, J; Lu, L; Lubatti, H J; Luci, C; Lucotte, A; Ludwig, A; Ludwig, D; Ludwig, I; Ludwig, J; Luehring, F; Luijckx, G; Lumb, D; Luminari, L; Lund, E; Lund-Jensen, B; Lundberg, B; Lundberg, J; Lundquist, J; Lupi, A; Lutz, G; Lynn, D; Lynn, J; Lys, J; Lytken, E; Ma, H; Ma, L L; Maassen, M; Macana Goia, J A; Maccarrone, G; Macchiolo, A; Maček, B; Machado Miguens, J; Macina, D; Mackeprang, R; Macqueen, D; Madaras, R J; Mader, W F; Maenner, R; Maeno, T; Mättig, P; 
Mättig, S; Magalhaes Martins, P J; Magnoni, L; Magradze, E; Magrath, C A; Mahalalel, Y; Mahboubi, K; Mahmood, A; Mahout, G; Maiani, C; Maidantchik, C; Maio, A; Majewski, S; Makida, Y; Makouski, M; Makovec, N; Mal, P; Malecki, Pa; Malecki, P; Maleev, V P; Malek, F; Mallik, U; Malon, D; Maltezos, S; Malyshev, V; Malyukov, S; Mambelli, M; Mameghani, R; Mamuzic, J; Manabe, A; Manara, A; Mandelli, L; Mandić, I; Mandrysch, R; Maneira, J; Mangeard, P S; Mangin-Brinet, M; Manjavidze, I D; Mann, A; Mann, W A; Manning, P M; Manousakis-Katsikakis, A; Mansoulie, B; Manz, A; Mapelli, A; Mapelli, L; March, L; Marchand, J F; Marchese, F; Marchesotti, M; Marchiori, G; Marcisovsky, M; Marin, A; Marino, C P; Marroquim, F; Marshall, R; Marshall, Z; Martens, F K; Marti-Garcia, S; Martin, A J; Martin, A J; Martin, B; Martin, B; Martin, F F; Martin, J P; Martin, Ph; Martin, T A; Martin Dit Latour, B; Martinez, M; Martinez Outschoorn, V; Martini, A; Martyniuk, A C; Marzano, F; Marzin, A; Masetti, L; Mashimo, T; Mashinistov, R; Masik, J; Maslennikov, A L; Mass, M; Massa, I; Massaro, G; Massol, N; Mastroberardino, A; Masubuchi, T; Mathes, M; Matricon, P; Matsumoto, H; Matsunaga, H; Matsushita, T; Mattravers, C; Maugain, J M; Maxfield, S J; May, E N; Mayer, J K; Mayne, A; Mazini, R; Mazur, M; Mazzanti, M; Mazzoni, E; Mc Donald, J; Mc Kee, S P; McCarn, A; McCarthy, R L; McCarthy, T G; McCubbin, N A; McFarlane, K W; McGarvie, S; McGlone, H; McHedlidze, G; McLaren, R A; McMahon, S J; McMahon, T R; McMahon, T J; McPherson, R A; Meade, A; Mechnich, J; Mechtel, M; Medinnis, M; Meera-Lebbai, R; Meguro, T; Mehdiyev, R; Mehlhase, S; Mehta, A; Meier, K; Meinhardt, J; Meirose, B; Melachrinos, C; Mellado Garcia, B R; Mendoza Navas, L; Meng, Z; Mengarelli, A; Menke, S; Menot, C; Meoni, E; Merkl, D; Mermod, P; Merola, L; Meroni, C; Merritt, F S; Messina, A M; Messmer, I; Metcalfe, J; Mete, A S; Meuser, S; Meyer, C; Meyer, J-P; Meyer, J; Meyer, J; Meyer, T C; Meyer, W T; Miao, J; Michal, S; Micu, L; 
Middleton, R P; Miele, P; Migas, S; Migliaccio, A; Mijović, L; Mikenberg, G; Mikestikova, M; Mikulec, B; Mikuž, M; Miller, D W; Miller, R J; Mills, W J; Mills, C; Milov, A; Milstead, D A; Milstein, D; Mima, S; Minaenko, A A; Miñano, M; Minashvili, I A; Mincer, A I; Mindur, B; Mineev, M; Ming, Y; Mir, L M; Mirabelli, G; Miralles Verge, L; Misawa, S; Miscetti, S; Misiejuk, A; Mitra, A; Mitrevski, J; Mitrofanov, G Y; Mitsou, V A; Mitsui, S; Miyagawa, P S; Miyazaki, K; Mjörnmark, J U; Mladenov, D; Moa, T; Moch, M; Mockett, P; Moed, S; Moeller, V; Mönig, K; Möser, N; Mohn, B; Mohr, W; Mohrdieck-Möck, S; Moisseev, A M; Moles-Valls, R; Molina-Perez, J; Moneta, L; Monk, J; Monnier, E; Montesano, S; Monticelli, F; Moore, R W; Moorhead, G F; Mora Herrera, C; Moraes, A; Morais, A; Morel, J; Morello, G; Moreno, D; Moreno Llácer, M; Morettini, P; Morgan, D; Morii, M; Morin, J; Morita, Y; Morley, A K; Mornacchi, G; Morone, M-C; Morozov, S V; Morris, J D; Moser, H G; Mosidze, M; Moss, J; Moszczynski, A; Mount, R; Mountricha, E; Mouraviev, S V; Moye, T H; Moyse, E J W; Mudrinic, M; Mueller, F; Mueller, J; Mueller, K; Müller, T A; Muenstermann, D; Muijs, A; Muir, A; Munar, A; Munwes, Y; Murakami, K; Murillo Garcia, R; Murray, W J; Mussche, I; Musto, E; Myagkov, A G; Myska, M; Nadal, J; Nagai, K; Nagano, K; Nagasaka, Y; Nairz, A M; Naito, D; Nakamura, K; Nakano, I; Nanava, G; Napier, A; Nash, M; Nasteva, I; Nation, N R; Nattermann, T; Naumann, T; Nauyock, F; Navarro, G; Nderitu, S K; Neal, H A; Nebot, E; Nechaeva, P; Negri, A; Negri, G; Negroni, S; Nelson, A; Nelson, S; Nelson, T K; Nemecek, S; Nemethy, P; Nepomuceno, A A; Nessi, M; Nesterov, S Y; Neubauer, M S; Neukermans, L; Neusiedl, A; Neves, R M; Nevski, P; Newcomer, F M; Nicholson, C; Nickerson, R B; Nicolaidou, R; Nicolas, L; Nicoletti, G; Nicquevert, B; Niedercorn, F; Nielsen, J; Niinikoski, T; Nikiforov, A; Nikolaenko, V; Nikolaev, K; Nikolic-Audit, I; Nikolopoulos, K; Nilsen, H; Nilsson, P; Ninomiya, Y; Nisati, A; 
Nishiyama, T; Nisius, R; Nodulman, L; Nomachi, M; Nomidis, I; Nomoto, H; Nordberg, M; Nordkvist, B; Norniella Francisco, O; Norton, P R; Notz, D; Novakova, J; Nozaki, M; Nožička, M; Nugent, I M; Nuncio-Quiroz, A-E; Nunes Hanninger, G; Nunnemann, T; Nurse, E; Nyman, T; O'Neale, S W; O'Neil, D C; O'Shea, V; Oakham, F G; Oberlack, H; Ocariz, J; Ochi, A; Oda, S; Odaka, S; Odier, J; Odino, G A; Ogren, H; Oh, A; Oh, S H; Ohm, C C; Ohshima, T; Ohshita, H; Ohska, T K; Ohsugi, T; Okada, S; Okawa, H; Okumura, Y; Okuyama, T; Olcese, M; Olchevski, A G; Oliveira, M; Oliveira Damazio, D; Oliver, C; Oliver, J; Oliver Garcia, E; Olivito, D; Olszewski, A; Olszowska, J; Omachi, C; Onofre, A; Onyisi, P U E; Oram, C J; Ordonez, G; Oreglia, M J; Orellana, F; Oren, Y; Orestano, D; Orlov, I; Oropeza Barrera, C; Orr, R S; Ortega, E O; Osculati, B; Ospanov, R; Osuna, C; Otero Y Garzon, G; Ottersbach, J P; Ottewell, B; Ouchrif, M; Ould-Saada, F; Ouraou, A; Ouyang, Q; Owen, M; Owen, S; Oyarzun, A; Oye, O K; Ozcan, V E; Ozone, K; Ozturk, N; Pacheco Pages, A; Padilla Aranda, C; Paganis, E; Paige, F; Pajchel, K; Palestini, S; Palla, J; Pallin, D; Palma, A; Palmer, J D; Palmer, M J; Pan, Y B; Panagiotopoulou, E; Panes, B; Panikashvili, N; Panin, V N; Panitkin, S; Pantea, D; Panuskova, M; Paolone, V; Paoloni, A; Papadopoulou, T D; Paramonov, A; Park, S J; Park, W; Parker, M A; Parker, S I; Parodi, F; Parsons, J A; Parzefall, U; Pasqualucci, E; Passeri, A; Pastore, F; Pastore, F; Pásztor, G; Pataraia, S; Patel, N; Pater, J R; Patricelli, S; Pauly, T; Peak, L S; Pecsy, M; Pedraza Morales, M I; Peeters, S J M; Peleganchuk, S V; Peng, H; Pengo, R; Penson, A; Penwell, J; Perantoni, M; Perez, K; Perez Codina, E; Pérez García-Estañ, M T; Perez Reale, V; Peric, I; Perini, L; Pernegger, H; Perrino, R; Perrodo, P; Persembe, S; Perus, P; Peshekhonov, V D; Petereit, E; Peters, O; Petersen, B A; Petersen, J; Petersen, T C; Petit, E; Petridou, C; Petrolo, E; Petrucci, F; Petschull, D; Petteni, M; Pezoa, R; 
Pfeifer, B; Phan, A; Phillips, A W; Piacquadio, G; Piccaro, E; Piccinini, M; Pickford, A; Piegaia, R; Pilcher, J E; Pilkington, A D; Pina, J; Pinamonti, M; Pinfold, J L; Ping, J; Pinto, B; Pirotte, O; Pizio, C; Placakyte, R; Plamondon, M; Plano, W G; Pleier, M-A; Pleskach, A V; Poblaguev, A; Poddar, S; Podlyski, F; Poffenberger, P; Poggioli, L; Poghosyan, T; Pohl, M; Polci, F; Polesello, G; Policicchio, A; Polini, A; Poll, J; Polychronakos, V; Pomarede, D M; Pomeroy, D; Pommès, K; Ponsot, P; Pontecorvo, L; Pope, B G; Popeneciu, G A; Popescu, R; Popovic, D S; Poppleton, A; Popule, J; Portell Bueso, X; Porter, R; Posch, C; Pospelov, G E; Pospisil, S; Potekhin, M; Potrap, I N; Potter, C J; Potter, C T; Potter, K P; Poulard, G; Poveda, J; Prabhu, R; Pralavorio, P; Prasad, S; Prata, M; Pravahan, R; Pretzl, K; Pribyl, L; Price, D; Price, L E; Price, M J; Prichard, P M; Prieur, D; Primavera, M; Prokofiev, K; Prokoshin, F; Protopopescu, S; Proudfoot, J; Prudent, X; Przysiezniak, H; Psoroulas, S; Ptacek, E; Puigdengoles, C; Purdham, J; Purohit, M; Puzo, P; Pylypchenko, Y; Qi, M; Qian, J; Qian, W; Qian, Z; Qin, Z; Qing, D; Quadt, A; Quarrie, D R; Quayle, W B; Quinonez, F; Raas, M; Radeka, V; Radescu, V; Radics, B; Rador, T; Ragusa, F; Rahal, G; Rahimi, A M; Rahm, D; Raine, C; Raith, B; Rajagopalan, S; Rajek, S; Rammensee, M; Rammes, M; Ramstedt, M; Ratoff, P N; Rauscher, F; Rauter, E; Raymond, M; Read, A L; Rebuzzi, D M; Redelbach, A; Redlinger, G; Reece, R; Reeves, K; Reichold, A; Reinherz-Aronis, E; Reinsch, A; Reisinger, I; Reljic, D; Rembser, C; Ren, Z L; Renkel, P; Rensch, B; Rescia, S; Rescigno, M; Resconi, S; Resende, B; Reznicek, P; Rezvani, R; Richards, A; Richards, R A; Richter, R; Richter-Was, E; Ridel, M; Rieke, S; Rijpstra, M; Rijssenbeek, M; Rimoldi, A; Rinaldi, L; Rios, R R; Riu, I; Rivoltella, G; Rizatdinova, F; Rizvi, E; Roa Romero, D A; Robertson, S H; Robichaud-Veronneau, A; Robinson, D; Robinson, J E M; Robinson, M; Robson, A; Rocha de Lima, J G; Roda, C; 
Roda Dos Santos, D; Rodier, S; Rodriguez, D; Rodriguez Garcia, Y; Roe, S; Røhne, O; Rojo, V; Rolli, S; Romaniouk, A; Romanov, V M; Romeo, G; Romero Maltrana, D; Roos, L; Ros, E; Rosati, S; Rosenbaum, G A; Rosenberg, E I; Rosendahl, P L; Rosselet, L; Rossetti, V; Rossi, L P; Rossi, L; Rotaru, M; Rothberg, J; Rottländer, I; Rousseau, D; Royon, C R; Rozanov, A; Rozen, Y; Ruan, X; Ruckert, B; Ruckstuhl, N; Rud, V I; Rudolph, G; Rühr, F; Ruggieri, F; Ruiz-Martinez, A; Rulikowska-Zarebska, E; Rumiantsev, V; Rumyantsev, L; Runge, K; Runolfsson, O; Rurikova, Z; Rusakovich, N A; Rust, D R; Rutherfoord, J P; Ruwiedel, C; Ruzicka, P; Ryabov, Y F; Ryadovikov, V; Ryan, P; Rybkin, G; Rzaeva, S; Saavedra, A F; Sadrozinski, H F-W; Sadykov, R; Safai Tehrani, F; Sakamoto, H; Sala, P; Salamanna, G; Salamon, A; Saleem, M; Salihagic, D; Salnikov, A; Salt, J; Saltó Bauza, O; Salvachua Ferrando, B M; Salvatore, D; Salvatore, F; Salvucci, A; Salzburger, A; Sampsonidis, D; Samset, B H; Sandaker, H; Sander, H G; Sanders, M P; Sandhoff, M; Sandhu, P; Sandoval, T; Sandstroem, R; Sandvoss, S; Sankey, D P C; Sanny, B; Sansoni, A; Santamarina Rios, C; Santoni, C; Santonico, R; Santos, H; Saraiva, J G; Sarangi, T; Sarkisyan-Grinbaum, E; Sarri, F; Sartisohn, G; Sasaki, O; Sasaki, T; Sasao, N; Satsounkevitch, I; Sauvage, G; Savard, P; Savine, A Y; Savinov, V; Savva, P; Sawyer, L; Saxon, D H; Says, L P; Sbarra, C; Sbrizzi, A; Scallon, O; Scannicchio, D A; Schaarschmidt, J; Schacht, P; Schäfer, U; Schaetzel, S; Schaffer, A C; Schaile, D; Schaller, M; Schamberger, R D; Schamov, A G; Scharf, V; Schegelsky, V A; Scheirich, D; Schernau, M; Scherzer, M I; Schiavi, C; Schieck, J; Schioppa, M; Schlenker, S; Schlereth, J L; Schmidt, E; Schmidt, M P; Schmieden, K; Schmitt, C; Schmitz, M; Scholte, R C; Schöning, A; Schott, M; Schouten, D; Schovancova, J; Schram, M; Schreiner, A; Schroeder, C; Schroer, N; Schroers, M; Schroff, D; Schuh, S; Schuler, G; Schultes, J; Schultz-Coulon, H-C; Schumacher, J W; 
Schumacher, M; Schumm, B A; Schune, P; Schwanenberger, C; Schwartzman, A; Schweiger, D; Schwemling, P; Schwienhorst, R; Schwierz, R; Schwindling, J; Scott, W G; Searcy, J; Sedykh, E; Segura, E; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Seliverstov, D M; Sellden, B; Sellers, G; Seman, M; Semprini-Cesari, N; Serfon, C; Serin, L; Seuster, R; Severini, H; Sevior, M E; Sfyrla, A; Shabalina, E; Shamim, M; Shan, L Y; Shank, J T; Shao, Q T; Shapiro, M; Shatalov, P B; Shaver, L; Shaw, C; Shaw, K; Sherman, D; Sherwood, P; Shibata, A; Shield, P; Shimizu, S; Shimojima, M; Shin, T; Shmeleva, A; Shochet, M J; Shupe, M A; Sicho, P; Sidoti, A; Siebel, A; Siebel, M; Siegert, F; Siegrist, J; Sijacki, D; Silbert, O; Silva, J; Silver, Y; Silverstein, D; Silverstein, S B; Simak, V; Simic, Lj; Simion, S; Simmons, B; Simonyan, M; Sinervo, P; Sinev, N B; Sipica, V; Siragusa, G; Sisakyan, A N; Sivoklokov, S Yu; Sjölin, J; Sjursen, T B; Skinnari, L A; Skovpen, K; Skubic, P; Skvorodnev, N; Slater, M; Slavicek, T; Sliwa, K; Sloan, T J; Sloper, J; Smakhtin, V; Smirnov, S Yu; Smirnov, Y; Smirnova, L N; Smirnova, O; Smith, B C; Smith, D; Smith, K M; Smizanska, M; Smolek, K; Snesarev, A A; Snow, S W; Snow, J; Snuverink, J; Snyder, S; Soares, M; Sobie, R; Sodomka, J; Soffer, A; Solans, C A; Solar, M; Solc, J; Solfaroli Camillocci, E; Solodkov, A A; Solovyanov, O V; Soluk, R; Sondericker, J; Soni, N; Sopko, V; Sopko, B; Sorbi, M; Sosebee, M; Soukharev, A; Spagnolo, S; Spanò, F; Speckmayer, P; Spencer, E; Spighi, R; Spigo, G; Spila, F; Spiriti, E; Spiwoks, R; Spogli, L; Spousta, M; Spreitzer, T; Spurlock, B; St Denis, R D; Stahl, T; Stahlman, J; Stamen, R; Stancu, S N; Stanecka, E; Stanek, R W; Stanescu, C; Stapnes, S; Starchenko, E A; Stark, J; Staroba, P; Starovoitov, P; Stastny, J; Staude, A; Stavina, P; Stavropoulos, G; Steele, G; Stefanidis, E; Steinbach, P; Steinberg, P; Stekl, I; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stevenson, K; Stewart, G A; 
Stiller, W; Stockmanns, T; Stockton, M C; Stodulski, M; Stoerig, K; Stoicea, G; Stonjek, S; Strachota, P; Stradling, A R; Straessner, A; Strandberg, J; Strandberg, S; Strandlie, A; Strang, M; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Strong, J A; Stroynowski, R; Strube, J; Stugu, B; Stumer, I; Sturm, P; Soh, D A; Su, D; Subramania, S; Sugaya, Y; Sugimoto, T; Suhr, C; Suita, K; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, X H; Sundermann, J E; Suruliz, K; Sushkov, S; Susinno, G; Sutton, M R; Suzuki, Y; Sviridov, Yu M; Swedish, S; Sykora, I; Sykora, T; Szczygiel, R R; Szeless, B; Szymocha, T; Sánchez, J; Ta, D; Taboada Gameiro, S; Tackmann, K; Taffard, A; Tafirout, R; Taga, A; Takahashi, Y; Takai, H; Takashima, R; Takeda, H; Takeshita, T; Talby, M; Talyshev, A; Tamsett, M C; Tanaka, J; Tanaka, R; Tanaka, S; Tanaka, S; Tanaka, Y; Tani, K; Tappern, G P; Tapprogge, S; Tardif, D; Tarem, S; Tarrade, F; Tartarelli, G F; Tas, P; Tasevsky, M; Tassi, E; Tatarkhanov, M; Taylor, C; Taylor, F E; Taylor, G; Taylor, G N; Taylor, R P; Taylor, W; Teixeira Dias Castanheira, M; Teixeira-Dias, P; Temming, K K; Ten Kate, H; Teng, P K; Tennenbaum-Katan, Y D; Terada, S; Terashi, K; Terron, J; Terwort, M; Testa, M; Teuscher, R J; Tevlin, C M; Thadome, J; Therhaag, J; Theveneaux-Pelzer, T; Thioye, M; Thoma, S; Thomas, J P; Thompson, E N; Thompson, P D; Thompson, P D; Thompson, R J; Thompson, A S; Thomson, E; Thun, R P; Tic, T; Tikhomirov, V O; Tikhonov, Y A; Timmermans, C J W P; Tipton, P; Tique Aires Viegas, F J; Tisserant, S; Tobias, J; Toczek, B; Todorov, T; Todorova-Nova, S; Toggerson, B; Tojo, J; Tokár, S; Tokunaga, K; Tokushuku, K; Tollefson, K; Tomasek, L; Tomasek, M; Tomoto, M; Tompkins, D; Tompkins, L; Toms, K; Tonazzo, A; Tong, G; Tonoyan, A; Topfel, C; Topilin, N D; Torchiani, I; Torrence, E; Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Traynor, D; Trefzger, T; Treis, J; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Trinh, T N; Tripiana, M F; 
Triplett, N; Trischuk, W; Trivedi, A; Trocmé, B; Troncon, C; Trottier-McDonald, M; Trzupek, A; Tsarouchas, C; Tseng, J C-L; Tsiakiris, M; Tsiareshka, P V; Tsionou, D; Tsipolitis, G; Tsiskaridze, V; Tskhadadze, E G; Tsukerman, I I; Tsulaia, V; Tsung, J-W; Tsuno, S; Tsybychev, D; Tuggle, J M; Turala, M; Turecek, D; Turk Cakir, I; Turlay, E; Tuts, P M; Twomey, M S; Tylmad, M; Tyndel, M; Typaldos, D; Tyrvainen, H; Tzamarioudaki, E; Tzanakos, G; Uchida, K; Ueda, I; Ueno, R; Ugland, M; Uhlenbrock, M; Uhrmacher, M; Ukegawa, F; Unal, G; Underwood, D G; Undrus, A; Unel, G; Unno, Y; Urbaniec, D; Urkovsky, E; Urquijo, P; Urrejola, P; Usai, G; Uslenghi, M; Vacavant, L; Vacek, V; Vachon, B; Vahsen, S; Valderanis, C; Valenta, J; Valente, P; Valentinetti, S; Valkar, S; Valladolid Gallego, E; Vallecorsa, S; Valls Ferrer, J A; Van Berg, R; van der Graaf, H; van der Kraaij, E; van der Poel, E; van der Ster, D; Van Eijk, B; van Eldik, N; van Gemmeren, P; van Kesteren, Z; van Vulpen, I; Vandelli, W; Vandoni, G; Vaniachine, A; Vankov, P; Vannucci, F; Varela Rodriguez, F; Vari, R; Varnes, E W; Varouchas, D; Vartapetian, A; Varvell, K E; Vasilyeva, L; Vassilakopoulos, V I; Vazeille, F; Vegni, G; Veillet, J J; Vellidis, C; Veloso, F; Veness, R; Veneziano, S; Ventura, A; Ventura, D; Ventura, S; Venturi, M; Venturi, N; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vertogardov, L; Vetterli, M C; Vichou, I; Vickey, T; Viehhauser, G H A; Viel, S; Villa, M; Villani, E G; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinek, E; Vinogradov, V B; Virchaux, M; Viret, S; Virzi, J; Vitale, A; Vitells, O; Vivarelli, I; Vives Vaque, F; Vlachos, S; Vlasak, M; Vlasov, N; Vogel, A; Vogt, H; Vokac, P; Volpi, M; Volpini, G; von der Schmitt, H; von Loeben, J; von Radziewski, H; von Toerne, E; Vorobel, V; Vorobiev, A P; Vorwerk, V; Vos, M; Voss, R; Voss, T T; Vossebeld, J H; Vovenko, A S; Vranjes, N; Vranjes Milosavljevic, M; Vrba, V; Vreeswijk, M; Vu Anh, T; Vudragovic, D; Vuillermet, R; Vukotic, I; 
Wagner, W; Wagner, P; Wahlen, H; Walbersloh, J; Walder, J; Walker, R; Walkowiak, W; Wall, R; Waller, P; Wang, C; Wang, H; Wang, J; Wang, J C; Wang, S M; Warburton, A; Ward, C P; Warsinsky, M; Wastie, R; Watkins, P M; Watson, A T; Watson, M F; Watts, G; Watts, S; Waugh, A T; Waugh, B M; Webel, M; Weber, J; Weber, M; Weber, M S; Weber, P; Weidberg, A R; Weingarten, J; Weiser, C; Wellenstein, H; Wellisch, H P; Wells, P S; Wen, M; Wenaus, T; Wendler, S; Weng, Z; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Werth, M; Werthenbach, U; Wessels, M; Whalen, K; Wheeler-Ellis, S J; Whitaker, S P; White, A; White, M J; White, S; Whitehead, S R; Whiteson, D; Whittington, D; Wicek, F; Wicke, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik, L A M; Wildauer, A; Wildt, M A; Wilhelm, I; Wilkens, H G; Will, J Z; Williams, E; Williams, H H; Willis, W; Willocq, S; Wilson, J A; Wilson, M G; Wilson, A; Wingerter-Seez, I; Winkelmann, S; Winklmeier, F; Wittgen, M; Woehrling, E; Wolter, M W; Wolters, H; Wosiek, B K; Wotschack, J; Woudstra, M J; Wraight, K; Wright, C; Wright, D; Wrona, B; Wu, S L; Wu, X; Wuestenfeld, J; Wulf, E; Wunstorf, R; Wynne, B M; Xaplanteris, L; Xella, S; Xie, S; Xie, Y; Xu, C; Xu, D; Xu, G; Xu, N; Yabsley, B; Yamada, M; Yamamoto, A; Yamamoto, K; Yamamoto, S; Yamamura, T; Yamaoka, J; Yamazaki, T; Yamazaki, Y; Yan, Z; Yang, H; Yang, S; Yang, U K; Yang, Y; Yang, Y; Yang, Z; Yao, W-M; Yao, Y; Yasu, Y; Ye, J; Ye, S; Yilmaz, M; Yoosoofmiya, R; Yorita, K; Yoshida, R; Young, C; Youssef, S P; Yu, D; Yu, J; Yu, J; Yuan, J; Yuan, L; Yurkewicz, A; Zaets, V G; Zaidan, R; Zaitsev, A M; Zajacova, Z; Zalite, Yo K; Zambrano, V; Zanello, L; Zarzhitsky, P; Zaytsev, A; Zdrazil, M; Zeitnitz, C; Zeller, M; Zema, P F; Zemla, A; Zendler, C; Zenin, A V; Zenin, O; Zenis, T; Zenonos, Z; Zenz, S; Zerwas, D; Zevi Della Porta, G; Zhan, Z; Zhang, H; Zhang, J; Zhang, Q; Zhang, X; Zhao, L; Zhao, T; Zhao, Z; Zhemchugov, A; Zheng, S; Zhong, J; Zhou, B; Zhou, N; 
Zhou, Y; Zhu, C G; Zhu, H; Zhu, Y; Zhuang, X; Zhuravlov, V; Zilka, B; Zimmermann, R; Zimmermann, S; Zimmermann, S; Ziolkowski, M; Zitoun, R; Zivković, L; Zmouchko, V V; Zobernig, G; Zoccoli, A; Zolnierowski, Y; Zsenei, A; Zur Nedden, M; Zutshi, V
2010-10-15
A search for new heavy particles manifested as resonances in two-jet final states is presented. The data were produced in 7 TeV proton-proton collisions by the LHC and correspond to an integrated luminosity of 315 nb⁻¹ collected by the ATLAS detector. No resonances were observed. Upper limits were set on the product of cross section and signal acceptance for excited-quark (q*) production as a function of q* mass. These exclude at the 95% C.L. the q* mass interval 0.30
New Fast Beam Conditions Monitoring (BCM1F) system for CMS
NASA Astrophysics Data System (ADS)
Zagozdzinska, A. A.; Bell, A. J.; Dabrowski, A. E.; Hempel, M.; Henschel, H. M.; Karacheban, O.; Przyborowski, D.; Leonard, J. L.; Penno, M.; Pozniak, K. T.; Miraglia, M.; Lange, W.; Lohmann, W.; Ryjov, V.; Lokhovitskiy, A.; Stickland, D.; Walsh, R.
2016-01-01
The CMS Beam Radiation Instrumentation and Luminosity (BRIL) project is composed of several systems providing the experiment protection from adverse beam conditions while also measuring the online luminosity and beam background. Although the readout bandwidth of the Fast Beam Conditions Monitoring system (BCM1F—one of the faster monitoring systems of the CMS BRIL), was sufficient for the initial LHC conditions, the foreseen enhancement of the beams parameters after the LHC Long Shutdown-1 (LS1) imposed the upgrade of the system. This paper presents the new BCM1F, which is designed to provide real-time fast diagnosis of beam conditions and instantaneous luminosity with readout able to resolve the 25 ns bunch structure.
First Results of an “Artificial Retina” Processor Prototype
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cenci, Riccardo; Bedeschi, Franco; Marino, Pietro
We report on the performance of a specialized processor capable of reconstructing charged particle tracks in a realistic LHC silicon tracker detector, at the same speed of the readout and with sub-microsecond latency. The processor is based on an innovative pattern-recognition algorithm, called “artificial retina algorithm”, inspired from the vision system of mammals. A prototype of the processor has been designed, simulated, and implemented on Tel62 boards equipped with high-bandwidth Altera Stratix III FPGA devices. Also, the prototype is the first step towards a real-time track reconstruction device aimed at processing complex events of high-luminosity LHC experiments at 40 MHzmore » crossing rate.« less
NASA Astrophysics Data System (ADS)
Solovyanov, Oleg
2017-10-01
The Tile Calorimeter (TileCal) of the ATLAS experiment at the LHC is the central hadronic calorimeter designed for energy reconstruction of hadrons, jets, tauparticles and missing transverse energy. TileCal is a scintillator-steel sampling calorimeter and it covers the region of pseudo-rapidity up to 1.7, with almost 10000 channels measuring energies ranging from ˜30 MeV to ˜2 TeV. Each stage of the signal production, from scintillation light to the signal reconstruction, is monitored and calibrated. The performance of the Tile calorimeter has been studied in-situ employing cosmic ray muons and a large sample of proton-proton collisions, acquired during the operations of the LHC. Prompt isolated muons of high momentum from electroweak bosons decays are employed to study the energy response of the calorimeter at the electromagnetic scale. The calorimeter response to hadronic particles is evaluated with a sample of isolated hadrons. The modelling of the response by the Monte Carlo simulation is discussed. The calorimeter timing calibration and resolutions are studied with a sample of multijets events. Results on the calorimeter operation and performance are presented, including the calibration, stability, absolute energy scale, uniformity and time resolution. TileCal performance satisfies the design requirements and has provided an essential contribution to physics results in ATLAS. The Large Hadron Collider (LHC) has envisaged a series of upgrades towards a High Luminosity LHC (HL-LHC), delivering five times the LHC nominal instantaneous luminosity. The ATLAS Phase II upgrade, in 2024, will accommodate the detector and data acquisition system for the HL-LHC. In particular, the Tile Calorimeter will undergo a major replacement of its on- and off-detector electronics. All signals will be digitised and then transferred directly to the off-detector electronics, where the signals will be reconstructed, stored, and sent to the first level of trigger at a rate of 40 MHz. 
This will provide better precision for the calorimeter signals used by the trigger system and will allow the development of more complex trigger algorithms. Changes to the electronics will also contribute to the reliability and redundancy of the system. Three different front-end options are presently being investigated for the upgrade. Results of extensive laboratory tests and with beams of the three options will be presented, as well as the latest results on the development of the power distribution and the off-detector electronics.
NASA Astrophysics Data System (ADS)
Krohn, Olivia; Armbruster, Aaron; Gao, Yongsheng; Atlas Collaboration
2017-01-01
Software tools developed for the purpose of modeling CERN LHC pp collision data to aid in its interpretation are presented. Some measurements are not adequately described by a Gaussian distribution; thus an interpretation assuming Gaussian uncertainties will inevitably introduce bias, necessitating analytical tools to recreate and evaluate non-Gaussian features. One example is the measurements of Higgs boson production rates in different decay channels, and the interpretation of these measurements. The ratios of data to Standard Model expectations (μ) for five arbitrary signals were modeled by building five Poisson distributions with mixed signal contributions such that the measured values of μ are correlated. Algorithms were designed to recreate probability distribution functions of μ as multi-variate Gaussians, where the standard deviation (σ) and correlation coefficients (ρ) are parametrized. There was good success with modeling 1-D likelihood contours of μ, and the multi-dimensional distributions were well modeled within 1- σ but the model began to diverge after 2- σ due to unmerited assumptions in developing ρ. Future plans to improve the algorithms and develop a user-friendly analysis package will also be discussed. NSF International Research Experiences for Students
Induced radioactivity in the forward shielding and semiconductor tracker of the ATLAS detector.
Bĕdajánek, I; Linhart, V; Stekl, I; Pospísil, S; Kolros, A; Kovalenko, V
2005-01-01
The radioactivity induced in the forward shielding, copper collimator and semiconductor tracker modules of the ATLAS detector has been studied. The ATLAS detector is a long-term experiment which, during operation, will require to have service and access to all of its parts and components. The radioactivity induced in the forward shielding was calculated by Monte Carlo methods based on GEANT3 software tool. The results show that the equivalent dose rates on the outer surface of the forward shielding are very low (at most 0.038 microSv h(-1)). On the other hand, the equivalent dose rates are significantly higher on the inner surface of the forward shielding (up to 661 microSv h(-1)) and, especially, at the copper collimator close to the beampipe (up to 60 mSv h(-1)). The radioactivity induced in the semiconductor tracker modules was studied experimentally. The module was activated by neutrons in a training nuclear reactor and the delayed gamma ray spectra were measured. From these measurements, the equivalent dose rate on the surface of the semiconductor tracker module was estimated to be < 100 microSv h(-1) after 100 d of Large Hadron Collider (LHC) operation and 10 d of cooling.
Modeling of beam-induced damage of the LHC tertiary collimators
NASA Astrophysics Data System (ADS)
Quaranta, E.; Bertarelli, A.; Bruce, R.; Carra, F.; Cerutti, F.; Lechner, A.; Redaelli, S.; Skordis, E.; Gradassi, P.
2017-09-01
Modern hadron machines with high beam intensity may suffer from material damage in the case of large beam losses and even beam-intercepting devices, such as collimators, can be harmed. A systematic method to evaluate thresholds of damage owing to the impact of high energy particles is therefore crucial for safe operation and for predicting possible limitations in the overall machine performance. For this, a three-step simulation approach is presented, based on tracking simulations followed by calculations of energy deposited in the impacted material and hydrodynamic simulations to predict the thermomechanical effect of the impact. This approach is applied to metallic collimators at the CERN Large Hadron Collider (LHC), which in standard operation intercept halo protons, but risk to be damaged in the case of extraction kicker malfunction. In particular, tertiary collimators protect the aperture bottlenecks, their settings constrain the reach in β* and hence the achievable luminosity at the LHC experiments. Our calculated damage levels provide a very important input on how close to the beam these collimators can be operated without risk of damage. The results of this approach have been used already to push further the performance of the present machine. The risk of damage is even higher in the upgraded high-luminosity LHC with higher beam intensity, for which we quantify existing margins before equipment damage for the proposed baseline settings.
The LHCb Grid Simulation: Proof of Concept
NASA Astrophysics Data System (ADS)
Hushchyn, M.; Ustyuzhanin, A.; Arzymatov, K.; Roiser, S.; Baranov, A.
2017-10-01
The Worldwide LHC Computing Grid provides access to data and computational resources to analyze it for researchers with different geographical locations. The grid has a hierarchical topology with multiple sites distributed over the world with varying number of CPUs, amount of disk storage and connection bandwidth. Job scheduling and data distribution strategy are key elements of grid performance. Optimization of algorithms for those tasks requires their testing on real grid which is hard to achieve. Having a grid simulator might simplify this task and therefore lead to more optimal scheduling and data placement algorithms. In this paper we demonstrate a grid simulator for the LHCb distributed computing software.
Shedding light on neutrino masses with dark forces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batell, Brian; Pospelov, Maxim; Shuve, Brian
Heavy right-handed neutrinos, N , provide the simplest explanation for the origin of light neutrino masses and mixings. If M N is at or below the weak scale, direct experimental discovery of these states is possible at accelerator experiments such as the LHC or new dedicated beam dump experiments; in these experiments, N decays after traversing a macroscopic distance from the collision point. The experimental sensitivity to right-handed neutrinos is significantly enhanced if there is a new “dark” gauge force connecting them to the Standard Model (SM), and detection of N can be the primary discovery mode for the newmore » dark force itself. We take the well-motivated example of a B – L gauge symmetry and analyze the sensitivity to displaced decays of N produced via the new gauge interaction in two experiments: the LHC and the proposed SHiP beam dump experiment. In the most favorable case in which the mediator can be produced on-shell and decays to right handed neutrinos (pp → X + V B–L → X + N N ), the sensitivity reach is controlled by the square of the B – L gauge coupling. Here, we demonstrate that these experiments could access neutrino parameters responsible for the observed SM neutrino masses and mixings in the most straightforward implementation of the see-saw mechanism.« less
Shedding light on neutrino masses with dark forces
Batell, Brian; Pospelov, Maxim; Shuve, Brian
2016-08-08
Heavy right-handed neutrinos, N , provide the simplest explanation for the origin of light neutrino masses and mixings. If M N is at or below the weak scale, direct experimental discovery of these states is possible at accelerator experiments such as the LHC or new dedicated beam dump experiments; in these experiments, N decays after traversing a macroscopic distance from the collision point. The experimental sensitivity to right-handed neutrinos is significantly enhanced if there is a new “dark” gauge force connecting them to the Standard Model (SM), and detection of N can be the primary discovery mode for the newmore » dark force itself. We take the well-motivated example of a B – L gauge symmetry and analyze the sensitivity to displaced decays of N produced via the new gauge interaction in two experiments: the LHC and the proposed SHiP beam dump experiment. In the most favorable case in which the mediator can be produced on-shell and decays to right handed neutrinos (pp → X + V B–L → X + N N ), the sensitivity reach is controlled by the square of the B – L gauge coupling. Here, we demonstrate that these experiments could access neutrino parameters responsible for the observed SM neutrino masses and mixings in the most straightforward implementation of the see-saw mechanism.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burkart, F.; Schmidt, R.; Wollmann, D.
2015-08-07
In a previous paper [Schmidt et al., Phys. Plasmas 21, 080701 (2014)], we presented the first results on beam–matter interaction experiments that were carried out at the High Radiation Materials test facility at CERN. In these experiments, extended cylindrical targets of solid copper were irradiated with beam of 440 GeV protons delivered by the Super Proton Synchrotron (SPS). The beam comprised of a large number of high intensity proton bunches, each bunch having a length of 0.5 ns with a 50 ns gap between two neighboring bunches, while the length of this entire bunch train was about 7 μs. These experiments established the existencemore » of the hydrodynamic tunneling phenomenon the first time. Detailed numerical simulations of these experiments were also carried out which were reported in detail in another paper [Tahir et al., Phys. Rev. E 90, 063112 (2014)]. Excellent agreement was found between the experimental measurements and the simulation results that validate our previous simulations done using the Large Hadron Collider (LHC) beam of 7 TeV protons [Tahir et al., Phys. Rev. Spec. Top.--Accel. Beams 15, 051003 (2012)]. According to these simulations, the range of the full LHC proton beam and the hadronic shower can be increased by more than an order of magnitude due to the hydrodynamic tunneling, compared to that of a single proton. This effect is of considerable importance for the design of machine protection system for hadron accelerators such as SPS, LHC, and Future Circular Collider. Recently, using metal cutting technology, the targets used in these experiments have been dissected into finer pieces for visual and microscopic inspection in order to establish the precise penetration depth of the protons and the corresponding hadronic shower. This, we believe will be helpful in studying the very important phenomenon of hydrodynamic tunneling in a more quantitative manner. 
The details of this experimental work together with a comparison with the numerical simulations are presented in this paper.« less
Deployment of IPv6-only CPU resources at WLCG sites
NASA Astrophysics Data System (ADS)
Babik, M.; Chudoba, J.; Dewhurst, A.; Finnern, T.; Froy, T.; Grigoras, C.; Hafeez, K.; Hoeft, B.; Idiculla, T.; Kelsey, D. P.; López Muñoz, F.; Martelli, E.; Nandakumar, R.; Ohrenberg, K.; Prelz, F.; Rand, D.; Sciabà, A.; Tigerstedt, U.; Traynor, D.
2017-10-01
The fraction of Internet traffic carried over IPv6 continues to grow rapidly. IPv6 support from network hardware vendors and carriers is pervasive and becoming mature. A network infrastructure upgrade often offers sites an excellent window of opportunity to configure and enable IPv6. There is a significant overhead when setting up and maintaining dual-stack machines, so where possible sites would like to upgrade their services directly to IPv6 only. In doing so, they are also expediting the transition process towards its desired completion. While the LHC experiments accept there is a need to move to IPv6, it is currently not directly affecting their work. Sites are unwilling to upgrade if they will be unable to run LHC experiment workflows. This has resulted in a very slow uptake of IPv6 from WLCG sites. For several years the HEPiX IPv6 Working Group has been testing a range of WLCG services to ensure they are IPv6 compliant. Several sites are now running many of their services as dual-stack. The working group, driven by the requirements of the LHC VOs to be able to use IPv6-only opportunistic resources, continues to encourage wider deployment of dual-stack services to make the use of such IPv6-only clients viable. This paper presents the working group’s plan and progress so far to allow sites to deploy IPv6-only CPU resources. This includes making experiment central services dual-stack as well as a number of storage services. The monitoring, accounting and information services that are used by jobs also need to be upgraded. Finally the VO testing that has taken place on hosts connected via IPv6-only is reported.
Theoretical & Experimental Research in Weak, Electromagnetic & Strong Interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandi, Satyanarayan; Babu, Kaladi; Rizatdinova, Flera
The conducted research spans a wide range of topics in the theoretical, experimental and phenomenological aspects of elementary particle interactions. Theory projects involve topics in both the energy frontier and the intensity frontier. The experimental research involves energy frontier with the ATLAS Collaboration at the Large Hadron Collider (LHC). In theoretical research, novel ideas going beyond the Standard Model with strong theoretical motivations were proposed, and their experimental tests at the LHC and forthcoming neutrino facilities were outlined. These efforts fall into the following broad categories: (i) TeV scale new physics models for LHC Run 2, including left-right symmetry andmore » trinification symmetry, (ii) unification of elementary particles and forces, including the unification of gauge and Yukawa interactions, (iii) supersummetry and mechanisms of supersymmetry breaking, (iv) superworld without supersymmetry, (v) general models of extra dimensions, (vi) comparing signals of extra dimensions with those of supersymmetry, (vii) models with mirror quarks and mirror leptons at the TeV scale, (viii) models with singlet quarks and singlet Higgs and their implications for Higgs physics at the LHC, (ix) new models for the dark matter of the universe, (x) lepton flavor violation in Higgs decays, (xi) leptogenesis in radiative models of neutrino masses, (xii) light mediator models of non-standard neutrino interactions, (xiii) anomalous muon decay and short baseline neutrino anomalies, (xiv) baryogenesis linked to nucleon decay, and (xv) a new model for recently observed diboson resonance at the LHC and its other phenomenological implications. The experimental High Energy Physics group has been, and continues to be, a successful and productive contributor to the ATLAS experiment at the LHC. 
Members of the group performed search for gluinos decaying to stop and top quarks, new heavy gauge bosons decaying to top and bottom quarks, and vector-like quarks produced in pairs and decaying to light quarks. Members of the OSU group played a leading role in the detailed optimization studies for the future ATLAS Inner Tracker (ITk), which will be installed during the Phase-II upgrade, replacing the current tracking system. The proposed studies aim to enhance the ATLAS discovery potential in the high-luminosity LHC era. The group members have contributed to the calibration of algorithms for identifying boosted vector bosons and b-jets, which will help expand the ATLAS reach in many searches for new physics.« less
Acharya, B; Alexandre, J; Baines, S; Benes, P; Bergmann, B; Bernabéu, J; Branzas, H; Campbell, M; Caramete, L; Cecchini, S; de Montigny, M; De Roeck, A; Ellis, J R; Fairbairn, M; Felea, D; Flores, J; Frank, M; Frekers, D; Garcia, C; Hirt, A M; Janecek, J; Kalliokoski, M; Katre, A; Kim, D-W; Kinoshita, K; Korzenev, A; Lacarrère, D H; Lee, S C; Leroy, C; Lionti, A; Mamuzic, J; Margiotta, A; Mauri, N; Mavromatos, N E; Mermod, P; Mitsou, V A; Orava, R; Parker, B; Pasqualini, L; Patrizii, L; Păvălaş, G E; Pinfold, J L; Popa, V; Pozzato, M; Pospisil, S; Rajantie, A; Ruiz de Austri, R; Sahnoun, Z; Sakellariadou, M; Sarkar, S; Semenoff, G; Shaa, A; Sirri, G; Sliwa, K; Soluk, R; Spurio, M; Srivastava, Y N; Suk, M; Swain, J; Tenti, M; Togo, V; Tuszyński, J A; Vento, V; Vives, O; Vykydal, Z; Whyntie, T; Widom, A; Willems, G; Yoon, J H; Zgura, I S
2017-02-10
MoEDAL is designed to identify new physics in the form of long-lived highly ionizing particles produced in high-energy LHC collisions. Its arrays of plastic nuclear-track detectors and aluminium trapping volumes provide two independent passive detection techniques. We present here the results of a first search for magnetic monopole production in 13 TeV proton-proton collisions using the trapping technique, extending a previous publication with 8 TeV data during LHC Run 1. A total of 222 kg of MoEDAL trapping detector samples was exposed in the forward region and analyzed by searching for induced persistent currents after passage through a superconducting magnetometer. Magnetic charges exceeding half the Dirac charge are excluded in all samples and limits are placed for the first time on the production of magnetic monopoles in 13 TeV pp collisions. The search probes mass ranges previously inaccessible to collider experiments for up to five times the Dirac charge.
The GridPP DIRAC project - DIRAC for non-LHC communities
NASA Astrophysics Data System (ADS)
Bauer, D.; Colling, D.; Currie, R.; Fayer, S.; Huffman, A.; Martyniak, J.; Rand, D.; Richards, A.
2015-12-01
The GridPP consortium in the UK is currently testing a multi-VO DIRAC service aimed at non-LHC VOs. These VOs (Virtual Organisations) are typically small and generally do not have a dedicated computing support post. The majority of these represent particle physics experiments (e.g. NA62 and COMET), although the scope of the DIRAC service is not limited to this field. A few VOs have designed bespoke tools around the EMI-WMS & LFC, while others have so far eschewed distributed resources as they perceive the overhead for accessing them to be too high. The aim of the GridPP DIRAC project is to provide an easily adaptable toolkit for such VOs in order to lower the threshold for access to distributed resources such as Grid and cloud computing. As well as hosting a centrally run DIRAC service, we will also publish our changes and additions to the upstream DIRAC codebase under an open-source license. We report on the current status of this project and show increasing adoption of DIRAC within the non-LHC communities.
Simulation of LHC events on a million threads
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2015-12-01
Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. By analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. By these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.
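The coupling of event generation and unweighting into a single job, as described above, can be sketched in Python. This is purely illustrative: Alpgen itself is a serial x86 code and the actual port ran under MPI on the Blue Gene/Q; the toy generator, weights and seeds below are invented for the example.

```python
import random

def generate_weighted(seed, n_events):
    """Toy stand-in for the generation phase: produce (event, weight) pairs."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(n_events)]

def unweight(weighted, seed, w_max=1.0):
    """Toy unweighting: accept each event with probability weight / w_max."""
    rng = random.Random(seed)
    return [ev for ev, w in weighted if rng.random() < w / w_max]

def worker(seed):
    # Generation and unweighting run back-to-back inside one task, so the
    # intermediate weighted sample never touches the filesystem.
    weighted = generate_weighted(seed, 1000)
    return unweight(weighted, seed + 1)

# Each seed plays the role of one parallel thread (Mira ran on the order of a
# million of them); here the "ranks" are mapped sequentially for illustration.
samples = [worker(seed) for seed in range(4)]
events = [ev for chunk in samples for ev in chunk]
```

In the real workflow each seed would correspond to one hardware thread, and keeping both phases in one task is what removes the intermediate write of the weighted sample to the filesystem.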
Cornering pseudoscalar-mediated dark matter with the LHC and cosmology
NASA Astrophysics Data System (ADS)
Banerjee, Shankha; Barducci, Daniele; Bélanger, Geneviève; Fuks, Benjamin; Goudelis, Andreas; Zaldivar, Bryan
2017-07-01
Models in which dark matter particles communicate with the visible sector through a pseudoscalar mediator are well motivated both from a theoretical and from a phenomenological standpoint. With direct detection bounds being typically subleading in such scenarios, the main constraints stem either from collider searches for dark matter or from indirect detection experiments. However, LHC searches for the mediator particles themselves can not only compete with, or even supersede, the reach of direct collider dark matter probes, but can also test scenarios in which traditional monojet searches become irrelevant, especially when the mediator cannot decay on-shell into dark matter particles or its decay is suppressed. In this work we perform a detailed analysis of a pseudoscalar-mediated dark matter simplified model, taking into account a large set of collider constraints and concentrating on the parameter space regions favoured by cosmological and astrophysical data. We find that mediator masses above 100-200 GeV are essentially excluded by LHC searches in the case of large couplings to the top quark, while forthcoming collider and astrophysical measurements will further constrain the available parameter space.
Experimental validation of the Achromatic Telescopic Squeezing (ATS) scheme at the LHC
NASA Astrophysics Data System (ADS)
Fartoukh, S.; Bruce, R.; Carlier, F.; Coello De Portugal, J.; Garcia-Tabares, A.; Maclean, E.; Malina, L.; Mereghetti, A.; Mirarchi, D.; Persson, T.; Pojer, M.; Ponce, L.; Redaelli, S.; Salvachua, B.; Skowronski, P.; Solfaroli, M.; Tomas, R.; Valuch, D.; Wegscheider, A.; Wenninger, J.
2017-07-01
The Achromatic Telescopic Squeezing scheme offers new techniques to deliver unprecedentedly small beam spot sizes at the interaction points of the ATLAS and CMS experiments of the LHC, while fully controlling the chromatic properties of the corresponding optics (linear and non-linear chromaticities, off-momentum beta-beating, spurious dispersion induced by the crossing bumps). The first series of beam tests with ATS optics was carried out during LHC Run I (2011/2012) for a first validation of the basics of the scheme at small intensity. In 2016, a new generation of higher-performance ATS optics was developed and tested more extensively in the machine, still with probe beams for optics measurement and correction at β* = 10 cm, but also with a few nominal bunches to establish first collisions at nominal β* (40 cm) and beyond (33 cm), and to analyse the robustness of these optics in terms of collimation and machine protection. The paper highlights the most relevant and conclusive results obtained during this second series of ATS tests.
The Shape and Flow of Heavy Ion Collisions (490th Brookhaven Lecture)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schenke, Bjoern
2014-12-18
The sun can’t do it, but colossal machines like the Relativistic Heavy Ion Collider (RHIC) at Brookhaven Lab and the Large Hadron Collider (LHC) in Europe sure can. Quarks and gluons make up protons and neutrons found in the nucleus of every atom in the universe. At heavy ion colliders like RHIC and the LHC, scientists can create matter more than 100,000 times hotter than the center of the sun—so hot that protons and neutrons melt into a plasma of quarks and gluons. The particle collisions and emerging quark-gluon plasma hold keys to understanding how these fundamental particles interact with each other, which helps explain how everything is held together—from atomic nuclei to human beings to the biggest stars—how all matter has mass, and what the universe looked like microseconds after the Big Bang. Dr. Schenke discusses theory that details the shape and structure of heavy ion collisions. He will also explain how this theory and data from experiments at RHIC and the LHC are being used to determine properties of the quark-gluon plasma.
Darr, Sylvia C.; Arntzen, Charles J.
1986-01-01
Conditions were developed to isolate the light-harvesting chlorophyll-protein complex serving photosystem II (LHC-II) using a dialyzable detergent, octylpolyoxyethylene. This LHC-II was successfully reconstituted into partially developed chloroplast thylakoids of Hordeum vulgare var Morex (barley) seedlings which were deficient in LHC-II. Functional association of LHC-II with the photosystem II (PSII) core complex was measured by two independent functional assays of PSII sensitization by LHC-II. A 3-fold excess of reconstituted LHC-II was required to equal the activity of LHC developing in vivo. We suggest that a linker component, required for specific association of the PSII core complex and LHC-II, may be absent in the partially developed membranes. PMID:16664744
Gamma-jet physics with the electro-magnetic calorimeter in the ALICE experiment at LHC
NASA Astrophysics Data System (ADS)
Bourdaud, G.
2008-05-01
The Electro-Magnetic Calorimeter (EMCal) will be fully installed for the first LHC heavy ion beam in order to improve the performance of the ALICE experiment in detecting high transverse momentum particles, and in particular in reconstructing γ-jet events. These events are a promising probe of the strongly interacting matter created in ultra-relativistic heavy ion collisions and of the possible Quark Gluon Plasma (QGP) state. Indeed, they may give information on the degree of medium opacity which induces the jet-quenching phenomenon: measuring the energy of the γ and comparing it to that of the associated jet may provide a unique way to quantify the jet energy loss in the dense matter. The interest of γ-jet studies in the framework of quark gluon plasma physics will be discussed, with particular emphasis on the EMCal calorimeter. The detection of γ-jet events with this new ALICE detector will then be presented.
Heavy flavour production in proton-lead and lead-lead collisions with LHCb
NASA Astrophysics Data System (ADS)
Winn, Michael
2017-11-01
The LHCb experiment offers the unique opportunity to study heavy-ion interactions in the forward region (2 < η < 5), in a kinematic domain complementary to the other three large experiments at the LHC. The detector has excellent capabilities for reconstructing quarkonia and open charm states, including baryons, down to zero pT, and can separate the prompt and displaced charm components. In pPb collisions, both forward and backward rapidities are covered thanks to the possibility of beam reversal. Results include measurements of the nuclear modification factor and forward-backward ratio for charmonium, open charm and bottomonium states. These quantities are sensitive probes of nuclear effects in heavy flavour production. Prospects with the large luminosity accumulated during the 2016 pPb run at the LHC are outlined. In 2015, LHCb participated successfully for the first time in PbPb data-taking. The status of the forward prompt J/ψ nuclear modification factor measurement in lead-lead collisions is discussed.
NASA Astrophysics Data System (ADS)
Maurice, Émilie; LHCb Collaboration
2017-11-01
Open and hidden charm production in nucleus-nucleus collisions is considered a key probe of Quark Gluon Plasma (QGP) formation. In the search for specific QGP effects, proton-nucleus collisions are used as the reference, as they account for the corresponding Cold Nuclear Matter (CNM) effects. The LHCb experiment, thanks to its System for Measuring Overlap with Gas (SMOG), can be operated in a fixed-target mode with the LHC beams, at an intermediate center-of-mass energy between nominal SPS and RHIC energies. In 2015, for the first time, reactions of incident LHC proton beams on noble gas targets were recorded by the LHCb experiment at a center-of-mass energy of 110 GeV and within the center-of-mass rapidity range -2.77
Heavy neutral leptons at FASER
NASA Astrophysics Data System (ADS)
Kling, Felix; Trojanowski, Sebastian
2018-05-01
We study the prospects for discovering heavy neutral leptons at the Forward Search Experiment (FASER), the newly proposed detector at the LHC. Previous studies showed that a relatively small detector with ~10 m length and ≲1 m² cross-sectional area can probe large unconstrained parts of the parameter space for dark photons and dark Higgs bosons. In this work, we show that FASER will also be sensitive to heavy neutral leptons with mixing angles to the active neutrinos up to an order of magnitude below current bounds. In particular, this is true for heavy neutral leptons produced dominantly in B-meson decays, in which case FASER's discovery potential is comparable to that of the proposed SHiP detector. We also illustrate how the search for heavy neutral leptons at FASER will be complementary to ongoing searches in high-pT experiments at the LHC and can shed light on the nature of dark matter and the process of baryogenesis in the early Universe.
RGLite, an interface between ROOT and gLite—proof on the grid
NASA Astrophysics Data System (ADS)
Malzacher, P.; Manafov, A.; Schwarz, K.
2008-07-01
Using the gLitePROOF package it is possible to perform PROOF-based distributed data analysis on the gLite Grid. The LHC experiments have managed to run globally distributed Monte Carlo productions on the Grid; the focus is now on developing tools for data analysis. To grant access, interfaces must be provided. The ROOT/PROOF framework is used as a starting point. Using abstract ROOT classes (TGrid, ...), interfaces can be implemented via which Grid access from ROOT can be accomplished. A concrete implementation exists for the ALICE Grid environment AliEn. Within the D-Grid project an interface to gLite, the common Grid middleware of all LHC experiments, has been created. It is therefore possible to query Grid File Catalogues from ROOT for the location of the data to be analysed. Grid jobs can be submitted into a gLite-based Grid, their status can be queried, and their results can be retrieved.
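The abstract-interface approach described above, where analysis code talks to a generic Grid class while middleware-specific subclasses (AliEn, gLite) provide the concrete behaviour, can be sketched as follows. This is a hedged illustration in Python: the real interfaces are C++ ROOT classes such as TGrid, and every class and method name below is invented for the example.

```python
from abc import ABC, abstractmethod

class Grid(ABC):
    """Stand-in for an abstract Grid access class (TGrid plays this role in ROOT)."""

    @abstractmethod
    def query_catalogue(self, pattern: str) -> list:
        """Return the storage locations of files matching pattern."""

    @abstractmethod
    def submit(self, jdl: str) -> str:
        """Submit a job description, return a job identifier."""

class AliEnGrid(Grid):
    # Toy concrete implementation for the ALICE middleware.
    def query_catalogue(self, pattern):
        return [f"alien://{pattern}"]
    def submit(self, jdl):
        return "alien-job-1"

class GLiteGrid(Grid):
    # Toy concrete implementation for gLite.
    def query_catalogue(self, pattern):
        return [f"lfn://{pattern}"]
    def submit(self, jdl):
        return "glite-job-1"

def locate(grid: Grid, pattern: str) -> list:
    # Analysis code depends only on the abstract interface, so it runs
    # unchanged against AliEn or gLite back-ends.
    return grid.query_catalogue(pattern)
```

The design benefit is the one the abstract describes: adding a new middleware means writing one more subclass, with no change to the analysis code.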
Achieving High Performance With TCP Over 40 GbE on NUMA Architectures for CMS Data Acquisition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bawej, Tomasz; et al.
2014-01-01
TCP and the socket abstraction have barely changed over the last two decades, but at the network layer there has been a giant leap from a few megabits to 100 gigabits in bandwidth. At the same time, CPU architectures have evolved into the multicore era and applications are expected to make full use of all available resources. Applications in the data acquisition domain based on the standard socket library running in a Non-Uniform Memory Access (NUMA) architecture are unable to reach full efficiency and scalability without the software being adequately aware of the IRQ (Interrupt Request), CPU and memory affinities. During the first long shutdown of the LHC, the CMS DAQ system is going to be upgraded for operation from 2015 onwards, and a new software component has been designed and developed in the CMS online framework for transferring data with sockets. This software wraps the low-level socket library to ease higher-level programming, with an API based on an asynchronous event-driven model similar to the DAT uDAPL API. It is an event-based application with NUMA optimizations that allows for a high throughput of data across a large distributed system. This paper describes the architecture, the technologies involved and the performance measurements of the software in the context of the CMS distributed event building.
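The asynchronous, event-driven socket model described above can be illustrated with a minimal sketch, using Python's selectors module as a stand-in for the C++ CMS online framework. The function name and structure are assumptions made for illustration, not the actual DAQ API: callbacks are registered per socket and fire only when the kernel reports readiness, so no thread ever blocks inside recv().

```python
import selectors
import socket

def run_echo_once(payload: bytes) -> bytes:
    """Send payload over one end of a socket pair and collect it, event-driven,
    from the other end."""
    left, right = socket.socketpair()
    left.setblocking(False)
    right.setblocking(False)
    sel = selectors.DefaultSelector()
    received = bytearray()

    def on_readable(conn):
        # Callback invoked only when data is ready; never blocks.
        received.extend(conn.recv(4096))

    sel.register(right, selectors.EVENT_READ, on_readable)
    left.sendall(payload)
    # Event loop: wait for readiness events and dispatch their callbacks.
    while len(received) < len(payload):
        for key, _ in sel.select(timeout=1.0):
            key.data(key.fileobj)
    sel.unregister(right)
    left.close()
    right.close()
    return bytes(received)
```

A real DAQ wrapper would additionally pin the loop and its buffers to specific cores and NUMA nodes; that affinity handling is outside what a portable stdlib sketch can show.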
The DAQ needle in the big-data haystack
NASA Astrophysics Data System (ADS)
Meschi, E.
2015-12-01
In the last three decades, HEP experiments have faced the challenge of manipulating larger and larger masses of data from increasingly complex, heterogeneous detectors with millions and then tens of millions of electronic channels. LHC experiments abandoned the monolithic architectures of the nineties in favor of a distributed approach, leveraging the appearance of high speed switched networks developed for digital telecommunication and the internet, and the corresponding increase of memory bandwidth available in off-the-shelf consumer equipment. This led to a generation of experiments where custom electronics triggers, analysing coarser-granularity “fast” data, are confined to the first phase of selection, where predictable latency and real time processing for a modest initial rate reduction are “a necessary evil”. Ever more sophisticated algorithms are projected for use in HL-LHC upgrades, using tracker data in the low-level selection in high multiplicity environments, and requiring extremely complex data interconnects. These systems are quickly obsolete and inflexible but must nonetheless survive and be maintained across the extremely long life span of current detectors. New high-bandwidth bidirectional links could make high-speed low-power full readout at the crossing rate a possibility already in the next decade. At the same time, massively parallel and distributed analysis of unstructured data produced by loosely connected, “intelligent” sources has become ubiquitous in commercial applications, while the mass of persistent data produced by e.g. the LHC experiments has made multiple pass, systematic, end-to-end offline processing increasingly burdensome.
A possible evolution of DAQ and trigger architectures could lead to detectors with extremely deep asynchronous or even virtual pipelines, where data streams from the various detector channels are analysed and indexed in situ quasi-real-time using intelligent, pattern-driven data organization, and the final selection is operated as a distributed “search for interesting event parts”. A holistic approach is required to study the potential impact of these different developments on the design of detector readout, trigger and data acquisition systems in the next decades.
Building a Prototype of LHC Analysis Oriented Computing Centers
NASA Astrophysics Data System (ADS)
Bagliesi, G.; Boccali, T.; Della Ricca, G.; Donvito, G.; Paganoni, M.
2012-12-01
A Consortium between four LHC Computing Centres (Bari, Milano, Pisa and Trieste) was formed in 2010 to prototype analysis-oriented facilities for CMS data analysis, profiting from a grant from the Italian Ministry of Research. The Consortium aims to realize an ad-hoc infrastructure to ease analysis activities on the huge data set collected at the LHC. While “Tier2” Computing Centres, specialized in organized processing tasks like Monte Carlo simulation, are nowadays a well established concept with years of running experience, sites specialized in chaotic end-user analysis activities do not yet have a de facto standard implementation. In our effort, we focus on all the aspects that can make analysis tasks easier for a physics user who is not a computing expert. On the storage side, we are experimenting with techniques allowing remote data access and with storage optimization for the typical analysis access patterns. On the networking side, we are studying the differences between flat and tiered LAN architectures, also using virtual partitioning of the same physical network for the different use patterns. Finally, on the user side, we are developing tools to allow exhaustive monitoring of user processes at the sites and an efficient support system in case of problems. We report on the results of tests executed on the different subsystems and describe the layout of the infrastructure in place at the sites participating in the Consortium.
MoEDAL - a new light on the high-energy frontier
NASA Astrophysics Data System (ADS)
Fairbairn, Malcolm; Pinfold, James L.
2017-01-01
In 2010, the MoEDAL (MOnopole and Exotics Detector at the LHC) experiment at the Large Hadron Collider (LHC) was unanimously approved by the CERN Research Board to start data taking in 2015. MoEDAL is a pioneering experiment designed to search for highly ionising manifestations of new physics such as magnetic monopoles or massive (pseudo-)stable charged particles. Its groundbreaking physics programme defines a number of scenarios that yield potentially revolutionary insights into such foundational questions as: are there extra dimensions or new symmetries; does magnetic charge exist; what is the nature of dark matter; and how did the Big Bang develop. MoEDAL's purpose is to meet such far-reaching challenges at the frontier of the field. The innovative MoEDAL detector employs unconventional methodologies tuned to the prospect of discovery physics. The largely passive MoEDAL detector, deployed at Point 8 on the LHC ring, has a dual nature. First, it acts like a giant camera, composed of nuclear track detectors - analysed offline by ultra-fast scanning microscopes - sensitive only to new physics. Second, it is uniquely able to trap the particle messengers of physics beyond the Standard Model for further study. MoEDAL's radiation environment is monitored by a state-of-the-art real-time TimePix pixel detector array. A new MoEDAL sub-detector designed to extend MoEDAL's reach to mini-charged, minimally ionising particles is under study.
Lincoln, Don
2018-01-16
With the discovery of what looks to be the Higgs boson, LHC researchers are turning their attention to the next big question: the predicted mass of the newly discovered particle. When the effects of quantum mechanics are taken into account, the mass of the Higgs boson should be incredibly high, perhaps upwards of a quadrillion times higher than what was observed. In this video, Fermilab's Dr. Don Lincoln explains how the theory predicts that the mass is so large and gives at least one possible theoretical idea that might solve the problem. Whether the proposed idea is the answer or not, this question must be answered by experiments at the LHC or today's entire theoretical paradigm could be in jeopardy.
Spin-0± portal induced Dark Matter
NASA Astrophysics Data System (ADS)
Dutta, Sukanta; Goyal, Ashok; Saini, Lalit Kumar
2018-02-01
Standard model (SM) spin-zero singlets are constrained through their di-boson decay channels via an effective coupling induced by a vector-like quark (VLQ) loop at the LHC at √s = 13 TeV. These spin-zero resonances are then considered as portals for scalar, vector or fermionic dark matter particle interactions with SM gauge bosons. We find that the model is consistent with LHC data and with observations from cosmology and from indirect and direct detection experiments for an appreciable range of scalar, vector and fermionic DM masses greater than 300 GeV and VLQ masses ≥ 400 GeV, corresponding to the three choices of portal mass: 270 GeV, 500 GeV and 750 GeV, respectively.
Track reconstruction at LHC as a collaborative data challenge use case with RAMP
NASA Astrophysics Data System (ADS)
Amrouche, Sabrina; Braun, Nils; Calafiura, Paolo; Farrell, Steven; Gemmler, Jochen; Germain, Cécile; Gligorov, Vladimir Vava; Golling, Tobias; Gray, Heather; Guyon, Isabelle; Hushchyn, Mikhail; Innocente, Vincenzo; Kégl, Balázs; Neuhaus, Sara; Rousseau, David; Salzburger, Andreas; Ustyuzhanin, Andrei; Vlimant, Jean-Roch; Wessel, Christian; Yilmaz, Yetkin
2017-08-01
Charged particle track reconstruction is a major component of data processing in high-energy physics experiments such as those at the Large Hadron Collider (LHC), and is foreseen to become more and more challenging with higher collision rates. A simplified two-dimensional version of the track reconstruction problem was set up on a collaborative platform, RAMP, so that developers could prototype and test new ideas. A small-scale competition was held during the Connecting The Dots / Intelligent Trackers 2017 (CTDWIT 2017) workshop. Despite the short time scale, a number of different approaches were developed and compared using a single score metric, kept generic enough to summarize performance in terms of both efficiency and fake rate.
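A score metric of the kind described, generic enough to fold efficiency and fake rate into a single number, might look as follows. This is a sketch under stated assumptions: the actual RAMP metric and its matching criterion are not given in the abstract, so the 50% shared-hit matching rule and the eff * (1 - fake) combination are illustrative choices.

```python
def track_score(truth_to_hits, track_to_hits, match_frac=0.5):
    """Toy track-reconstruction score.

    truth_to_hits: dict mapping truth-particle id -> list of its hit ids.
    track_to_hits: list of reconstructed tracks, each a list of hit ids.
    A reco track 'matches' a truth particle if it shares more than
    match_frac of that particle's hits. Efficiency is the matched truth
    fraction, fake rate the unmatched reco fraction.
    """
    matched_truth = set()
    fakes = 0
    for hits in track_to_hits:
        best = None
        for pid, t_hits in truth_to_hits.items():
            shared = len(set(hits) & set(t_hits))
            if shared > match_frac * len(t_hits):
                best = pid
                break
        if best is None:
            fakes += 1
        else:
            matched_truth.add(best)
    eff = len(matched_truth) / len(truth_to_hits) if truth_to_hits else 0.0
    fake_rate = fakes / len(track_to_hits) if track_to_hits else 0.0
    # Combine the two into one number: perfect reconstruction scores 1.0.
    return eff * (1.0 - fake_rate)
```

For example, reconstructing both of two truth particles perfectly scores 1.0, while adding one fake track with no truth match lowers the score to 2/3.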
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lincoln, Don
2014-04-28
With the discovery of what looks to be the Higgs boson, LHC researchers are turning their attention to the next big question: the predicted mass of the newly discovered particle. When the effects of quantum mechanics are taken into account, the mass of the Higgs boson should be incredibly high, perhaps upwards of a quadrillion times higher than what was observed. In this video, Fermilab's Dr. Don Lincoln explains how the theory predicts that the mass is so large and gives at least one possible theoretical idea that might solve the problem. Whether the proposed idea is the answer or not, this question must be answered by experiments at the LHC or today's entire theoretical paradigm could be in jeopardy.
Improving LHC searches for dark photons using lepton-jet substructure
NASA Astrophysics Data System (ADS)
Barello, G.; Chang, Spencer; Newby, Christopher A.; Ostdiek, Bryan
2017-03-01
Collider signals of dark photons are an exciting probe for new gauge forces and are characterized by events with boosted lepton jets. Existing techniques are efficient in searching for muonic lepton jets but due to substantial backgrounds have difficulty constraining lepton jets containing only electrons. This is unfortunate since upcoming intensity frontier experiments are sensitive to dark photon masses which only allow electron decays. Analyzing a recently proposed model of kinetic mixing, with new scalar particles decaying into dark photons, we find that existing techniques for electron jets can be substantially improved. We show that using lepton-jet-substructure variables, in association with a boosted decision tree, improves background rejection, significantly increasing the LHC's reach for dark photons in this region of parameter space.
The Electronic Logbook for the Information Storage of ATLAS Experiment at LHC (ELisA)
NASA Astrophysics Data System (ADS)
Corso Radu, A.; Lehmann Miotto, G.; Magnoni, L.
2012-12-01
A large experiment like ATLAS at the LHC (CERN), with over three thousand members and a shift crew of 15 people running the experiment 24/7, needs an easy and reliable tool to gather all the information concerning the experiment's development, installation, deployment and exploitation over its lifetime. With the increasing number of users and the accumulation of stored information since the experiment's start-up, the electronic logbook currently in use, ATLOG, started to show its limitations in terms of speed and usability. Its monolithic architecture makes maintenance and the implementation of new functionality a hard, almost impossible process. A new tool, ELisA, has been developed to replace the existing ATLOG. It is based on modern web technologies: the Spring framework with a Model-View-Controller architecture was chosen, which helps build flexible and easy-to-maintain applications. The new tool implements all features of the old electronic logbook with increased performance and better graphics; it uses the same database back-end for portability reasons. In addition, several new requirements have been accommodated that could not be implemented in ATLOG. This paper describes the architecture, implementation and performance of ELisA, with particular emphasis on the choices that yielded a scalable and very fast system and on the aspects that could be re-used in different contexts to build a similar application.
Precision searches in dijets at the HL-LHC and HE-LHC
NASA Astrophysics Data System (ADS)
Chekanov, S. V.; Childers, J. T.; Proudfoot, J.; Wang, R.; Frizzell, D.
2018-05-01
This paper explores the physics reach of the High-Luminosity Large Hadron Collider (HL-LHC) for searches of new particles decaying to two jets. We discuss inclusive searches in dijets and b-jets, as well as searches in semi-inclusive events by requiring an additional lepton that increases sensitivity to different aspects of the underlying processes. We discuss the expected exclusion limits for generic models predicting new massive particles that result in resonant structures in the dijet mass. Prospects of the Higher-Energy LHC (HE-LHC) collider are also discussed. The study is based on the Pythia8 Monte Carlo generator using representative event statistics for the HL-LHC and HE-LHC running conditions. The event samples were created using supercomputers at NERSC.
NASA Astrophysics Data System (ADS)
Casas, Juan; Jelen, Dorota; Trikoupis, Nikolaos
2017-02-01
The monitoring of cryogenic facilities often requires the measurement of pressures in the sub-5000 Pa range, used for flow metering applications, for saturated superfluid helium, etc. The pressure measurement is based on the minute displacement of a sensing diaphragm, often read out through contactless capacitive or inductive methods. The LHC radiation environment forbids the use of standard commercial sensors because their embedded electronics are affected both by radiation-induced drift and by transient Single Event Effects (SEE). Passive pressure sensors from two manufacturers were investigated, and CERN-designed radiation-tolerant electronics have been developed for measuring variable-reluctance sensors. During the last maintenance stop of the LHC accelerator, four absolute pressure sensors were installed in some of the low-pressure bayonet heat exchangers and four differential pressure sensors on the venturi flowmeters that monitor the cooling flow of the 20.5 kA current leads of the ATLAS end-cap superconducting toroids. The pressure sensors' operating range is about 1000 to 5000 Pa and the targeted uncertainty is ±50 Pa, which would permit measuring the equivalent saturation temperature at 1.8 K to better than 0.01 K. This paper describes the radiation-hard measuring head, based on an inductive bridge; its associated radiation-tolerant electronics, installed under the LHC superconducting magnets or in the ATLAS detector cavern; and the first operational experience.
Upgrade of Tile Calorimeter of the ATLAS Detector for the High Luminosity LHC.
NASA Astrophysics Data System (ADS)
Valdes Santurio, Eduardo; Tile Calorimeter System, ATLAS
2017-11-01
The Tile Calorimeter (TileCal) is the hadronic calorimeter of ATLAS covering the central region of the ATLAS experiment. TileCal is a sampling calorimeter with steel as absorber and scintillators as active medium. The scintillators are read out by wavelength shifting fibers coupled to photomultiplier tubes (PMT). The analogue signals from the PMTs are amplified, shaped and digitized by sampling the signal every 25 ns. The High Luminosity Large Hadron Collider (HL-LHC) will have a peak luminosity of 5 × 10^34 cm^-2 s^-1, five times higher than the design luminosity of the LHC. TileCal will undergo a major replacement of its on- and off-detector electronics for the high luminosity programme of the LHC in 2026. The calorimeter signals will be digitized and sent directly to the off-detector electronics, where the signals are reconstructed and shipped to the first level of trigger at a rate of 40 MHz. This will provide a better precision of the calorimeter signals used by the trigger system and will allow the development of more complex trigger algorithms. Three different options are presently being investigated for the front-end electronic upgrade. Extensive test beam studies will determine which option will be selected. Field Programmable Gate Arrays (FPGAs) are extensively used for the logic functions of the off- and on-detector electronics. One hybrid demonstrator prototype module with the new calorimeter module electronics, but still compatible with the present system, may be inserted in ATLAS at the end of 2016.
GPU/MIC Acceleration of the LHC High Level Trigger to Extend the Physics Reach at the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halyo, Valerie; Tully, Christopher
The quest for rare new physics phenomena leads the PI [3] to propose evaluation of coprocessors based on Graphics Processing Units (GPUs) and the Intel Many Integrated Core (MIC) architecture for integration into the trigger system at the LHC. This will require development of a new massively parallel implementation of the well known Combinatorial Track Finder, which uses the Kalman Filter to accelerate processing of data from the silicon pixel and microstrip detectors and reconstruct the trajectories of all charged particles down to momenta of 100 MeV. It is expected to run at least one order of magnitude faster than an equivalent algorithm on a quad core CPU for extreme pileup scenarios of 100 interactions per bunch crossing. The new tracking algorithms will be developed and optimized separately on the GPU and Intel MIC and then evaluated against each other for performance and power efficiency. The results will be used to project the cost of the proposed hardware architectures for the HLT server farm, taking into account the long term projections of the main vendors in the market (AMD, Intel, and NVIDIA) over the next 10 years. Extensive experience and familiarity of the PI with the LHC tracker and trigger requirements led to the development of a complementary tracking algorithm that is described in [arXiv:1305.4855] and [arXiv:1309.6275], with preliminary results accepted by JINST.
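The Kalman Filter at the heart of the Combinatorial Track Finder repeatedly blends a predicted track state with each new hit. A one-dimensional sketch of the measurement update follows; real track fits carry a five-parameter state and propagate it through the magnetic field between detector layers, so this scalar toy shows only the update step, with invented default values.

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: state estimate x with variance P,
    measurement z with variance R."""
    K = P / (P + R)          # Kalman gain: how much to trust the measurement
    x_new = x + K * (z - x)  # blend prediction with measurement
    P_new = (1.0 - K) * P    # uncertainty shrinks after each update
    return x_new, P_new

def fit_hits(hits, x0=0.0, P0=1e6, R=1.0):
    """Filter a sequence of 1-D 'hit' positions, starting from a nearly
    uninformative prior, and return the final estimate and its variance."""
    x, P = x0, P0
    for z in hits:
        x, P = kalman_update(x, P, z, R)
    return x, P
```

Feeding in repeated measurements of the same position drives the estimate toward that position while the variance falls with each hit, which is exactly the behaviour the track fit exploits layer by layer.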
Role of multiparton interactions on J /ψ production in p +p collisions at LHC energies
NASA Astrophysics Data System (ADS)
Thakur, Dhananjaya; De, Sudipan; Sahoo, Raghunath; Dansana, Soumya
2018-05-01
The production mechanism of quarkonia states in hadronic collisions is still to be understood by the scientific community. In high-multiplicity p+p collisions, underlying-event observables are of major interest. Multiparton interactions (MPIs) are one such mechanism, in which several interactions occur at the partonic level in a single p+p event, leading to a dependence of particle production on event multiplicity. If MPIs occur at a harder scale, there will be a correlation between the yield of quarkonia and the total charged-particle multiplicity. The ALICE experiment at the LHC has observed, in p+p collisions at √s = 7 and 13 TeV, an approximately linear increase of the relative J/ψ yield, (dN_J/ψ/dy)/⟨dN_J/ψ/dy⟩, with the relative charged-particle multiplicity density, (dN_ch/dy)/⟨dN_ch/dy⟩. In our present work, we have performed a comprehensive study of the production of charmonia as a function of charged-particle multiplicity in p+p collisions at LHC energies using the perturbative-QCD-inspired multiparton interaction model, pythia8 tune 4C, with and without the color reconnection scheme. A detailed multiplicity- and energy-dependent study is performed to understand the effects of MPI on J/ψ production. The ratio of ψ(2S) to J/ψ is also studied as a function of charged-particle multiplicity at LHC energies.
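The self-normalised observables used above, (dN/dy)/⟨dN/dy⟩, can be computed with a short sketch. The per-bin yields below are invented, and for simplicity the event-averaged value ⟨dN/dy⟩ is taken as a plain average over the bins, which is only an approximation of the multiplicity-integrated average used in the measurement.

```python
# Toy computation of self-normalised (relative) yields per multiplicity bin.
# All per-bin values are invented for illustration.
def relative(values):
    """Divide each bin by the average over bins: (dN/dy) / <dN/dy>."""
    mean = sum(values) / len(values)
    return [v / mean for v in values]

dn_jpsi_dy = [0.2, 0.5, 1.1, 2.4]   # invented J/psi yields per multiplicity bin
dn_ch_dy   = [5.0, 12.0, 22.0, 45.0]  # invented charged-particle densities

rel_jpsi = relative(dn_jpsi_dy)
rel_ch   = relative(dn_ch_dy)
print(list(zip(rel_ch, rel_jpsi)))  # pairs to compare against a linear trend
```

Plotting rel_jpsi against rel_ch and checking for linearity is the essence of the comparison made with the data.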
Commissioning and initial experience with the ALICE on-line
NASA Astrophysics Data System (ADS)
Altini, V.; Anticic, T.; Carena, F.; Carena, W.; Chapeland, S.; Chibante Barroso, V.; Costa, F.; Dénes, E.; Divià, R.; Fuchs, U.; Kiss, T.; Makhlyueva, I.; Roukoutakis, F.; Schossmaier, K.; Soós, C.; Vande Vyvre, P.; von Haller, B.; ALICE Collaboration
2010-04-01
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). A large-bandwidth and flexible Data Acquisition System (DAQ) has been designed and deployed to collect sufficient statistics in the short running time available per year for heavy ions and to accommodate the very different requirements originating from the 18 sub-detectors. This paper will present the large-scale tests conducted to assess the standalone DAQ performance, the interfaces with the other online systems and the extensive commissioning performed in order to be fully prepared for physics data taking. It will review the experience accumulated since May 2007 during the standalone commissioning of the main detectors and the global cosmic runs, and the lessons learned from this exposure on the "battle field". It will also discuss the test protocol followed to integrate and validate each sub-detector with the online systems, and it will conclude with the first results of the LHC injection tests and startup in September 2008. Several papers of the same conference present in more detail some elements of the ALICE DAQ system.
Materials Processing in Magnetic Fields
NASA Astrophysics Data System (ADS)
Schneider-Muntau, Hans J.; Wada, Hitoshi
The latest in lattice QCD -- Quark-gluon plasma physics -- String theory and exact results in quantum field theory -- The status of local supersymmetry -- Supersymmetry in nuclei -- Inflation, dark matter, dark energy -- How many dimensions are really compactified? -- Horizons -- Neutrino oscillations physics -- Fundamental constants and their possible time dependence -- Highlights from BNL: new phenomena at RHIC -- Highlights from BABAR -- Diffraction studied with a hard scale at HERA -- The large hadron collider: a status report -- Status of non-LHC experiments at CERN -- Highlights from Gran Sasso -- Fast automatic systems for nuclear emulsion scanning: technique and experiments -- Probing the QGP with charm at ALICE-LHC -- Magnetic screening length in hot QCD -- Non-supersymmetric deformation of the Klebanov-Strassler model and the related plane wave theory -- Holographic renormalization made simple: an example -- The KamLAND impact on neutrino oscillations -- Particle identification with the ALICE TOF detector at very high multiplicity -- Superpotentials of N = 1 SUSY gauge theories -- Measurement of the proton structure function F2 in QED Compton scattering at HERA -- Yang-Mills effective action at high temperature -- The time of flight (TOF) system of the ALICE experiment -- Almost product manifolds as the low energy geometry of Dirichlet branes.
PREFACE: Quark Matter 2011 (QM11) Quark Matter 2011 (QM11)
NASA Astrophysics Data System (ADS)
Schutz, Yves; Wiedemann, Urs Achim
2011-12-01
Since the early 1980s, the Quark Matter conferences have been the most important forum for presenting results in the field of high-energy heavy-ion collisions. The 22nd conference in this series took place in Annecy, France, on 22-29 May 2011, and it attracted a record attendance of almost 800 participants. More than 500 requests to give presentations were received and, based on the recommendations of the International Advisory Committee, almost 200 were selected. This special issue of Journal of Physics G: Nuclear and Particle Physics contains the written reports of those oral presentations. Quark Matter 2011 was scheduled to take place six months after the start of the heavy ion program at the Large Hadron Collider (LHC). Hence, these proceedings mark a historical milestone: two decades after starting to prepare for the LHC, the present volume documents the first substantial harvest of LHC heavy-ion data. In addition, these proceedings feature a complete overview of recent theoretical and experimental developments over two orders of magnitude in the center-of-mass energy of heavy-ion collisions. In particular, they include prominently the latest results from the heavy-ion experiments at Brookhaven's Relativistic Heavy Ion Collider and a broad range of theoretical highlights. Early in the organization of Quark Matter 2011, it was recognized that the novelty of the results expected at this conference argues for a very rapid publication of the proceedings. We would like to thank all who helped meet the ambitious production schedule. In particular, we would like to thank the paper committees of the LHC experiments ATLAS, ALICE and CMS, and the RHIC experiments PHENIX and STAR who ensured, in a coordinated action, that all experimental contributions were received within four weeks of the end of the conference. 
We would also like to thank the many individual contributors, as well as the anonymous referees appointed by Journal of Physics G: Nuclear and Particle Physics, who respected the tight deadline. Last but not least, we would like to thank the staff of the journal, and in particular Suzie Prescott and Rachel Lawless: they handled an enormous number of communications and requests flawlessly and swiftly. Yves Schutz and Urs Achim Wiedemann Organizers of Quark Matter 2011 Guest Editors
Review of hydrodynamic tunneling issues in high power particle accelerators
NASA Astrophysics Data System (ADS)
Tahir, N. A.; Burkart, F.; Schmidt, R.; Shutov, A.; Piriz, A. R.
2018-07-01
The full impact of one Large Hadron Collider (LHC) 7 TeV proton beam on solid targets made of different materials, including copper and carbon, was simulated iteratively using an energy deposition code, FLUKA, and a two-dimensional hydrodynamic code, BIG2. These studies showed that the penetration depth of the entire beam, comprised of 2808 proton bunches, significantly increases due to a phenomenon named hydrodynamic tunneling of the protons and the shower. For example, the static range of a single 7 TeV proton and its shower is about 1 m in solid copper, but the full LHC beam will penetrate up to about 35 m into the target when hydrodynamic effects are included. Due to the potential implications of this result for machine protection considerations, it was decided to verify the hydrodynamic tunneling effect experimentally. For this purpose, experiments were carried out at the CERN HiRadMat (High Radiation to Materials) facility in which extended solid copper cylindrical targets were irradiated with the 440 GeV proton beam generated by the Super Proton Synchrotron (SPS). Simulations of beam-target heating with the same beam parameters that were used in the experiments were also performed. These experiments not only confirmed the existence of hydrodynamic tunneling, but the simulation results also showed very good agreement with the experimental measurements. This provided confidence in the work on LHC-related beam-matter heating simulations. Currently, a design study is being carried out by the international community (with CERN taking the leading role) for a post-LHC collider named the Future Circular Collider (FCC), which will accelerate two counter-rotating proton beams up to a particle energy of 50 TeV. Simulations of the full impact of one FCC beam, comprised of 10,600 proton bunches, on a solid copper target have also been done. 
These simulations have shown that although the static range of a single 50 TeV proton and its shower in solid copper is around 1.8 m, the entire beam will penetrate up to about 350 m into the target. Feasibility studies of developing a water beam dump for the FCC have also been carried out. A review of this work and its implications for the machine protection system is presented in this paper.
Proposal to search for μ⁻N → e⁻N with a single event sensitivity below 10⁻¹⁶
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carey, R.M.; Lynch, K.R.; Miller, J.P.
2008-10-01
We propose a new experiment, Mu2e, to search for charged lepton flavor violation with unprecedented sensitivity. We will measure the ratio of the rate of coherent neutrinoless conversion of a negatively charged muon into an electron in the field of a nucleus to the muon capture rate: R_μe = [μ⁻ + A(Z,N) → e⁻ + A(Z,N)] / [μ⁻ + A(Z,N) → ν_μ + A(Z−1,N)], with a sensitivity R_μe ≤ 6 × 10⁻¹⁷ at 90% CL. This is almost a four-order-of-magnitude improvement over the existing limit. The observation of such a process would be unambiguous evidence of physics beyond the Standard Model. Since the discovery of the muon in 1936, physicists have attempted to answer I.I. Rabi's famous question: 'Who ordered that?' Why is there a muon? What role does it play in the larger questions of why there are three families and flavors of quarks, leptons, and neutrinos? We know quarks mix through a mechanism described by the Cabibbo-Kobayashi-Maskawa matrix, which has been studied for forty years. Neutrino mixing has been observed in the last decade, but mixing among the family of charged leptons has never been seen. The current limits are of order 10⁻¹¹-10⁻¹³, so the process is rare indeed. Why is such an experiment important and timely? A major motivation for experiments at the Large Hadron Collider (LHC) is the possible observation of supersymmetric particles in the TeV mass range. Many of these supersymmetric models predict a μ-e conversion signal at R_μe ≈ 10⁻¹⁵. We propose to search for μ-e conversion at a sensitivity that exceeds this by more than an order of magnitude. The LHC may not be able to conclusively distinguish among supersymmetric models, so Mu2e will provide invaluable information should the LHC observe a signal. 
In the case where the LHC finds no evidence of supersymmetry, or other beyond-the-Standard-Model physics, Mu2e will probe for new physics at mass scales up to 10⁴ TeV, far beyond the reach of any planned accelerator.
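The relation between a single-event sensitivity and the quoted 90% CL reach can be checked with back-of-envelope Poisson statistics: with zero observed events and negligible background, the 90% CL upper limit corresponds to about 2.3 expected signal events. The single-event sensitivity value below is an assumed illustrative number, not taken from the proposal.

```python
import math

# With zero observed events and no background, the Poisson 90% CL upper
# limit on the expected signal is -ln(1 - 0.90) ≈ 2.303 events, so the
# limit on the rate is roughly 2.303 × (single-event sensitivity).
def poisson_zero_event_limit(cl=0.90):
    return -math.log(1.0 - cl)

ses = 2.6e-17                      # assumed single-event sensitivity (illustrative)
r_limit = poisson_zero_event_limit() * ses
print(f"{r_limit:.1e}")            # of the same order as the quoted 6e-17
```

This is only a consistency sketch; the real limit calculation accounts for backgrounds and systematic uncertainties.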
Heavy flavor results at RHIC - A comparative overview
Dong, Xin
2012-01-01
I review the latest heavy flavor measurements from the RHIC experiments. Measurements from RHIC, together with preliminary results from the LHC, offer us an opportunity to systematically study the properties of the sQGP medium. Finally, I outline prospects for precision heavy flavor measurements with detector upgrades at RHIC.
Closing the loop on improvement: Packaging experience in the Software Engineering Laboratory
NASA Technical Reports Server (NTRS)
Waligora, Sharon R.; Landis, Linda C.; Doland, Jerry T.
1994-01-01
As part of its award-winning software process improvement program, the Software Engineering Laboratory (SEL) has developed an effective method for packaging organizational best practices based on real project experience into useful handbooks and training courses. This paper shares the SEL's experience over the past 12 years creating and updating software process handbooks and training courses. It provides cost models and guidelines for successful experience packaging derived from SEL experience.
ALICE HLT Run 2 performance overview.
NASA Astrophysics Data System (ADS)
Krzewicki, Mikolaj; Lindenstruth, Volker;
2017-10-01
For LHC Run 2 the ALICE HLT architecture was consolidated to comply with the upgraded ALICE detector readout technology. The software framework was optimized and extended to cope with the increased data load. Online calibration of the TPC using the online tracking capabilities of the ALICE HLT was deployed. Offline calibration code was adapted to run both online and offline, and the HLT framework was extended to support this. The performance of this scheme is important for Run 3 related developments. An additional data transport approach was developed using the ZeroMQ library, forming at the same time a test bed for the new data flow model of the O2 system, where further development of this concept is ongoing. This messaging technology was used to implement the calibration feedback loop, augmenting the existing, graph-oriented HLT transport framework. Utilising the online reconstruction of many detectors, a new asynchronous monitoring scheme was developed to allow real-time monitoring of the physics performance of the ALICE detector, on top of the new messaging scheme for both internal and external communication. Spare computing resources comprising the production and development clusters are run as a Tier-2 Grid site using an OpenStack-based setup. The development cluster runs continuously, while the production cluster contributes resources opportunistically during periods of LHC inactivity.
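The key property of an asynchronous monitoring feed like the one described above is that publishing must never block the data path, and stale updates are simply dropped when the consumer lags. The sketch below models that behaviour with the standard library rather than the actual ZeroMQ-based HLT transport; the class and its interface are invented for illustration.

```python
import queue

# Sketch of a non-blocking monitoring feed: producers publish monitoring
# objects without ever blocking, and updates are dropped when the consumer
# falls behind (modelled with a bounded stdlib queue, not the real ZeroMQ
# transport used by the ALICE HLT).
class MonitoringFeed:
    def __init__(self, depth=8):
        self._q = queue.Queue(maxsize=depth)

    def publish(self, obj):
        try:
            self._q.put_nowait(obj)   # never block the data-taking path
            return True
        except queue.Full:
            return False              # drop the update if monitoring lags

    def consume(self, timeout=0.1):
        try:
            return self._q.get(timeout=timeout)
        except queue.Empty:
            return None

feed = MonitoringFeed(depth=2)
print([feed.publish(i) for i in range(3)])  # → [True, True, False]
print(feed.consume(), feed.consume())       # → 0 1
```

The bounded queue plays the role of a high-water mark on the message socket: correctness of data taking is decoupled from how fast monitoring consumes.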
Using the CMS threaded framework in a production environment
Jones, C. D.; Contreras, L.; Gartung, P.; ...
2015-12-23
During 2014, the CMS Offline and Computing Organization completed the necessary changes to use the CMS threaded framework in the full production environment. We briefly discuss the design of the CMS threaded framework, in particular how the design affects scaling performance. We then cover the effort involved in getting both the CMSSW application software and the workflow management system ready for using multiple threads for production. Finally, we present metrics on the performance of the application and workflow system, as well as the difficulties that were uncovered. We end with CMS's plans for using the threaded framework to do production for LHC Run 2.
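The basic idea of a threaded event-processing framework is that independent events can be handed to a pool of worker threads, provided the per-event modules are thread-safe. A minimal stdlib sketch, where `reconstruct` is an invented stand-in for real framework modules:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy illustration of dispatching independent events to worker threads,
# as a threaded framework does. "reconstruct" is a hypothetical stand-in
# for thread-safe per-event processing modules.
def reconstruct(event):
    return sum(event)           # placeholder for per-event work

events = [[1, 2], [3, 4], [5, 6], [7, 8]]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(reconstruct, events))
print(results)  # → [3, 7, 11, 15]
```

Scaling then depends on how much per-event work is truly independent versus serialized on shared state, which is exactly what the design discussion in such papers addresses.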
Electron efficiency measurements with the ATLAS detector using 2012 LHC proton–proton collision data
Aaboud, M.; Aad, G.; Abbott, B.; ...
2017-03-27
This paper describes the algorithms for the reconstruction and identification of electrons in the central region of the ATLAS detector at the Large Hadron Collider (LHC). These algorithms were used for all ATLAS results with electrons in the final state that are based on the 2012 pp collision data produced by the LHC at √s = 8 TeV. The efficiency of these algorithms, together with the charge misidentification rate, is measured in data and evaluated in simulated samples using electrons from Z → ee, Z → eeγ and J/ψ → ee decays. For these efficiency measurements, the full recorded data set, corresponding to an integrated luminosity of 20.3 fb⁻¹, is used. Based on a new reconstruction algorithm used in 2012, the electron reconstruction efficiency is 97% for electrons with E_T = 15 GeV and 99% at E_T = 50 GeV. Combining this with the efficiency of additional selection criteria to reject electrons from background processes or misidentified hadrons, the efficiency to reconstruct and identify electrons at the ATLAS experiment varies from 65 to 95%, depending on the transverse momentum of the electron and the background rejection.
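An efficiency measurement of this kind ultimately reduces to counting probes that pass a selection and attaching an uncertainty. A minimal sketch with a simple binomial error (the counts are invented; the actual analysis uses background-subtracted tag-and-probe counts and more careful uncertainty treatment):

```python
import math

# Sketch of an efficiency with a simple binomial uncertainty, as when
# counting probe electrons passing reconstruction/identification.
# The counts below are invented for illustration.
def efficiency(n_pass, n_total):
    eff = n_pass / n_total
    err = math.sqrt(eff * (1.0 - eff) / n_total)
    return eff, err

eff, err = efficiency(9700, 10000)
print(f"{eff:.3f} +/- {err:.4f}")  # → 0.970 +/- 0.0017
```

Dividing the probes into bins of E_T and η and repeating this count yields the efficiency maps quoted in the abstract.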
NASA Astrophysics Data System (ADS)
Affolder, Anthony; Allport, Phil; Casse, Gianluigi
2010-11-01
The planned luminosity upgrade of the Large Hadron Collider at CERN (Super-LHC) will provide a challenging environment for the tracking and vertexing detector systems. Planar, segmented silicon detectors are one of the few radiation-tolerant technologies under consideration for use in the Super-LHC tracking detectors in either pixel or strip geometries. In this paper, charge collection measurements are made with planar silicon sensors with two different substrate materials (float zone and magnetic Czochralski) and three different diode configurations (p⁺ strip in n-bulk, n⁺ strip in n-bulk, and n⁺ strip in p-bulk). For the first time, a comparison of the charge collection of these devices is made after irradiation up to 6 × 10¹⁴ n_eq cm⁻² with 280 MeV charged pions, and up to 2.2 × 10¹⁶ n_eq cm⁻² with 26 MeV protons. This study covers the expected range of final fluences for the different layers of pixel and microstrip sensors of the ATLAS and CMS experiments at the Super-LHC. These measurements have been carried out using analogue, high-speed (40 MHz) electronics and a Strontium-90 beta source.
Development of an abort gap monitor for the Large Hadron Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beche, J.-F.; Byrd, J.; De Santis, S.
2004-07-01
The Large Hadron Collider (LHC), presently under construction at CERN, requires monitoring of the parasitic charge in the 3.3 μs long gap in the machine fill structure. This gap, referred to as the abort gap, corresponds to the rise time of the abort kicker magnets. Any circulating particle present in the abort gap when the kickers fire is lost inside the ring, rather than in the beam dump, and can potentially damage a number of LHC components. CERN specifications indicate a linear density of 6 × 10⁶ protons over a 100 ns interval as the maximum charge safely allowed to accumulate in the abort gap at 7 TeV. We present a study of an abort gap monitor, based on a photomultiplier tube with a gated microchannel plate, which would allow such low charge densities to be detected by monitoring the synchrotron radiation emitted in the dedicated diagnostics port. We show results of beam test experiments at the Advanced Light Source (ALS) using a Hamamatsu 5961U MCP-PMT, which indicate that such an instrument has the required sensitivity to meet LHC specifications.
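The safety criterion implied by the specification above reduces to comparing the measured population of each 100 ns slice of the abort gap with the quoted limit. A sketch of that check, with invented bin contents (the real monitor works from calibrated synchrotron-light photon counts, not direct proton counts):

```python
# Sketch of the abort-gap safety check implied by the CERN specification:
# no 100 ns interval may exceed 6e6 protons at 7 TeV. Bin contents are
# invented for illustration.
SAFE_LIMIT = 6e6   # protons per 100 ns interval at 7 TeV

def gap_is_safe(bin_populations):
    """True if every 100 ns bin of the abort gap is below the safe limit."""
    return all(n <= SAFE_LIMIT for n in bin_populations)

print(gap_is_safe([1e5, 3e6, 5.9e6]))  # → True
print(gap_is_safe([1e5, 7e6]))         # → False
```

The instrumental challenge addressed in the paper is measuring such small populations reliably, which is why a gated MCP-PMT viewing the synchrotron light is proposed.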
Landgraf, U; Landon, M P J; Lanfermann, M C; Lang, V S; Lange, J C; Lankford, A J; Lanni, F; Lantzsch, K; Lanza, A; Laplace, S; Lapoire, C; Laporte, J F; Lari, T; Lasagni Manghi, F; Lassnig, M; Laurelli, P; Lavrijsen, W; Law, A T; Laycock, P; Lazovich, T; Lazzaroni, M; Le, B; Le Dortz, O; Le Guirriec, E; Le Quilleuc, E P; LeBlanc, M; LeCompte, T; Ledroit-Guillon, F; Lee, C A; Lee, S C; Lee, L; Lefebvre, B; Lefebvre, G; Lefebvre, M; Legger, F; Leggett, C; Lehan, A; Lehmann Miotto, G; Lei, X; Leight, W A; Leister, A G; Leite, M A L; Leitner, R; Lellouch, D; Lemmer, B; Leney, K J C; Lenz, T; Lenzi, B; Leone, R; Leone, S; Leonidopoulos, C; Leontsinis, S; Lerner, G; Leroy, C; Lesage, A A J; Lester, C G; Lester, C M; Levchenko, M; Levêque, J; Levin, D; Levinson, L J; Levy, M; Lewis, D; Leyton, M; Li, B; Li, C; Li, H; Li, L; Li, L; Li, Q; Li, S; Li, X; Li, Y; Liang, Z; Liberti, B; Liblong, A; Lichard, P; Lie, K; Liebal, J; Liebig, W; Limosani, A; Lin, S C; Lin, T H; Lindquist, B E; Lionti, A E; Lipeles, E; Lipniacka, A; Lisovyi, M; Liss, T M; Lister, A; Litke, A M; Liu, B; Liu, D; Liu, H; Liu, H; Liu, J; Liu, J B; Liu, K; Liu, L; Liu, M; Liu, Y L; Liu, Y; Livan, M; Lleres, A; Llorente Merino, J; Lloyd, S L; Lo Sterzo, F; Lobodzinska, E M; Loch, P; Loebinger, F K; Loew, K M; Loginov, A; Lohse, T; Lohwasser, K; Lokajicek, M; Long, B A; Long, J D; Long, R E; Longo, L; Looper, K A; Lopez, J A; Lopez Mateos, D; Lopez Paredes, B; Lopez Paz, I; Lopez Solis, A; Lorenz, J; Lorenzo Martinez, N; Losada, M; Lösel, P J; Lou, X; Lounis, A; Love, J; Love, P A; Lu, H; Lu, N; Lubatti, H J; Luci, C; Lucotte, A; Luedtke, C; Luehring, F; Lukas, W; Luminari, L; Lundberg, O; Lund-Jensen, B; Luzi, P M; Lynn, D; Lysak, R; Lytken, E; Lyubushkin, V; Ma, H; Ma, L L; Ma, Y; Maccarrone, G; Macchiolo, A; Macdonald, C M; Maček, B; Machado Miguens, J; Madaffari, D; Madar, R; Maddocks, H J; Mader, W F; Madsen, A; Maeda, J; Maeland, S; Maeno, T; Maevskiy, A; Magradze, E; Mahlstedt, J; Maiani, C; 
Maidantchik, C; Maier, A A; Maier, T; Maio, A; Majewski, S; Makida, Y; Makovec, N; Malaescu, B; Malecki, Pa; Maleev, V P; Malek, F; Mallik, U; Malon, D; Malone, C; Maltezos, S; Malyukov, S; Mamuzic, J; Mancini, G; Mandelli, L; Mandić, I; Maneira, J; de Andrade Filho, L Manhaes; Manjarres Ramos, J; Mann, A; Manousos, A; Mansoulie, B; Mansour, J D; Mantifel, R; Mantoani, M; Manzoni, S; Mapelli, L; Marceca, G; March, L; Marchiori, G; Marcisovsky, M; Marjanovic, M; Marley, D E; Marroquim, F; Marsden, S P; Marshall, Z; Marti-Garcia, S; Martin, B; Martin, T A; Martin, V J; Martin Dit Latour, B; Martinez, M; Martinez Outschoorn, V I; Martin-Haugh, S; Martoiu, V S; Martyniuk, A C; Marzin, A; Masetti, L; Mashimo, T; Mashinistov, R; Masik, J; Maslennikov, A L; Massa, I; Massa, L; Mastrandrea, P; Mastroberardino, A; Masubuchi, T; Mättig, P; Mattmann, J; Maurer, J; Maxfield, S J; Maximov, D A; Mazini, R; Maznas, I; Mazza, S M; Mc Fadden, N C; Goldrick, G Mc; Mc Kee, S P; McCarn, A; McCarthy, R L; McCarthy, T G; McClymont, L I; McDonald, E F; Mcfayden, J A; Mchedlidze, G; McMahon, S J; McPherson, R A; Medinnis, M; Meehan, S; Mehlhase, S; Mehta, A; Meier, K; Meineck, C; Meirose, B; Melini, D; Mellado Garcia, B R; Melo, M; Meloni, F; Menary, S B; Meng, L; Meng, X T; Mengarelli, A; Menke, S; Meoni, E; Mergelmeyer, S; Mermod, P; Merola, L; Meroni, C; Merritt, F S; Messina, A; Metcalfe, J; Mete, A S; Meyer, C; Meyer, C; Meyer, J-P; Meyer, J; Meyer Zu Theenhausen, H; Miano, F; Middleton, R P; Miglioranzi, S; Mijović, L; Mikenberg, G; Mikestikova, M; Mikuž, M; Milesi, M; Milic, A; Miller, D W; Mills, C; Milov, A; Milstead, D A; Minaenko, A A; Minami, Y; Minashvili, I A; Mincer, A I; Mindur, B; Mineev, M; Minegishi, Y; Ming, Y; Mir, L M; Mistry, K P; Mitani, T; Mitrevski, J; Mitsou, V A; Miucci, A; Miyagawa, P S; Mizukami, A; Mjörnmark, J U; Mlynarikova, M; Moa, T; Mochizuki, K; Mogg, P; Mohapatra, S; Molander, S; Moles-Valls, R; Monden, R; Mondragon, M C; Mönig, K; Monk, J; Monnier, 
E; Montalbano, A; Montejo Berlingen, J; Monticelli, F; Monzani, S; Moore, R W; Morange, N; Moreno, D; Moreno Llácer, M; Morettini, P; Morgenstern, S; Mori, D; Mori, T; Morii, M; Morinaga, M; Morisbak, V; Moritz, S; Morley, A K; Mornacchi, G; Morris, J D; Morvaj, L; Moschovakos, P; Mosidze, M; Moss, H J; Moss, J; Motohashi, K; Mount, R; Mountricha, E; Moyse, E J W; Muanza, S; Mudd, R D; Mueller, F; Mueller, J; Mueller, R S P; Mueller, T; Muenstermann, D; Mullen, P; Mullier, G A; Munoz Sanchez, F J; Murillo Quijada, J A; Murray, W J; Musheghyan, H; Muškinja, M; Myagkov, A G; Myska, M; Nachman, B P; Nackenhorst, O; Nagai, K; Nagai, R; Nagano, K; Nagasaka, Y; Nagata, K; Nagel, M; Nagy, E; Nairz, A M; Nakahama, Y; Nakamura, K; Nakamura, T; Nakano, I; Naranjo Garcia, R F; Narayan, R; Narrias Villar, D I; Naryshkin, I; Naumann, T; Navarro, G; Nayyar, R; Neal, H A; Nechaeva, P Yu; Neep, T J; Negri, A; Negrini, M; Nektarijevic, S; Nellist, C; Nelson, A; Nemecek, S; Nemethy, P; Nepomuceno, A A; Nessi, M; Neubauer, M S; Neumann, M; Neves, R M; Nevski, P; Newman, P R; Nguyen, D H; Nguyen Manh, T; Nickerson, R B; Nicolaidou, R; Nielsen, J; Nikolaenko, V; Nikolic-Audit, I; Nikolopoulos, K; Nilsen, J K; Nilsson, P; Ninomiya, Y; Nisati, A; Nisius, R; Nobe, T; Nomachi, M; Nomidis, I; Nooney, T; Norberg, S; Nordberg, M; Norjoharuddeen, N; Novgorodova, O; Nowak, S; Nozaki, M; Nozka, L; Ntekas, K; Nurse, E; Nuti, F; O'grady, F; O'Neil, D C; O'Rourke, A A; O'Shea, V; Oakham, F G; Oberlack, H; Obermann, T; Ocariz, J; Ochi, A; Ochoa, I; Ochoa-Ricoux, J P; Oda, S; Odaka, S; Ogren, H; Oh, A; Oh, S H; Ohm, C C; Ohman, H; Oide, H; Okawa, H; Okumura, Y; Okuyama, T; Olariu, A; Oleiro Seabra, L F; Olivares Pino, S A; Oliveira Damazio, D; Olszewski, A; Olszowska, J; Onofre, A; Onogi, K; Onyisi, P U E; Oreglia, M J; Oren, Y; Orestano, D; Orlando, N; Orr, R S; Osculati, B; Ospanov, R; Otero Y Garzon, G; Otono, H; Ouchrif, M; Ould-Saada, F; Ouraou, A; Oussoren, K P; Ouyang, Q; Owen, M; Owen, R E; 
Ozcan, V E; Ozturk, N; Pachal, K; Pacheco Pages, A; Pacheco Rodriguez, L; Padilla Aranda, C; Pagan Griso, S; Paganini, M; Paige, F; Pais, P; Pajchel, K; Palacino, G; Palazzo, S; Palestini, S; Palka, M; Pallin, D; Panagiotopoulou, E St; Panagoulias, I; Pandini, C E; Panduro Vazquez, J G; Pani, P; Panitkin, S; Pantea, D; Paolozzi, L; Papadopoulou, Th D; Papageorgiou, K; Paramonov, A; Paredes Hernandez, D; Parker, A J; Parker, M A; Parker, K A; Parodi, F; Parsons, J A; Parzefall, U; Pascuzzi, V R; Pasqualucci, E; Passaggio, S; Pastore, Fr; Pásztor, G; Pataraia, S; Pater, J R; Pauly, T; Pearce, J; Pearson, B; Pedersen, L E; Pedraza Lopez, S; Pedro, R; Peleganchuk, S V; Penc, O; Peng, C; Peng, H; Penwell, J; Peralva, B S; Perego, M M; Perepelitsa, D V; Perez Codina, E; Perini, L; Pernegger, H; Perrella, S; Peschke, R; Peshekhonov, V D; Peters, K; Peters, R F Y; Petersen, B A; Petersen, T C; Petit, E; Petridis, A; Petridou, C; Petroff, P; Petrolo, E; Petrov, M; Petrucci, F; Pettersson, N E; Peyaud, A; Pezoa, R; Phillips, P W; Piacquadio, G; Pianori, E; Picazio, A; Piccaro, E; Piccinini, M; Pickering, M A; Piegaia, R; Pilcher, J E; Pilkington, A D; Pin, A W J; Pinamonti, M; Pinfold, J L; Pingel, A; Pires, S; Pirumov, H; Pitt, M; Plazak, L; Pleier, M-A; Pleskot, V; Plotnikova, E; Pluth, D; Poettgen, R; Poggioli, L; Pohl, D; Polesello, G; Poley, A; Policicchio, A; Polifka, R; Polini, A; Pollard, C S; Polychronakos, V; Pommès, K; Pontecorvo, L; Pope, B G; Popeneciu, G A; Poppleton, A; Pospisil, S; Potamianos, K; Potrap, I N; Potter, C J; Potter, C T; Poulard, G; Poveda, J; Pozdnyakov, V; Pozo Astigarraga, M E; Pralavorio, P; Pranko, A; Prell, S; Price, D; Price, L E; Primavera, M; Prince, S; Prokofiev, K; Prokoshin, F; Protopopescu, S; Proudfoot, J; Przybycien, M; Puddu, D; Purohit, M; Puzo, P; Qian, J; Qin, G; Qin, Y; Quadt, A; Quayle, W B; Queitsch-Maitland, M; Quilty, D; Raddum, S; Radeka, V; Radescu, V; Radhakrishnan, S K; Radloff, P; Rados, P; Ragusa, F; Rahal, G; 
Raine, J A; Rajagopalan, S; Rammensee, M; Rangel-Smith, C; Ratti, M G; Rauch, D M; Rauscher, F; Rave, S; Ravenscroft, T; Ravinovich, I; Raymond, M; Read, A L; Readioff, N P; Reale, M; Rebuzzi, D M; Redelbach, A; Redlinger, G; Reece, R; Reed, R G; Reeves, K; Rehnisch, L; Reichert, J; Reiss, A; Rembser, C; Ren, H; Rescigno, M; Resconi, S; Resseguie, E D; Rezanova, O L; Reznicek, P; Rezvani, R; Richter, R; Richter, S; Richter-Was, E; Ricken, O; Ridel, M; Rieck, P; Riegel, C J; Rieger, J; Rifki, O; Rijssenbeek, M; Rimoldi, A; Rimoldi, M; Rinaldi, L; Ristić, B; Ritsch, E; Riu, I; Rizatdinova, F; Rizvi, E; Rizzi, C; Roberts, R T; Robertson, S H; Robichaud-Veronneau, A; Robinson, D; Robinson, J E M; Robson, A; Roda, C; Rodina, Y; Rodriguez Perez, A; Rodriguez, D; Roe, S; Rogan, C S; Røhne, O; Roloff, J; Romaniouk, A; Romano, M; Saez, S M Romano; Romero Adam, E; Rompotis, N; Ronzani, M; Roos, L; Ros, E; Rosati, S; Rosbach, K; Rose, P; Rosien, N-A; Rossetti, V; Rossi, E; Rossi, L P; Rosten, J H N; Rosten, R; Rotaru, M; Roth, I; Rothberg, J; Rousseau, D; Rozanov, A; Rozen, Y; Ruan, X; Rubbo, F; Rudolph, M S; Rühr, F; Ruiz-Martinez, A; Rurikova, Z; Rusakovich, N A; Ruschke, A; Russell, H L; Rutherfoord, J P; Ruthmann, N; Ryabov, Y F; Rybar, M; Rybkin, G; Ryu, S; Ryzhov, A; Rzehorz, G F; Saavedra, A F; Sabato, G; Sacerdoti, S; Sadrozinski, H F-W; Sadykov, R; Safai Tehrani, F; Saha, P; Sahinsoy, M; Saimpert, M; Saito, T; Sakamoto, H; Sakurai, Y; Salamanna, G; Salamon, A; Salazar Loyola, J E; Salek, D; De Bruin, P H Sales; Salihagic, D; Salnikov, A; Salt, J; Salvatore, D; Salvatore, F; Salvucci, A; Salzburger, A; Sammel, D; Sampsonidis, D; Sánchez, J; Sanchez Martinez, V; Pineda, A Sanchez; Sandaker, H; Sandbach, R L; Sandhoff, M; Sandoval, C; Sankey, D P C; Sannino, M; Sansoni, A; Santoni, C; Santonico, R; Santos, H; Santoyo Castillo, I; Sapp, K; Sapronov, A; Saraiva, J G; Sarrazin, B; Sasaki, O; Sato, K; Sauvan, E; Savage, G; Savard, P; Savic, N; Sawyer, C; Sawyer, L; Saxon, 
J; Sbarra, C; Sbrizzi, A; Scanlon, T; Scannicchio, D A; Scarcella, M; Scarfone, V; Schaarschmidt, J; Schacht, P; Schachtner, B M; Schaefer, D; Schaefer, L; Schaefer, R; Schaeffer, J; Schaepe, S; Schaetzel, S; Schäfer, U; Schaffer, A C; Schaile, D; Schamberger, R D; Scharf, V; Schegelsky, V A; Scheirich, D; Schernau, M; Schiavi, C; Schier, S; Schillo, C; Schioppa, M; Schlenker, S; Schmidt-Sommerfeld, K R; Schmieden, K; Schmitt, C; Schmitt, S; Schmitz, S; Schneider, B; Schnoor, U; Schoeffel, L; Schoening, A; Schoenrock, B D; Schopf, E; Schott, M; Schouwenberg, J F P; Schovancova, J; Schramm, S; Schreyer, M; Schuh, N; Schulte, A; Schultens, M J; Schultz-Coulon, H-C; Schulz, H; Schumacher, M; Schumm, B A; Schune, Ph; Schwartzman, A; Schwarz, T A; Schweiger, H; Schwemling, Ph; Schwienhorst, R; Schwindling, J; Schwindt, T; Sciolla, G; Scuri, F; Scutti, F; Searcy, J; Seema, P; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Sekhon, K; Sekula, S J; Seliverstov, D M; Semprini-Cesari, N; Serfon, C; Serin, L; Serkin, L; Serre, T; Sessa, M; Seuster, R; Severini, H; Sfiligoj, T; Sforza, F; Sfyrla, A; Shabalina, E; Shaikh, N W; Shan, L Y; Shang, R; Shank, J T; Shapiro, M; Shatalov, P B; Shaw, K; Shaw, S M; Shcherbakova, A; Shehu, C Y; Sherwood, P; Shi, L; Shimizu, S; Shimmin, C O; Shimojima, M; Shirabe, S; Shiyakova, M; Shmeleva, A; Shoaleh Saadi, D; Shochet, M J; Shojaii, S; Shope, D R; Shrestha, S; Shulga, E; Shupe, M A; Sicho, P; Sickles, A M; Sidebo, P E; Sideras Haddad, E; Sidiropoulou, O; Sidorov, D; Sidoti, A; Siegert, F; Sijacki, Dj; Silva, J; Silverstein, S B; Simak, V; Simic, Lj; Simion, S; Simioni, E; Simmons, B; Simon, D; Simon, M; Sinervo, P; Sinev, N B; Sioli, M; Siragusa, G; Siral, I; Sivoklokov, S Yu; Sjölin, J; Skinner, M B; Skottowe, H P; Skubic, P; Slater, M; Slavicek, T; Slawinska, M; Sliwa, K; Slovak, R; Smakhtin, V; Smart, B H; Smestad, L; Smiesko, J; Smirnov, S Yu; Smirnov, Y; Smirnova, L N; Smirnova, O; Smith, J W; Smith, M N K; Smith, R 
W; Smizanska, M; Smolek, K; Snesarev, A A; Snyder, I M; Snyder, S; Sobie, R; Socher, F; Soffer, A; Soh, D A; Sokhrannyi, G; Solans Sanchez, C A; Solar, M; Soldatov, E Yu; Soldevila, U; Solodkov, A A; Soloshenko, A; Solovyanov, O V; Solovyev, V; Sommer, P; Son, H; Song, H Y; Sood, A; Sopczak, A; Sopko, V; Sorin, V; Sosa, D; Sotiropoulou, C L; Soualah, R; Soukharev, A M; South, D; Sowden, B C; Spagnolo, S; Spalla, M; Spangenberg, M; Spanò, F; Sperlich, D; Spettel, F; Spighi, R; Spigo, G; Spiller, L A; Spousta, M; Denis, R D St; Stabile, A; Stamen, R; Stamm, S; Stanecka, E; Stanek, R W; Stanescu, C; Stanescu-Bellu, M; Stanitzki, M M; Stapnes, S; Starchenko, E A; Stark, G H; Stark, J; Stark, S H; Staroba, P; Starovoitov, P; Stärz, S; Staszewski, R; Steinberg, P; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stewart, G A; Stillings, J A; Stockton, M C; Stoebe, M; Stoicea, G; Stolte, P; Stonjek, S; Stradling, A R; Straessner, A; Stramaglia, M E; Strandberg, J; Strandberg, S; Strandlie, A; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Stroynowski, R; Strubig, A; Stucci, S A; Stugu, B; Styles, N A; Su, D; Su, J; Suchek, S; Sugaya, Y; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, S; Sun, X; Sundermann, J E; Suruliz, K; Suster, C J E; Sutton, M R; Suzuki, S; Svatos, M; Swiatlowski, M; Swift, S P; Sykora, I; Sykora, T; Ta, D; Tackmann, K; Taenzer, J; Taffard, A; Tafirout, R; Taiblum, N; Takai, H; Takashima, R; Takeshita, T; Takubo, Y; Talby, M; Talyshev, A A; Tanaka, J; Tanaka, M; Tanaka, R; Tanaka, S; Tanioka, R; Tannenwald, B B; Tapia Araya, S; Tapprogge, S; Tarem, S; Tartarelli, G F; Tas, P; Tasevsky, M; Tashiro, T; Tassi, E; Tavares Delgado, A; Tayalati, Y; Taylor, A C; Taylor, G N; Taylor, P T E; Taylor, W; Teischinger, F A; Teixeira-Dias, P; Temming, K K; Temple, D; Ten Kate, H; Teng, P K; Teoh, J J; Tepel, F; Terada, S; Terashi, K; Terron, J; Terzo, S; Testa, M; Teuscher, R J; Theveneaux-Pelzer, T; Thomas, J P; Thomas-Wilsker, J; Thompson, P D; 
Thompson, A S; Thomsen, L A; Thomson, E; Tibbetts, M J; Ticse Torres, R E; Tikhomirov, V O; Tikhonov, Yu A; Timoshenko, S; Tiouchichine, E; Tipton, P; Tisserant, S; Todome, K; Todorov, T; Todorova-Nova, S; Tojo, J; Tokár, S; Tokushuku, K; Tolley, E; Tomlinson, L; Tomoto, M; Tompkins, L; Toms, K; Tong, B; Tornambe, P; Torrence, E; Torres, H; Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Trefzger, T; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Tripiana, M F; Trischuk, W; Trocmé, B; Trofymov, A; Troncon, C; Trottier-McDonald, M; Trovatelli, M; Truong, L; Trzebinski, M; Trzupek, A; Tseng, J C-L; Tsiareshka, P V; Tsipolitis, G; Tsirintanis, N; Tsiskaridze, S; Tsiskaridze, V; Tskhadadze, E G; Tsui, K M; Tsukerman, I I; Tsulaia, V; Tsuno, S; Tsybychev, D; Tu, Y; Tudorache, A; Tudorache, V; Tulbure, T T; Tuna, A N; Tupputi, S A; Turchikhin, S; Turgeman, D; Turk Cakir, I; Turra, R; Tuts, P M; Ucchielli, G; Ueda, I; Ughetto, M; Ukegawa, F; Unal, G; Undrus, A; Unel, G; Ungaro, F C; Unno, Y; Unverdorben, C; Urban, J; Urquijo, P; Urrejola, P; Usai, G; Usui, J; Vacavant, L; Vacek, V; Vachon, B; Valderanis, C; Valdes Santurio, E; Valencic, N; Valentinetti, S; Valero, A; Valery, L; Valkar, S; Valls Ferrer, J A; Van Den Wollenberg, W; Van Der Deijl, P C; van der Graaf, H; van Eldik, N; van Gemmeren, P; Van Nieuwkoop, J; van Vulpen, I; van Woerden, M C; Vanadia, M; Vandelli, W; Vanguri, R; Vaniachine, A; Vankov, P; Vardanyan, G; Vari, R; Varnes, E W; Varol, T; Varouchas, D; Vartapetian, A; Varvell, K E; Vasquez, J G; Vasquez, G A; Vazeille, F; Vazquez Schroeder, T; Veatch, J; Veeraraghavan, V; Veloce, L M; Veloso, F; Veneziano, S; Ventura, A; Venturi, M; Venturi, N; Venturini, A; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vest, A; Vetterli, M C; Viazlo, O; Vichou, I; Vickey, T; Vickey Boeriu, O E; Viehhauser, G H A; Viel, S; Vigani, L; Villa, M; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinogradov, V B; Vittori, C; Vivarelli, I; Vlachos, S; Vlasak, M; Vogel, 
M; Vokac, P; Volpi, G; Volpi, M; von der Schmitt, H; von Toerne, E; Vorobel, V; Vorobev, K; Vos, M; Voss, R; Vossebeld, J H; Vranjes, N; Vranjes Milosavljevic, M; Vrba, V; Vreeswijk, M; Vuillermet, R; Vukotic, I; Wagner, P; Wagner, W; Wahlberg, H; Wahrmund, S; Wakabayashi, J; Walder, J; Walker, R; Walkowiak, W; Wallangen, V; Wang, C; Wang, C; Wang, F; Wang, H; Wang, H; Wang, J; Wang, J; Wang, K; Wang, R; Wang, S M; Wang, T; Wang, W; Wanotayaroj, C; Warburton, A; Ward, C P; Wardrope, D R; Washbrook, A; Watkins, P M; Watson, A T; Watson, M F; Watts, G; Watts, S; Waugh, B M; Webb, S; Weber, M S; Weber, S W; Weber, S A; Webster, J S; Weidberg, A R; Weinert, B; Weingarten, J; Weiser, C; Weits, H; Wells, P S; Wenaus, T; Wengler, T; Wenig, S; Wermes, N; Werner, M D; Werner, P; Wessels, M; Wetter, J; Whalen, K; Whallon, N L; Wharton, A M; White, A; White, M J; White, R; Whiteson, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wiglesworth, C; Wiik-Fuchs, L A M; Wildauer, A; Wilk, F; Wilkens, H G; Williams, H H; Williams, S; Willis, C; Willocq, S; Wilson, J A; Wingerter-Seez, I; Winklmeier, F; Winston, O J; Winter, B T; Wittgen, M; Wolf, T M H; Wolff, R; Wolter, M W; Wolters, H; Worm, S D; Wosiek, B K; Wotschack, J; Woudstra, M J; Wozniak, K W; Wu, M; Wu, M; Wu, S L; Wu, X; Wu, Y; Wyatt, T R; Wynne, B M; Xella, S; Xi, Z; Xu, D; Xu, L; Yabsley, B; Yacoob, S; Yamaguchi, D; Yamaguchi, Y; Yamamoto, A; Yamamoto, S; Yamanaka, T; Yamauchi, K; Yamazaki, Y; Yan, Z; Yang, H; Yang, H; Yang, Y; Yang, Z; Yao, W-M; Yap, Y C; Yasu, Y; Yatsenko, E; Yau Wong, K H; Ye, J; Ye, S; Yeletskikh, I; Yildirim, E; Yorita, K; Yoshida, R; Yoshihara, K; Young, C; Young, C J S; Youssef, S; Yu, D R; Yu, J; Yu, J M; Yu, J; Yuan, L; Yuen, S P Y; Yusuff, I; Zabinski, B; Zacharis, G; Zaidan, R; Zaitsev, A M; Zakharchuk, N; Zalieckas, J; Zaman, A; Zambito, S; Zanello, L; Zanzi, D; Zeitnitz, C; Zeman, M; Zemla, A; Zeng, J C; Zeng, Q; Zenin, O; Ženiš, T; Zerwas, D; Zhang, D; Zhang, F; Zhang, G; Zhang, H; Zhang, J; 
Zhang, L; Zhang, L; Zhang, M; Zhang, R; Zhang, R; Zhang, X; Zhang, Y; Zhang, Z; Zhao, X; Zhao, Y; Zhao, Z; Zhemchugov, A; Zhong, J; Zhou, B; Zhou, C; Zhou, L; Zhou, L; Zhou, M; Zhou, M; Zhou, N; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zhukov, K; Zibell, A; Zieminska, D; Zimine, N I; Zimmermann, C; Zimmermann, S; Zinonos, Z; Zinser, M; Ziolkowski, M; Živković, L; Zobernig, G; Zoccoli, A; Zur Nedden, M; Zwalinski, L
2017-01-01
This paper describes the algorithms for the reconstruction and identification of electrons in the central region of the ATLAS detector at the Large Hadron Collider (LHC). These algorithms were used for all ATLAS results with electrons in the final state that are based on the 2012 pp collision data produced by the LHC at √s = 8 TeV. The efficiency of these algorithms, together with the charge misidentification rate, is measured in data and evaluated in simulated samples using electrons from Z → ee, Z → eeγ and J/ψ → ee decays. For these efficiency measurements, the full recorded data set, corresponding to an integrated luminosity of 20.3 fb⁻¹, is used. Based on a new reconstruction algorithm used in 2012, the electron reconstruction efficiency is 97% for electrons with E_T = 15 GeV and 99% at E_T = 50 GeV. Combining this with the efficiency of additional selection criteria to reject electrons from background processes or misidentified hadrons, the efficiency to reconstruct and identify electrons at the ATLAS experiment varies from 65 to 95%, depending on the transverse momentum of the electron and the background rejection.
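The efficiencies quoted in the abstract above factorise multiplicatively: the probability to both reconstruct and identify an electron is the product of the per-step efficiencies, each typically estimated from tag-and-probe counts. A minimal sketch of that arithmetic follows; the probe counts and the simple binomial uncertainty are illustrative assumptions, not values or methods taken from the ATLAS measurement.

```python
# Illustrative tag-and-probe efficiency arithmetic (hypothetical counts,
# simple binomial errors) -- NOT the ATLAS analysis code.
import math


def efficiency(n_pass: int, n_total: int) -> tuple[float, float]:
    """Return (efficiency, binomial uncertainty) for n_pass passing probes
    out of n_total selected probes."""
    eps = n_pass / n_total
    sigma = math.sqrt(eps * (1.0 - eps) / n_total)
    return eps, sigma


def combined_efficiency(eps_reco: float, eps_id: float) -> float:
    """Total efficiency to reconstruct AND identify an electron,
    assuming the two steps factorise."""
    return eps_reco * eps_id


# Hypothetical probe counts, chosen to echo the ~97% reconstruction
# efficiency quoted in the abstract:
eps_reco, sig_reco = efficiency(9700, 10000)   # 0.97 +/- ~0.0017
eps_id, sig_id = efficiency(8000, 10000)       # 0.80 +/- ~0.004
total = combined_efficiency(eps_reco, eps_id)  # 0.776
print(f"reco = {eps_reco:.3f}, id = {eps_id:.3f}, total = {total:.3f}")
```

In a real measurement the binomial formula is usually replaced by a frequentist or Bayesian interval (e.g. Clopper-Pearson) because efficiencies near 1 make the naive error unreliable, and the efficiencies are measured in bins of E_T and η rather than inclusively.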
Abelev, B; Adam, J; Adamová, D; Adare, A M; Aggarwal, M M; Aglieri Rinella, G; Agocs, A G; Agostinelli, A; Aguilar Salazar, S; Ahammed, Z; Ahmad Masoodi, A; Ahmad, N; Ahn, S A; Ahn, S U; Akindinov, A; Aleksandrov, D; Alessandro, B; Alfaro Molina, R; Alici, A; Alkin, A; Almaráz Aviña, E; Alme, J; Alt, T; Altini, V; Altinpinar, S; Altsybeev, I; Andrei, C; Andronic, A; Anguelov, V; Anielski, J; Anson, C; Antičić, T; Antinori, F; Antonioli, P; Aphecetche, L; Appelshäuser, H; Arbor, N; Arcelli, S; Arend, A; Armesto, N; Arnaldi, R; Aronsson, T; Arsene, I C; Arslandok, M; Asryan, A; Augustinus, A; Averbeck, R; Awes, T C; Äystö, J; Azmi, M D; Bach, M; Badalà, A; Baek, Y W; Bailhache, R; Bala, R; Baldini Ferroli, R; Baldisseri, A; Baldit, A; Baltasar Dos Santos Pedrosa, F; Bán, J; Baral, R C; Barbera, R; Barile, F; Barnaföldi, G G; Barnby, L S; Barret, V; Bartke, J; Basile, M; Bastid, N; Basu, S; Bathen, B; Batigne, G; Batyunya, B; Baumann, C; Bearden, I G; Beck, H; Behera, N K; Belikov, I; Bellini, F; Bellwied, R; Belmont-Moreno, E; Bencedi, G; Beole, S; Berceanu, I; Bercuci, A; Berdnikov, Y; Berenyi, D; Bergognon, A A E; Berzano, D; Betev, L; Bhasin, A; Bhati, A K; Bhom, J; Bianchi, N; Bianchi, L; Bianchin, C; Bielčík, J; Bielčíková, J; Bilandzic, A; Bjelogrlic, S; Blanco, F; Blanco, F; Blau, D; Blume, C; Boccioli, M; Bock, N; Böttger, S; Bogdanov, A; Bøggild, H; Bogolyubsky, M; Boldizsár, L; Bombara, M; Book, J; Borel, H; Borissov, A; Bose, S; Bossú, F; Botje, M; Botta, E; Boyer, B; Braidot, E; Braun-Munzinger, P; Bregant, M; Breitner, T; Browning, T A; Broz, M; Brun, R; Bruna, E; Bruno, G E; Budnikov, D; Buesching, H; Bufalino, S; Busch, O; Buthelezi, Z; Caballero Orduna, D; Caffarri, D; Cai, X; Caines, H; Calvo Villar, E; Camerini, P; Canoa Roman, V; Cara Romeo, G; Carena, F; Carena, W; Carlin Filho, N; Carminati, F; Casanova Díaz, A; Castillo Castellanos, J; Castillo Hernandez, J F; Casula, E A R; Catanescu, V; Cavicchioli, C; Ceballos Sanchez, C; Cepila, J; Cerello, 
P; Chang, B; Chapeland, S; Charvet, J L; Chattopadhyay, S; Chattopadhyay, S; Chawla, I; Cherney, M; Cheshkov, C; Cheynis, B; Chibante Barroso, V; Chinellato, D D; Chochula, P; Chojnacki, M; Choudhury, S; Christakoglou, P; Christensen, C H; Christiansen, P; Chujo, T; Chung, S U; Cicalo, C; Cifarelli, L; Cindolo, F; Cleymans, J; Coccetti, F; Colamaria, F; Colella, D; Conesa Balbastre, G; Conesa Del Valle, Z; Constantin, P; Contin, G; Contreras, J G; Cormier, T M; Corrales Morales, Y; Cortese, P; Cortés Maldonado, I; Cosentino, M R; Costa, F; Cotallo, M E; Crescio, E; Crochet, P; Cruz Alaniz, E; Cuautle, E; Cunqueiro, L; Dainese, A; Dalsgaard, H H; Danu, A; Das, D; Das, K; Das, I; Dash, S; Dash, A; De, S; de Barros, G O V; De Caro, A; de Cataldo, G; de Cuveland, J; De Falco, A; De Gruttola, D; Delagrange, H; Deloff, A; Demanov, V; De Marco, N; Dénes, E; De Pasquale, S; Deppman, A; D Erasmo, G; de Rooij, R; Diaz Corchero, M A; Di Bari, D; Dietel, T; Di Giglio, C; Di Liberto, S; Di Mauro, A; Di Nezza, P; Divià, R; Djuvsland, Ø; Dobrin, A; Dobrowolski, T; Domínguez, I; Dönigus, B; Dordic, O; Driga, O; Dubey, A K; Dubla, A; Ducroux, L; Dupieux, P; Dutta Majumdar, M R; Dutta Majumdar, A K; Elia, D; Emschermann, D; Engel, H; Erazmus, B; Erdal, H A; Espagnon, B; Estienne, M; Esumi, S; Evans, D; Eyyubova, G; Fabris, D; Faivre, J; Falchieri, D; Fantoni, A; Fasel, M; Fearick, R; Fedunov, A; Fehlker, D; Feldkamp, L; Felea, D; Fenton-Olsen, B; Feofilov, G; Fernández Téllez, A; Ferretti, A; Ferretti, R; Festanti, A; Figiel, J; Figueredo, M A S; Filchagin, S; Finogeev, D; Fionda, F M; Fiore, E M; Floris, M; Foertsch, S; Foka, P; Fokin, S; Fragiacomo, E; Francescon, A; Frankenfeld, U; Fuchs, U; Furget, C; Fusco Girard, M; Gaardhøje, J J; Gagliardi, M; Gago, A; Gallio, M; Gangadharan, D R; Ganoti, P; Garabatos, C; Garcia-Solis, E; Garishvili, I; Gerhard, J; Germain, M; Geuna, C; Gheata, M; Gheata, A; Ghidini, B; Ghosh, P; Gianotti, P; Girard, M R; Giubellino, P; Gladysz-Dziadus, E; 
Glässel, P; Gomez, R; Ferreiro, E G; González-Trueba, L H; González-Zamora, P; Gorbunov, S; Goswami, A; Gotovac, S; Grabski, V; Graczykowski, L K; Grajcarek, R; Grelli, A; Grigoras, C; Grigoras, A; Grigoriev, V; Grigoryan, S; Grigoryan, A; Grinyov, B; Grion, N; Gros, P; Grosse-Oetringhaus, J F; Grossiord, J-Y; Grosso, R; Guber, F; Guernane, R; Guerra Gutierrez, C; Guerzoni, B; Guilbaud, M; Gulbrandsen, K; Gunji, T; Gupta, A; Gupta, R; Gutbrod, H; Haaland, Ø; Hadjidakis, C; Haiduc, M; Hamagaki, H; Hamar, G; Han, B H; Hanratty, L D; Hansen, A; Harmanová-Tóthová, Z; Harris, J W; Hartig, M; Hasegan, D; Hatzifotiadou, D; Hayrapetyan, A; Heckel, S T; Heide, M; Helstrup, H; Herghelegiu, A; Herrera Corral, G; Herrmann, N; Hess, B A; Hetland, K F; Hicks, B; Hille, P T; Hippolyte, B; Horaguchi, T; Hori, Y; Hristov, P; Hřivnáčová, I; Huang, M; Humanic, T J; Hwang, D S; Ichou, R; Ilkaev, R; Ilkiv, I; Inaba, M; Incani, E; Innocenti, P G; Innocenti, G M; Ippolitov, M; Irfan, M; Ivan, C; Ivanov, A; Ivanov, M; Ivanov, V; Ivanytskyi, O; Jachołkowski, A; Jacobs, P M; Jang, H J; Janik, R; Janik, M A; Jayarathna, P H S Y; Jena, S; Jha, D M; Jimenez Bustamante, R T; Jirden, L; Jones, P G; Jung, H; Jusko, A; Kaidalov, A B; Kakoyan, V; Kalcher, S; Kaliňák, P; Kalliokoski, T; Kalweit, A; Kang, J H; Kaplin, V; Karasu Uysal, A; Karavichev, O; Karavicheva, T; Karpechev, E; Kazantsev, A; Kebschull, U; Keidel, R; Khan, P; Khan, S A; Khan, M M; Khanzadeev, A; Kharlov, Y; Kileng, B; Kim, S; Kim, B; Kim, T; Kim, D J; Kim, D W; Kim, J H; Kim, J S; Kim, M; Kim, M; Kirsch, S; Kisel, I; Kiselev, S; Kisiel, A; Klay, J L; Klein, J; Klein-Bösing, C; Kliemant, M; Kluge, A; Knichel, M L; Knospe, A G; Koch, K; Köhler, M K; Kollegger, T; Kolojvari, A; Kondratiev, V; Kondratyeva, N; Konevskikh, A; Korneev, A; Kour, R; Kowalski, M; Kox, S; Koyithatta Meethaleveedu, G; Kral, J; Králik, I; Kramer, F; Kraus, I; Krawutschke, T; Krelina, M; Kretz, M; Krivda, M; Krizek, F; Krus, M; Kryshen, E; Krzewicki, M; 
Kucheriaev, Y; Kugathasan, T; Kuhn, C; Kuijer, P G; Kulakov, I; Kumar, J; Kurashvili, P; Kurepin, A B; Kurepin, A; Kuryakin, A; Kushpil, V; Kushpil, S; Kvaerno, H; Kweon, M J; Kwon, Y; Ladrón de Guevara, P; Lakomov, I; Langoy, R; La Pointe, S L; Lara, C; Lardeux, A; La Rocca, P; Lea, R; Le Bornec, Y; Lechman, M; Lee, S C; Lee, G R; Lee, K S; Lefèvre, F; Lehnert, J; Lenhardt, M; Lenti, V; León, H; Leoncino, M; León Monzón, I; León Vargas, H; Lévai, P; Lien, J; Lietava, R; Lindal, S; Lindenstruth, V; Lippmann, C; Lisa, M A; Liu, L; Loggins, V R; Loginov, V; Lohn, S; Lohner, D; Loizides, C; Loo, K K; Lopez, X; López Torres, E; Løvhøiden, G; Lu, X-G; Luettig, P; Lunardon, M; Luo, J; Luparello, G; Luquin, L; Luzzi, C; Ma, K; Ma, R; Madagodahettige-Don, D M; Maevskaya, A; Mager, M; Mahapatra, D P; Maire, A; Malaev, M; Maldonado Cervantes, I; Malinina, L; Mal'Kevich, M V D; Malzacher, P; Mamonov, A; Mangotra, L; Manko, V; Manso, F; Manzari, V; Mao, Y; Marchisone, M; Mareš, J; Margagliotti, G V; Margotti, A; Marín, A; Marin Tobon, C A; Markert, C; Marquard, M; Martashvili, I; Martinengo, P; Martínez, M I; Martínez Davalos, A; Martínez García, G; Martynov, Y; Mas, A; Masciocchi, S; Masera, M; Masoni, A; Massacrier, L; Mastroserio, A; Matthews, Z L; Matyja, A; Mayer, C; Mazer, J; Mazzoni, M A; Meddi, F; Menchaca-Rocha, A; Mercado Pérez, J; Meres, M; Miake, Y; Milano, L; Milosevic, J; Mischke, A; Mishra, A N; Miśkowiec, D; Mitu, C; Mlynarz, J; Mohanty, B; Molnar, L; Montaño Zetina, L; Monteno, M; Montes, E; Moon, T; Morando, M; Moreira De Godoy, D A; Moretto, S; Morsch, A; Muccifora, V; Mudnic, E; Muhuri, S; Mukherjee, M; Müller, H; Munhoz, M G; Musa, L; Musso, A; Nandi, B K; Nania, R; Nappi, E; Nattrass, C; Naumov, N P; Navin, S; Nayak, T K; Nazarenko, S; Nazarov, G; Nedosekin, A; Nicassio, M; Niculescu, M; Nielsen, B S; Niida, T; Nikolaev, S; Nikolic, V; Nikulin, S; Nikulin, V; Nilsen, B S; Nilsson, M S; Noferini, F; Nomokonov, P; Nooren, G; Novitzky, N; Nyanin, A; Nyatha, 
A; Nygaard, C; Nystrand, J; Ochirov, A; Oeschler, H; Oh, S; Oh, S K; Oleniacz, J; Oppedisano, C; Ortiz Velasquez, A; Ortona, G; Oskarsson, A; Ostrowski, P; Otwinowski, J; Oyama, K; Ozawa, K; Pachmayer, Y; Pachr, M; Padilla, F; Pagano, P; Paić, G; Painke, F; Pajares, C; Pal, S K; Palaha, A; Palmeri, A; Papikyan, V; Pappalardo, G S; Park, W J; Passfeld, A; Pastirčák, B; Patalakha, D I; Paticchio, V; Pavlinov, A; Pawlak, T; Peitzmann, T; Pereira Da Costa, H; Pereira De Oliveira Filho, E; Peresunko, D; Pérez Lara, C E; Perez Lezama, E; Perini, D; Perrino, D; Peryt, W; Pesci, A; Peskov, V; Pestov, Y; Petráček, V; Petran, M; Petris, M; Petrov, P; Petrovici, M; Petta, C; Piano, S; Piccotti, A; Pikna, M; Pillot, P; Pinazza, O; Pinsky, L; Pitz, N; Piyarathna, D B; Planinic, M; Płoskoń, M; Pluta, J; Pocheptsov, T; Pochybova, S; Podesta-Lerma, P L M; Poghosyan, M G; Polák, K; Polichtchouk, B; Pop, A; Porteboeuf-Houssais, S; Pospíšil, V; Potukuchi, B; Prasad, S K; Preghenella, R; Prino, F; Pruneau, C A; Pshenichnov, I; Puchagin, S; Puddu, G; Pulvirenti, A; Punin, V; Putiš, M; Putschke, J; Quercigh, E; Qvigstad, H; Rachevski, A; Rademakers, A; Räihä, T S; Rak, J; Rakotozafindrabe, A; Ramello, L; Ramírez Reyes, A; Raniwala, S; Raniwala, R; Räsänen, S S; Rascanu, B T; Rathee, D; Read, K F; Real, J S; Redlich, K; Reichelt, P; Reicher, M; Renfordt, R; Reolon, A R; Reshetin, A; Rettig, F; Revol, J-P; Reygers, K; Riccati, L; Ricci, R A; Richert, T; Richter, M; Riedler, P; Riegler, W; Riggi, F; Rodrigues Fernandes Rabacal, B; Rodríguez Cahuantzi, M; Rodriguez Manso, A; Røed, K; Rohr, D; Röhrich, D; Romita, R; Ronchetti, F; Rosnet, P; Rossegger, S; Rossi, A; Roy, P; Roy, C; Rubio Montero, A J; Rui, R; Russo, R; Ryabinkin, E; Rybicki, A; Sadovsky, S; Šafařík, K; Sahoo, R; Sahu, P K; Saini, J; Sakaguchi, H; Sakai, S; Sakata, D; Salgado, C A; Salzwedel, J; Sambyal, S; Samsonov, V; Sanchez Castro, X; Šándor, L; Sandoval, A; Sano, M; Sano, S; Santo, R; Santoro, R; Sarkamo, J; Scapparone, E; 
Scarlassara, F; Scharenberg, R P; Schiaua, C; Schicker, R; Schmidt, C; Schmidt, H R; Schreiner, S; Schuchmann, S; Schukraft, J; Schutz, Y; Schwarz, K; Schweda, K; Scioli, G; Scomparin, E; Scott, R; Segato, G; Selyuzhenkov, I; Senyukov, S; Seo, J; Serci, S; Serradilla, E; Sevcenco, A; Shabetai, A; Shabratova, G; Shahoyan, R; Sharma, N; Sharma, S; Rohni, S; Shigaki, K; Shimomura, M; Shtejer, K; Sibiriak, Y; Siciliano, M; Sicking, E; Siddhanta, S; Siemiarczuk, T; Silvermyr, D; Silvestre, C; Simatovic, G; Simonetti, G; Singaraju, R; Singh, R; Singha, S; Singhal, V; Sinha, B C; Sinha, T; Sitar, B; Sitta, M; Skaali, T B; Skjerdal, K; Smakal, R; Smirnov, N; Snellings, R J M; Søgaard, C; Soltz, R; Son, H; Song, M; Song, J; Soos, C; Soramel, F; Sputowska, I; Spyropoulou-Stassinaki, M; Srivastava, B K; Stachel, J; Stan, I; Stan, I; Stefanek, G; Steinpreis, M; Stenlund, E; Steyn, G; Stiller, J H; Stocco, D; Stolpovskiy, M; Strabykin, K; Strmen, P; Suaide, A A P; Subieta Vásquez, M A; Sugitate, T; Suire, C; Sukhorukov, M; Sultanov, R; Šumbera, M; Susa, T; Symons, T J M; Szanto de Toledo, A; Szarka, I; Szczepankiewicz, A; Szostak, A; Szymański, M; Takahashi, J; Tapia Takaki, J D; Tauro, A; Tejeda Muñoz, G; Telesca, A; Terrevoli, C; Thäder, J; Thomas, D; Tieulent, R; Timmins, A R; Tlusty, D; Toia, A; Torii, H; Toscano, L; Trubnikov, V; Truesdale, D; Trzaska, W H; Tsuji, T; Tumkin, A; Turrisi, R; Tveter, T S; Ulery, J; Ullaland, K; Ulrich, J; Uras, A; Urbán, J; Urciuoli, G M; Usai, G L; Vajzer, M; Vala, M; Valencia Palomo, L; Vallero, S; Vande Vyvre, P; van Leeuwen, M; Vannucci, L; Vargas, A; Varma, R; Vasileiou, M; Vasiliev, A; Vechernin, V; Veldhoen, M; Venaruzzo, M; Vercellin, E; Vergara, S; Vernet, R; Verweij, M; Vickovic, L; Viesti, G; Vikhlyantsev, O; Vilakazi, Z; Villalobos Baillie, O; Vinogradov, Y; Vinogradov, A; Vinogradov, L; Virgili, T; Viyogi, Y P; Vodopyanov, A; Voloshin, S; Voloshin, K; Volpe, G; von Haller, B; Vranic, D; Øvrebekk, G; Vrláková, J; Vulpescu, B; 
Vyushin, A; Wagner, V; Wagner, B; Wan, R; Wang, M; Wang, D; Wang, Y; Wang, Y; Watanabe, K; Weber, M; Wessels, J P; Westerhoff, U; Wiechula, J; Wikne, J; Wilde, M; Wilk, A; Wilk, G; Williams, M C S; Windelband, B; Xaplanteris Karampatsos, L; Yaldo, C G; Yamaguchi, Y; Yang, H; Yang, S; Yasnopolskiy, S; Yi, J; Yin, Z; Yoo, I-K; Yoon, J; Yu, W; Yuan, X; Yushmanov, I; Zaccolo, V; Zach, C; Zampolli, C; Zaporozhets, S; Zarochentsev, A; Závada, P; Zaviyalov, N; Zbroszczyk, H; Zelnicek, P; Zgura, I S; Zhalov, M; Zhang, X; Zhang, H; Zhou, D; Zhou, Y; Zhou, F; Zhu, J; Zhu, J; Zhu, X; Zichichi, A; Zimmermann, A; Zinovjev, G; Zoccarato, Y; Zynovyev, M; Zyzak, M
Measurements of cross sections of inelastic and diffractive processes in proton-proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions were determined from a study of gaps in charged-particle pseudorapidity distributions: for single diffraction (diffractive mass M_X < 200 GeV/c²) [Formula: see text], and [Formula: see text], respectively, at centre-of-mass energies [Formula: see text]; for double diffraction (for a pseudorapidity gap Δη > 3) σ_DD/σ_INEL = 0.11±0.03, 0.12±0.05, and [Formula: see text], respectively, at [Formula: see text]. To measure the inelastic cross section, beam properties were determined with van der Meer scans, and, using a simulation of diffraction adjusted to data, the following values were obtained: [Formula: see text] mb at [Formula: see text] and [Formula: see text] at [Formula: see text]. The single- and double-diffractive cross sections were calculated by combining the relative rates of diffraction with the inelastic cross sections. The results are compared to previous measurements at proton-antiproton and proton-proton colliders at lower energies, to measurements by other experiments at the LHC, and to theoretical models.
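The final combination step described above (relative diffractive rate times inelastic cross section) amounts to simple error propagation. A minimal sketch, assuming uncorrelated uncertainties and using illustrative numbers rather than the paper's published values:

```python
import math

def combine(ratio, ratio_err, sigma_inel, sigma_inel_err):
    """Combine a diffractive fraction with the inelastic cross section,
    propagating uncorrelated relative uncertainties in quadrature."""
    sigma = ratio * sigma_inel
    rel_err = math.hypot(ratio_err / ratio, sigma_inel_err / sigma_inel)
    return sigma, sigma * rel_err

# Illustrative inputs only (not the measured values):
sigma_dd, err_dd = combine(0.11, 0.03, 73.2, 5.2)
print(f"sigma_DD = {sigma_dd:.1f} +/- {err_dd:.1f} mb")
```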
NASA Astrophysics Data System (ADS)
Tu, Zhoudunming
2018-01-01
Studies of charge-dependent azimuthal correlations for same- and opposite-sign particle pairs are presented in PbPb collisions at 5 TeV and pPb collisions at 5 and 8.16 TeV, with the CMS experiment at the LHC. The azimuthal correlations are evaluated with respect to the second- and also higher-order event planes, as a function of particle pseudorapidity, transverse momentum, and event multiplicity. By employing an event-shape engineering technique, the dependence of the correlations on azimuthal anisotropy flow is investigated. The results presented provide new insights into the origin of the observed charge-dependent azimuthal correlations, and have important implications for the search for the chiral magnetic effect in heavy ion collisions.
BigData and computing challenges in high energy and nuclear physics
NASA Astrophysics Data System (ADS)
Klimentov, A.; Grigorieva, M.; Kiryanov, A.; Zarochentsev, A.
2017-06-01
In this contribution we discuss various aspects of the computing resource needs of experiments in High Energy and Nuclear Physics, in particular at the Large Hadron Collider. These needs will evolve in the future when moving from the LHC to the HL-LHC in ten years from now, when the already exascale volumes of data we are processing could increase by a further order of magnitude. The distributed computing environment has been a great success, and the inclusion of new super-computing facilities, cloud computing and volunteer computing for the future is a big challenge, which we are successfully mastering with considerable contributions from many super-computing centres around the world and from academic and commercial cloud providers. We also discuss R&D computing projects recently started at the National Research Center ``Kurchatov Institute''.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Syphers, M. J.; Chattopadhyay, S.
An overview is provided of the currently envisaged landscape of charged-particle accelerators at the energy and intensity frontiers to explore particle physics beyond the Standard Model via 1-100 TeV-scale lepton and hadron colliders and multi-megawatt proton accelerators for short- and long-baseline neutrino experiments. The particle beam physics, associated technological challenges and progress to date for these accelerator facilities (LHC, HL-LHC, future 100 TeV p-p colliders, TeV-scale linear and circular electron-positron colliders, the high-intensity proton accelerator complex PIP-II for DUNE and its future upgrade to PIP-III) are outlined. The potential and prospects for advanced “nonlinear dynamic techniques” at the multi-MW intensity frontier and advanced “plasma-wakefield-based techniques” at the TeV-scale energy frontier are also described.
NASA Astrophysics Data System (ADS)
Merola, M.; CMS Collaboration
2016-04-01
A search for single top-quark production in the s channel in proton-proton collisions at a centre-of-mass energy of √s = 8 TeV with the CMS detector at the LHC is presented. Leptonic decay modes of the top quark with an electron or muon in the final state are considered. The signal is extracted by performing a maximum-likelihood fit to the distribution of a multivariate discriminant defined using Boosted Decision Trees to separate the expected signal contribution from the background processes. Data collected in 2012, corresponding to an integrated luminosity of 19.3 fb⁻¹, lead to an upper limit on the cross section times branching ratio of 11.5 pb at 95% confidence level.
Top quark forward-backward asymmetry and same-sign top quark pairs.
Berger, Edmond L; Cao, Qing-Hong; Chen, Chuan-Ren; Li, Chong Sheng; Zhang, Hao
2011-05-20
The top quark forward-backward asymmetry measured at the Tevatron collider shows a large deviation from standard model expectations. Among possible interpretations, a nonuniversal Z' model is of particular interest as it naturally predicts a top quark in the forward region of large rapidity. To reproduce the size of the asymmetry, the couplings of the Z' to standard model quarks must be large, inevitably leading to copious production of same-sign top quark pairs at the energies of the Large Hadron Collider (LHC). We explore the discovery potential for tt and ttj production in early LHC experiments at 7-8 TeV and conclude that if no tt signal is observed with 1 fb⁻¹ of integrated luminosity, then a nonuniversal Z' alone cannot explain the Tevatron forward-backward asymmetry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaboud, M.; Aad, G.; Abbott, B.
A search for W′ bosons in events with one lepton (electron or muon) and missing transverse momentum is presented. The search uses 3.2 fb⁻¹ of pp collision data collected at √s = 13 TeV by the ATLAS experiment at the LHC in 2015. The transverse mass distribution is examined and no significant excess of events above the level expected from Standard Model processes is observed. Upper limits on the W′ boson cross-section times branching ratio to leptons are set as a function of the W′ mass. Within the Sequential Standard Model, W′ masses below 4.07 TeV are excluded at the 95% confidence level. This extends the limit set using LHC data at √s = 8 TeV by around 800 GeV.
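The transverse mass examined in such lepton-plus-missing-momentum searches is the standard discriminant m_T = √(2 p_T E_T^miss (1 − cos Δφ)). A minimal sketch of that formula, with hypothetical values rather than analysis code:

```python
import math

def transverse_mass(pt_lep, met, dphi):
    """Transverse mass of a lepton + missing transverse momentum system:
    m_T = sqrt(2 * pT * MET * (1 - cos(dphi)))."""
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

# A back-to-back lepton and MET (dphi = pi) give the maximum m_T
# for given magnitudes, producing the Jacobian-peak endpoint:
print(transverse_mass(2000.0, 2000.0, math.pi))  # GeV
```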
Particle physics today, tomorrow and beyond
NASA Astrophysics Data System (ADS)
Ellis, John
2018-01-01
The most important discovery in particle physics in recent years was that of the Higgs boson, and much effort is continuing to measure its properties, which agree obstinately with the Standard Model, so far. However, there are many reasons to expect physics beyond the Standard Model, motivated by the stability of the electroweak vacuum, the existence of dark matter and the origin of the visible matter in the Universe, neutrino physics, the hierarchy of mass scales in physics, cosmological inflation and the need for a quantum theory for gravity. Most of these issues are being addressed by the experiments during Run 2 of the LHC, and supersymmetry could help resolve many of them. In addition to the prospects for the LHC, I also review briefly those for direct searches for dark matter and possible future colliders.
ALICE in the early Universe wonderland
NASA Astrophysics Data System (ADS)
Di Nezza, Pasquale
2012-03-01
In recent years the Large Hadron Collider (LHC) at CERN has been probing, for the first time, physics at energy scales more than an order of magnitude beyond that of the Standard Model. These experiments explore an energy regime of particle physics where phenomena such as supersymmetry and Grand Unified Theories may become relevant. Certainly, the LHC should shed light on the mechanism of electroweak symmetry breaking and may discover the first fundamental scalar particle seen in nature. The collisions of heavy ions (Pb-Pb) will create the same "soup" the early Universe contained at the epoch of 10⁻⁵ seconds. In general, there is a strong and growing interplay between particle physics and cosmology, in particular in the possible production of mini black holes and dark matter candidates such as the lightest neutralino in the MSSM.
Linear Collider Physics Resource Book Snowmass 2001
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronan
The American particle physics community can look forward to a well-conceived and vital program of experimentation for the next ten years, using both colliders and fixed-target beams to study a wide variety of pressing questions. Beyond 2010, these programs will be reaching the end of their expected lives. The CERN LHC will provide an experimental program of the first importance. But beyond the LHC, the American community needs a coherent plan. The Snowmass 2001 Workshop and the deliberations of the HEPAP subpanel offer a rare opportunity to engage the full community in planning our future for the next decade or more. A major accelerator project requires a decade from the beginning of an engineering design to the receipt of the first data. So it is now time to decide whether to begin a new accelerator project that will operate in the years soon after 2010. We believe that the world high-energy physics community needs such a project. With the great promise of discovery in physics at the next energy scale, and with the opportunity for the uncovering of profound insights, we cannot allow our field to contract to a single experimental program at a single laboratory in the world. We believe that an e⁺e⁻ linear collider is an excellent choice for the next major project in high-energy physics. Applying experimental techniques very different from those used at hadron colliders, an e⁺e⁻ linear collider will allow us to build on the discoveries made at the Tevatron and the LHC, and to add a level of precision and clarity that will be necessary to understand the physics of the next energy scale. It is not necessary to anticipate specific results from the hadron collider programs to argue for constructing an e⁺e⁻ linear collider; in any scenario that is now discussed, physics will benefit from the new information that e⁺e⁻ experiments can provide. This last point merits further emphasis.
If a new accelerator could be designed and built in a few years, it would make sense to wait for the results of each accelerator before planning the next one. Thus, we would wait for the results from the Tevatron before planning the LHC experiments, and wait for the LHC before planning any later stage. In reality, accelerators require a long time to construct, and they require such specialized resources and human talent that delay can cripple what would be promising opportunities. In any event, we believe that the case for the linear collider is so compelling and robust that we can justify this facility on the basis of our current knowledge, even before the Tevatron and LHC experiments are done. The physics prospects for the linear collider have been studied intensively for more than a decade, and arguments for the importance of its experimental program have been developed from many different points of view. This book provides an introduction and a guide to this literature. We hope that it will allow physicists new to the consideration of linear collider physics to start from their own personal perspectives and develop their own assessments of the opportunities afforded by a linear collider.
NASA Astrophysics Data System (ADS)
Guthoff, Moritz; Afanaciev, Konstantin; Dabrowski, Anne; de Boer, Wim; Lange, Wolfgang; Lohmann, Wolfgang; Stickland, David
2013-12-01
The Beam Condition Monitor (BCM) of the CMS detector at the LHC is a protection device similar to the LHC Beam Loss Monitor system. While the electronics used is the same, poly-crystalline Chemical Vapor Deposition (pCVD) diamonds are used instead of ionization chambers as the BCM sensor material. The main purpose of the system is the protection of the silicon Pixel and Strip tracking detectors by inducing a beam dump if the beam losses in the CMS detector are too high. By comparing the detector current with the instantaneous luminosity, the BCM detector efficiency can be monitored. The number of radiation-induced defects in the diamond reduces the charge collection distance and hence lowers the signal. The number of these induced defects can be simulated using the FLUKA Monte Carlo simulation. The cross-section for creating defects increases with decreasing energy of the impinging particles. This explains why diamond sensors mounted close to heavy calorimeters experience more radiation damage: these regions have a high flux of low-energy neutrons. The signal decrease was stronger than expected from the number of simulated defects. Here polarization from trapped charge carriers in the defects is a likely candidate for explaining the difference, as suggested by Transient Current Technique (TCT) measurements. A single-crystalline (sCVD) diamond sensor shows a faster relative signal decrease than a pCVD sensor mounted at the same location. This is expected, since the relative increase in the number of defects is larger in sCVD than in pCVD sensors.
Performance profiling for brachytherapy applications
NASA Astrophysics Data System (ADS)
Choi, Wonqook; Cho, Kihyeon; Yeo, Insung
2018-05-01
In many physics applications, a significant amount of software (e.g. R, ROOT and Geant4) is developed on novel computing architectures, and much effort is expended to ensure the software is efficient in terms of central processing unit (CPU) time and memory usage. Profiling tools are used to evaluate this efficiency; however, few such tools are able to accommodate low-energy physics regions. To address this limitation, we developed a low-energy physics profiling system in Geant4 to profile the CPU time and memory of software in brachytherapy applications. This paper describes and evaluates specific models that are applied to brachytherapy applications in Geant4, such as QGSP_BIC_LIV, QGSP_BIC_EMZ, and QGSP_BIC_EMY. The physics range in this tool allows it to generate low-energy profiles for brachytherapy applications. This was a limitation in previous studies, which led us to develop a new profiling tool that supports profiling in the MeV range, in contrast to the TeV range supported by existing high-energy profiling tools. In order to easily compare profiling results between low-energy and high-energy modes, we employed the same software architecture as that in the SimpliCarlo tool developed at the Fermi National Accelerator Laboratory (FNAL) for the Large Hadron Collider (LHC). The results show that the newly developed profiling system for low-energy physics (less than MeV) complements the current profiling system used for high-energy physics (greater than TeV) applications.
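The kind of CPU-time and memory profiling described above can be sketched with Python's standard tooling (cProfile and tracemalloc). The `workload` function is a stand-in for a simulation step, not part of the Geant4 tool itself:

```python
import cProfile
import io
import pstats
import tracemalloc

def workload():
    # Stand-in for a CPU- and memory-intensive simulation step.
    return sum(i * i for i in range(100_000))

# CPU profile: record where time is spent, sorted by cumulative time.
profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()
stats = pstats.Stats(profiler, stream=io.StringIO()).sort_stats("cumulative")

# Memory profile: track allocations and report the peak usage.
tracemalloc.start()
workload()
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"peak memory: {peak} bytes")
```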
NASA Astrophysics Data System (ADS)
Zhang, Zhicai
2018-04-01
Many physics analyses using the Compact Muon Solenoid (CMS) detector at the LHC require accurate, high-resolution electron and photon energy measurements. Following the excellent performance achieved during LHC Run I at center-of-mass energies of 7 and 8 TeV, the CMS electromagnetic calorimeter (ECAL) is operating at the LHC with proton-proton collisions at 13 TeV center-of-mass energy. The instantaneous luminosity delivered by the LHC during Run II has reached unprecedented levels. The average number of concurrent proton-proton collisions per bunch crossing (pileup) reached up to 40 interactions in 2016 and may increase further in 2017. These high pileup levels necessitate a retuning of the ECAL readout and trigger thresholds and reconstruction algorithms. In addition, the energy response of the detector must be precisely calibrated and monitored. We present new reconstruction algorithms and calibration strategies that were implemented to maintain the excellent performance of the CMS ECAL throughout Run II. We show performance results from the 2015-2016 data-taking periods and provide an outlook on the expected Run II performance in the years to come. Beyond the LHC, challenging running conditions for CMS are expected after the High-Luminosity upgrade of the LHC (HL-LHC). We review the design and R&D studies for the CMS ECAL and present first test beam studies. Particular challenges at the HL-LHC are the harsh radiation environment, the increasing data rates, and the extreme level of pileup, with up to 200 simultaneous proton-proton collisions. We present test beam results of hadron-irradiated PbWO crystals up to fluences expected at the HL-LHC. We also report on the R&D for the new readout and trigger electronics, which must be upgraded due to the increased trigger and latency requirements at the HL-LHC.
2010-01-01
Background The extended light-harvesting complex (LHC) protein superfamily is a centerpiece of eukaryotic photosynthesis, comprising the LHC family and several families involved in photoprotection, like the LHC-like and the photosystem II subunit S (PSBS). The evolution of this complex superfamily has long remained elusive, partially due to previously missing families. Results In this study we present a meticulous search for LHC-like sequences in public genome and expressed sequence tag databases covering twelve representative photosynthetic eukaryotes from the three primary lineages of plants (Plantae): glaucophytes, red algae and green plants (Viridiplantae). By introducing a coherent classification of the different protein families based on both hidden Markov model analyses and structural predictions, numerous new LHC-like sequences were identified and several new families were described, including the red lineage chlorophyll a/b-binding-like protein (RedCAP) family from red algae and diatoms. The test of alternative topologies of sequences of the highly conserved chlorophyll-binding core structure of LHC and PSBS proteins significantly supports the independent origins of the LHC and PSBS families via two unrelated internal gene duplication events. This result was confirmed by the application of cluster likelihood mapping. Conclusions The independent evolution of the LHC and PSBS families is supported by strong phylogenetic evidence. In addition, a possible origin of the LHC and PSBS families from different homologous members of the stress-enhanced protein subfamily, a diverse and anciently paralogous group of two-helix proteins, seems likely. The new hypothesis for the evolution of the extended LHC protein superfamily proposed here is in agreement with the character evolution analysis that incorporates the distribution of families and subfamilies across taxonomic lineages.
Intriguingly, stress-enhanced proteins, which are universally found in the genomes of green plants, red algae, glaucophytes and in diatoms with complex plastids, could represent an important and previously missing link in the evolution of the extended LHC protein superfamily. PMID:20673336
Re-designing the PhEDEx Security Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, C.-H.; Wildish, T.; Zhang, X.
2014-01-01
PhEDEx, the data-placement tool used by the CMS experiment at the LHC, was conceived in a more trusting time. The security model provided a safe environment for site agents and operators, but offered little more protection than that. Data was not sufficiently protected against loss caused by operator error, software bugs, or deliberate manipulation of the database. Operators were given high levels of access to the database, beyond what was actually needed to accomplish their tasks. This exposed them to the risk of suspicion should an incident occur. Multiple implementations of the security model led to difficulties maintaining code, which can lead to degradation of security over time. In order to meet the simultaneous goals of protecting CMS data, protecting the operators from undue exposure to risk, increasing monitoring capabilities and improving maintainability of the security model, the PhEDEx security model was redesigned and re-implemented. Security was moved from the application layer into the database itself, fine-grained access roles were established, and tools and procedures created to control the evolution of the security model over time. In this paper we describe this work, we describe the deployment of the new security model, and we show how these enhancements improve security on several fronts simultaneously.
Online data handling and storage at the CMS experiment
NASA Astrophysics Data System (ADS)
Andre, J.-M.; Andronidis, A.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gómez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, RK; Morovic, S.; Nuñez-Barranco-Fernández, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.
2015-12-01
During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ∼62 sources produced with an aggregate rate of ∼2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.
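The file-based bookkeeping described above, with small JSON documents aggregated by a merger service, can be sketched as follows. The field names and the 62-source layout are illustrative assumptions, not the actual CMS schema:

```python
import json

# Hypothetical per-source bookkeeping documents: each HLT source writes
# a small JSON record with an event count and an output-file size.
docs = [
    json.dumps({"source": f"src-{i:02d}", "events": 1000 + i, "bytes": 5_000_000})
    for i in range(62)
]

# The merger aggregates the per-source documents into one summary record.
merged = {"sources": 0, "events": 0, "bytes": 0}
for doc in docs:
    record = json.loads(doc)
    merged["sources"] += 1
    merged["events"] += record["events"]
    merged["bytes"] += record["bytes"]

print(merged["sources"], merged["events"])
```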
Re-designing the PhEDEx Security Model
NASA Astrophysics Data System (ADS)
C-H, Huang; Wildish, T.; X, Zhang
2014-06-01
PhEDEx, the data-placement tool used by the CMS experiment at the LHC, was conceived in a more trusting time. The security model provided a safe environment for site agents and operators, but offered little more protection than that. Data was not sufficiently protected against loss caused by operator error, software bugs, or deliberate manipulation of the database. Operators were given high levels of access to the database, beyond what was actually needed to accomplish their tasks. This exposed them to the risk of suspicion should an incident occur. Multiple implementations of the security model led to difficulties maintaining code, which can lead to degradation of security over time. In order to meet the simultaneous goals of protecting CMS data, protecting the operators from undue exposure to risk, increasing monitoring capabilities and improving maintainability of the security model, the PhEDEx security model was redesigned and re-implemented. Security was moved from the application layer into the database itself, fine-grained access roles were established, and tools and procedures created to control the evolution of the security model over time. In this paper we describe this work, we describe the deployment of the new security model, and we show how these enhancements improve security on several fronts simultaneously.
Online Data Handling and Storage at the CMS Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andre, J. M.; et al.
2015-12-23
During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ~62 sources produced with an aggregate rate of ~2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.
Health and performance monitoring of the online computer cluster of CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauer, G.; et al.
2012-01-01
The CMS experiment at the LHC features over 2500 devices that need constant monitoring in order to ensure proper data taking. The monitoring solution has been migrated from Nagios to Icinga, with several useful plugins. The motivations behind the migration and the selection of the plugins are discussed.
Fermilab Heroes of the LHC: Joel Butler
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, Joel
2017-08-23
Particle physics research is both international and collaborative, with large national laboratories working together to most efficiently advance science. Joel Butler, Distinguished Scientist at Fermi National Accelerator Laboratory, is the leader of the Compact Muon Solenoid experiment at the CERN laboratory in Europe. In this video, Joel tells us a bit about what it’s like.
A browser-based event display for the CMS experiment at the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hategan, M.; McCauley, T.; Nguyen, P.
2012-01-01
The line between native and web applications is becoming increasingly blurred as modern web browsers become powerful platforms on which applications can be run. Such applications are trivial to install and are readily extensible and easy to use. In an educational setting, web applications offer a way to deploy tools in a highly restrictive computing environment. The I2U2 collaboration has developed a browser-based event display for viewing events in data collected and released to the public by the CMS experiment at the LHC. The application itself reads a JSON event format and uses the JavaScript 3D rendering engine pre3d. The only requirement is a modern browser supporting the HTML5 canvas. The event display has been used by thousands of high school students in the context of programs organized by I2U2, QuarkNet, and IPPOG. This browser-based approach to the display of events can have broader usage and impact for experts and public alike.
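The display's basic pipeline, reading a JSON event record and projecting 3D hit positions for 2D rendering, can be sketched as below. The event format shown is a hypothetical simplification, not the actual CMS JSON schema:

```python
import json

# Hypothetical event record: tracks as lists of (x, y, z) points.
event_json = '{"run": 1, "event": 42, "tracks": [[[0, 0, 0], [1, 2, 3]], [[0, 0, 0], [4, 5, 6]]]}'
event = json.loads(event_json)

def project_xy(track):
    """Orthographic x-y projection, the simplest step a 3D display
    performs before drawing a track onto a 2D canvas."""
    return [(x, y) for x, y, z in track]

projected = [project_xy(track) for track in event["tracks"]]
print(event["event"], projected[0])
```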
Upgrade project and plans for the ATLAS detector and trigger
NASA Astrophysics Data System (ADS)
Pastore, Francesca; Atlas Collaboration
2013-08-01
The LHC is expected to undergo upgrades over the coming years in order to extend its scientific potential. Through two different phases (namely Phase-I and Phase-II), the average luminosity will be increased by a factor of 5-10 above the design luminosity of 10³⁴ cm⁻² s⁻¹. Consequently, the LHC experiments will need upgraded detectors and new trigger and DAQ infrastructure to cope with the increase in radiation levels and particle rates foreseen at such high luminosity. In this paper we describe the planned changes and investigations for the ATLAS experiment, focusing on the requirements for the trigger system to handle the increased rate of collisions per beam crossing while maintaining widely inclusive selections. In successive steps, the trigger detectors will improve their selectivity by benefiting from increased granularity. To improve the flexibility of the system, the use of tracking information in the lower levels of the trigger selection is also discussed. Lastly, different scenarios are compared, based on the expected physics potential of ATLAS in this high-luminosity regime.
Pushing HTCondor and glideinWMS to 200K+ Jobs in a Global Pool for CMS before Run 2
NASA Astrophysics Data System (ADS)
Balcas, J.; Belforte, S.; Bockelman, B.; Gutsche, O.; Khan, F.; Larson, K.; Letts, J.; Mascheroni, M.; Mason, D.; McCrea, A.; Saiz-Santos, M.; Sfiligoi, I.
2015-12-01
The CMS experiment at the LHC relies on HTCondor and glideinWMS as its primary batch and pilot-based Grid provisioning system. So far we have been running several independent resource pools, but we are working on unifying them all to reduce the operational load and more effectively share resources between various activities in CMS. The major challenge of this unification activity is scale. The combined pool size is expected to reach 200K job slots, which is significantly bigger than any other multi-user HTCondor-based system currently in production. To get there we have studied scaling limitations in our existing pools, the biggest of which tops out at about 70K slots, providing valuable feedback to the development communities, who have responded by delivering improvements which have helped us reach higher and higher scales with more stability. We have also worked on improving the organization and support model for this critical service during Run 2 of the LHC. This contribution will present the results of the scale testing and experiences from the first months of running the Global Pool.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Incandela, Joseph
In July of 2012, scientists leading two different research teams, working independently of each other, announced that they had almost certain proof of the long-sought Higgs boson. Though CERN did not call the discovery "official", many physicists conceded the evidence was now so compelling they had surely found the missing particle. The formal confirmation will come over the next few months of further investigation. The experiments are taking place at the Large Hadron Collider (LHC), and this third lecture in the DOE Science Speaker Series is given by one of the scientists who made the July announcement: Dr. Joseph Incandela, the current spokesperson for the Compact Muon Solenoid (CMS) Experiment at CERN. He was heavily involved in the search for the top quark at Fermilab and is from the University of California, Santa Barbara. The title of his presentation is "Searching for the genetic code of our universe: Discovery at the LHC."
ATLAS Tile calorimeter calibration and monitoring systems
NASA Astrophysics Data System (ADS)
Chomont, Arthur; ATLAS Collaboration
2017-11-01
The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for the reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength-shifting fibres to photomultiplier tubes (PMTs), located on the outside of the calorimeter. The readout is segmented into about 5000 cells (longitudinally and transversely), each of them being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during data taking, a set of calibration systems is used. The TileCal calibration system comprises caesium radioactive sources, laser and charge-injection elements, and allows for monitoring and equalization of the calorimeter response at each stage of the signal production, from scintillation light to digitization. Based on LHC Run 1 experience, several calibration systems were improved for Run 2. The lessons learned, the modifications, and the current LHC Run 2 performance are discussed.
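The abstract describes a chain of calibration systems, each monitoring one stage of signal production. One common way to picture such a chain is as a product of per-stage multiplicative factors applied to the raw PMT amplitude, with a cell energy formed from its two parallel PMT readouts. The sketch below illustrates that structure only; every constant is a made-up placeholder, not a real TileCal calibration value.

```python
# Illustrative multiplicative calibration chain for one PMT readout, and the
# two-PMT-per-cell sum described in the abstract. Constants are placeholders.

def pmt_energy(adc_counts, c_adc_to_pc, c_pc_to_gev, c_cs, c_laser):
    """Energy from one PMT as a product of per-stage calibration factors:
    ADC->charge (charge injection), charge->energy, caesium and laser
    equalization corrections."""
    return adc_counts * c_adc_to_pc * c_pc_to_gev * c_cs * c_laser

def cell_energy(pmt_a, pmt_b):
    """Each cell is read out by two PMTs in parallel; sum their energies."""
    return pmt_a + pmt_b

e_a = pmt_energy(820, c_adc_to_pc=0.0122, c_pc_to_gev=0.95, c_cs=1.01, c_laser=0.99)
e_b = pmt_energy(790, c_adc_to_pc=0.0125, c_pc_to_gev=0.95, c_cs=0.98, c_laser=1.00)
print(f"cell energy ~ {cell_energy(e_a, e_b):.2f} GeV")
```

Keeping each factor tied to one hardware system is what lets the response be monitored and equalized stage by stage, as the abstract describes.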
Search for stable hadronising squarks and gluinos with the ATLAS experiment at the LHC
NASA Astrophysics Data System (ADS)
Aad, G.; Abbott, B.; Abdallah, J.; Abdelalim, A. A.; Abdesselam, A.; Abdinov, O.; Abi, B.; Abolins, M.; Abramowicz, H.; Abreu, H.; Acerbi, E.; Acharya, B. S.; Adams, D. L.; Addy, T. N.; Adelman, J.; Aderholz, M.; Adomeit, S.; Adragna, P.; Adye, T.; Aefsky, S.; Aguilar-Saavedra, J. A.; Aharrouche, M.; Ahlen, S. P.; Ahles, F.; Ahmad, A.; Ahsan, M.; Aielli, G.; Akdogan, T.; Åkesson, T. P.; Akimoto, G.; Akimov, A. V.; Alam, M. S.; Alam, M. A.; Albrand, S.; Aleksa, M.; Aleksandrov, I. N.; Aleppo, M.; Alessandria, F.; Alexa, C.; Alexander, G.; Alexandre, G.; Alexopoulos, T.; Alhroob, M.; Aliev, M.; Alimonti, G.; Alison, J.; Aliyev, M.; Allport, P. P.; Allwood-Spiers, S. E.; Almond, J.; Aloisio, A.; Alon, R.; Alonso, A.; Alviggi, M. G.; Amako, K.; Amaral, P.; Amelung, C.; Ammosov, V. V.; Amorim, A.; Amorós, G.; Amram, N.; Anastopoulos, C.; Andeen, T.; Anders, C. F.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Andrieux, M.-L.; Anduaga, X. S.; Angerami, A.; Anghinolfi, F.; Anjos, N.; Annovi, A.; Antonaki, A.; Antonelli, M.; Antonelli, S.; Antos, J.; Anulli, F.; Aoun, S.; Bella, L. Aperio; Apolle, R.; Arabidze, G.; Aracena, I.; Arai, Y.; Arce, A. T. H.; Archambault, J. P.; Arfaoui, S.; Arguin, J.-F.; Arik, E.; Arik, M.; Armbruster, A. J.; Arnaez, O.; Arnault, C.; Artamonov, A.; Artoni, G.; Arutinov, D.; Asai, S.; Asfandiyarov, R.; Ask, S.; Åsman, B.; Asquith, L.; Assamagan, K.; Astbury, A.; Astvatsatourov, A.; Atoian, G.; Aubert, B.; Auerbach, B.; Auge, E.; Augsten, K.; Aurousseau, M.; Austin, N.; Avramidou, R.; Axen, D.; Ay, C.; Azuelos, G.; Azuma, Y.; Baak, M. A.; Baccaglioni, G.; Bacci, C.; Bach, A. M.; Bachacou, H.; Bachas, K.; Bachy, G.; Backes, M.; Backhaus, M.; Badescu, E.; Bagnaia, P.; Bahinipati, S.; Bai, Y.; Bailey, D. C.; Bain, T.; Baines, J. T.; Baker, O. K.; Baker, M. D.; Baker, S.; Baltasar Dos Santos Pedrosa, F.; Banas, E.; Banerjee, P.; Banerjee, Sw.; Banfi, D.; Bangert, A.; Bansal, V.; Bansil, H. S.; Barak, L.; Baranov, S. 
P.; Barashkou, A.; Barbaro Galtieri, A.; Barber, T.; Barberio, E. L.; Barberis, D.; Barbero, M.; Bardin, D. Y.; Barillari, T.; Barisonzi, M.; Barklow, T.; Barlow, N.; Barnett, B. M.; Barnett, R. M.; Baroncelli, A.; Barr, A. J.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Barrillon, P.; Bartoldus, R.; Barton, A. E.; Bartsch, D.; Bates, R. L.; Batkova, L.; Batley, J. R.; Battaglia, A.; Battistin, M.; Battistoni, G.; Bauer, F.; Bawa, H. S.; Beare, B.; Beau, T.; Beauchemin, P. H.; Beccherle, R.; Bechtle, P.; Beck, H. P.; Beckingham, M.; Becks, K. H.; Beddall, A. J.; Beddall, A.; Bednyakov, V. A.; Bee, C.; Begel, M.; Harpaz, S. Behar; Behera, P. K.; Beimforde, M.; Belanger-Champagne, C.; Bell, P. J.; Bell, W. H.; Bella, G.; Bellagamba, L.; Bellina, F.; Bellomo, G.; Bellomo, M.; Belloni, A.; Belotskiy, K.; Beltramello, O.; Ben Ami, S.; Benary, O.; Benchekroun, D.; Benchouk, C.; Bendel, M.; Benedict, B. H.; Benekos, N.; Benhammou, Y.; Benjamin, D. P.; Benoit, M.; Bensinger, J. R.; Benslama, K.; Bentvelsen, S.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Berghaus, F.; Berglund, E.; Beringer, J.; Bernardet, K.; Bernat, P.; Bernhard, R.; Bernius, C.; Berry, T.; Bertin, A.; Bertinelli, F.; Bertolucci, F.; Besana, M. I.; Besson, N.; Bethke, S.; Bhimji, W.; Bianchi, R. M.; Bianco, M.; Biebel, O.; Bieniek, S. P.; Biesiada, J.; Biglietti, M.; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Biscarat, C.; Bitenc, U.; Black, K. M.; Blair, R. E.; Blanchard, J.-B.; Blanchot, G.; Blocker, C.; Blocki, J.; Blondel, A.; Blum, W.; Blumenschein, U.; Bobbink, G. J.; Bobrovnikov, V. B.; Bocci, A.; Boddy, C. R.; Boehler, M.; Boek, J.; Boelaert, N.; Böser, S.; Bogaerts, J. A.; Bogdanchikov, A.; Bogouch, A.; Bohm, C.; Boisvert, V.; Bold, T.; Boldea, V.; Bona, M.; Bondarenko, V. G.; Boonekamp, M.; Boorman, G.; Booth, C. 
N.; Booth, P.; Bordoni, S.; Borer, C.; Borisov, A.; Borissov, G.; Borjanovic, I.; Borroni, S.; Bos, K.; Boscherini, D.; Bosman, M.; Boterenbrood, H.; Botterill, D.; Bouchami, J.; Boudreau, J.; Bouhova-Thacker, E. V.; Boulahouache, C.; Bourdarios, C.; Bousson, N.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bozhko, N. I.; Bozovic-Jelisavcic, I.; Bracinik, J.; Braem, A.; Brambilla, E.; Branchini, P.; Brandenburg, G. W.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Braun, H. M.; Brelier, B.; Bremer, J.; Brenner, R.; Bressler, S.; Breton, D.; Brett, N. D.; Bright-Thomas, P. G.; Britton, D.; Brochu, F. M.; Brock, I.; Brock, R.; Brodbeck, T. J.; Brodet, E.; Broggi, F.; Bromberg, C.; Brooijmans, G.; Brooks, W. K.; Brown, G.; Brubaker, E.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruneliere, R.; Brunet, S.; Bruni, A.; Bruni, G.; Bruschi, M.; Buanes, T.; Bucci, F.; Buchanan, J.; Buchanan, N. J.; Buchholz, P.; Buckingham, R. M.; Buckley, A. G.; Buda, S. I.; Budagov, I. A.; Budick, B.; Büscher, V.; Bugge, L.; Buira-Clark, D.; Buis, E. J.; Bulekov, O.; Bunse, M.; Buran, T.; Burckhart, H.; Burdin, S.; Burgess, T.; Burke, S.; Busato, E.; Bussey, P.; Buszello, C. P.; Butin, F.; Butler, B.; Butler, J. M.; Buttar, C. M.; Butterworth, J. M.; Buttinger, W.; Byatt, T.; Cabrera Urbán, S.; Caccia, M.; Caforio, D.; Cakir, O.; Calafiura, P.; Calderini, G.; Calfayan, P.; Calkins, R.; Caloba, L. P.; Caloi, R.; Calvet, D.; Calvet, S.; Camacho Toro, R.; Camard, A.; Camarri, P.; Cambiaghi, M.; Cameron, D.; Cammin, J.; Campana, S.; Campanelli, M.; Canale, V.; Canelli, F.; Canepa, A.; Cantero, J.; Capasso, L.; Capeans Garrido, M. D. M.; Caprini, I.; Caprini, M.; Capriotti, D.; Capua, M.; Caputo, R.; Caramarcu, C.; Cardarelli, R.; Carli, T.; Carlino, G.; Carminati, L.; Caron, B.; Caron, S.; Carpentieri, C.; Carrillo Montoya, G. D.; Carter, A. A.; Carter, J. R.; Carvalho, J.; Casadei, D.; Casado, M. P.; Cascella, M.; Caso, C.; Castaneda Hernandez, A. 
M.; Castaneda-Miranda, E.; Castillo Gimenez, V.; Castro, N. F.; Cataldi, G.; Cataneo, F.; Catinaccio, A.; Catmore, J. R.; Cattai, A.; Cattani, G.; Caughron, S.; Cauz, D.; Cavallari, A.; Cavalleri, P.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Cazzato, A.; Ceradini, F.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cetin, S. A.; Cevenini, F.; Chafaq, A.; Chakraborty, D.; Chan, K.; Chapleau, B.; Chapman, J. D.; Chapman, J. W.; Chareyre, E.; Charlton, D. G.; Chavda, V.; Cheatham, S.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. A.; Chen, H.; Chen, L.; Chen, S.; Chen, T.; Chen, X.; Cheng, S.; Cheplakov, A.; Chepurnov, V. F.; Cherkaoui El Moursli, R.; Chernyatin, V.; Cheu, E.; Cheung, S. L.; Chevalier, L.; Chevallier, F.; Chiefari, G.; Chikovani, L.; Childers, J. T.; Chilingarov, A.; Chiodini, G.; Chizhov, M. V.; Choudalakis, G.; Chouridou, S.; Christidi, I. A.; Christov, A.; Chromek-Burckhart, D.; Chu, M. L.; Chudoba, J.; Ciapetti, G.; Ciba, K.; Ciftci, A. K.; Ciftci, R.; Cinca, D.; Cindro, V.; Ciobotaru, M. D.; Ciocca, C.; Ciocio, A.; Cirilli, M.; Ciubancan, M.; Clark, A.; Clark, P. J.; Cleland, W.; Clemens, J. C.; Clement, B.; Clement, C.; Clifft, R. W.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Coe, P.; Cogan, J. G.; Coggeshall, J.; Cogneras, E.; Cojocaru, C. D.; Colas, J.; Colijn, A. P.; Collard, C.; Collins, N. J.; Collins-Tooth, C.; Collot, J.; Colon, G.; Coluccia, R.; Comune, G.; Conde Muiño, P.; Coniavitis, E.; Conidi, M. C.; Consonni, M.; Constantinescu, S.; Conta, C.; Conventi, F.; Cook, J.; Cooke, M.; Cooper, B. D.; Cooper-Sarkar, A. M.; Cooper-Smith, N. J.; Copic, K.; Cornelissen, T.; Corradi, M.; Corriveau, F.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M. J.; Costanzo, D.; Costin, T.; Côté, D.; Torres, R. Coura; Courneyea, L.; Cowan, G.; Cowden, C.; Cox, B. E.; Cranmer, K.; Cristinziani, M.; Crosetti, G.; Crupi, R.; Crépé-Renaudin, S.; Cuenca Almenar, C.; Cuhadar Donszelmann, T.; Cuneo, S.; Curatolo, M.; Curtis, C. 
J.; Cwetanski, P.; Czirr, H.; Czyczula, Z.; D'Auria, S.; D'Onofrio, M.; D'Orazio, A.; da Rocha Gesualdi Mello, A.; da Silva, P. V. M.; da Via, C.; Dabrowski, W.; Dahlhoff, A.; Dai, T.; Dallapiccola, C.; Dallison, S. J.; Dam, M.; Dameri, M.; Damiani, D. S.; Danielsson, H. O.; Dankers, R.; Dannheim, D.; Dao, V.; Darbo, G.; Darlea, G. L.; Daum, C.; Dauvergne, J. P.; Davey, W.; Davidek, T.; Davidson, N.; Davidson, R.; Davies, M.; Davison, A. R.; Dawe, E.; Dawson, I.; Dawson, J. W.; Daya, R. K.; de, K.; de Asmundis, R.; de Castro, S.; de Castro Faria Salgado, P. E.; de Cecco, S.; de Graat, J.; de Groot, N.; de Jong, P.; de La Taille, C.; de Lotto, B.; de Mora, L.; de Nooij, L.; de Oliveira Branco, M.; de Pedis, D.; de Saintignon, P.; de Salvo, A.; de Sanctis, U.; de Santo, A.; de Vivie de Regie, J. B.; Dean, S.; Dedovich, D. V.; Degenhardt, J.; Dehchar, M.; Deile, M.; Del Papa, C.; Del Peso, J.; Del Prete, T.; Dell'Acqua, A.; Dell'Asta, L.; Della Pietra, M.; Della Volpe, D.; Delmastro, M.; Delpierre, P.; Delruelle, N.; Delsart, P. A.; Deluca, C.; Demers, S.; Demichev, M.; Demirkoz, B.; Deng, J.; Denisov, S. P.; Derendarz, D.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Devetak, E.; Deviveiros, P. O.; Dewhurst, A.; Dewilde, B.; Dhaliwal, S.; Dhullipudi, R.; di Ciaccio, A.; di Ciaccio, L.; di Girolamo, A.; di Girolamo, B.; di Luise, S.; di Mattia, A.; di Micco, B.; di Nardo, R.; di Simone, A.; di Sipio, R.; Diaz, M. A.; Diblen, F.; Diehl, E. B.; Dietl, H.; Dietrich, J.; Dietzsch, T. A.; Diglio, S.; Dindar Yagci, K.; Dingfelder, J.; Dionisi, C.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djilkibaev, R.; Djobava, T.; Do Vale, M. A. B.; Do Valle Wemans, A.; Doan, T. K. O.; Dobbs, M.; Dobinson, R.; Dobos, D.; Dobson, E.; Dobson, M.; Dodd, J.; Dogan, O. B.; Doglioni, C.; Doherty, T.; Doi, Y.; Dolejsi, J.; Dolenc, I.; Dolezal, Z.; Dolgoshein, B. A.; Dohmae, T.; Donadelli, M.; Donega, M.; Donini, J.; Dopke, J.; Doria, A.; Dos Anjos, A.; Dosil, M.; Dotti, A.; Dova, M. 
T.; Dowell, J. D.; Doxiadis, A. D.; Doyle, A. T.; Drasal, Z.; Drees, J.; Dressnandt, N.; Drevermann, H.; Driouichi, C.; Dris, M.; Drohan, J. G.; Dubbert, J.; Dubbs, T.; Dube, S.; Duchovni, E.; Duckeck, G.; Dudarev, A.; Dudziak, F.; Dührssen, M.; Duerdoth, I. P.; Duflot, L.; Dufour, M.-A.; Dunford, M.; Yildiz, H. Duran; Duxfield, R.; Dwuznik, M.; Dydak, F.; Dzahini, D.; Düren, M.; Ebenstein, W. L.; Ebke, J.; Eckert, S.; Eckweiler, S.; Edmonds, K.; Edwards, C. A.; Efthymiopoulos, I.; Ehrenfeld, W.; Ehrich, T.; Eifert, T.; Eigen, G.; Einsweiler, K.; Eisenhandler, E.; Ekelof, T.; El Kacimi, M.; Ellert, M.; Elles, S.; Ellinghaus, F.; Ellis, K.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Ely, R.; Emeliyanov, D.; Engelmann, R.; Engl, A.; Epp, B.; Eppig, A.; Erdmann, J.; Ereditato, A.; Eriksson, D.; Ernst, J.; Ernst, M.; Ernwein, J.; Errede, D.; Errede, S.; Ertel, E.; Escalier, M.; Escobar, C.; Espinal Curull, X.; Esposito, B.; Etienne, F.; Etienvre, A. I.; Etzion, E.; Evangelakou, D.; Evans, H.; Fabbri, L.; Fabre, C.; Facius, K.; Fakhrutdinov, R. M.; Falciano, S.; Falou, A. C.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farley, J.; Farooque, T.; Farrington, S. M.; Farthouat, P.; Fasching, D.; Fassnacht, P.; Fassouliotis, D.; Fatholahzadeh, B.; Favareto, A.; Fayard, L.; Fazio, S.; Febbraro, R.; Federic, P.; Fedin, O. L.; Fedorko, I.; Fedorko, W.; Fehling-Kaschek, M.; Feligioni, L.; Fellmann, D.; Felzmann, C. U.; Feng, C.; Feng, E. J.; Fenyuk, A. B.; Ferencei, J.; Ferland, J.; Fernandes, B.; Fernando, W.; Ferrag, S.; Ferrando, J.; Ferrara, V.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferrer, A.; Ferrer, M. L.; Ferrere, D.; Ferretti, C.; Ferretto Parodi, A.; Fiascaris, M.; Fiedler, F.; Filipčič, A.; Filippas, A.; Filthaut, F.; Fincke-Keeler, M.; Fiolhais, M. C. N.; Fiorini, L.; Firan, A.; Fischer, G.; Fischer, P.; Fisher, M. J.; Fisher, S. M.; Flammer, J.; Flechl, M.; Fleck, I.; Fleckner, J.; Fleischmann, P.; Fleischmann, S.; Flick, T.; Flores Castillo, L. R.; Flowerdew, M. 
J.; Föhlisch, F.; Fokitis, M.; Fonseca Martin, T.; Forbush, D. A.; Formica, A.; Forti, A.; Fortin, D.; Foster, J. M.; Fournier, D.; Foussat, A.; Fowler, A. J.; Fowler, K.; Fox, H.; Francavilla, P.; Franchino, S.; Francis, D.; Frank, T.; Franklin, M.; Franz, S.; Fraternali, M.; Fratina, S.; French, S. T.; Froeschl, R.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Fullana Torregrosa, E.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gadfort, T.; Gadomski, S.; Gagliardi, G.; Gagnon, P.; Galea, C.; Gallas, E. J.; Gallas, M. V.; Gallo, V.; Gallop, B. J.; Gallus, P.; Galyaev, E.; Gan, K. K.; Gao, Y. S.; Gapienko, V. A.; Gaponenko, A.; Garberson, F.; Garcia-Sciveres, M.; García, C.; García Navarro, J. E.; Gardner, R. W.; Garelli, N.; Garitaonandia, H.; Garonne, V.; Garvey, J.; Gatti, C.; Gaudio, G.; Gaumer, O.; Gaur, B.; Gauthier, L.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gayde, J.-C.; Gazis, E. N.; Ge, P.; Gee, C. N. P.; Geerts, D. A. A.; Geich-Gimbel, Ch.; Gellerstedt, K.; Gemme, C.; Gemmell, A.; Genest, M. H.; Gentile, S.; George, M.; George, S.; Gerlach, P.; Gershon, A.; Geweniger, C.; Ghazlane, H.; Ghez, P.; Ghodbane, N.; Giacobbe, B.; Giagu, S.; Giakoumopoulou, V.; Giangiobbe, V.; Gianotti, F.; Gibbard, B.; Gibson, A.; Gibson, S. M.; Gieraltowski, G. F.; Gilbert, L. M.; Gilchriese, M.; Gilewsky, V.; Gillberg, D.; Gillman, A. R.; Gingrich, D. M.; Ginzburg, J.; Giokaris, N.; Giordano, R.; Giorgi, F. M.; Giovannini, P.; Giraud, P. F.; Girtler, P.; Giugni, D.; Giusti, P.; Gjelsten, B. K.; Gladilin, L. K.; Glasman, C.; Glatzer, J.; Glazov, A.; Glitza, K. W.; Glonti, G. L.; Godfrey, J.; Godlewski, J.; Goebel, M.; Göpfert, T.; Goeringer, C.; Gössling, C.; Göttfert, T.; Goldfarb, S.; Goldin, D.; Golling, T.; Golovnia, S. N.; Gomes, A.; Gomez Fajardo, L. S.; Gonçalo, R.; Goncalves Pinto Firmino da Costa, J.; Gonella, L.; Gonidec, A.; Gonzalez, S.; González de La Hoz, S.; Gonzalez Silva, M. L.; Gonzalez-Sevilla, S.; Goodson, J. J.; Goossens, L.; Gorbounov, P. A.; Gordon, H. 
A.; Gorelov, I.; Gorfine, G.; Gorini, B.; Gorini, E.; Gorišek, A.; Gornicki, E.; Gorokhov, S. A.; Goryachev, V. N.; Gosdzik, B.; Gosselink, M.; Gostkin, M. I.; Gouanère, M.; Gough Eschrich, I.; Gouighri, M.; Goujdami, D.; Goulette, M. P.; Goussiou, A. G.; Goy, C.; Grabowska-Bold, I.; Grabski, V.; Grafström, P.; Grah, C.; Grahn, K.-J.; Grancagnolo, F.; Grancagnolo, S.; Grassi, V.; Gratchev, V.; Grau, N.; Gray, H. M.; Gray, J. A.; Graziani, E.; Grebenyuk, O. G.; Greenfield, D.; Greenshaw, T.; Greenwood, Z. D.; Gregor, I. M.; Grenier, P.; Griesmayer, E.; Griffiths, J.; Grigalashvili, N.; Grillo, A. A.; Grinstein, S.; Gris, P. L. Y.; Grishkevich, Y. V.; Grivaz, J.-F.; Grognuz, J.; Groh, M.; Gross, E.; Grosse-Knetter, J.; Groth-Jensen, J.; Gruwe, M.; Grybel, K.; Guarino, V. J.; Guest, D.; Guicheney, C.; Guida, A.; Guillemin, T.; Guindon, S.; Guler, H.; Gunther, J.; Guo, B.; Guo, J.; Gupta, A.; Gusakov, Y.; Gushchin, V. N.; Gutierrez, A.; Gutierrez, P.; Guttman, N.; Gutzwiller, O.; Guyot, C.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haas, S.; Haber, C.; Hackenburg, R.; Hadavand, H. K.; Hadley, D. R.; Haefner, P.; Hahn, F.; Haider, S.; Hajduk, Z.; Hakobyan, H.; Haller, J.; Hamacher, K.; Hamal, P.; Hamilton, A.; Hamilton, S.; Han, H.; Han, L.; Hanagaki, K.; Hance, M.; Handel, C.; Hanke, P.; Hansen, C. J.; Hansen, J. R.; Hansen, J. B.; Hansen, J. D.; Hansen, P. H.; Hansson, P.; Hara, K.; Hare, G. A.; Harenberg, T.; Harper, D.; Harrington, R. D.; Harris, O. M.; Harrison, K.; Hartert, J.; Hartjes, F.; Haruyama, T.; Harvey, A.; Hasegawa, S.; Hasegawa, Y.; Hassani, S.; Hatch, M.; Hauff, D.; Haug, S.; Hauschild, M.; Hauser, R.; Havranek, M.; Hawes, B. M.; Hawkes, C. M.; Hawkings, R. J.; Hawkins, D.; Hayakawa, T.; Hayden, D.; Hayward, H. S.; Haywood, S. J.; Hazen, E.; He, M.; Head, S. J.; Hedberg, V.; Heelan, L.; Heim, S.; Heinemann, B.; Heisterkamp, S.; Helary, L.; Heldmann, M.; Heller, M.; Hellman, S.; Helsens, C.; Henderson, R. C. 
W.; Henke, M.; Henrichs, A.; Henriques Correia, A. M.; Henrot-Versille, S.; Henry-Couannier, F.; Hensel, C.; Henß, T.; Hernández Jiménez, Y.; Herrberg, R.; Hershenhorn, A. D.; Herten, G.; Hertenberger, R.; Hervas, L.; Hessey, N. P.; Hidvegi, A.; Higón-Rodriguez, E.; Hill, D.; Hill, J. C.; Hill, N.; Hiller, K. H.; Hillert, S.; Hillier, S. J.; Hinchliffe, I.; Hines, E.; Hirose, M.; Hirsch, F.; Hirschbuehl, D.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoffman, J.; Hoffmann, D.; Hohlfeld, M.; Holder, M.; Holmes, A.; Holmgren, S. O.; Holy, T.; Holzbauer, J. L.; Homma, Y.; Hooft van Huysduynen, L.; Horazdovsky, T.; Horn, C.; Horner, S.; Horton, K.; Hostachy, J.-Y.; Hott, T.; Hou, S.; Houlden, M. A.; Hoummada, A.; Howarth, J.; Howell, D. F.; Hristova, I.; Hrivnac, J.; Hruska, I.; Hryn'ova, T.; Hsu, P. J.; Hsu, S.-C.; Huang, G. S.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huffman, T. B.; Hughes, E. W.; Hughes, G.; Hughes-Jones, R. E.; Huhtinen, M.; Hurst, P.; Hurwitz, M.; Husemann, U.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibbotson, M.; Ibragimov, I.; Ichimiya, R.; Iconomidou-Fayard, L.; Idarraga, J.; Idzik, M.; Iengo, P.; Igonkina, O.; Ikegami, Y.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Imbault, D.; Imhaeuser, M.; Imori, M.; Ince, T.; Inigo-Golfin, J.; Ioannou, P.; Iodice, M.; Ionescu, G.; Irles Quiles, A.; Ishii, K.; Ishikawa, A.; Ishino, M.; Ishmukhametov, R.; Issever, C.; Istin, S.; Itoh, Y.; Ivashin, A. V.; Iwanski, W.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jackson, B.; Jackson, J. N.; Jackson, P.; Jaekel, M. R.; Jain, V.; Jakobs, K.; Jakobsen, S.; Jakubek, J.; Jana, D. K.; Jankowski, E.; Jansen, E.; Jantsch, A.; Janus, M.; Jarlskog, G.; Jeanty, L.; Jelen, K.; Jen-La Plante, I.; Jenni, P.; Jeremie, A.; Jež, P.; Jézéquel, S.; Ji, H.; Ji, W.; Jia, J.; Jiang, Y.; Jimenez Belenguer, M.; Jin, G.; Jin, S.; Jinnouchi, O.; Joergensen, M. D.; Joffe, D.; Johansen, L. G.; Johansen, M.; Johansson, K. 
E.; Johansson, P.; Johnert, S.; Johns, K. A.; Jon-And, K.; Jones, G.; Jones, R. W. L.; Jones, T. W.; Jones, T. J.; Jonsson, O.; Joram, C.; Jorge, P. M.; Joseph, J.; Ju, X.; Juranek, V.; Jussel, P.; Kabachenko, V. V.; Kabana, S.; Kaci, M.; Kaczmarska, A.; Kadlecik, P.; Kado, M.; Kagan, H.; Kagan, M.; Kaiser, S.; Kajomovitz, E.; Kalinin, S.; Kalinovskaya, L. V.; Kama, S.; Kanaya, N.; Kaneda, M.; Kanno, T.; Kantserov, V. A.; Kanzaki, J.; Kaplan, B.; Kapliy, A.; Kaplon, J.; Kar, D.; Karagoz, M.; Karnevskiy, M.; Karr, K.; Kartvelishvili, V.; Karyukhin, A. N.; Kashif, L.; Kasmi, A.; Kass, R. D.; Kastanas, A.; Kataoka, M.; Kataoka, Y.; Katsoufis, E.; Katzy, J.; Kaushik, V.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kayl, M. S.; Kazanin, V. A.; Kazarinov, M. Y.; Kazi, S. I.; Keates, J. R.; Keeler, R.; Kehoe, R.; Keil, M.; Kekelidze, G. D.; Kelly, M.; Kennedy, J.; Kenney, C. J.; Kenyon, M.; Kepka, O.; Kerschen, N.; Kerševan, B. P.; Kersten, S.; Kessoku, K.; Ketterer, C.; Khakzad, M.; Khalil-Zada, F.; Khandanyan, H.; Khanov, A.; Kharchenko, D.; Khodinov, A.; Kholodenko, A. G.; Khomich, A.; Khoo, T. J.; Khoriauli, G.; Khovanskiy, N.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kilvington, G.; Kim, H.; Kim, M. S.; Kim, P. C.; Kim, S. H.; Kimura, N.; Kind, O.; King, B. T.; King, M.; King, R. S. B.; Kirk, J.; Kirsch, G. P.; Kirsch, L. E.; Kiryunin, A. E.; Kisielewska, D.; Kittelmann, T.; Kiver, A. M.; Kiyamura, H.; Kladiva, E.; Klaiber-Lodewigs, J.; Klein, M.; Klein, U.; Kleinknecht, K.; Klemetti, M.; Klier, A.; Klimentov, A.; Klingenberg, R.; Klinkby, E. B.; Klioutchnikova, T.; Klok, P. F.; Klous, S.; Kluge, E.-E.; Kluge, T.; Kluit, P.; Kluth, S.; Kneringer, E.; Knobloch, J.; Knoops, E. B. F. G.; Knue, A.; Ko, B. R.; Kobayashi, T.; Kobel, M.; Koblitz, B.; Kocian, M.; Kocnar, A.; Kodys, P.; Köneke, K.; König, A. C.; Koenig, S.; König, S.; Köpke, L.; Koetsveld, F.; Koevesarki, P.; Koffas, T.; Koffeman, E.; Kohn, F.; Kohout, Z.; Kohriki, T.; Koi, T.; Kokott, T.; Kolachev, G. 
M.; Kolanoski, H.; Kolesnikov, V.; Koletsou, I.; Koll, J.; Kollar, D.; Kollefrath, M.; Kolya, S. D.; Komar, A. A.; Komaragiri, J. R.; Kondo, T.; Kono, T.; Kononov, A. I.; Konoplich, R.; Konstantinidis, N.; Kootz, A.; Koperny, S.; Kopikov, S. V.; Korcyl, K.; Kordas, K.; Koreshev, V.; Korn, A.; Korol, A.; Korolkov, I.; Korolkova, E. V.; Korotkov, V. A.; Kortner, O.; Kortner, S.; Kostyukhin, V. V.; Kotamäki, M. J.; Kotov, S.; Kotov, V. M.; Kourkoumelis, C.; Kouskoura, V.; Koutsman, A.; Kowalewski, R.; Kowalski, T. Z.; Kozanecki, W.; Kozhin, A. S.; Kral, V.; Kramarenko, V. A.; Kramberger, G.; Krasel, O.; Krasny, M. W.; Krasznahorkay, A.; Kraus, J.; Kreisel, A.; Krejci, F.; Kretzschmar, J.; Krieger, N.; Krieger, P.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Krumshteyn, Z. V.; Kruth, A.; Kubota, T.; Kuehn, S.; Kugel, A.; Kuhl, T.; Kuhn, D.; Kukhtin, V.; Kulchitsky, Y.; Kuleshov, S.; Kummer, C.; Kuna, M.; Kundu, N.; Kunkle, J.; Kupco, A.; Kurashige, H.; Kurata, M.; Kurochkin, Y. A.; Kus, V.; Kuykendall, W.; Kuze, M.; Kuzhir, P.; Kvasnicka, O.; Kwee, R.; La Rosa, A.; La Rotonda, L.; Labarga, L.; Labbe, J.; Lacasta, C.; Lacava, F.; Lacker, H.; Lacour, D.; Lacuesta, V. R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Laisne, E.; Lamanna, M.; Lampen, C. L.; Lampl, W.; Lancon, E.; Landgraf, U.; Landon, M. P. J.; Landsman, H.; Lane, J. L.; Lange, C.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Lapin, V. V.; Laplace, S.; Lapoire, C.; Laporte, J. F.; Lari, T.; Larionov, A. V.; Larner, A.; Lasseur, C.; Lassnig, M.; Lau, W.; Laurelli, P.; Lavorato, A.; Lavrijsen, W.; Laycock, P.; Lazarev, A. B.; Lazzaro, A.; Le Dortz, O.; Le Guirriec, E.; Le Maner, C.; Le Menedeu, E.; Leahu, M.; Lebedev, A.; Lebel, C.; Lecompte, T.; Ledroit-Guillon, F.; Lee, H.; Lee, J. S. H.; Lee, S. C.; Lee, L.; Lefebvre, M.; Legendre, M.; Leger, A.; Legeyt, B. C.; Legger, F.; Leggett, C.; Lehmacher, M.; Lehmann Miotto, G.; Lei, X.; Leite, M. A. 
L.; Leitner, R.; Lellouch, D.; Lellouch, J.; Leltchouk, M.; Lendermann, V.; Leney, K. J. C.; Lenz, T.; Lenzen, G.; Lenzi, B.; Leonhardt, K.; Leontsinis, S.; Leroy, C.; Lessard, J.-R.; Lesser, J.; Lester, C. G.; Leung Fook Cheong, A.; Levêque, J.; Levin, D.; Levinson, L. J.; Levitski, M. S.; Lewandowska, M.; Lewis, G. H.; Leyton, M.; Li, B.; Li, H.; Li, S.; Li, X.; Liang, Z.; Liang, Z.; Liberti, B.; Lichard, P.; Lichtnecker, M.; Lie, K.; Liebig, W.; Lifshitz, R.; Lilley, J. N.; Limosani, A.; Limper, M.; Lin, S. C.; Linde, F.; Linnemann, J. T.; Lipeles, E.; Lipinsky, L.; Lipniacka, A.; Liss, T. M.; Lissauer, D.; Lister, A.; Litke, A. M.; Liu, C.; Liu, D.; Liu, H.; Liu, J. B.; Liu, M.; Liu, S.; Liu, Y.; Livan, M.; Livermore, S. S. A.; Lleres, A.; Lloyd, S. L.; Lobodzinska, E.; Loch, P.; Lockman, W. S.; Lockwitz, S.; Loddenkoetter, T.; Loebinger, F. K.; Loginov, A.; Loh, C. W.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Loken, J.; Lombardo, V. P.; Long, R. E.; Lopes, L.; Lopez Mateos, D.; Losada, M.; Loscutoff, P.; Lo Sterzo, F.; Losty, M. J.; Lou, X.; Lounis, A.; Loureiro, K. F.; Love, J.; Love, P. A.; Lowe, A. J.; Lu, F.; Lu, J.; Lu, L.; Lubatti, H. J.; Luci, C.; Lucotte, A.; Ludwig, A.; Ludwig, D.; Ludwig, I.; Ludwig, J.; Luehring, F.; Luijckx, G.; Lumb, D.; Luminari, L.; Lund, E.; Lund-Jensen, B.; Lundberg, B.; Lundberg, J.; Lundquist, J.; Lungwitz, M.; Lupi, A.; Lutz, G.; Lynn, D.; Lys, J.; Lytken, E.; Ma, H.; Ma, L. L.; Macana Goia, J. A.; Maccarrone, G.; Macchiolo, A.; Maček, B.; Machado Miguens, J.; Macina, D.; Mackeprang, R.; Madaras, R. J.; Mader, W. F.; Maenner, R.; Maeno, T.; Mättig, P.; Mättig, S.; Magalhaes Martins, P. J.; Magnoni, L.; Magradze, E.; Magrath, C. A.; Mahalalel, Y.; Mahboubi, K.; Mahout, G.; Maiani, C.; Maidantchik, C.; Maio, A.; Majewski, S.; Makida, Y.; Makovec, N.; Mal, P.; Malecki, Pa.; Malecki, P.; Maleev, V. 
P.; Malek, F.; Mallik, U.; Malon, D.; Maltezos, S.; Malyshev, V.; Malyukov, S.; Mameghani, R.; Mamuzic, J.; Manabe, A.; Mandelli, L.; Mandić, I.; Mandrysch, R.; Maneira, J.; Mangeard, P. S.; Manjavidze, I. D.; Mann, A.; Manning, P. M.; Manousakis-Katsikakis, A.; Mansoulie, B.; Manz, A.; Mapelli, A.; Mapelli, L.; March, L.; Marchand, J. F.; Marchese, F.; Marchesotti, M.; Marchiori, G.; Marcisovsky, M.; Marin, A.; Marino, C. P.; Marroquim, F.; Marshall, R.; Marshall, Z.; Martens, F. K.; Marti-Garcia, S.; Martin, A. J.; Martin, B.; Martin, B.; Martin, F. F.; Martin, J. P.; Martin, Ph.; Martin, T. A.; Martin Dit Latour, B.; Martinez, M.; Martinez Outschoorn, V.; Martyniuk, A. C.; Marx, M.; Marzano, F.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Maß, M.; Massa, I.; Massaro, G.; Massol, N.; Mastroberardino, A.; Masubuchi, T.; Mathes, M.; Matricon, P.; Matsumoto, H.; Matsunaga, H.; Matsushita, T.; Mattravers, C.; Maugain, J. M.; Maxfield, S. J.; May, E. N.; Mayne, A.; Mazini, R.; Mazur, M.; Mazzanti, M.; Mazzoni, E.; Mc Kee, S. P.; McCarn, A.; McCarthy, R. L.; McCarthy, T. G.; McCubbin, N. A.; McFarlane, K. W.; McFayden, J. A.; McGlone, H.; McHedlidze, G.; McLaren, R. A.; McLaughlan, T.; McMahon, S. J.; McPherson, R. A.; Meade, A.; Mechnich, J.; Mechtel, M.; Medinnis, M.; Meera-Lebbai, R.; Meguro, T.; Mehdiyev, R.; Mehlhase, S.; Mehta, A.; Meier, K.; Meinhardt, J.; Meirose, B.; Melachrinos, C.; Mellado Garcia, B. R.; Mendoza Navas, L.; Meng, Z.; Mengarelli, A.; Menke, S.; Menot, C.; Meoni, E.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Messina, A.; Metcalfe, J.; Mete, A. S.; Meuser, S.; Meyer, C.; Meyer, J.-P.; Meyer, J.; Meyer, J.; Meyer, T. C.; Meyer, W. T.; Miao, J.; Michal, S.; Micu, L.; Middleton, R. P.; Miele, P.; Migas, S.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikulec, B.; Mikuž, M.; Miller, D. W.; Miller, R. J.; Mills, W. J.; Mills, C.; Milov, A.; Milstead, D. A.; Milstein, D.; Minaenko, A. 
A.; Miñano, M.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Ming, Y.; Mir, L. M.; Mirabelli, G.; Miralles Verge, L.; Misiejuk, A.; Mitrevski, J.; Mitrofanov, G. Y.; Mitsou, V. A.; Mitsui, S.; Miyagawa, P. S.; Miyazaki, K.; Mjörnmark, J. U.; Moa, T.; Mockett, P.; Moed, S.; Moeller, V.; Mönig, K.; Möser, N.; Mohapatra, S.; Mohn, B.; Mohr, W.; Mohrdieck-Möck, S.; Moisseev, A. M.; Moles-Valls, R.; Molina-Perez, J.; Moneta, L.; Monk, J.; Monnier, E.; Montesano, S.; Monticelli, F.; Monzani, S.; Moore, R. W.; Moorhead, G. F.; Mora Herrera, C.; Moraes, A.; Morais, A.; Morange, N.; Morel, J.; Morello, G.; Moreno, D.; Moreno Llácer, M.; Morettini, P.; Morii, M.; Morin, J.; Morita, Y.; Morley, A. K.; Mornacchi, G.; Morone, M.-C.; Morozov, S. V.; Morris, J. D.; Moser, H. G.; Mosidze, M.; Moss, J.; Mount, R.; Mountricha, E.; Mouraviev, S. V.; Moyse, E. J. W.; Mudrinic, M.; Mueller, F.; Mueller, J.; Mueller, K.; Müller, T. A.; Muenstermann, D.; Muijs, A.; Muir, A.; Munwes, Y.; Murakami, K.; Murray, W. J.; Mussche, I.; Musto, E.; Myagkov, A. G.; Myska, M.; Nadal, J.; Nagai, K.; Nagano, K.; Nagasaka, Y.; Nairz, A. M.; Nakahama, Y.; Nakamura, K.; Nakano, I.; Nanava, G.; Napier, A.; Nash, M.; Nation, N. R.; Nattermann, T.; Naumann, T.; Navarro, G.; Neal, H. A.; Nebot, E.; Nechaeva, P. Yu.; Negri, A.; Negri, G.; Nektarijevic, S.; Nelson, A.; Nelson, S.; Nelson, T. K.; Nemecek, S.; Nemethy, P.; Nepomuceno, A. A.; Nessi, M.; Nesterov, S. Y.; Neubauer, M. S.; Neusiedl, A.; Neves, R. M.; Nevski, P.; Newman, P. R.; Nickerson, R. B.; Nicolaidou, R.; Nicolas, L.; Nicquevert, B.; Niedercorn, F.; Nielsen, J.; Niinikoski, T.; Nikiforov, A.; Nikolaenko, V.; Nikolaev, K.; Nikolic-Audit, I.; Nikolopoulos, K.; Nilsen, H.; Nilsson, P.; Ninomiya, Y.; Nisati, A.; Nishiyama, T.; Nisius, R.; Nodulman, L.; Nomachi, M.; Nomidis, I.; Nomoto, H.; Nordberg, M.; Nordkvist, B.; Norton, P. R.; Novakova, J.; Nozaki, M.; Nožička, M.; Nugent, I. 
M.; Nuncio-Quiroz, A.-E.; Nunes Hanninger, G.; Nunnemann, T.; Nurse, E.; Nyman, T.; O'Brien, B. J.; O'Neale, S. W.; O'Neil, D. C.; O'Shea, V.; Oakham, F. G.; Oberlack, H.; Ocariz, J.; Ochi, A.; Oda, S.; Odaka, S.; Odier, J.; Ogren, H.; Oh, A.; Oh, S. H.; Ohm, C. C.; Ohshima, T.; Ohshita, H.; Ohska, T. K.; Ohsugi, T.; Okada, S.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olcese, M.; Olchevski, A. G.; Oliveira, M.; Oliveira Damazio, D.; Oliver Garcia, E.; Olivito, D.; Olszewski, A.; Olszowska, J.; Omachi, C.; Onofre, A.; Onyisi, P. U. E.; Oram, C. J.; Ordonez, G.; Oreglia, M. J.; Orellana, F.; Oren, Y.; Orestano, D.; Orlov, I.; Oropeza Barrera, C.; Orr, R. S.; Ortega, E. O.; Osculati, B.; Ospanov, R.; Osuna, C.; Otero Y Garzon, G.; Ottersbach, J. P.; Ouchrif, M.; Ould-Saada, F.; Ouraou, A.; Ouyang, Q.; Owen, M.; Owen, S.; Oyarzun, A.; Øye, O. K.; Ozcan, V. E.; Ozturk, N.; Pacheco Pages, A.; Padilla Aranda, C.; Paganis, E.; Paige, F.; Pajchel, K.; Palestini, S.; Pallin, D.; Palma, A.; Palmer, J. D.; Pan, Y. B.; Panagiotopoulou, E.; Panes, B.; Panikashvili, N.; Panitkin, S.; Pantea, D.; Panuskova, M.; Paolone, V.; Paoloni, A.; Papadelis, A.; Papadopoulou, Th. D.; Paramonov, A.; Park, W.; Parker, M. A.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pasqualucci, E.; Passeri, A.; Pastore, F.; Pastore, Fr.; Pásztor, G.; Pataraia, S.; Patel, N.; Pater, J. R.; Patricelli, S.; Pauly, T.; Pecsy, M.; Pedraza Morales, M. I.; Peleganchuk, S. V.; Peng, H.; Pengo, R.; Penson, A.; Penwell, J.; Perantoni, M.; Perez, K.; Perez Cavalcanti, T.; Perez Codina, E.; Pérez García-Estañ, M. T.; Perez Reale, V.; Peric, I.; Perini, L.; Pernegger, H.; Perrino, R.; Perrodo, P.; Persembe, S.; Peshekhonov, V. D.; Peters, O.; Petersen, B. A.; Petersen, J.; Petersen, T. C.; Petit, E.; Petridis, A.; Petridou, C.; Petrolo, E.; Petrucci, F.; Petschull, D.; Petteni, M.; Pezoa, R.; Phan, A.; Phillips, A. W.; Phillips, P. W.; Piacquadio, G.; Piccaro, E.; Piccinini, M.; Pickford, A.; Piec, S. 
M.; Piegaia, R.; Pilcher, J. E.; Pilkington, A. D.; Pina, J.; Pinamonti, M.; Pinder, A.; Pinfold, J. L.; Ping, J.; Pinto, B.; Pirotte, O.; Pizio, C.; Placakyte, R.; Plamondon, M.; Plano, W. G.; Pleier, M.-A.; Pleskach, A. V.; Poblaguev, A.; Poddar, S.; Podlyski, F.; Poggioli, L.; Poghosyan, T.; Pohl, M.; Polci, F.; Polesello, G.; Policicchio, A.; Polini, A.; Poll, J.; Polychronakos, V.; Pomarede, D. M.; Pomeroy, D.; Pommès, K.; Pontecorvo, L.; Pope, B. G.; Popeneciu, G. A.; Popovic, D. S.; Poppleton, A.; Portell Bueso, X.; Porter, R.; Posch, C.; Pospelov, G. E.; Pospisil, S.; Potrap, I. N.; Potter, C. J.; Potter, C. T.; Poulard, G.; Poveda, J.; Prabhu, R.; Pralavorio, P.; Prasad, S.; Pravahan, R.; Prell, S.; Pretzl, K.; Pribyl, L.; Price, D.; Price, L. E.; Price, M. J.; Prichard, P. M.; Prieur, D.; Primavera, M.; Prokofiev, K.; Prokoshin, F.; Protopopescu, S.; Proudfoot, J.; Prudent, X.; Przysiezniak, H.; Psoroulas, S.; Ptacek, E.; Purdham, J.; Purohit, M.; Puzo, P.; Pylypchenko, Y.; Qian, J.; Qian, Z.; Qin, Z.; Quadt, A.; Quarrie, D. R.; Quayle, W. B.; Quinonez, F.; Raas, M.; Radescu, V.; Radics, B.; Rador, T.; Ragusa, F.; Rahal, G.; Rahimi, A. M.; Rahm, C.; Rajagopalan, S.; Rajek, S.; Rammensee, M.; Rammes, M.; Ramstedt, M.; Randrianarivony, K.; Ratoff, P. N.; Rauscher, F.; Rauter, E.; Raymond, M.; Read, A. L.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Reichold, A.; Reinherz-Aronis, E.; Reinsch, A.; Reisinger, I.; Reljic, D.; Rembser, C.; Ren, Z. L.; Renaud, A.; Renkel, P.; Rensch, B.; Rescigno, M.; Resconi, S.; Resende, B.; Reznicek, P.; Rezvani, R.; Richards, A.; Richter, R.; Richter-Was, E.; Ridel, M.; Rieke, S.; Rijpstra, M.; Rijssenbeek, M.; Rimoldi, A.; Rinaldi, L.; Rios, R. R.; Riu, I.; Rivoltella, G.; Rizatdinova, F.; Rizvi, E.; Robertson, S. H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robinson, M.; Robson, A.; Rocha de Lima, J. 
G.; Roda, C.; Roda Dos Santos, D.; Rodier, S.; Rodriguez, D.; Rodriguez Garcia, Y.; Roe, A.; Roe, S.; Røhne, O.; Rojo, V.; Rolli, S.; Romaniouk, A.; Romanov, V. M.; Romeo, G.; Romero Maltrana, D.; Roos, L.; Ros, E.; Rosati, S.; Rose, M.; Rosenbaum, G. A.; Rosenberg, E. I.; Rosendahl, P. L.; Rosselet, L.; Rossetti, V.; Rossi, E.; Rossi, L. P.; Rossi, L.; Rotaru, M.; Roth, I.; Rothberg, J.; Rottländer, I.; Rousseau, D.; Royon, C. R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubinskiy, I.; Ruckert, B.; Ruckstuhl, N.; Rud, V. I.; Rudolph, G.; Rühr, F.; Ruiz-Martinez, A.; Rulikowska-Zarebska, E.; Rumiantsev, V.; Rumyantsev, L.; Runge, K.; Runolfsson, O.; Rurikova, Z.; Rusakovich, N. A.; Rust, D. R.; Rutherfoord, J. P.; Ruwiedel, C.; Ruzicka, P.; Ryabov, Y. F.; Ryadovikov, V.; Ryan, P.; Rybar, M.; Rybkin, G.; Ryder, N. C.; Rzaeva, S.; Saavedra, A. F.; Sadeh, I.; Sadrozinski, H. F.-W.; Sadykov, R.; Safai Tehrani, F.; Sakamoto, H.; Salamanna, G.; Salamon, A.; Saleem, M.; Salihagic, D.; Salnikov, A.; Salt, J.; Salvachua Ferrando, B. M.; Salvatore, D.; Salvatore, F.; Salzburger, A.; Sampsonidis, D.; Samset, B. H.; Sandaker, H.; Sander, H. G.; Sanders, M. P.; Sandhoff, M.; Sandhu, P.; Sandoval, T.; Sandstroem, R.; Sandvoss, S.; Sankey, D. P. C.; Sansoni, A.; Santamarina Rios, C.; Santoni, C.; Santonico, R.; Santos, H.; Saraiva, J. G.; Sarangi, T.; Sarkisyan-Grinbaum, E.; Sarri, F.; Sartisohn, G.; Sasaki, O.; Sasaki, T.; Sasao, N.; Satsounkevitch, I.; Sauvage, G.; Sauvan, J. B.; Savard, P.; Savinov, V.; Savu, D. O.; Savva, P.; Sawyer, L.; Saxon, D. H.; Says, L. P.; Sbarra, C.; Sbrizzi, A.; Scallon, O.; Scannicchio, D. A.; Schaarschmidt, J.; Schacht, P.; Schäfer, U.; Schaetzel, S.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Schamov, A. G.; Scharf, V.; Schegelsky, V. A.; Scheirich, D.; Scherzer, M. I.; Schiavi, C.; Schieck, J.; Schioppa, M.; Schlenker, S.; Schlereth, J. L.; Schmidt, E.; Schmidt, M. 
P.; Schmieden, K.; Schmitt, C.; Schmitz, M.; Schöning, A.; Schott, M.; Schouten, D.; Schovancova, J.; Schram, M.; Schroeder, C.; Schroer, N.; Schuh, S.; Schuler, G.; Schultes, J.; Schultz-Coulon, H.-C.; Schulz, H.; Schumacher, J. W.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwanenberger, C.; Schwartzman, A.; Schwemling, Ph.; Schwienhorst, R.; Schwierz, R.; Schwindling, J.; Scott, W. G.; Searcy, J.; Sedykh, E.; Segura, E.; Seidel, S. C.; Seiden, A.; Seifert, F.; Seixas, J. M.; Sekhniaidze, G.; Seliverstov, D. M.; Sellden, B.; Sellers, G.; Seman, M.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Seuster, R.; Severini, H.; Sevior, M. E.; Sfyrla, A.; Shabalina, E.; Shamim, M.; Shan, L. Y.; Shank, J. T.; Shao, Q. T.; Shapiro, M.; Shatalov, P. B.; Shaver, L.; Shaw, C.; Shaw, K.; Sherman, D.; Sherwood, P.; Shibata, A.; Shimizu, S.; Shimojima, M.; Shin, T.; Shmeleva, A.; Shochet, M. J.; Short, D.; Shupe, M. A.; Sicho, P.; Sidoti, A.; Siebel, A.; Siegert, F.; Siegrist, J.; Sijacki, Dj.; Silbert, O.; Silva, J.; Silver, Y.; Silverstein, D.; Silverstein, S. B.; Simak, V.; Simard, O.; Simic, Lj.; Simion, S.; Simmons, B.; Simonyan, M.; Sinervo, P.; Sinev, N. B.; Sipica, V.; Siragusa, G.; Sisakyan, A. N.; Sivoklokov, S. Yu.; Sjölin, J.; Sjursen, T. B.; Skinnari, L. A.; Skovpen, K.; Skubic, P.; Skvorodnev, N.; Slater, M.; Slavicek, T.; Sliwa, K.; Sloan, T. J.; Sloper, J.; Smakhtin, V.; Smirnov, S. Yu.; Smirnova, L. N.; Smirnova, O.; Smith, B. C.; Smith, D.; Smith, K. M.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snow, S. W.; Snow, J.; Snuverink, J.; Snyder, S.; Soares, M.; Sobie, R.; Sodomka, J.; Soffer, A.; Solans, C. A.; Solar, M.; Solc, J.; Soldevila, U.; Solfaroli Camillocci, E.; Solodkov, A. A.; Solovyanov, O. V.; Sondericker, J.; Soni, N.; Sopko, V.; Sopko, B.; Sorbi, M.; Sosebee, M.; Soukharev, A.; Spagnolo, S.; Spanò, F.; Spighi, R.; Spigo, G.; Spila, F.; Spiriti, E.; Spiwoks, R.; Spousta, M.; Spreitzer, T.; Spurlock, B.; St. Denis, R. 
D.; Stahl, T.; Stahlman, J.; Stamen, R.; Stanecka, E.; Stanek, R. W.; Stanescu, C.; Stapnes, S.; Starchenko, E. A.; Stark, J.; Staroba, P.; Starovoitov, P.; Staude, A.; Stavina, P.; Stavropoulos, G.; Steele, G.; Steinbach, P.; Steinberg, P.; Stekl, I.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stevenson, K.; Stewart, G. A.; Stillings, J. A.; Stockmanns, T.; Stockton, M. C.; Stoerig, K.; Stoicea, G.; Stonjek, S.; Strachota, P.; Stradling, A. R.; Straessner, A.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strang, M.; Strauss, E.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Strong, J. A.; Stroynowski, R.; Strube, J.; Stugu, B.; Stumer, I.; Stupak, J.; Sturm, P.; Soh, D. A.; Su, D.; Subramania, S.; Sugaya, Y.; Sugimoto, T.; Suhr, C.; Suita, K.; Suk, M.; Sulin, V. V.; Sultansoy, S.; Sumida, T.; Sun, X.; Sundermann, J. E.; Suruliz, K.; Sushkov, S.; Susinno, G.; Sutton, M. R.; Suzuki, Y.; Sviridov, Yu. M.; Swedish, S.; Sykora, I.; Sykora, T.; Szeless, B.; Sánchez, J.; Ta, D.; Tackmann, K.; Taffard, A.; Tafirout, R.; Taga, A.; Taiblum, N.; Takahashi, Y.; Takai, H.; Takashima, R.; Takeda, H.; Takeshita, T.; Talby, M.; Talyshev, A.; Tamsett, M. C.; Tanaka, J.; Tanaka, R.; Tanaka, S.; Tanaka, S.; Tanaka, Y.; Tani, K.; Tannoury, N.; Tappern, G. P.; Tapprogge, S.; Tardif, D.; Tarem, S.; Tarrade, F.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tassi, E.; Tatarkhanov, M.; Taylor, C.; Taylor, F. E.; Taylor, G. N.; Taylor, W.; Teixeira Dias Castanheira, M.; Teixeira-Dias, P.; Temming, K. K.; Ten Kate, H.; Teng, P. K.; Terada, S.; Terashi, K.; Terron, J.; Terwort, M.; Testa, M.; Teuscher, R. J.; Tevlin, C. M.; Thadome, J.; Therhaag, J.; Theveneaux-Pelzer, T.; Thioye, M.; Thoma, S.; Thomas, J. P.; Thompson, E. N.; Thompson, P. D.; Thompson, P. D.; Thompson, A. S.; Thomson, E.; Thomson, M.; Thun, R. P.; Tic, T.; Tikhomirov, V. O.; Tikhonov, Y. A.; Timmermans, C. J. W. P.; Tipton, P.; Tique Aires Viegas, F. 
J.; Tisserant, S.; Tobias, J.; Toczek, B.; Todorov, T.; Todorova-Nova, S.; Toggerson, B.; Tojo, J.; Tokár, S.; Tokunaga, K.; Tokushuku, K.; Tollefson, K.; Tomoto, M.; Tompkins, L.; Toms, K.; Tonazzo, A.; Tong, G.; Tonoyan, A.; Topfel, C.; Topilin, N. D.; Torchiani, I.; Torrence, E.; Torró Pastor, E.; Toth, J.; Touchard, F.; Tovey, D. R.; Traynor, D.; Trefzger, T.; Treis, J.; Tremblet, L.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Trinh, T. N.; Tripiana, M. F.; Triplett, N.; Trischuk, W.; Trivedi, A.; Trocmé, B.; Troncon, C.; Trottier-McDonald, M.; Trzupek, A.; Tsarouchas, C.; Tseng, J. C.-L.; Tsiakiris, M.; Tsiareshka, P. V.; Tsionou, D.; Tsipolitis, G.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsukerman, I. I.; Tsulaia, V.; Tsung, J.-W.; Tsuno, S.; Tsybychev, D.; Tua, A.; Tuggle, J. M.; Turala, M.; Turecek, D.; Turk Cakir, I.; Turlay, E.; Turra, R.; Tuts, P. M.; Tykhonov, A.; Tylmad, M.; Tyndel, M.; Typaldos, D.; Tyrvainen, H.; Tzanakos, G.; Uchida, K.; Ueda, I.; Ueno, R.; Ugland, M.; Uhlenbrock, M.; Uhrmacher, M.; Ukegawa, F.; Unal, G.; Underwood, D. G.; Undrus, A.; Unel, G.; Unno, Y.; Urbaniec, D.; Urkovsky, E.; Urquijo, P.; Urrejola, P.; Usai, G.; Uslenghi, M.; Vacavant, L.; Vacek, V.; Vachon, B.; Vahsen, S.; Valderanis, C.; Valenta, J.; Valente, P.; Valentinetti, S.; Valkar, S.; Valladolid Gallego, E.; Vallecorsa, S.; Valls Ferrer, J. A.; van der Graaf, H.; van der Kraaij, E.; van der Leeuw, R.; van der Poel, E.; van der Ster, D.; van Eijk, B.; van Eldik, N.; van Gemmeren, P.; van Kesteren, Z.; van Vulpen, I.; Vandelli, W.; Vandoni, G.; Vaniachine, A.; Vankov, P.; Vannucci, F.; Varela Rodriguez, F.; Vari, R.; Varnes, E. W.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vassilakopoulos, V. I.; Vazeille, F.; Vegni, G.; Veillet, J. J.; Vellidis, C.; Veloso, F.; Veness, R.; Veneziano, S.; Ventura, A.; Ventura, D.; Venturi, M.; Venturi, N.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J. C.; Vest, A.; Vetterli, M. 
C.; Vichou, I.; Vickey, T.; Viehhauser, G. H. A.; Viel, S.; Villa, M.; Villaplana Perez, M.; Vilucchi, E.; Vincter, M. G.; Vinek, E.; Vinogradov, V. B.; Virchaux, M.; Viret, S.; Virzi, J.; Vitale, A.; Vitells, O.; Viti, M.; Vivarelli, I.; Vives Vaque, F.; Vlachos, S.; Vlasak, M.; Vlasov, N.; Vogel, A.; Vokac, P.; Volpi, M.; Volpini, G.; von der Schmitt, H.; von Loeben, J.; von Radziewski, H.; von Toerne, E.; Vorobel, V.; Vorobiev, A. P.; Vorwerk, V.; Vos, M.; Voss, R.; Voss, T. T.; Vossebeld, J. H.; Vovenko, A. S.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vu Anh, T.; Vuillermet, R.; Vukotic, I.; Wagner, W.; Wagner, P.; Wahlen, H.; Wakabayashi, J.; Walbersloh, J.; Walch, S.; Walder, J.; Walker, R.; Walkowiak, W.; Wall, R.; Waller, P.; Wang, C.; Wang, H.; Wang, J.; Wang, J.; Wang, J. C.; Wang, R.; Wang, S. M.; Warburton, A.; Ward, C. P.; Warsinsky, M.; Watkins, P. M.; Watson, A. T.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, A. T.; Waugh, B. M.; Weber, J.; Weber, M.; Weber, M. S.; Weber, P.; Weidberg, A. R.; Weigell, P.; Weingarten, J.; Weiser, C.; Wellenstein, H.; Wells, P. S.; Wen, M.; Wenaus, T.; Wendler, S.; Weng, Z.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, P.; Werth, M.; Wessels, M.; Whalen, K.; Wheeler-Ellis, S. J.; Whitaker, S. P.; White, A.; White, M. J.; White, S.; Whitehead, S. R.; Whiteson, D.; Whittington, D.; Wicek, F.; Wicke, D.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik, L. A. M.; Wijeratne, P. A.; Wildauer, A.; Wildt, M. A.; Wilhelm, I.; Wilkens, H. G.; Will, J. Z.; Williams, E.; Williams, H. H.; Willis, W.; Willocq, S.; Wilson, J. A.; Wilson, M. G.; Wilson, A.; Wingerter-Seez, I.; Winkelmann, S.; Winklmeier, F.; Wittgen, M.; Wolter, M. W.; Wolters, H.; Wooden, G.; Wosiek, B. K.; Wotschack, J.; Woudstra, M. J.; Wraight, K.; Wright, C.; Wrona, B.; Wu, S. L.; Wu, X.; Wu, Y.; Wulf, E.; Wunstorf, R.; Wynne, B. 
M.; Xaplanteris, L.; Xella, S.; Xie, S.; Xie, Y.; Xu, C.; Xu, D.; Xu, G.; Yabsley, B.; Yamada, M.; Yamamoto, A.; Yamamoto, K.; Yamamoto, S.; Yamamura, T.; Yamaoka, J.; Yamazaki, T.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, U. K.; Yang, Y.; Yang, Y.; Yang, Z.; Yanush, S.; Yao, W.-M.; Yao, Y.; Yasu, Y.; Ye, J.; Ye, S.; Yilmaz, M.; Yoosoofmiya, R.; Yorita, K.; Yoshida, R.; Young, C.; Youssef, S.; Yu, D.; Yu, J.; Yu, J.; Yuan, L.; Yurkewicz, A.; Zaets, V. G.; Zaidan, R.; Zaitsev, A. M.; Zajacova, Z.; Zalite, Yo. K.; Zanello, L.; Zarzhitsky, P.; Zaytsev, A.; Zeitnitz, C.; Zeller, M.; Zema, P. F.; Zemla, A.; Zendler, C.; Zenin, A. V.; Zenin, O.; Ženiš, T.; Zenonos, Z.; Zenz, S.; Zerwas, D.; Zevi Della Porta, G.; Zhan, Z.; Zhang, D.; Zhang, H.; Zhang, J.; Zhang, X.; Zhang, Z.; Zhao, L.; Zhao, T.; Zhao, Z.; Zhemchugov, A.; Zheng, S.; Zhong, J.; Zhou, B.; Zhou, N.; Zhou, Y.; Zhu, C. G.; Zhu, H.; Zhu, Y.; Zhuang, X.; Zhuravlov, V.; Zieminska, D.; Zilka, B.; Zimmermann, R.; Zimmermann, S.; Zimmermann, S.; Ziolkowski, M.; Zitoun, R.; Živković, L.; Zmouchko, V. V.; Zobernig, G.; Zoccoli, A.; Zolnierowski, Y.; Zsenei, A.; Zur Nedden, M.; Zutshi, V.; Zwalinski, L.; Atlas Collaboration
2011-06-01
Hitherto unobserved long-lived massive particles with electric and/or colour charge are predicted by a range of theories which extend the Standard Model. In this Letter a search is performed at the ATLAS experiment for slow-moving charged particles produced in proton-proton collisions at 7 TeV centre-of-mass energy at the LHC, using a dataset corresponding to an integrated luminosity of 34 pb⁻¹. No deviations from Standard Model expectations are found. This result is interpreted in a framework of supersymmetry models in which coloured sparticles can hadronise into long-lived bound hadronic states, termed R-hadrons, and 95% CL limits are set on the production cross-sections of squarks and gluinos. The influence of R-hadron interactions in matter was studied using a number of different models, and lower mass limits for stable sbottoms and stops are found to be 294 and 309 GeV respectively. The lower mass limit for a stable gluino lies in the range from 562 to 586 GeV depending on the model assumed. Each of these constraints is the most stringent to date.
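The 95% CL cross-section limits quoted above come from counting-experiment statistics. As a rough illustration only (this is not the collaboration's actual statistical machinery, and the background, efficiency, and event counts below are placeholder numbers), a simple Poisson upper limit on a signal cross-section can be sketched as:

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for N ~ Poisson(mu)."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def upper_limit(n_obs, bkg, cl=0.95):
    """Signal upper limit s such that P(N <= n_obs | s + bkg) = 1 - cl,
    i.e. a simple CLs+b-style counting-experiment limit, found by bisection."""
    lo, hi = 0.0, 100.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, mid + bkg) > 1 - cl:
            lo = mid  # limit lies above mid
        else:
            hi = mid  # limit lies below mid
    return 0.5 * (lo + hi)

# Hypothetical inputs: 0 events observed, negligible background,
# integrated luminosity 34 pb^-1, assumed signal efficiency 20%.
s_up = upper_limit(0, 0.0)         # upper limit on signal events (~3 for n=0, b=0)
sigma_up = s_up / (0.20 * 34.0)    # corresponding cross-section limit in pb
```

For zero observed events and no background this reproduces the familiar ~3-event limit (exp(-s) = 0.05 gives s = ln 20 ≈ 3.0); real LHC limits use far more elaborate likelihoods with systematic uncertainties.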
Heavy-flavour and quarkonium production in the LHC era: from proton-proton to heavy-ion collisions
Andronic, A.; Arleo, F.; Arnaldi, R.; ...
2016-02-29
This report reviews the study of open heavy-flavour and quarkonium production in high-energy hadronic collisions, as tools to investigate fundamental aspects of Quantum Chromodynamics, from the proton and nucleus structure at high energy to deconfinement and the properties of the Quark-Gluon Plasma. Emphasis is given to the lessons learnt from LHC Run 1 results, which are reviewed in a global picture with the results from SPS and RHIC at lower energies, as well as to the questions to be addressed in the future. The report covers heavy flavour and quarkonium production in proton-proton, proton-nucleus and nucleus-nucleus collisions. This includes discussion of the effects of hot and cold strongly interacting matter, quarkonium photo-production in nucleus-nucleus collisions and perspectives on the study of heavy flavour and quarkonium with upgrades of existing experiments and new experiments. The report results from the activity of the SaporeGravis network of the I3 Hadron Physics programme of the European Union 7th Framework Programme.
Supersymmetric Dark Matter after LHC Run 1
Bagnaschi, E. A.; Buchmueller, O.; Cavanaugh, R.; ...
2015-10-23
Different mechanisms operate in various regions of the MSSM parameter space to bring the relic density of the lightest neutralino, χ̃₁⁰, assumed here to be the lightest SUSY particle (LSP) and thus the dark matter (DM) particle, into the range allowed by astrophysics and cosmology. These mechanisms include coannihilation with some nearly degenerate next-to-lightest supersymmetric particle such as the lighter stau τ̃₁, stop t̃₁ or chargino χ̃₁±, resonant annihilation via direct-channel heavy Higgs bosons H/A, the light Higgs boson h or the Z boson, and enhanced annihilation via a larger Higgsino component of the LSP in the focus-point region. These mechanisms typically select lower-dimensional subspaces in MSSM scenarios such as the CMSSM, NUHM1, NUHM2, and pMSSM10. We analyze how future LHC and direct DM searches can complement each other in the exploration of the different DM mechanisms within these scenarios. We find that the τ̃₁ coannihilation regions of the CMSSM, NUHM1 and NUHM2 can largely be explored at the LHC via searches for missing-E_T events and long-lived charged particles, whereas their H/A funnel, focus-point and χ̃₁± coannihilation regions can largely be explored by the LZ and Darwin DM direct detection experiments. Furthermore, we find that the dominant DM mechanism in our pMSSM10 analysis is χ̃₁± coannihilation: parts of its parameter space can be explored by the LHC, and a larger portion by future direct DM searches.
Supersymmetric dark matter after LHC run 1
NASA Astrophysics Data System (ADS)
Bagnaschi, E. A.; Buchmueller, O.; Cavanaugh, R.; Citron, M.; De Roeck, A.; Dolan, M. J.; Ellis, J. R.; Flächer, H.; Heinemeyer, S.; Isidori, G.; Malik, S.; Martínez Santos, D.; Olive, K. A.; Sakurai, K.; de Vries, K. J.; Weiglein, G.
2015-10-01
Different mechanisms operate in various regions of the MSSM parameter space to bring the relic density of the lightest neutralino, χ̃₁⁰, assumed here to be the lightest SUSY particle (LSP) and thus the dark matter (DM) particle, into the range allowed by astrophysics and cosmology. These mechanisms include coannihilation with some nearly degenerate next-to-lightest supersymmetric particle such as the lighter stau τ̃₁, stop t̃₁ or chargino χ̃₁±, resonant annihilation via direct-channel heavy Higgs bosons H/A, the light Higgs boson h or the Z boson, and enhanced annihilation via a larger Higgsino component of the LSP in the focus-point region. These mechanisms typically select lower-dimensional subspaces in MSSM scenarios such as the CMSSM, NUHM1, NUHM2, and pMSSM10. We analyze how future LHC and direct DM searches can complement each other in the exploration of the different DM mechanisms within these scenarios. We find that the τ̃₁ coannihilation regions of the CMSSM, NUHM1 and NUHM2 can largely be explored at the LHC via searches for missing-E_T events and long-lived charged particles, whereas their H/A funnel, focus-point and χ̃₁± coannihilation regions can largely be explored by the LZ and Darwin DM direct detection experiments. We find that the dominant DM mechanism in our pMSSM10 analysis is χ̃₁± coannihilation: parts of its parameter space can be explored by the LHC, and a larger portion by future direct DM searches.
NASA Astrophysics Data System (ADS)
Ajaz, M.; Ullah, S.; Ali, Y.; Younis, H.
2018-02-01
In this research paper, comprehensive results on the double differential yield of π± and K± mesons, protons and antiprotons as a function of laboratory momentum are reported. These hadrons are produced in proton-carbon interactions at 60 GeV/c. The EPOS 1.99, EPOS-LHC and QGSJETII-04 models are used to perform the simulations. A comparison of the predictions of these models shows that the QGSJETII-04 model predicts higher yields of all the hadrons in most cases at the peak of the distribution. In this region, EPOS 1.99 and EPOS-LHC produce similar results. In most cases at higher hadron momenta, all three models are in good agreement. For protons, all models are in good agreement. EPOS-LHC gives a higher yield of antiprotons at high momentum values compared to the other two models. EPOS-LHC gives the higher prediction at the peak value for π+ mesons and protons in the higher polar angle intervals of 100 < θ < 420 and 100 < θ < 360, respectively, and EPOS 1.99 gives the higher prediction at the peak value for π- mesons for 140 < θ < 420. The model predictions, except for antiprotons, are compared with the data obtained by the NA61/SHINE experiment in proton-carbon collisions at 31 GeV/c, which clearly shows that the shape of the distributions in the models is similar to that of the data, but the yield in the data is lower because of the lower beam energy.
CMS tier structure and operation of the experiment-specific tasks in Germany
NASA Astrophysics Data System (ADS)
Nowack, A.
2008-07-01
In Germany, several university institutes and research centres take part in the CMS experiment. For data analysis, a number of computing centres at different Tier levels, ranging from Tier 1 to Tier 3, exist at these places. The German Tier 1 centre GridKa at the research centre in Karlsruhe serves all four LHC experiments as well as four non-LHC experiments. With respect to the CMS experiment, GridKa is mainly involved in central tasks. The Tier 2 centre in Germany consists of two sites, one at the research centre DESY in Hamburg and one at RWTH Aachen University, forming a federated Tier 2 centre. The two parts cover different aspects of a Tier 2 centre. The German Tier 3 centres are located at the research centre DESY in Hamburg, at RWTH Aachen University, and at the University of Karlsruhe. Furthermore, the building of a German user analysis facility is planned. Since the CMS community in Germany is rather small, good cooperation between the different sites is essential. This cooperation covers physics topics as well as technical and operational issues. All available communication channels such as email, phone, monthly video conferences, and regular personal meetings are used. For example, the distribution of data sets is coordinated globally within Germany. The CMS-specific services such as the data transfer tool PhEDEx and the Monte Carlo production are also operated by people from different sites in order to spread the knowledge widely and increase redundancy in terms of operators.
Extending the farm on external sites: the INFN Tier-1 experience
NASA Astrophysics Data System (ADS)
Boccali, T.; Cavalli, A.; Chiarelli, L.; Chierici, A.; Cesini, D.; Ciaschini, V.; Dal Pra, S.; dell'Agnello, L.; De Girolamo, D.; Falabella, A.; Fattibene, E.; Maron, G.; Prosperini, A.; Sapunenko, V.; Virgilio, S.; Zani, S.
2017-10-01
The Tier-1 at CNAF is the main INFN computing facility, offering computing and storage resources to more than 30 different scientific collaborations, including the 4 experiments at the LHC. A huge increase in computing needs is also foreseen in the coming years, mainly driven by the experiments at the LHC (especially starting with Run 3 from 2021) but also by other upcoming experiments such as CTA [1]. While we are considering the upgrade of the infrastructure of our data center, we are also evaluating the possibility of using CPU resources available in other data centres or even leased from commercial cloud providers. Hence, at the INFN Tier-1, besides participating in the EU project HNSciCloud, we have also pledged a small amount of computing resources (~2000 cores) located at the Bari ReCaS data center [2] for the WLCG experiments for 2016, and we are testing the use of resources provided by a commercial cloud provider. While the Bari ReCaS data center is directly connected to the GARR network [3], with the obvious advantage of a low-latency and high-bandwidth connection, in the case of the commercial provider we rely only on the General Purpose Network. In this paper we describe the set-up phase and the first results of these installations, started in the last quarter of 2015, focusing on the issues that we had to cope with and discussing the measured results in terms of efficiency.
The Hottest, and Most Liquid, Liquid in the Universe
NASA Astrophysics Data System (ADS)
Rajagopal, Krishna
2012-03-01
What was the universe like microseconds after the big bang? At very high temperatures, protons and neutrons fall apart --- the quarks that are ordinarily confined within them are freed. Before experiments at the Relativistic Heavy Ion Collider started recreating little droplets of big bang matter, it was thought to be a tenuous gas-like plasma. Now we know from experiments at RHIC and at the Large Hadron Collider that at these extreme temperatures nature serves up hot quark soup --- the hottest liquid in the universe and the liquid that flows with the least dissipation. The only other comparably liquid liquid is the coldest liquid in the universe, namely the fluid made of trapped fermionic atoms at microKelvin rather than TeraKelvin temperatures. These are two examples of strongly coupled fluids without any apparent quasiparticle description, a feature that they share with other phases of matter like the strange metal phase of the cuprate superconductors that aren't conventionally thought of as liquids but that are equally challenging to understand. I will describe how physicists are using RHIC and LHC experiments --- as well as calculations done using dualities between liquids and black holes discovered in string theory --- to discern the properties of hot quark soup. In this domain, string theory is answering questions posed by laboratory experiments. I will describe the opportunities and challenges for coming experiments at RHIC and the LHC, chief among them being understanding how a liquid with no apparent particulate description emerges from quarks and gluons.
An experiment in software reliability
NASA Technical Reports Server (NTRS)
Dunham, J. R.; Pierce, J. L.
1986-01-01
The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements and uses n-version programming for error detection, and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.
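The approximate log-linear error-rate pattern reported above (following Nagel and Skrivan) can be illustrated with a small fit. The per-fault failure rates below are synthetic numbers invented for illustration, not data from the experiment; the fit itself is ordinary least squares on the log rates:

```python
import math

# Hypothetical per-fault failure probabilities: under a log-linear pattern,
# the rate of the k-th detected fault drops roughly geometrically with k.
rates = [0.20, 0.055, 0.017, 0.0048, 0.0013]

# Least-squares fit of log(rate_k) = a + b*k; exp(b) is the factor by
# which each successive fault tends to be rarer than the previous one.
ks = list(range(1, len(rates) + 1))
logs = [math.log(r) for r in rates]
n = len(ks)
kbar = sum(ks) / n
lbar = sum(logs) / n
b = sum((k - kbar) * (l - lbar) for k, l in zip(ks, logs)) / \
    sum((k - kbar) ** 2 for k in ks)
a = lbar - b * kbar
decay_factor = math.exp(b)  # < 1 indicates the log-linear decay pattern
```

A decay factor well below 1 is what "significantly different error rates" across individual faults looks like under this model; interacting faults, as noted in the abstract, would show up as departures from the fitted line.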
NASA Astrophysics Data System (ADS)
Aboubrahim, Amin; Nath, Pran
2017-10-01
We investigate the possibility of testing supergravity unified models with scalar masses in the range 50-100 TeV and much lighter gaugino masses at the Large Hadron Collider. The analysis is carried out under the constraints that the models produce the Higgs boson mass consistent with experiment and also produce dark matter consistent with the WMAP and PLANCK experiments. A set of benchmarks in the supergravity parameter space is investigated using a combination of signal regions which are optimized for the model set. It is found that some of the models with scalar masses in the 50-100 TeV mass range are discoverable with as little as 100 fb⁻¹ of integrated luminosity and should be accessible at LHC Run II. The remaining benchmark models are found to be discoverable with less than 1000 fb⁻¹ of integrated luminosity and are thus testable in the high-luminosity era of the LHC, i.e., at the HL-LHC. It is shown that scalar masses in the 50-100 TeV range with much lighter gaugino masses produce unification of the gauge coupling constants, consistent with experimental data at low scale, with as good an accuracy (and sometimes even better) as models with low [O(1) TeV] weak-scale supersymmetry. Decays of the gravitinos for the supergravity model benchmarks are investigated and it is shown that they decay before big bang nucleosynthesis (BBN). Further, we investigate the nonthermal production of neutralinos from gravitino decay and find that the nonthermal contribution to the dark matter relic density is negligible relative to that from the thermal production of neutralinos for reheat temperatures after inflation up to 10⁹ GeV. An analysis of the direct detection of dark matter for supergravity grand unified models (SUGRA) with high scalar masses is also discussed.
SUGRA models with scalar masses in the range 50-100 TeV have several other attractive features: they help alleviate the supersymmetric CP problem and help suppress proton decay from baryon and lepton number violating dimension-five operators.
High Energy Colliders and Hidden Sectors
NASA Astrophysics Data System (ADS)
Dror, Asaf Jeff
This thesis explores two dominant frontiers of theoretical physics: high energy colliders and hidden sectors. The Large Hadron Collider (LHC) is just starting to reach its maximum operational capabilities. However, already with the current data, large classes of models are being put under significant pressure. It is crucial to understand whether the (thus far) null results are a consequence of a lack of a solution to the hierarchy problem around the weak scale or require expanding the search strategy employed at the LHC. It is the duty of the current generation of physicists to design new searches to ensure that no stone is left unturned. To this end, we study the sensitivity of the LHC to the couplings in the Standard Model top sector. We find that the LHC can significantly improve the measurement of the Z t_R t_R coupling through a novel search strategy, making use of the unitarity violation implied in such models. Analogously, we show that other couplings in the top sector can be measured with the same technique. Furthermore, we critically analyze a set of anomalies in the LHC data and how they may arise from consistent UV completions. We also propose a technique to measure the lifetimes of new colored particles with non-trivial spin. While the high energy frontier will continue to take data, the LHC is likely the only collider of its kind for the next couple of decades. On the other hand, low-energy experiments have a promising future, with many newly proposed experiments to probe the existence of particles well below the weak scale but with small couplings to the Standard Model. In this work we survey the different possibilities, focusing on the constraints as well as possible new hidden sector dynamics. In particular, we show that vector portals which couple to an anomalous current, e.g., baryon number, are significantly constrained by flavor-changing meson decays and rare Z decays.
Furthermore, we present a new mechanism for dark matter freezeout which depletes the dark sector through an out-of-equilibrium decay into the Standard Model.
Proceedings of the Ninth Annual Software Engineering Workshop
NASA Technical Reports Server (NTRS)
1984-01-01
Experiences in measurement, utilization, and evaluation of software methodologies, models, and tools are discussed. NASA's involvement in ever larger and more complex systems, like the space station project, provides a motive for the support of software engineering research and the exchange of ideas in such forums. The topics of current SEL research are software error studies, experiments with software development, and software tools.
Statistical modeling of software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1992-01-01
This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.
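The reliability growth under random testing and debugging described above can be sketched with a small simulation. This is a Jelinski-Moranda-style model chosen purely for illustration, not the experiment's actual GCS simulation setup, and all parameters are hypothetical:

```python
import random

def simulate_growth(n_faults=10, phi=0.01, seed=1):
    """Jelinski-Moranda-style sketch: the software starts with n_faults,
    each contributing failure rate phi. After each observed failure the
    triggering fault is removed, so the total failure rate drops and the
    inter-failure times grow on average as debugging proceeds."""
    rng = random.Random(seed)
    gaps = []
    remaining = n_faults
    while remaining > 0:
        rate = remaining * phi
        gaps.append(rng.expovariate(rate))  # time to the next failure
        remaining -= 1                      # debugging removes the fault
    return gaps

gaps = simulate_growth()
# Early gaps cluster near 1/(n_faults*phi); the final gap averages 1/phi,
# which is the "reliability growth" signature such simulations quantify.
```

Fitting models of this family to simulated (or real) inter-failure times is one way to make the statistical inferences about reliability growth that the abstract describes.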