Progress in Machine Learning Studies for the CMS Computing Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo
2017-12-06
Computing systems for the LHC experiments developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for the LHC experiments and its recent evolution can be found elsewhere, it is worth mentioning here the scale of the computing resources needed to fulfill the needs of the LHC experiments in Run-1 and Run-2 so far.
Grid site availability evaluation and monitoring at CMS
NASA Astrophysics Data System (ADS)
Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; Lammel, Stephan; Sciabà, Andrea
2017-10-01
The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from a hundred to well over ten thousand computing cores, and storage from tens of TBytes to tens of PBytes. In such a large computing setup, scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC, the site evaluation and monitoring system is being overhauled to enable faster detection of and reaction to failures and a more dynamic handling of computing resources. Enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.
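The following is a minimal sketch, not the actual CMS evaluation tooling, of how a daily availability score could be derived from functional-test results and compared against a readiness cut-off; the site names, threshold and data layout are invented for illustration.

# Minimal sketch (not the CMS site evaluation code): compute a daily site
# availability score from functional-test outcomes and flag sites below a
# readiness threshold. Threshold and test data are illustrative assumptions.
from collections import defaultdict

READINESS_THRESHOLD = 0.8  # assumed cut-off, for illustration only

def availability(test_results):
    """test_results: list of (site, passed) tuples for one day."""
    passed = defaultdict(int)
    total = defaultdict(int)
    for site, ok in test_results:
        total[site] += 1
        passed[site] += 1 if ok else 0
    return {site: passed[site] / total[site] for site in total}

def not_ready(scores):
    return sorted(site for site, score in scores.items() if score < READINESS_THRESHOLD)

results = [("T1_XX_Example", True), ("T1_XX_Example", False),
           ("T2_YY_Example", True), ("T2_YY_Example", True)]
scores = availability(results)
print(scores, not_ready(scores))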
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2017-01-01
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
Multicore job scheduling in the Worldwide LHC Computing Grid
NASA Astrophysics Data System (ADS)
Forti, A.; Pérez-Calero Yzquierdo, A.; Hartmann, T.; Alef, M.; Lahiff, A.; Templon, J.; Dal Pra, S.; Gila, M.; Skipsey, S.; Acosta-Silva, C.; Filipcic, A.; Walker, R.; Walker, C. J.; Traynor, D.; Gadrat, S.
2015-12-01
After the successful first run of the LHC, data taking is scheduled to restart in Summer 2015 with experimental conditions leading to increased data volumes and event complexity. In order to process the data generated in such a scenario and exploit the multicore architectures of current CPUs, the LHC experiments have developed parallelized software for data reconstruction and simulation. However, a good fraction of their computing effort is still expected to be executed as single-core tasks. Therefore, jobs with diverse resource requirements will be distributed across the Worldwide LHC Computing Grid (WLCG), making workload scheduling a complex problem in itself. In response to this challenge, the WLCG Multicore Deployment Task Force has been created in order to coordinate the joint effort from experiments and WLCG sites. The main objective is to ensure the convergence of approaches from the different LHC Virtual Organizations (VOs) to make the best use of the shared resources in order to satisfy their new computing needs, minimizing any inefficiency originating from the scheduling mechanisms, and without imposing unnecessary complexities on the way sites manage their resources. This paper describes the activities and progress of the Task Force related to the aforementioned topics, including experiences from key sites on how to best use different batch system technologies, the evolution of workload submission tools by the experiments and the knowledge gained from scale tests of the different proposed job submission strategies.
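To make the scheduling problem above concrete, here is a toy sketch of packing a mix of single-core and multicore jobs onto fixed-size worker nodes; the node size, job mix and first-fit strategy are illustrative assumptions, not WLCG or batch-system behaviour.

# Toy sketch of the multicore scheduling problem: pack single-core and
# multicore jobs onto fixed-size nodes. Node size and job mix are invented.
def schedule(jobs, cores_per_node=8, nodes=4):
    free = [cores_per_node] * nodes           # free cores per node
    placement = []                            # (job_id, node_index or None)
    # place large jobs first to limit fragmentation
    for job_id, cores in sorted(jobs, key=lambda j: -j[1]):
        for i, available in enumerate(free):
            if available >= cores:
                free[i] -= cores
                placement.append((job_id, i))
                break
        else:
            placement.append((job_id, None))  # left pending
    return placement, free

jobs = [("mc_reco", 8), ("user_ana_1", 1), ("user_ana_2", 1), ("mc_sim", 4)]
print(schedule(jobs))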
NASA Astrophysics Data System (ADS)
Belyaev, A.; Berezhnaya, A.; Betev, L.; Buncic, P.; De, K.; Drizhuk, D.; Klimentov, A.; Lazin, Y.; Lyalin, I.; Mashinistov, R.; Novikov, A.; Oleynik, D.; Polyakov, A.; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tkachenko, I.; Yasnopolskiy, L.
2015-12-01
The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by higher LHC energies from April 2015 (LHC Run 2). The need for simulation, data processing and analysis would overwhelm the expected capacity of the grid infrastructure computing facilities deployed by the Worldwide LHC Computing Grid (WLCG). To meet this challenge, the integration of opportunistic resources into the LHC computing model is highly important. The Tier-1 facility at the Kurchatov Institute (NRC-KI) in Moscow is part of the WLCG and will process, simulate and store up to 10% of the total data obtained from the ALICE, ATLAS and LHCb experiments. In addition, the Kurchatov Institute has supercomputers with a peak performance of 0.12 PFLOPS. Delegating even a fraction of these supercomputing resources to LHC computing will notably increase the total capacity. In 2014, the development of a portal combining the Tier-1 and a supercomputer at the Kurchatov Institute was started to provide common interfaces and storage. The portal will be used not only for HENP experiments, but also by other data- and compute-intensive sciences, such as biology with genome sequencing analysis and astrophysics with cosmic ray analysis, antimatter and dark matter searches, etc.
Processing of the WLCG monitoring data using NoSQL
NASA Astrophysics Data System (ADS)
Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.
2014-06-01
The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.
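As a hedged illustration of the kind of aggregation such a monitoring backend performs, the sketch below groups raw transfer records by source, destination and hour in plain Python; the record fields and values are assumptions, not the Experiment Dashboard schema or a specific NoSQL API.

# Illustrative only: the style of aggregation a monitoring backend would run
# over raw transfer records, grouped by (source, destination, hour).
from collections import defaultdict

def aggregate(records):
    summary = defaultdict(lambda: {"bytes": 0, "ok": 0, "failed": 0})
    for r in records:
        key = (r["src"], r["dst"], r["timestamp"] // 3600)   # hourly bucket
        bucket = summary[key]
        bucket["bytes"] += r["bytes"]
        bucket["ok" if r["success"] else "failed"] += 1
    return dict(summary)

records = [
    {"src": "T1_A", "dst": "T2_B", "timestamp": 36005, "bytes": 2 * 10**9, "success": True},
    {"src": "T1_A", "dst": "T2_B", "timestamp": 36900, "bytes": 0, "success": False},
]
print(aggregate(records))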
NASA Astrophysics Data System (ADS)
Bonacorsi, D.; Gutsche, O.
The Worldwide LHC Computing Grid (WLCG) project decided in March 2009 to perform scale tests of parts of its overall Grid infrastructure before the start of LHC data taking. The "Scale Test for the Experiment Program" (STEP'09) was performed mainly in June 2009, with further selected tests in September and October 2009, and emphasized the simultaneous testing of the computing systems of all four LHC experiments. CMS tested its Tier-0 tape writing and processing capabilities. The Tier-1 tape systems were stress tested using the complete range of Tier-1 work-flows: transfer from Tier-0 and custody of data on tape, processing and subsequent archival, redistribution of datasets amongst all Tier-1 sites as well as burst transfers of datasets to Tier-2 sites. The Tier-2 analysis capacity was tested using bulk analysis job submissions to backfill normal user activity. In this talk, we report on the different tests performed and present their post-mortem analysis.
CERN openlab: Engaging industry for innovation in the LHC Run 3-4 R&D programme
NASA Astrophysics Data System (ADS)
Girone, M.; Purcell, A.; Di Meglio, A.; Rademakers, F.; Gunne, K.; Pachou, M.; Pavlou, S.
2017-10-01
LHC Run 3 and Run 4 represent an unprecedented challenge for HEP computing in terms of both data volume and complexity. New approaches are needed for how data is collected and filtered, processed, moved, stored and analysed if these challenges are to be met with a realistic budget. To develop innovative techniques we are fostering relationships with industry leaders. CERN openlab is a unique resource for public-private partnership between CERN and leading Information and Communication Technology (ICT) companies. Its mission is to accelerate the development of cutting-edge solutions to be used by the worldwide HEP community. In 2015, CERN openlab started its phase V with a strong focus on tackling the upcoming LHC challenges. Several R&D programs are ongoing in the areas of data acquisition, networks and connectivity, data storage architectures, computing provisioning, computing platforms and code optimisation, and data analytics. This paper gives an overview of the various innovative technologies that are currently being explored by CERN openlab V and discusses the long-term strategies that are pursued by the LHC communities with the help of industry in closing the technological gap in processing and storage needs expected in Run 3 and Run 4.
Evolution of the ATLAS distributed computing system during the LHC long shutdown
NASA Astrophysics Data System (ADS)
Campana, S.; Atlas Collaboration
2014-06-01
The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.
PanDA: Exascale Federation of Resources for the ATLAS Experiment at the LHC
NASA Astrophysics Data System (ADS)
Barreiro Megino, Fernando; Caballero Bejar, Jose; De, Kaushik; Hover, John; Klimentov, Alexei; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Petrosyan, Artem; Wenaus, Torre
2016-02-01
After a scheduled maintenance and upgrade period, the world's largest and most powerful machine - the Large Hadron Collider (LHC) - is about to enter its second run at unprecedented energies. In order to exploit the scientific potential of the machine, the experiments at the LHC face computational challenges with enormous data volumes that need to be analysed by thousands of physics users and compared to simulated data. Given diverse funding constraints, the computational resources for the LHC have been deployed in a worldwide mesh of data centres, connected to each other through Grid technologies. The PanDA (Production and Distributed Analysis) system was developed in 2005 for the ATLAS experiment on top of this heterogeneous infrastructure to seamlessly integrate the computational resources and give the users the feeling of a unique system. Since its origins, PanDA has evolved together with upcoming computing paradigms in and outside HEP, such as changes in the networking model, Cloud Computing and HPC. It is currently running steadily up to 200 thousand simultaneous cores (limited by the available resources for ATLAS), up to two million aggregated jobs per day and processes over an exabyte of data per year. The success of PanDA in ATLAS is triggering the widespread adoption and testing by other experiments. In this contribution we will give an overview of the PanDA components and focus on the new features and upcoming challenges that are relevant to the next decade of distributed computing workload management using PanDA.
Data Reprocessing on Worldwide Distributed Systems
NASA Astrophysics Data System (ADS)
Wicke, Daniel
The DØ experiment faces many challenges in terms of enabling access to large datasets for physicists on four continents. The strategy for solving these problems on worldwide distributed computing clusters is presented. Since the beginning of Run II of the Tevatron (March 2001), all Monte-Carlo simulations for the experiment have been produced at remote systems. For data analysis, a system of regional analysis centers (RACs) was established which supplies the associated institutes with the data. This structure, which is similar to the tiered structure foreseen for the LHC, was used in Fall 2003 to reprocess all DØ data with a much improved version of the reconstruction software. This makes DØ the first running experiment to have implemented and operated all important computing tasks of a high energy physics experiment on systems distributed worldwide.
Federated data storage system prototype for LHC experiments and data intensive science
NASA Astrophysics Data System (ADS)
Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.
2017-10-01
The rapid increase of the data volume from the experiments running at the Large Hadron Collider (LHC) prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area aim at the task of uniting their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. Studies include the development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations, such as read/write/transfer and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests, including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns, and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and a reformation of computing style, for instance how a bioinformatics program running on supercomputers can read and write data from the federated storage.
Experience in using commercial clouds in CMS
NASA Astrophysics Data System (ADS)
Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration
2017-10-01
Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and the cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.
Distributing and storing data efficiently by means of special datasets in the ATLAS collaboration
NASA Astrophysics Data System (ADS)
Köneke, Karsten; ATLAS Collaboration
2011-12-01
With the start of the LHC physics program, the ATLAS experiment started to record vast amounts of data. This data has to be distributed and stored on the world-wide computing grid in a smart way in order to enable an effective and efficient analysis by physicists. This article describes how the ATLAS collaboration chose to create specialized reduced datasets in order to efficiently use computing resources and facilitate physics analyses.
Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue
NASA Astrophysics Data System (ADS)
Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; Bagliesi, Giuseppe; Belforte, Stephano; Campana, Simone; Dimou, Maria; Flix, Jose; Forti, Alessandra; di Girolamo, A.; Karavakis, Edward; Lammel, Stephan; Litmaath, Maarten; Sciaba, Andrea; Valassi, Andrea
2017-10-01
The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and a detailed description of service configuration. Currently this information is scattered over multiple generic information sources such as GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not make it easy to validate topology and configuration information. Moreover, information in the various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments rely more and more on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources and new information sources, and allow new data structures to be implemented easily, following the evolution of the computing models and operations of the experiments.
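A minimal sketch of the cross-source validation idea described above follows: it compares service records from two hypothetical providers and reports missing or inconsistent entries. The provider names, field names and data are invented and do not reflect the actual CRIC implementation.

# Minimal sketch: compare service records from two hypothetical information
# sources and report services that are missing or inconsistent. Field names
# and hosts are placeholders, not the real GOCDB/OIM/BDII schemas.
def validate(source_a, source_b):
    issues = []
    for name in sorted(set(source_a) | set(source_b)):
        a, b = source_a.get(name), source_b.get(name)
        if a is None or b is None:
            issues.append((name, "missing in one source"))
        elif a != b:
            issues.append((name, f"mismatch: {a} vs {b}"))
    return issues

registry_like = {"se01.example.org": {"type": "SRM", "state": "production"}}
vo_feed_like = {"se01.example.org": {"type": "SRM", "state": "downtime"},
                "ce02.example.org": {"type": "CE", "state": "production"}}
print(validate(registry_like, vo_feed_like))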
How Much Higher Can HTCondor Fly?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fajardo, E. M.; Dost, J. M.; Holzman, B.
The HTCondor high throughput computing system is heavily used in the high energy physics (HEP) community as the batch system for several Worldwide LHC Computing Grid (WLCG) resources. Moreover, it is the backbone of GlideinWMS, the pilot system used by the computing organization of the Compact Muon Solenoid (CMS) experiment. To prepare for LHC Run 2, we probed the scalability limits of new versions and configurations of HTCondor with a goal of reaching 200,000 simultaneously running jobs in a single internationally distributed dynamic pool. In this paper, we first describe how we created an opportunistic distributed testbed capable of exercising runs with 200,000 simultaneous jobs without impacting production. This testbed methodology is appropriate not only for scale testing HTCondor, but potentially for many other services. In addition to the test conditions and the testbed topology, we include the suggested configuration options used to obtain the scaling results, and describe some of the changes to HTCondor inspired by our testing that enabled sustained operations at scales well beyond previous limits.
A New Generation of Networks and Computing Models for High Energy Physics in the LHC Era
NASA Astrophysics Data System (ADS)
Newman, H.
2011-12-01
Wide area networks of increasing end-to-end capacity and capability are vital for every phase of high energy physicists' work. Our bandwidth usage, and the typical capacity of the major national backbones and intercontinental links used by our field, have progressed by a factor of several hundred times over the past decade. With the opening of the LHC era in 2009-10 and the prospects for discoveries in the upcoming LHC run, the outlook is for a continuation or an acceleration of these trends using next generation networks over the next few years. Responding to the need to rapidly distribute and access datasets of tens to hundreds of terabytes drawn from multi-petabyte data stores, high energy physicists working with network engineers and computer scientists are learning to use long range networks effectively on an increasing scale, and aggregate flows reaching the 100 Gbps range have been observed. The progress of the LHC, and the unprecedented ability of the experiments to produce results rapidly using worldwide distributed data processing and analysis, has sparked major, emerging changes in the LHC Computing Models, which are moving from the classic hierarchical model designed a decade ago to more agile peer-to-peer-like models that make more effective use of the resources at Tier2 and Tier3 sites located throughout the world. A new requirements working group has gauged the needs of Tier2 centers, and charged the LHCOPN group that runs the network interconnecting the LHC Tier1s with designing a new architecture interconnecting the Tier2s. As seen from the perspective of ICFA's Standing Committee on Inter-regional Connectivity (SCIC), the Digital Divide that separates physicists in several regions of the developing world from those in the developed world remains acute, although many countries have made major advances through the rapid installation of modern network infrastructures. A case in point is Africa, where a new round of undersea cables promises to transform the continent.
Technologies for Large Data Management in Scientific Computing
NASA Astrophysics Data System (ADS)
Pace, Alberto
2014-01-01
In recent years, intense usage of computing has been the main strategy of investigations in several scientific research projects. The progress in computing technology has opened unprecedented opportunities for systematic collection of experimental data and the associated analysis that were considered impossible only a few years ago. This paper focuses on the strategies in use: it reviews the various components that are necessary for an effective solution that ensures the storage, the long-term preservation, and the worldwide distribution of the large quantities of data that are necessary in a large scientific research project. The paper also mentions several examples of data management solutions used in High Energy Physics for the CERN Large Hadron Collider (LHC) experiments in Geneva, Switzerland, which generate more than 30,000 terabytes of data every year that need to be preserved, analyzed, and made available to a community of several tens of thousands of scientists worldwide.
A practical approach to virtualization in HEP
NASA Astrophysics Data System (ADS)
Buncic, P.; Aguado Sánchez, C.; Blomer, J.; Harutyunyan, A.; Mudrinic, M.
2011-01-01
In the attempt to solve the problem of processing data coming from LHC experiments at CERN at a rate of 15 PB per year, for almost a decade the High Energy Physics (HEP) community has focused its efforts on the development of the Worldwide LHC Computing Grid. This generated large interest and expectations, promising to revolutionize computing. Meanwhile, having initially taken part in the Grid standardization process, industry has moved in a different direction and started promoting the Cloud Computing paradigm, which aims to solve problems on a similar scale and in an equally seamless way as was expected in the idealized Grid approach. A key enabling technology behind Cloud computing is server virtualization. In early 2008, an R&D project was established in the PH-SFT group at CERN to investigate how virtualization technology could be used to improve and simplify the daily interaction of physicists with experiment software frameworks and the Grid infrastructure. In this article we shall first briefly compare the Grid and Cloud computing paradigms and then summarize the results of the R&D activity, pointing out where and how virtualization technology could be effectively used in our field in order to maximize practical benefits whilst avoiding potential pitfalls.
The LHCb Grid Simulation: Proof of Concept
NASA Astrophysics Data System (ADS)
Hushchyn, M.; Ustyuzhanin, A.; Arzymatov, K.; Roiser, S.; Baranov, A.
2017-10-01
The Worldwide LHC Computing Grid provides researchers in different geographical locations with access to data and to the computational resources needed to analyse it. The grid has a hierarchical topology with multiple sites distributed over the world, with varying numbers of CPUs, amounts of disk storage and connection bandwidth. Job scheduling and data distribution strategy are key elements of grid performance. Optimization of algorithms for those tasks requires testing them on the real grid, which is hard to achieve. Having a grid simulator might simplify this task and therefore lead to more optimal scheduling and data placement algorithms. In this paper we demonstrate a grid simulator for the LHCb distributed computing software.
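The sketch below is a deliberately simplified discrete-event simulation in the spirit of what the abstract describes: jobs arrive, are dispatched to the site that can start them earliest, and run for a time scaled by the site's CPU count. Site names, CPU counts and workloads are invented; this is not the LHCb simulator.

# Deliberately simplified grid simulation sketch: dispatch each arriving job
# to the site that can start it earliest; run time scales with site CPUs.
def simulate(jobs, sites):
    """jobs: list of (arrival_time, work_units); sites: {name: cpus}."""
    busy_until = {s: 0.0 for s in sites}
    finished = []
    for arrival, work in jobs:
        site = min(sites, key=lambda s: max(busy_until[s], arrival))
        start = max(busy_until[site], arrival)
        busy_until[site] = start + work / sites[site]
        finished.append((busy_until[site], site))
    return sorted(finished)

print(simulate(jobs=[(0.0, 100), (1.0, 50), (2.0, 200)],
               sites={"SITE_A": 10, "SITE_B": 4, "SITE_C": 6}))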
Online production validation in a HEP environment
NASA Astrophysics Data System (ADS)
Harenberg, T.; Kuhl, T.; Lang, N.; Mättig, P.; Sandhoff, M.; Schwanenberger, C.; Volkmer, F.
2017-03-01
In high energy physics (HEP) event simulations, petabytes of data are processed and stored, requiring millions of CPU-years. This enormous demand for computing resources is handled by centers distributed worldwide, which form part of the LHC computing grid. The consumption of such a significant amount of resources demands efficient production of simulation and early detection of potential errors. In this article we present a new monitoring framework for grid environments, which polls a measure of data quality during job execution. This online monitoring facilitates the early detection of configuration errors (especially in simulation parameters), and may thus contribute to significant savings in computing resources.
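As an illustration only of the online polling idea, the sketch below checks a per-job data-quality ratio at regular intervals and aborts when it drifts outside an allowed band; the quality measure, thresholds and sample values are placeholders, not the framework's actual metrics.

# Sketch of online quality polling: stop a job early when a monitored quality
# ratio leaves an allowed band. Thresholds and samples are assumptions.
def watch(job_quality_samples, lower=0.95, upper=1.05):
    """job_quality_samples: iterable of (events_processed, quality_ratio)."""
    for events, ratio in job_quality_samples:
        if not (lower <= ratio <= upper):
            return f"abort after {events} events: quality ratio {ratio:.3f} out of range"
    return "job completed with nominal quality"

samples = [(1000, 1.00), (2000, 1.01), (3000, 1.12)]
print(watch(samples))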
Towards a Global Service Registry for the World-Wide LHC Computing Grid
NASA Astrophysics Data System (ADS)
Field, Laurence; Alandes Pradillo, Maria; Di Girolamo, Alessandro
2014-06-01
The World-Wide LHC Computing Grid encompasses a set of heterogeneous information systems: from central portals such as the Open Science Grid's Information Management System and the Grid Operations Centre Database, to the WLCG information system, where the information sources are the Grid services themselves. Providing a consistent view of the information, which involves synchronising all these information systems, is a challenging activity that has led the LHC virtual organisations to create their own configuration databases. This experience, whereby each virtual organisation's configuration database interfaces with multiple information systems, has resulted in the duplication of effort, especially relating to the use of manual checks for the handling of inconsistencies. The Global Service Registry aims to address this issue by providing a centralised service that aggregates information from multiple information systems. It shows both information on registered resources (i.e. what should be there) and available resources (i.e. what is there). The main purpose is to simplify the synchronisation of the virtual organisations' own configuration databases, which are used for job submission and data management, through the provision of a single interface for obtaining all the information. By centralising the information, automated consistency and validation checks can be performed to improve the overall quality of the information provided. Although internally the GLUE 2.0 information model is used for the purpose of integration, the Global Service Registry is not dependent on any particular information model for ingestion or dissemination. The intention is to allow the virtual organisations' configuration databases to be decoupled from the underlying information systems in a transparent way and hence simplify any possible future migration due to the evolution of those systems. This paper presents the Global Service Registry architecture, its advantages compared to the current situation, and how it can support the evolution of information systems.
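A minimal sketch of the "registered versus available" comparison mentioned above: diff what an information system declares against what is actually discovered, so that stale or unregistered services stand out. The host names and data are invented, not taken from any real registry.

# Sketch of a registered-versus-available diff over two sets of service hosts.
def diff(registered, available):
    return {
        "declared_but_absent": sorted(registered - available),
        "present_but_undeclared": sorted(available - registered),
    }

registered = {"ce01.example.org", "se01.example.org"}
available = {"se01.example.org", "ce03.example.org"}
print(diff(registered, available))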
Dashboard Task Monitor for Managing ATLAS User Analysis on the Grid
NASA Astrophysics Data System (ADS)
Sargsyan, L.; Andreeva, J.; Jha, M.; Karavakis, E.; Kokoszkiewicz, L.; Saiz, P.; Schovancova, J.; Tuckett, D.; Atlas Collaboration
2014-06-01
The organization of the distributed user analysis on the Worldwide LHC Computing Grid (WLCG) infrastructure is one of the most challenging tasks among the computing activities at the Large Hadron Collider. The Experiment Dashboard offers a solution that not only monitors but also manages (kill, resubmit) user tasks and jobs via a web interface. The ATLAS Dashboard Task Monitor provides analysis users with a tool that is independent of the operating system and Grid environment. This contribution describes the functionality of the application and its implementation details, in particular authentication, authorization and audit of the management operations.
Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi
NASA Astrophysics Data System (ADS)
Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad
2015-05-01
Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
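To show the performance-per-watt metric in practice, here is a simple sketch that ranks platforms by benchmark throughput divided by measured power draw; the platform figures are placeholders, not measurements from this paper.

# Sketch of a performance-per-watt ranking. The throughput and power numbers
# below are invented placeholders, not results from the study above.
def perf_per_watt(platforms):
    return sorted(((events_s / watts, name)
                   for name, (events_s, watts) in platforms.items()), reverse=True)

platforms = {
    "Xeon": (120.0, 190.0),          # (events/s, watts) - illustrative values
    "Xeon Phi": (150.0, 230.0),
    "X-Gene ARMv8": (45.0, 60.0),
}
for score, name in perf_per_watt(platforms):
    print(f"{name}: {score:.2f} events/s per watt")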
CMS Centres Worldwide - a New Collaborative Infrastructure
NASA Astrophysics Data System (ADS)
Taylor, Lucas
2011-12-01
The CMS Experiment at the LHC has established a network of more than fifty inter-connected "CMS Centres" at CERN and in institutes in the Americas, Asia, Australasia, and Europe. These facilities are used by people doing CMS detector and computing grid operations, remote shifts, data quality monitoring and analysis, as well as education and outreach. We present the computing, software, and collaborative tools and videoconferencing systems. These include permanently running "telepresence" video links (hardware-based H.323, EVO and Vidyo), Webcasts, and generic Web tools such as CMS-TV for broadcasting live monitoring and outreach information. Being Web-based and experiment-independent, these systems could easily be extended to other organizations. We describe the experiences of using CMS Centres Worldwide in the CMS data-taking operations as well as for major media events with several hundred TV channels, radio stations, and many more press journalists simultaneously around the world.
Volunteer Clouds and Citizen Cyberscience for LHC Physics
NASA Astrophysics Data System (ADS)
Aguado Sanchez, Carlos; Blomer, Jakob; Buncic, Predrag; Chen, Gang; Ellis, John; Garcia Quintas, David; Harutyunyan, Artem; Grey, Francois; Lombrana Gonzalez, Daniel; Marquina, Miguel; Mato, Pere; Rantala, Jarno; Schulz, Holger; Segal, Ben; Sharma, Archana; Skands, Peter; Weir, David; Wu, Jie; Wu, Wenjing; Yadav, Rohit
2011-12-01
Computing for the LHC, and for HEP more generally, is traditionally viewed as requiring specialized infrastructure and software environments, and therefore not compatible with the recent trend in "volunteer computing", where volunteers supply free processing time on ordinary PCs and laptops via standard Internet connections. In this paper, we demonstrate that with the use of virtual machine technology, at least some standard LHC computing tasks can be tackled with volunteer computing resources. Specifically, by presenting volunteer computing resources to HEP scientists as a "volunteer cloud", essentially identical to a Grid or dedicated cluster from a job submission perspective, LHC simulations can be processed effectively. This article outlines both the technical steps required for such a solution and the implications for LHC computing as well as for LHC public outreach and for participation by scientists from developing regions in LHC research.
Evolution of grid-wide access to database resident information in ATLAS using Frontier
NASA Astrophysics Data System (ADS)
Barberis, D.; Bujor, F.; de Stefano, J.; Dewhurst, A. L.; Dykstra, D.; Front, D.; Gallas, E.; Gamboa, C. F.; Luehring, F.; Walker, R.
2012-12-01
The ATLAS experiment deployed Frontier technology worldwide during the initial year of LHC collision data taking to enable user analysis jobs running on the Worldwide LHC Computing Grid to access database-resident data. Since that time, the deployment model has evolved to optimize resources, improve performance, and streamline maintenance of Frontier and related infrastructure. In this presentation we focus on the specific changes in the deployment and improvements undertaken, such as the optimization of cache and launchpad location, the use of RPMs for more uniform deployment of underlying Frontier-related components, improvements in monitoring, optimization of fail-over, and an increasing use of a centrally managed database containing site-specific information (for configuration of services and monitoring). In addition, analysis of Frontier logs has given us a deeper understanding of problematic queries and of use cases. Use of the system has grown beyond user analysis and subsystem-specific tasks such as calibration and alignment, extending into production processing areas such as initial reconstruction and trigger reprocessing. With a more robust and tuned system, we are better equipped to satisfy the still-growing number of diverse clients and the demands of increasingly sophisticated processing and analysis.
LHC@Home: a BOINC-based volunteer computing infrastructure for physics studies at CERN
NASA Astrophysics Data System (ADS)
Barranco, Javier; Cai, Yunhai; Cameron, David; Crouch, Matthew; Maria, Riccardo De; Field, Laurence; Giovannozzi, Massimo; Hermes, Pascal; Høimyr, Nils; Kaltchev, Dobrin; Karastathis, Nikos; Luzzi, Cinzia; Maclean, Ewen; McIntosh, Eric; Mereghetti, Alessio; Molson, James; Nosochkov, Yuri; Pieloni, Tatiana; Reid, Ivan D.; Rivkin, Lenny; Segal, Ben; Sjobak, Kyrre; Skands, Peter; Tambasco, Claudia; Veken, Frederik Van der; Zacharov, Igor
2017-12-01
The LHC@Home BOINC project has provided computing capacity for numerical simulations to researchers at CERN since 2004, and has since 2011 been expanded with a wider range of applications. The traditional CERN accelerator physics simulation code SixTrack enjoys continuing volunteer support, and thanks to virtualisation a number of applications from the LHC experiment collaborations and particle theory groups have joined the consolidated LHC@Home BOINC project. This paper addresses the challenges related to traditional and virtualized applications in the BOINC environment, and how volunteer computing has been integrated into the overall computing strategy of the laboratory through the consolidated LHC@Home service. Thanks to the computing power provided by volunteers joining LHC@Home, numerous accelerator beam physics studies have been carried out, yielding an improved understanding of charged particle dynamics in the CERN Large Hadron Collider (LHC) and its future upgrades. The main results are highlighted in this paper.
Future computing platforms for science in a power constrained era
Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; ...
2015-12-23
Power consumption will be a key constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics (HEP). This makes performance-per-watt a crucial metric for selecting cost-efficient computing solutions. For this paper, we have done a wide survey of current and emerging architectures becoming available on the market including x86-64 variants, ARMv7 32-bit, ARMv8 64-bit, Many-Core and GPU solutions, as well as newer System-on-Chip (SoC) solutions. We compare performance and energy efficiency using an evolving set of standardized HEP-related benchmarks and power measurement techniques we have been developing. In conclusion, we evaluate the potential for use of such computing solutions in the context of DHTC systems, such as the Worldwide LHC Computing Grid (WLCG).
Challenging data and workload management in CMS Computing with network-aware systems
NASA Astrophysics Data System (ADS)
Bonacorsi, D.; Wildish, T.
2014-06-01
After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering an operational review phase in order to concretely assess areas of possible improvement and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations - as from the MONARC model - in terms of performance, stability and reliability. The low-latency transfer of PetaBytes of CMS data among dozens of WLCG Tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable networks. Another example is the exploitation of WAN data access over data federations in CMS. A new emerging area of work is the exploitation of Intelligent Network Services, including bandwidth-on-demand concepts. In this paper, we review the work done in CMS on this, and the next steps.
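As a hedged sketch of a network-aware placement decision of the kind discussed above, the code below picks the source replica with the lowest estimated transfer time from assumed bandwidth and latency figures; the site names and link metrics are invented, and this is not PhEDEx logic.

# Sketch of network-aware source selection: estimate transfer time from
# assumed bandwidth and round-trip latency, then pick the best source site.
def best_source(replicas, size_gb):
    def estimate(link):
        bandwidth_gbps, rtt_ms = link
        return size_gb * 8 / bandwidth_gbps + rtt_ms / 1000.0   # seconds, rough model
    return min(replicas, key=lambda site: estimate(replicas[site]))

replicas = {"SITE_1": (8.0, 120.0), "SITE_2": (4.0, 25.0), "SITE_3": (10.0, 15.0)}
print(best_source(replicas, size_gb=500))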
A world-wide databridge supported by a commercial cloud provider
NASA Astrophysics Data System (ADS)
Tat Cheung, Kwong; Field, Laurence; Furano, Fabrizio
2017-10-01
Volunteer computing has the potential to provide significant additional computing capacity for the LHC experiments. One of the challenges with exploiting volunteer computing is to support a global community of volunteers that provides heterogeneous resources. However, high energy physics applications require more data input and output than the CPU-intensive applications that are typically used by other volunteer computing projects. While the so-called databridge has already been successfully proposed as a method to span the untrusted and trusted domains of volunteer computing and Grid computing respectively, globally transferring data between potentially poor-performing residential networks and CERN could be unreliable, leading to wasted resource usage. The expectation is that by placing a storage endpoint that is part of a wider, flexible geographical databridge deployment closer to the volunteers, the transfer success rate and the overall performance can be improved. This contribution investigates the provision of a globally distributed databridge implemented upon a commercial cloud provider.
The HEPiX Virtualisation Working Group: Towards a Grid of Clouds
NASA Astrophysics Data System (ADS)
Cass, Tony
2012-12-01
The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they have a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.
The future of PanDA in ATLAS distributed computing
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.
2015-12-01
Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.
Green, Dan
2016-12-14
The demise of the SSC in the U.S. created an upheaval in the U.S. high energy physics (HEP) community. Here, the subsequent redirection of HEP efforts to the CERN Large Hadron Collider (LHC) can perhaps be seen as informative about possible future paths for worldwide collaboration on future HEP megaprojects.
Monitoring techniques and alarm procedures for CMS services and sites in WLCG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molina-Perez, J.; Bonacorsi, D.; Gutsche, O.
2012-01-01
The CMS offline computing system is composed of roughly 80 sites (including the most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS: the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, while the latter measures the quality of service and warns managers when a service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel contributes to detecting and reacting promptly to any unexpected error and hence ensures that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS computing facilities and infrastructure to operate at high reliability levels.
Bringing the CMS distributed computing system into scalable operations
NASA Astrophysics Data System (ADS)
Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.
2010-04-01
Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.
PanDA for ATLAS distributed computing in the next decade
NASA Astrophysics Data System (ADS)
Barreiro Megino, F. H.; De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The Production and Distributed Analysis (PanDA) system has been developed to meet ATLAS production and analysis requirements for a data-driven workload management system capable of operating at the Large Hadron Collider (LHC) data processing scale. Heterogeneous resources used by the ATLAS experiment are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, dozens of scientific applications are supported, while data processing requires more than a few billion hours of computing usage per year. PanDA performed very well over the last decade including the LHC Run 1 data taking period. However, it was decided to upgrade the whole system concurrently with the LHC’s first long shutdown in order to cope with rapidly changing computing infrastructure. After two years of reengineering efforts, PanDA has embedded capabilities for fully dynamic and flexible workload management. The static batch job paradigm was discarded in favor of a more automated and scalable model. Workloads are dynamically tailored for optimal usage of resources, with the brokerage taking network traffic and forecasts into account. Computing resources are partitioned based on dynamic knowledge of their status and characteristics. The pilot has been re-factored around a plugin structure for easier development and deployment. Bookkeeping is handled with both coarse and fine granularities for efficient utilization of pledged or opportunistic resources. An in-house security mechanism authenticates the pilot and data management services in off-grid environments such as volunteer computing and private local clusters. The PanDA monitor has been extensively optimized for performance and extended with analytics to provide aggregated summaries of the system as well as drill-down to operational details. Many other improvements have been planned or recently implemented, and the system has been adopted by non-LHC experiments, such as bioinformatics groups successfully running the Paleomix (microbial genome and metagenome) payload on supercomputers. In this paper we will focus on the new and planned features that are most important to the next decade of distributed computing workload management.
Will there be energy frontier colliders after LHC?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiltsev, Vladimir
2016-09-15
High energy particle colliders have been in the forefront of particle physics for more than three decades. At present the near term US, European and international strategies of the particle physics community are centered on full exploitation of the physics potential of the Large Hadron Collider (LHC) through its high-luminosity upgrade (HL-LHC). The future of the world-wide HEP community critically depends on the feasibility of possible post-LHC colliders. The concept of the feasibility is complex and includes at least three factors: feasibility of energy, feasibility of luminosity and feasibility of cost. Here we overview all current options for post-LHC colliders from such perspective (ILC, CLIC, Muon Collider, plasma colliders, CEPC, FCC, HE-LHC) and discuss major challenges and accelerator R&D required to demonstrate feasibility of an energy frontier accelerator facility following the LHC. We conclude by taking a look into ultimate energy reach accelerators based on plasmas and crystals, and a discussion of the perspectives for the far future of accelerator-based particle physics.
High-Performance Secure Database Access Technologies for HEP Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthew Vranicar; John Weicher
2006-04-17
The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist’s computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications.” There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving a weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems’ security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory’s (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project’s current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.
The GÉANT network: addressing current and future needs of the HEP community
NASA Astrophysics Data System (ADS)
Capone, Vincenzo; Usman, Mian
2015-12-01
The GÉANT infrastructure is the backbone that serves the scientific communities in Europe for their data movement needs and their access to international research and education networks. Using the extensive fibre footprint and infrastructure in Europe the GÉANT network delivers a portfolio of services aimed to best fit the specific needs of the users, including Authentication and Authorization Infrastructure, end-to-end performance monitoring, and advanced network services (dynamic circuits, L2-L3VPN, MD-VPN). This talk will outline the factors that help the GÉANT network to respond to the needs of the High Energy Physics community, both in Europe and worldwide. The Pan-European network provides the connectivity between 40 European national research and education networks. In addition, GÉANT also connects the European NRENs to the R&E networks in other world regions and reaches over 110 NRENs worldwide, making GÉANT the best connected Research and Education network, with its multiple intercontinental links to different continents, e.g. North and South America, Africa and Asia-Pacific. The High Energy Physics computational needs have always had (and will keep having) a leading role among the scientific user groups of the GÉANT network: the LHCONE overlay network has been built, in collaboration with the other big world RENs, specifically to address the peculiar needs of LHC data movement. Recently, as a result of a series of coordinated efforts, the LHCONE network has been expanded to the Asia-Pacific area, and is going to include some of the main regional R&E networks in the area. The LHC community is not the only one that is actively using a distributed computing model (hence the need for a high-performance network); new communities are arising, such as BELLE II. GÉANT is also deeply involved with the BELLE II experiment, providing full support for its distributed computing model, along with a perfSONAR-based network monitoring system. GÉANT has also coordinated the setup of the network infrastructure to perform the BELLE II Trans-Atlantic Data Challenge, and has been active in helping the BELLE II community to sort out their end-to-end performance issues. In this talk we will provide information about the current GÉANT network architecture and the international connectivity, along with the upcoming upgrades and the planned and foreseeable improvements. We will also describe the implementation of the solutions provided to support the LHC and BELLE II experiments.
Building analytical platform with Big Data solutions for log files of PanDA infrastructure
NASA Astrophysics Data System (ADS)
Alekseev, A. A.; Barreiro Megino, F. G.; Klimentov, A. A.; Korchuganova, T. A.; Maendo, T.; Padolski, S. V.
2018-05-01
The paper describes the implementation of a high-performance system for the processing and analysis of log files for the PanDA infrastructure of the ATLAS experiment at the Large Hadron Collider (LHC), responsible for the workload management of the order of 2M daily jobs across the Worldwide LHC Computing Grid. The solution is based on the ELK technology stack, which includes several components: Filebeat, Logstash, ElasticSearch (ES), and Kibana. Filebeat is used to collect data from logs, Logstash processes the data and exports it to Elasticsearch, ES provides centralized data storage, and the accumulated data in ES can be viewed using Kibana. These components were integrated with the PanDA infrastructure and replaced previous log processing systems for increased scalability and usability. The authors describe all the components and their configuration tuning for the current tasks and the scale of the actual system, and give several real-life examples of how this centralized log processing and storage service is used, to showcase the advantages for daily operations.
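In production, Filebeat and Logstash handle the collection and parsing steps of this chain; as a rough illustration of the indexing step only, the Python sketch below stores a parsed log line in Elasticsearch. The endpoint URL, index name ("panda-logs") and document fields are illustrative assumptions rather than the actual PanDA configuration, and a recent (8.x-style) elasticsearch client is assumed.

```python
# A minimal sketch, not the production PanDA/ELK configuration: index one
# log record into Elasticsearch with the official Python client (8.x-style
# API). Endpoint, index name and field names are assumptions.
from datetime import datetime, timezone

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local ES endpoint


def index_log_line(line: str) -> None:
    """Store one raw log line as a JSON document with a timestamp."""
    doc = {
        "message": line.strip(),
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "source": "panda-server",  # hypothetical label
    }
    es.index(index="panda-logs", document=doc)


if __name__ == "__main__":
    index_log_line("2018-05-01 12:00:00 INFO job 12345 finished")
```

Kibana would then be pointed at the same index to browse and visualize the accumulated documents.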
DOE Office of Scientific and Technical Information (OSTI.GOV)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
CERN Computing in Commercial Clouds
NASA Astrophysics Data System (ADS)
Cordeiro, C.; Field, L.; Garrido Bear, B.; Giordano, D.; Jones, B.; Keeble, O.; Manzi, A.; Martelli, E.; McCance, G.; Moreno-García, D.; Traylen, S.
2017-10-01
By the end of 2016 more than 10 million core-hours of computing resources have been delivered by several commercial cloud providers to the four LHC experiments to run their production workloads, from simulation to full chain processing. In this paper we describe the experience gained at CERN in procuring and exploiting commercial cloud resources for the computing needs of the LHC experiments. The mechanisms used for provisioning, monitoring, accounting, alarming and benchmarking will be discussed, as well as the involvement of the LHC collaborations in terms of managing the workflows of the experiments within a multicloud environment.
Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond
NASA Astrophysics Data System (ADS)
Bonacorsi, D.; Boccali, T.; Giordano, D.; Girone, M.; Neri, M.; Magini, N.; Kuznetsov, V.; Wildish, T.
2015-12-01
During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments, for Run-2 and beyond. Studying data-access patterns involves the validation of the quality of the monitoring data collected on the “popularity” of each dataset, the analysis of the frequency and pattern of accesses to different datasets by analysis end-users, the exploration of different views of the popularity data (by physics activity, by region, by data type), the study of the evolution of Run-1 data exploitation over time, and the evaluation of the impact of different data placement and distribution choices on the available network and storage resources and their impact on the computing operations. This work presents some insights from studies on the popularity data from the CMS experiment. We present the properties of a range of physics analysis activities as seen by the data popularity, and make recommendations for how to tune the initial distribution of data in anticipation of how it will be used in Run-2 and beyond.
Considerations on Energy Frontier Colliders after LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiltsev, Vladimir
2016-11-15
Since the 1960s, particle colliders have been at the forefront of particle physics: 29 in total have been built and operated, and 7 are in operation now. At present the near term US, European and international strategies of the particle physics community are centered on full exploitation of the physics potential of the Large Hadron Collider (LHC) through its high-luminosity upgrade (HL-LHC). The future of the world-wide HEP community critically depends on the feasibility of possible post-LHC colliders. The concept of the feasibility is complex and includes at least three factors: feasibility of energy, feasibility of luminosity and feasibility of cost. Here we overview all current options for post-LHC colliders from such perspective (ILC, CLIC, Muon Collider, plasma colliders, CEPC, FCC, HE-LHC) and discuss major challenges and accelerator R&D required to demonstrate feasibility of an energy frontier accelerator facility following the LHC. We conclude by taking a look into ultimate energy reach accelerators based on plasmas and crystals, and a discussion of the perspectives for the far future of accelerator-based particle physics. This paper largely follows previous study [1] and the presentation given at the ICHEP’2016 conference in Chicago [2].
WLCG Transfers Dashboard: a Unified Monitoring Tool for Heterogeneous Data Transfers
NASA Astrophysics Data System (ADS)
Andreeva, J.; Beche, A.; Belov, S.; Kadochnikov, I.; Saiz, P.; Tuckett, D.
2014-06-01
The Worldwide LHC Computing Grid provides resources for the four main virtual organizations. Along with data processing, data distribution is the key computing activity on the WLCG infrastructure. The scale of this activity is very large, the ATLAS virtual organization (VO) alone generates and distributes more than 40 PB of data in 100 million files per year. Another challenge is the heterogeneity of data transfer technologies. Currently there are two main alternatives for data transfers on the WLCG: File Transfer Service and XRootD protocol. Each LHC VO has its own monitoring system which is limited to the scope of that particular VO. There is a need for a global system which would provide a complete cross-VO and cross-technology picture of all WLCG data transfers. We present a unified monitoring tool - WLCG Transfers Dashboard - where all the VOs and technologies coexist and are monitored together. The scale of the activity and the heterogeneity of the system raise a number of technical challenges. Each technology comes with its own monitoring specificities and some of the VOs use several of these technologies. This paper describes the implementation of the system with particular focus on the design principles applied to ensure the necessary scalability and performance, and to easily integrate any new technology providing additional functionality which might be specific to that technology.
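As a hedged sketch of the cross-technology integration idea, the Python fragment below maps FTS-style and XRootD-style monitoring messages onto one common transfer record; every field name used here is an assumption for illustration, not the Dashboard's actual message schema.

```python
# A minimal sketch, not the actual WLCG Transfers Dashboard code: adapters
# normalise per-technology monitoring messages into one common record that
# can be aggregated uniformly. All message field names are assumptions.
from dataclasses import dataclass


@dataclass
class TransferRecord:
    vo: str          # virtual organization, e.g. "atlas"
    src_site: str
    dst_site: str
    nbytes: int
    success: bool
    technology: str  # "fts" or "xrootd"


def from_fts(msg: dict) -> TransferRecord:
    """Adapter for a hypothetical FTS-style message."""
    return TransferRecord(
        vo=msg["vo_name"], src_site=msg["source_se"], dst_site=msg["dest_se"],
        nbytes=msg["filesize"], success=(msg["state"] == "FINISHED"),
        technology="fts",
    )


def from_xrootd(msg: dict) -> TransferRecord:
    """Adapter for a hypothetical XRootD-style message."""
    return TransferRecord(
        vo=msg["vo"], src_site=msg["client_site"], dst_site=msg["server_site"],
        nbytes=msg["read_bytes"], success=True, technology="xrootd",
    )
```

Supporting a new transfer technology would then amount to adding one more adapter, which is the kind of extensibility the design principles above aim for.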
Challenges to Software/Computing for Experimentation at the LHC
NASA Astrophysics Data System (ADS)
Banerjee, Sunanda
The demands of future high energy physics experiments towards software and computing have led the experiments to plan the related activities as a full-fledged project and to investigate new methodologies and languages to meet the challenges. The paths taken by the four LHC experiments ALICE, ATLAS, CMS and LHCb are coherently put together in an LHC-wide framework based on Grid technology. The current status and understandings have been broadly outlined.
Improved ATLAS HammerCloud Monitoring for Local Site Administration
NASA Astrophysics Data System (ADS)
Böhler, M.; Elmsheuser, J.; Hönig, F.; Legger, F.; Mancinelli, V.; Sciacca, G.
2015-12-01
Every day hundreds of tests are run on the Worldwide LHC Computing Grid for the ATLAS and CMS experiments in order to evaluate the performance and reliability of the different computing sites. All this activity is steered, controlled, and monitored by the HammerCloud testing infrastructure. Sites with failing functionality tests are auto-excluded from the ATLAS computing grid; it is therefore essential to provide a detailed and well organized web interface for the local site administrators such that they can easily spot and promptly solve site issues. Additional functionality has been developed to extract and visualize the most relevant information. The site administrators can now be pointed easily to major site issues which lead to site blacklisting as well as possible minor issues that are usually not conspicuous enough to warrant the blacklisting of a specific site, but can still cause undesired effects such as a non-negligible job failure rate. This paper summarizes the different developments and optimizations of the HammerCloud web interface and gives an overview of typical use cases.
Integration of PanDA workload management system with Titan supercomputer at OLCF
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.
2015-12-01
The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single-threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA the new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
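As a hedged sketch of the lightweight-MPI-wrapper idea (not the actual PanDA pilot code), the fragment below runs one single-threaded payload per MPI rank and gathers the exit codes on rank 0; the payload command and per-rank file naming are assumptions.

```python
# A minimal sketch, assuming mpi4py is available: each MPI rank launches one
# single-threaded payload on its own input file, so a single batch allocation
# fills many cores with independent work. Command and file names are made up.
import subprocess

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Hypothetical per-rank payload and input/output naming scheme.
cmd = ["./generate_events", f"--input=events_{rank}.in", f"--output=events_{rank}.out"]
result = subprocess.run(cmd)

# Rank 0 collects the exit codes so the batch job can report overall success.
codes = comm.gather(result.returncode, root=0)
if rank == 0:
    failures = [r for r, c in enumerate(codes) if c != 0]
    print("ranks with non-zero exit code:", failures)
```

Such a script would be started once per allocation through the site's MPI launcher, with the number of ranks sized to the currently unused resources.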
WLCG and IPv6 - The HEPiX IPv6 working group
Campana, S.; K. Chadwick; Chen, G.; ...
2014-06-11
The HEPiX (http://www.hepix.org) IPv6 Working Group has been investigating the many issues which feed into the decision on the timetable for the use of IPv6 (http://www.ietf.org/rfc/rfc2460.txt) networking protocols in High Energy Physics (HEP) Computing, in particular in the Worldwide Large Hadron Collider (LHC) Computing Grid (WLCG). RIPE NCC, the European Regional Internet Registry (RIR), ran out of IPv4 addresses in September 2012. The North and South America RIRs are expected to run out soon. In recent months it has become more clear that some WLCG sites, including CERN, are running short of IPv4 address space, now without the possibility of applying for more. This has increased the urgency for the switch-on of dual-stack IPv4/IPv6 on all outward facing WLCG services to allow for the eventual support of IPv6-only clients. The activities of the group include the analysis and testing of the readiness for IPv6 and the performance of many required components, including the applications, middleware, management and monitoring tools essential for HEP computing. Many WLCG Tier 1/2 sites are participants in the group's distributed IPv6 testbed and the major LHC experiment collaborations are engaged in the testing. We are constructing a group web/wiki which will contain useful information on the IPv6 readiness of the various software components and a knowledge base (http://hepix-ipv6.web.cern.ch/knowledge-base). Furthermore, this paper describes the work done by the working group and its future plans.
WLCG and IPv6 - the HEPiX IPv6 working group
NASA Astrophysics Data System (ADS)
Campana, S.; Chadwick, K.; Chen, G.; Chudoba, J.; Clarke, P.; Eliáš, M.; Elwell, A.; Fayer, S.; Finnern, T.; Goossens, L.; Grigoras, C.; Hoeft, B.; Kelsey, D. P.; Kouba, T.; López Muñoz, F.; Martelli, E.; Mitchell, M.; Nairz, A.; Ohrenberg, K.; Pfeiffer, A.; Prelz, F.; Qi, F.; Rand, D.; Reale, M.; Rozsa, S.; Sciaba, A.; Voicu, R.; Walker, C. J.; Wildish, T.
2014-06-01
The HEPiX (http://www.hepix.org) IPv6 Working Group has been investigating the many issues which feed into the decision on the timetable for the use of IPv6 (http://www.ietf.org/rfc/rfc2460.txt) networking protocols in High Energy Physics (HEP) Computing, in particular in the Worldwide Large Hadron Collider (LHC) Computing Grid (WLCG). RIPE NCC, the European Regional Internet Registry (RIR), ran out of IPv4 addresses in September 2012. The North and South America RIRs are expected to run out soon. In recent months it has become more clear that some WLCG sites, including CERN, are running short of IPv4 address space, now without the possibility of applying for more. This has increased the urgency for the switch-on of dual-stack IPv4/IPv6 on all outward facing WLCG services to allow for the eventual support of IPv6-only clients. The activities of the group include the analysis and testing of the readiness for IPv6 and the performance of many required components, including the applications, middleware, management and monitoring tools essential for HEP computing. Many WLCG Tier 1/2 sites are participants in the group's distributed IPv6 testbed and the major LHC experiment collaborations are engaged in the testing. We are constructing a group web/wiki which will contain useful information on the IPv6 readiness of the various software components and a knowledge base (http://hepix-ipv6.web.cern.ch/knowledge-base). This paper describes the work done by the working group and its future plans.
Exploiting analytics techniques in CMS computing monitoring
NASA Astrophysics Data System (ADS)
Bonacorsi, D.; Kuznetsov, V.; Magini, N.; Repečka, A.; Vaandering, E.
2017-10-01
The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining efforts into all this information have rarely been undertaken, but are of crucial importance for a better understanding of how CMS ran successful operations, and to reach an adequate and adaptive modelling of the CMS operations, in order to allow detailed optimizations and eventually a prediction of system behaviours. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced with the ability to quickly process big data sets from multiple sources, looking forward to a predictive modelling of the system.
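As a hedged illustration of this MapReduce-style processing (not the actual CMS application), the sketch below is a Hadoop Streaming mapper/reducer pair that counts accesses per dataset, assuming each access-log line starts with the dataset name as its first whitespace-separated field.

```python
# popularity_count.py -- a minimal Hadoop Streaming sketch, not the real CMS
# monitoring application: count accesses per dataset. The input format
# (dataset name as the first field of each line) is an assumption.
import sys


def mapper() -> None:
    """Emit (dataset, 1) for every access-log line on stdin."""
    for line in sys.stdin:
        fields = line.split()
        if fields:
            print(f"{fields[0]}\t1")


def reducer() -> None:
    """Sum the counts for each dataset (input arrives sorted by key)."""
    current, count = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current and current is not None:
            print(f"{current}\t{count}")
            count = 0
        current = key
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")


if __name__ == "__main__":
    mapper() if "map" in sys.argv[1:] else reducer()
```

It would be launched through the Hadoop Streaming jar, passing this file as the mapper (with the `map` argument) and again as the reducer, so the same logic scales out transparently on the cluster.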
Operational Experience with the Frontier System in CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter
2012-06-20
The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.
Operational Experience with the Frontier System in CMS
NASA Astrophysics Data System (ADS)
Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter; Du, Ran; Wang, Weizhen
2012-12-01
The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.
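As a hedged sketch of the client-side access pattern (not the real Frontier client library), the fragment below sends a conditions query as an ordinary HTTP request routed through a nearby Squid cache; the host names, port and query parameter are hypothetical placeholders.

```python
# A minimal sketch, assuming only the `requests` library: a Frontier-style
# conditions query is an HTTP GET that a local Squid proxy can cache. The
# proxy address, Launchpad URL and parameter name are placeholders.
import requests

SQUID_PROXY = "http://squid.example.org:3128"                    # assumed site Squid
LAUNCHPAD_URL = "http://frontier.example.org:8000/FrontierProd"  # placeholder server


def fetch_conditions(encoded_query: str) -> bytes:
    """Send an already-encoded query via the Squid proxy and return the payload."""
    response = requests.get(
        LAUNCHPAD_URL,
        params={"p1": encoded_query},  # placeholder parameter name
        proxies={"http": SQUID_PROXY},
        timeout=30,
    )
    response.raise_for_status()
    return response.content
```

Because identical queries produce identical URLs, repeated requests from nearby clients can be answered from the Squid cache instead of reaching the central Oracle database.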
DOE Office of Scientific and Technical Information (OSTI.GOV)
Box, D.; Boyd, J.; Di Benedetto, V.
2016-01-01
The FabrIc for Frontier Experiments (FIFE) project is an initiative within the Fermilab Scientific Computing Division designed to steer the computing model for non-LHC Fermilab experiments across multiple physics areas. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying size, needs, and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of solutions for high throughput computing, data management, database access and collaboration management within an experiment. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid compute sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating into experiment computing operations several services including a common job submission service, software and reference data distribution through CVMFS repositories, flexible and robust data transfer clients, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken the leading role in defining the computing model for Fermilab experiments, aided in the design of experiments beyond those hosted at Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.
Grid today, clouds on the horizon
NASA Astrophysics Data System (ADS)
Shiers, Jamie
2009-04-01
By the time of CCP 2008, the largest scientific machine in the world - the Large Hadron Collider - had been cooled down as scheduled to its operational temperature of below 2 degrees Kelvin and injection tests were starting. Collisions of proton beams at 5+5 TeV were expected within one to two months of the initial tests, with data taking at design energy (7+7 TeV) foreseen for 2009. In order to process the data from this world machine, we have put our "Higgs in one basket" - that of Grid computing [The Worldwide LHC Computing Grid (WLCG), in: Proceedings of the Conference on Computational Physics 2006 (CCP 2006), vol. 177, 2007, pp. 219-223]. After many years of preparation, 2008 saw a final "Common Computing Readiness Challenge" (CCRC'08) - aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relied on a world-wide production Grid infrastructure. But change - as always - is on the horizon. The current funding model for Grids - which in Europe has been through 3 generations of EGEE projects, together with related projects in other parts of the world, including South America - is evolving towards a long-term, sustainable e-infrastructure, like the European Grid Initiative (EGI) [The European Grid Initiative Design Study, website at http://web.eu-egi.eu/]. At the same time, potentially new paradigms, such as that of "Cloud Computing" are emerging. This paper summarizes the results of CCRC'08 and discusses the potential impact of future Grid funding on both regional and international application communities. It contrasts Grid and Cloud computing models from both technical and sociological points of view. Finally, it discusses the requirements from production application communities, in terms of stability and continuity in the medium to long term.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lincoln, Don
The LHC is the world’s highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that make it all possible.
Colling, D.; Britton, D.; Gordon, J.; Lloyd, S.; Doyle, A.; Gronbech, P.; Coles, J.; Sansum, A.; Patrick, G.; Jones, R.; Middleton, R.; Kelsey, D.; Cass, A.; Geddes, N.; Clark, P.; Barnby, L.
2013-01-01
The Large Hadron Collider (LHC) is one of the greatest scientific endeavours to date. The construction of the collider itself and the experiments that collect data from it represent a huge investment, both financially and in terms of human effort, in our hope to understand the way the Universe works at a deeper level. Yet the volumes of data produced are so large that they cannot be analysed at any single computing centre. Instead, the experiments have all adopted distributed computing models based on the LHC Computing Grid. Without the correct functioning of this grid infrastructure the experiments would not be able to understand the data that they have collected. Within the UK, the Grid infrastructure needed by the experiments is provided by the GridPP project. We report on the operations, performance and contributions made to the experiments by the GridPP project during the years of 2010 and 2011—the first two significant years of the running of the LHC. PMID:23230163
The Machine / Job Features Mechanism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alef, M.; Cass, T.; Keijser, J. J.
Within the HEPiX virtualization group and the Worldwide LHC Computing Grid’s Machine/Job Features Task Force, a mechanism has been developed which provides access to detailed information about the current host and the current job to the job itself. This allows user payloads to access meta information, independent of the current batch system or virtual machine model. The information can be accessed either locally via the filesystem on a worker node, or remotely via HTTP(S) from a webserver. This paper describes the final version of the specification from 2016 which was published as an HEP Software Foundation technical note, and the design of the implementations of this version for batch and virtual machine platforms. We discuss early experiences with these implementations and how they can be exploited by experiment frameworks.
The machine/job features mechanism
NASA Astrophysics Data System (ADS)
Alef, M.; Cass, T.; Keijser, J. J.; McNab, A.; Roiser, S.; Schwickerath, U.; Sfiligoi, I.
2017-10-01
Within the HEPiX virtualization group and the Worldwide LHC Computing Grid’s Machine/Job Features Task Force, a mechanism has been developed which provides access to detailed information about the current host and the current job to the job itself. This allows user payloads to access meta information, independent of the current batch system or virtual machine model. The information can be accessed either locally via the filesystem on a worker node, or remotely via HTTP(S) from a webserver. This paper describes the final version of the specification from 2016 which was published as an HEP Software Foundation technical note, and the design of the implementations of this version for batch and virtual machine platforms. We discuss early experiences with these implementations and how they can be exploited by experiment frameworks.
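As a hedged sketch of the local (filesystem) access method, the fragment below reads the per-key files from the directories that the MACHINEFEATURES and JOBFEATURES environment variables point to; the directory-of-small-files layout follows the published specification, but the specific key queried at the end (wall_limit_secs) should be treated as illustrative.

```python
# A minimal sketch of a payload reading machine/job features locally: each
# feature is assumed to be a small file (named after the key) inside the
# directory given by $MACHINEFEATURES or $JOBFEATURES.
import os


def read_features(env_var: str) -> dict:
    """Return {key: value} for every readable file in the directory $env_var names."""
    path = os.environ.get(env_var)
    if not path or not os.path.isdir(path):
        return {}
    features = {}
    for name in os.listdir(path):
        try:
            with open(os.path.join(path, name)) as handle:
                features[name] = handle.read().strip()
        except OSError:
            continue  # skip unreadable entries
    return features


if __name__ == "__main__":
    machine = read_features("MACHINEFEATURES")
    job = read_features("JOBFEATURES")
    print("machine features:", machine)
    print("wall-clock limit (illustrative key):", job.get("wall_limit_secs"))
```

The remote variant described above would replace the file reads with HTTP(S) GETs against the corresponding webserver URLs.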
CMS results in the Combined Computing Readiness Challenge CCRC'08
NASA Astrophysics Data System (ADS)
Bonacorsi, D.; Bauerdick, L.; CMS Collaboration
2009-12-01
During February and May 2008, CMS participated in the Combined Computing Readiness Challenge (CCRC'08) together with all other LHC experiments. The purpose of this worldwide exercise was to check the readiness of the computing infrastructure for LHC data taking. Another set of major CMS tests called the Computing, Software and Analysis challenge (CSA'08) - as well as CMS cosmic runs - were also running at the same time: CCRC augmented the load on computing with additional tests to validate and stress-test all CMS computing workflows at full data taking scale, also extending this to the global WLCG community. CMS exercised most aspects of the CMS computing model, with very comprehensive tests. During May 2008, CMS moved more than 3.6 Petabytes among more than 300 links in the complex Grid topology. CMS demonstrated that it is able to safely move data out of CERN to the Tier-1 sites, sustaining more than 600 MB/s as a daily average for more than seven days in a row, with enough headroom and with hourly peaks of up to 1.7 GB/s. CMS ran hundreds of simultaneous jobs at each Tier-1 site, re-reconstructing and skimming hundreds of millions of events. After re-reconstruction the fresh AOD (Analysis Object Data) has to be synchronized between Tier-1 centers: CMS demonstrated that the required inter-Tier-1 transfers are achievable within a few days. CMS also showed that skimmed analysis data sets can be transferred to Tier-2 sites for analysis at sufficient rate, regionally as well as inter-regionally, achieving all goals in about 90% of >200 links. Simultaneously, CMS also ran a large Tier-2 analysis exercise, where realistic analysis jobs were submitted to a large set of Tier-2 sites by a large number of people to produce a chaotic workload across the systems, with more than 400 analysis users in May. Taken all together, CMS routinely achieved submissions of 100k jobs/day, with peaks up to 200k jobs/day. The achieved results in CCRC'08 - focussing on the distributed workflows - are presented and discussed.
Exploiting Analytics Techniques in CMS Computing Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonacorsi, D.; Kuznetsov, V.; Magini, N.
The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining efforts into all this information have rarely been undertaken, but are of crucial importance for a better understanding of how CMS ran successful operations, and to reach an adequate and adaptive modelling of the CMS operations, in order to allow detailed optimizations and eventually a prediction of system behaviours. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced with the ability to quickly process big data sets from multiple sources, looking forward to a predictive modelling of the system.
Lincoln, Don
2018-01-16
The LHC is the world’s highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that make it all possible.
CILogon-HA. Higher Assurance Federated Identities for DOE Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basney, James
The CILogon-HA project extended the existing open source CILogon service (initially developed with funding from the National Science Foundation) to provide credentials at multiple levels of assurance to users of DOE facilities for collaborative science. CILogon translates mechanism and policy across higher education and grid trust federations, bridging from the InCommon identity federation (which federates university and DOE lab identities) to the Interoperable Global Trust Federation (which defines standards across the Worldwide LHC Computing Grid, the Open Science Grid, and other cyberinfrastructure). The CILogon-HA project expanded the CILogon service to support over 160 identity providers (including 6 DOE facilities) and 3 internationally accredited certification authorities. To provide continuity of operations upon the end of the CILogon-HA project period, project staff transitioned the CILogon service to operation by XSEDE.
Integration of Titan supercomputer at OLCF with ATLAS Production System
NASA Astrophysics Data System (ADS)
Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan’s batch queues and local data management, with lightweight MPI wrappers to run single-node workloads in parallel on Titan’s multi-core worker nodes. It provides for the running of standard ATLAS production jobs on unused (backfill) resources on Titan. The system has already allowed ATLAS to collect millions of core-hours per month on Titan and to execute hundreds of thousands of jobs, while simultaneously improving Titan’s utilization efficiency. We will discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
Design of superconducting corrector magnets for LHC
NASA Astrophysics Data System (ADS)
Baynham, D. E.; Coombs, R. C.; Ijspeert, A.; Perin, R.
1994-07-01
The Large Hadron Collider (LHC) will require a range of superconducting corrector magnets. This paper presents the design of sextupole and decapole corrector coils which will be included as spool pieces adjacent to each main ring dipole. The paper gives detailed 3D field computations of the coil configurations to meet LHC beam dynamics requirements. Coil protection within a long string environment is addressed and mechanical design outlines are presented.
Federated data storage and management infrastructure
NASA Astrophysics Data System (ADS)
Zarochentsev, A.; Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Hristov, P.
2016-10-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate a growth of storage needs by orders of magnitude; this will require new approaches in data storage organization and data handling. In our project we address the fundamental problem of designing an architecture to integrate distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. We have prototyped a federated storage for the Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and the Grid for the ALICE and ATLAS experiments. We will present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within National Academic facilities for High Energy and Nuclear Physics as well as for other data-intensive science applications, such as bio-informatics.
A review of advances in pixel detectors for experiments with high rate and radiation
NASA Astrophysics Data System (ADS)
Garcia-Sciveres, Maurice; Wermes, Norbert
2018-06-01
The Large Hadron Collider (LHC) experiments ATLAS and CMS have established hybrid pixel detectors as the instrument of choice for particle tracking and vertexing in high rate and radiation environments, as they operate close to the LHC interaction points. With the High Luminosity LHC upgrade now in sight, for which the tracking detectors will be completely replaced, new generations of pixel detectors are being devised. They have to address enormous challenges in terms of data throughput and radiation levels, ionizing and non-ionizing, that harm the sensing and readout parts of pixel detectors alike. Advances in microelectronics and microprocessing technologies now enable large scale detector designs with unprecedented performance in measurement precision (space and time), radiation hard sensors and readout chips, hybridization techniques, lightweight supports, and fully monolithic approaches to meet these challenges. This paper reviews the world-wide effort on these developments.
W.K.H. Panofsky Prize: The Long Journey to the Higgs Boson: ATLAS
NASA Astrophysics Data System (ADS)
Jenni, Peter
2017-01-01
The discovery of the Higgs boson announced in July 2012 by ATLAS and CMS was a culminating point for a very long journey in the realization of the LHC project. Building up the experimental programme with this unique high-energy collider, and developing the very sophisticated detectors built and operated by world-wide collaborations, meant a fabulous scientific and human adventure spanning more than three decades. This talk will recall the initial motivation for the project, tracing its history, as well as illustrate some of the many milestones that finally led to the rich harvest of physics so far. The talk will focus on the ATLAS experiment, including also new, very recent results from the ongoing 13 TeV Run-2 of LHC. And this is only the beginning of this fantastic journey into uncharted physics territory with the LHC.
BigData and computing challenges in high energy and nuclear physics
NASA Astrophysics Data System (ADS)
Klimentov, A.; Grigorieva, M.; Kiryanov, A.; Zarochentsev, A.
2017-06-01
In this contribution we discuss the various aspects of the computing resource needs of experiments in High Energy and Nuclear Physics, in particular at the Large Hadron Collider. This will evolve in the future when moving from LHC to HL-LHC in ten years from now, when the already exascale levels of data we are processing could increase by a further order of magnitude. The distributed computing environment has been a great success and the inclusion of new super-computing facilities, cloud computing and volunteer computing for the future is a big challenge, which we are successfully mastering with a considerable contribution from many super-computing centres around the world, academic and commercial cloud providers. We also discuss R&D computing projects started recently in the National Research Center "Kurchatov Institute".
LHCb experience with LFC replication
NASA Astrophysics Data System (ADS)
Bonifazi, F.; Carbone, A.; Perez, E. D.; D'Apice, A.; dell'Agnello, L.; Duellmann, D.; Girone, M.; Re, G. L.; Martelli, B.; Peco, G.; Ricci, P. P.; Sapunenko, V.; Vagnoni, V.; Vitlacil, D.
2008-07-01
Database replication is a key topic in the framework of the LHC Computing Grid to allow processing of data in a distributed environment. In particular, the LHCb computing model relies on the LHC File Catalog, i.e. a database which stores information about files spread across the GRID, their logical names and the physical locations of all the replicas. The LHCb computing model requires the LFC to be replicated at Tier-1s. The LCG 3D project deals with the database replication issue and provides a replication service based on Oracle Streams technology. This paper describes the deployment of the LHC File Catalog replication to the INFN National Center for Telematics and Informatics (CNAF) and to other LHCb Tier-1 sites. We performed stress tests designed to evaluate any delay in the propagation of the streams and the scalability of the system. The tests show the robustness of the replica implementation with performance going much beyond the LHCb requirements.
Grid Computing Environment using a Beowulf Cluster
NASA Astrophysics Data System (ADS)
Alanis, Fransisco; Mahmood, Akhtar
2003-10-01
Custom-made Beowulf clusters using PCs are currently replacing expensive supercomputers to carry out complex scientific computations. At the University of Texas - Pan American, we built an 8 Gflops Beowulf Cluster for doing HEP research using RedHat Linux 7.3 and the LAM-MPI middleware. We will describe how we built and configured our Cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes that were compiled in C on the cluster using the LAM-XMPI graphics user environment. We will demonstrate a "simple" prototype grid environment, where we will submit and run parallel jobs remotely across multiple cluster nodes over the internet from the presentation room at Texas Tech University. The Sphinx Beowulf Cluster will be used for Monte Carlo grid test-bed studies for the LHC-ATLAS high energy physics experiment. Grid is a new IT concept for the next generation of the "Super Internet" for high-performance computing. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in High Energy Physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.
HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.
2017-10-01
PanDA, the Production and Distributed Analysis Workload Management System, has been developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available to bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even when running on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run on a distributed computing environment powered by PanDA. To run the pipeline we split the input files into chunks, which are processed separately on different nodes as independent PALEOMIX inputs, and finally merge the output files; this is very similar to how ATLAS processes and simulates its data. We dramatically decreased the total walltime thanks to automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid reduces the payload execution time for mammoth DNA samples from weeks to days.
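The split-run-merge pattern described in this abstract can be illustrated with a short sketch. The chunk size, the local `paleomix` and `samtools` invocations, and the plain FASTQ file layout below are purely illustrative assumptions; in the actual integration each chunk is dispatched as a PanDA job to a different node rather than run locally.

```python
# Minimal sketch of the split-run-merge pattern; commands and sizes are assumptions.
import subprocess
from pathlib import Path

CHUNK_SIZE = 4_000_000  # reads per chunk (assumed)

def split_fastq(path: Path, outdir: Path) -> list[Path]:
    """Split a FASTQ file into fixed-size chunks (4 lines per read)."""
    chunks, buf, idx = [], [], 0
    with path.open() as fh:
        for line in fh:
            buf.append(line)
            if len(buf) == 4 * CHUNK_SIZE:
                chunks.append(_write_chunk(buf, outdir, idx))
                buf, idx = [], idx + 1
    if buf:
        chunks.append(_write_chunk(buf, outdir, idx))
    return chunks

def _write_chunk(lines: list[str], outdir: Path, idx: int) -> Path:
    out = outdir / f"chunk_{idx:04d}.fastq"
    out.write_text("".join(lines))
    return out

def run_pipeline(chunk: Path) -> Path:
    """One chunk = one independent job; the pipeline invocation is a placeholder."""
    subprocess.run(["paleomix", "bam_pipeline", "run", str(chunk)], check=True)
    return chunk.with_suffix(".bam")

def merge(outputs: list[Path], merged: Path) -> None:
    """Merge the per-chunk BAM outputs into the final result."""
    subprocess.run(["samtools", "merge", str(merged), *map(str, outputs)], check=True)
```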
Studies of QCD structure in high-energy collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nadolsky, Pavel M.
2016-06-26
”Studies of QCD structure in high-energy collisions” is a research project in theoretical particle physics at Southern Methodist University funded by US DOE Award DE-SC0013681. The award furnished bridge funding for one year (2015/04/15-2016/03/31) between the periods funded by Nadolsky’s DOE Early Career Research Award DE-SC0003870 (in 2010-2015) and a DOE grant DE-SC0010129 for SMU Department of Physics (starting in April 2016). The primary objective of the research is to provide theoretical predictions for Run-2 of the CERN Large Hadron Collider (LHC). The LHC physics program relies on state-of-the-art predictions in the field of quantum chromodynamics. The main effort ofmore » our group went into the global analysis of parton distribution functions (PDFs) employed by the bulk of LHC computations. Parton distributions describe internal structure of protons during ultrarelivistic collisions. A new generation of CTEQ parton distribution functions (PDFs), CT14, was released in summer 2015 and quickly adopted by the HEP community. The new CT14 parametrizations of PDFs were obtained using benchmarked NNLO calculations and latest data from LHC and Tevatron experiments. The group developed advanced methods for the PDF analysis and estimation of uncertainties in LHC predictions associated with the PDFs. We invented and refined a new ’meta-parametrization’ technique that streamlines usage of PDFs in Higgs boson production and other numerous LHC processes, by combining PDFs from various groups using multivariate stochastic sampling. In 2015, the PDF4LHC working group recommended to LHC experimental collaborations to use ’meta-parametrizations’ as a standard technique for computing PDF uncertainties. Finally, to include new QCD processes into the global fits, our group worked on several (N)NNLO calculations.« less
Improving ATLAS grid site reliability with functional tests using HammerCloud
NASA Astrophysics Data System (ADS)
Elmsheuser, Johannes; Legger, Federica; Medrano Llamas, Ramon; Sciacca, Gianfranco; van der Ster, Dan
2012-12-01
With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate that sites are capable of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short, lightweight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site performance. Sites that fail or are unable to run the tests are automatically excluded from the PanDA brokerage system, thus preventing user and production jobs from being sent to problematic sites.
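As a rough illustration of the test-and-exclude cycle described above, the sketch below aggregates recent functional-test outcomes per site and flags under-performing sites. The record format, window length and threshold are assumptions, not the HammerCloud or PanDA brokerage interfaces.

```python
# Hedged sketch of per-site evaluation from periodic functional tests.
from dataclasses import dataclass

@dataclass
class TestResult:
    site: str
    succeeded: bool

def evaluate_sites(results: list[TestResult], window: int = 10,
                   min_efficiency: float = 0.8) -> dict[str, str]:
    """Decide per-site status from the most recent `window` functional tests."""
    per_site: dict[str, list[bool]] = {}
    for r in results:
        per_site.setdefault(r.site, []).append(r.succeeded)
    decisions = {}
    for site, outcomes in per_site.items():
        recent = outcomes[-window:]
        efficiency = sum(recent) / len(recent)
        decisions[site] = "online" if efficiency >= min_efficiency else "excluded"
    return decisions

# Example: a site failing most of its recent tests would be taken out of brokerage.
history = [TestResult("SITE_A", True)] * 10 + [TestResult("SITE_B", False)] * 8
print(evaluate_sites(history))
```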
Optimising LAN access to grid enabled storage elements
NASA Astrophysics Data System (ADS)
Stewart, G. A.; Cowan, G. A.; Dunne, B.; Elwell, A.; Millar, A. P.
2008-07-01
When operational, the Large Hadron Collider experiments at CERN will collect tens of petabytes of physics data per year. The worldwide LHC computing grid (WLCG) will distribute this data to over two hundred Tier-1 and Tier-2 computing centres, enabling particle physicists around the globe to access the data for analysis. Although different middleware solutions exist for effective management of storage systems at collaborating institutes, the patterns of access envisaged for Tier-2s fall into two distinct categories. The first involves bulk transfer of data between different Grid storage elements using protocols such as GridFTP. This data movement will principally involve writing ESD and AOD files into Tier-2 storage. Secondly, once datasets are stored at a Tier-2, physics analysis jobs will read the data from the local SE. Such jobs require a POSIX-like interface to the storage so that individual physics events can be extracted. In this paper we consider the performance of POSIX-like access to files held in Disk Pool Manager (DPM) storage elements, a popular lightweight SRM storage manager from EGEE.
Your Higgs number - how fundamental physics is connected to technology and societal revolutions
NASA Astrophysics Data System (ADS)
Lidström, Suzy; Allen, Roland E.
2015-03-01
Fundamental physics, as exemplified by the recently discovered Higgs boson, often appears to be completely disconnected from practical applications and ordinary human life. But this is not really the case, because science, technology, and human affairs are profoundly integrated in ways that are not immediately obvious. We illustrate this by defining a ``Higgs number'' through overlapping activities. Following three different paths, which end respectively in applications of the World Wide Web, digital photography, and modern electronic devices, we find that most people have a Higgs number of no greater than 3. Specific examples chosen for illustration, with their assigned Higgs numbers, are: LHC experimentalists employing the Worldwide Computing Grid (0) - Timothy Berners-Lee (1) - Marissa Mayer, of Google and Yahoo, and Sheryl Sandberg, of Facebook (2) - users of all web-based enterprises (3). CMS and ATLAS experimentalists (0) - particle detector developers (1) - inventors of CCDs and active-pixel sensors (2) - users of digital cameras and camcorders (3). Philip Anderson (0) - John Bardeen (1) - Jack Kilby (2) - users of personal computers, mobile phones, and all other modern electronic devices (3).
Integration of Panda Workload Management System with supercomputers
NASA Astrophysics Data System (ADS)
De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.
2016-09-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited with the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, the LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running the PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal, independent of the computing facility's infrastructure, for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
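The light-weight MPI wrapper idea mentioned above, running many single-threaded payloads side by side within one batch allocation, can be sketched with mpi4py. The payload executable and input naming below are hypothetical placeholders, not the actual PanDA pilot code.

```python
# Hedged sketch: each MPI rank runs one serial payload per assigned input file.
from mpi4py import MPI
import subprocess
import sys

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

inputs = [f"events_{i:05d}.in" for i in range(10000)]   # hypothetical input list

# Round-robin assignment: rank r processes inputs r, r+size, r+2*size, ...
failures = 0
for path in inputs[rank::size]:
    ret = subprocess.call(["./serial_payload", path])   # assumed single-threaded executable
    failures += (ret != 0)

total_failures = comm.reduce(failures, op=MPI.SUM, root=0)
if rank == 0:
    print(f"{len(inputs)} payloads, {total_failures} failures", file=sys.stderr)
```

Each rank picks a disjoint slice of the input list, so a single large MPI job can fill an allocation of multi-core worker nodes with independent serial payloads.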
PC as Physics Computer for LHC?
NASA Astrophysics Data System (ADS)
Jarp, Sverre; Simmins, Antony; Tang, Hong; Yaari, R.
In the last five years, we have seen RISC workstations take over the computing scene that was once controlled by mainframes and supercomputers. In this paper we will argue that the same phenomenon might happen again. We describe a project, active since March of this year in the Physics Data Processing group of CERN's CN division, in which ordinary desktop PCs running Windows (NT and 3.11) have been used to create an environment for running large LHC batch jobs (initially the DICE simulation job of Atlas). The problems encountered in porting both the CERN library and the specific Atlas codes are described, together with some encouraging benchmark results from comparisons with existing RISC workstations in use by the Atlas collaboration. The issues of establishing the batch environment (batch monitor, staging software, etc.) are also covered. Finally, a quick extrapolation of the commodity computing power available in the future is touched upon, to indicate what kind of cost envelope could be sufficient for the simulation farms required by the LHC experiments.
Integration of Cloud resources in the LHCb Distributed Computing
NASA Astrophysics Data System (ADS)
Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel
2014-06-01
This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb uses its specific Dirac extension (LHCbDirac) as the interware for its Distributed Computing, which so far has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures: it is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages the Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs, based on the idea of a Cloud Site. We report on the operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De, K; Jha, S; Klimentov, A
2016-01-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited with the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for the ATLAS experiment since September 2015. We will present our current accomplishments with running the PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal, independent of the computing facility's infrastructure, for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation
NASA Astrophysics Data System (ADS)
Anisenkov, A. V.
2018-03-01
In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for the efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For the processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the Worldwide LHC Computing Grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and to provide a high degree of data accessibility (hundreds of petabytes). The paper considers the ATLAS Grid Information System (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in the development of a unified description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and to integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).
NASA Astrophysics Data System (ADS)
Varela Rodriguez, F.
2011-12-01
The control system of each of the four major experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise, identify errors in and troubleshoot such a large system. Although monitoring of the performance of the Linux computers and their processes has been available since the first versions of the tool, only recently has the software package been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and the functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the experiments and it has already proven to be very efficient in optimizing the running systems and in detecting misbehaving processes or nodes.
A critical appraisal and evaluation of modern PDFs
Accardi, A.; Alekhin, S.; Blumlein, J.; ...
2016-08-23
Here, we review the present status of the determination of parton distribution functions (PDFs) in the light of the precision requirements for the LHC in Run 2 as well as at other future colliders. We provide brief descriptions of all currently available PDF sets and use them to compute cross sections for a number of benchmark processes, including Higgs boson production in gluon-gluon fusion at the LHC. We show that the differences in the predictions obtained with the various PDFs are due to particular theory assumptions, such as the heavy-flavor schemes used in the PDF fits, the account of power corrections, and others. We comment on PDF uncertainties in the kinematic region covered by the LHC and on averaging procedures for PDFs, such as those realized by the PDF4LHC15 sets. As a result, we provide recommendations for the usage of sets of PDFs for theory predictions at the LHC.
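Benchmark comparisons of this kind are typically driven through the LHAPDF interface. The hedged sketch below evaluates the gluon PDF from one set at a single kinematic point and forms a rough symmetric-Hessian uncertainty; the set name, flavour, kinematics and the simplified error formula (which ignores each set's own confidence-level convention) are illustrative assumptions.

```python
# Hedged sketch: gluon PDF value and a rough symmetric-Hessian spread with LHAPDF.
import math
import lhapdf

pset = lhapdf.getPDFSet("CT14nnlo")        # assumed installed set
members = pset.mkPDFs()                    # member 0 = central fit, then error members

x, Q = 0.01, 125.0                         # momentum fraction and scale in GeV (illustrative)
vals = [pdf.xfxQ(21, x, Q) for pdf in members]   # 21 = gluon PID

central = vals[0]
# Simplified symmetric Hessian estimate over the +/- error-member pairs;
# the set's own prescription (e.g. 90% vs 68% CL rescaling) is not applied here.
err = 0.5 * math.sqrt(sum((vals[2 * i + 1] - vals[2 * i + 2]) ** 2
                          for i in range((len(vals) - 1) // 2)))
print(f"x*g(x={x}, Q={Q} GeV) = {central:.4f} +/- {err:.4f}")
```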
NASA Astrophysics Data System (ADS)
Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.
2016-10-01
The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCF's multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal, independent of the computing facility's infrastructure, for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
Simulation of LHC events on a million threads
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2015-12-01
Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. By analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. By these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.
Storage element performance optimization for CMS analysis jobs
NASA Astrophysics Data System (ADS)
Behrmann, G.; Dahlblom, J.; Guldmyr, J.; Happonen, K.; Lindén, T.
2012-12-01
Tier-2 computing sites in the Worldwide LHC Computing Grid (WLCG) host CPU resources (Compute Element, CE) and storage resources (Storage Element, SE). The vast amount of data from the Large Hadron Collider (LHC) experiments that needs to be processed requires good and efficient use of the available resources. Achieving good CPU efficiency for end-user analysis jobs requires that the performance of the storage system scales with the I/O requests from hundreds or even thousands of simultaneous jobs. In this presentation we report on the work on improving the SE performance at the Helsinki Institute of Physics (HIP) Tier-2, used for the Compact Muon Solenoid (CMS) experiment at the LHC. Statistics from CMS grid jobs are collected and stored in the CMS Dashboard for further analysis, which allows easy performance monitoring by the sites and by the CMS collaboration. As part of the monitoring framework CMS uses the JobRobot, which sends 100 analysis jobs to each site every four hours. CMS also uses the HammerCloud tool for site monitoring and stress testing, and it has replaced the JobRobot. The performance of the analysis workflow submitted with JobRobot or HammerCloud can be used to track the effect of site configuration changes, since the analysis workflow is kept the same for all sites and for months at a time. The CPU efficiency of the JobRobot jobs at HIP was increased by approximately 50%, to more than 90%, by tuning the SE and by improvements in the CMSSW and dCache software. The performance of the CMS analysis jobs improved significantly too. Similar work has been done at other CMS Tier sites, and on average the CPU efficiency of CMSSW jobs has increased during 2011. Better monitoring of the SE allows faster detection of problems, so that the performance level can be kept high. The next storage upgrade at HIP consists of SAS disk enclosures, which can be stress tested on demand with HammerCloud workflows to make sure that the I/O performance is good.
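The CPU-efficiency figure quoted above is simply the ratio of CPU time to wall-clock time aggregated over jobs; below is a minimal sketch of that metric, with an assumed job-record format rather than the actual CMS Dashboard schema.

```python
# Sketch of the CPU-efficiency metric used to track storage-element tuning.
def cpu_efficiency(jobs: list[dict]) -> float:
    """jobs: [{'cpu_time': seconds, 'wall_time': seconds}, ...] (assumed format)."""
    cpu = sum(j["cpu_time"] for j in jobs)
    wall = sum(j["wall_time"] for j in jobs)
    return cpu / wall if wall else 0.0

before = [{"cpu_time": 2400, "wall_time": 4000}] * 100   # ~60% efficient (illustrative)
after = [{"cpu_time": 3600, "wall_time": 3900}] * 100    # ~92% efficient (illustrative)
print(f"before tuning: {cpu_efficiency(before):.0%}, after: {cpu_efficiency(after):.0%}")
```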
Use of DAGMan in CRAB3 to Improve the Splitting of CMS User Jobs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, M.; Mascheroni, M.; Woodard, A.
CRAB3 is a workload management tool used by CMS physicists to analyze data acquired by the Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC). Research in high energy physics often requires the analysis of large collections of files, referred to as datasets. The task is divided into jobs that are distributed among a large collection of worker nodes throughout the Worldwide LHC Computing Grid (WLCG). Splitting a large analysis task into optimally sized jobs is critical to efficient use of distributed computing resources. Jobs that are too big will have excessive runtimes and will not distribute the work across all of the available nodes. However, splitting the project into a large number of very small jobs is also inefficient, as each job creates additional overhead which increases load on infrastructure resources. Currently this splitting is done manually, using parameters provided by the user. However, the resources needed for each job are difficult to predict because of frequent variations in the performance of the user code and the content of the input dataset. As a result, dividing a task into jobs by hand is difficult and often suboptimal. In this work we present a new feature called “automatic splitting” which removes the need for users to manually specify job splitting parameters. We discuss how HTCondor DAGMan can be used to build dynamic Directed Acyclic Graphs (DAGs) to optimize the performance of large CMS analysis jobs on the Grid. We use DAGMan to dynamically generate interconnected DAGs that estimate the processing time the user code will require to analyze each event. This is used to calculate an estimate of the total processing time per job, and a set of analysis jobs are run using this estimate as a specified time limit. Some jobs may not finish within the allotted time; they are terminated at the time limit, and the unfinished data is regrouped into smaller jobs and resubmitted.
Use of DAGMan in CRAB3 to improve the splitting of CMS user jobs
NASA Astrophysics Data System (ADS)
Wolf, M.; Mascheroni, M.; Woodard, A.; Belforte, S.; Bockelman, B.; Hernandez, J. M.; Vaandering, E.
2017-10-01
CRAB3 is a workload management tool used by CMS physicists to analyze data acquired by the Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC). Research in high energy physics often requires the analysis of large collections of files, referred to as datasets. The task is divided into jobs that are distributed among a large collection of worker nodes throughout the Worldwide LHC Computing Grid (WLCG). Splitting a large analysis task into optimally sized jobs is critical to efficient use of distributed computing resources. Jobs that are too big will have excessive runtimes and will not distribute the work across all of the available nodes. However, splitting the project into a large number of very small jobs is also inefficient, as each job creates additional overhead which increases load on infrastructure resources. Currently this splitting is done manually, using parameters provided by the user. However, the resources needed for each job are difficult to predict because of frequent variations in the performance of the user code and the content of the input dataset. As a result, dividing a task into jobs by hand is difficult and often suboptimal. In this work we present a new feature called “automatic splitting” which removes the need for users to manually specify job splitting parameters. We discuss how HTCondor DAGMan can be used to build dynamic Directed Acyclic Graphs (DAGs) to optimize the performance of large CMS analysis jobs on the Grid. We use DAGMan to dynamically generate interconnected DAGs that estimate the processing time the user code will require to analyze each event. This is used to calculate an estimate of the total processing time per job, and a set of analysis jobs are run using this estimate as a specified time limit. Some jobs may not finish within the allotted time; they are terminated at the time limit, and the unfinished data is regrouped into smaller jobs and resubmitted.
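The automatic-splitting logic can be illustrated with a small sketch: estimate the per-event processing time from short probe jobs, size jobs to a target runtime, and regroup whatever the time-limited jobs leave unfinished. All numbers and names here are illustrative, not the CRAB3/DAGMan implementation.

```python
# Hedged sketch of automatic job splitting from probe-job timing.
def events_per_job(probe_times: list[float], probe_events: int,
                   target_runtime_s: float = 8 * 3600) -> int:
    """probe_times: wall-clock seconds of probe jobs that each processed `probe_events` events."""
    sec_per_event = sum(probe_times) / (len(probe_times) * probe_events)
    return max(1, int(target_runtime_s / sec_per_event))

def resplit_unfinished(unprocessed_events: int, events_per_tail_job: int) -> list[int]:
    """Regroup events left over by jobs killed at the time limit into smaller tail jobs."""
    full, rest = divmod(unprocessed_events, events_per_tail_job)
    return [events_per_tail_job] * full + ([rest] if rest else [])

n = events_per_job(probe_times=[620.0, 580.0, 655.0], probe_events=1000)
print(f"target job size: {n} events; tail jobs: {resplit_unfinished(23500, n // 4)}")
```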
Integration of end-user Cloud storage for CMS analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez
End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit the use of end-user Cloud storage for the distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and the infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.
Integration of end-user Cloud storage for CMS analysis
Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez; ...
2017-05-19
End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit the use of end-user Cloud storage for the distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and the infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.
NASA Astrophysics Data System (ADS)
Sánchez-Martínez, V.; Borges, G.; Borrego, C.; del Peso, J.; Delfino, M.; Gomes, J.; González de la Hoz, S.; Pacheco Pages, A.; Salt, J.; Sedov, A.; Villaplana, M.; Wolters, H.
2014-06-01
In this contribution we describe the performance of the Iberian (Spain and Portugal) ATLAS cloud during the first LHC running period (March 2010-January 2013) in the context of the GRID Computing and Data Distribution Model. The evolution of the resources for CPU, disk and tape in the Iberian Tier-1 and Tier-2s is summarized. The data distribution over all ATLAS destinations is shown, focusing on the number of files transferred and the size of the data. The status and distribution of simulation and analysis jobs within the cloud are discussed. The Distributed Analysis tools used to perform physics analysis are explained as well. Cloud performance in terms of the availability and reliability of its sites is discussed. The effect of the changes in the ATLAS Computing Model on the cloud is analyzed. Finally, the readiness of the Iberian Cloud towards the first Long Shutdown (LS1) is evaluated and an outline of the foreseen actions to take in the coming years is given. The shutdown will be a good opportunity to improve and evolve the ATLAS Distributed Computing system to prepare for the future challenges of the LHC operation.
Integration of PanDA Workload Management System with Supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
De, K; Jha, S; Maeno, T
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited with the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running the PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal, independent of the computing facility's infrastructure, for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
ATLAS, CMS and new challenges for public communication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, Lucas; Barney, David; Goldfarb, Steven
On 30 March 2010 the first high-energy collisions brought the LHC experiments into the era of research and discovery. Millions of viewers worldwide tuned in to the webcasts and followed the news via Web 2.0 tools, such as blogs, Twitter, and Facebook, with 205,000 unique visitors to CERN's Web site. Media coverage at the experiments and in institutes all over the world yielded more than 2,200 news items including 800 TV broadcasts. We describe the new multimedia communications challenges, due to the massive public interest in the LHC programme, and the corresponding responses of the ATLAS and CMS experiments, in the areas of Web 2.0 tools, multimedia, webcasting, videoconferencing, and collaborative tools. We discuss the strategic convergence of the two experiments' communications services, information systems and public database of outreach material.
ATLAS, CMS and New Challenges for Public Communication
NASA Astrophysics Data System (ADS)
Taylor, Lucas; Barney, David; Goldfarb, Steven
2011-12-01
On 30 March 2010 the first high-energy collisions brought the LHC experiments into the era of research and discovery. Millions of viewers worldwide tuned in to the webcasts and followed the news via Web 2.0 tools, such as blogs, Twitter, and Facebook, with 205,000 unique visitors to CERN's Web site. Media coverage at the experiments and in institutes all over the world yielded more than 2,200 news items including 800 TV broadcasts. We describe the new multimedia communications challenges, due to the massive public interest in the LHC programme, and the corresponding responses of the ATLAS and CMS experiments, in the areas of Web 2.0 tools, multimedia, webcasting, videoconferencing, and collaborative tools. We discuss the strategic convergence of the two experiments' communications services, information systems and public database of outreach material.
ATLAS Distributed Computing Monitoring tools during the LHC Run I
NASA Astrophysics Data System (ADS)
Schovancová, J.; Campana, S.; Di Girolamo, A.; Jézéquel, S.; Ueda, I.; Wenaus, T.; Atlas Collaboration
2014-06-01
This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or service expert; the ATLAS national contacts and sites, for real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During LHC Run I a significant development effort was invested in the standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements and the re-usability of the visualization components across the different tools. A rich family of filtering and searching options enhancing the available user interfaces comes naturally with the separation of the data and visualization layers. With a variety of reliable monitoring data accessible through standardized interfaces, automating actions under well-defined conditions correlating multiple data sources has become feasible. In this contribution we also discuss the automated exclusion of degraded resources and their automated recovery in various activities.
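The kind of automated action mentioned at the end of the abstract can be sketched as a rule that correlates two monitoring sources before excluding a resource, so that a central-service problem does not blacklist healthy sites. The data formats and thresholds below are assumptions, not the actual ADC Monitoring interfaces.

```python
# Hedged sketch: exclude a site only if two independent indicators agree and the
# failure pattern is not global (which would point to a central-service issue).
def should_exclude(site: str,
                   functional_test_pass_rate: dict[str, float],
                   transfer_error_rate: dict[str, float],
                   global_error_rate: float) -> bool:
    failing_tests = functional_test_pass_rate.get(site, 1.0) < 0.7
    failing_transfers = transfer_error_rate.get(site, 0.0) > 0.3
    central_issue = global_error_rate > 0.5      # most sites failing at once
    return failing_tests and failing_transfers and not central_issue

print(should_exclude("SITE_A",
                     functional_test_pass_rate={"SITE_A": 0.4},
                     transfer_error_rate={"SITE_A": 0.6},
                     global_error_rate=0.1))
```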
High-throughput landslide modelling using computational grids
NASA Astrophysics Data System (ADS)
Wallace, M.; Metson, S.; Holcombe, L.; Anderson, M.; Newbold, D.; Brook, N.
2012-04-01
Landslides are an increasing problem in developing countries. Multiple landslides can be triggered by heavy rainfall, resulting in loss of life, homes and critical infrastructure. Through computer simulation of individual slopes it is possible to predict the causes, timing and magnitude of landslides and estimate the potential physical impact. Geographical scientists at the University of Bristol have developed software that integrates a physically-based slope hydrology and stability model (CHASM) with an econometric model (QUESTA) in order to predict landslide risk over time. These models allow multiple scenarios to be evaluated for each slope, accounting for data uncertainties, different engineering interventions, risk management approaches and rainfall patterns. Individual scenarios can be computationally intensive; however, each scenario is independent, so multiple scenarios can be executed in parallel. As more simulations are carried out, the overhead involved in managing input and output data becomes significant. This is a greater problem if multiple slopes are considered concurrently, as is required both for landslide research and for effective disaster planning at national levels. There are two critical factors in this context: generated data volumes can be in the order of tens of terabytes, and greater numbers of simulations result in long total runtimes. Users of such models, in both the research community and in developing countries, need to develop a means for handling the generation and submission of landslide modelling experiments, and the storage and analysis of the resulting datasets. Additionally, governments in developing countries typically lack the necessary computing resources and infrastructure. Consequently, knowledge that could be gained by aggregating simulation results from many different scenarios across many different slopes remains hidden within the data. To address these data and workload management issues, University of Bristol particle physicists and geographical scientists are collaborating to develop methods for providing simple and effective access to landslide models and associated simulation data. Particle physicists have valuable experience in dealing with data complexity and management due to the scale of data generated by particle accelerators such as the Large Hadron Collider (LHC). The LHC generates tens of petabytes of data every year, which is stored and analysed using the Worldwide LHC Computing Grid (WLCG). Tools and concepts from the WLCG are being used to drive the development of a Software-as-a-Service (SaaS) platform to provide access to hosted landslide simulation software and data. It contains advanced data management features and allows landslide simulations to be run on the WLCG, dramatically reducing simulation runtimes by parallel execution. The simulations are accessed using a web page through which users can enter and browse input data, submit jobs and visualise results. Replication of the data ensures a local copy can be accessed should a connection to the platform be unavailable. The platform does not know the details of the simulation software it runs, so it is possible to use it to run alternative models at similar scales. This creates the opportunity for activities such as model sensitivity analysis and performance comparison at scales that are impractical using standalone software.
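Because each slope-stability scenario is independent, the scenario sweep parallelizes trivially. Below is a minimal local sketch of that idea, with a placeholder standing in for the hosted CHASM/QUESTA simulation and for WLCG job submission; the parameter grid and result fields are assumptions.

```python
# Hedged sketch: evaluate many independent slope scenarios in parallel.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_scenario(args):
    """Placeholder for one CHASM/QUESTA slope-stability scenario (hypothetical)."""
    slope_id, rainfall_mm, intervention = args
    # ... a real implementation would call the hosted simulation here ...
    return {"slope": slope_id, "rainfall": rainfall_mm,
            "intervention": intervention, "factor_of_safety": 1.2}

if __name__ == "__main__":
    scenarios = list(product(range(50), [50.0, 100.0, 200.0], ["none", "drainage"]))
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_scenario, scenarios))
    print(f"{len(results)} independent scenarios evaluated")
```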
New Tools for Forecasting Old Physics at the LHC
Dixon, Lance
2018-05-21
For the LHC to uncover many types of new physics, the "old physics" produced by the Standard Model must be understood very well. For decades, the central theoretical tool for this job was the Feynman diagram expansion. However, Feynman diagrams are just too slow, even on fast computers, to allow adequate precision for complicated LHC events with many jets in the final state. Such events are already visible in the initial LHC data. Over the past few years, alternative methods to Feynman diagrams have come to fruition. These new "on-shell" methods are based on the old principles of unitarity and factorization. They can be much more efficient because they exploit the underlying simplicity of scattering amplitudes, and recycle lower-loop information. I will describe how and why these methods work, and present some of the recent state-of-the-art results that have been obtained with them.
NASA Astrophysics Data System (ADS)
Chlebana, Frank; CMS Collaboration
2017-11-01
The challenges of the High-Luminosity LHC (HL-LHC) are driven by the large number of overlapping proton-proton collisions (pileup) in each bunch-crossing and the extreme radiation dose to detectors at high pseudorapidity. To overcome this challenge CMS is developing an endcap electromagnetic+hadronic sampling calorimeter employing silicon sensors in the electromagnetic and front hadronic sections, comprising over 6 million channels, and highly-segmented plastic scintillators in the rear part of the hadronic section. This High-Granularity Calorimeter (HGCAL) will be the first of its kind used in a colliding beam experiment. Clustering deposits of energy over many cells and layers is a complex and challenging computational task, particularly in the high-pileup environment of the HL-LHC. Baseline detector performance results are presented for electromagnetic and hadronic objects, and studies demonstrating the advantages of fine longitudinal and transverse segmentation are explored.
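To convey why clustering over millions of small cells is computationally demanding, here is a deliberately simplified, greedy seeded-clustering toy over (x, y, layer, energy) cells. It is not the HGCAL reconstruction algorithm, and the thresholds and distances are arbitrary assumptions.

```python
# Toy illustration only: greedy per-layer seeded clustering of calorimeter cells.
def cluster_cells(cells, seed_threshold=5.0, link_distance=1.5):
    """cells: list of (x, y, layer, energy) tuples; returns lists of clustered cells."""
    clusters = []
    unused = sorted(cells, key=lambda c: -c[3])          # highest energy first
    while unused and unused[0][3] >= seed_threshold:
        seed = unused.pop(0)
        members, rest = [seed], []
        for c in unused:
            same_layer = c[2] == seed[2]
            close = (c[0] - seed[0]) ** 2 + (c[1] - seed[1]) ** 2 <= link_distance ** 2
            (members if same_layer and close else rest).append(c)
        unused = rest
        clusters.append(members)
    return clusters

cells = [(0.0, 0.0, 1, 12.0), (0.5, 0.5, 1, 3.0), (10.0, 10.0, 1, 7.0), (0.2, 9.0, 2, 1.0)]
print([len(c) for c in cluster_cells(cells)])   # e.g. [2, 1]
```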
The CMS Tier0 goes cloud and grid for LHC Run 2
Hufnagel, Dirk
2015-12-23
In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put a tremendous stress on our computing systems. Prompt Processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system utilizing both the CERN Cloud-based Agile Infrastructure and a significant fraction of the CMS Tier-1 Grid resources. In another big change for LHC Run 2, we will process all data using the multi-threaded framework to deal with the increased event complexity and to ensure efficient use of the resources. Furthermore, this contribution will cover the evolution of the Tier-0 infrastructure and present scale testing results and experiences from the first data taking in 2015.
The CMS Tier0 goes Cloud and Grid for LHC Run 2
NASA Astrophysics Data System (ADS)
Hufnagel, Dirk
2015-12-01
In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put a tremendous stress on our computing systems. Prompt Processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system utilizing both the CERN Cloud-based Agile Infrastructure and a significant fraction of the CMS Tier-1 Grid resources. In another big change for LHC Run 2, we will process all data using the multi-threaded framework to deal with the increased event complexity and to ensure efficient use of the resources. This contribution will cover the evolution of the Tier-0 infrastructure and present scale testing results and experiences from the first data taking in 2015.
Elastic extension of a local analysis facility on external clouds for the LHC experiments
NASA Astrophysics Data System (ADS)
Ciaschini, V.; Codispoti, G.; Rinaldi, L.; Aiftimiei, D. C.; Bonacorsi, D.; Calligola, P.; Dal Pra, S.; De Girolamo, D.; Di Maria, R.; Grandi, C.; Michelotto, D.; Panella, M.; Taneja, S.; Semeria, F.
2017-10-01
The computing infrastructures serving the LHC experiments have been designed to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may however originate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, the LHC experiments are exploring the opportunity to access Cloud resources provided by external partners or commercial providers. In this work we present the proof of concept of the elastic extension of a local analysis facility, specifically the Bologna Tier-3 Grid site, for the LHC experiments hosted at the site, on an external OpenStack infrastructure. We focus on the Cloud Bursting of the Grid site using DynFarm, a newly designed tool that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on an OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time they serve as an extension of the farm for the local usage.
NNLO corrections to top pair production at hadron colliders: the quark-gluon reaction
NASA Astrophysics Data System (ADS)
Czakon, Michal; Mitov, Alexander
2013-01-01
We compute the next-to-next-to-leading order QCD correction to the total inclusive top pair production cross-section in the reaction $qg \to t\bar{t}+X$. We find a moderate $\mathcal{O}(1\%)$ correction to the central values at both the Tevatron and the LHC. The scale variation of the cross-section remains unchanged at the Tevatron and is significantly reduced at the LHC. We find that a recently introduced approximation based on the high-energy limit of the top pair cross-section deviates significantly from the exact result. The results derived in the present work are included in version 1.4 of the program Top++. Work towards computing the reaction $gg \to t\bar{t}+X$ is ongoing.
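The scale variation referred to above is conventionally quoted as the envelope obtained by varying the renormalization and factorization scales around a central choice; the sketch below shows the standard factor-of-two convention, which is a community default and not spelled out in the abstract itself.

```latex
% Conventional scale-uncertainty envelope (factor-of-two variation assumed)
\Delta\sigma_{\text{scale}}
  = \max_{\mu_R,\mu_F} \sigma(\mu_R,\mu_F) - \min_{\mu_R,\mu_F} \sigma(\mu_R,\mu_F),
\qquad
\mu_R,\mu_F \in \{\, m_t/2,\; m_t,\; 2 m_t \,\}, \quad \tfrac12 \le \mu_R/\mu_F \le 2 .
```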
Deployment and Operational Experiences with CernVM-FS at the GridKa Tier-1 Center
NASA Astrophysics Data System (ADS)
Alef, Manfred; Jäger, Axel; Petzold, Andreas; Verstege, Bernhard
2012-12-01
In 2012 the GridKa Tier-1 computing center hosts 130 kHS06 of computing resources and 14 PB of disk and 17 PB of tape space. These resources are shared between the four LHC VOs and a number of national and international VOs from high energy physics and other sciences. CernVM-FS has been deployed at GridKa to supplement the existing NFS-based system for accessing VO software on the worker nodes. It provides a solution tailored to the requirements of the LHC VOs. We will focus on the first operational experiences and on the monitoring of CernVM-FS on the worker nodes and the squid caches.
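A worker-node health check of the kind a site might add to its monitoring can be sketched around the standard `cvmfs_config probe` command; the repository list, timeout and alerting below are assumptions, not the GridKa setup.

```python
# Hedged sketch of a CernVM-FS worker-node probe.
import subprocess

REPOSITORIES = ["cms.cern.ch", "atlas.cern.ch"]   # illustrative repository list

def cvmfs_healthy() -> bool:
    """Return True if `cvmfs_config probe` succeeds for the configured repositories."""
    try:
        out = subprocess.run(["cvmfs_config", "probe", *REPOSITORIES],
                             capture_output=True, text=True, timeout=120)
    except (OSError, subprocess.TimeoutExpired):
        return False
    # A non-zero return code or a "Failed" line indicates a broken mount (assumed output format).
    return out.returncode == 0 and "Failed" not in out.stdout

if not cvmfs_healthy():
    print("WARNING: CernVM-FS probe failed on this worker node")
```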
J/ψ production and suppression in high-energy proton-nucleus collisions
Ma, Yan-Qing; Venugopalan, Raju; Zhang, Hong-Fei
2015-10-02
In this study, we apply a color glass condensate + nonrelativistic QCD (CGC+NRQCD) framework to compute J/ψ production in deuteron-nucleus collisions at RHIC and proton-nucleus collisions at the LHC. Our results match smoothly at high p⊥ to a next-to-leading order perturbative QCD+NRQCD computation. Excellent agreement is obtained for the p⊥ spectra at RHIC and the LHC for central and forward rapidities, as well as for the normalized ratio R_pA of these results to spectra in proton-proton collisions. In particular, we observe that the R_pA data are strongly bounded by our computations of the same ratio for each of the individual NRQCD channels; this result provides strong evidence that our description is robust against uncertainties in the initial conditions and hadronization mechanisms.
NASA Astrophysics Data System (ADS)
Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.
2012-12-01
Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provides an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.
Elastic Extension of a CMS Computing Centre Resources on External Clouds
NASA Astrophysics Data System (ADS)
Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.
2016-10-01
After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may however originate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS - along the lines followed by other LHC experiments - is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the “Cloud Bursting” of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time they serve as an extension of the farm for the local usage. The amount of allocated resources can thus be elastically adjusted to cope with the needs of the CMS experiment and of local users. Moreover, a direct access/integration of OpenStack resources into the CMS workload management system is explored. In this paper we present this approach, report on the performance of the on-demand allocated resources, and discuss the lessons learned and the next steps.
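The cloud-bursting trigger can be reduced to a simple rule: when the pending backlog exceeds what the static farm can absorb, request additional worker-node VMs, and release them as the backlog drains. The sketch below is a hedged illustration of that sizing rule only; the actual LSF dynamic registration and OpenStack provisioning are not shown, and all parameters are assumptions.

```python
# Hedged sketch of an elastic scale-out decision for a bursting Grid site.
def workers_to_request(pending_jobs: int, running_jobs: int, static_slots: int,
                       slots_per_vm: int = 8, max_vms: int = 100) -> int:
    """Number of extra VMs to instantiate given the current batch-system state."""
    free_static = max(0, static_slots - running_jobs)
    backlog = pending_jobs - free_static
    if backlog <= 0:
        return 0
    return min(max_vms, -(-backlog // slots_per_vm))   # ceiling division

# Example: 1200 pending jobs, 950 running on a 1000-slot static farm.
print(workers_to_request(pending_jobs=1200, running_jobs=950, static_slots=1000))
```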
The GridPP DIRAC project - DIRAC for non-LHC communities
NASA Astrophysics Data System (ADS)
Bauer, D.; Colling, D.; Currie, R.; Fayer, S.; Huffman, A.; Martyniak, J.; Rand, D.; Richards, A.
2015-12-01
The GridPP consortium in the UK is currently testing a multi-VO DIRAC service aimed at non-LHC VOs. These VOs (Virtual Organisations) are typically small and generally do not have a dedicated computing support post. The majority of these represent particle physics experiments (e.g. NA62 and COMET), although the scope of the DIRAC service is not limited to this field. A few VOs have designed bespoke tools around the EMI-WMS & LFC, while others have so far eschewed distributed resources as they perceive the overhead for accessing them to be too high. The aim of the GridPP DIRAC project is to provide an easily adaptable toolkit for such VOs in order to lower the threshold for access to distributed resources such as Grid and cloud computing. As well as hosting a centrally run DIRAC service, we will also publish our changes and additions to the upstream DIRAC codebase under an open-source license. We report on the current status of this project and show increasing adoption of DIRAC within the non-LHC communities.
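To give a feel for how low the entry barrier becomes for a small VO, the sketch below submits a trivial job through the standard DIRAC Python user API; it follows the documented Job/Dirac interface as I understand it, and the executable, job name and any site configuration are placeholder assumptions rather than GridPP-specific settings.

# Minimal DIRAC job submission sketch (assumes a configured DIRAC client
# installation and a valid grid proxy for the VO).
from DIRAC.Core.Base import Script
Script.parseCommandLine(ignoreErrors=True)

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("hello-vo")                                   # placeholder job name
job.setExecutable("/bin/echo", arguments="hello from a small VO")

result = Dirac().submitJob(job)
print(result)   # expected: {'OK': True, 'Value': <job id>} on success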
The CREAM-CE: First experiences, results and requirements of the four LHC experiments
NASA Astrophysics Data System (ADS)
Mendez Lorenzo, Patricia; Santinelli, Roberto; Sciaba, Andrea; Thackray, Nick; Shiers, Jamie; Renshall, Harry; Sgaravatto, Massimo; Padhi, Sanjay
2010-04-01
In terms of the gLite middleware, the current LCG-CE used by the four LHC experiments is about to be deprecated. The new CREAM-CE service (Computing Resource Execution And Management) has been approved to replace the previous service. CREAM-CE is a lightweight service created to handle job management operations at the CE level. It is able to accept requests both via the gLite WMS service and via direct submission for transmission to the local batch system. This flexible duality gives the experiments a large degree of freedom to adapt the service to their own computing models, but at the same time it requires a careful follow-up of the experiments' requirements and tests to ensure that their needs are fulfilled before real data taking. In this paper we present the current testing results of the four LHC experiments concerning this new service. The operations procedures, which have been elaborated together with the experiment support teams, are discussed. Finally, the experiments' requirements and the expectations towards both the sites and the service itself are presented in detail.
NASA Astrophysics Data System (ADS)
Dewhurst, A.; Legger, F.
2015-12-01
The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are running daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We report on the impact such changes have on the DA infrastructure, describe the new DA components, and include recent performance measurements.
NASA Astrophysics Data System (ADS)
Johnston, William; Ernst, M.; Dart, E.; Tierney, B.
2014-04-01
Today's large-scale science projects involve world-wide collaborations that depend on moving massive amounts of data from an instrument to potentially thousands of computing and storage systems at hundreds of collaborating institutions to accomplish their science. This is true for ATLAS and CMS at the LHC, and it is true for the climate sciences, Belle-II at the KEK collider, genome sciences, the SKA radio telescope, and ITER, the international fusion energy experiment. DOE's Office of Science has been collecting science discipline and instrument requirements for network-based data management and analysis for more than a decade. As a result, certain key issues are seen across essentially all science disciplines that rely on the network for significant data transfer, even if the data quantities are modest compared to projects like the LHC experiments. These issues are what this talk will address; to wit: 1. Optical signal transport advances enabling 100 Gb/s circuits that span the globe on optical fiber, with each fiber carrying 100 such channels; 2. Network router and switch requirements to support high-speed international data transfer; 3. Data transport (TCP is still the norm) requirements to support high-speed international data transfer (e.g. error-free transmission); 4. Network monitoring and testing techniques and infrastructure to maintain the required error-free operation of the many R&E networks involved in international collaborations; 5. Operating system evolution to support very high-speed network I/O; 6. New network architectures and services in the LAN (campus) and WAN networks to support data-intensive science; 7. Data movement and management techniques and software that can maximize the throughput on the network connections between distributed data handling systems; and 8. New approaches to widely distributed workflow systems that can support the data movement and analysis required by the science. All of these areas must be addressed to enable large-scale, widely distributed data analysis systems, and the experience of the LHC can be applied to other scientific disciplines. In particular, specific analogies to the SKA will be cited in the talk.
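To make the scale of item 1 concrete, the back-of-the-envelope calculation below estimates how long a single, ideal 100 Gb/s circuit would need to move an LHC-experiment-sized yearly dataset; the 140 PB figure is borrowed from the ATLAS Run-1 number quoted elsewhere in this collection, and the assumption of perfectly error-free, fully utilised transfer is of course optimistic.

# Rough transfer-time estimate for one 100 Gb/s circuit (idealised).
data_bytes = 140e15            # ~140 PB of data
rate_bps   = 100e9             # 100 Gb/s link

seconds = data_bytes * 8 / rate_bps
print("{:.0f} days".format(seconds / 86400))   # roughly 130 days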
Web Proxy Auto Discovery for the WLCG
NASA Astrophysics Data System (ADS)
Dykstra, D.; Blomer, J.; Blumenfeld, B.; De Salvo, A.; Dewhurst, A.; Verguilov, V.
2017-10-01
All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which they direct to the nearest publicly accessible web proxy servers. The responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.
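Operationally, a client simply fetches a PAC script from a well-known WPAD URL and evaluates its FindProxyForURL function to learn which squid to use. The sketch below shows the fetch step in Python; the exact WLCG WPAD host name used here is an assumption for illustration, and the PAC evaluation itself (a small piece of JavaScript) is left to the Frontier and CVMFS clients that already support it.

import requests

# Hypothetical WPAD endpoint; the production WLCG service name may differ.
WPAD_URL = "http://wlcg-wpad.cern.ch/wpad.dat"

resp = requests.get(WPAD_URL, timeout=10)
resp.raise_for_status()

pac_script = resp.text
# pac_script is a Proxy Auto Configuration file: JavaScript defining
# FindProxyForURL(url, host), which returns e.g. "PROXY squid.example.org:3128"
# for the proxies closest to the requesting IP address.
print(pac_script.splitlines()[0])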
Web Proxy Auto Discovery for the WLCG
Dykstra, D.; Blomer, J.; Blumenfeld, B.; ...
2017-11-23
All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home), which they direct to the nearest publicly accessible web proxy servers. Furthermore, the responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.
Web Proxy Auto Discovery for the WLCG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dykstra, D.; Blomer, J.; Blumenfeld, B.
All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home), which they direct to the nearest publicly accessible web proxy servers. Furthermore, the responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.
Building a Prototype of LHC Analysis Oriented Computing Centers
NASA Astrophysics Data System (ADS)
Bagliesi, G.; Boccali, T.; Della Ricca, G.; Donvito, G.; Paganoni, M.
2012-12-01
A Consortium between four LHC Computing Centers (Bari, Milano, Pisa and Trieste) was formed in 2010 to prototype analysis-oriented facilities for CMS data analysis, profiting from a grant from the Italian Ministry of Research. The Consortium aims to realize an ad-hoc infrastructure to ease the analysis activities on the huge data set collected at the LHC Collider. While “Tier-2” Computing Centres, specialized in organized processing tasks like Monte Carlo simulation, are nowadays a well established concept with years of running experience, sites specialized for end-user chaotic analysis activities do not yet have a de facto standard implementation. In our effort, we focus on all the aspects that can make the analysis tasks easier for a physics user who is not a computing expert. On the storage side, we are experimenting with techniques that allow remote data access and with storage optimization for the typical analysis access patterns. On the networking side, we are studying the differences between flat and tiered LAN architectures, also using virtual partitioning of the same physical network for the different use patterns. Finally, on the user side, we are developing tools and instruments to allow for exhaustive monitoring of user processes at the site, and for an efficient support system in case of problems. We report on the results of the tests executed on the different subsystems and describe the layout of the infrastructure in place at the sites participating in the consortium.
High-precision QCD at hadron colliders:electroweak gauge boson rapidity distributions at NNLO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anastasiou, C.
2004-01-05
We compute the rapidity distributions of W and Z bosons produced at the Tevatron and the LHC through next-to-next-to-leading order in QCD. Our results demonstrate remarkable stability with respect to variations of the factorization and renormalization scales for all values of rapidity accessible in current and future experiments. These processes are therefore "gold-plated": current theoretical knowledge yields QCD predictions accurate to better than one percent. These results strengthen the proposal to use W and Z production to determine parton-parton luminosities and constrain parton distribution functions at the LHC. For example, LHC data should easily be able to distinguish the central parton distribution fit obtained by MRST from that obtained by Alekhin.
FermiGrid—experience and future plans
NASA Astrophysics Data System (ADS)
Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Sharma, N.; Timm, S.; Yocum, D. R.
2008-07-01
Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid (OSG) and the Worldwide LHC Computing Grid Collaboration (WLCG). FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the OSG, EGEE, and the WLCG. Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure - the successes and the problems.
FermiGrid - experience and future plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chadwick, K.; Berman, E.; Canal, P.
2007-09-01
Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure - the successes and the problems.
Shooting string holography of jet quenching at RHIC and LHC
Ficnar, Andrej; Gubser, Steven S.; Gyulassy, Miklos
2014-10-13
We derive a new formula for jet energy loss using finite endpoint momentum shooting strings initial conditions in SYM plasmas to overcome the difficulties of previous falling string holographic scenarios. We apply the new formula to compute the nuclear modification factor RAA and the elliptic flow parameter v2 of light hadrons at RHIC and LHC. We show furthermore that Gauss–Bonnet quadratic curvature corrections to the AdS5 geometry improve the agreement with the recent data.
Shooting string holography of jet quenching at RHIC and LHC
NASA Astrophysics Data System (ADS)
Ficnar, Andrej; Gubser, Steven S.; Gyulassy, Miklos
2014-11-01
We derive a new formula for jet energy loss using finite endpoint momentum shooting strings initial conditions in SYM plasmas to overcome the difficulties of previous falling string holographic scenarios. We apply the new formula to compute the nuclear modification factor RAA and the elliptic flow parameter v2 of light hadrons at RHIC and LHC. We show furthermore that Gauss-Bonnet quadratic curvature corrections to the AdS5 geometry improve the agreement with the recent data.
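For readers outside heavy-ion physics, the two observables named above have the standard definitions given below (textbook conventions, not specific to this paper): RAA compares the hadron yield in nucleus-nucleus collisions to a binary-scaled proton-proton reference, and v2 is the second Fourier coefficient of the azimuthal particle distribution with respect to the event plane Psi_2.

R_{AA}(p_T) = \frac{\mathrm{d}N^{AA}/\mathrm{d}p_T}{\langle N_{\mathrm{coll}}\rangle\,\mathrm{d}N^{pp}/\mathrm{d}p_T},
\qquad
\frac{\mathrm{d}N}{\mathrm{d}\phi} \propto 1 + 2\,v_2 \cos\bigl[2(\phi - \Psi_2)\bigr] + \cdots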
Towards a centralized Grid Speedometer
NASA Astrophysics Data System (ADS)
Dzhunov, I.; Andreeva, J.; Fajardo, E.; Gutsche, O.; Luyckx, S.; Saiz, P.
2014-06-01
Given the distributed nature of the Worldwide LHC Computing Grid and the way CPU resources are pledged and shared around the globe, Virtual Organizations (VOs) face the challenge of monitoring the use of these resources. For CMS and the operation of centralized workflows, monitoring how many production jobs are running and pending in the Glidein WMS production pools is very important. The Dashboard Site Status Board (SSB) provides a very flexible framework to collect, aggregate and visualize data. The CMS production monitoring team uses the SSB to define the metrics that have to be monitored and the alarms that have to be raised. During the integration of CMS production monitoring into the SSB, several enhancements to the core functionality of the SSB were required; they were implemented in a generic way, so that other VOs using the SSB can exploit them. Alongside these enhancements, there were a number of changes to the core of the SSB framework. This paper presents the details of the implementation and the advantages for current and future usage of the new features in SSB.
Dynamical scales for multi-TeV top-pair production at the LHC
NASA Astrophysics Data System (ADS)
Czakon, Michał; Heymes, David; Mitov, Alexander
2017-04-01
We calculate all major differential distributions with stable top-quarks at the LHC. The calculation covers the multi-TeV range that will be explored during LHC Run II and beyond. Our results are in the form of high-quality binned distributions. We offer predictions based on three different parton distribution function (pdf) sets. In the near future we will make our results available also in the more flexible fastNLO format that allows fast re-computation with any other pdf set. In order to be able to extend our calculation into the multi-TeV range we have had to derive a set of dynamic scales. Such scales are selected based on the principle of fastest perturbative convergence applied to the differential and inclusive cross-section. Many observations from our study are likely to be applicable and useful to other precision processes at the LHC. With scale uncertainty now under good control, pdfs arise as the leading source of uncertainty for TeV top production. Based on our findings, true precision in the boosted regime will likely only be possible after new and improved pdf sets appear. We expect that LHC top-quark data will play an important role in this process.
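As a concrete example of the kind of dynamical scale the abstract refers to, a choice frequently quoted for top-pair differential distributions builds the scale from the transverse masses of the top and antitop; the specific form and the factor 1/4 below are an assumption for illustration and may not coincide with the scales finally adopted in the paper.

m_T(t) = \sqrt{m_t^2 + p_{T,t}^2}, \qquad
H_T = m_T(t) + m_T(\bar{t}), \qquad
\mu_R = \mu_F = \frac{H_T}{4}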
Probing top-Z dipole moments at the LHC and ILC
Röntsch, Raoul; Schulze, Markus
2015-08-11
We investigate the weak electric and magnetic dipole moments of top quark-Z boson interactions at the Large Hadron Collider (LHC) and the International Linear Collider (ILC). Their vanishingly small magnitude in the Standard Model makes these couplings ideal for probing New Physics interactions and for exploring the role of top quarks in electroweak symmetry breaking. In our analysis, we consider the production of two top quarks in association with a Z boson at the LHC, and top quark pairs mediated by neutral gauge bosons at the ILC. These processes yield direct sensitivity to top quark-Z boson interactions and complement indirect constraints from electroweak precision data. Our computation is accurate to next-to-leading order in QCD, we include the full decay chain of top quarks and the Z boson, and account for theoretical uncertainties in our constraints. Furthermore, we find that LHC experiments will soon be able to probe weak dipole moments for the first time.
NASA Astrophysics Data System (ADS)
Nellist, C.; Dinu, N.; Gkougkousis, E.; Lounis, A.
2015-06-01
The LHC accelerator complex will be upgraded between 2020 and 2022 to the High-Luminosity LHC, to considerably increase the statistics available for the various physics analyses. To operate under these challenging new conditions, and to maintain excellent performance in track reconstruction and vertex location, the ATLAS pixel detector must be substantially upgraded and a full replacement is expected. Processing techniques for novel pixel designs are optimised through characterisation of test structures in a clean room and also through simulations with Technology Computer Aided Design (TCAD). A method to study non-perpendicular tracks through a pixel device is discussed. Comparison of TCAD simulations with Secondary Ion Mass Spectrometry (SIMS) measurements to investigate the doping profile of structures and validate the simulation process is also presented.
GridPP - Preparing for LHC Run 2 and the Wider Context
NASA Astrophysics Data System (ADS)
Coles, Jeremy
2015-12-01
This paper elaborates upon the operational status and directions within the UK Computing for Particle Physics (GridPP) project as it approaches LHC Run 2. It details the pressures that have been gradually reshaping the deployed hardware and middleware environments at GridPP sites - from the increasing adoption of larger multicore nodes to the move towards alternative batch systems and cloud alternatives - as well as changes being driven by funding considerations. The paper highlights work being done with non-LHC communities and describes some of the early outcomes of adopting a generic DIRAC based job submission and management framework. The paper presents results from an analysis of how GridPP effort is distributed across various deployment and operations tasks and how this may be used to target further improvements in efficiency.
Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing
NASA Astrophysics Data System (ADS)
Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.
2015-05-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data’ (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. We will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.
Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid
NASA Astrophysics Data System (ADS)
Andrade, P.; Babik, M.; Bhatt, K.; Chand, P.; Collados, D.; Duggal, V.; Fuente, P.; Hayashi, S.; Imamagic, E.; Joshi, P.; Kalmady, R.; Karnani, U.; Kumar, V.; Lapka, W.; Quick, R.; Tarragon, J.; Teige, S.; Triantafyllidis, C.
2012-12-01
The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO[1] managers, service managers, management), from different middleware providers (ARC[2], dCache[3], gLite[4], UNICORE[5] and VDT[6]), consortiums (WLCG[7], EMI[11], EGI[15], OSG[13]), and operational teams (GOC[16], OMB[8], OTAG[9], CSIRT[10]). The seamless harmonization of these distributed actors is in daily use for monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to the visualization on the MyWLCG[27] portal where it is exposed to other clients. This monitoring workflow profits from the interoperability established between the SAM[19] and RSV[20] frameworks. We show how these two distributed structures are capable of uniting technologies and hiding the complexity around them, making them easy to be used by the community. Finally, the different supported deployment strategies, tailored not only for monitoring the entire infrastructure but also for monitoring sites and virtual organizations, are presented and the associated operational benefits highlighted.
Collider Aspects of Flavour Physics at High Q
DOE Office of Scientific and Technical Information (OSTI.GOV)
del Aguila, F.; Aguilar-Saavedra, J.A.; Allanach, B.C.
2008-03-07
This chapter of the report of the 'Flavour in the era of the LHC' workshop discusses flavor-related issues in the production and decays of heavy states at the LHC, both from the experimental side and from the theoretical side. We review top quark physics and discuss flavor aspects of several extensions of the Standard Model, such as supersymmetry, little Higgs models or models with extra dimensions. This includes discovery aspects as well as the measurement of several properties of these heavy states. We also present publicly available computational tools related to this topic.
Higgs bosons with large transverse momentum at the LHC
NASA Astrophysics Data System (ADS)
Kudashkin, Kirill; Lindert, Jonas M.; Melnikov, Kirill; Wever, Christopher
2018-07-01
We compute the next-to-leading order QCD corrections to the production of Higgs bosons with large transverse momentum p⊥ ≫ 2mt at the LHC. To accomplish this, we combine the two-loop amplitudes for processes gg → Hg, qg → Hq and qq̄ → Hg, recently computed in the approximation of nearly massless top quarks, with the numerical calculation of the squared one-loop amplitudes for gg → Hgg, qg → Hqg and qq̄ → Hgg processes. The latter computation is performed with OpenLoops. We find that the QCD corrections to the Higgs transverse momentum distribution at very high p⊥ are large but quite similar to the QCD corrections obtained for point-like Hgg coupling. Our result removes one of the largest sources of theoretical uncertainty in the description of high-p⊥ Higgs boson production and opens a way to use the high-p⊥ region to search for physics beyond the Standard Model.
QCD corrections to ZZ production in gluon fusion at the LHC
Caola, Fabrizio; Melnikov, Kirill; Rontsch, Raoul; ...
2015-11-23
We compute the next-to-leading-order QCD corrections to the production of two Z-bosons in the annihilation of two gluons at the LHC. Being enhanced by a large gluon flux, these corrections provide a distinct and, potentially, the dominant part of the N^3LO QCD contributions to Z-pair production in proton collisions. The gg → ZZ annihilation is a loop-induced process that receives the dominant contribution from loops of five light quarks, which are included in our computation in the massless approximation. We find that QCD corrections increase the gg → ZZ production cross section by O(50%–100%) depending on the values of the renormalization and factorization scales used in the leading-order computation and the collider energy. Furthermore, the large corrections to the gg → ZZ channel increase the pp → ZZ cross section by about 6% to 8%, exceeding the estimated theoretical uncertainty of the recent next-to-next-to-leading-order QCD calculation.
Top-philic Z' forces at the LHC
NASA Astrophysics Data System (ADS)
Fox, Patrick J.; Low, Ian; Zhang, Yue
2018-03-01
Despite extensive searches for an additional neutral massive gauge boson at the LHC, a Z' at the weak scale could still be present if its couplings to the first two generations of quarks are suppressed, in which case the production in hadron colliders relies on tree-level processes in association with heavy flavors or one-loop processes in association with a jet. We consider the low-energy effective theory of a top-philic Z' and present possible UV completions. We clarify theoretical subtleties in evaluating the production of a top-philic Z' at the LHC and examine carefully the treatment of an anomalous Z' current in the low-energy effective theory. Recipes for properly computing the production rate in the Z' + j channel are given. We discuss constraints from colliders and low-energy probes of new physics. As an application, we apply these considerations to models that use a weak-scale Z' to explain possible violations of lepton universality in B meson decays, and show that the future running of a high-luminosity LHC can potentially cover much of the remaining parameter space favored by this particular interpretation of the B physics anomaly.
Semivisible Jets: Dark Matter Undercover at the LHC.
Cohen, Timothy; Lisanti, Mariangela; Lou, Hou Keong
2015-10-23
Dark matter may be a composite particle that is accessible via a weakly coupled portal. If these hidden-sector states are produced at the Large Hadron Collider (LHC), they would undergo a QCD-like shower. This would result in a spray of stable invisible dark matter along with unstable states that decay back to the standard model. Such "semivisible" jets arise, for example, when their production and decay are driven by a leptophobic Z' resonance; the resulting signature is characterized by significant missing energy aligned along the direction of one of the jets. These events are vetoed by the current suite of searches employed by the LHC, resulting in low acceptance. This Letter will demonstrate that the transverse mass, computed using the final-state jets and the missing energy, provides a powerful discriminator between the signal and the QCD background. Assuming that the Z' couples to the standard model quarks with the same strength as the Z0, the proposed search can discover (exclude) Z' masses up to 2.5 TeV (3.5 TeV) with 100 fb^-1 of 14 TeV data at the LHC.
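For reference, the discriminating variable described above is the standard transverse mass built from the visible (dijet) system and the missing transverse momentum; writing the dijet invariant mass as m_jj and its transverse momentum as p_T,jj, it takes the usual form below. This is a textbook definition, so the precise jet selection used in the Letter is not implied here.

M_T^2 = m_{jj}^2 + 2\left(\sqrt{m_{jj}^2 + p_{T,jj}^2}\; E_T^{\mathrm{miss}} - \vec{p}_{T,jj}\cdot\vec{E}_T^{\,\mathrm{miss}}\right)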
The ATLAS PanDA Pilot in Operation
NASA Astrophysics Data System (ADS)
Nilsson, P.; Caballero, J.; De, K.; Maeno, T.; Stradling, A.; Wenaus, T.; ATLAS Collaboration
2011-12-01
The Production and Distributed Analysis system (PanDA) [1-2] was designed to meet ATLAS [3] requirements for a data-driven workload management system capable of operating at LHC data processing scale. Submitted jobs are executed on worker nodes by pilot jobs sent to the grid sites by pilot factories. This paper provides an overview of the PanDA pilot [4] system and presents major features added in light of recent operational experience, including multi-job processing, advanced job recovery for jobs with output storage failures, gLExec [5-6] based identity switching from the generic pilot to the actual user, and other security measures. The PanDA system serves all ATLAS distributed processing and is the primary system for distributed analysis; it is currently used at over 100 sites worldwide. We analyze the performance of the pilot system in processing real LHC data on the OSG [7], EGI [8] and Nordugrid [9-10] infrastructures used by ATLAS, and describe plans for its evolution.
The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi
The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as central HDFS and Hadoop Spark cluster. The system leverages modern technologies such as a document-oriented database and the Hadoop eco-system to provide the necessary flexibility to reliably process, store, and aggregate O(1M) documents on a daily basis. We describe the data transformation, the short and long term storage layers, the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.
The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC
Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi
2018-03-19
The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as central HDFS and Hadoop Spark cluster. The system leverages modern technologies such as a document-oriented database and the Hadoop eco-system to provide the necessary flexibility to reliably process, store, and aggregate O(1M) documents on a daily basis. We describe the data transformation, the short and long term storage layers, the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.
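The daily aggregation described above is essentially a group-by over a large collection of JSON job-report documents stored on HDFS. The sketch below shows what such a step can look like in PySpark; the HDFS path and the field names (site, cpu_time) are hypothetical placeholders, not the actual WMArchive schema.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fwjr-aggregation").getOrCreate()

# Hypothetical location and schema of the framework job report documents.
docs = spark.read.json("hdfs:///cms/wmarchive/fwjr/2018/01/*.json")

summary = (docs.groupBy("site")
               .agg(F.count("*").alias("n_jobs"),
                    F.avg("cpu_time").alias("avg_cpu_seconds")))

summary.show()          # per-site job counts and average CPU time
spark.stop()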
Towards future circular colliders
NASA Astrophysics Data System (ADS)
Benedikt, Michael; Zimmermann, Frank
2016-09-01
The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) presently provides proton-proton collisions at a center-of-mass (c.m.) energy of 13 TeV. The LHC design was started more than 30 years ago, and its physics program will extend through the second half of the 2030s. The global Future Circular Collider (FCC) study is now preparing for a post-LHC project. The FCC study focuses on the design of a 100-TeV hadron collider (FCC-hh) in a new ~100 km tunnel. It also includes the design of a high-luminosity electron-positron collider (FCC-ee) as a potential intermediate step, and a lepton-hadron collider option (FCC-he). The scope of the FCC study comprises accelerators, technology, infrastructure, detectors, physics, concepts for worldwide data services, international governance models, and implementation scenarios. Among the FCC core technologies are 16-T dipole magnets, based on Nb3Sn superconductor, for the FCC-hh hadron collider, and a highly efficient superconducting radiofrequency system for the FCC-ee lepton collider. Following the FCC concept, the Institute of High Energy Physics (IHEP) in Beijing has initiated a parallel design study for an e+e- Higgs factory in China (CEPC), which is to be succeeded by a high-energy hadron collider (SPPC). At present a tunnel circumference of 54 km and a hadron collider c.m. energy of about 70 TeV are being considered. After a brief look at the LHC, this article reports the motivation and the present status of the FCC study, some of the primary design challenges and R&D subjects, as well as the emerging global collaboration.
NASA Astrophysics Data System (ADS)
Castro, Andrew; Alice-Usa Collaboration; Alice-Tpc Collaboration
2017-09-01
The Time Projection Chamber (TPC) currently used by ALICE (A Large Ion Collider Experiment at CERN) is a gaseous tracking detector used to study both proton-proton and heavy-ion collisions at the Large Hadron Collider (LHC). In order to accommodate the higher-luminosity collisions planned for LHC Run-3 starting in 2021, the ALICE TPC will undergo a major upgrade during the next LHC shutdown. The TPC is limited to a readout rate of 1000 Hz in minimum bias events due to the intrinsic dead time associated with ion backflow in the multi-wire proportional chambers (MWPCs) of the TPC. The TPC upgrade will handle the increase in event readout to 50 kHz for heavy-ion minimum bias triggered events expected with the Run-3 luminosity by replacing the MWPCs with a stack of four Gaseous Electron Multiplier (GEM) foils. The GEM layers will combine different hole pitches to reduce the dead time while maintaining the current spatial and energy resolution of the existing TPC. Undertaking the upgrade of the TPC represents a massive endeavor in terms of design, production, construction, quality assurance, and installation, thus the upgrade is coordinated over a number of institutes worldwide. The talk will cover the physics motivation for the upgrade, the ALICE-USA contribution to the construction of Inner Read Out Chambers (IROCs), and QA from the first chambers built in the U.S.
MonALISA, an agent-based monitoring and control system for the LHC experiments
NASA Astrophysics Data System (ADS)
Balcas, J.; Kcira, D.; Mughal, A.; Newman, H.; Spiropulu, M.; Vlimant, J. R.
2017-10-01
MonALISA, which stands for Monitoring Agents using a Large Integrated Services Architecture, has been developed over the last fifteen years by the California Institute of Technology (Caltech) and its partners with the support of the software and computing program of the CMS and ALICE experiments at the Large Hadron Collider (LHC). The framework is based on a Dynamic Distributed Service Architecture and is able to provide complete system monitoring, performance metrics of applications, jobs or services, system control, and global optimization services for complex systems. A short overview and status of MonALISA is given in this paper.
Development, deployment and operations of ATLAS databases
NASA Astrophysics Data System (ADS)
Vaniachine, A. V.; Schmitt, J. G. v. d.
2008-07-01
In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services.
LEMON - LHC Era Monitoring for Large-Scale Infrastructures
NASA Astrophysics Data System (ADS)
Marian, Babik; Ivan, Fedorko; Nicholas, Hook; Hector, Lansdale Thomas; Daniel, Lenkes; Miroslav, Siket; Denis, Waldron
2011-12-01
At the present time computer centres are facing a massive rise in virtualization and cloud computing, as these solutions bring advantages to service providers and consolidate the computer centre resources. However, as a result the monitoring complexity is increasing. Computer centre management requires not only monitoring servers, network equipment and associated software but also collecting additional environment and facilities data (e.g. temperature, power consumption, cooling efficiency, etc.) to have a good overview of the infrastructure performance. The LHC Era Monitoring (Lemon) system addresses these requirements for a very large scale infrastructure. The Lemon agent that collects data on every client and forwards the samples to the central measurement repository provides a flexible interface that allows rapid development of new sensors. The system can also report on behalf of remote devices such as switches and power supplies. Online and historical data can be visualized via a web-based interface or retrieved via command-line tools. The Lemon Alarm System component can be used for notifying the operator about error situations. In this article, an overview of the Lemon monitoring system is provided together with a description of the CERN LEMON production instance. No direct comparison is made with other monitoring tools.
Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing
Klimentov, A.; Buncic, P.; De, K.; ...
2015-05-22
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. Finally, we will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.
Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klimentov, A.; Buncic, P.; De, K.
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. Finally, we will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.
Top-philic Z' forces at the LHC
Fox, Patrick J.; Low, Ian; Northwestern Univ., Evanston, IL; ...
2018-03-13
Despite extensive searches for an additional neutral massive gauge boson at the LHC, a Z' at the weak scale could still be present if its couplings to the first two generations of quarks are suppressed, in which case the production in hadron colliders relies on tree-level processes in association with heavy flavors or one-loop processes in association with a jet. Here, we consider the low-energy effective theory of a top-philic Z' and present possible UV completions. We clarify theoretical subtleties in evaluating the production of a top-philic Z' at the LHC and examine carefully the treatment of an anomalous Z' current in the low-energy effective theory. Recipes for properly computing the production rate in the Z' + j channel are given. We discuss constraints from colliders and low-energy probes of new physics. As an application, we apply these considerations to models that use a weak-scale Z' to explain possible violations of lepton universality in B meson decays, and show that the future running of a high luminosity LHC can potentially cover much of the remaining parameter space favored by this particular interpretation of the B physics anomaly.
Setting Up a Grid-CERT: Experiences of an Academic CSIRT
ERIC Educational Resources Information Center
Moller, Klaus
2007-01-01
Purpose: Grid computing has often been heralded as the next logical step after the worldwide web. Users of grids can access dynamic resources such as computer storage and use the computing resources of computers under the umbrella of a virtual organisation. Although grid computing is often compared to the worldwide web, it is vastly more complex…
NASA Astrophysics Data System (ADS)
Paganini, Michela; de Oliveira, Luke; Nachman, Benjamin
2018-01-01
Physicists at the Large Hadron Collider (LHC) rely on detailed simulations of particle collisions to build expectations of what experimental data may look like under different theoretical modeling assumptions. Petabytes of simulated data are needed to develop analysis techniques, though they are expensive to generate using existing algorithms and computing resources. The modeling of detectors and the precise description of particle cascades as they interact with the material in the calorimeter are the most computationally demanding steps in the simulation pipeline. We therefore introduce a deep neural network-based generative model to enable high-fidelity, fast, electromagnetic calorimeter simulation. There are still challenges for achieving precision across the entire phase space, but our current solution can reproduce a variety of particle shower properties while achieving speedup factors of up to 100 000×. This opens the door to a new era of fast simulation that could save significant computing time and disk space, while extending the reach of physics searches and precision measurements at the LHC and beyond.
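To make the idea of a generative fast-simulation surrogate concrete, the sketch below defines a toy generator network that maps latent noise vectors to calorimeter cell-energy images; the architecture, layer sizes and the 12x12 cell grid are purely illustrative assumptions and bear no relation to the actual model or training procedure used by the authors.

import torch
import torch.nn as nn

class ToyCaloGenerator(nn.Module):
    """Maps latent noise to a flattened grid of non-negative cell energies."""
    def __init__(self, latent_dim=64, n_cells=12 * 12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, n_cells), nn.ReLU(),  # ReLU keeps cell energies >= 0
        )

    def forward(self, z):
        return self.net(z)

gen = ToyCaloGenerator()
z = torch.randn(1000, 64)        # latent noise for 1000 synthetic showers
showers = gen(z)                 # shape (1000, 144): one cell-energy image each
print(showers.shape)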
A high performance hierarchical storage management system for the Canadian tier-1 centre at TRIUMF
NASA Astrophysics Data System (ADS)
Deatrich, D. C.; Liu, S. X.; Tafirout, R.
2010-04-01
We describe in this paper the design and implementation of Tapeguy, a high performance non-proprietary Hierarchical Storage Management (HSM) system which is interfaced to dCache for efficient tertiary storage operations. The system has been successfully implemented at the Canadian Tier-1 Centre at TRIUMF. The ATLAS experiment will collect a large amount of data (approximately 3.5 Petabytes each year). An efficient HSM system will play a crucial role in the success of the ATLAS Computing Model, which is driven by intensive large-scale data analysis activities that will be performed on the Worldwide LHC Computing Grid infrastructure continuously. Tapeguy is Perl-based. It controls and manages data and tape libraries. Its architecture is scalable and includes Dataset Writing control, a Read-back Queuing mechanism and I/O tape drive load balancing, as well as on-demand allocation of resources. A central MySQL database records metadata information for every file and transaction (for audit and performance evaluation), as well as an inventory of library elements. Tapeguy Dataset Writing was implemented to group files which are close in time and of similar type. Optional dataset path control dynamically allocates tape families and assigns tapes to them. Tape flushing is based on various strategies: time, threshold or external callback mechanisms. Tapeguy Read-back Queuing reorders all read requests using an elevator algorithm, avoiding unnecessary tape loading and unloading. The implementation of priorities will guarantee file delivery to all clients in a timely manner.
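The read-back reordering described above is essentially an elevator (SCAN) scheduling problem: group requests by tape to avoid remounts, then sweep each tape from the current head position in one direction before turning around. The sketch below is an illustrative Python rendering of that idea (Tapeguy itself is Perl-based); the request and position data structures are hypothetical.

from collections import defaultdict

def order_read_requests(requests, head_positions):
    """Reorder read requests elevator-style.

    requests:       list of dicts like {"tape": "T0123", "position": 4521, "file": "..."}
    head_positions: dict mapping tape id -> current head position on that tape
    Returns the requests grouped by tape, each tape swept forward from the
    current position and then backward, to minimise seeks and remounts.
    """
    by_tape = defaultdict(list)
    for req in requests:
        by_tape[req["tape"]].append(req)

    ordered = []
    for tape, reqs in by_tape.items():
        head = head_positions.get(tape, 0)
        ahead = sorted((r for r in reqs if r["position"] >= head),
                       key=lambda r: r["position"])
        behind = sorted((r for r in reqs if r["position"] < head),
                        key=lambda r: r["position"], reverse=True)
        ordered.extend(ahead + behind)
    return ordered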
HammerCloud: A Stress Testing System for Distributed Analysis
NASA Astrophysics Data System (ADS)
van der Ster, Daniel C.; Elmsheuser, Johannes; Úbeda García, Mario; Paladin, Massimo
2011-12-01
Distributed analysis of LHC data is an I/O-intensive activity which places large demands on the internal network, storage, and local disks at remote computing facilities. Commissioning and maintaining a site to provide an efficient distributed analysis service is therefore a challenge which can be aided by tools to help evaluate a variety of infrastructure designs and configurations. HammerCloud is one such tool; it is a stress testing service which is used by central operations teams, regional coordinators, and local site admins to (a) submit an arbitrary number of analysis jobs to a number of sites, (b) maintain a predefined number of jobs running at steady state at the sites under test, (c) produce web-based reports summarizing the efficiency and performance of the sites under test, and (d) present a web interface for historical test results to both evaluate progress and compare sites. HammerCloud was built around the distributed analysis framework Ganga, exploiting its API for grid job management. HammerCloud has been employed by the ATLAS experiment for continuous testing of many sites worldwide, and also during large-scale computing challenges such as STEP'09 and UAT'09, where the scale of the tests exceeded 10,000 concurrently running jobs and 1,000,000 total jobs over multi-day periods. In addition, HammerCloud is being adopted by the CMS experiment; the plugin structure of HammerCloud allows the execution of CMS jobs using their official tool (CRAB).
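Functionality (b) above, keeping a fixed number of test jobs alive at each site, is conceptually a simple control loop: poll the number of active jobs and top up the difference. The sketch below illustrates that loop in Python; submit_job and count_active are placeholder callables standing in for whatever Ganga-based backend actually submits and tracks the jobs, and the target and polling interval are arbitrary assumptions.

import time

def maintain_steady_state(site, submit_job, count_active,
                          target_running=50, poll_seconds=300):
    """Keep roughly `target_running` test jobs active at `site`.

    submit_job(site)   -> submits one analysis test job (placeholder)
    count_active(site) -> number of currently running/pending test jobs (placeholder)
    """
    while True:
        deficit = target_running - count_active(site)
        for _ in range(max(deficit, 0)):
            submit_job(site)
        time.sleep(poll_seconds)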
The Legnaro-Padova distributed Tier-2: challenges and results
NASA Astrophysics Data System (ADS)
Badoer, Simone; Biasotto, Massimo; Costa, Fulvia; Crescente, Alberto; Fantinel, Sergio; Ferrari, Roberto; Gulmini, Michele; Maron, Gaetano; Michelotto, Michele; Sgaravatto, Massimo; Toniolo, Nicola
2014-06-01
The Legnaro-Padova Tier-2 is a computing facility serving the ALICE and CMS LHC experiments. It also supports other High Energy Physics experiments and other virtual organizations of different disciplines, which can opportunistically harness idle resources if available. The unique characteristic of this Tier-2 is its topology: the computational resources are spread over two different sites, about 15 km apart: the INFN Legnaro National Laboratories and the INFN Padova unit, connected through a 10 Gbps network link (soon to be upgraded to 20 Gbps). Nevertheless these resources are seamlessly integrated and are exposed as a single computing facility. Despite this intrinsic complexity, the Legnaro-Padova Tier-2 ranks among the best Grid sites in terms of reliability and availability. The Tier-2 comprises about 190 worker nodes, providing about 26000 HS06 in total. These computing nodes are managed by the LSF local resource management system, and are accessible through a Grid-based interface implemented via multiple CREAM CE front-ends. dCache, xrootd and Lustre are the storage systems in use at the Tier-2: about 1.5 PB of disk space is available to users in total, through multiple access protocols. A 10 Gbps network link, planned to be doubled in the coming months, connects the Tier-2 to the WAN. This link is used for the LHC Open Network Environment (LHCONE) and for other general-purpose traffic. In this paper we discuss the experience at the Legnaro-Padova Tier-2: the problems that had to be addressed, the lessons learned, and the implementation choices. We also present the tools used for daily management operations. These include DOCET, a Java-based web tool designed, implemented and maintained at the Legnaro-Padova Tier-2, and also deployed at other sites, such as the Italian LHC Tier-1. DOCET provides a uniform interface to manage all the information about the physical resources of a computing center. It is also used as a documentation repository available to the Tier-2 operations team. Finally, we discuss the foreseen developments of the existing infrastructure. This includes in particular the evolution from a Grid-based resource towards a Cloud-based computing facility.
Incoherent vector mesons production in PbPb ultraperipheral collisions at the LHC
NASA Astrophysics Data System (ADS)
Xie, Ya-Ping; Chen, Xurong
2017-03-01
The incoherent rapidity distributions of vector mesons are computed in the dipole model for PbPb ultraperipheral collisions at the CERN Large Hadron Collider (LHC). The IIM model, fitted to newer data, is employed in the dipole amplitude. The Boosted Gaussian and Gaus-LC wave functions for vector mesons are implemented in the calculations as well. Predictions for the J/ψ, ψ(2s), ρ and ϕ incoherent rapidity distributions are evaluated and compared with experimental data and other theoretical predictions in this paper. Our predictions of the incoherent rapidity distributions for J/ψ are closer to the experimental data than previous calculations in the IIM model.
Exclusive photoproduction of vector mesons in proton-lead ultraperipheral collisions at the LHC
NASA Astrophysics Data System (ADS)
Xie, Ya-Ping; Chen, Xurong
2018-02-01
Rapidity distributions of vector mesons are computed in the dipole model for proton-lead ultraperipheral collisions (UPCs) at the CERN Large Hadron Collider (LHC). The dipole model framework is implemented in the calculation of cross sections for the photon-hadron interaction. The bCGC model and Boosted Gaussian wave functions are employed in the scattering amplitude. We obtain predictions of the rapidity distributions of the J/ψ meson in proton-lead ultraperipheral collisions. The predictions give a good description of the ALICE experimental data. The rapidity distributions of ϕ, ω and ψ(2s) mesons in proton-lead ultraperipheral collisions are also presented in this paper.
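Neither of the two dipole-model abstracts above spells out the master formula, but ultraperipheral-collision calculations of this kind conventionally start from the photon-flux factorization below, in which either beam particle may act as the photon source; the photon flux dN_γ/dω and the photon-hadron cross section are the model-dependent ingredients (schematic form, not copied from the papers).

```latex
% Schematic rapidity distribution for exclusive vector-meson photoproduction
% in an ultraperipheral h1 h2 collision; omega_{1,2} are the photon energies
% selected by the meson rapidity y and mass M_V.
\frac{d\sigma}{dy}\bigl(h_1 h_2 \to h_1\, V\, h_2\bigr)
  = \omega_1\,\frac{dN_\gamma^{h_1}}{d\omega}(\omega_1)\,
    \sigma_{\gamma h_2 \to V h_2}(\omega_1)
  + \omega_2\,\frac{dN_\gamma^{h_2}}{d\omega}(\omega_2)\,
    \sigma_{\gamma h_1 \to V h_1}(\omega_2),
\qquad
\omega_{1,2} = \frac{M_V}{2}\, e^{\pm y}.
```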
Beyond the Large Hadron Collider: A First Look at Cryogenics for CERN Future Circular Colliders
NASA Astrophysics Data System (ADS)
Lebrun, Philippe; Tavian, Laurent
Following the first experimental discoveries at the Large Hadron Collider (LHC) and the recent update of the European strategy in particle physics, CERN has undertaken an international study of possible future circular colliders beyond the LHC. The study, conducted with the collaborative participation of interested institutes world-wide, considers several options for very high energy hadron-hadron, electron-positron and hadron-electron colliders to be installed in a quasi-circular underground tunnel in the Geneva basin, with a circumference of 80 km to 100 km. All these machines would make intensive use of advanced superconducting devices, i.e. high-field bending and focusing magnets and/or accelerating RF cavities, thus requiring large helium cryogenic systems operating at 4.5 K or below. Based on preliminary sets of parameters and layouts for the particle colliders under study, we discuss the main challenges of their cryogenic systems and present first estimates of the cryogenic refrigeration capacities required, with emphasis on the qualitative and quantitative steps to be accomplished with respect to the present state-of-the-art.
NASA Astrophysics Data System (ADS)
Buchalla, G.; Komatsubara, T. K.; Muheim, F.; Silvestrini, L.; Artuso, M.; Asner, D. M.; Ball, P.; Baracchini, E.; Bell, G.; Beneke, M.; Berryhill, J.; Bevan, A.; Bigi, I. I.; Blanke, M.; Bobeth, Ch.; Bona, M.; Borzumati, F.; Browder, T.; Buanes, T.; Buchmüller, O.; Buras, A. J.; Burdin, S.; Cassel, D. G.; Cavanaugh, R.; Ciuchini, M.; Colangelo, P.; Crosetti, G.; Dedes, A.; de Fazio, F.; Descotes-Genon, S.; Dickens, J.; Doležal, Z.; Dürr, S.; Egede, U.; Eggel, C.; Eigen, G.; Fajfer, S.; Feldmann, Th.; Ferrandes, R.; Gambino, P.; Gershon, T.; Gibson, V.; Giorgi, M.; Gligorov, V. V.; Golob, B.; Golutvin, A.; Grossman, Y.; Guadagnoli, D.; Haisch, U.; Hazumi, M.; Heinemeyer, S.; Hiller, G.; Hitlin, D.; Huber, T.; Hurth, T.; Iijima, T.; Ishikawa, A.; Isidori, G.; Jäger, S.; Khodjamirian, A.; Koppenburg, P.; Lagouri, T.; Langenegger, U.; Lazzeroni, C.; Lenz, A.; Lubicz, V.; Lucha, W.; Mahlke, H.; Melikhov, D.; Mescia, F.; Misiak, M.; Nakao, M.; Napolitano, J.; Nikitin, N.; Nierste, U.; Oide, K.; Okada, Y.; Paradisi, P.; Parodi, F.; Patel, M.; Petrov, A. A.; Pham, T. N.; Pierini, M.; Playfer, S.; Polesello, G.; Policicchio, A.; Poschenrieder, A.; Raimondi, P.; Recksiegel, S.; Řezníček, P.; Robert, A.; Rosner, J. L.; Ruggiero, G.; Sarti, A.; Schneider, O.; Schwab, F.; Simula, S.; Sivoklokov, S.; Slavich, P.; Smith, C.; Smizanska, M.; Soni, A.; Speer, T.; Spradlin, P.; Spranger, M.; Starodumov, A.; Stech, B.; Stocchi, A.; Stone, S.; Tarantino, C.; Teubert, F.; T'jampens, S.; Toms, K.; Trabelsi, K.; Trine, S.; Uhlig, S.; Vagnoni, V.; van Hunen, J. J.; Weiglein, G.; Weiler, A.; Wilkinson, G.; Xie, Y.; Yamauchi, M.; Zhu, G.; Zupan, J.; Zwicky, R.
2008-09-01
The present report documents the results of Working Group 2: B, D and K decays, of the workshop on Flavor in the Era of the LHC, held at CERN from November 2005 through March 2007. With the advent of the LHC, we will be able to probe New Physics (NP) up to energy scales almost one order of magnitude larger than has been possible with present accelerator facilities. While direct detection of new particles will be the main avenue to establish the presence of NP at the LHC, indirect searches will provide precious complementary information, since most probably it will not be possible to measure the full spectrum of new particles and their couplings through direct production. In particular, precision measurements and computations in the realm of flavor physics are expected to play a key role in constraining the unknown parameters of the Lagrangian of any NP model emerging from direct searches at the LHC. The aim of Working Group 2 was twofold: on the one hand, to provide a coherent up-to-date picture of the status of flavor physics before the start of the LHC; on the other hand, to initiate activities on the path towards integrating information on NP from high-p_T and flavor data. This report is organized as follows: in Sect. 1, we give an overview of NP models, focusing on a few examples that have been discussed in some detail during the workshop, with a short description of the available computational tools for flavor observables in NP models. Section 2 contains a concise discussion of the main theoretical problem in flavor physics: the evaluation of the relevant hadronic matrix elements for weak decays. Section 3 contains a detailed discussion of NP effects in a set of flavor observables that we identified as “benchmark channels” for NP searches. The experimental prospects for flavor physics at future facilities are discussed in Sect. 4. Finally, Sect. 5 contains some assessments on the work done at the workshop and the prospects for future developments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dykstra, D.; Blomer, J.
Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
Collider Signal II: Missing ET Signatures and Dark Matter Connection
NASA Astrophysics Data System (ADS)
Baer, Howard
2010-08-01
These lectures give an overview of aspects of missing ET signatures from new physics at the LHC, along with their important connection to dark matter physics. Mostly, I will concentrate on supersymmetric (SUSY) sources of ɆT, but will also mention Little Higgs models with T-parity (LHT) and universal extra dimensions (UED) models with KK-parity. Lecture 1 covers SUSY basics, model building and spectra computation. Lecture 2 addresses sparticle production and decay mechanisms at hadron colliders and event generation. Lecture 3 covers SUSY signatures at LHC, along with LHT and UED signatures for comparison. In Lecture 4, I address the dark matter connection, and how direct and indirect dark matter searches, along with LHC collider searches, may allow us to both discover and characterize dark matter in the next several years. Finally, the interesting scenario of Yukawa-unified SUSY is examined; this case works best if the dark matter turns out to be a mixture of axion/axino states, rather than neutralinos.
Evolution of the ATLAS PanDA workload management system for exascale computational science
NASA Astrophysics Data System (ADS)
Maeno, T.; De, K.; Klimentov, A.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.; Yu, D.; Atlas Collaboration
2014-06-01
An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated at a very large scale the value of automated dynamic brokering of diverse workloads across distributed computing resources. The next generation of PanDA will allow other data-intensive sciences and a wider exascale community employing a variety of computing platforms to benefit from ATLAS' experience and proven tools.
GPU-accelerated track reconstruction in the ALICE High Level Trigger
NASA Astrophysics Data System (ADS)
Rohr, David; Gorbunov, Sergey; Lindenstruth, Volker;
2017-10-01
ALICE (A Large Ion Collider Experiment) is one of the four major experiments at the Large Hadron Collider (LHC) at CERN. The High Level Trigger (HLT) is an online compute farm which reconstructs events measured by the ALICE detector in real time. The most compute-intensive part is the reconstruction of particle trajectories, called tracking, and the most important detector for tracking is the Time Projection Chamber (TPC). The HLT uses a GPU-accelerated algorithm for TPC tracking that is based on the Cellular Automaton principle and on the Kalman filter. The GPU tracking has been running in 24/7 operation since 2012 in LHC Run 1 and 2. In order to better leverage the potential of the GPUs, and speed up the overall HLT reconstruction, we plan to bring more reconstruction steps (e.g. the tracking for other detectors) onto the GPUs. There are several tasks currently running on the CPU that could benefit from cooperation with the tracking, which is hardly feasible at the moment due to the delay of the PCI Express transfers. Moving more steps onto the GPU, and processing them on the GPU at once, will reduce PCI Express transfers and free up CPU resources. On top of that, modern GPUs and GPU programming APIs provide new features which are not yet exploited by the TPC tracking. We present our new developments for GPU reconstruction, both with a focus on the online reconstruction on GPU for the online-offline computing upgrade in ALICE during LHC Run 3, and also taking into account how the current HLT in Run 2 can profit from these improvements.
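The abstract names the two algorithmic building blocks, Cellular Automaton seeding and Kalman-filter fitting, without further detail. For orientation, a single generic Kalman predict-plus-update step in the textbook form used for track fitting looks like the NumPy sketch below; this is not the ALICE HLT GPU code, and all matrices are problem-specific inputs.

```python
# Schematic Kalman-filter step as used in track fitting (state propagation
# followed by a measurement update). Generic textbook form, not the ALICE code.
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    """One predict+update step.
    x, P : track state vector and its covariance
    F, Q : propagation (transport) matrix and process noise
    H, R : measurement model and measurement noise
    z    : measured cluster position
    """
    # Predict: transport the state to the next detector layer.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: combine the prediction with the new measurement.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```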
A compression algorithm for the combination of PDF sets.
Carrazza, Stefano; Latorre, José I; Rojo, Juan; Watt, Graeme
The current PDF4LHC recommendation to estimate uncertainties due to parton distribution functions (PDFs) in theoretical predictions for LHC processes involves the combination of separate predictions computed using PDF sets from different groups, each of which comprises a relatively large number of either Hessian eigenvectors or Monte Carlo (MC) replicas. While many fixed-order and parton shower programs allow the evaluation of PDF uncertainties for a single PDF set at no additional CPU cost, this feature is not universal, and, moreover, the a posteriori combination of the predictions using at least three different PDF sets is still required. In this work, we present a strategy for the statistical combination of individual PDF sets, based on the MC representation of Hessian sets, followed by a compression algorithm for the reduction of the number of MC replicas. We illustrate our strategy with the combination and compression of the recent NNPDF3.0, CT14 and MMHT14 NNLO PDF sets. The resulting compressed Monte Carlo PDF sets are validated at the level of parton luminosities and LHC inclusive cross sections and differential distributions. We determine that around 100 replicas provide an adequate representation of the probability distribution for the original combined PDF set, suitable for general applications to LHC phenomenology.
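The compression step can be pictured as selecting a subset of Monte Carlo replicas that preserves the statistical properties of the full combined set. The published algorithm minimizes a richer error function (including correlations and higher moments) with a genetic-algorithm search; the Python sketch below is a deliberately simplified greedy stand-in that tracks only the mean and standard deviation, with toy data and all names chosen for illustration.

```python
# Much-simplified illustration of replica compression: keep a subset of
# Monte Carlo replicas whose mean and standard deviation stay close to those
# of the full combined set. The real algorithm uses more estimators and a
# genetic-algorithm search; this greedy version only shows the principle.
import numpy as np

def compress_replicas(replicas, n_keep):
    """replicas: array of shape (n_rep, n_points) with PDF values on a grid."""
    full_mean = replicas.mean(axis=0)
    full_std = replicas.std(axis=0)
    chosen, remaining = [], list(range(len(replicas)))
    for _ in range(n_keep):
        best, best_err = None, None
        for idx in remaining:
            trial = replicas[chosen + [idx]]
            err = (np.abs(trial.mean(axis=0) - full_mean).sum()
                   + np.abs(trial.std(axis=0) - full_std).sum())
            if best_err is None or err < best_err:
                best, best_err = idx, err
        chosen.append(best)
        remaining.remove(best)
    return replicas[chosen]

# Toy example: compress 300 replicas down to 40.
rng = np.random.default_rng(0)
toy = rng.normal(size=(300, 50))
compressed = compress_replicas(toy, 40)
```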
Developments in the ATLAS Tracking Software ahead of LHC Run 2
NASA Astrophysics Data System (ADS)
Styles, Nicholas; Bellomo, Massimiliano; Salzburger, Andreas; ATLAS Collaboration
2015-05-01
After a hugely successful first run, the Large Hadron Collider (LHC) is currently in a shut-down period, during which essential maintenance and upgrades are being performed on the accelerator. The ATLAS experiment, one of the four large LHC experiments, has also used this period for consolidation and further developments of the detector and of its software framework, ahead of the new challenges that will be brought by the increased centre-of-mass energy and instantaneous luminosity in the next run period. This is of particular relevance for the ATLAS Tracking software, responsible for reconstructing the trajectory of charged particles through the detector, which faces a steep increase in CPU consumption due to the additional combinatorics of the high-multiplicity environment. The steps taken to mitigate this increase and stay within the available computing resources while maintaining the excellent performance of the tracking software in terms of the information provided to the physics analyses will be presented. Particular focus will be given to changes to the Event Data Model, replacement of the maths library, and adoption of a new persistent output format. The resulting CPU profiling results will be discussed, as well as the performance of the algorithms for physics processes under the expected conditions for the next LHC run.
Propagation of heavy baryons in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Das, Santosh K.; Torres-Rincon, Juan M.; Tolos, Laura; Minissale, Vincenzo; Scardina, Francesco; Greco, Vincenzo
2016-12-01
The drag and diffusion coefficients of heavy baryons (Λc and Λb) in the hadronic phase created in the latter stage of heavy-ion collisions at RHIC and LHC energies have been evaluated recently. In this work we compute some experimental observables, such as the nuclear suppression factor R_AA and the elliptic flow v2 of heavy baryons at RHIC and LHC energies, highlighting the role of the hadronic-phase contribution to these observables, which are going to be measured in Run 3 of the LHC. For the time evolution of the heavy quarks in the quark-gluon plasma (QGP) and of the heavy baryons in the hadronic phase, we use Langevin dynamics. For the hadronization of the heavy quarks to heavy baryons we employ Peterson fragmentation functions. We observe a strong suppression of both the Λc and Λb. We find that the hadronic medium has a sizable impact on the heavy-baryon elliptic flow, whereas the impact of hadronic medium rescattering is almost unnoticeable on the nuclear suppression factor. We evaluate the Λc/D ratio at RHIC and LHC. We find that the Λc/D ratio remains unaffected by the hadronic-phase rescattering, which makes it a novel probe of the QGP phase dynamics and of hadronization.
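The Langevin dynamics referred to above is, in discretized (Euler) form, the standard momentum update below; the drag coefficient Γ(p) and momentum-diffusion coefficient D(p) are the physics inputs computed by the authors for the QGP and hadronic phases, while the notation here is generic.

```latex
% Schematic Langevin update for heavy-quark / heavy-baryon momentum evolution:
% drag coefficient \Gamma, momentum-diffusion coefficient D, Gaussian noise
% \rho with unit variance.
p_i(t+\Delta t) = p_i(t) - \Gamma(p)\, p_i\, \Delta t
                  + \sqrt{2\, D(p)\, \Delta t}\;\rho_i ,
\qquad \langle \rho_i\, \rho_j \rangle = \delta_{ij}.
```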
Belle II grid computing: An overview of the distributed data management system.
NASA Astrophysics Data System (ADS)
Bansal, Vikas; Schram, Malachi; Belle II Collaboration
2017-01-01
The Belle II experiment at the SuperKEKB collider in Tsukuba, Japan, will start physics data taking in 2018 and will accumulate 50 ab−1 of e+e− collision data, about 50 times larger than the data set of the Belle experiment. The computing requirements of Belle II are comparable to those of a Run I LHC experiment. Computing at this scale requires efficient use of the compute grids in North America, Asia and Europe and will take advantage of upgrades to the high-speed global network. We present the architecture of data flow and data handling as a part of the Belle II computing infrastructure.
Higgs Boson Production in Association with a Jet at Next-to-Next-to-Leading Order.
Boughezal, Radja; Caola, Fabrizio; Melnikov, Kirill; Petriello, Frank; Schulze, Markus
2015-08-21
We present precise predictions for Higgs boson production in association with a jet. We work in the Higgs effective field theory framework and compute next-to-next-to-leading order QCD corrections to the gluon-gluon and quark-gluon channels, which is sufficient for reliable LHC phenomenology. We present fully differential results as well as total cross sections for the LHC. Our next-to-next-to-leading order predictions reduce the unphysical scale dependence by more than a factor of 2 and enhance the total rate by about twenty percent compared to next-to-leading order QCD predictions. Our results demonstrate for the first time satisfactory convergence of the perturbative series.
3D reconstruction from non-uniform point clouds via local hierarchical clustering
NASA Astrophysics Data System (ADS)
Yang, Jiaqi; Li, Ruibo; Xiao, Yang; Cao, Zhiguo
2017-07-01
Raw scanned 3D point clouds are usually irregularly distributed due to the essential shortcomings of laser sensors, which therefore poses a great challenge for high-quality 3D surface reconstruction. This paper tackles this problem by proposing a local hierarchical clustering (LHC) method to improve the consistency of the point distribution. Specifically, LHC consists of two steps: 1) adaptive octree-based decomposition of 3D space, and 2) hierarchical clustering. The former aims at reducing the computational complexity and the latter transforms the non-uniform point set into a uniform one. Experimental results on real-world scanned point clouds validate the effectiveness of our method from both qualitative and quantitative aspects.
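Purely as an illustration of the two-step idea (not the authors' implementation), the sketch below uses a uniform grid as a stand-in for the adaptive octree and SciPy hierarchical clustering inside each cell, replacing every cluster with its centroid so that over-dense regions are thinned; the cell size and merge distance are arbitrary assumptions.

```python
# Simplified sketch of the LHC idea: split the cloud into spatial cells
# (a uniform grid stands in for the adaptive octree), cluster within each
# cell, and keep one representative point per cluster to even out density.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def uniformize(points, cell_size=0.05, merge_dist=0.01):
    cells = {}
    for p in points:
        key = tuple((p // cell_size).astype(int))
        cells.setdefault(key, []).append(p)
    out = []
    for cell_points in cells.values():
        pts = np.asarray(cell_points)
        if len(pts) == 1:
            out.append(pts[0])
            continue
        labels = fcluster(linkage(pts, method="single"),
                          t=merge_dist, criterion="distance")
        for lab in np.unique(labels):
            out.append(pts[labels == lab].mean(axis=0))  # cluster centroid
    return np.asarray(out)

# Example: a dense patch and a sparse patch end up with comparable density.
rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal(0.0, 0.01, (500, 3)),
                   rng.uniform(0.2, 0.4, (50, 3))])
print(len(uniformize(cloud)))
```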
Current Grid operation and future role of the Grid
NASA Astrophysics Data System (ADS)
Smirnova, O.
2012-12-01
Grid-like technologies and approaches became an integral part of HEP experiments. Some other scientific communities also use similar technologies for data-intensive computations. The distinct feature of Grid computing is the ability to federate heterogeneous resources of different ownership into a seamless infrastructure, accessible via a single log-on. Like other infrastructures of similar nature, Grid functioning requires not only a technologically sound basis, but also reliable operation procedures, monitoring and accounting. The two aspects, technological and operational, are closely related: the weaker the technology, the more burden falls on operations, and vice versa. As of today, Grid technologies are still evolving: at CERN alone, every LHC experiment uses its own Grid-like system. This inevitably creates a heavy load on operations. Infrastructure maintenance, monitoring and incident response are done on several levels, from local system administrators to large international organisations, involving massive human effort worldwide. The necessity to commit substantial resources is one of the obstacles faced by smaller research communities when moving computing to the Grid. Moreover, most current Grid solutions were developed under significant influence of HEP use cases, and thus need additional effort to adapt them to other applications. Reluctance of many non-HEP researchers to use the Grid negatively affects the outlook for national Grid organisations, which strive to provide multi-science services. We started from a situation where Grid organisations were fused with HEP laboratories and national HEP research programmes; we hope to move towards a world where the Grid will ultimately reach the status of a generic public computing and storage service provider, and permanent national and international Grid infrastructures will be established. How far we will be able to advance along this path depends on us. If no standardisation and convergence efforts take place, the Grid will become limited to HEP; if, however, the current multitude of Grid-like systems converges towards a generic, modular and extensible solution, the Grid will become true to its name.
Design of the protoDUNE raw data management infrastructure
Fuess, S.; Illingworth, R.; Mengel, M.; ...
2017-10-01
The Deep Underground Neutrino Experiment (DUNE) will employ a set of Liquid Argon Time Projection Chambers (LArTPC) with a total mass of 40 kt as the main components of its Far Detector. In order to validate this technology and characterize the detector performance at full scale, an ambitious experimental program (called “protoDUNE”) has been initiated which includes a test of the large-scale prototypes for the single-phase and dual-phase LArTPC technologies, which will run in a beam at CERN. The total raw data volume that is slated to be collected during the scheduled 3-month beam run is estimated to be in excess of 2.5 PB for each detector. This data volume will require that the protoDUNE experiment carefully design the DAQ, data handling and data quality monitoring systems to be capable of dealing with the challenges inherent in peta-scale data management while simultaneously fulfilling the requirements of disseminating the data to a worldwide collaboration and DUNE-associated computing sites. In this paper, we present our approach to solving these problems by leveraging the design, expertise and components created for the LHC and Intensity Frontier experiments into a unified architecture that is capable of meeting the needs of protoDUNE.
Design of the protoDUNE raw data management infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuess, S.; Illingworth, R.; Mengel, M.
The Deep Underground Neutrino Experiment (DUNE) will employ a set of Liquid Argon Time Projection Chambers (LArTPC) with a total mass of 40 kt as the main components of its Far Detector. In order to validate this technology and characterize the detector performance at full scale, an ambitious experimental program (called “protoDUNE”) has been initiated which includes a test of the large-scale prototypes for the single-phase and dual-phase LArTPC technologies, which will run in a beam at CERN. The total raw data volume that is slated to be collected during the scheduled 3-month beam run is estimated to be in excess of 2.5 PB for each detector. This data volume will require that the protoDUNE experiment carefully design the DAQ, data handling and data quality monitoring systems to be capable of dealing with the challenges inherent in peta-scale data management while simultaneously fulfilling the requirements of disseminating the data to a worldwide collaboration and DUNE-associated computing sites. In this paper, we present our approach to solving these problems by leveraging the design, expertise and components created for the LHC and Intensity Frontier experiments into a unified architecture that is capable of meeting the needs of protoDUNE.
NASA Astrophysics Data System (ADS)
Grigoras, Costin; Carminati, Federico; Vladimirovna Datskova, Olga; Schreiner, Steffen; Lee, Sehoon; Zhu, Jianlin; Gheata, Mihaela; Gheata, Andrei; Saiz, Pablo; Betev, Latchezar; Furano, Fabrizio; Mendez Lorenzo, Patricia; Grigoras, Alina Gabriela; Bagnasco, Stefano; Peters, Andreas Joachim; Saiz Santos, Maria Dolores
2011-12-01
With the LHC and ALICE entering full operation and production mode, the amount of Simulation and RAW data processing and end-user analysis computational tasks is increasing. The efficient management of all these tasks, all of which have large differences in lifecycle, amounts of processed data and methods to analyze the end result, required the development and deployment of new tools in addition to the already existing Grid infrastructure. To facilitate the management of the large-scale simulation and raw data reconstruction tasks, ALICE has developed a production framework called the Lightweight Production Manager (LPM). The LPM automatically submits jobs to the Grid based on triggers and conditions, for example after the completion of a physics run. It follows the evolution of the jobs and publishes the results on the web for worldwide access by the ALICE physicists. This framework is tightly integrated with the ALICE Grid framework AliEn. In addition to the publication of the job status, LPM also provides a fully authenticated interface to the AliEn Grid catalogue, to browse and download files, and in the near future will provide simple types of data analysis through ROOT plugins. The framework is also being extended to allow management of end-user jobs.
US LHCNet: Transatlantic Networking for the LHC and the U.S. HEP Community
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Harvey B; Barczyk, Artur J
2013-04-05
US LHCNet provides the transatlantic connectivity between the Tier1 computing facilities at the Fermilab and Brookhaven National Labs and the Tier0 and Tier1 facilities at CERN, as well as Tier1s elsewhere in Europe and Asia. Together with ESnet, Internet2, and other R&E Networks participating in the LHCONE initiative, US LHCNet also supports transatlantic connections between the Tier2 centers (where most of the data analysis is taking place) and the Tier1s as needed. Given the key roles of the US and European Tier1 centers as well as Tier2 centers on both continents, the largest data flows are across the Atlantic, where US LHCNet has the major role. US LHCNet manages and operates the transatlantic network infrastructure including four Points of Presence (PoPs) and currently six transatlantic OC-192 (10 Gbps) leased links. Operating at the optical layer, the network provides a highly resilient fabric for data movement, with a target service availability level in excess of 99.95%. This level of resilience and seamless operation is achieved through careful design including path diversity on both submarine and terrestrial segments, use of carrier-grade equipment with built-in high-availability and redundancy features, deployment of robust failover mechanisms based on SONET protection schemes, as well as the design of facility-diverse paths between the LHC computing sites. The US LHCNet network provides services at Layer 1 (optical), Layer 2 (Ethernet) and Layer 3 (IPv4 and IPv6). The flexible design of the network, including modular equipment, a talented and agile team, and flexible circuit lease management, allows US LHCNet to react quickly to changing requirements from the LHC community. Network capacity is provisioned just-in-time to meet the needs, as demonstrated in the past years during the changing LHC start-up plans.
Diphoton production at the LHC: a QCD study up to NNLO
NASA Astrophysics Data System (ADS)
Catani, Stefano; Cieri, Leandro; de Florian, Daniel; Ferrera, Giancarlo; Grazzini, Massimiliano
2018-04-01
We consider the production of prompt-photon pairs at the LHC and we report on a study of QCD radiative corrections up to the next-to-next-to-leading order (NNLO). We present a detailed comparison of next-to-leading order (NLO) results obtained within the standard and smooth cone isolation criteria, by studying the dependence on the isolation parameters. We highlight the role of different partonic subprocesses within the two isolation criteria, and we show that they produce large radiative corrections for both criteria. Smooth cone isolation is a consistent procedure to compute QCD radiative corrections at NLO and beyond. If photon isolation is sufficiently tight, we show that the NLO results for the two isolation procedures are consistent with each other within their perturbative uncertainties. We then extend our study to NNLO by using smooth cone isolation. We discuss the impact of the NNLO corrections and the corresponding perturbative uncertainties for both fiducial cross sections and distributions, and we comment on the comparison with some LHC data. Throughout our study we remark on the main features that are produced by the kinematical selection cuts that are applied to the photons. In particular, we examine soft-gluon singularities that appear in the perturbative computations of the invariant mass distribution of the photon pair, the transverse-momentum spectra of the photons, and the fiducial cross section with asymmetric and symmetric photon transverse-momentum cuts, and we present their behaviour in analytic form.
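For reference, the smooth cone (Frixione) isolation mentioned above is conventionally written as the condition below, where E_T^max and n are the isolation parameters being varied (schematic textbook form, not copied from the paper).

```latex
% Smooth-cone (Frixione) photon isolation: inside every sub-cone of radius r
% within the isolation cone of radius R, the accompanying hadronic transverse
% energy must vanish smoothly as r -> 0.
\sum_{i\,\in\,\mathrm{cone}(r)} E_{T,i}^{\mathrm{had}}
  \;\le\; E_T^{\max}
  \left(\frac{1-\cos r}{1-\cos R}\right)^{n}
\qquad \text{for all } r \le R .
```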
Security in the CernVM File System and the Frontier Distributed Database Caching System
NASA Astrophysics Data System (ADS)
Dykstra, D.; Blomer, J.
2014-06-01
Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
How to deal with petabytes of data: the LHC Grid project
NASA Astrophysics Data System (ADS)
Britton, D.; Lloyd, S. L.
2014-06-01
We review the Grid computing system developed by the international community to deal with the petabytes of data coming from the Large Hadron Collider at CERN in Geneva, with particular emphasis on the ATLAS experiment and the UK Grid project, GridPP. Although these developments were started over a decade ago, this article explains their continued relevance as part of the ‘Big Data’ problem and how the Grid has been a forerunner of today's cloud computing.
NASA Astrophysics Data System (ADS)
Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.
2015-12-01
Cloud resources nowadays contribute an essential share of the resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteer computers (e.g. LHC@Home 2.0). In either case, experiments need to prepare a virtual machine image that provides the execution environment for the physics application at hand. The CernVM virtual machine, since version 3, is a minimal and versatile virtual machine image capable of booting different operating systems. The virtual machine image is less than 20 megabytes in size. The actual operating system is delivered on demand by the CernVM File System. CernVM 3 has matured from a prototype to a production environment. It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer computers, and as a container for the historic Scientific Linux 5 and Scientific Linux 4 based software environments in the course of the long-term data preservation efforts of the ALICE, CMS, and ALEPH experiments. We present experience and lessons learned from the use of CernVM at scale. We also provide an outlook on the upcoming developments. These include adding support for Scientific Linux 7, the use of container virtualization, such as that provided by Docker, and the streamlining of virtual machine contextualization towards the cloud-init industry standard.
Maltoni, Fabio; Vryonidou, Eleni; Zhang, Cen
2016-10-24
We present the results of the computation of the next-to-leading order QCD corrections to the production cross section of a Higgs boson in association with a top-antitop pair at the LHC, including the three relevant dimension-six operators (O tφ, O φG, O tG) of the standard model effective field theory. These operators also contribute to the production of Higgs bosons in loop-induced processes at the LHC, such as inclusive Higgs, Hj and HH production, and modify the Higgs decay branching ratios, for which we also provide predictions. We perform a detailed study of the cross sections and their uncertainties at the total as well as differential level and of the structure of the effective field theory at NLO, including renormalisation group effects. We show how the combination of information coming from measurements of these production processes will allow us to constrain the three operators at the current and future LHC runs. Finally, our results lead to a significant improvement of the accuracy and precision of the deviations expected from higher-dimensional operators in the SM in both the top-quark and the Higgs-boson sectors and provide a necessary ingredient for performing a global EFT fit to the LHC data at NLO accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maltoni, Fabio; Vryonidou, Eleni; Zhang, Cen
We present the results of the computation of the next-to-leading order QCD corrections to the production cross section of a Higgs boson in association with a top-antitop pair at the LHC, including the three relevant dimension-six operators (O tφ, O φG, O tG) of the standard model effective field theory. These operators also contribute to the production of Higgs bosons in loop-induced processes at the LHC, such as inclusive Higgs, Hj and HH production, and modify the Higgs decay branching ratios, for which we also provide predictions. We perform a detailed study of the cross sections and their uncertainties at the total as well as differential level and of the structure of the effective field theory at NLO, including renormalisation group effects. We show how the combination of information coming from measurements of these production processes will allow us to constrain the three operators at the current and future LHC runs. Finally, our results lead to a significant improvement of the accuracy and precision of the deviations expected from higher-dimensional operators in the SM in both the top-quark and the Higgs-boson sectors and provide a necessary ingredient for performing a global EFT fit to the LHC data at NLO accuracy.
Resummation of jet veto logarithms at N3LLa + NNLO for W+W- production at the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, S.; Jaiswal, P.; Li, Ye
We compute the resummed on-shell W+W- production cross section under a jet veto at the LHC to partial N3LL order matched to the fixed-order NNLO result. Differential NNLO cross sections are obtained from an implementation of qT subtraction in Sherpa. The two-loop virtual corrections to the qq¯→W+W- amplitude, used in both fixed-order and resummation predictions, are extracted from the public code qqvvamp. We perform resummation using soft collinear effective theory, with approximate beam functions where only the logarithmic terms are included at two-loop. In addition to scale uncertainties from the hard matching scale and the factorization scale, rapidity scale variations are obtained within the analytic regulator approach. Our resummation results show a decrease in the jet veto cross section compared to NNLO fixed-order predictions, with reduced scale uncertainties compared to NNLL+NLO resummed predictions. We include the loop-induced gg contribution with jet veto resummation to NLL+LO. The prediction shows good agreement with recent LHC measurements.
Resummation of jet veto logarithms at N3LLa + NNLO for W+W- production at the LHC
Dawson, S.; Jaiswal, P.; Li, Ye; ...
2016-12-01
We compute the resummed on-shell W+W- production cross section under a jet veto at the LHC to partial N3LL order matched to the fixed-order NNLO result. Differential NNLO cross sections are obtained from an implementation of qT subtraction in Sherpa. The two-loop virtual corrections to the qq¯→W+W- amplitude, used in both fixed-order and resummation predictions, are extracted from the public code qqvvamp. We perform resummation using soft collinear effective theory, with approximate beam functions where only the logarithmic terms are included at two-loop. In addition to scale uncertainties from the hard matching scale and the factorization scale, rapidity scale variations are obtained within the analytic regulator approach. Our resummation results show a decrease in the jet veto cross section compared to NNLO fixed-order predictions, with reduced scale uncertainties compared to NNLL+NLO resummed predictions. We include the loop-induced gg contribution with jet veto resummation to NLL+LO. The prediction shows good agreement with recent LHC measurements.
Status and Trends in Networking at LHC Tier1 Facilities
NASA Astrophysics Data System (ADS)
Bobyshev, A.; DeMar, P.; Grigaliunas, V.; Bigrow, J.; Hoeft, B.; Reymund, A.
2012-12-01
The LHC is entering its fourth year of production operation. Most Tier1 facilities have been in operation for almost a decade, when development and ramp-up efforts are included. LHC's distributed computing model is based on the availability of high capacity, high performance network facilities for both the WAN and LAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to be on the leading edge of data center networking technology. In this paper, we analyze past and current developments in Tier1 LAN networking and extrapolate where we anticipate networking technology is heading. Our analysis includes examination of the following areas: • Evolution of Tier1 centers to their current state • Evolving data center networking models and how they apply to Tier1 centers • Impact of emerging network technologies (e.g. 10GE-connected hosts, 40GE/100GE links, IPv6) on Tier1 centers • Trends in WAN data movement and emergence of software-defined WAN network capabilities • Network virtualization
Phenomenology of single-inclusive jet production with jet radius and threshold resummation
NASA Astrophysics Data System (ADS)
Liu, Xiaohui; Moch, Sven-Olaf; Ringer, Felix
2018-03-01
We perform a detailed study of inclusive jet production cross sections at the LHC and compare the QCD theory predictions based on the recently developed formalism for threshold and jet radius joint resummation at next-to-leading logarithmic accuracy to inclusive jet data collected by the CMS Collaboration at √S = 7 and 13 TeV. We compute the cross sections at next-to-leading order in QCD with and without the joint resummation for different choices of jet radii R and observe that the joint resummation leads to crucial improvements in the description of the data. Comprehensive studies with different parton distribution functions demonstrate the necessity of considering the joint resummation in fits of those functions based on the LHC jet data.
NASA Astrophysics Data System (ADS)
Xie, Ya-Ping; Chen, Xurong
2018-05-01
Photoproduction of vector mesons is computed with the dipole model in proton-proton ultraperipheral collisions (UPCs) at the CERN Large Hadron Collider (LHC). The dipole model framework is employed in the calculations of vector meson production in diffractive processes. Parameters of the bCGC model are refitted to the latest inclusive deep inelastic scattering experimental data. Employing the bCGC model and the boosted Gaussian light-cone wave function for vector mesons, we obtain predictions of the rapidity distributions of J/ψ and ψ(2s) mesons in proton-proton ultraperipheral collisions at the LHC. The predictions give a good description of the LHCb experimental data. Predictions for the ϕ and ω mesons are also evaluated in this paper.
Oklahoma Center for High Energy Physics (OCHEP)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandi, S; Strauss, M J; Snow, J
2012-02-29
The DOE EPSCoR implementation grant, with support from the State of Oklahoma and from the three universities, Oklahoma State University, University of Oklahoma and Langston University, resulted in the establishment of the Oklahoma Center for High Energy Physics (OCHEP) in 2004. Currently, OCHEP continues to flourish as a vibrant hub for research in experimental and theoretical particle physics and an educational center in the State of Oklahoma. All goals of the original proposal were successfully accomplished. These include the foundation of a new experimental particle physics group at OSU, the establishment of a Tier 2 computing facility for Large Hadron Collider (LHC) and Tevatron data analysis at OU, and the organization of a vital particle physics research center in Oklahoma based on the resources of the three universities. OSU has hired two tenure-track faculty members with initial support from the grant funds. Now both positions are supported through the OSU budget. This new HEP Experimental Group at OSU has established itself as a full member of the Fermilab D0 Collaboration and the LHC ATLAS Experiment and has secured external funds from the DOE and the NSF. These funds currently support 2 graduate students, 1 postdoctoral fellow, and 1 part-time engineer. The grant initiated the creation of a Tier 2 computing facility at OU as part of the Southwest Tier 2 facility, and a permanent Research Scientist was hired at OU to maintain and run the facility. Permanent support for this position has now been provided through the OU university budget. OCHEP represents a successful model of cooperation among several universities, establishing a critical mass of manpower, computing and hardware resources. This has increased Oklahoma's impact in all areas of HEP: theory, experiment, and computation. The Center personnel are involved in cutting-edge research in experimental, theoretical, and computational aspects of High Energy Physics, with research areas ranging from the search for new phenomena at the Fermilab Tevatron and the CERN Large Hadron Collider to theoretical modeling, computer simulation, detector development and testing, and physics analysis. OCHEP faculty members participating in the D0 collaboration at the Fermilab Tevatron and in the ATLAS collaboration at the CERN LHC have made a major impact on the Standard Model (SM) Higgs boson search, top quark studies, B physics studies, and measurements of Quantum Chromodynamics (QCD) phenomena. The OCHEP Grid computing facility consists of a large computer cluster which is playing a major role in data analysis and Monte Carlo production for both the D0 and ATLAS experiments. Theoretical efforts are devoted to new ideas in Higgs boson physics, extra dimensions, neutrino masses and oscillations, Grand Unified Theories, supersymmetric models, dark matter, and nonperturbative quantum field theory. Theory members are making major contributions to the understanding of phenomena being explored at the Tevatron and the LHC. They have proposed new models for Higgs bosons, and have suggested new signals for extra dimensions and for the search for supersymmetric particles. During the seven-year period when OCHEP was partially funded through the DOE EPSCoR implementation grant, OCHEP members published over 500 refereed journal articles and made over 200 invited presentations at major conferences.
The Center is also involved in education and outreach activities by offering summer research programs for high school teachers and college students, and by organizing summer workshops for high school teachers, sometimes coordinating with the QuarkNet programs at OSU and OU. Details about the Center can be found at http://ochep.phy.okstate.edu.
ERIC Educational Resources Information Center
Rubin, Michael Rogers
1988-01-01
The second of three articles on abusive data collection and usage practices and their effect on personal privacy discusses the evolution of data protection laws worldwide, and compares the scope, major provisions, and enforcement components of the laws. A chronology of key events in the regulation of computer databanks is included. (1 reference)…
Health and performance monitoring of the online computer cluster of CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauer, G.; et al.
2012-01-01
The CMS experiment at the LHC features over 2'500 devices that need constant monitoring in order to ensure proper data taking. The monitoring solution has been migrated from Nagios to Icinga, with several useful plugins. The motivations behind the migration and the selection of the plugins are discussed.
The “Common Solutions” Strategy of the Experiment Support group at CERN for the LHC Experiments
NASA Astrophysics Data System (ADS)
Girone, M.; Andreeva, J.; Barreiro Megino, F. H.; Campana, S.; Cinquilli, M.; Di Girolamo, A.; Dimou, M.; Giordano, D.; Karavakis, E.; Kenyon, M. J.; Kokozkiewicz, L.; Lanciotti, E.; Litmaath, M.; Magini, N.; Negri, G.; Roiser, S.; Saiz, P.; Saiz Santos, M. D.; Schovancova, J.; Sciabà, A.; Spiga, D.; Trentadue, R.; Tuckett, D.; Valassi, A.; Van der Ster, D. C.; Shiers, J. D.
2012-12-01
After two years of LHC data taking, processing and analysis and with numerous changes in computing technology, a number of aspects of the experiments’ computing, as well as WLCG deployment and operations, need to evolve. As part of the activities of the Experiment Support group in CERN's IT department, and reinforced by effort from the EGI-InSPIRE project, we present work aimed at common solutions across all LHC experiments. Such solutions allow us not only to optimize development manpower but also offer lower long-term maintenance and support costs. The main areas cover Distributed Data Management, Data Analysis, Monitoring and the LCG Persistency Framework. Specific tools have been developed including the HammerCloud framework, automated services for data placement, data cleaning and data integrity (such as the data popularity service for CMS, the common Victor cleaning agent for ATLAS and CMS and tools for catalogue/storage consistency), the Dashboard Monitoring framework (job monitoring, data management monitoring, File Transfer monitoring) and the Site Status Board. This talk focuses primarily on the strategic aspects of providing such common solutions and how this relates to the overall goals of long-term sustainability and the relationship to the various WLCG Technical Evolution Groups. The success of the service components has given us confidence in the process, and has developed the trust of the stakeholders. We are now attempting to expand the development of common solutions into the more critical workflows. The first is a feasibility study of common analysis workflow execution elements between ATLAS and CMS. We look forward to additional common development in the future.
The new CMS DAQ system for run-2 of the LHC
Bawej, Tomasz; Behrens, Ulf; Branson, James; ...
2015-05-21
The data acquisition (DAQ) system of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high level trigger (HLT) farm. The HLT farm selects interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013/14. The motivation is twofold: firstly, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime by the time the LHC restarts. Secondly, in order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The new DAQ architecture will take advantage of the latest developments in the computing industry. For data concentration, 10/40 Gb/s Ethernet technologies will be used, as well as an implementation of a reduced TCP/IP in FPGA for reliable transport between custom electronics and commercial computing hardware. A Clos network based on 56 Gb/s FDR Infiniband has been chosen for the event builder with a throughput of ~ 4 Tb/s. The HLT processing is entirely file based. This allows the DAQ and HLT systems to be independent, and to use the HLT software in the same way as for offline processing. The fully built events are sent to the HLT with 1/10/40 Gb/s Ethernet via network file systems. Hierarchical collection of HLT-accepted events and monitoring meta-data are stored in a global file system. This paper presents the requirements, technical choices, and performance of the new system.
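As a quick back-of-the-envelope consistency check of the quoted figures (an estimate, not a number from the paper): the level-1 accept rate and the aggregate readout throughput imply an average event size of roughly 1 MB, and the event-builder fabric leaves several-fold headroom over the readout throughput.

```latex
% Implied average event size and event-builder headroom from the quoted rates.
\frac{100\ \mathrm{GB/s}}{100\ \mathrm{kHz}} \approx 1\ \mathrm{MB/event},
\qquad
4\ \mathrm{Tb/s} \approx 500\ \mathrm{GB/s} \approx 5 \times \bigl(100\ \mathrm{GB/s}\bigr).
```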
ATLAS@Home: Harnessing Volunteer Computing for HEP
NASA Astrophysics Data System (ADS)
Adam-Bourdarios, C.; Cameron, D.; Filipčič, A.; Lancon, E.; Wu, W.; ATLAS Collaboration
2015-12-01
A recent common theme in HEP computing is the exploitation of opportunistic resources in order to provide the maximum statistics possible for Monte Carlo simulation. Volunteer computing has been used over the last few years in many other scientific fields and by CERN itself to run simulations of the LHC beams. The ATLAS@Home project was started to allow volunteers to run simulations of collisions in the ATLAS detector. So far many thousands of members of the public have signed up to contribute their spare CPU cycles for ATLAS, and there is potential for volunteer computing to provide a significant fraction of ATLAS computing resources. Here we describe the design of the project, the lessons learned so far and the future plans.
NASA Astrophysics Data System (ADS)
Webster, Jordan
2017-01-01
Dense track environments in pp collisions at the Large Hadron Collider (LHC) motivate the use of triggers with dedicated hardware for fast track reconstruction. The ATLAS Collaboration is in the process of implementing a Fast Tracker (FTK) trigger upgrade, in which Content Addressable Memories (CAMs) will be used to rapidly match hit patterns with large banks of simulated tracks. The FTK CAMs are produced primarily at the University of Pisa. However, commercial CAM technology is rapidly developing due to applications in computer networking devices. This poster presents new studies comparing FTK CAMs to cutting-edge ternary CAMs developed by Cavium. The comparison is intended to guide the design of future track-based trigger systems for the next Phase at the LHC.
Beam loss detection system in the arcs of the LHC
NASA Astrophysics Data System (ADS)
Arauzo, A.; Bovet, C.
2000-11-01
Over the whole circumference of the LHC, Beam Loss Monitors (BLM) will be needed for a continuous surveillance of fast and slow beam losses. In this paper, the location of the BLMs set outside the magnet cryostats in the arcs is proposed. In order to know the number of protons lost on the beam screen, the sensitivity of each BLM has been computed using the program GEANT 3.21, which generates the shower inside the cryostat. The material and the magnetic fields have been described thoroughly in 3-D, and the simulation results show the best locations for the 6 BLMs needed around each quadrupole. The number of minimum ionizing particles received for each lost proton serves to define local thresholds to dump the beam when the losses threaten to quench a magnet.
The Laser calibration of the ATLAS Tile Calorimeter during the LHC run 1
Abdallah, J.; Alexa, C.; Coutinho, Y. Amaral; ...
2016-10-12
This article describes the Laser calibration system of the ATLAS hadronic Tile Calorimeter that has been used during Run 1 of the LHC. First, the stability of the system and of its associated readout electronics is studied. It is found to be stable, with variations smaller than 0.6 %. Then, the method developed to compute the calibration constants, used to correct for the variations of the gain of the calorimeter photomultipliers, is described. These constants were determined with a statistical uncertainty of 0.3 % and a systematic uncertainty of 0.2 % for the central part of the calorimeter and 0.5 % for the end-caps. Lastly, the detection and correction of timing mis-configuration of the Tile Calorimeter using the Laser system are also presented.
The Laser calibration of the ATLAS Tile Calorimeter during the LHC run 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdallah, J.; Alexa, C.; Coutinho, Y. Amaral
This article describes the Laser calibration system of the ATLAS hadronic Tile Calorimeter that has been used during Run 1 of the LHC. First, the stability of the system and of its associated readout electronics is studied. It is found to be stable, with variations smaller than 0.6 %. Then, the method developed to compute the calibration constants, used to correct for the variations of the gain of the calorimeter photomultipliers, is described. These constants were determined with a statistical uncertainty of 0.3 % and a systematic uncertainty of 0.2 % for the central part of the calorimeter and 0.5 % for the end-caps. Lastly, the detection and correction of timing mis-configuration of the Tile Calorimeter using the Laser system are also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nadolsky, Pavel M.
2015-08-31
The report summarizes research activities of the project “Integrated analysis of particle interactions” at Southern Methodist University, funded by 2010 DOE Early Career Research Award DE-SC0003870. The goal of the project is to provide state-of-the-art predictions in quantum chromodynamics in order to achieve the objectives of the LHC program for studies of electroweak symmetry breaking and new physics searches. We published 19 journal papers focusing on in-depth studies of proton structure and the integration of advanced calculations from different areas of particle phenomenology: multi-loop calculations, accurate long-distance hadronic functions, and precise numerical programs. Methods for the factorization of QCD cross sections were advanced in order to develop new generations of CTEQ parton distribution functions (PDFs), CT10 and CT14. These distributions provide the core theoretical input for multi-loop perturbative calculations by LHC experimental collaborations. A novel “PDF meta-analysis” technique was invented to streamline applications of PDFs in numerous LHC simulations and to combine PDFs from various groups using multivariate stochastic sampling of PDF parameters. The meta-analysis will help to bring the LHC perturbative calculations to a new level of accuracy, while reducing computational effort. The work on parton distributions was complemented by the development of advanced perturbative techniques to predict observables dependent on several momentum scales, including the production of massive quarks and transverse momentum resummation at the next-to-next-to-leading order in QCD.
Jet substructure shedding light on heavy Majorana neutrinos at the LHC
NASA Astrophysics Data System (ADS)
Das, Arindam; Konar, Partha; Thalapillil, Arun
2018-02-01
The existence of tiny neutrino masses and flavor mixings can be explained naturally in various seesaw models, many of which typically have additional Majorana-type SM gauge singlet right-handed neutrinos (N). If these are at around the electroweak scale and have sizable mixings with the light active neutrinos, they can be produced at high-energy colliders such as the Large Hadron Collider (LHC). A characteristic signature would be same-sign lepton pairs, violating lepton number, together with light jets: pp → Nℓ±, N → ℓ±W∓, W∓ → jj. We propose a new search strategy utilising jet substructure techniques, observing that for a heavy right-handed neutrino mass M_N much above M_W, the two jets coming from the boosted W± may be interpreted as a single fat jet (J). Hence, the distinguishing signal topology is ℓ±ℓ±J. Performing a comprehensive study of the different signal regions along with a complete background analysis, in tandem with detector-level simulations, we compute statistical significance limits. We find that heavy neutrinos can be explored effectively in the mass range 300 GeV ≤ M_N ≤ 800 GeV for different light-heavy neutrino mixings |V_μN|². At the 13 TeV LHC with 3000 fb⁻¹ of integrated luminosity one can probe mixing angles well below present LHC limits, and moreover exceed bounds from electroweak precision data.
Extending the farm on external sites: the INFN Tier-1 experience
NASA Astrophysics Data System (ADS)
Boccali, T.; Cavalli, A.; Chiarelli, L.; Chierici, A.; Cesini, D.; Ciaschini, V.; Dal Pra, S.; dell'Agnello, L.; De Girolamo, D.; Falabella, A.; Fattibene, E.; Maron, G.; Prosperini, A.; Sapunenko, V.; Virgilio, S.; Zani, S.
2017-10-01
The Tier-1 at CNAF is the main INFN computing facility, offering computing and storage resources to more than 30 different scientific collaborations including the 4 experiments at the LHC. A huge increase in computing needs is also foreseen in the coming years, mainly driven by the experiments at the LHC (especially starting with Run 3 from 2021) but also by other upcoming experiments such as CTA[1]. While we are considering the upgrade of the infrastructure of our data center, we are also evaluating the possibility of using CPU resources available in other data centres or even leased from commercial cloud providers. Hence, at the INFN Tier-1, besides participating in the EU project HNSciCloud, we have also pledged a small amount of computing resources (~2000 cores) located at the Bari ReCaS data center[2] to the WLCG experiments for 2016, and we are testing the use of resources provided by a commercial cloud provider. While the Bari ReCaS data center is directly connected to the GARR network[3], with the obvious advantage of a low-latency and high-bandwidth connection, in the case of the commercial provider we rely only on the General Purpose Network. In this paper we describe the set-up phase and the first results of these installations, started in the last quarter of 2015, focusing on the issues that we have had to cope with and discussing the measured results in terms of efficiency.
Single top quark photoproduction at the LHC
NASA Astrophysics Data System (ADS)
de Favereau de Jeneret, J.; Ovyn, S.
2008-08-01
High-energy photon-proton interactions at the LHC offer interesting possibilities for the study of the electroweak sector up to the TeV scale and for searches for processes beyond the Standard Model. An analysis of W-associated single top photoproduction has been performed using the adapted MadGraph/MadEvent [F. Maltoni and T. Stelzer, JHEP 0302 (2003) 027; T. Stelzer and W.F. Long, Comput. Phys. Commun. 81 (1994) 357-371] and CalcHEP [A. Pukhov, Nucl. Instrum. Meth. A 502 (2003) 596-598] programs interfaced to the Pythia [T. Sjöstrand et al., Comput. Phys. Commun. 135 (2001) 238] generator and a fast detector simulation program. Event selection and suppression of the main backgrounds have been studied. A sensitivity to |V| comparable to that obtained using standard single top production in pp collisions is achieved already for 10 fb⁻¹ of integrated luminosity. Photoproduction at the LHC also provides an attractive framework for observation of the anomalous production of single top quarks due to Flavour-Changing Neutral Currents. The sensitivity to the anomalous coupling parameters k_tuγ and k_tcγ is presented and indicates that stronger limits can be placed on these anomalous couplings after 1 fb⁻¹.
Difference between two species of emu hides a test for lepton flavour violation
NASA Astrophysics Data System (ADS)
Lester, Christopher G.; Brunt, Benjamin H.
2017-03-01
We argue that an LHC measurement of some simple quantities related to the ratio of rates of e⁺μ⁻ to e⁻μ⁺ events is surprisingly sensitive to as-yet unexcluded R-parity violating supersymmetric models with non-zero λ′_231 couplings. The search relies upon the approximate lepton universality in the Standard Model, the sign of the charge of the proton, and a collection of favourable detector biases. The proposed search is unusual because: it does not require any of the displaced vertices, hadronic neutralino decay products, or squark/gluino production relied upon by existing LHC RPV searches; it could work in cases in which the only light sparticles were smuons and neutralinos; and it could make a discovery (though not necessarily with optimal significance) without requiring the computation of a leading-order Monte Carlo estimate of any background rate. The LHC has shown no strong hints of post-Higgs physics and so precision Standard Model measurements are becoming ever more important. We argue that in this environment growing profits are to be made from searches that place detector biases and symmetries of the Standard Model at their core: searches based around 'controls' rather than around signals.
NASA Astrophysics Data System (ADS)
Gross, Kyle; Hayashi, Soichi; Teige, Scott; Quick, Robert
2012-12-01
Large distributed computing collaborations, such as the Worldwide LHC Computing Grid (WLCG), face many issues when it comes to providing a working grid environment for their users. One of these is exchanging tickets between the various ticketing systems in use by grid collaborations. Ticket systems such as Footprints, RT, Remedy, and ServiceNow all have different schemas that must be addressed in order to provide a reliable exchange of information between support entities and users in different grid environments. To combat this problem, OSG Operations has created a ticket synchronization interface called GOC-TX that relies on web services instead of the error-prone email parsing methods of the past. Synchronizing tickets between different ticketing systems allows any user or support entity to work on a ticket in their home environment, thus providing a familiar and comfortable place to provide updates without having to learn another ticketing system. The interface is generic enough that it can be customized for nearly any ticketing system with a web-service interface with only minor changes. This allows us to be flexible and to bring new ticket synchronizations online rapidly. Synchronization can be triggered by different methods including mail, a web-services interface, and active messaging. GOC-TX currently interfaces with Global Grid User Support (GGUS) for WLCG, Remedy at Brookhaven National Lab (BNL), and Request Tracker (RT) at the Virtual Data Toolkit (VDT). Work is progressing on the Fermi National Accelerator Laboratory (FNAL) ServiceNow synchronization. This paper will explain the problems faced by OSG and how they led OSG to create and implement this ticket synchronization system, along with the technical details that allow synchronization to be performed at a production level.
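The essential task GOC-TX performs is translating ticket fields between systems with different schemas. As a rough illustration only, the Python sketch below maps a ticket through a neutral internal schema into a target schema; the dictionaries, field names and the translate() helper are hypothetical and do not correspond to the actual GOC-TX code or to the real GGUS/Footprints schemas.

    # Illustrative sketch of schema translation between two ticketing systems.
    # All names (GGUS_TO_COMMON, COMMON_TO_FOOTPRINTS, translate) are hypothetical.

    GGUS_TO_COMMON = {          # source schema -> neutral internal schema
        "ticket_id": "id",
        "short_description": "title",
        "status": "state",
    }

    COMMON_TO_FOOTPRINTS = {    # neutral internal schema -> target schema
        "id": "Issue Number",
        "title": "Summary",
        "state": "Status",
    }

    def translate(ticket, first_map, second_map):
        """Map a ticket dict through the neutral schema into the target schema."""
        common = {first_map[k]: v for k, v in ticket.items() if k in first_map}
        return {second_map[k]: v for k, v in common.items() if k in second_map}

    if __name__ == "__main__":
        ggus_ticket = {"ticket_id": 12345, "short_description": "CE down", "status": "assigned"}
        print(translate(ggus_ticket, GGUS_TO_COMMON, COMMON_TO_FOOTPRINTS))

A real synchronizer would of course transport these dictionaries over the systems' web-service interfaces rather than in memory, but the field-mapping step is the part that must be customized per ticketing system.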
Monitoring of services with non-relational databases and map-reduce framework
NASA Astrophysics Data System (ADS)
Babik, M.; Souto, F.
2012-12-01
Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.
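To make the kind of aggregation discussed above concrete, the following Python sketch computes per-site availability from raw test results with a simple map/reduce pattern, in the spirit of re-processing stored raw data. The record layout, the "OK" status convention and the sample values are assumptions for illustration, not the actual SAM/SWAT data model.

    from collections import defaultdict

    # Hypothetical raw test results: (site, service, status)
    results = [
        ("T1_XX_Site", "CE", "OK"), ("T1_XX_Site", "SRM", "CRITICAL"),
        ("T2_YY_Site", "CE", "OK"), ("T2_YY_Site", "SRM", "OK"),
    ]

    def mapper(record):
        site, _service, status = record
        return site, (1 if status == "OK" else 0, 1)   # (passed, total)

    def reducer(acc, value):
        return acc[0] + value[0], acc[1] + value[1]

    per_site = defaultdict(lambda: (0, 0))
    for rec in results:
        site, val = mapper(rec)
        per_site[site] = reducer(per_site[site], val)

    for site, (passed, total) in per_site.items():
        print(site, "availability = %.2f" % (passed / total))

In a non-relational store such as MongoDB or HBase the same mapper/reducer pair would run distributed over the stored raw measurements, which is precisely the flexibility the authors are after.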
NASA Astrophysics Data System (ADS)
McKee, Shawn;
2017-10-01
Networks have played a critical role in high-energy physics (HEP), enabling us to access and effectively utilize globally distributed resources to meet the needs of our physicists. Because of their importance in enabling our grid computing infrastructure, many physicists have taken leading roles in research and education (R&E) networking, participating in, and even convening, network-related meetings and research programs with the broader networking community worldwide. This has led to HEP benefiting from excellent global networking capabilities for little to no direct cost. However, as other science domains ramp up their need for similar networking it becomes less clear that this situation will continue unchanged. What this means for ATLAS in particular needs to be understood. ATLAS has evolved its computing model since the LHC started based upon its experience with using globally distributed resources. The most significant theme of those changes has been increased reliance upon, and use of, its networks. We will report on a number of networking initiatives in ATLAS including participation in the global perfSONAR network monitoring and measuring efforts of WLCG and OSG, the collaboration with the LHCOPN/LHCONE effort, the integration of network awareness into PanDA, the use of the evolving ATLAS analytics framework to better understand our networks and the changes in our DDM system to allow remote access to data. We will also discuss new efforts underway that are exploring the inclusion and use of software defined networks (SDN) and how ATLAS might benefit from: • Orchestration and optimization of distributed data access and data movement. • Better control of workflows, end to end. • Enabling prioritization of time-critical vs. normal tasks. • Improvements in the efficiency of resource usage.
HL-LHC and HE-LHC Upgrade Plans and Opportunities for US Participation
NASA Astrophysics Data System (ADS)
Apollinari, Giorgio
2017-01-01
The US HEP community has identified the exploitation of physics opportunities at the High Luminosity-LHC (HL-LHC) as the highest near-term priority. Thanks to multi-year R&D programs, US National Laboratories and Universities have taken the leadership in the development of technical solutions to increase the LHC luminosity, enabling the HL-LHC Project and uniquely positioning this country to make critical contributions to the LHC luminosity upgrade. This talk will describe the shaping of the US Program to contribute in the next decade to HL-LHC through newly developed technologies such as Nb3Sn focusing magnets or superconducting crab cavities. The experience gained through the execution of the HL-LHC Project in the US will constitute a pool of knowledge and capabilities allowing further developments in the future. Opportunities for US participations in proposed hadron colliders, such as a possible High Energy-LHC (HE-LHC), will be described as well.
AGIS: Evolution of Distributed Computing information system for ATLAS
NASA Astrophysics Data System (ADS)
Anisenkov, A.; Di Girolamo, A.; Alandes, M.; Karavakis, E.
2015-12-01
ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. It has evolved after the first period of LHC data taking (Run-1) in order to cope with the new challenges of the upcoming Run-2. In this paper we describe the evolution and recent developments of the ATLAS Grid Information System (AGIS), developed in order to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.
NASA Astrophysics Data System (ADS)
Ablyazimov, T.; Abuhoza, A.; Adak, R. P.; Adamczyk, M.; Agarwal, K.; Aggarwal, M. M.; Ahammed, Z.; Ahmad, F.; Ahmad, N.; Ahmad, S.; Akindinov, A.; Akishin, P.; Akishina, E.; Akishina, T.; Akishina, V.; Akram, A.; Al-Turany, M.; Alekseev, I.; Alexandrov, E.; Alexandrov, I.; Amar-Youcef, S.; Anđelić, M.; Andreeva, O.; Andrei, C.; Andronic, A.; Anisimov, Yu.; Appelshäuser, H.; Argintaru, D.; Atkin, E.; Avdeev, S.; Averbeck, R.; Azmi, M. D.; Baban, V.; Bach, M.; Badura, E.; Bähr, S.; Balog, T.; Balzer, M.; Bao, E.; Baranova, N.; Barczyk, T.; Bartoş, D.; Bashir, S.; Baszczyk, M.; Batenkov, O.; Baublis, V.; Baznat, M.; Becker, J.; Becker, K.-H.; Belogurov, S.; Belyakov, D.; Bendarouach, J.; Berceanu, I.; Bercuci, A.; Berdnikov, A.; Berdnikov, Y.; Berendes, R.; Berezin, G.; Bergmann, C.; Bertini, D.; Bertini, O.; Beşliu, C.; Bezshyyko, O.; Bhaduri, P. P.; Bhasin, A.; Bhati, A. K.; Bhattacharjee, B.; Bhattacharyya, A.; Bhattacharyya, T. K.; Biswas, S.; Blank, T.; Blau, D.; Blinov, V.; Blume, C.; Bocharov, Yu.; Book, J.; Breitner, T.; Brüning, U.; Brzychczyk, J.; Bubak, A.; Büsching, H.; Bus, T.; Butuzov, V.; Bychkov, A.; Byszuk, A.; Cai, Xu; Cãlin, M.; Cao, Ping; Caragheorgheopol, G.; Carević, I.; Cătănescu, V.; Chakrabarti, A.; Chattopadhyay, S.; Chaus, A.; Chen, Hongfang; Chen, LuYao; Cheng, Jianping; Chepurnov, V.; Cherif, H.; Chernogorov, A.; Ciobanu, M. I.; Claus, G.; Constantin, F.; Csanád, M.; D'Ascenzo, N.; Das, Supriya; Das, Susovan; de Cuveland, J.; Debnath, B.; Dementiev, D.; Deng, Wendi; Deng, Zhi; Deppe, H.; Deppner, I.; Derenovskaya, O.; Deveaux, C. A.; Deveaux, M.; Dey, K.; Dey, M.; Dillenseger, P.; Dobyrn, V.; Doering, D.; Dong, Sheng; Dorokhov, A.; Dreschmann, M.; Drozd, A.; Dubey, A. K.; Dubnichka, S.; Dubnichkova, Z.; Dürr, M.; Dutka, L.; Dželalija, M.; Elsha, V. V.; Emschermann, D.; Engel, H.; Eremin, V.; Eşanu, T.; Eschke, J.; Eschweiler, D.; Fan, Huanhuan; Fan, Xingming; Farooq, M.; Fateev, O.; Feng, Shengqin; Figuli, S. P. D.; Filozova, I.; Finogeev, D.; Fischer, P.; Flemming, H.; Förtsch, J.; Frankenfeld, U.; Friese, V.; Friske, E.; Fröhlich, I.; Frühauf, J.; Gajda, J.; Galatyuk, T.; Gangopadhyay, G.; García Chávez, C.; Gebelein, J.; Ghosh, P.; Ghosh, S. K.; Gläßel, S.; Goffe, M.; Golinka-Bezshyyko, L.; Golovatyuk, V.; Golovnya, S.; Golovtsov, V.; Golubeva, M.; Golubkov, D.; Gómez Ramírez, A.; Gorbunov, S.; Gorokhov, S.; Gottschalk, D.; Gryboś, P.; Grzeszczuk, A.; Guber, F.; Gudima, K.; Gumiński, M.; Gupta, A.; Gusakov, Yu.; Han, Dong; Hartmann, H.; He, Shue; Hehner, J.; Heine, N.; Herghelegiu, A.; Herrmann, N.; Heß, B.; Heuser, J. M.; Himmi, A.; Höhne, C.; Holzmann, R.; Hu, Dongdong; Huang, Guangming; Huang, Xinjie; Hutter, D.; Ierusalimov, A.; Ilgenfritz, E.-M.; Irfan, M.; Ivanischev, D.; Ivanov, M.; Ivanov, P.; Ivanov, Valery; Ivanov, Victor; Ivanov, Vladimir; Ivashkin, A.; Jaaskelainen, K.; Jahan, H.; Jain, V.; Jakovlev, V.; Janson, T.; Jiang, Di; Jipa, A.; Kadenko, I.; Kähler, P.; Kämpfer, B.; Kalinin, V.; Kallunkathariyil, J.; Kampert, K.-H.; Kaptur, E.; Karabowicz, R.; Karavichev, O.; Karavicheva, T.; Karmanov, D.; Karnaukhov, V.; Karpechev, E.; Kasiński, K.; Kasprowicz, G.; Kaur, M.; Kazantsev, A.; Kebschull, U.; Kekelidze, G.; Khan, M. M.; Khan, S. A.; Khanzadeev, A.; Khasanov, F.; Khvorostukhin, A.; Kirakosyan, V.; Kirejczyk, M.; Kiryakov, A.; Kiš, M.; Kisel, I.; Kisel, P.; Kiselev, S.; Kiss, T.; Klaus, P.; Kłeczek, R.; Klein-Bösing, Ch.; Kleipa, V.; Klochkov, V.; Kmon, P.; Koch, K.; Kochenda, L.; Koczoń, P.; Koenig, W.; Kohn, M.; Kolb, B. 
W.; Kolosova, A.; Komkov, B.; Korolev, M.; Korolko, I.; Kotte, R.; Kovalchuk, A.; Kowalski, S.; Koziel, M.; Kozlov, G.; Kozlov, V.; Kramarenko, V.; Kravtsov, P.; Krebs, E.; Kreidl, C.; Kres, I.; Kresan, D.; Kretschmar, G.; Krieger, M.; Kryanev, A. V.; Kryshen, E.; Kuc, M.; Kucewicz, W.; Kucher, V.; Kudin, L.; Kugler, A.; Kumar, Ajit; Kumar, Ashwini; Kumar, L.; Kunkel, J.; Kurepin, A.; Kurepin, N.; Kurilkin, A.; Kurilkin, P.; Kushpil, V.; Kuznetsov, S.; Kyva, V.; Ladygin, V.; Lara, C.; Larionov, P.; Laso García, A.; Lavrik, E.; Lazanu, I.; Lebedev, A.; Lebedev, S.; Lebedeva, E.; Lehnert, J.; Lehrbach, J.; Leifels, Y.; Lemke, F.; Li, Cheng; Li, Qiyan; Li, Xin; Li, Yuanjing; Lindenstruth, V.; Linnik, B.; Liu, Feng; Lobanov, I.; Lobanova, E.; Löchner, S.; Loizeau, P.-A.; Lone, S. A.; Lucio Martínez, J. A.; Luo, Xiaofeng; Lymanets, A.; Lyu, Pengfei; Maevskaya, A.; Mahajan, S.; Mahapatra, D. P.; Mahmoud, T.; Maj, P.; Majka, Z.; Malakhov, A.; Malankin, E.; Malkevich, D.; Malyatina, O.; Malygina, H.; Mandal, M. M.; Mandal, S.; Manko, V.; Manz, S.; Marin Garcia, A. M.; Markert, J.; Masciocchi, S.; Matulewicz, T.; Meder, L.; Merkin, M.; Mialkovski, V.; Michel, J.; Miftakhov, N.; Mik, L.; Mikhailov, K.; Mikhaylov, V.; Milanović, B.; Militsija, V.; Miskowiec, D.; Momot, I.; Morhardt, T.; Morozov, S.; Müller, W. F. J.; Müntz, C.; Mukherjee, S.; Muñoz Castillo, C. E.; Murin, Yu.; Najman, R.; Nandi, C.; Nandy, E.; Naumann, L.; Nayak, T.; Nedosekin, A.; Negi, V. S.; Niebur, W.; Nikulin, V.; Normanov, D.; Oancea, A.; Oh, Kunsu; Onishchuk, Yu.; Ososkov, G.; Otfinowski, P.; Ovcharenko, E.; Pal, S.; Panasenko, I.; Panda, N. R.; Parzhitskiy, S.; Patel, V.; Pauly, C.; Penschuck, M.; Peshekhonov, D.; Peshekhonov, V.; Petráček, V.; Petri, M.; Petriş, M.; Petrovici, A.; Petrovici, M.; Petrovskiy, A.; Petukhov, O.; Pfeifer, D.; Piasecki, K.; Pieper, J.; Pietraszko, J.; Płaneta, R.; Plotnikov, V.; Plujko, V.; Pluta, J.; Pop, A.; Pospisil, V.; Poźniak, K.; Prakash, A.; Prasad, S. K.; Prokudin, M.; Pshenichnov, I.; Pugach, M.; Pugatch, V.; Querchfeld, S.; Rabtsun, S.; Radulescu, L.; Raha, S.; Rami, F.; Raniwala, R.; Raniwala, S.; Raportirenko, A.; Rautenberg, J.; Rauza, J.; Ray, R.; Razin, S.; Reichelt, P.; Reinecke, S.; Reinefeld, A.; Reshetin, A.; Ristea, C.; Ristea, O.; Rodriguez Rodriguez, A.; Roether, F.; Romaniuk, R.; Rost, A.; Rostchin, E.; Rostovtseva, I.; Roy, Amitava; Roy, Ankhi; Rożynek, J.; Ryabov, Yu.; Sadovsky, A.; Sahoo, R.; Sahu, P. K.; Sahu, S. K.; Saini, J.; Samanta, S.; Sambyal, S. S.; Samsonov, V.; Sánchez Rosado, J.; Sander, O.; Sarangi, S.; Satława, T.; Sau, S.; Saveliev, V.; Schatral, S.; Schiaua, C.; Schintke, F.; Schmidt, C. J.; Schmidt, H. R.; Schmidt, K.; Scholten, J.; Schweda, K.; Seck, F.; Seddiki, S.; Selyuzhenkov, I.; Semennikov, A.; Senger, A.; Senger, P.; Shabanov, A.; Shabunov, A.; Shao, Ming; Sheremetiev, A. D.; Shi, Shusu; Shumeiko, N.; Shumikhin, V.; Sibiryak, I.; Sikora, B.; Simakov, A.; Simon, C.; Simons, C.; Singaraju, R. N.; Singh, A. K.; Singh, B. K.; Singh, C. 
P.; Singhal, V.; Singla, M.; Sitzmann, P.; Siwek-Wilczyńska, K.; Škoda, L.; Skwira-Chalot, I.; Som, I.; Song, Guofeng; Song, Jihye; Sosin, Z.; Soyk, D.; Staszel, P.; Strikhanov, M.; Strohauer, S.; Stroth, J.; Sturm, C.; Sultanov, R.; Sun, Yongjie; Svirida, D.; Svoboda, O.; Szabó, A.; Szczygieł, R.; Talukdar, R.; Tang, Zebo; Tanha, M.; Tarasiuk, J.; Tarassenkova, O.; Târzilă, M.-G.; Teklishyn, M.; Tischler, T.; Tlustý, P.; Tölyhi, T.; Toia, A.; Topil'skaya, N.; Träger, M.; Tripathy, S.; Tsakov, I.; Tsyupa, Yu.; Turowiecki, A.; Tuturas, N. G.; Uhlig, F.; Usenko, E.; Valin, I.; Varga, D.; Vassiliev, I.; Vasylyev, O.; Verbitskaya, E.; Verhoeven, W.; Veshikov, A.; Visinka, R.; Viyogi, Y. P.; Volkov, S.; Volochniuk, A.; Vorobiev, A.; Voronin, Aleksey; Voronin, Alexander; Vovchenko, V.; Vznuzdaev, M.; Wang, Dong; Wang, Xi-Wei; Wang, Yaping; Wang, Yi; Weber, M.; Wendisch, C.; Wessels, J. P.; Wiebusch, M.; Wiechula, J.; Wielanek, D.; Wieloch, A.; Wilms, A.; Winckler, N.; Winter, M.; Wiśniewski, K.; Wolf, Gy.; Won, Sanguk; Wu, Ke-Jun; Wüstenfeld, J.; Xiang, Changzhou; Xu, Nu; Yang, Junfeng; Yang, Rongxing; Yin, Zhongbao; Yoo, In-Kwon; Yuldashev, B.; Yushmanov, I.; Zabołotny, W.; Zaitsev, Yu.; Zamiatin, N. I.; Zanevsky, Yu.; Zhalov, M.; Zhang, Yifei; Zhang, Yu; Zhao, Lei; Zheng, Jiajun; Zheng, Sheng; Zhou, Daicui; Zhou, Jing; Zhu, Xianglei; Zinchenko, A.; Zipper, W.; Żoładź, M.; Zrelov, P.; Zryuev, V.; Zumbruch, P.; Zyzak, M.
2017-03-01
Substantial experimental and theoretical efforts worldwide are devoted to exploring the phase diagram of strongly interacting matter. At LHC and top RHIC energies, QCD matter is studied at very high temperatures and nearly vanishing net-baryon densities. There is evidence that a Quark-Gluon Plasma (QGP) was created in experiments at RHIC and the LHC. The transition from the QGP back to the hadron gas is found to be a smooth crossover. For larger net-baryon densities and lower temperatures, the QCD phase diagram is expected to exhibit a rich structure, such as a first-order phase transition between hadronic and partonic matter which terminates in a critical point, or exotic phases like quarkyonic matter. The discovery of these landmarks would be a breakthrough in our understanding of the strong interaction and is therefore the focus of various high-energy heavy-ion research programs. The Compressed Baryonic Matter (CBM) experiment at FAIR will play a unique role in the exploration of the QCD phase diagram in the region of high net-baryon densities, because it is designed to run at unprecedented interaction rates. High-rate operation is the key prerequisite for high-precision measurements of multi-differential observables and of rare diagnostic probes which are sensitive to the dense phase of the nuclear fireball. The goal of the CBM experiment at SIS100 (√s_NN = 2.7-4.9 GeV) is to discover fundamental properties of QCD matter: the phase structure at large baryon-chemical potentials (μ_B > 500 MeV), effects of chiral symmetry, and the equation of state at high density as it is expected to occur in the core of neutron stars. In this article, we review the motivation for and the physics programme of CBM, including activities before the start of data taking in 2024, in the context of the worldwide efforts to explore high-density QCD matter.
Commissioning of a CERN Production and Analysis Facility Based on xrootd
NASA Astrophysics Data System (ADS)
Campana, Simone; van der Ster, Daniel C.; Di Girolamo, Alessandro; Peters, Andreas J.; Duellmann, Dirk; Coelho Dos Santos, Miguel; Iven, Jan; Bell, Tim
2011-12-01
The CERN facility hosts the Tier-0 of the four LHC experiments, but as part of WLCG it also offers a platform for production activities and user analysis. The CERN CASTOR storage technology has been extensively tested and utilized for LHC data recording and for exporting data to external sites according to the experiments' computing models. On the other hand, to accommodate Grid data processing activities and, more importantly, chaotic user analysis, it was realized that additional functionality was needed, including a different throttling mechanism for file access. This paper describes the xrootd-based CERN production and analysis facility for the ATLAS experiment, and in particular the experiment use case and data access scenario, the xrootd redirector setup on top of the CASTOR storage system, the commissioning of the system, and real-life experience with data processing and data analysis.
Electroweak Sudakov Corrections to New Physics Searches at the LHC
NASA Astrophysics Data System (ADS)
Chiesa, Mauro; Montagna, Guido; Barzè, Luca; Moretti, Mauro; Nicrosini, Oreste; Piccinini, Fulvio; Tramontano, Francesco
2013-09-01
We compute the one-loop electroweak Sudakov corrections to the production process Z(νν̄) + n jets, with n = 1, 2, 3, in pp collisions at the LHC. This process represents the main irreducible background to new physics searches at the energy frontier. The results are obtained at leading and next-to-leading logarithmic accuracy by implementing the general algorithm of Denner and Pozzorini in the event generator for multiparton processes alpgen. For the standard selection cuts used by the ATLAS and CMS Collaborations, we show that the Sudakov corrections to the relevant observables can grow up to -40% at √s = 14 TeV. We also include the contribution due to undetected real radiation of massive gauge bosons, to show to what extent the partial cancellation with the large negative virtual corrections takes place in realistic event selections.
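For orientation, the generic structure of one-loop electroweak Sudakov corrections can be written schematically as a double- and single-logarithmic relative correction; this is only the textbook leading/next-to-leading logarithmic form, not the specific Denner-Pozzorini implementation or coefficients used in the paper.

    \frac{\mathrm{d}\sigma^{\text{1-loop EW}}}{\mathrm{d}\sigma^{\text{LO}}}
      \;\simeq\; 1 \;-\; \frac{\alpha}{4\pi}
      \left( c_{LL}\,\ln^{2}\frac{\hat{s}}{M_W^{2}}
           + c_{NLL}\,\ln\frac{\hat{s}}{M_W^{2}} \right),

where c_LL and c_NLL are process-dependent coefficients fixed by the electroweak quantum numbers of the external particles. Because the logarithms grow with the partonic energy √ŝ, corrections of this type can reach tens of percent in the high-energy tails probed by new physics searches, which is why they matter for the Z+jets background discussed above.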
Les Houches 2017: Physics at TeV Colliders New Physics Working Group Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brooijmans, G.; et al.
We present the activities of the 'New Physics' working group for the 'Physics at TeV Colliders' workshop (Les Houches, France, 5-23 June 2017). Our report includes new physics studies connected with the Higgs boson and its properties, direct search strategies, reinterpretation of the LHC results in the building of viable models, and new computational tool developments.
Managing the CMS Data and Monte Carlo Processing during LHC Run 2
NASA Astrophysics Data System (ADS)
Wissing, C.;
2017-10-01
In order to cope with the challenges expected during LHC Run 2, CMS put in place a number of enhancements in the main software packages and in the tools used for centrally managed processing. In this presentation we highlight the improvements that allow CMS to deal with the increased trigger output rate, the increased pileup and the evolution in computing technology. The overall system aims at high flexibility, improved operations and largely automated procedures. The tight coupling of workflow classes to types of sites has been drastically relaxed. Reliable and high-performing networking between most of the computing sites and the successful deployment of a data federation allow the execution of workflows using remote data access. That required the development of a largely automated system to assign workflows and to handle the necessary pre-staging of data. Another step towards flexibility has been the introduction of one large global HTCondor pool for all types of processing workflows and analysis jobs. Besides classical Grid resources, some opportunistic resources as well as Cloud resources have been integrated into that pool, which gives access to more than 200k CPU cores.
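A minimal sketch of the kind of relaxed site-assignment logic described above (prefer sites holding the input data, fall back to remote reads over the data federation, otherwise trigger pre-staging) is given below in Python. The site catalogue, the has_data and network_ok flags and the choose_site() helper are illustrative assumptions and not CMS workflow-management code.

    # Hypothetical site catalogue: free cores, whether the input data is hosted
    # locally, and whether remote reads over the data federation are reliable.
    SITES = [
        {"name": "T1_A", "free_cores": 800,  "has_data": True,  "network_ok": True},
        {"name": "T2_B", "free_cores": 4000, "has_data": False, "network_ok": True},
        {"name": "T2_C", "free_cores": 1200, "has_data": False, "network_ok": False},
    ]

    def choose_site(workflow_cores):
        """Prefer sites with local data; fall back to remote reads via the federation."""
        local  = [s for s in SITES if s["has_data"] and s["free_cores"] >= workflow_cores]
        remote = [s for s in SITES if s["network_ok"] and s["free_cores"] >= workflow_cores]
        candidates = local or remote       # relaxed coupling of workflows to sites
        if not candidates:
            return None                    # would instead trigger pre-staging of data
        return max(candidates, key=lambda s: s["free_cores"])["name"]

    print(choose_site(500))

In the real system this decision is made inside the central workflow tools on top of the global HTCondor pool; the sketch only shows why remote data access widens the set of usable sites.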
The OSG Open Facility: an on-ramp for opportunistic scientific computing
NASA Astrophysics Data System (ADS)
Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.
2017-10-01
The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.
The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jayatilaka, B.; Levshina, T.; Sehgal, C.
The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.
Final Report: High Energy Physics at the Energy Frontier at Louisiana Tech
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sawyer, Lee; Wobisch, Markus; Greenwood, Zeno D.
The Louisiana Tech University High Energy Physics group has developed a research program aimed at experimentally testing the Standard Model of particle physics and searching for new phenomena through a focused set of analyses in collaboration with the ATLAS experiment at the Large Hadron Collider (LHC) at the CERN laboratory in Geneva. This research program includes involvement in the current operation and maintenance of the ATLAS experiment and full involvement in Phase 1 and Phase 2 upgrades in preparation for future high luminosity (HL-LHC) operation of the LHC. Our focus is solely on the ATLAS experiment at the LHC, with some related detector development and software efforts. We have established important service roles on ATLAS in five major areas: Triggers, especially jet triggers; Data Quality monitoring; grid computing; GPU applications for upgrades; and radiation testing for upgrades. Our physics research is focused on multijet measurements and top quark physics in final states containing tau leptons, which we propose to extend into related searches for new phenomena. Focusing on closely related topics in the jet and top analyses and coordinating these analyses in our group has led to high efficiency and increased visibility inside the ATLAS collaboration and beyond. Based on our work in the DØ experiment in Run II of the Fermilab Tevatron Collider, Louisiana Tech has developed a reputation as one of the leading institutions pursuing jet physics studies. Currently we are applying this expertise to the ATLAS experiment, with several multijet analyses in progress.
Helix Nebula and CERN: A Symbiotic approach to exploiting commercial clouds
NASA Astrophysics Data System (ADS)
Barreiro Megino, Fernando H.; Jones, Robert; Kucharczyk, Katarzyna; Medrano Llamas, Ramón; van der Ster, Daniel
2014-06-01
The recent paradigm shift toward cloud computing in IT, and general interest in "Big Data" in particular, have demonstrated that the computing requirements of HEP are no longer globally unique. Indeed, the CERN IT department and LHC experiments have already made significant R&D investments in delivering and exploiting cloud computing resources. While a number of technical evaluations of interesting commercial offerings from global IT enterprises have been performed by various physics labs, further technical, security, sociological, and legal issues need to be addressed before their large-scale adoption by the research community can be envisaged. Helix Nebula - the Science Cloud is an initiative that explores these questions by joining the forces of three European research institutes (CERN, ESA and EMBL) with leading European commercial IT enterprises. The goals of Helix Nebula are to establish a cloud platform federating multiple commercial cloud providers, along with new business models, which can sustain the cloud marketplace for years to come. This contribution will summarize the participation of CERN in Helix Nebula. We will explain CERN's flagship use-case and the model used to integrate several cloud providers with an LHC experiment's workload management system. During the first proof of concept, this project contributed over 40,000 CPU-days of Monte Carlo production throughput to the ATLAS experiment with marginal manpower required. CERN's experience, together with that of ESA and EMBL, is providing great insight into the cloud computing industry and has highlighted several challenges that are being tackled in order to ease the export of scientific workloads to cloud environments.
HEP Outreach, Inreach, and Web 2.0
NASA Astrophysics Data System (ADS)
Goldfarb, Steven
2011-12-01
I report on current usage of multimedia and social networking "Web 2.0" tools for Education and Outreach in high-energy physics, and discuss their potential for internal communication within large worldwide collaborations, such as those of the LHC. Following a brief description of the history of Web 2.0 development, I present a survey of the most popular sites and describe their usage in HEP to disseminate information to students and the general public. I then discuss the potential of certain specific tools, such as document and multimedia sharing sites, for boosting the speed and effectiveness of information exchange within the collaborations. I conclude with a brief discussion of the successes and failures of these tools, and make suggestions for improved usage in the future.
Expected Performance of the ATLAS Experiment - Detector, Trigger and Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aad, G.; Abat, E.; Abbott, B.
2011-11-28
The Large Hadron Collider (LHC) at CERN promises a major step forward in the understanding of the fundamental nature of matter. The ATLAS experiment is a general-purpose detector for the LHC, whose design was guided by the need to accommodate the wide spectrum of possible physics signatures. The major remit of the ATLAS experiment is the exploration of the TeV mass scale where groundbreaking discoveries are expected. The focus is on the investigation of electroweak symmetry breaking and, linked to this, the search for the Higgs boson, as well as the search for physics beyond the Standard Model. In this report a detailed examination of the expected performance of the ATLAS detector is provided, with a major aim being to investigate the experimental sensitivity to a wide range of measurements and potential observations of new physical processes. An earlier summary of the expected capabilities of ATLAS was compiled in 1999 [1]. A survey of physics capabilities of the CMS detector was published in [2]. The design of the ATLAS detector has now been finalised, and its construction and installation have been completed [3]. An extensive test-beam programme was undertaken. Furthermore, the simulation and reconstruction software code and frameworks have been completely rewritten. Revisions incorporated reflect improved detector modelling as well as major technical changes to the software technology. Greatly improved understanding of calibration and alignment techniques, and their practical impact on performance, is now in place. The studies reported here are based on full simulations of the ATLAS detector response. A variety of event generators were employed. The simulation and reconstruction of these large event samples thus provided an important operational test of the new ATLAS software system. In addition, the processing was distributed world-wide over the ATLAS Grid facilities and hence provided an important test of the ATLAS computing system; this is the origin of the expression 'CSC studies' ('computing system commissioning'), which is occasionally referred to in these volumes. The work reported does generally assume that the detector is fully operational, and in this sense represents an idealised detector: establishing the best performance of the ATLAS detector with LHC proton-proton collisions is a challenging task for the future. The results summarised here therefore represent the best estimate of ATLAS capabilities before real operational experience of the full detector with beam. Unless otherwise stated, simulations also do not include the effect of additional interactions in the same or other bunch-crossings, and the effect of neutron background is neglected. Thus simulations correspond to the low-luminosity performance of the ATLAS detector. This report is broadly divided into two parts: firstly the performance for identification of physics objects is examined in detail, followed by a detailed assessment of the performance of the trigger system. This part is subdivided into chapters surveying the capabilities for charged-particle tracking, electron/photon, muon and tau identification, jet and missing transverse energy reconstruction, b-tagging algorithms and performance, and finally the trigger system performance. In each chapter of the report, there is a further subdivision into shorter notes describing different aspects studied. The second major subdivision of the report addresses physics measurement capabilities, and new physics search sensitivities.
Individual chapters in this part discuss ATLAS physics capabilities in Standard Model QCD and electroweak processes, in the top quark sector, in b-physics, in searches for Higgs bosons, in supersymmetry searches, and finally in searches for other new particles predicted in more exotic models.
CERN data services for LHC computing
NASA Astrophysics Data System (ADS)
Espinal, X.; Bocchi, E.; Chan, B.; Fiorot, A.; Iven, J.; Lo Presti, G.; Lopez, J.; Gonzalez, H.; Lamanna, M.; Mascetti, L.; Moscicki, J.; Pace, A.; Peters, A.; Ponce, S.; Rousseau, H.; van der Ster, D.
2017-10-01
Dependability, resilience, adaptability and efficiency: growing requirements call for tailored storage services and novel solutions. Unprecedented volumes of data coming from the broad number of experiments at CERN need to be quickly available in a highly scalable way for large-scale processing and data distribution, while in parallel they are routed to tape for long-term archival. These activities are critical for the success of HEP experiments. Nowadays we operate at high incoming throughput (14 GB/s during the 2015 LHC Pb-Pb run and 11 PB in July 2016) and with concurrent complex production workloads. In parallel our systems provide the platform for continuous user- and experiment-driven workloads for large-scale data analysis, including end-user access and sharing. The storage services at CERN cover the needs of our community: EOS and CASTOR for large-scale storage; CERNBox for end-user access and sharing; Ceph as the data back-end for the CERN OpenStack infrastructure, NFS services and S3 functionality; AFS for legacy distributed-file-system services. In this paper we summarise the experience in supporting the LHC experiments and the transition of our infrastructure from static monolithic systems to flexible components providing a more coherent environment with pluggable protocols, tuneable QoS, sharing capabilities and fine-grained ACL management, while continuing to guarantee dependable and robust services.
Higgs boson production at hadron colliders at N3LO in QCD
NASA Astrophysics Data System (ADS)
Mistlberger, Bernhard
2018-05-01
We present the Higgs boson production cross section at hadron colliders in the gluon fusion production mode through N3LO in perturbative QCD. Specifically, we work in an effective theory where the top quark is assumed to be infinitely heavy and all other quarks are considered to be massless. Our result is the first exact formula for a partonic hadron collider cross section at N3LO in perturbative QCD. Furthermore, our result is an analytic computation of a hadron collider cross section involving elliptic integrals. We derive numerical predictions for the Higgs boson cross section at the LHC. Previously this result was approximated by an expansion of the cross section around the Higgs boson production threshold, and we compare our findings with that approximation. Finally, we study the impact of our new result on the state-of-the-art prediction for the Higgs boson cross section at the LHC.
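As a reminder of what "through N3LO" means here, the gluon-fusion cross section is organized as a perturbative series in the strong coupling. The schematic expansion below is standard notation only; the coefficients c_i stand in for the actual (scale- and PDF-dependent) corrections computed in the paper and are not quoted from it.

    \sigma_{gg\to H} \;=\; \sigma^{(0)} \left[\, 1
        + \frac{\alpha_s}{\pi}\, c_1
        + \left(\frac{\alpha_s}{\pi}\right)^{2} c_2
        + \left(\frac{\alpha_s}{\pi}\right)^{3} c_3
        + \mathcal{O}(\alpha_s^{4}) \right],

with c_3 the genuinely new N3LO ingredient, obtained here exactly in the heavy-top effective theory rather than from a threshold expansion.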
A New Event Builder for CMS Run II
NASA Astrophysics Data System (ADS)
Albertsson, K.; Andre, J.-M.; Andronidis, A.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; Nunez-Barranco-Fernandez, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.
2015-12-01
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high-level trigger (HLT) farm. The DAQ system has been redesigned during the LHC shutdown in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbps Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbps InfiniBand FDR Clos network has been chosen for the event builder. This paper discusses the software design, protocols, and optimizations for exploiting the hardware capabilities. We present performance measurements from small-scale prototypes and from the full-scale production system.
A new event builder for CMS Run II
Albertsson, K.; Andre, J-M; Andronidis, A.; ...
2015-12-23
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high-level trigger (HLT) farm. The DAQ system has been redesigned during the LHC shutdown in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbps Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbps InfiniBand FDR Clos network has been chosen for the event builder. This paper discusses the software design, protocols, and optimizations for exploiting the hardware capabilities. In conclusion, we present performance measurements from small-scale prototypes and from the full-scale production system.
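The event-building step itself amounts to collecting all fragments that carry the same event identifier from the different readout sources before handing a complete event to the HLT. The Python sketch below shows that assembly logic in its simplest form; the fragment tuples, the N_SOURCES constant and the add_fragment() helper are illustrative assumptions, not the CMS DAQ protocol or data format.

    from collections import defaultdict

    N_SOURCES = 3   # hypothetical number of readout sources feeding the builder

    # Hypothetical fragments: (event_id, source_id, payload)
    fragments = [
        (1, 0, b"aa"), (2, 0, b"dd"), (1, 1, b"bb"),
        (1, 2, b"cc"), (2, 1, b"ee"), (2, 2, b"ff"),
    ]

    pending = defaultdict(dict)      # event_id -> {source_id: payload}

    def add_fragment(event_id, source_id, payload):
        """Buffer a fragment; return the complete event once all sources reported."""
        pending[event_id][source_id] = payload
        if len(pending[event_id]) == N_SOURCES:
            complete = pending.pop(event_id)
            return b"".join(complete[s] for s in range(N_SOURCES))
        return None

    for ev, src, data in fragments:
        built = add_fragment(ev, src, data)
        if built is not None:
            print("event", ev, "built:", built)

The production system does this concurrently for thousands of events in flight over the InfiniBand fabric, but the bookkeeping idea is the same.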
Bottomonium suppression using a lattice QCD vetted potential
NASA Astrophysics Data System (ADS)
Krouppa, Brandon; Rothkopf, Alexander; Strickland, Michael
2018-01-01
We estimate bottomonium yields in relativistic heavy-ion collisions using a lattice QCD vetted, complex-valued, heavy-quark potential embedded in a realistic, hydrodynamically evolving medium background. We find that the lattice-vetted functional form and temperature dependence of the proper heavy-quark potential dramatically reduces the dependence of the yields on parameters other than the temperature evolution, strengthening the picture of bottomonium as a QGP thermometer. Our results also show improved agreement between computed yields and experimental data produced in RHIC 200 GeV/nucleon collisions. For LHC 2.76 TeV/nucleon collisions, the excited states, whose suppression has been used as a vital sign for quark-gluon-plasma production in a heavy-ion collision, are reproduced better than with previous perturbatively-motivated potential models; however, at the highest LHC energies our estimates for bottomonium suppression begin to underestimate the data. Possible paths to remedy this situation are discussed.
A PCIe Gen3 based readout for the LHCb upgrade
NASA Astrophysics Data System (ADS)
Bellato, M.; Collazuol, G.; D'Antone, I.; Durante, P.; Galli, D.; Jost, B.; Lax, I.; Liu, G.; Marconi, U.; Neufeld, N.; Schwemmer, R.; Vagnoni, V.
2014-06-01
The architecture of the data acquisition system foreseen for the LHCb upgrade, to be installed by 2018, is devised to read out events trigger-less, synchronously with the LHC bunch-crossing rate of 40 MHz. Within this approach the readout boards act as a bridge between the front-end electronics and the High Level Trigger (HLT) computing farm. The baseline design for the LHCb readout is an ATCA board requiring dedicated crates. A standard local-area network protocol is implemented in the on-board FPGAs to read out the data. The alternative solution proposed here consists in building the readout boards as PCIe peripherals of the event-builder servers. The main architectural advantage is that the protocol and link technology of the event builder can be left open until very late, to profit from the most cost-effective industry technology available at the time of the LHC LS2.
Darr, Sylvia C.; Arntzen, Charles J.
1986-01-01
Conditions were developed to isolate the light-harvesting chlorophyll-protein complex serving photosystem II (LHC-II) using a dialyzable detergent, octylpolyoxyethylene. This LHC-II was successfully reconstituted into partially developed chloroplast thylakoids of Hordeum vulgare var Morex (barley) seedlings which were deficient in LHC-II. Functional association of LHC-II with the photosystem II (PSII) core complex was measured by two independent functional assays of PSII sensitization by LHC-II. A 3-fold excess of reconstituted LHC-II was required to equal the activity of LHC developing in vivo. We suggest that a linker component may be absent in the partially developed membranes which is required for specific association of the PSII core complex and LHC-II. PMID:16664744
Advanced Operating System Technologies
NASA Astrophysics Data System (ADS)
Cittolin, Sergio; Riccardi, Fabio; Vascotto, Sandro
In this paper we describe an R&D effort to define an OS architecture suitable for the requirements of the Data Acquisition and Control of an LHC experiment. Large distributed computing systems are foreseen to be the core part of the DAQ and Control system of the future LHC experiments. Networks of thousands of processors, handling dataflows of several gigabytes per second, with very strict timing constraints (microseconds), will become a common experience in the following years. Problems like distributed scheduling, real-time communication protocols, fault tolerance, distributed monitoring and debugging will have to be faced. A solid software infrastructure will be required to manage this very complicated environment; at this moment neither does CERN have the necessary expertise to build it, nor does any similar commercial implementation exist. Fortunately these problems are not unique to particle and high-energy physics experiments, and the current research work in the distributed systems field, especially in the distributed operating systems area, is trying to address many of the above-mentioned issues. The world that we are going to face in the next ten years will be quite different and surely much more interconnected than the one we see now. Very ambitious projects exist, planning to link towns, nations and the world in a single "Data Highway". Teleconferencing, Video on Demand, and Distributed Multimedia Applications are just a few examples of the very demanding tasks to which the computer industry is committing itself. These projects are triggering a great research effort in the distributed, real-time, micro-kernel based operating systems field and in the software engineering areas. The purpose of our group is to collect the outcome of these different research efforts, and to establish a working environment where the different ideas and techniques can be tested, evaluated and possibly extended, to address the requirements of a DAQ and Control System suitable for LHC. Our work started in the second half of 1994, with a research agreement between CERN and Chorus Systemes (France), world leader in micro-kernel OS technology. The Chorus OS is targeted at distributed real-time applications, and it can very efficiently support different "OS personalities" in the same environment, like Posix, UNIX, and a CORBA-compliant distributed object architecture. Projects are being set up to verify the suitability of our work for LHC applications: we are building a scaled-down prototype of the DAQ system foreseen for the CMS experiment at LHC, where we will directly test our protocols and where we will be able to make measurements and benchmarks, guiding our development and allowing us to build an analytical model of the system, suitable for simulation and large-scale verification.
Factorization and resummation for groomed multi-prong jet shapes
NASA Astrophysics Data System (ADS)
Larkoski, Andrew J.; Moult, Ian; Neill, Duff
2018-02-01
Observables which distinguish boosted topologies from QCD jets are playing an increasingly important role at the Large Hadron Collider (LHC). These observables are often used in conjunction with jet grooming algorithms, which reduce contamination from both theoretical and experimental sources. In this paper we derive factorization formulae for groomed multi-prong substructure observables, focusing in particular on the groomed D 2 observable, which is used to identify boosted hadronic decays of electroweak bosons at the LHC. Our factorization formulae allow systematically improvable calculations of the perturbative D 2 distribution and the resummation of logarithmically enhanced terms in all regions of phase space using renormalization group evolution. They include a novel factorization for the production of a soft subjet in the presence of a grooming algorithm, in which clustering effects enter directly into the hard matching. We use these factorization formulae to draw robust conclusions of experimental relevance regarding the universality of the D 2 distribution in both e + e - and pp collisions. In particular, we show that the only process dependence is carried by the relative quark vs. gluon jet fraction in the sample, no non-global logarithms from event-wide correlations are present in the distribution, hadronization corrections are controlled by the perturbative mass of the jet, and all global color correlations are completely removed by grooming, making groomed D 2 a theoretically clean QCD observable even in the LHC environment. We compute all ingredients to one-loop accuracy, and present numerical results at next-to-leading logarithmic accuracy for e + e - collisions, comparing with parton shower Monte Carlo simulations. Results for pp collisions, as relevant for phenomenology at the LHC, are presented in a companion paper [1].
The operation of the LHC accelerator complex (2/2)
Redaelli, Stefano
2018-05-23
These lectures will give an overview of what happens when the LHC is in running mode. They are aimed at students working on the LHC experiments, but all those who are curious about what happens behind the scenes of the LHC are welcome. You will learn all you always wanted to know about the LHC, and never had the courage to ask! The only prerequisite is a basic, college-level knowledge of EM and of the principles that allow charged beams to be steered. Topics covered will include, among others: - the description of the injector chain, from the generation of the protons, to the delivery of bunches to the LHC. - the discussion of the steps required to accelerate the beams in the LHC, to bring them into collision, and to control the luminosity at the interaction points. - the description of the monitoring tools available to the LHC operators, and an explanation of the various plots and panels that can be found on the LHC web pages.
The operation of the LHC accelerator complex (1/2)
Redaelli, Stefano
2018-05-23
These lectures will give an overview of what happens when the LHC is in running mode. They are aimed at students working on the LHC experiments, but all those who are curious about what happens behind the scenes of the LHC are welcome. You will learn all you always wanted to know about the LHC, and never had the courage to ask! The only prerequisite is a basic, college-level knowledge of EM and of the principles that allow charged beams to be steered. Topics covered will include, among others: - the description of the injector chain, from the generation of the protons, to the delivery of bunches to the LHC. - the discussion of the steps required to accelerate the beams in the LHC, to bring them into collision, and to control the luminosity at the interaction points. - the description of the monitoring tools available to the LHC operators, and an explanation of the various plots and panels that can be found on the LHC web pages.
Michael Ernst
2017-12-09
As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,
A New Understanding of the Heat Treatment of Nb-Sn Superconducting Wires
NASA Astrophysics Data System (ADS)
Sanabria, Charlie
Enhancing the beam energy of particle accelerators like the Large Hadron Collider (LHC), at CERN, can increase our probability of finding new fundamental particles of matter beyond those predicted by the standard model. Such discoveries could improve our understanding of the birth of the universe, the universe itself, and/or many other mysteries of matter that have been unresolved for decades, such as dark matter and dark energy. This is obviously a very exciting field of research, and therefore a worldwide collaboration (of universities, laboratories, and industry) is attempting to increase the beam energy in the LHC. One of the most challenging requirements for an energy increase is the production of a magnetic field homogeneous enough and strong enough to bend the high-energy particle beam to keep it inside the accelerating ring. In the current LHC design, these beam-bending magnets are made of Nb-Ti superconductors, reaching peak fields of 8 T. However, in order to move to higher fields, future magnets will have to use different and more advanced superconducting materials. Among the most viable superconductor wire technologies for future particle accelerator magnets is Nb3Sn, a technology that has been used in high-field magnets for many decades. However, Nb3Sn magnet fabrication has an important challenge: the fact that the wire fabrication and the coil assembly itself must be done using ductile metallic components (Nb, Sn, and Cu) before the superconducting compound (Nb3Sn) is activated inside the wires through a heat treatment. The studies presented in this thesis work have found that the heat treatment schedule used on the most advanced Nb3Sn wire technology (the Restacked Rod Process wires, RRP®) can still undergo significant improvements. These improvements have already led to an increase of the figure of merit of these wires (critical current density) by 28%.
NASA Astrophysics Data System (ADS)
Barreiro, F. H.; Borodin, M.; De, K.; Golubkov, D.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Padolski, S.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The second generation of the ATLAS Production System, called ProdSys2, is a distributed workload manager that runs hundreds of thousands of jobs daily, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as the Grid, clouds, supercomputers and volunteer computers. The system dynamically assigns a group of jobs (a task) to a group of geographically distributed computing resources. Dynamic assignment and resource utilization is one of the major features of the system; it did not exist in the earliest versions of the production system, where the Grid resource topology was predefined along national and/or geographical patterns. The Production System has a sophisticated job fault-recovery mechanism, which allows multi-terabyte tasks to be run efficiently without human intervention. We have implemented a “train” model and open-ended production, which allow tasks to be submitted automatically as soon as a new set of data is available, and allow physics-group data processing and analysis to be chained with the experiment's central production. We present an overview of the ATLAS Production System and the features and architecture of its major components: task definition, the web user interface and monitoring. We describe the important design decisions and lessons learned from operational experience during the first year of LHC Run 2. We also report the performance of the designed system and how various workflows, such as data (re)processing, Monte Carlo and physics-group production, and user analysis, are scheduled and executed within one production system on heterogeneous computing resources.
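For illustration, the dynamic task-to-resource assignment described above can be reduced to a toy matcher: a task advertises rough requirements and is placed on the resource that satisfies them with the most free capacity. The following is a minimal Python sketch with invented attributes (core count, memory per core, scratch space) and a deliberately simple placement rule; it is not ProdSys2 code.

    # Toy task-to-resource matcher illustrating dynamic assignment (not ProdSys2 code).
    # A task carries rough requirements; each resource advertises its free capacity.
    # The chosen resource is one that satisfies the task and has the most free cores.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Task:
        name: str
        cores: int          # cores per job slot
        memory_mb: int      # memory required per core
        scratch_gb: int     # local disk needed for input and output

    @dataclass
    class Resource:
        name: str
        free_cores: int
        memory_mb_per_core: int
        free_scratch_gb: int

    def assign(task: Task, resources: List[Resource]) -> Optional[Resource]:
        usable = [r for r in resources
                  if r.free_cores >= task.cores
                  and r.memory_mb_per_core >= task.memory_mb
                  and r.free_scratch_gb >= task.scratch_gb]
        return max(usable, key=lambda r: r.free_cores, default=None)

    resources = [Resource("GRID_SITE_A", 800, 2000, 5000),
                 Resource("HPC_CENTRE_B", 12000, 1500, 800),
                 Resource("CLOUD_C", 300, 4000, 10000)]
    task = Task("mc_simulation_chunk", cores=8, memory_mb=1800, scratch_gb=50)
    print(assign(task, resources))      # -> GRID_SITE_A (enough memory and scratch, most free cores)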
ATLAS and LHC computing on CRAY
NASA Astrophysics Data System (ADS)
Sciacca, F. G.; Haug, S.; ATLAS Collaboration
2017-10-01
Access to and exploitation of large-scale computing resources, such as those offered by general-purpose HPC centres, is an important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG Tier-2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large Cray systems at the Swiss National Supercomputing Centre CSCS. These systems not only offer very efficient hardware, cooling and highly competent operators, but also have large backfill potential due to their size and multidisciplinary usage, as well as potential gains due to economies of scale. Technical solutions, performance, expected return and future plans are discussed.
A programming framework for data streaming on the Xeon Phi
NASA Astrophysics Data System (ADS)
Chapeland, S.;
2017-10-01
ALICE (A Large Ion Collider Experiment) is the dedicated heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). After the second long shutdown of the LHC, the ALICE detector will be upgraded to cope with an interaction rate of 50 kHz in Pb-Pb collisions, producing in the online computing system (O2) a sustained throughput of 3.4 TB/s. These data will be processed on the fly so that the stream to permanent storage does not exceed a peak of 90 GB/s, the raw data being discarded. In the context of assessing different computing platforms for the O2 system, we have developed a framework for the Intel Xeon Phi processors (MIC). It provides the components to build a processing pipeline streaming the data from the PC memory to a pool of permanent threads running on the MIC, and back to the host after processing. It is based on explicit offloading mechanisms (data transfer, asynchronous tasks) and basic building blocks (FIFOs, memory pools, C++11 threads). The user only needs to implement the processing method to be run on the MIC. We present in this paper the architecture, implementation, and performance of this system.
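The host-to-accelerator streaming pattern described above (bounded FIFOs feeding a pool of persistent worker threads, with results returned to the host) can be sketched generically. The snippet below mimics only the data flow, using standard Python threads and queues with a placeholder processing kernel; it does not use the actual Intel offload mechanisms.

    # Generic sketch of the streaming pattern: a producer pushes data blocks into a
    # bounded input FIFO, a pool of persistent worker threads processes them, and
    # results go to an output FIFO. In the real framework the workers run on the
    # Xeon Phi via explicit offload; here everything runs on the host for clarity.
    import queue
    import threading

    in_fifo = queue.Queue(maxsize=16)    # bounded, playing the role of a memory pool
    out_fifo = queue.Queue()
    STOP = object()                      # sentinel used to shut the workers down

    def process(block):
        # Placeholder for the user-supplied processing method.
        return sum(block)

    def worker():
        while True:
            block = in_fifo.get()
            if block is STOP:
                in_fifo.put(STOP)        # propagate the sentinel to the other workers
                break
            out_fifo.put(process(block))

    workers = [threading.Thread(target=worker) for _ in range(4)]
    for w in workers:
        w.start()

    for i in range(100):                 # producer: stream 100 data blocks
        in_fifo.put([i] * 1024)
    in_fifo.put(STOP)

    for w in workers:
        w.join()
    print(out_fifo.qsize(), "blocks processed")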
Precision searches in dijets at the HL-LHC and HE-LHC
NASA Astrophysics Data System (ADS)
Chekanov, S. V.; Childers, J. T.; Proudfoot, J.; Wang, R.; Frizzell, D.
2018-05-01
This paper explores the physics reach of the High-Luminosity Large Hadron Collider (HL-LHC) for searches for new particles decaying to two jets. We discuss inclusive searches in dijets and b-jets, as well as searches in semi-inclusive events requiring an additional lepton, which increases sensitivity to different aspects of the underlying processes. We discuss the expected exclusion limits for generic models predicting new massive particles that result in resonant structures in the dijet mass. Prospects for the High-Energy LHC (HE-LHC) collider are also discussed. The study is based on the Pythia8 Monte Carlo generator using representative event statistics for the HL-LHC and HE-LHC running conditions. The event samples were created using supercomputers at NERSC.
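For reference, the searches above are built on the dijet invariant mass, m_jj = sqrt((E1+E2)^2 - |p1+p2|^2), computed from the two leading jets. The short Python sketch below evaluates it from (pT, eta, phi, mass) jet kinematics; the jet values are invented for illustration.

    # Dijet invariant mass from two jets given as (pT [GeV], eta, phi [rad], mass [GeV]).
    import math

    def four_vector(pt, eta, phi, m):
        px, py = pt * math.cos(phi), pt * math.sin(phi)
        pz = pt * math.sinh(eta)
        e = math.sqrt(px**2 + py**2 + pz**2 + m**2)
        return e, px, py, pz

    def dijet_mass(jet1, jet2):
        e1, px1, py1, pz1 = four_vector(*jet1)
        e2, px2, py2, pz2 = four_vector(*jet2)
        m2 = (e1 + e2)**2 - (px1 + px2)**2 - (py1 + py2)**2 - (pz1 + pz2)**2
        return math.sqrt(max(m2, 0.0))

    # Two illustrative high-pT jets, roughly back to back:
    print(dijet_mass((1200.0, 0.4, 0.1, 20.0), (1150.0, -0.6, 3.2, 25.0)))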
Consolidation of cloud computing in ATLAS
NASA Astrophysics Data System (ADS)
Taylor, Ryan P.; Domingues Cordeiro, Cristovao Jose; Giordano, Domenico; Hover, John; Kouba, Tomas; Love, Peter; McNab, Andrew; Schovancova, Jaroslava; Sobie, Randall; ATLAS Collaboration
2017-10-01
Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.
Enabling opportunistic resources for CMS Computing Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hufnagel, Dirk
With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Finally, we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.
Enabling opportunistic resources for CMS Computing Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hufnagel, Dirk
With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources — resources not owned by, or a priori configured for, CMS — to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.
Enabling opportunistic resources for CMS Computing Operations
Hufnagel, Dirk
2015-12-23
With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Finally, we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.
Left-right symmetry and the charged Higgs bosons at the LHC
NASA Astrophysics Data System (ADS)
Bambhaniya, G.; Chakrabortty, J.; Gluza, J.; Kordiaczynska, M.; Szafron, R.
2014-05-01
The charged Higgs boson sector of the Minimal Manifest Left-Right Symmetric Model (MLRSM) is investigated in the context of LHC searches for new physics beyond the Standard Model. We discuss and summarise the main processes within the MLRSM in which heavy charged Higgs bosons can be produced at the LHC. We explore scenarios where the amplified signals due to relatively light charged scalars dominate over the signals of the heavy neutral Z_2 and charged W_2 gauge bosons, as well as of the heavy neutral Higgs bosons, which are suppressed by the large vacuum expectation value v_R of the right-handed scalar triplet. Consistency with FCNC effects implies that the masses of two neutral Higgs bosons must be at least of the order of 10 TeV, which in turn implies that in the MLRSM only three of the four charged Higgs bosons can be simultaneously light. In particular, production processes with one and two doubly charged Higgs bosons are considered. We further incorporate the decays of those scalars, leading to multi-lepton signals at the LHC. Branching ratios for heavy neutrino N_R, W_2 and Z_2 decays into charged Higgs bosons are calculated. These effects are substantial and cannot be neglected. The tri- and four-lepton final states for different benchmark points are analysed. Kinematic cuts are chosen in order to strengthen the leptonic signals and decrease the Standard Model (SM) background. The results are presented using di-lepton invariant mass and lepton-lepton separation distributions for same-sign (SSDL) and opposite-sign (OSDL) di-leptons, and the charge asymmetry is also discussed. We find that, for the MLRSM processes considered, the tri-lepton and four-lepton signals are the most important for detection when compared to the SM background. Both signals can be detected in 14 TeV collisions at the LHC with an integrated luminosity at the level of 300 fb-1, with doubly charged Higgs bosons up to approximately 600 GeV. Finally, the possible extra contribution of the charged MLRSM scalar particles to the measured Higgs to di-photon (h → γγ) decay is computed and pointed out.
Simplified phenomenology for colored dark sectors
NASA Astrophysics Data System (ADS)
El Hedri, Sonia; Kaminska, Anna; de Vries, Maikel; Zurita, Jose
2017-04-01
We perform a general study of the relic density and LHC constraints on simplified models where the dark matter coannihilates with a strongly interacting particle X. In these models, the dark matter depletion is driven by the self-annihilation of X to pairs of quarks and gluons through the strong interaction. The phenomenology of these scenarios therefore only depends on the dark matter mass and the mass splitting between dark matter and X as well as the quantum numbers of X. In this paper, we consider simplified models where X can be either a scalar, a fermion or a vector, as well as a color triplet, sextet or octet. We compute the dark matter relic density constraints taking into account Sommerfeld corrections and bound state formation. Furthermore, we examine the restrictions from thermal equilibrium, the lifetime of X and the current and future LHC bounds on X pair production. All constraints are comprehensively presented in the mass splitting versus dark matter mass plane. While the relic density constraints can lead to upper bounds on the dark matter mass ranging from 2 TeV to more than 10 TeV across our models, the prospective LHC bounds range from 800 to 1500 GeV. A full coverage of the strongly coannihilating dark matter parameter space would therefore require hadron colliders with significantly higher center-of-mass energies.
Production of heavy Higgs bosons and decay into top quarks at the LHC
NASA Astrophysics Data System (ADS)
Bernreuther, W.; Galler, P.; Mellein, C.; Si, Z.-G.; Uwer, P.
2016-02-01
We investigate the production of heavy, neutral Higgs boson resonances and their decays to top-quark top-antiquark (tt̄) pairs at the Large Hadron Collider (LHC) at next-to-leading order (NLO) in the strong coupling of quantum chromodynamics (QCD). The NLO corrections to heavy Higgs boson production and the Higgs-QCD interference are calculated in the large m_t limit with an effective K-factor rescaling. The nonresonant tt̄ background is taken into account at NLO QCD including weak-interaction corrections. In order to consistently determine the total decay widths of the heavy Higgs bosons, we consider for definiteness the type-II two-Higgs-doublet extension of the standard model and choose three parameter scenarios that entail two heavy neutral Higgs bosons with masses above the tt̄ threshold and unsuppressed Yukawa couplings to top quarks. For these three scenarios we compute, for the LHC operating at 13 TeV, the tt̄ cross section and the distributions of the tt̄ invariant mass, of the transverse top-quark momentum and rapidity, and of the cosine of the Collins-Soper angle with and without the two heavy Higgs resonances. For selected M_tt̄ bins we estimate the significances for detecting a heavy Higgs signal in the tt̄ dileptonic and lepton plus jets decay channels.
Long-lived particle searches in R-parity violating MSSM
NASA Astrophysics Data System (ADS)
Zwane, Nosiphiwo
2017-10-01
In this paper we study the constraints on MSSM R-parity violating decays when the lightest superpartner (LSP) is moderately long-lived. In this scenario the LSP vertex displacement may be observed at the LHC. We compute limits on the RPV Yukawa couplings for which the vertex displacement signature may be used. We then use ATLAS and CMS displaced-vertex, meta-stable and prompt-decay searches to rule out a region of sparticle masses.
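The key quantity behind the displaced-vertex signature is the lab-frame decay length, L = beta*gamma*c*tau = (p/m)*c*tau; since the decay width grows with the square of the RPV coupling, c*tau (and hence the displacement) roughly scales as the inverse square of that coupling. The numbers in the sketch below are purely illustrative and are not taken from the paper.

    # Lab-frame decay length of a long-lived particle: L = beta*gamma*c*tau = (p/m)*c*tau.
    # Illustrative numbers only.
    def decay_length_mm(mass_gev, momentum_gev, ctau_mm):
        boost = momentum_gev / mass_gev          # beta*gamma of the decaying particle
        return boost * ctau_mm

    # e.g. a 700 GeV LSP produced with 500 GeV momentum and c*tau = 10 mm:
    print(decay_length_mm(700.0, 500.0, 10.0))   # ~7 mm displacement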
Physics perspectives with AFTER@LHC (A Fixed Target ExpeRiment at LHC)
NASA Astrophysics Data System (ADS)
Massacrier, L.; Anselmino, M.; Arnaldi, R.; Brodsky, S. J.; Chambert, V.; Da Silva, C.; Didelez, J. P.; Echevarria, M. G.; Ferreiro, E. G.; Fleuret, F.; Gao, Y.; Genolini, B.; Hadjidakis, C.; Hřivnáčová, I.; Kikola, D.; Klein, A.; Kurepin, A.; Kusina, A.; Lansberg, J. P.; Lorcé, C.; Lyonnet, F.; Martinez, G.; Nass, A.; Pisano, C.; Robbe, P.; Schienbein, I.; Schlegel, M.; Scomparin, E.; Seixas, J.; Shao, H. S.; Signori, A.; Steffens, E.; Szymanowski, L.; Topilskaya, N.; Trzeciak, B.; Uggerhøj, U. I.; Uras, A.; Ulrich, R.; Wagner, J.; Yamanaka, N.; Yang, Z.
2018-02-01
AFTER@LHC is an ambitious fixed-target project aiming to address open questions in the domains of proton and neutron spin, the Quark Gluon Plasma and high-x physics, at the highest energy ever reached in fixed-target mode. Indeed, thanks to the highly energetic 7 TeV proton and 2.76 A TeV lead LHC beams, centre-of-mass energies as large as √s = 115 GeV in pp/pA and √s_NN = 72 GeV in AA collisions can be reached, corresponding to an uncharted energy domain between the SPS and RHIC. We report on two main ways of performing fixed-target collisions at the LHC, both allowing for the use of one of the existing LHC experiments. In these proceedings, after discussing the projected luminosities considered for one year of data taking at the LHC, we present a selection of projections for light- and heavy-flavour production.
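The quoted energies follow from the fixed-target relation √s_NN ≈ √(2 E_beam m_N) for a beam of energy E_beam per nucleon hitting a nucleon at rest; the short check below reproduces the 115 GeV and 72 GeV figures.

    # Fixed-target nucleon-nucleon centre-of-mass energy:
    #   sqrt(s_NN) ~ sqrt(2 * E_beam_per_nucleon * m_nucleon), valid for E_beam >> m_nucleon.
    import math

    M_NUCLEON_GEV = 0.938

    def sqrt_s_nn(e_beam_per_nucleon_gev):
        return math.sqrt(2.0 * e_beam_per_nucleon_gev * M_NUCLEON_GEV)

    print(round(sqrt_s_nn(7000.0), 1))   # 7 TeV proton beam   -> ~114.6 GeV
    print(round(sqrt_s_nn(2760.0), 1))   # 2.76 A TeV Pb beam  -> ~72.0 GeV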
An artificial retina processor for track reconstruction at the LHC crossing rate
Bedeschi, F.; Cenci, R.; Marino, P.; ...
2017-11-23
The goal of the INFN-RETINA R&D project is to develop and implement a computational methodology that allows events with a large number (> 100) of charged-particle tracks to be reconstructed in pixel and silicon strip detectors at 40 MHz, thus matching the requirements for processing LHC events at the full bunch-crossing frequency. Our approach relies on a parallel pattern-recognition algorithm, dubbed the artificial retina, inspired by the early stages of image processing by the brain. In order to demonstrate that a track-processing system based on this algorithm is feasible, we built a sizable prototype of a tracking processor tuned to 3,000 patterns, based on already existing readout boards equipped with Altera Stratix III FPGAs. The detailed geometry and charged-particle activity of a large tracking detector currently in operation are used to assess its performance. Here, we report on the test results with such a prototype.
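The retina response can be pictured with a toy two-dimensional version: candidate tracks are straight lines y = m*x + q, each cell of a grid in (m, q) accumulates a Gaussian-weighted response from every hit, and local maxima of the response identify track candidates. The grid granularity and weighting below are illustrative choices and do not reflect the actual INFN-RETINA prototype.

    # Toy 2-D "artificial retina": each (m, q) cell accumulates a Gaussian-weighted
    # response from every hit (x, y), measuring how close the hit lies to the line
    # y = m*x + q. The best cell approximates the track parameters.
    import math

    def retina_response(hits, m_values, q_values, sigma=0.5):
        grid = {}
        for m in m_values:
            for q in q_values:
                response = 0.0
                for x, y in hits:
                    d = y - (m * x + q)            # residual of the hit w.r.t. this cell's line
                    response += math.exp(-d * d / (2.0 * sigma * sigma))
                grid[(m, q)] = response
        return grid

    # Hits roughly along y = 2*x + 1, plus one noise hit:
    hits = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2), (3.0, 6.8), (1.5, 9.0)]
    scan = [i * 0.25 for i in range(-8, 17)]       # coarse scan of slopes and intercepts
    grid = retina_response(hits, scan, scan)
    print("best (m, q) cell:", max(grid, key=grid.get))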
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bobyshev, A.; DeMar, P.; Grigaliunas, V.
The LHC is entering its fourth year of production operation. Most Tier1 facilities have been in operation for almost a decade, when development and ramp-up efforts are included. The LHC's distributed computing model is based on the availability of high-capacity, high-performance network facilities for both WAN and LAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to be on the leading edge of data center networking technology. In this paper, we analyze past and current developments in Tier1 LAN networking, and extrapolate where we anticipate networking technology is heading. Our analysis includes examination of the following areas: the evolution of Tier1 centers to their current state; evolving data center networking models and how they apply to Tier1 centers; the impact of emerging network technologies (e.g. 10GE-connected hosts, 40GE/100GE links, IPv6) on Tier1 centers; trends in WAN data movement and the emergence of software-defined WAN network capabilities; and network virtualization.
Superconducting Magnets for Accelerators
NASA Astrophysics Data System (ADS)
Brianti, G.; Tortschanoff, T.
1993-03-01
This chapter describes the main features of superconducting magnets for high-energy synchrotrons and colliders. It refers to magnets presently used and under development for the most advanced accelerator projects, both recently constructed and in the preparatory phase. These magnets, based mainly on NbTi conductor technology, are described from the aspects of design, materials, construction and performance. The trend toward higher performance can be gauged from the doubling of the design field in less than a decade, from about 4 T for the Tevatron to 10 T for the LHC. Special properties of superconducting accelerator magnets are described, such as their general layout, the need for extensive computational treatment, the limits of performance inherent to the available conductors, and the requirements on the structural design. The contribution is completed by elaborating on persistent-current effects, quench protection and cryostat design. As examples, the main magnets for HERA and the SSC, as well as the twin-aperture magnets for the LHC, are presented.
Complete NLO corrections to W+W+ scattering and its irreducible background at the LHC
NASA Astrophysics Data System (ADS)
Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu
2017-10-01
The process pp → μ+νμ e+νe jj receives several contributions of different orders in the strong and electroweak coupling constants. Using appropriate event selections, this process is dominated by vector-boson scattering (VBS) and has recently been measured at the LHC. It is thus of prime importance to estimate each contribution precisely. In this article we compute for the first time the full NLO QCD and electroweak corrections to VBS and its irreducible background processes with realistic experimental cuts. We do not rely on approximations but use complete amplitudes involving two different orders at tree level and three different orders at one-loop level. Since we take into account all interferences, at NLO the corrections to the VBS process and to the QCD-induced irreducible background process contribute at the same orders. Hence the two processes cannot be unambiguously distinguished, and all contributions to the μ+νμ e+νe jj final state should preferably be measured together.
An artificial retina processor for track reconstruction at the LHC crossing rate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bedeschi, F.; Cenci, R.; Marino, P.
The goal of the INFN-RETINA R&D project is to develop and implement a computational methodology that allows events with a large number (> 100) of charged-particle tracks to be reconstructed in pixel and silicon strip detectors at 40 MHz, thus matching the requirements for processing LHC events at the full bunch-crossing frequency. Our approach relies on a parallel pattern-recognition algorithm, dubbed the artificial retina, inspired by the early stages of image processing by the brain. In order to demonstrate that a track-processing system based on this algorithm is feasible, we built a sizable prototype of a tracking processor tuned to 3,000 patterns, based on already existing readout boards equipped with Altera Stratix III FPGAs. The detailed geometry and charged-particle activity of a large tracking detector currently in operation are used to assess its performance. Here, we report on the test results with such a prototype.
The Nature of Computer Assisted Learning.
ERIC Educational Resources Information Center
Whiting, John
Computer assisted learning (CAL) is an old technology which has generated much new interest. Computers can: reduce data to a directly comprehensible form; reduce administration; communicate worldwide and exchange, store, and retrieve data; and teach. The computer's limitation is in its dependence on the user's ability and perceptive nature.…
Is There a Microcomputer in Your Future? ComputerTown Thinks The Answer Is "Yes."
ERIC Educational Resources Information Center
Harvie, Barbara; Anton, Julie
1983-01-01
The services of ComputerTown, a nonprofit computer literacy project of the People's Computer Company in Menlo Park, California with 150 worldwide affiliates, are enumerated including getting started, funding sources, selecting hardware, software selection, support materials, administrative details, special offerings (classes, events), and common…
Issues Using the Life History Calendar in Disability Research
Scott, Tiffany N.; Harrison, Tracie
2011-01-01
Background: Overall, there is a dearth of research reporting mixed-method data collection procedures using the LHC within disability research. Objective: This report provides practical knowledge on the use of the life history calendar (LHC) from the perspective of a mixed-method life history study of mobility impairment situated within a qualitative paradigm. Methods: In this paper, the method-related literature referring to the LHC was reviewed, along with its epistemological underpinnings. Further, the uses of the LHC in disability research were illustrated using preliminary data from reports of disablement in Mexican American and Non-Hispanic White women with permanent mobility impairment. Results: From our perspective, the LHC was most useful when approached from an interpretive paradigm while gathering data from women of varied ethnic and socioeconomic strata. While we found the LHC the most useful tool currently available for studying disablement over the life course, there were challenges associated with its use. The LHC required extensive interviewer training. In addition, large segments of time were needed for completion, depending on the type of participant responses. Conclusions: Researchers planning to conduct a disability study may find our experience using the LHC valuable for anticipating issues that may arise when the LHC is used in mixed-method research. PMID:22014674
NASA Astrophysics Data System (ADS)
Zhang, Zhicai
2018-04-01
Many physics analyses using the Compact Muon Solenoid (CMS) detector at the LHC require accurate, high-resolution electron and photon energy measurements. Following the excellent performance achieved during LHC Run I at center-of-mass energies of 7 and 8 TeV, the CMS electromagnetic calorimeter (ECAL) is operating at the LHC with proton-proton collisions at 13 TeV center-of-mass energy. The instantaneous luminosity delivered by the LHC during Run II has reached unprecedented levels. The average number of concurrent proton-proton collisions per bunch crossing (pileup) has reached up to 40 interactions in 2016 and may increase further in 2017. These high pileup levels necessitate a retuning of the ECAL readout and trigger thresholds and reconstruction algorithms. In addition, the energy response of the detector must be precisely calibrated and monitored. We present new reconstruction algorithms and calibration strategies that were implemented to maintain the excellent performance of the CMS ECAL throughout Run II. We show performance results from the 2015-2016 data-taking periods and provide an outlook on the expected Run II performance in the years to come. Beyond the LHC, challenging running conditions for CMS are expected after the High-Luminosity upgrade of the LHC (HL-LHC). We review the design and R&D studies for the CMS ECAL and present first test beam studies. Particular challenges at the HL-LHC are the harsh radiation environment, the increasing data rates, and the extreme level of pileup, with up to 200 simultaneous proton-proton collisions. We present test beam results of hadron-irradiated PbWO4 crystals up to the fluences expected at the HL-LHC. We also report on the R&D for the new readout and trigger electronics, which must be upgraded due to the increased trigger and latency requirements at the HL-LHC.
2010-01-01
Background: The extended light-harvesting complex (LHC) protein superfamily is a centerpiece of eukaryotic photosynthesis, comprising the LHC family and several families involved in photoprotection, like the LHC-like and the photosystem II subunit S (PSBS) families. The evolution of this complex superfamily has long remained elusive, partially due to previously missing families. Results: In this study we present a meticulous search for LHC-like sequences in public genome and expressed sequence tag databases covering twelve representative photosynthetic eukaryotes from the three primary lineages of plants (Plantae): glaucophytes, red algae and green plants (Viridiplantae). By introducing a coherent classification of the different protein families based on both hidden Markov model analyses and structural predictions, numerous new LHC-like sequences were identified and several new families were described, including the red lineage chlorophyll a/b-binding-like protein (RedCAP) family from red algae and diatoms. The test of alternative topologies of sequences of the highly conserved chlorophyll-binding core structure of LHC and PSBS proteins significantly supports the independent origins of the LHC and PSBS families via two unrelated internal gene duplication events. This result was confirmed by the application of cluster likelihood mapping. Conclusions: The independent evolution of the LHC and PSBS families is supported by strong phylogenetic evidence. In addition, a possible origin of the LHC and PSBS families from different homologous members of the stress-enhanced protein subfamily, a diverse and anciently paralogous group of two-helix proteins, seems likely. The new hypothesis for the evolution of the extended LHC protein superfamily proposed here is in agreement with the character evolution analysis that incorporates the distribution of families and subfamilies across taxonomic lineages. Intriguingly, stress-enhanced proteins, which are universally found in the genomes of green plants, red algae, glaucophytes and in diatoms with complex plastids, could represent an important and previously missing link in the evolution of the extended LHC protein superfamily. PMID:20673336
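The HMM-based classification mentioned above amounts to scoring each sequence against a set of family models and assigning it to the best-scoring one (the study itself uses profile HMMs, typically built and scored with tools such as HMMER). The Python sketch below is only a didactic stand-in: a forward-algorithm scorer on a toy two-state model with an invented reduced alphabet.

    # Minimal forward-algorithm scorer: log-likelihood of a sequence under a toy HMM,
    # used to assign the sequence to the best-scoring "family" model. Didactic only.
    import math

    def forward_loglik(seq, states, start_p, trans_p, emit_p):
        # alpha[s] = P(observations so far, current state = s)
        alpha = {s: start_p[s] * emit_p[s].get(seq[0], 1e-9) for s in states}
        for sym in seq[1:]:
            alpha = {s: emit_p[s].get(sym, 1e-9) *
                        sum(alpha[r] * trans_p[r][s] for r in states)
                     for s in states}
        return math.log(sum(alpha.values()))

    states = ("M", "L")                      # membrane-helix-like vs loop-like state
    start_p = {"M": 0.5, "L": 0.5}
    trans_p = {"M": {"M": 0.9, "L": 0.1}, "L": {"M": 0.1, "L": 0.9}}
    family_a = {"M": {"h": 0.8, "p": 0.2}, "L": {"h": 0.3, "p": 0.7}}   # hydrophobic-rich model
    family_b = {"M": {"h": 0.4, "p": 0.6}, "L": {"h": 0.2, "p": 0.8}}   # polar-rich model

    seq = "hhhhpphhhh"                       # toy sequence over a reduced alphabet
    scores = {"family_a": forward_loglik(seq, states, start_p, trans_p, family_a),
              "family_b": forward_loglik(seq, states, start_p, trans_p, family_b)}
    print(max(scores, key=scores.get))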
Turning the LHC ring into a new physics search machine
NASA Astrophysics Data System (ADS)
Orava, Risto
2017-03-01
It is proposed to turn the LHC collider ring into an ultimate automatic search engine for new physics in four consecutive phases: (1) searches for heavy particles produced in the Central Exclusive Process (CEP) pp → p + X + p, based on the existing Beam Loss Monitoring (BLM) system of the LHC; (2) a feasibility study of using the LHC ring as a gravitational wave antenna; (3) extensions to the current BLM system to facilitate precise registration of the selected CEP proton exit points from the LHC beam vacuum chamber; (4) integration of the BLM-based event tagging system with the trigger and data acquisition systems of the LHC experiments to provide an on-line automatic search machine for the physics of tomorrow.
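For phase (1), the mass of the centrally produced system X in pp → p + X + p follows, for small fractional momentum losses ξ1 and ξ2 of the two outgoing protons, from M_X ≈ √(ξ1 ξ2 s). The quick numerical illustration below assumes 13 TeV collisions and invented ξ values.

    # Central exclusive production pp -> p + X + p: for small fractional momentum
    # losses xi1, xi2 of the outgoing protons, M_X ~ sqrt(xi1 * xi2 * s).
    import math

    def central_mass_gev(xi1, xi2, sqrt_s_gev=13000.0):   # assume 13 TeV collisions
        return math.sqrt(xi1 * xi2) * sqrt_s_gev

    print(round(central_mass_gev(0.01, 0.01)))   # -> 130 GeV central system
    print(round(central_mass_gev(0.05, 0.02)))   # -> ~411 GeV central system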
How to keep the Grid full and working with ATLAS production and physics jobs
NASA Astrophysics Data System (ADS)
Pacheco Pagés, A.; Barreiro Megino, F. H.; Cameron, D.; Fassi, F.; Filipcic, A.; Di Girolamo, A.; González de la Hoz, S.; Glushkov, I.; Maeno, T.; Walker, R.; Yang, W.; ATLAS Collaboration
2017-10-01
The ATLAS production system provides the infrastructure to process the millions of events collected during LHC Run 1 and the first two years of Run 2 using grid, cloud and high-performance computing resources. In this contribution we address the strategies and improvements that have been implemented in the production system for optimal performance and to achieve the highest efficiency of the available resources from an operational perspective. We focus on the recent developments.
Development, Validation and Integration of the ATLAS Trigger System Software in Run 2
NASA Astrophysics Data System (ADS)
Keyes, Robert; ATLAS Collaboration
2017-10-01
The trigger system of the ATLAS detector at the LHC is a combination of hardware, firmware and software, associated with various sub-detectors, that must seamlessly cooperate in order to select one collision of interest out of every 40,000 delivered by the LHC every millisecond. These proceedings discuss the challenges, organization and workflow of the ongoing trigger software development, validation and deployment. The goal of the development is to ensure that the most up-to-date algorithms are used to optimize the performance of the experiment. The goal of the validation is to ensure the reliability and predictability of the software performance. Integration tests are carried out to ensure that the software deployed to the online trigger farm during data-taking runs as desired. Trigger software is validated by emulating online conditions using a benchmark run and mimicking the reconstruction that occurs during normal data-taking. This exercise is computationally demanding and thus runs on the ATLAS high-performance computing grid with high priority. Performance metrics, ranging from low-level memory and CPU requirements to distributions and efficiencies of high-level physics quantities, are visualized and validated by a range of experts. This is a multifaceted critical task that ties together many aspects of the experimental effort and thus directly influences the overall performance of the ATLAS experiment.
Implementation of an object oriented track reconstruction model into multiple LHC experiments*
NASA Astrophysics Data System (ADS)
Gaines, Irwin; Gonzalez, Saul; Qian, Sijin
2001-10-01
An Object Oriented (OO) model (Gaines et al., 1996; 1997; Gaines and Qian, 1998; 1999) for track reconstruction by the Kalman filtering method has been designed for high energy physics experiments at high luminosity hadron colliders. The model has been coded in the C++ programming language and has been successfully implemented into the OO computing environments of both the CMS (1994) and ATLAS (1994) experiments at the future Large Hadron Collider (LHC) at CERN. We shall report: how the OO model was adapted, with largely the same code, to different scenarios and serves different reconstruction aims in different experiments (i.e. the level-2 trigger software for ATLAS and the offline software for CMS); how the OO model has been incorporated into different OO environments with a similar integration structure (demonstrating the ease of re-use of OO programs); what the OO model's performance is, including execution time, memory usage, track-finding efficiency, ghost rate, etc.; and additional physics performance based on use of the OO tracking model. We shall also mention the experience and lessons learned from the implementation of the OO model into the general OO software frameworks of the experiments. In summary, our practice shows that OO technology really makes the software development and integration issues straightforward and convenient; this may be particularly beneficial for non-computer-professional physicists in general.
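For orientation, a Kalman-filter track fit alternates a prediction step (propagating the track state and its covariance to the next detector layer) with an update step (incorporating the measurement on that layer). The one-dimensional toy below, with a (position, slope) state and invented hit positions, illustrates the mechanics only and is unrelated to the actual CMS or ATLAS code.

    # One-dimensional toy Kalman filter for a straight track: state = (position, slope),
    # propagated layer by layer and updated with the position measured on each layer.
    import numpy as np

    def kalman_track_fit(measurements, layer_spacing=1.0, meas_sigma=0.1):
        x = np.array([measurements[0], 0.0])               # initial state: (position, slope)
        P = np.diag([meas_sigma**2, 1.0])                  # loose initial covariance
        F = np.array([[1.0, layer_spacing], [0.0, 1.0]])   # propagation between layers
        H = np.array([[1.0, 0.0]])                         # only the position is measured
        R = np.array([[meas_sigma**2]])                    # measurement noise
        for m in measurements[1:]:
            x = F @ x                                      # prediction step
            P = F @ P @ F.T
            S = H @ P @ H.T + R                            # update step
            K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
            x = x + K @ (np.array([m]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
        return x, P

    hits = [0.02, 0.48, 1.03, 1.51, 1.97]                  # invented hits on 5 equidistant layers
    state, cov = kalman_track_fit(hits)
    print("fitted (position, slope) at the last layer:", state)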
Analysis of CERN computing infrastructure and monitoring data
NASA Astrophysics Data System (ADS)
Nieke, C.; Lassnig, M.; Menichetti, L.; Motesnitsalis, E.; Duellmann, D.
2015-12-01
Optimizing a computing infrastructure on the scale of the LHC requires a quantitative understanding of a complex network of many different resources and services. For this purpose the CERN IT department and the LHC experiments are collecting a large multitude of logs and performance probes, which are already successfully used for short-term analysis (e.g. operational dashboards) within each group. The IT analytics working group has been created with the goal of bringing together data sources from different services and different abstraction levels, and of implementing a suitable infrastructure for mid- to long-term statistical analysis. It further provides a forum for joint optimization across single-service boundaries and for the exchange of analysis methods and tools. To simplify access to the collected data, we implemented an automated repository for cleaned and aggregated data sources based on the Hadoop ecosystem. This contribution describes some of the challenges encountered, such as dealing with heterogeneous data formats and selecting an efficient storage format for MapReduce and external access, and describes the repository user interface. Using this infrastructure we were able to quantitatively analyze the relationship between the CPU/wall fraction, the latency/throughput constraints of network and disk, and the effective job throughput. In this contribution we first describe the design of the shared analysis infrastructure and then present a summary of first analysis results from the combined data sources.
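One of the quantities quoted above, the CPU/wall fraction, is simply the ratio of consumed CPU time to elapsed wall-clock time aggregated over job records. The snippet below shows the aggregation on an invented in-memory sample rather than on the actual Hadoop repository.

    # CPU efficiency (CPU time / wall-clock time) aggregated per site from job records.
    # The records are invented; the real analysis runs over the aggregated log repository.
    from collections import defaultdict

    job_records = [
        {"site": "T1_EXAMPLE", "cpu_s": 34000, "wall_s": 36000},
        {"site": "T1_EXAMPLE", "cpu_s": 18000, "wall_s": 30000},
        {"site": "T2_EXAMPLE", "cpu_s": 70000, "wall_s": 72000},
    ]

    totals = defaultdict(lambda: {"cpu": 0, "wall": 0})
    for rec in job_records:
        totals[rec["site"]]["cpu"] += rec["cpu_s"]
        totals[rec["site"]]["wall"] += rec["wall_s"]

    for site, t in sorted(totals.items()):
        print(f"{site}: CPU/wall = {t['cpu'] / t['wall']:.2f}")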
An experimental research program on chirality at the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markert, Christina
Heavy-ion collisions provide a unique opportunity to investigate the fundamental laws of physics of the strong force. The extreme conditions created by the collisions within a finite volume are akin to the properties of the deconfined partonic state which existed very shortly after the Big Bang and just prior to visible matter formation in the Universe. In this state massless quarks and gluons (partons) are "quasi-free" particles, forming the so-called Quark Gluon Plasma (QGP). By following the expansion and cooling of this state, we can map out the process of nucleonic matter formation, which occurs during the phase transition. The fundamental properties of this early partonic phase of matter are not well understood, but they are essential for confirming QCD (Quantum Chromodynamics) and the Standard Model. The specific topic, chiral symmetry restoration, has been called "the remaining puzzle of QCD." This puzzle can only be studied in the dense partonic medium generated in heavy-ion collisions. The research objectives of this proposal are the development and application of new analysis strategies to study chirality and the properties of the medium above the QGP phase transition using hadronic resonances detected with the ALICE experiment at the Large Hadron Collider (LHC) at the CERN research laboratory in Switzerland. This grant funded a new effort at the University of Texas at Austin (UT Austin) to investigate the Quark Gluon Plasma (QGP) at the highest possible energy of 2.76 TeV per nucleon at the Large Hadron Collider (LHC) at CERN via the ALICE experiment. The findings added to our knowledge of the dynamical evolution and the properties of the hot, dense matter produced in heavy-ion collisions, and provided a deeper understanding of multi-hadron interactions in these extreme nuclear matter systems. Our group also contributed to the hardware and software for the ALICE USA-funded calorimeter detector (EMCal). The LHC research program and its connection to fundamental questions in high-energy, nuclear and astrophysics has triggered the imagination of many young students worldwide. The studies also promoted the early involvement of students and young postdocs in a large, multi-national research effort abroad, which provided them with substantial experience and skills prior to choosing their career path. The undergraduate program, in conjunction with the Freshman Research Initiative at UT Austin, allowed the students to complete a research project within the field of nuclear physics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fermilab
2017-09-01
Scientists, engineers and programmers at Fermilab are tackling today’s most challenging computational problems. Their solutions, motivated by the needs of worldwide research in particle physics and accelerators, help America stay at the forefront of innovation.
AEC Experiment Establishes Computer Link Between California and Paris
… demonstrated that a terminal in Paris could search a computer in California and display the resulting … The feasibility of a worldwide information retrieval system which would tie a computer base of information to terminals on the …
NASA Astrophysics Data System (ADS)
Barberis, Stefano; Carminati, Leonardo; Leveraro, Franco; Mazza, Simone Michele; Perini, Laura; Perlz, Francesco; Rebatto, David; Tura, Ruggero; Vaccarossa, Luca; Villaplana, Miguel
2015-12-01
We present the approach of the University of Milan Physics Department and the local unit of INFN to allowing and encouraging the sharing of computing, storage and networking resources among different research areas (the largest resources being those composing the Milan WLCG Tier-2 centre and tailored to the needs of the ATLAS experiment). Computing resources are organised as independent HTCondor pools, with a global master in charge of monitoring them and optimising their usage. The configuration has to provide satisfactory throughput for both serial and parallel (multicore, MPI) jobs. A combination of local, remote and cloud storage options is available. The experience of users from different research areas operating on this shared infrastructure is discussed. The promising direction of improving scientific computing throughput by federating access to distributed computing and storage also seems to fit very well with the objectives listed in the European Horizon 2020 framework for research and development.
CMS Distributed Computing Integration in the LHC sustained operations era
NASA Astrophysics Data System (ADS)
Grandi, C.; Bockelman, B.; Bonacorsi, D.; Fisk, I.; González Caballero, I.; Farina, F.; Hernández, J. M.; Padhi, S.; Sarkar, S.; Sciabà, A.; Sfiligoi, I.; Spiga, F.; Úbeda García, M.; Van Der Ster, D. C.; Zvada, M.
2011-12-01
After many years of preparation the CMS computing system has reached a situation where stability in operations limits the possibility of introducing innovative features. Nevertheless, it is the same need for stability and smooth operations that requires the introduction of features that were not considered strategic in the previous phases. Examples are: adequate authorization to control and prioritize the access to storage and computing resources; improved monitoring to investigate problems and identify bottlenecks in the infrastructure; increased automation to reduce the manpower needed for operations; and an effective process to deploy new releases of the software tools in production. We present the work of the CMS Distributed Computing Integration Activity, which is responsible for providing a liaison between the CMS distributed computing infrastructure and the software providers, both internal and external to CMS. In particular we describe the introduction of new middleware features during the last 18 months, as well as the requirements on Grid and Cloud software developers for the future.
NASA Astrophysics Data System (ADS)
Carvalho, D.; Gavillet, Ph.; Delgado, V.; Albert, J. N.; Bellas, N.; Javello, J.; Miere, Y.; Ruffinoni, D.; Smith, G.
Large scientific equipment is controlled by computer systems whose complexity is growing, driven on the one hand by the volume and variety of the information, its distributed nature and the sophistication of its treatment, and on the other hand by the fast evolution of the computer and network market. Such systems are generically called Large-Scale Distributed Data Intensive Information Systems, or Distributed Computer Control Systems (DCCS) for those dealing more with real-time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as client-server applications. In this framework the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC, in view, it is proposed to integrate the various functions of DCCS monitoring into one general-purpose multi-layer system.
LHC: The Emptiest Space in the Solar System
ERIC Educational Resources Information Center
Cid-Vidal, Xabier; Cid, Ramon
2011-01-01
Proton beams have been colliding at 7 TeV in the Large Hadron Collider (LHC) since 30 March 2010, meaning that the LHC research programme is underway. Particle physicists around the world are looking forward to using the data from these collisions, as the LHC is running at an energy three and a half times higher than previously achieved at any…
Induced activation studies for the LHC upgrade to High Luminosity LHC
NASA Astrophysics Data System (ADS)
Adorisio, C.; Roesler, S.
2018-06-01
The Large Hadron Collider (LHC) will be upgraded in 2019/2020 to increase its luminosity (rate of collisions) by a factor of five beyond its design value and its integrated luminosity by a factor of ten, in order to maintain scientific progress and exploit its full capacity. The new machine configuration, called the High Luminosity LHC (HL-LHC), will consequently increase the level of activation of its components. The evaluation of the radiological impact of HL-LHC operation in the Long Straight Sections of Insertion Region 1 (ATLAS) and Insertion Region 5 (CMS) is presented. Using the Monte Carlo code FLUKA, ambient dose equivalent rate estimations have been performed on the basis of two announced operating scenarios and the latest available machine layout. The HL-LHC project requires new technical infrastructure, with caverns and 300 m long tunnels along Insertion Regions 1 and 5. The new underground service galleries will be accessible during operation of the accelerator. The radiological risk assessment for the civil engineering work foreseen to start with the excavation of the new galleries in the next LHC Long Shutdown, and the radiological impact of machine operation, will be discussed.
NASA Astrophysics Data System (ADS)
Belyaev, Alexander; Cacciapaglia, Giacomo; Ivanov, Igor P.; Rojas-Abatte, Felipe; Thomas, Marc
2018-02-01
The inert two-Higgs-doublet model (i2HDM) is a theoretically well-motivated example of a minimal consistent dark matter (DM) model which provides monojet, mono-Z, mono-Higgs, and vector-boson-fusion + E_T^miss signatures at the LHC, complemented by signals in direct and indirect DM search experiments. In this paper we have performed a detailed analysis of the constraints in the full five-dimensional parameter space of the i2HDM, coming from perturbativity, unitarity, electroweak precision data, Higgs data from the LHC, the DM relic density, direct/indirect DM detection, and the LHC monojet analysis, as well as the implications of experimental LHC studies on disappearing charged tracks relevant to the high-DM-mass region. We demonstrate the complementarity of the above constraints and present projections for future LHC data and direct DM detection experiments to probe further i2HDM parameter space. The model is implemented into the CalcHEP and micrOMEGAs packages, which are publicly available at the HEPMDB database, and it is ready for further exploration in the context of the LHC, relic density, and DM direct detection.
Run II of the LHC: The Accelerator Science
NASA Astrophysics Data System (ADS)
Redaelli, Stefano
2015-04-01
In 2015 the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) starts its Run II operation. After the successful Run I at 3.5 TeV and 4 TeV in the 2010-2013 period, a first long shutdown (LS1) was mainly dedicated to the consolidation of the LHC magnet interconnections, to allow the LHC to operate at its design beam energy of 7 TeV. Other key accelerator systems have also been improved to optimize the performance reach at higher beam energies. After a review of the LS1 activities, the status of the LHC start-up progress is reported, addressing in particular the status of the LHC hardware commissioning and of the training campaign of superconducting magnets that will determine the operation beam energy in 2015. Then, the plans for the Run II operation are reviewed in detail, covering choice of initial machine parameters and strategy to improve the Run II performance. Future prospects of the LHC and its upgrade plans are also presented.
The High Luminosity LHC Project
NASA Astrophysics Data System (ADS)
Rossi, Lucio
The High Luminosity LHC is one of the major scientific projects of the next decade. It aims at increasing the luminosity reach of the LHC by a factor of five in peak luminosity and a factor of ten in integrated luminosity. The project, now fully approved and funded, will be completed in ten years and will prolong the life of the LHC until 2035-2040. It implies deep modifications of the LHC over about 1.2 km around the high-luminosity insertions of ATLAS and CMS, and relies on new cutting-edge technologies. We are developing new advanced superconducting magnets capable of reaching a 12 T field; superconducting RF crab cavities capable of rotating the beams with great accuracy; 100 kA, hundred-metre-long superconducting links for moving the power converters out of the tunnel; new collimator concepts; and more. Besides the important physics goals, the High Luminosity LHC project is an ideal test bed for new technologies for the next hadron collider of the post-LHC era.
ERIC Educational Resources Information Center
Evans, C. D.
This paper describes the experiences of the industrial research laboratory of Kodak Ltd. in finding and providing a computer terminal most suited to its very varied requirements. These requirements include bibliographic and scientific data searching and access to a number of worldwide computing services for scientific computing work. The provision…
Multi-Threaded Algorithms for GPGPU in the ATLAS High Level Trigger
NASA Astrophysics Data System (ADS)
Conde Muíño, P.; ATLAS Collaboration
2017-10-01
General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general-purpose particle physics experiment located at the LHC collider at CERN. The ATLAS trigger system consists of two levels, with Level-1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPUs. The High Level Trigger reduces the trigger rate from the 100 kHz Level-1 acceptance rate to 1.5 kHz for recording, requiring an average per-event processing time of ∼250 ms for this task. The selection in the High Level Trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significant challenge that will grow with future LHC upgrades. During the LHC data-taking period starting in 2021, the luminosity will reach up to three times the original design value. The luminosity will increase further, to 7.5 times the design value, in 2026 following LHC and ATLAS upgrades. Corresponding improvements in the speed of the reconstruction code will be needed to provide the required trigger selection power within affordable computing resources. Key factors determining the potential benefit of including GPGPUs as part of the HLT processor farm are: the relative speed of the CPU and GPGPU algorithm implementations; the relative execution times of the GPGPU algorithms and the serial code remaining on the CPU; the number of GPGPUs required; and the relative financial cost of the selected GPGPUs. We give a brief overview of the algorithms implemented and present new measurements that compare the performance of various configurations exploiting GPGPU cards.
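The trade-off listed above is essentially an Amdahl-type estimate: the achievable per-event time depends on the fraction of the reconstruction that is offloaded and on the relative speed of the GPGPU implementation. The back-of-the-envelope sketch below uses invented offload fractions and speed-ups together with the ~250 ms budget quoted above.

    # Amdahl-style estimate of the per-event processing time when a fraction of the
    # reconstruction is offloaded to a GPGPU with a given speed-up. Numbers are invented.
    def accelerated_event_time_ms(t_event_ms, offloaded_fraction, gpu_speedup):
        serial = (1.0 - offloaded_fraction) * t_event_ms       # work that stays on the CPU
        offloaded = offloaded_fraction * t_event_ms / gpu_speedup
        return serial + offloaded

    baseline_ms = 250.0                                        # average per-event budget
    for frac, speedup in [(0.4, 5.0), (0.6, 5.0), (0.6, 10.0)]:
        t = accelerated_event_time_ms(baseline_ms, frac, speedup)
        print(f"offload {frac:.0%} at {speedup:.0f}x -> {t:.0f} ms/event "
              f"({baseline_ms / t:.1f}x overall gain)")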
Using tevatron magnets for HE-LHC or new ring in LHC tunnel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piekarz, Henryk; /Fermilab
Two injector accelerator options for the HE-LHC, a p+-p+ collider with 33 TeV centre-of-mass energy, are briefly outlined. One option is based on the Super-SPS (S-SPS) accelerator in the SPS tunnel, and the other on the LER (Low-Energy Ring) accelerator in the LHC tunnel. The expected performance of the main arc accelerator magnets considered for the construction of the S-SPS and of the LER is used to tentatively devise selected properties of these accelerators as potential injectors to the HE-LHC.
The development of diamond tracking detectors for the LHC
NASA Astrophysics Data System (ADS)
Adam, W.; Berdermann, E.; Bergonzo, P.; de Boer, W.; Bogani, F.; Borchi, E.; Brambilla, A.; Bruzzi, M.; Colledani, C.; Conway, J.; D'Angelo, P.; Dabrowski, W.; Delpierre, P.; Doroshenko, J.; Dulinski, W.; van Eijk, B.; Fallou, A.; Fischer, P.; Fizzotti, F.; Furetta, C.; Gan, K. K.; Ghodbane, N.; Grigoriev, E.; Hallewell, G.; Han, S.; Hartjes, F.; Hrubec, J.; Husson, D.; Kagan, H.; Kaplon, J.; Karl, C.; Kass, R.; Keil, M.; Knöpfle, K. T.; Koeth, T.; Krammer, M.; Logiudice, A.; Lu, R.; mac Lynne, L.; Manfredotti, C.; Marshall, R. D.; Meier, D.; Menichelli, D.; Meuser, S.; Mishina, M.; Moroni, L.; Noomen, J.; Oh, A.; Perera, L.; Pernegger, H.; Pernicka, M.; Polesello, P.; Potenza, R.; Riester, J. L.; Roe, S.; Rudge, A.; Sala, S.; Sampietro, M.; Schnetzer, S.; Sciortino, S.; Stelzer, H.; Stone, R.; Sutera, C.; Trischuk, W.; Tromson, D.; Tuve, C.; Vincenzo, B.; Weilhammer, P.; Wermes, N.; Wetstein, M.; Zeuner, W.; Zoeller, M.; RD42 Collaboration
2003-11-01
Chemical vapor deposition diamond has been discussed extensively as an alternative sensor material for use very close to the interaction region of the LHC, where extreme radiation conditions exist. During the last few years diamond devices have been manufactured and tested with LHC electronics, with the goal of creating a detector usable by all LHC experiments. Extensive progress on diamond quality, on the development of diamond trackers and on radiation hardness studies has been made. Adapting the technology to the LHC-specific requirements is now underway. In this paper we present the recent progress achieved.
Evolution of CMS workload management towards multicore job support
NASA Astrophysics Data System (ADS)
Pérez-Calero Yzquierdo, A.; Hernández, J. M.; Khan, F. A.; Letts, J.; Majewski, K.; Rodrigues, A. M.; McCrea, A.; Vaandering, E.
2015-12-01
The successful exploitation of multicore processor architectures is a key element of the LHC distributed computing system in the coming era of LHC Run 2. High-pileup, complex-collision events represent a challenge for traditional sequential programming in terms of memory and processing-time budget. The CMS data production and processing framework is introducing the parallel execution of the reconstruction and simulation algorithms to overcome these limitations. CMS plans to execute multicore jobs while still supporting single-core processing for other tasks that are difficult to parallelize, such as user analysis. The CMS strategy for job management thus aims at integrating single-core and multicore job scheduling across the Grid. This is accomplished by employing multicore pilots with internal dynamic partitioning of the allocated resources, capable of running payloads of various core counts simultaneously. An extensive test programme has been conducted to enable multicore scheduling with the various local batch systems available at CMS sites, with the focus on the Tier-0 and Tier-1s, responsible during 2015 for the prompt data reconstruction. Scale tests have been run to analyse the performance of this scheduling strategy and to ensure an efficient use of the distributed resources. This paper presents the evolution of the CMS job management and resource provisioning systems to support this hybrid scheduling model, as well as its deployment and performance tests, which will enable CMS to transition to a multicore production model for the second LHC run.
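The internal dynamic partitioning of a multicore pilot can be pictured as simple packing of payloads of various core counts into the pilot's core budget, with cores returned to the free pool when a payload finishes. The Python sketch below illustrates the scheduling idea only; it is not glideinWMS code and the payload names are invented.

    # Toy dynamic partitioning of a multicore pilot: payloads requesting various core
    # counts are accepted while they fit into the pilot's core budget, and their cores
    # are released when they finish. Illustration of the idea only, not glideinWMS code.
    class MulticorePilot:
        def __init__(self, total_cores):
            self.total_cores = total_cores
            self.running = {}                    # payload name -> cores held

        @property
        def free_cores(self):
            return self.total_cores - sum(self.running.values())

        def start(self, name, cores):
            if cores <= self.free_cores:
                self.running[name] = cores
                return True
            return False                         # payload stays queued until cores free up

        def finish(self, name):
            self.running.pop(name, None)         # return the cores to the free pool

    pilot = MulticorePilot(total_cores=8)
    print(pilot.start("reco_multicore", 4))      # True  -> 4 cores left
    print(pilot.start("analysis_singlecore", 1)) # True  -> 3 cores left
    print(pilot.start("reco_multicore_2", 4))    # False -> must wait
    pilot.finish("reco_multicore")
    print(pilot.start("reco_multicore_2", 4))    # True after the cores are released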
Evolution of CMS Workload Management Towards Multicore Job Support
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez-Calero Yzquierdo, A.; Hernández, J. M.; Khan, F. A.
The successful exploitation of multicore processor architectures is a key element of the LHC distributed computing system in the coming era of LHC Run 2. High-pileup, complex-collision events represent a challenge for traditional sequential programming in terms of memory and processing-time budget. The CMS data production and processing framework is introducing the parallel execution of the reconstruction and simulation algorithms to overcome these limitations. CMS plans to execute multicore jobs while still supporting single-core processing for other tasks that are difficult to parallelize, such as user analysis. The CMS strategy for job management thus aims at integrating single-core and multicore job scheduling across the Grid. This is accomplished by employing multicore pilots with internal dynamic partitioning of the allocated resources, capable of running payloads of various core counts simultaneously. An extensive test programme has been conducted to enable multicore scheduling with the various local batch systems available at CMS sites, with the focus on the Tier-0 and Tier-1s, responsible during 2015 for the prompt data reconstruction. Scale tests have been run to analyse the performance of this scheduling strategy and to ensure an efficient use of the distributed resources. This paper presents the evolution of the CMS job management and resource provisioning systems to support this hybrid scheduling model, as well as its deployment and performance tests, which will enable CMS to transition to a multicore production model for the second LHC run.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berger, Edmond L.; Giddings, Steven B.; Wang, Haichen
2014-10-10
Here, the LHC phenomenology of a low-scale gauged flavor symmetry model with inverted hierarchy is studied, through introduction of a simplified model of broken flavor symmetry. A new scalar (a flavon) and a new neutral top-philic massive gauge boson emerge with mass in the TeV range, along with a new heavy fermion associated with the standard model top quark. After checking constraints from electroweak precision observables, we investigate the influence of the model on Higgs boson physics, notably on its production cross section and decay branching fractions. Limits on the flavon φ from heavy Higgs boson searches at the LHC at 7 and 8 TeV are presented. The branching fractions of the flavon are computed as a function of the flavon mass and the Higgs-flavon mixing angle. We also explore possible discovery of the flavon at 14 TeV, particularly via the φ → Z⁰Z⁰ decay channel in the 2ℓ2ℓ' final state, and through standard model Higgs boson pair production φ → hh in the bb̄γγ final state. We conclude that the flavon mass range up to 500 GeV could be probed down to quite small values of the Higgs-flavon mixing angle with 100 fb⁻¹ of integrated luminosity at 14 TeV.
ERIC Educational Resources Information Center
Cohen, Moshe; Miyake, Naomi
A worldwide international computer network, called the Intercultural Learning Network, has been developed to provide students from different cultures with opportunities to work cooperatively. Prototype activities have been developed and tested which facilitate and contextualize interactions among secondary and college students. Joint projects in…
Communication Vulnerability in the Digital Age: A Missed Concern in Constructivism
ERIC Educational Resources Information Center
Katada, Fusa
2016-01-01
The current wave of globalization aided by ubiquitous computing necessarily involves interaction and integration among people and human institutions worldwide. This has led to a worldwide awareness that professionals in academia need to have effective communication skills. Such communication-driven academic discourse puts much demand on language…
ERIC Educational Resources Information Center
Snapp, Robert R.; Neumann, Maureen D.
2015-01-01
The rapid growth of digital technology, including the worldwide adoption of mobile and embedded computers, places new demands on K-grade 12 educators and their students. Young people should have an opportunity to learn the technical knowledge of computer science (e.g., computer programming, mathematical logic, and discrete mathematics) in order to…
PDF4LHC recommendations for LHC Run II
Butterworth, Jon; Carrazza, Stefano; Cooper-Sarkar, Amanda; ...
2016-01-06
We provide an updated recommendation for the usage of sets of parton distribution functions (PDFs) and the assessment of PDF and PDF+αs uncertainties suitable for applications at LHC Run II. We review developments since the previous PDF4LHC recommendation, and discuss and compare the new generation of PDFs, which include substantial information from experimental data from Run I of the LHC. We then propose a new prescription for the combination of a suitable subset of the available PDF sets, which is presented in terms of a single combined PDF set. Lastly, we discuss tools which allow for the delivery of this combined set in terms of optimized sets of Hessian eigenvectors or Monte Carlo replicas, describe their usage, and provide some examples of their application to LHC phenomenology.
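As a rough illustration of how a Monte Carlo replica delivery of such a combined set is used in practice, the sketch below applies the usual MC prescription: the central value is the mean over replicas and the PDF uncertainty is the standard deviation. The replica values are entirely synthetic stand-ins, not a real PDF set.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for N_rep Monte Carlo replicas of a PDF evaluated at a few x points.
n_rep = 100
x_points = np.array([1e-3, 1e-2, 1e-1])
replicas = np.abs(rng.normal(loc=0.5 / x_points, scale=0.05 / x_points,
                             size=(n_rep, x_points.size)))

central = replicas.mean(axis=0)              # central value: mean over replicas
uncertainty = replicas.std(axis=0, ddof=1)   # PDF uncertainty: standard deviation over replicas

for x, c, u in zip(x_points, central, uncertainty):
    print(f"x = {x:.0e}:  f(x) = {c:.1f} +/- {u:.1f}")
```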
Muons in the CMS High Level Trigger System
NASA Astrophysics Data System (ADS)
Verwilligen, Piet; CMS Collaboration
2016-04-01
The trigger systems of LHC detectors play a fundamental role in defining the physics capabilities of the experiments. A reduction of several orders of magnitude in the rate of collected events, with respect to the proton-proton bunch crossing rate generated by the LHC, is mandatory to cope with the limits imposed by the readout and storage systems. An accurate and efficient online selection mechanism is thus required to fulfill this task while keeping the acceptance for physics signals as high as possible. The CMS experiment operates a two-level trigger system. First, a Level-1 Trigger (L1T), implemented in custom-designed electronics, reduces the event rate to a level compatible with the CMS Data Acquisition (DAQ) capabilities. A High Level Trigger (HLT) follows, further reducing the rate of events that are finally stored for analysis. The HLT consists of a streamlined version of the CMS offline reconstruction software and operates on a computer farm. It runs algorithms optimized to trade off computational complexity, rate reduction and selection efficiency. With the computing power available in 2012, the maximum reconstruction time at the HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. An efficient selection of muons at the HLT, as well as an accurate measurement of their properties, such as transverse momentum and isolation, is fundamental for the CMS physics programme. The performance of the muon HLT for single and double muon triggers achieved in Run I will be presented. Results from new developments, aimed at improving the performance of the algorithms for the harsher pileup and luminosity scenarios expected for Run II, will also be discussed.
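The quoted numbers fix the scale of the HLT farm: at a 100 kHz L1T input rate and roughly 200 ms of processing per event, about 20,000 events must be reconstructed in parallel to keep up. A back-of-the-envelope check (illustrative arithmetic only):

```python
l1_rate_hz = 100_000      # nominal L1T accept rate
time_per_event_s = 0.200  # approximate HLT processing time per event (2012 conditions)

# By Little's law, the average number of events in flight equals rate x processing time,
# i.e. roughly the number of single-threaded HLT processes needed to sustain the input rate.
events_in_flight = l1_rate_hz * time_per_event_s
print(f"~{events_in_flight:.0f} events processed concurrently")  # ~20000
```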
Introducing the LHC in the classroom: an overview of education resources available
NASA Astrophysics Data System (ADS)
Wiener, Gerfried J.; Woithe, Julia; Brown, Alexander; Jende, Konrad
2016-05-01
In the context of the recent re-start of CERN’s Large Hadron Collider (LHC) and the challenge presented by unidentified falling objects (UFOs), we seek to facilitate the introduction of high energy physics in the classroom. Therefore, this paper provides an overview of the LHC and its operation, highlighting existing education resources, and linking principal components of the LHC to topics in physics curricula.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cembranos, Jose A. R.; Diaz-Cruz, J. Lorenzo; Prado, Lilian
Dark Matter direct detection experiments are able to exclude interesting parameter-space regions of particle models which predict a significant amount of thermal relics. We use recent data to constrain the branon model and to compute the region that is favored by CDMS measurements. Within this work, we also update present collider constraints with new studies coming from the LHC. Despite the present low luminosity, it is remarkable that for heavy branons, CMS and ATLAS measurements are already more constraining than previous analyses performed with Tevatron and LEP data.
Physics Goals and Experimental Challenges of the Proton-Proton High-Luminosity Operation of the LHC
NASA Astrophysics Data System (ADS)
Campana, P.; Klute, M.; Wells, P. S.
2016-10-01
The completion of Run 1 of the Large Hadron Collider (LHC) at CERN has seen the discovery of the Higgs boson and an unprecedented number of precise measurements of the Standard Model, and Run 2 has begun to provide the first data at higher energy. The high-luminosity upgrade of the LHC (HL-LHC) and the four experiments (ATLAS, CMS, ALICE, and LHCb) will exploit the full potential of the collider to discover and explore new physics beyond the Standard Model. We review the experimental challenges and the physics opportunities in proton-proton collisions at the HL-LHC.
Abort Gap Cleaning for LHC Run 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uythoven, Jan; Boccardi, Andrea; Bravin, Enrico
2014-07-01
To minimize the beam losses at the moment of an LHC beam dump the 3 μs long abort gap should contain as few particles as possible. Its population can be minimised by abort gap cleaning using the LHC transverse damper system. The LHC Run 1 experience is briefly recalled; changes foreseen for the LHC Run 2 are presented. They include improvements in the observation of the abort gap population and the mechanism to decide if cleaning is required, changes to the hardware of the transverse dampers to reduce the detrimental effect on the luminosity lifetime and proposed changes to the applied cleaning algorithms.
P-Type Silicon Strip Sensors for the new CMS Tracker at HL-LHC
NASA Astrophysics Data System (ADS)
Adam, W.; Bergauer, T.; Brondolin, E.; Dragicevic, M.; Friedl, M.; Frühwirth, R.; Hoch, M.; Hrubec, J.; König, A.; Steininger, H.; Waltenberger, W.; Alderweireldt, S.; Beaumont, W.; Janssen, X.; Lauwers, J.; Van Mechelen, P.; Van Remortel, N.; Van Spilbeeck, A.; Beghin, D.; Brun, H.; Clerbaux, B.; Delannoy, H.; De Lentdecker, G.; Fasanella, G.; Favart, L.; Goldouzian, R.; Grebenyuk, A.; Karapostoli, G.; Lenzi, Th.; Léonard, A.; Luetic, J.; Postiau, N.; Seva, T.; Vanlaer, P.; Vannerom, D.; Wang, Q.; Zhang, F.; Abu Zeid, S.; Blekman, F.; De Bruyn, I.; De Clercq, J.; D'Hondt, J.; Deroover, K.; Lowette, S.; Moortgat, S.; Moreels, L.; Python, Q.; Skovpen, K.; Van Mulders, P.; Van Parijs, I.; Bakhshiansohi, H.; Bondu, O.; Brochet, S.; Bruno, G.; Caudron, A.; Delaere, C.; Delcourt, M.; De Visscher, S.; Francois, B.; Giammanco, A.; Jafari, A.; Komm, M.; Krintiras, G.; Lemaitre, V.; Magitteri, A.; Mertens, A.; Michotte, D.; Musich, M.; Piotrzkowski, K.; Quertenmont, L.; Szilasi, N.; Vidal Marono, M.; Wertz, S.; Beliy, N.; Caebergs, T.; Daubie, E.; Hammad, G. H.; Härkönen, J.; Lampén, T.; Luukka, P.; Peltola, T.; Tuominen, E.; Tuovinen, E.; Eerola, P.; Tuuva, T.; Baulieu, G.; Boudoul, G.; Caponetto, L.; Combaret, C.; Contardo, D.; Dupasquier, T.; Gallbit, G.; Lumb, N.; Mirabito, L.; Perries, S.; Vander Donckt, M.; Viret, S.; Agram, J.-L.; Andrea, J.; Bloch, D.; Bonnin, C.; Brom, J.-M.; Chabert, E.; Chanon, N.; Charles, L.; Conte, E.; Fontaine, J.-Ch.; Gross, L.; Hosselet, J.; Jansova, M.; Tromson, D.; Autermann, C.; Feld, L.; Karpinski, W.; Kiesel, K. M.; Klein, K.; Lipinski, M.; Ostapchuk, A.; Pierschel, G.; Preuten, M.; Rauch, M.; Schael, S.; Schomakers, C.; Schulz, J.; Schwering, G.; Wlochal, M.; Zhukov, V.; Pistone, C.; Fluegge, G.; Kuensken, A.; Pooth, O.; Stahl, A.; Aldaya, M.; Asawatangtrakuldee, C.; Beernaert, K.; Bertsche, D.; Contreras-Campana, C.; Eckerlin, G.; Eckstein, D.; Eichhorn, T.; Gallo, E.; Garay Garcia, J.; Hansen, K.; Haranko, M.; Harb, A.; Hauk, J.; Keaveney, J.; Kalogeropoulos, A.; Kleinwort, C.; Lohmann, W.; Mankel, R.; Maser, H.; Mittag, G.; Muhl, C.; Mussgiller, A.; Pitzl, D.; Reichelt, O.; Savitskyi, M.; Schuetze, P.; Walsh, R.; Zuber, A.; Biskop, H.; Buhmann, P.; Centis-Vignali, M.; Garutti, E.; Haller, J.; Hoffmann, M.; Lapsien, T.; Matysek, M.; Perieanu, A.; Scharf, Ch.; Schleper, P.; Schmidt, A.; Schwandt, J.; Sonneveld, J.; Steinbrück, G.; Vormwald, B.; Wellhausen, J.; Abbas, M.; Amstutz, C.; Barvich, T.; Barth, Ch.; Boegelspacher, F.; De Boer, W.; Butz, E.; Caselle, M.; Colombo, F.; Dierlamm, A.; Freund, B.; Hartmann, F.; Heindl, S.; Husemann, U.; Kornmayer, A.; Kudella, S.; Muller, Th.; Simonis, H. J.; Steck, P.; Weber, M.; Weiler, Th.; Anagnostou, G.; Asenov, P.; Assiouras, P.; Daskalakis, G.; Kyriakis, A.; Loukas, D.; Paspalaki, L.; Siklér, F.; Veszprémi, V.; Bhardwaj, A.; Dalal, R.; Jain, G.; Ranjan, K.; Bakhshiansohl, H.; Behnamian, H.; Khakzad, M.; Naseri, M.; Cariola, P.; Creanza, D.; De Palma, M.; De Robertis, G.; Fiore, L.; Franco, M.; Loddo, F.; Silvestris, L.; Maggi, G.; Martiradonna, S.; My, S.; Selvaggi, G.; Albergo, S.; Cappello, G.; Chiorboli, M.; Costa, S.; Di Mattia, A.; Giordano, F.; Potenza, R.; Saizu, M. A.; Tricomi, A.; Tuve, C.; Barbagli, G.; Brianzi, M.; Ciaranfi, R.; Ciulli, V.; Civinini, C.; D'Alessandro, R.; Focardi, E.; Latino, G.; Lenzi, P.; Meschini, M.; Paoletti, S.; Russo, L.; Scarlini, E.; Sguazzoni, G.; Strom, D.; Viliani, L.; Ferro, F.; Lo Vetere, M.; Robutti, E.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Malvezzi, S.; Manzoni, R. 
A.; Menasce, D.; Moroni, L.; Pedrini, D.; Azzi, P.; Bacchetta, N.; Bisello, D.; Dall'Osso, M.; Pozzobon, N.; Tosi, M.; De Canio, F.; Gaioni, L.; Manghisoni, M.; Nodari, B.; Riceputi, E.; Re, V.; Traversi, G.; Comotti, D.; Ratti, L.; Alunni Solestizi, L.; Biasini, M.; Bilei, G. M.; Cecchi, C.; Checcucci, B.; Ciangottini, D.; Fanò, L.; Gentsos, C.; Ionica, M.; Leonardi, R.; Manoni, E.; Mantovani, G.; Marconi, S.; Mariani, V.; Menichelli, M.; Modak, A.; Morozzi, A.; Moscatelli, F.; Passeri, D.; Placidi, P.; Postolache, V.; Rossi, A.; Saha, A.; Santocchia, A.; Storchi, L.; Spiga, D.; Androsov, K.; Azzurri, P.; Arezzini, S.; Bagliesi, G.; Basti, A.; Boccali, T.; Borrello, L.; Bosi, F.; Castaldi, R.; Ciampa, A.; Ciocci, M. A.; Dell'Orso, R.; Donato, S.; Fedi, G.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Lomtadze, T.; Magazzu, G.; Martini, L.; Mazzoni, E.; Messineo, A.; Moggi, A.; Morsani, F.; Palla, F.; Palmonari, F.; Raffaelli, F.; Rizzi, A.; Savoy-Navarro, A.; Spagnolo, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Bellan, R.; Costa, M.; Covarelli, R.; Da Rocha Rolo, M.; Demaria, N.; Rivetti, A.; Dellacasa, G.; Mazza, G.; Migliore, E.; Monteil, E.; Pacher, L.; Ravera, F.; Solano, A.; Fernandez, M.; Gomez, G.; Jaramillo Echeverria, R.; Moya, D.; Gonzalez Sanchez, F. J.; Vila, I.; Virto, A. L.; Abbaneo, D.; Ahmed, I.; Albert, E.; Auzinger, G.; Berruti, G.; Bianchi, G.; Blanchot, G.; Bonnaud, J.; Caratelli, A.; Ceresa, D.; Christiansen, J.; Cichy, K.; Daguin, J.; D'Auria, A.; Detraz, S.; Deyrail, D.; Dondelewski, O.; Faccio, F.; Frank, N.; Gadek, T.; Gill, K.; Honma, A.; Hugo, G.; Jara Casas, L. M.; Kaplon, J.; Kornmayer, A.; Kottelat, L.; Kovacs, M.; Krammer, M.; Lenoir, P.; Mannelli, M.; Marchioro, A.; Marconi, S.; Mersi, S.; Martina, S.; Michelis, S.; Moll, M.; Onnela, A.; Orfanelli, S.; Pavis, S.; Peisert, A.; Pernot, J.-F.; Petagna, P.; Petrucciani, G.; Postema, H.; Rose, P.; Tropea, P.; Troska, J.; Tsirou, A.; Vasey, F.; Vichoudis, P.; Verlaat, B.; Zwalinski, L.; Bachmair, F.; Becker, R.; di Calafiori, D.; Casal, B.; Berger, P.; Djambazov, L.; Donega, M.; Grab, C.; Hits, D.; Hoss, J.; Kasieczka, G.; Lustermann, W.; Mangano, B.; Marionneau, M.; Martinez Ruiz del Arbol, P.; Masciovecchio, M.; Meinhard, M.; Perozzi, L.; Roeser, U.; Starodumov, A.; Tavolaro, V.; Wallny, R.; Zhu, D.; Amsler, C.; Bösiger, K.; Caminada, L.; Canelli, F.; Chiochia, V.; de Cosa, A.; Galloni, C.; Hreus, T.; Kilminster, B.; Lange, C.; Maier, R.; Ngadiuba, J.; Pinna, D.; Robmann, P.; Taroni, S.; Yang, Y.; Bertl, W.; Deiters, K.; Erdmann, W.; Horisberger, R.; Kaestli, H.-C.; Kotlinski, D.; Langenegger, U.; Meier, B.; Rohe, T.; Streuli, S.; Cussans, D.; Flacher, H.; Goldstein, J.; Grimes, M.; Jacob, J.; Seif El Nasr-Storey, S.; Cole, J.; Hoad, C.; Hobson, P.; Morton, A.; Reid, I. D.; Auzinger, G.; Bainbridge, R.; Dauncey, P.; Hall, G.; James, T.; Magnan, A.-M.; Pesaresi, M.; Raymond, D. M.; Uchida, K.; Garabedian, A.; Heintz, U.; Narain, M.; Nelson, J.; Sagir, S.; Speer, T.; Swanson, J.; Tersegno, D.; Watson-Daniels, J.; Chertok, M.; Conway, J.; Conway, R.; Flores, C.; Lander, R.; Pellett, D.; Ricci-Tam, F.; Squires, M.; Thomson, J.; Yohay, R.; Burt, K.; Ellison, J.; Hanson, G.; Olmedo, M.; Si, W.; Yates, B. R.; Gerosa, R.; Sharma, V.; Vartak, A.; Yagil, A.; Zevi Della Porta, G.; Dutta, V.; Gouskos, L.; Incandela, J.; Kyre, S.; Mullin, S.; Patterson, A.; Qu, H.; White, D.; Dominguez, A.; Bartek, R.; Cumalat, J. P.; Ford, W. 
T.; Jensen, F.; Johnson, A.; Krohn, M.; Leontsinis, S.; Mulholland, T.; Stenson, K.; Wagner, S. R.; Apresyan, A.; Bolla, G.; Burkett, K.; Butler, J. N.; Canepa, A.; Cheung, H. W. K.; Chramowicz, J.; Christian, D.; Cooper, W. E.; Deptuch, G.; Derylo, G.; Gingu, C.; Grünendahl, S.; Hasegawa, S.; Hoff, J.; Howell, J.; Hrycyk, M.; Jindariani, S.; Johnson, M.; Kahlid, F.; Lei, C. M.; Lipton, R.; Lopes De Sá, R.; Liu, T.; Los, S.; Matulik, M.; Merkel, P.; Nahn, S.; Prosser, A.; Rivera, R.; Schneider, B.; Sellberg, G.; Shenai, A.; Spiegel, L.; Tran, N.; Uplegger, L.; Voirin, E.; Berry, D. R.; Chen, X.; Ennesser, L.; Evdokimov, A.; Evdokimov, O.; Gerber, C. E.; Hofman, D. J.; Makauda, S.; Mills, C.; Sandoval Gonzalez, I. D.; Alimena, J.; Antonelli, L. J.; Francis, B.; Hart, A.; Hill, C. S.; Parashar, N.; Stupak, J.; Bortoletto, D.; Bubna, M.; Hinton, N.; Jones, M.; Miller, D. H.; Shi, X.; Tan, P.; Baringer, P.; Bean, A.; Khalil, S.; Kropivnitskaya, A.; Majumder, D.; Wilson, G.; Ivanov, A.; Mendis, R.; Mitchell, T.; Skhirtladze, N.; Taylor, R.; Anderson, I.; Fehling, D.; Gritsan, A.; Maksimovic, P.; Martin, C.; Nash, K.; Osherson, M.; Swartz, M.; Xiao, M.; Bloom, K.; Claes, D. R.; Fangmeier, C.; Gonzalez Suarez, R.; Monroy, J.; Siado, J.; Hahn, K.; Sevova, S.; Sung, K.; Trovato, M.; Bartz, E.; Gershtein, Y.; Halkiadakis, E.; Kyriacou, S.; Lath, A.; Nash, K.; Osherson, M.; Schnetzer, S.; Stone, R.; Walker, M.; Malik, S.; Norberg, S.; Ramirez Vargas, J. E.; Alyari, M.; Dolen, J.; Godshalk, A.; Harrington, C.; Iashvili, I.; Kharchilava, A.; Nguyen, D.; Parker, A.; Rappoccio, S.; Roozbahani, B.; Alexander, J.; Chaves, J.; Chu, J.; Dittmer, S.; McDermott, K.; Mirman, N.; Rinkevicius, A.; Ryd, A.; Salvati, E.; Skinnari, L.; Soffi, L.; Tao, Z.; Thom, J.; Tucker, J.; Zientek, M.; Akgün, B.; Ecklund, K. M.; Kilpatrick, M.; Nussbaum, T.; Zabel, J.; Betchart, B.; Covarelli, R.; Demina, R.; Hindrichs, O.; Petrillo, G.; Eusebi, R.; Osipenkov, I.; Perloff, A.; Ulmer, K. A.
2017-06-01
The upgrade of the LHC to the High-Luminosity LHC (HL-LHC) is expected to increase the LHC design luminosity by an order of magnitude. This will require silicon tracking detectors with a significantly higher radiation hardness. The CMS Tracker Collaboration has conducted an irradiation and measurement campaign to identify suitable silicon sensor materials and strip designs for the future outer tracker at the CMS experiment. Based on these results, the collaboration has chosen to use n-in-p type silicon sensors and focus further investigations on the optimization of that sensor type. This paper describes the main measurement results and conclusions that motivated this decision.
Introduction to the HL-LHC Project
NASA Astrophysics Data System (ADS)
Rossi, L.; Brüning, O.
The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. It has been exploring the new energy frontier since 2010, gathering a global user community of 7,000 scientists. To extend its discovery potential, the LHC will need a major upgrade in the 2020s to increase its luminosity (rate of collisions) by a factor of five beyond its design value and the integrated luminosity by a factor of ten. As a highly complex and optimized machine, such an upgrade of the LHC must be carefully studied and requires about ten years to implement. The novel machine configuration, called High Luminosity LHC (HL-LHC), will rely on a number of key innovative technologies, representing exceptional technological challenges, such as cutting-edge 11-12 tesla superconducting magnets, very compact superconducting cavities for beam rotation with ultra-precise phase control, new technology for beam collimation and 300-meter-long high-power superconducting links with negligible energy dissipation. HL-LHC federates the efforts and R&D of a large community in Europe, in the US and in Japan, which will facilitate the implementation of the construction phase as a global project.
Analysis of the SPS Long Term Orbit Drifts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Velotti, Francesco; Bracco, Chiara; Cornelis, Karel
2016-06-01
The Super Proton Synchrotron (SPS) is the last accelerator in the Large Hadron Collider (LHC) injector chain, and has to deliver the two high-intensity 450 GeV proton beams to the LHC. The transport from SPS to LHC is done through the two Transfer Lines (TL), TI2 and TI8, for Beam 1 (B1) and Beam 2 (B2) respectively. During the first LHC operation period, Run 1, a long term drift of the SPS orbit was observed, causing changes in the LHC injection due to the resulting changes in the TL trajectories. This translated into longer LHC turnaround because of the necessity to periodically correct the TL trajectories in order to preserve the beam quality at injection into the LHC. Different sources for the SPS orbit drifts have been investigated: each of them can account only partially for the total orbit drift observed. In this paper, the possible sources of such drift are described, together with the simulated and measured effect they cause. Possible solutions and countermeasures are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rossi, Adriana; et al.
Long-range beam-beam (LRBB) interactions can be a source of emittance growth and beam losses in the LHC during physics operation and will become even more relevant with the smaller β* and higher bunch intensities foreseen for the High Luminosity LHC upgrade (HL-LHC), in particular if operated without crab cavities. Both beam losses and emittance growth could be mitigated by compensating the non-linear LRBB kick with a correctly placed current-carrying wire. Such a compensation scheme is currently being studied in the LHC through a demonstration test using current-bearing wires embedded into collimator jaws, installed either side of the high luminosity interaction regions. For HL-LHC two options are considered, a current-bearing wire as for the demonstrator, or electron lenses, as the ideal distance between the particle beam and the compensating current may be too small to allow the use of solid materials. This paper reports on the ongoing activities for both options, covering the progress of the wire-in-jaw collimators, the foreseen LRBB experiments at the LHC, and first considerations for the design of the electron lenses to ultimately replace material wires for HL-LHC.
Search for New Phenomena Using W/Z + (b)-Jets Measurements Performed with the ATLAS Detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beauchemin, Pierre-Hugues
2015-06-30
The Project proposed to use data of the ATLAS experiment, obtained during the 2011 and 2012 data-taking campaigns, to pursue studies of the strong interaction (QCD) and to examine promising signatures for new physics. The Project also contains a service component dedicated to a detector development initiative. The objective of the strong interaction studies is to determine how various predictions from the main theory (QCD) compare to the data. Results of a set of measurements developed by the Tufts team indicate that the dominant factor of discrepancy between data and QCD predictions comes from the mis-modeling of the low-energy gluon radiation as described by algorithms called parton showers. The discrepancies introduced by parton showers on LHC predictions could even be larger than the effect due to completely new phenomena (dark matter, supersymmetry, etc.) and could thus block further discoveries at the LHC. Some of the results obtained in the course of this Project also specify how QCD predictions must be improved in order to open the possibility for the discovery of something completely new at the LHC during Run-II. This has been integrated in the Run-II ATLAS physics program. Another objective of the Tufts studies of the strong interaction was to determine how the hypothesis of an intrinsic heavy-quark component of the proton (strange, charm or bottom quarks) could be tested at the LHC. This hypothesis was proposed by theorists 30 years ago and is still controversial. The Tufts team demonstrated that intrinsic charm can be observed, or severely constrained, at the LHC, and determined how the measurement should be performed in order to maximize its sensitivity to such an intrinsic heavy-quark component of the proton. Tufts also embarked on performing this measurement, which is in progress, but final results are not yet available. They should shed light on the fundamental structure of the proton. Determining the nature of dark matter particles, composing about 25% of all the matter in the universe, is one of the most exciting research goals at the LHC. Within this Project, the Tufts team proposed a way to improve over the standard approach used to look for dark matter at the LHC in events involving jets and a large amount of unbalanced energy in the detector (jets+ETmiss). The Tufts team has developed a measurement to test these improvements on the data available (the ATLAS 2012 dataset), in order to be ready to apply them to the new Run-II data that will be available at the end of 2015. Preliminary results on the proposed measurement indicate that a very high precision can be obtained on results free of detector effects. That will allow for better constraints on dark matter theories and will reduce the need for huge computing resources in order to compare dark matter theories to data. Finally, the Tufts team played a leading role in the development and the organization of the ETmiss trigger, the detector component needed to collect the data used in dark matter searches and in many other analyses. The team compared the performance of the various algorithms capable of reconstructing the value of the ETmiss in each LHC collision event, and developed a strategy to commission these algorithms online. Tufts also contributed to the development of the ETmiss trigger monitoring software. Finally, the PI of this Project acted as the co-coordinator of the group of researchers at CERN taking care of the development and the operation of this detector component.
The ETmiss trigger is now taking data, opening the possibility for the discovery of otherwise undetectable particles at the LHC.
The HL-LHC Accelerator Physics Challenges
NASA Astrophysics Data System (ADS)
Fartoukh, S.; Zimmermann, F.
The conceptual baseline of the HL-LHC project is reviewed, putting into perspective the main beam physics challenges of this new collider in comparison with the existing LHC, and the series of solutions and possible mitigation measures presently envisaged.
Impact of a CP-violating Higgs sector: from LHC to baryogenesis.
Shu, Jing; Zhang, Yue
2013-08-30
We observe a generic connection between LHC Higgs data and electroweak baryogenesis: the particle that contributes to the CP-odd hgg or hγγ vertex would provide the CP-violating source during a first-order phase transition. It is illustrated in the two Higgs doublet model that a common complex phase controls the lightest Higgs properties at the LHC, electric dipole moments, and the CP-violating source for electroweak baryogenesis. We perform a general parametrization of Higgs effective couplings and a global fit to the LHC Higgs data. Current LHC measurements prefer a nonzero phase for tanβ≲1 and electric dipole moment constraints still allow an order-one phase for tanβ∼1, which gives sufficient room to generate the correct cosmic baryon asymmetry. We also give some prospects in the direct measurements of CP violation in the Higgs sector at the LHC.
Radiation Hard Silicon Particle Detectors for Phase-II LHC Trackers
NASA Astrophysics Data System (ADS)
Oblakowska-Mucha, A.
2017-02-01
The major LHC upgrade is planned after ten years of accelerator operation. It is foreseen to significantly increase the luminosity of the current machine, up to 10³⁵ cm⁻²s⁻¹, and to operate it as the upcoming High Luminosity LHC (HL-LHC). A major detector upgrade, called the Phase-II Upgrade, is also planned, a main reason being the aging processes caused by severe particle radiation. Within the RD50 Collaboration, a large Research and Development program has been underway to develop silicon sensors with sufficient radiation tolerance for HL-LHC trackers. In this summary, several results obtained during the testing of devices after irradiation to HL-LHC levels are presented. Among the studied structures, one can find advanced sensor types like 3D silicon detectors, High-Voltage CMOS technologies, or sensors with intrinsic gain (LGAD). Based on these results, the RD50 Collaboration gives recommendations for the silicon detectors to be used in the detector upgrade.
Time-Critical Database Conditions Data-Handling for the CMS Experiment
NASA Astrophysics Data System (ADS)
De Gruttola, Michele; Di Guida, Salvatore; Innocente, Vincenzo; Pierro, Antonio
2011-08-01
Automatic, synchronous and reliable population of the conditions database is critical for the correct operation of the online selection as well as of the offline reconstruction and data analysis. We describe here the system put in place in the CMS experiment to automate the central population of the database and make conditions data promptly available both online for the high-level trigger and offline for reconstruction. The data are "dropped" by the users in a dedicated service which synchronizes them and takes care of writing them into the online database. They are then automatically streamed to the offline database, and hence become immediately accessible offline worldwide. This mechanism was used intensively during the 2008 and 2009 operation with cosmic ray challenges and the first LHC collision data, and many improvements have been made since. The experience of these first years of operation will be discussed in detail.
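The drop-box workflow described above can be sketched as follows. The class and method names are hypothetical (this is not the actual CMS dropbox/PopCon API), but the sketch captures the flow of a payload from user upload through the online database to the offline replica.

```python
import queue
import threading

class ConditionsDropBox:
    """Toy model of a conditions drop-box: users upload payloads, a background
    worker writes them to the online DB and then streams them to the offline DB."""

    def __init__(self):
        self._pending = queue.Queue()
        self.online_db = {}   # tag -> payload, stand-in for the online database
        self.offline_db = {}  # stand-in for the offline (worldwide) replica
        threading.Thread(target=self._worker, daemon=True).start()

    def drop(self, tag, payload):
        """Users 'drop' a new conditions payload; it is queued for synchronization."""
        self._pending.put((tag, payload))

    def _worker(self):
        while True:
            tag, payload = self._pending.get()
            self.online_db[tag] = payload    # written to the online DB first
            self.offline_db[tag] = payload   # then streamed to the offline DB
            self._pending.task_done()

box = ConditionsDropBox()
box.drop("EcalPedestals_v1", {"mean": 200.3, "rms": 1.1})  # hypothetical payload
box._pending.join()       # wait until synchronization completes
print(box.offline_db)     # the payload is now visible "offline"
```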
P-Type Silicon Strip Sensors for the new CMS Tracker at HL-LHC
Adam, W.; Bergauer, T.; Brondolin, E.; ...
2017-06-27
The upgrade of the LHC to the High-Luminosity LHC (HL-LHC) is expected to increase the LHC design luminosity by an order of magnitude. This will require silicon tracking detectors with a significantly higher radiation hardness. The CMS Tracker Collaboration has conducted an irradiation and measurement campaign to identify suitable silicon sensor materials and strip designs for the future outer tracker at the CMS experiment. Based on these results, the collaboration has chosen to use n-in-p type silicon sensors and focus further investigations on the optimization of that sensor type. Furthermore, this paper describes the main measurement results and conclusions that motivated this decision.
P-Type Silicon Strip Sensors for the new CMS Tracker at HL-LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adam, W.; Bergauer, T.; Brondolin, E.
The upgrade of the LHC to the High-Luminosity LHC (HL-LHC) is expected to increase the LHC design luminosity by an order of magnitude. This will require silicon tracking detectors with a significantly higher radiation hardness. The CMS Tracker Collaboration has conducted an irradiation and measurement campaign to identify suitable silicon sensor materials and strip designs for the future outer tracker at the CMS experiment. Based on these results, the collaboration has chosen to use n-in-p type silicon sensors and focus further investigations on the optimization of that sensor type. Furthermore, this paper describes the main measurement results and conclusions that motivated this decision.
Phylogenetic analysis of the light-harvesting system in Chromera velia.
Pan, Hao; Slapeta, Jan; Carter, Dee; Chen, Min
2012-03-01
Chromera velia is a newly discovered photosynthetic eukaryotic alga that has functional chloroplasts closely related to the apicoplast of apicomplexan parasites. Recently, the chloroplast in C. velia was shown to be derived from the red algal lineage. Light-harvesting protein complexes (LHC), which are a group of proteins involved in photon capture and energy transfer in photosynthesis, are important for photosynthesis efficiency, photo-adaptation/accumulation and photo-protection. Although these proteins are encoded by genes located in the nucleus, LHC peptides migrate and function in the chloroplast, hence the LHC may have a different evolutionary history compared to chloroplast evolution. Here, we compare the phylogenetic relationship of the C. velia LHCs to LHCs from other photosynthetic organisms. Twenty-three LHC homologues retrieved from C. velia EST sequences were aligned according to their conserved regions. The C. velia LHCs are positioned in four separate groups on trees constructed by neighbour-joining, maximum likelihood and Bayesian methods. A major group of seventeen LHCs from C. velia formed a separate cluster that was closest to dinoflagellate LHC, and to LHC and fucoxanthin chlorophyll-binding proteins from diatoms. One C. velia LHC sequence grouped with LI818/LI818-like proteins, which were recently identified as environmental stress-induced protein complexes. Only three LHC homologues from C. velia grouped with the LHCs from red algae.
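To make the tree-building step concrete, the snippet below performs the first pair-selection step of the neighbour-joining algorithm on a small, made-up distance matrix; the sequence labels and distances are illustrative and are not the C. velia data.

```python
import numpy as np

# Illustrative pairwise distance matrix for four aligned LHC-like sequences (not real data).
labels = ["seq_A", "seq_B", "seq_C", "seq_D"]
D = np.array([[0.0, 0.2, 0.5, 0.6],
              [0.2, 0.0, 0.5, 0.6],
              [0.5, 0.5, 0.0, 0.3],
              [0.6, 0.6, 0.3, 0.0]])

n = D.shape[0]
row_sums = D.sum(axis=1)

# Neighbour-joining criterion: Q(i, j) = (n - 2) * D(i, j) - r_i - r_j
Q = (n - 2) * D - row_sums[:, None] - row_sums[None, :]
np.fill_diagonal(Q, np.inf)   # never join a sequence with itself

i, j = np.unravel_index(np.argmin(Q), Q.shape)
print(f"first pair to join: {labels[i]} and {labels[j]} (Q = {Q[i, j]:.2f})")
```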
Kokouva, Maria; Bitsolas, Nikolaos; Hadjigeorgiou, Georgios M; Rachiotis, George; Papadoulis, Nikolaos; Hadjichristodoulou, Christos
2011-01-04
The causality of lymphohaematopoietic cancers (LHC) is multifactorial and studies investigating the association between chemical exposure and LHC have produced variable results. The aim of this study was to investigate the relationships between exposure to pesticides and LHC in an agricultural region of Greece. A structured questionnaire was employed in a hospital-based case control study to gather information on demographics, occupation, exposure to pesticides, agricultural practices, family and medical history and smoking. To control for confounders, backward conditional and multinomial logistic regression analyses were used. To assess the dose-response relationship between exposure and disease, the chi-square test for trend was used. Three hundred and fifty-four (354) histologically confirmed LHC cases diagnosed from 2004 to 2006 and 455 sex- and age-matched controls were included in the study. Pesticide exposure was associated with total LHC cases (OR 1.46, 95% CI 1.05-2.04), myelodysplastic syndrome (MDS) (OR 1.87, 95% CI 1.00-3.51) and leukaemia (OR 2.14, 95% CI 1.09-4.20). A dose-response pattern was observed for total LHC cases (P = 0.004), MDS (P = 0.024) and leukaemia (P = 0.002). Pesticide exposure was independently associated with total LHC cases (OR 1.41, 95% CI 1.00 - 2.00) and leukaemia (OR 2.05, 95% CI 1.02-4.12) after controlling for age, smoking and family history (cancers, LHC and immunological disorders). Smoking during application of pesticides was strongly associated with total LHC cases (OR 3.29, 95% CI 1.81-5.98), MDS (OR 3.67, 95% CI 1.18-12.11), leukaemia (OR 10.15, 95% CI 2.15-65.69) and lymphoma (OR 2.72, 95% CI 1.02-8.00). This association was even stronger for total LHC cases (OR 18.18, 95% CI 2.38-381.17) when eating simultaneously with pesticide application. Lymphohaematopoietic cancers were associated with pesticide exposure after controlling for confounders. Smoking and eating during pesticide application were identified as modifying factors increasing the risk for LHC. The poor pesticide work practices identified during this study underline the need for educational campaigns for farmers.
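For readers less familiar with the reported measures, an odds ratio and its 95% confidence interval can be reproduced from a 2x2 exposure table as sketched below; the counts are invented for illustration and are not the study data.

```python
import math

# Hypothetical 2x2 table (counts are illustrative only):
#              exposed  unexposed
# cases          a=120     b=234
# controls       c=130     d=325
a, b, c, d = 120, 234, 130, 325

odds_ratio = (a * d) / (b * c)
# 95% CI on the log scale (Woolf method): log(OR) +/- 1.96 * sqrt(1/a + 1/b + 1/c + 1/d)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```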
Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment
NASA Astrophysics Data System (ADS)
Ritsch, E.; Atlas Collaboration
2014-06-01
The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production currently accounts for the largest share of the computing resources in use by ATLAS. In this document we describe the plans to overcome the computing resource limitations for large scale Monte Carlo production in the ATLAS Experiment for Run 2 and beyond. A number of fast detector simulation, digitization and reconstruction techniques are discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.
A browser-based event display for the CMS experiment at the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hategan, M.; McCauley, T.; Nguyen, P.
2012-01-01
The line between native and web applications is becoming increasingly blurred as modern web browsers are becoming powerful platforms on which applications can be run. Such applications are trivial to install and are readily extensible and easy to use. In an educational setting, web applications provide a way to deploy tools in a highly restrictive computing environment. The I2U2 collaboration has developed a browser-based event display for viewing events in data collected and released to the public by the CMS experiment at the LHC. The application itself reads a JSON event format and uses the JavaScript 3D rendering engine pre3d. The only requirement is a modern browser supporting the HTML5 canvas. The event display has been used by thousands of high school students in the context of programs organized by I2U2, QuarkNet, and IPPOG. This browser-based approach to event display can have broader usage and impact for experts and the public alike.
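A minimal reader for a JSON event format of the kind mentioned above might look like the following; the field names ("tracks", "pt", "hits") and values are hypothetical and do not reproduce the actual I2U2/CMS event schema.

```python
import json

# Hypothetical JSON event in the spirit of the public CMS event files (schema invented here).
event_json = """
{
  "run": 1, "event": 42,
  "tracks": [
    {"pt": 24.1, "hits": [[0.1, 0.0, 0.2], [1.2, 0.4, 2.0], [2.5, 0.9, 4.1]]},
    {"pt": 3.7,  "hits": [[0.0, 0.1, -0.3], [0.8, 1.1, -1.9]]}
  ]
}
"""

event = json.loads(event_json)
print(f"run {event['run']}, event {event['event']}: {len(event['tracks'])} tracks")
for i, trk in enumerate(event["tracks"]):
    # In the browser the hit positions would be handed to the 3D renderer (HTML5 canvas);
    # here we simply list them as polyline vertices.
    print(f"  track {i}: pT = {trk['pt']} GeV, {len(trk['hits'])} hits")
```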
NASA Astrophysics Data System (ADS)
Nefedov, M. A.; Saleev, V. A.
2015-11-01
The hadroproduction of prompt isolated photon pairs at high energies is studied in the framework of the parton Reggeization approach. The real part of the NLO corrections is computed (the NLO⋆ approximation), and the procedure for the subtraction of double counting between real parton emissions in the hard-scattering matrix element and the unintegrated parton distribution function is constructed for amplitudes with Reggeized quarks in the initial state. The matrix element of the important next-to-next-to-leading-order subprocess RR → γγ with full dependence on the transverse momenta of the initial-state Reggeized gluons is obtained. We compare the obtained numerical results with diphoton spectra measured at the Tevatron and the LHC and find good agreement of our predictions with experimental data at high values of the diphoton transverse momentum pT, especially for pT larger than the diphoton invariant mass M. In this multi-Regge kinematics region, the NLO correction is strongly suppressed, demonstrating the self-consistency of the parton Reggeization approach.
Named Data Networking in Climate Research and HEP Applications
NASA Astrophysics Data System (ADS)
Shannigrahi, Susmit; Papadopoulos, Christos; Yeh, Edmund; Newman, Harvey; Jerzy Barczyk, Artur; Liu, Ran; Sim, Alex; Mughal, Azher; Monga, Inder; Vlimant, Jean-Roch; Wu, John
2015-12-01
The Computing Models of the LHC experiments continue to evolve from the simple hierarchical MONARC[2] model towards more agile models where data is exchanged among many Tier2 and Tier3 sites, relying on both large scale file transfers with strategic data placement, and an increased use of remote access to object collections with caching through CMS's AAA, ATLAS' FAX and ALICE's AliEn projects, for example. The challenges presented by expanding needs for CPU, storage and network capacity as well as rapid handling of large datasets of file and object collections have pointed the way towards future more agile pervasive models that make best use of highly distributed heterogeneous resources. In this paper, we explore the use of Named Data Networking (NDN), a new Internet architecture focusing on content rather than the location of the data collections. As NDN has shown considerable promise in another data intensive field, Climate Science, we discuss the similarities and differences between the Climate and HEP use cases, along with specific issues HEP faces and will face during LHC Run2 and beyond, which NDN could address.
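To illustrate the shift from host-based to name-based retrieval, here is a toy content store keyed by hierarchical data names; the naming scheme and the API are purely illustrative and correspond neither to the NDN reference libraries nor to any experiment's actual data catalogue.

```python
class ContentStore:
    """Toy NDN-style cache: data is requested by name, not by host location."""

    def __init__(self):
        self._store = {}

    def publish(self, name, data):
        """Make a named data object available from this node."""
        self._store[name] = data

    def express_interest(self, name):
        """Return the named data if cached; a real NDN node would otherwise
        forward the Interest towards a producer and cache the returned Data."""
        return self._store.get(name)

cs = ContentStore()
cs.publish("/hep/cms/run2/dataset-X/block-0007", b"...object collection bytes...")

for name in ("/hep/cms/run2/dataset-X/block-0007", "/hep/cms/run2/dataset-X/block-0008"):
    data = cs.express_interest(name)
    print(name, "->", "HIT" if data is not None else "MISS (forward Interest upstream)")
```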
Commissioning the cryogenic system of the first LHC sector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Millet, F.; Claudet, S.; Ferlin, G.
2007-12-01
The LHC machine, composed of eight sectors with superconducting magnets and accelerating cavities, requires a complex cryogenic system providing high cooling capacities (18 kW equivalent at 4.5 K and 2.4 kW at 1.8 K per sector, produced in large cold boxes and distributed via 3.3-km cryogenic transfer lines). After individual reception tests of the cryogenic subsystems (cryogen storages, refrigerators, cryogenic transfer lines and distribution boxes) performed since 2000, the commissioning of the cryogenic system of the first LHC sector has been under way since November 2006. After a brief introduction to the LHC cryogenic system and its specificities, the commissioning is reported, detailing the preparation phase (pressure and leak tests, circuit conditioning and flushing), the cool-down sequences including the handling of cryogenic fluids, the magnet powering phase and finally the warm-up. Preliminary conclusions on the commissioning of the first LHC sector are drawn with a review of the critical points already solved or still pending. The last part of the paper reports on the first operational experience of the LHC cryogenic system in the perspective of the commissioning of the remaining LHC sectors and the beam injection test.
Tracking at High Level Trigger in CMS
NASA Astrophysics Data System (ADS)
Tosi, M.
2016-04-01
The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of the experiments. A reduction of several orders of magnitude of the event rate is needed to reach values compatible with the detector readout, offline storage and analysis capabilities. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking, the maximum reconstruction time at the HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Track reconstruction algorithms are widely used in the HLT, for the reconstruction of physics objects as well as in the identification of b-jets and lepton isolation. Reconstructed tracks are also used to distinguish the primary vertex, which identifies the hard interaction process, from the pileup ones. This task is particularly important in the LHC environment given the large number of interactions per bunch crossing: on average 25 in 2012, and expected to be around 40 in Run II. We will present the performance of the HLT tracking algorithms, discussing their impact on the CMS physics program, as well as new developments towards the next data taking in 2015.
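One common way to separate the hard-interaction vertex from pileup vertices is to rank the reconstructed vertices by the scalar sum of the squared transverse momenta of their associated tracks. The sketch below is only an illustrative stand-in for the actual CMS vertex-sorting logic, with invented track momenta.

```python
# Each vertex is represented by the transverse momenta (GeV) of its associated tracks.
# The values are invented for illustration.
vertices = {
    "vtx_0": [2.1, 1.3, 0.9, 45.0, 33.2],  # contains the high-pT tracks of the hard scatter
    "vtx_1": [1.1, 0.7, 2.3, 0.8],         # typical pileup vertex
    "vtx_2": [0.9, 1.5, 1.2, 3.0, 0.6],    # typical pileup vertex
}

def sum_pt2(track_pts):
    """Scalar sum of squared track transverse momenta for one vertex."""
    return sum(pt * pt for pt in track_pts)

primary = max(vertices, key=lambda v: sum_pt2(vertices[v]))
print("selected primary vertex:", primary)  # vtx_0 wins by a large margin
```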
Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Processors and GPUs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava
2017-01-01
For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as Graphical Processing Units (GPU), ARM CPUs, and Intel MICs. Broad-based efforts from manufacturers and developers have been devoted to making these processors user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as specialized vector or SIMD units, requires special care in algorithm design and code optimization. One of the most computationally challenging problems in high-energy particle experiments is finding and fitting the charged-particle tracks during event reconstruction. This is expected to become by far the dominant problem at the High-Luminosity Large Hadron Collider (HL-LHC), for example. Today the most common track finding methods are those based on the Kalman filter. Experience with Kalman techniques on real tracking detector systems has shown that they are robust and provide high physics performance. This is why they are currently in use at the LHC, both in the trigger and offline. Previously we reported on the significant parallel speedups that resulted from our investigations to adapt Kalman filters to track fitting and track building on Intel Xeon and Xeon Phi. Here, we discuss our progress toward understanding these processors and the new developments to port the Kalman filter to NVIDIA GPUs.
Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Processors and GPUs
NASA Astrophysics Data System (ADS)
Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; Masciovecchio, Mario; McDermott, Kevin; Riley, Daniel; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi
2017-08-01
For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as Graphical Processing Units (GPU), ARM CPUs, and Intel MICs. Broad-based efforts from manufacturers and developers have been devoted to making these processors user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as specialized vector or SIMD units, requires special care in algorithm design and code optimization. One of the most computationally challenging problems in high-energy particle experiments is finding and fitting the charged-particle tracks during event reconstruction. This is expected to become by far the dominant problem at the High-Luminosity Large Hadron Collider (HL-LHC), for example. Today the most common track finding methods are those based on the Kalman filter. Experience with Kalman techniques on real tracking detector systems has shown that they are robust and provide high physics performance. This is why they are currently in use at the LHC, both in the trigger and offline. Previously we reported on the significant parallel speedups that resulted from our investigations to adapt Kalman filters to track fitting and track building on Intel Xeon and Xeon Phi. Here, we discuss our progress toward understanding these processors and the new developments to port the Kalman filter to NVIDIA GPUs.
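As a minimal illustration of the filter step that Kalman-based track building repeats at every detector layer, the sketch below propagates a 1D straight-line track state (position and slope) to the next layer and updates it with one measured hit. The geometry, noise values and measurement are invented for illustration and do not reflect the actual CMS tracking software.

```python
import numpy as np

# Track state: (local position, slope) and its covariance after the previous layer.
x = np.array([0.0, 0.1])
P = np.diag([0.5**2, 0.05**2])

dz = 10.0                               # distance to the next detector layer
F = np.array([[1.0, dz], [0.0, 1.0]])   # straight-line propagation matrix
Q = np.diag([1e-4, 1e-5])               # process noise (e.g. multiple scattering)

H = np.array([[1.0, 0.0]])              # the layer measures the position only
R = np.array([[0.02**2]])               # hit resolution
measured_hit = np.array([1.07])

# Prediction: propagate the state and its covariance to the new layer.
x_pred = F @ x
P_pred = F @ P @ F.T + Q

# Update: combine the prediction with the measured hit via the Kalman gain.
S = H @ P_pred @ H.T + R
K = P_pred @ H.T @ np.linalg.inv(S)
x_new = x_pred + K @ (measured_hit - H @ x_pred)
P_new = (np.eye(2) - K @ H) @ P_pred

print("updated state (position, slope):", x_new)
```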
Zittrain, Jonathan
2008-10-28
Ubiquitous computing means network connectivity everywhere, linking devices and systems as small as a drawing pin and as large as a worldwide product distribution chain. What could happen when people are so readily networked? This paper explores issues arising from two possible emerging models of ubiquitous human computing: fungible networked brainpower and collective personal vital sign monitoring.
GSDC: A Unique Data Center in Korea for HEP research
NASA Astrophysics Data System (ADS)
Ahn, Sang-Un
2017-04-01
Global Science experimental Data hub Center (GSDC) at the Korea Institute of Science and Technology Information (KISTI) is a unique data center in South Korea, established to promote fundamental research fields by supporting them with expertise in Information and Communication Technology (ICT) and with infrastructure for High Performance Computing (HPC), High Throughput Computing (HTC) and Networking. GSDC has supported various research fields in South Korea dealing with large-scale data, e.g. the RENO experiment for neutrino research, the LIGO experiment for gravitational wave detection, a genome sequencing project for bio-medical research, and HEP experiments such as CDF at FNAL, Belle at KEK, and STAR at BNL. In particular, GSDC has run a Tier-1 center for the ALICE experiment at the LHC at CERN since 2013. In this talk, we present an overview of the computing infrastructure that GSDC operates for these research fields and discuss the data center infrastructure management system deployed at GSDC.
WORLDWIDE COLLECTION AND EVALUATION OF EARTHQUAKE DATA
period, the hypocenter and magnitude programs were tested and then used to process January 1964 data at the computer facilities of the Environmental Science Services Administration (ESSA), Suitland, Maryland, using the CDC 6600 computer. Results of this processing are shown.
Overview of LHC physics results at ICHEP
Mangano, Michelangelo
2018-06-20
This month's LHC physics day will review the physics results presented by the LHC experiments at the 2010 ICHEP in Paris. The experimental presentations will be preceded by the bi-weekly LHC accelerator status report. The meeting will be broadcast via EVO (detailed info will appear at the time of the meeting in the "Video Services" item on the left menu bar). For those attending, information on accommodation, access to CERN and laptop registration is available from http://cern.ch/lpcc/visits
Design and prototyping of HL-LHC double quarter wave crab cavities for SPS test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verdu-Andres, S.; Skaritka, J.; Wu, Q.
2015-05-03
The LHC high luminosity project envisages the use of the crabbing technique for increasing and levelling the LHC luminosity. Double Quarter Wave (DQW) resonators are compact cavities especially designed to meet the technical and performance requirements for LHC beam crabbing. Two DQW crab cavities are under fabrication and will be tested with beam in the Super Proton Synchrotron (SPS) at CERN by 2017. This paper describes the design and prototyping of the DQW crab cavities for the SPS test.
Overview of LHC physics results at ICHEP
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2011-02-25
This month's LHC physics day will review the physics results presented by the LHC experiments at the 2010 ICHEP in Paris. The experimental presentations will be preceded by the bi-weekly LHC accelerator status report. The meeting will be broadcast via EVO (detailed info will appear at the time of the meeting in the "Video Services" item on the left menu bar). For those attending, information on accommodation, access to CERN and laptop registration is available from http://cern.ch/lpcc/visits
The Large Hadron Collider (LHC): The Energy Frontier
NASA Astrophysics Data System (ADS)
Brianti, Giorgio; Jenni, Peter
The following sections are included: * Introduction * Superconducting Magnets: Powerful, Precise, Plentiful * LHC Cryogenics: Quantum Fluids at Work * Current Leads: High Temperature Superconductors to the Fore * A Pumping Vacuum Chamber: Ultimate Simplicity * Vertex Detectors at LHC: In Search of Beauty * Large Silicon Trackers: Fast, Precise, Efficient * Two Approaches to High Resolution Electromagnetic Calorimetry * Multigap Resistive Plate Chamber: Chronometry of Particles * The LHCb RICH: The Lord of the Cherenkov Rings * Signal Processing: Taming the LHC Data Avalanche * Giant Magnets for Giant Detectors
Calibration techniques and strategies for the present and future LHC electromagnetic calorimeters
NASA Astrophysics Data System (ADS)
Aleksa, M.
2018-02-01
This document describes the different calibration strategies and techniques applied by the two general purpose experiments at the LHC, ATLAS and CMS, and discusses them, underlining their respective strengths and weaknesses from the author's point of view. The resulting performance of both calorimeters is described and compared on the basis of selected physics results. Future upgrade plans for the High Luminosity LHC (HL-LHC) are briefly introduced and planned calibration strategies for the upgraded detectors are shown.
Supersymmetry Breaking, Gauge Mediation, and the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shih, David
2015-04-14
Gauge mediated SUSY breaking (GMSB) is a promising class of supersymmetric models that automatically satisfies the precision constraints. Prior work of Meade, Seiberg and Shih in 2008 established the full, model-independent parameter space of GMSB, which they called "General Gauge Mediation" (GGM). During the first half of the 2010-2015 period, Shih and his collaborators thoroughly explored the parameter space of GGM and established many well-motivated benchmark models for use by the experimentalists at the LHC. Through their work, the current constraints on GGM from LEP, the Tevatron and the LHC were fully elucidated, together with the possible collider signatures of GMSB at the LHC. This ensured that the full discovery potential for GGM could be completely realized at the LHC.
Dreuw, Andreas; Wormit, Michael
2008-03-01
Recently, a mechanism for the energy-dependent component (qE) of non-photochemical quenching (NPQ), the fundamental photo-protection mechanism in green plants, has been suggested. Replacement of violaxanthin by zeaxanthin in the binding pocket of the major light harvesting complex LHC-II may be sufficient to invoke efficient chlorophyll fluorescence quenching. Our quantum chemical calculations, however, show that the excited state energies of violaxanthin and zeaxanthin are practically identical when their geometry is constrained to the naturally observed structure of violaxanthin in LHC-II. Therefore, since violaxanthin does not quench LHC-II, zeaxanthin should not either. This theoretical finding is nicely in agreement with experimental results obtained by femtosecond spectroscopy on LHC-II complexes containing violaxanthin or zeaxanthin.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boveia, Antonio; Buchmueller, Oliver; Busoni, Giorgio
2016-03-14
This document summarises the proposal of the LHC Dark Matter Working Group on how to present LHC results on s-channel simplified dark matter models and to compare them to direct (indirect) detection experiments.
High Luminosity LHC: challenges and plans
NASA Astrophysics Data System (ADS)
Arduini, G.; Barranco, J.; Bertarelli, A.; Biancacci, N.; Bruce, R.; Brüning, O.; Buffat, X.; Cai, Y.; Carver, L. R.; Fartoukh, S.; Giovannozzi, M.; Iadarola, G.; Li, K.; Lechner, A.; Medina Medrano, L.; Métral, E.; Nosochkov, Y.; Papaphilippou, Y.; Pellegrini, D.; Pieloni, T.; Qiang, J.; Redaelli, S.; Romano, A.; Rossi, L.; Rumolo, G.; Salvant, B.; Schenk, M.; Tambasco, C.; Tomás, R.; Valishev, S.; Van der Veken, F. F.
2016-12-01
The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. Since opening up a new energy frontier for exploration in 2010, it has gathered a global user community working in fundamental particle physics and the physics of hadronic matter at extreme temperature and density. To sustain and extend its discovery potential, the LHC will undergo a major upgrade in the 2020s. This will increase its rate of collisions by a factor of five beyond the original design value and the integrated luminosity by a factor of ten. The new configuration, known as High Luminosity LHC (HL-LHC), will rely on a number of key innovations that push accelerator technology beyond its present limits. Among these are cutting-edge 11-12 T superconducting magnets, including Nb3Sn-based magnets never used in accelerators before, compact superconducting cavities for longitudinal beam rotation, and new technology and physical processes for beam collimation. The dynamics of the HL-LHC beams will also be particularly challenging, and this aspect is the main focus of this paper.
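The factor-of-five increase can be related to the machine parameters through the standard luminosity formula L = N_b² n_b f_rev γ / (4π ε_n β*) · F. The sketch below evaluates it with approximate nominal-LHC-like round numbers (illustrative values, not official HL-LHC parameters), recovering the design value of about 10³⁴ cm⁻²s⁻¹.

```python
import math

# Approximate nominal-LHC-like parameters (illustrative round values).
N_b    = 1.15e11   # protons per bunch
n_b    = 2808      # number of colliding bunches
f_rev  = 11245.0   # revolution frequency [Hz]
gamma  = 7461      # relativistic gamma at 7 TeV
eps_n  = 3.75e-6   # normalized transverse emittance [m rad]
beta_s = 0.55      # beta* at the interaction point [m]
F      = 0.84      # geometric reduction factor from the crossing angle (approximate)

# Round-beam luminosity: L = N_b^2 * n_b * f_rev * gamma / (4 * pi * eps_n * beta*) * F
L_m2 = N_b**2 * n_b * f_rev * gamma / (4 * math.pi * eps_n * beta_s) * F
print(f"L ~ {L_m2 * 1e-4:.2e} cm^-2 s^-1")  # ~1e34, the LHC design luminosity
```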
Stepping outside the neighborhood of T at LHC
NASA Astrophysics Data System (ADS)
Wiedemann, Urs Achim
2009-11-01
“ As you are well aware, many in the RHIC community are interested in the LHC heavy-ion program, but have several questions: What can we learn at the LHC that is qualitatively new? Are collisions at LHC similar to RHIC ones, just with a somewhat hotter/denser initial state? If not, why not? These questions are asked in good faith, and this talk is an opportunity to answer them directly to much of the RHIC community.” With these words, the organizers of Quark Matter 2009 in Knoxville invited me to discuss the physics opportunities for heavy ion collisions at the LHC without recalling the standard arguments, which are mainly based on the extended kinematic reach of the machine. In response, I emphasize here that lattice QCD indicates characteristic qualitative differences between thermal physics in the neighborhood of the critical temperature (T
Fikowski, Jill; Marchand, Kirsten; Palis, Heather; Oviedo-Joekes, Eugenia
2014-01-01
Uncovering patterns of drug use and treatment access is essential to improving treatment for opioid dependence. The life history calendar (LHC) could be a valuable instrument for capturing time-sensitive data on lifetime patterns of drug use and addiction treatment. This study describes the methodology applied when collecting data using the LHC in a sample of individuals with long-term opioid dependence and aims to identify specific factors that impact the feasibility of administering the LHC interview. In this study, the LHC allowed important events such as births, intimate relationships, housing, or incarcerations to become reference points for recalling details surrounding drug use and treatment access. The paper concludes that the administration of the LHC was a resource-intensive process and required special attention to interviewer training and experience with the study population. These factors should be considered and integrated into study plans by researchers using the LHC in addiction research.
FLUKA Monte Carlo simulations and benchmark measurements for the LHC beam loss monitors
NASA Astrophysics Data System (ADS)
Sarchiapone, L.; Brugger, M.; Dehning, B.; Kramer, D.; Stockner, M.; Vlachoudis, V.
2007-10-01
One of the crucial elements in terms of machine protection for CERN's Large Hadron Collider (LHC) is its beam loss monitoring (BLM) system. On-line loss measurements must prevent the superconducting magnets from quenching and protect the machine components from damage due to unforeseen critical beam losses. In order to ensure the BLM system's design quality, detailed FLUKA Monte Carlo simulations were performed for the betatron collimation insertion during the final design phase of the LHC. In addition, benchmark measurements were carried out with LHC-type BLMs installed at the CERN-EU high-energy Reference Field facility (CERF). This paper presents results of FLUKA calculations performed for BLMs installed in the collimation region, compares the results of the CERF measurement with FLUKA simulations and evaluates related uncertainties. This, together with the fact that the CERF source spectra at the respective BLM locations are comparable with those at the LHC, allows assessing the sensitivity of the performed LHC design studies.
Simulations of fast crab cavity failures in the high luminosity Large Hadron Collider
NASA Astrophysics Data System (ADS)
Yee-Rendon, Bruce; Lopez-Fernandez, Ricardo; Barranco, Javier; Calaga, Rama; Marsili, Aurelien; Tomás, Rogelio; Zimmermann, Frank; Bouly, Frédéric
2014-05-01
Crab cavities (CCs) are a key ingredient of the high luminosity Large Hadron Collider (HL-LHC) project for increasing the luminosity of the LHC. At KEKB, CCs have exhibited abrupt changes of phase and voltage over a time period of the order of a few LHC turns; considering the significant stored energy in the HL-LHC beam, CC failures therefore represent a serious threat to LHC machine protection. In this paper, we discuss the effect of CC voltage or phase changes over a time interval similar to, or longer than, the one needed to dump the beam. The simulations assume a quasistationary-state distribution to assess the particle losses for the HL-LHC. These distributions produce beam losses below the safe operation threshold for Gaussian tails, while for non-Gaussian tails the losses are of the same order as the limit. Additionally, some mitigation strategies are studied for reducing the damage caused by CC failures.
The PDF4LHC report on PDFs and LHC data: Results from Run I and preparation for Run II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rojo, Juan; Accardi, Alberto; Ball, Richard D.
2015-09-16
The accurate determination of Parton Distribution Functions (PDFs) of the proton is an essential ingredient of the Large Hadron Collider (LHC) program. PDF uncertainties impact a wide range of processes, from Higgs boson characterization and precision Standard Model measurements to New Physics searches. A major recent development in modern PDF analyses has been to exploit the wealth of new information contained in precision measurements from the LHC Run I, as well as progress in tools and methods to include these data in PDF fits. In this report we summarize the information that PDF-sensitive measurements at the LHC have provided so far, and review the prospects for further constraining PDFs with data from the recently started Run II. As a result, this document aims to provide useful input to the LHC collaborations to prioritize their PDF-sensitive measurements at Run II, as well as a comprehensive reference for the PDF-fitting collaborations.
Lhc proteins and the regulation of photosynthetic light harvesting function by xanthophylls.
Bassi, R; Caffarri, S
2000-01-01
Photoprotection of the chloroplast is an important component of abiotic stress resistance in plants. Carotenoids have a central role in photoprotection. We review here the recent evidence, derived mainly from in vitro reconstitution of recombinant Lhc proteins with different carotenoids and from carotenoid biosynthesis mutants, for the existence of different mechanisms of photoprotection and regulation based on xanthophyll binding to Lhc proteins at multiple sites and the exchange of chromophores between different Lhc proteins during exposure of plants to high light stress and the operation of the xanthophyll cycle. The use of recombinant Lhc proteins has revealed up to four binding sites in members of Lhc families with distinct selectivity for xanthophyll species, which are here hypothesised to have different functions. Site L1 is selective for lutein and is here proposed to be essential for catalysing the protection from singlet oxygen by quenching chlorophyll triplets. Sites L2 and N1 are here proposed to act as allosteric sites involved in the regulation of chlorophyll singlet excited states by exchanging ligands during the operation of the xanthophyll cycle. Site V1 of the major antenna complex LHC II is here hypothesised to be a deposit for readily available substrate for violaxanthin de-epoxidase rather than a light harvesting pigment. Moreover, xanthophylls bound to Lhc proteins can be released into the lipid bilayer where they contribute to the scavenging of reactive oxygen species produced in excess light.
Exploiting volatile opportunistic computing resources with Lobster
NASA Astrophysics Data System (ADS)
Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas
2015-12-01
Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS-specific software environment is provided via CVMFS and Parrot. Data is handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools has been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.
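Lobster's task handling is built on the Work Queue system mentioned above. A rough, minimal sketch of that pattern is shown below, assuming the cctools Python bindings are installed; the port, project name and payload command are hypothetical placeholders, and this is not Lobster itself.

```python
# Minimal master-side sketch in the spirit of Lobster's Work Queue usage.
# Assumes the cctools "work_queue" Python bindings; port, project name and
# the payload command are hypothetical placeholders.
import work_queue as wq

q = wq.WorkQueue(port=9123)          # workers connect back to this port
q.specify_name("lobster-sketch")     # hypothetical project name for the catalog

for i in range(10):
    t = wq.Task("./run_analysis.sh input_%03d.root" % i)  # hypothetical payload
    t.specify_tag("task-%03d" % i)
    q.submit(t)

while not q.empty():
    t = q.wait(5)                    # block up to 5 s for a completed task
    if t:
        print(t.tag, "finished with exit status", t.return_status)
```

Workers can then be started opportunistically on any pool (for example with the work_queue_worker executable) and attach to the master without root access, which is the property the paragraph above relies on.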
Challenges in scaling NLO generators to leadership computers
NASA Astrophysics Data System (ADS)
Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.
2017-10-01
Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even when using all 16 cores. We will describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run-times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.
Interoperating Cloud-based Virtual Farms
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Colamaria, F.; Colella, D.; Casula, E.; Elia, D.; Franco, A.; Lusso, S.; Luparello, G.; Masera, M.; Miniello, G.; Mura, D.; Piano, S.; Vallero, S.; Venaruzzo, M.; Vino, G.
2015-12-01
The present work aims at optimizing the use of computing resources available at the Italian grid Tier-2 sites of the ALICE experiment at the CERN LHC by making them accessible to interactive distributed analysis, thanks to modern solutions based on cloud computing. The scalability and elasticity of the computing resources via dynamic (“on-demand”) provisioning is essentially limited by the size of the computing site, reaching the theoretical optimum only in the asymptotic case of infinite resources. The main challenge of the project is to overcome this limitation by federating different sites through a distributed cloud facility. Storage capacities of the participating sites are seen as a single federated storage area, avoiding the need to mirror data across them: high data access efficiency is guaranteed by location-aware analysis software and storage interfaces, in a transparent way from an end-user perspective. Moreover, the interactive analysis on the federated cloud reduces the execution time with respect to grid batch jobs. The tests of the investigated solutions for both cloud computing and distributed storage on a wide area network will be presented.
Distributed storage and cloud computing: a test case
NASA Astrophysics Data System (ADS)
Piano, S.; Della Ricca, G.
2014-06-01
Since 2003, the computing farm hosted by the INFN Tier3 facility in Trieste has supported the activities of many scientific communities. Hundreds of jobs from 45 different VOs, including those of the LHC experiments, are processed simultaneously. Given that normally the requirements of the different computational communities are not synchronized, the probability that at any given time the resources owned by one of the participants are not fully utilized is quite high. A balanced compensation should in principle allocate the free resources to other users, but there are limits to this mechanism. In fact, the Trieste site may not hold the amount of data needed to attract enough analysis jobs, and even in that case there could be a lack of bandwidth for their access. The Trieste ALICE and CMS computing groups, in collaboration with other Italian groups, aim to overcome the limitations of existing solutions using two approaches: sharing the data among all the participants taking full advantage of GARR-X wide area networks (10 GB/s) and integrating the resources dedicated to batch analysis with the ones reserved for dynamic interactive analysis, through modern solutions such as cloud computing.
Commissioning of the cryogenics of the LHC long straight sections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perin, A.; Casas-Cubillos, J.; Claudet, S.
2010-01-01
The LHC is made of eight circular arcs interspaced with eight Long Straight Sections (LSS). Most powering interfaces to the LHC are located in these sections, where the particle beams are focused and shaped for collision, cleaning and acceleration. The LSSs comprise several unique cryogenic devices and systems such as electrical feed-boxes, standalone superconducting magnets, superconducting links, RF cavities and final focusing superconducting magnets. This paper presents the cryogenic commissioning and the main results obtained during the first operation of the LHC Long Straight Sections.
Detector Developments for the High Luminosity LHC Era (1/4)
Straessner, Arno
2018-04-27
Calorimetry and Muon Spectrometers - Part I : In the first part of the lecture series, the motivation for a high luminosity upgrade of the LHC will be quickly reviewed together with the challenges for the LHC detectors. In particular, the plans and ongoing research for new calorimeter detectors will be explained. The main issues in the high-luminosity era are an improved radiation tolerance, natural ageing of detector components and challenging trigger and physics requirements. The new technological solutions for calorimetry at a high-luminosity LHC will be reviewed.
Torsion limits from tt̄ production at the LHC
NASA Astrophysics Data System (ADS)
de Almeida, F. M. L.; de Andrade, F. R.; do Vale, M. A. B.; Nepomuceno, A. A.
2018-04-01
Torsion models constitute a well-known class of extended quantum gravity models. In this work, we investigate the phenomenological consequences of a torsion field interacting with top quarks at the LHC. A torsion field could appear as a new heavy state characterized by its mass and couplings to fermions. This new state would form a resonance decaying into a top-antitop pair. The latest ATLAS tt̄ production results from LHC 13 TeV data are used to set limits on torsion parameters. The integrated luminosity needed to observe a torsion resonance at the next LHC upgrades is also evaluated, considering different values for the torsion mass and its couplings to Standard Model fermions. Finally, prospects for torsion exclusion at the future LHC phases II and III are obtained using fast detector simulations.
CP-violation in the two Higgs doublet model: From the LHC to EDMs
NASA Astrophysics Data System (ADS)
Chen, Chien-Yi; Li, Hao-Lin; Ramsey-Musolf, Michael
2018-01-01
We study the prospective sensitivity to CP-violating two Higgs doublet models from the 14 TeV LHC and future electric dipole moment (EDM) experiments. We concentrate on the search for a resonant heavy Higgs that decays to a Z boson and a SM-like Higgs h, leading to the Z(ℓℓ)h(bb̄) final state. The prospective LHC reach is analyzed using the Boosted Decision Tree method. We illustrate the complementarity between the LHC and low energy EDM measurements and study the dependence of the physics reach on the degree of deviation from the alignment limit. In all cases, we find that there exists a large part of parameter space that is sensitive to both EDMs and LHC searches.
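The Boosted Decision Tree method cited above is a generic multivariate classification technique; the toy sketch below shows the idea with scikit-learn on synthetic signal and background samples. The feature distributions and settings are invented for illustration and do not reproduce the authors' analysis.

```python
# Toy boosted-decision-tree classifier of the kind used to separate a
# resonant Z(ll)h(bb) signal from background. The "features" are synthetic
# Gaussian stand-ins, not real reconstructed quantities.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5000
signal = rng.normal(loc=[1.0, 0.8, 0.5], scale=0.3, size=(n, 3))       # hypothetical variables
background = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(n, 3))

X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
bdt.fit(X_train, y_train)
print("ROC AUC on toy data:", roc_auc_score(y_test, bdt.predict_proba(X_test)[:, 1]))
```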
Support Structure Design of the Nb₃Sn Quadrupole for the High Luminosity LHC
Juchno, M.; Ambrosio, G.; Anerella, M.; ...
2014-10-31
New low-β quadrupole magnets are being developed within the scope of the High Luminosity LHC (HL-LHC) project in collaboration with the US LARP program. The aim of the HL-LHC project is to study and implement machine upgrades necessary for increasing the luminosity of the LHC. The new quadrupoles, which are based on the Nb₃Sn superconducting technology, will be installed in the LHC Interaction Regions and will have to generate a gradient of 140 T/m in a coil aperture of 150 mm. In this paper, we describe the design of the short model magnet support structure and discuss results of the detailed 3D numerical analysis performed in preparation for the first short model test.
Experiential learning in high energy physics: a survey of students at the LHC
NASA Astrophysics Data System (ADS)
Camporesi, Tiziano; Catalano, Gelsomina; Florio, Massimo; Giffoni, Francesco
2017-03-01
More than 36 000 students and post-docs will be involved until 2025 in research at the Large Hadron Collider (LHC), mainly through international collaborations. To what extent do they value the skills acquired? Do students expect that their learning experience will have an impact on their professional future? By drawing from earlier literature on experiential learning, we have designed a survey of current and former students at the LHC. To quantitatively measure the students’ perceptions, we compare the salary expectations of current students with the assessment of those now employed in different jobs. Survey data are analysed by ordered logistic regression models, which allow multivariate statistical analyses with limited dependent variables. Results suggest that experiential learning at the LHC positively correlates with both current and former students’ salary expectations. Those already employed clearly confirm the expectations of current students. At least two not mutually exclusive explanations underlie the results. First, the training at the LHC is perceived to provide students with valuable skills, which in turn affect the salary expectations; secondly, the LHC research experience per se may act as a signal in the labour market. Respondents put a price tag on their learning experience, an ‘LHC salary premium’ ranging from 5% to 12% compared with what they would have expected for their career without such an experience at CERN.
Geometric beam coupling impedance of LHC secondary collimators
NASA Astrophysics Data System (ADS)
Frasciello, Oscar; Tomassini, Sandro; Zobov, Mikhail; Salvant, Benoit; Grudiev, Alexej; Mounet, Nicolas
2016-02-01
The High Luminosity LHC project is aimed at increasing the LHC luminosity by an order of magnitude. One of the key ingredients to achieve the luminosity goal is the beam intensity increase. In order to keep beam instabilities under control and to avoid excessive power losses, a careful design of new vacuum chamber components and an improvement of the present LHC impedance model are required. Collimators are among the major impedance contributors. Measurements with beam have revealed that the betatron coherent tune shifts were higher by about a factor of 2 with respect to the theoretical predictions based on the LHC impedance model up to 2012. In that model the resistive wall impedance has been considered as the dominating impedance contribution for collimators. By carefully simulating also their geometric impedance we have contributed to the update of the LHC impedance model, reaching also a better agreement between the measured and simulated betatron tune shifts. During the recently ended LHC Long Shutdown 1 (LS1), TCS/TCT collimators were replaced by new devices embedding BPMs and TT2-111R ferrite blocks. We present here preliminary estimations of their broad-band impedance, showing that an increase of about 20% is expected in the kick factors with respect to previous collimators without BPMs.
2011-01-01
Background: The causality of lymphohaematopoietic cancers (LHC) is multifactorial and studies investigating the association between chemical exposure and LHC have produced variable results. The aim of this study was to investigate the relationships between exposure to pesticides and LHC in an agricultural region of Greece. Methods: A structured questionnaire was employed in a hospital-based case control study to gather information on demographics, occupation, exposure to pesticides, agricultural practices, family and medical history and smoking. To control for confounders, backward conditional and multinomial logistic regression analyses were used. To assess the dose-response relationship between exposure and disease, the chi-square test for trend was used. Results: Three hundred and fifty-four (354) histologically confirmed LHC cases diagnosed from 2004 to 2006 and 455 sex- and age-matched controls were included in the study. Pesticide exposure was associated with total LHC cases (OR 1.46, 95% CI 1.05-2.04), myelodysplastic syndrome (MDS) (OR 1.87, 95% CI 1.00-3.51) and leukaemia (OR 2.14, 95% CI 1.09-4.20). A dose-response pattern was observed for total LHC cases (P = 0.004), MDS (P = 0.024) and leukaemia (P = 0.002). Pesticide exposure was independently associated with total LHC cases (OR 1.41, 95% CI 1.00-2.00) and leukaemia (OR 2.05, 95% CI 1.02-4.12) after controlling for age, smoking and family history (cancers, LHC and immunological disorders). Smoking during application of pesticides was strongly associated with total LHC cases (OR 3.29, 95% CI 1.81-5.98), MDS (OR 3.67, 95% CI 1.18-12.11), leukaemia (OR 10.15, 95% CI 2.15-65.69) and lymphoma (OR 2.72, 95% CI 1.02-8.00). This association was even stronger for total LHC cases (OR 18.18, 95% CI 2.38-381.17) when eating simultaneously with pesticide application. Conclusions: Lymphohaematopoietic cancers were associated with pesticide exposure after controlling for confounders. Smoking and eating during pesticide application were identified as modifying factors increasing the risk for LHC. The poor pesticide work practices identified during this study underline the need for educational campaigns for farmers. PMID:21205298
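The odds ratios and confidence intervals quoted above come from (conditional and multinomial) logistic regression models; as a minimal numerical illustration of the underlying quantity, the sketch below computes a crude odds ratio with a Woolf (log-based) 95% confidence interval from a hypothetical 2x2 exposure table. The counts are invented and do not reproduce the study's adjusted estimates.

```python
# Crude odds ratio and Woolf 95% confidence interval from a 2x2 table.
# All counts are hypothetical placeholders.
import math

exposed_cases, exposed_controls = 120, 110        # hypothetical counts
unexposed_cases, unexposed_controls = 234, 345    # hypothetical counts

odds_ratio = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
se_log_or = math.sqrt(1 / exposed_cases + 1 / exposed_controls +
                      1 / unexposed_cases + 1 / unexposed_controls)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```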
Factors Affecting Career Choice: Comparison between Students from Computer and Other Disciplines
ERIC Educational Resources Information Center
Alexander, P. M.; Holmner, M.; Lotriet, H. H.; Matthee, M. C.; Pieterse, H. V.; Naidoo, S.; Twinomurinzi, H.; Jordaan, D.
2011-01-01
The number of student enrolments in computer-related courses remains a serious concern worldwide with far reaching consequences. This paper reports on an extensive survey about career choice and associated motivational factors amongst new students, only some of whom intend to major in computer-related courses, at two South African universities.…
A Summer Research Experience in Particle Physics Using Skype
NASA Astrophysics Data System (ADS)
Johnston, Curran; Alexander, Steven; Mahmood, A. K.
2012-10-01
This last summer I did research in particle physics as part of a "remote REU." This poster will describe that experience and the results of my project, which was to experimentally verify the mass ranges of the Z' boson. Data from the LHC's ATLAS detector was filtered by computers to select for likely Z boson decays; my work was in noting all instances of Z or Z' boson decays in one thousand events and their masses, separating the Z from Z' bosons, and generating histograms of the masses.
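The project described above boils down to reconstructing dilepton invariant masses and histogramming them. A minimal sketch of that step, using hypothetical lepton four-momenta (E, px, py, pz) in GeV rather than real ATLAS data, might look like this:

```python
# Compute dilepton invariant masses from toy four-momenta and histogram them,
# as one would to separate Z candidates from heavier Z' candidates.
#   m^2 = (E1 + E2)^2 - |p1 + p2|^2
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

def random_lepton(energy_scale):
    """Hypothetical massless lepton with a random direction (toy only)."""
    p = rng.exponential(energy_scale)
    costh, phi = rng.uniform(-1, 1), rng.uniform(0, 2 * np.pi)
    sinth = np.sqrt(1 - costh**2)
    return np.array([p, p * sinth * np.cos(phi), p * sinth * np.sin(phi), p * costh])

def invariant_mass(l1, l2):
    tot = l1 + l2
    return np.sqrt(max(tot[0]**2 - np.dot(tot[1:], tot[1:]), 0.0))

masses = [invariant_mass(random_lepton(45.0), random_lepton(45.0)) for _ in range(1000)]
plt.hist(masses, bins=50)
plt.xlabel("Dilepton invariant mass [GeV]")
plt.ylabel("Events")
plt.savefig("dilepton_mass.png")
```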
Precise QCD Predictions for the Production of a Z Boson in Association with a Hadronic Jet.
Gehrmann-De Ridder, A; Gehrmann, T; Glover, E W N; Huss, A; Morgan, T A
2016-07-08
We compute the cross section and differential distributions for the production of a Z boson in association with a hadronic jet to next-to-next-to-leading order (NNLO) in perturbative QCD, including the leptonic decay of the Z boson. We present numerical results for the transverse momentum and rapidity distributions of both the Z boson and the associated jet at the LHC. We find that the NNLO corrections increase the NLO predictions by approximately 1% and significantly reduce the scale variation uncertainty.
A study of multi-jet production in association with an electroweak vector boson
Frederix, R.; Frixione, S.; Papaefstathiou, A.; ...
2016-02-19
Here, we consider the production of a single Z or W boson in association with jets at the LHC. We compute the corresponding cross sections by matching NLO QCD predictions with the Herwig++ and Pythia8 parton showers, and by merging all of the underlying matrix elements with up to two light partons at the Born level. We compare our results with several 7-TeV measurements by the ATLAS and CMS collaborations, and overall we find a good agreement between theory and data.
Tesla: An application for real-time data analysis in High Energy Physics
NASA Astrophysics Data System (ADS)
Aaij, R.; Amato, S.; Anderlini, L.; Benson, S.; Cattaneo, M.; Clemencic, M.; Couturier, B.; Frank, M.; Gligorov, V. V.; Head, T.; Jones, C.; Komarov, I.; Lupton, O.; Matev, R.; Raven, G.; Sciascia, B.; Skwarnicki, T.; Spradlin, P.; Stahl, S.; Storaci, B.; Vesterinen, M.
2016-11-01
Upgrades to the LHCb computing infrastructure in the first long shutdown of the LHC have allowed for high quality decay information to be calculated by the software trigger, making a separate offline event reconstruction unnecessary. Furthermore, the storage space of the triggered candidate is an order of magnitude smaller than that of the entire raw event that would otherwise need to be persisted. Tesla is an application designed to process the information calculated by the trigger, with the resulting output used to directly perform physics measurements.
Occupational Risk Factors of Lymphohematopoietic Cancer in Rayong Province, Thailand.
Punjindasup, Apinya; Sangrajrang, Suleeporn; Ekpanyaskul, Chatchai
2015-11-01
The lymphohematopoietic cancer (LHC) incidence rate in Thailand, including Rayong province, has been rising over the past decade with unknown etiology. One hypothesized LHC risk is exposure to occupational carcinogens. The aim of this study was to determine the association between occupational exposure and LHC risk in Rayong province, Thailand. This matched hospital-based case-control study was conducted in a Rayong provincial hospital from September 2009 to January 2013. Each LHC case was matched with four controls by gender and age (±5 years). Demographic data, residential factors, behavioral factors, and occupational exposure, including chemical exposure, were obtained by interviews and collected by occupational health care officers. Risk factors were analyzed by conditional logistic regression and reported as odds ratios with 95% confidence intervals. The study found 105 LHC cases that met the inclusion criteria and were included, yielding a 66% coverage of cases reported in the database. The histological subtypes comprised 51 leukemia cases (47.7%), 43 lymphoma cases (42.0%), and 11 multiple myeloma cases (10.3%). The results revealed that occupational exposures to pesticides and smoke were statistically significantly associated with LHC, with adjusted ORs of 2.26 (95% CI 1.30-3.91) and 1.99 (95% CI 1.13-3.51), respectively. When stratified by histological subtype of LHC (WHO 2000), leukemia was statistically significantly associated with occupational exposure to smoke, adjusted OR 2.43 (95% CI 1.11-5.36), while occupational pesticide exposure was a significant risk factor for lymphoma, adjusted OR 4.69 (95% CI 2.01-10.96). However, neither fumes, wood dust, working outdoors, cleaning work, contact with animals, petroleum products and chlorine, nor occupational exposure to volatile organic compounds (VOCs) such as benzene or organic solvents, were statistically significant risk factors for LHC. In addition, no significant risks were found among the demographic, residential, and behavioral factors. Occupational exposures to pesticides and smoke were important occupational risks for developing LHC in Rayong province; however, limited power to detect associations due to the small sample size, and recall bias inherent in the study design, cannot be excluded.
Lee, A I; Thornber, J P
1995-01-01
The carotenoid zeaxanthin has been implicated in a nonradiative dissipation of excess excitation energy. To determine its site of action, we have examined the location of zeaxanthin within the thylakoid membrane components. Five pigment-protein complexes were isolated with little loss of pigments: photosystem I (PSI); core complex (CC) I, the core of PSI; CC II, the core of photosystem II (PSII); light-harvesting complex (LHC) IIb, a trimer of the major light-harvesting protein of PSII; and LHC IIa, c, and d, a complex of the monomeric minor light-harvesting proteins of PSII. Zeaxanthin was found predominantly in the LHC complexes. Lesser amounts were present in the CCs possibly because these contained some extraneous LHC polypeptides. The LHC IIb trimer and the monomeric LHC II a, c, and d pigment-proteins from dark-adapted plants each contained, in addition to lutein and neoxanthin, one violaxanthin molecule but little antheraxanthin and no zeaxanthin. Following illumination, each complex had a reduced violaxanthin content, but now more antheraxanthin and zeaxanthin were present. PSI had little or no neoxanthin. The pigment content of LHC I was deduced by subtracting the pigment content of CC I from that of PSI. Our best estimate for the carotenoid content of a LHC IIb trimer from dark-adapted plants is one violaxanthin, two neoxanthins, six luteins, and 0.03 mol of antheraxanthin per mol trimer. The xanthophyll cycle occurs mainly or exclusively within the light-harvesting antennae of both photosystems. PMID:7724673
Muon Physics at Run-I and its upgrade plan
NASA Astrophysics Data System (ADS)
Benekos, Nektarios Chr.
2015-05-01
The Large Hadron Collider (LHC) and its multi-purpose detector, ATLAS, have been operated successfully at record centre-of-mass energies of 7 and 8 TeV. After this successful LHC Run-1, plans are actively advancing for a series of upgrades, culminating roughly 10 years from now in the high luminosity LHC (HL-LHC) project, delivering of order five times the LHC nominal instantaneous luminosity along with luminosity leveling. The final goal is to extend the data set from the few hundred fb-1 expected for LHC running to 3000 fb-1 by around 2030. To cope with the corresponding rate increase, the ATLAS detector needs to be upgraded. The upgrade will proceed in two steps: Phase I in the LHC shutdown 2018/19 and Phase II in 2023-25. The largest of the ATLAS Phase-1 upgrades concerns the replacement of the first muon station of the high-rapidity region, the so-called New Small Wheel. This configuration copes with the highest rates expected in Phase II and considerably enhances the performance of the forward muon system by adding triggering functionality to the first muon station. This article presents the main muon physics results from LHC Run-1, based on a total luminosity of 30 fb-1, together with prospects for the ongoing and future data taking. We will conclude with an update of the status of the project and the steps towards a complete operational system, ready to be installed in ATLAS in 2018/19.
Detector Developments for the High Luminosity LHC Era (4/4)
Bortoletto, Daniela
2018-02-09
Tracking Detectors - Part II. Calorimetry, muon detection, vertexing, and tracking will play a central role in determining the physics reach for the High Luminosity LHC Era. In these lectures we will cover the requirements, options, and the R&D efforts necessary to upgrade the current LHC detectors and enable discoveries.
Detector Developments for the High Luminosity LHC Era (3/4)
Bortoletto, Daniela
2018-01-23
Tracking Detectors - Part I. Calorimetry, muon detection, vertexing, and tracking will play a central role in determining the physics reach for the High Luminosity LHC Era. In these lectures we will cover the requirements, options, and the R&D efforts necessary to upgrade the current LHC detectors and enable discoveries.
High Luminosity LHC: Challenges and plans
Arduini, G.; Barranco, J.; Bertarelli, A.; ...
2016-12-28
The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. Since opening up a new energy frontier for exploration in 2010, it has gathered a global user community working in fundamental particle physics and the physics of hadronic matter at extreme temperature and density. To sustain and extend its discovery potential, the LHC will undergo a major upgrade in the 2020s. This will increase its rate of collisions by a factor of five beyond the original design value and the integrated luminosity by a factor of ten. The new configuration, known as High Luminosity LHC (HL-LHC), will rely on a number of key innovations that push accelerator technology beyond its present limits. Among these are cutting-edge 11–12 T superconducting magnets, including Nb₃Sn-based magnets never used in accelerators before, compact superconducting cavities for longitudinal beam rotation, and new technology and physical processes for beam collimation. As a result, the dynamics of the HL-LHC beams will also be particularly challenging, and this aspect is the main focus of this paper.
Mechanical Design of the LHC Standard Half-Cell
NASA Astrophysics Data System (ADS)
Poncet, A.; Brunet, J. C.; Cruikshank, P.; Genet, M.; Parma, V.; Rohmig, P.; Saban, R.; Tavian, L.; Veness, R.; Vlogaert, J.; Williams, L. R.
1997-05-01
The LHC Conceptual Design Report issued on 20th October 1995 (CERN/AC/95-05 (LHC) - nicknamed "Yellow Book") introduced significant changes to some fundamental features of the LHC standard half-cell, composed of one quadrupole, 3 dipoles and a set of corrector magnets. A separate cryogenic distribution line was introduced, which was previously inside the main cryostat. The dipole length has been increased from 10 to 15 m and independent powering of the focusing and defocusing quadrupole magnets was chosen. Individual quench protection diodes were introduced in magnet interconnects and many auxiliary bus bars were added to feed in series the various families of correcting superconducting magnets. The various highly intricate basic systems, such as cryostats and cryogenic feeders, superconducting magnets and their electrical feeding and protection, the vacuum beam screen and its cooling, and support and alignment devices, have been redesigned, taking into account the very tight space available. These space constraints stem from the need for maximum integral bending field strength (for maximum LHC energy) within the existing LHC tunnel. Finally, cryogenic and vacuum sectorisation have been introduced to reduce downtimes and facilitate commissioning.
Toivanen, V; Bellodi, G; Dimov, V; Küchler, D; Lombardi, A M; Maintrot, M
2016-02-01
Linac3 is the first accelerator in the heavy ion injector chain of the Large Hadron Collider (LHC), providing multiply charged heavy ion beams for the CERN experimental program. The ion beams are produced with GTS-LHC, a 14.5 GHz electron cyclotron resonance ion source, operated in afterglow mode. Improvement of the GTS-LHC beam formation and beam transport along Linac3 is part of the upgrade program of the injector chain in preparation for the future high luminosity LHC. A mismatch between the ion beam properties in the ion source extraction region and the acceptance of the following Low Energy Beam Transport (LEBT) section has been identified as one of the factors limiting the Linac3 performance. The installation of a new focusing element, an einzel lens, into the GTS-LHC extraction region is foreseen as a part of the Linac3 upgrade, as well as a redesign of the first section of the LEBT. Details of the upgrade and results of a beam dynamics study of the extraction region and LEBT modifications will be presented.
LHC benchmark scenarios for the real Higgs singlet extension of the standard model
Robens, Tania; Stefaniak, Tim
2016-05-13
Here, we present benchmark scenarios for searches for an additional Higgs state in the real Higgs singlet extension of the Standard Model in Run 2 of the LHC. The scenarios are selected such that they fulfill all relevant current theoretical and experimental constraints, but can potentially be discovered at the current LHC run. We take into account the results presented in earlier work and update the experimental constraints from relevant LHC Higgs searches and signal rate measurements. The benchmark scenarios are given separately for the low mass and high mass region, i.e. the mass range where the additional Higgs state is lighter or heavier than the discovered Higgs state at around 125 GeV. They have also been presented in the framework of the LHC Higgs Cross Section Working Group.
On the LHC sensitivity for non-thermalised hidden sectors
NASA Astrophysics Data System (ADS)
Kahlhoefer, Felix
2018-04-01
We show under rather general assumptions that hidden sectors that never reach thermal equilibrium in the early Universe are also inaccessible for the LHC. In other words, any particle that can be produced at the LHC must either have been in thermal equilibrium with the Standard Model at some point or must be produced via the decays of another hidden sector particle that has been in thermal equilibrium. To reach this conclusion, we parametrise the cross section connecting the Standard Model to the hidden sector in a very general way and use methods from linear programming to calculate the largest possible number of LHC events compatible with the requirement of non-thermalisation. We find that even the HL-LHC cannot possibly produce more than a few events with energy above 10 GeV involving states from a non-thermalised hidden sector.
Lead ions and Coulomb’s Law at the LHC (CERN)
NASA Astrophysics Data System (ADS)
Cid-Vidal, Xabier; Cid, Ramon
2018-03-01
Although for most of the time the Large Hadron Collider (LHC) at CERN collides protons, for around one month every year lead ions are collided, to expand the diversity of the LHC research programme. Furthermore, in an effort not originally foreseen, proton-lead collisions are also taking place, with results of high interest to the physics community. All the large experiments of the LHC have now joined the heavy-ion programme, including the LHCb experiment, which was not at first expected to be part of it. The aim of this article is to introduce a few simple physical calculations relating to some electrical phenomena that occur when lead-ion bunches are running in the LHC, using Coulomb’s Law, to be taken to the secondary school classroom to help students understand some important physical concepts.
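In the spirit of the classroom calculations the article proposes, the short sketch below evaluates the Coulomb repulsion between two fully stripped lead ions (charge 82e) at an assumed separation; the 1 µm separation is an illustrative choice, not a figure taken from the article.

```python
# Coulomb force and potential energy between two fully stripped Pb ions (Z = 82)
# at an assumed separation (illustrative value, not from the article).
k = 8.988e9          # Coulomb constant, N m^2 / C^2
e = 1.602e-19        # elementary charge, C
Z = 82               # charge state of a fully stripped lead ion
r = 1e-6             # assumed separation between the two ions, m

q = Z * e
force = k * q * q / r**2
potential_energy_eV = k * q * q / r / e
print(f"Force between the two ions: {force:.3e} N")
print(f"Potential energy:           {potential_energy_eV:.2f} eV")
```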
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardoso, Mateus B; Smolensky, Dmitriy; Heller, William T
2009-01-01
The structure of spinach light-harvesting complex II (LHC II), stabilized in a solution of the detergent n-octyl-β-D-glucoside (BOG), was investigated by small-angle neutron scattering (SANS). Physicochemical characterization of the isolated complex indicated that it was pure (>95%) and also in its native trimeric state. SANS with contrast variation was used to investigate the properties of the protein-detergent complex at three different H₂O/D₂O contrast match points, enabling the scattering properties of the protein and detergent to be investigated independently. The topological shape of LHC II, determined using ab initio shape restoration methods from the SANS data at the contrast match point of BOG, was consistent with the X-ray crystallographic structure of LHC II (Liu et al. Nature 2004 428, 287-292). The interactions of the protein and detergent were investigated at the contrast match point for the protein and also in 100% D₂O. The data suggested that BOG micelle structure was altered by its interaction with LHC II, but large aggregate structures were not formed. Indirect Fourier transform analysis of the LHC II/BOG scattering curves showed that the increase in the maximum dimension of the protein-detergent complex was consistent with the presence of a monolayer of detergent surrounding the protein. A model of the LHC II/BOG complex was generated to interpret the measurements made in 100% D₂O. This model adequately reproduced the overall size of the LHC II/BOG complex, but demonstrated that the detergent does not have a highly regular shape that surrounds the hydrophobic periphery of LHC II. In addition to demonstrating that natively structured LHC II can be produced for functional characterization and for use in artificial solar energy applications, the analysis and modeling approaches described here can be used for characterizing detergent-associated α-helical transmembrane proteins.
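Contrast-variation SANS works by tuning the H₂O/D₂O ratio until the solvent scattering length density (SLD) matches that of one component, which then contributes no coherent contrast. As a rough illustration of how such a contrast match point is estimated, the sketch below interpolates linearly between commonly quoted approximate SLD values for H₂O and D₂O and an assumed SLD for the component to be matched; the numbers are indicative and are not those used in the study.

```python
# Estimate the D2O volume fraction at which the solvent scattering length
# density (SLD) matches an assumed component SLD. SLD values are commonly
# quoted approximations (units of 1e-6 / Angstrom^2), not study-specific.
SLD_H2O = -0.56        # approximate SLD of H2O
SLD_D2O = 6.34         # approximate SLD of D2O
SLD_component = 1.8    # assumed SLD of the component to be matched (illustrative)

f_D2O = (SLD_component - SLD_H2O) / (SLD_D2O - SLD_H2O)
print(f"Contrast match point at ~{100 * f_D2O:.0f}% D2O")
```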
Cleaning Insertions and Collimation Challenges
NASA Astrophysics Data System (ADS)
Redaelli, S.; Appleby, R. B.; Bertarelli, A.; Bruce, R.; Jowett, J. M.; Lechner, A.; Losito, R.
High-performance collimation systems are essential for efficiently operating modern hadron machines with large beam intensities. In particular, at the LHC the collimation system ensures a clean disposal of beam halos in the superconducting environment. The challenges of the HL-LHC study pose various demanding requirements on beam collimation. In this paper we review the present collimation system and its performance during the LHC Run 1 in 2010-2013. Various collimation solutions under study to address the HL-LHC requirements are then reviewed, identifying the main upgrade baseline and pointing out advanced collimation concepts for further enhancement of the performance.
Design, production and first commissioning results of the electrical feedboxes of the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perin, A.; Atieh, S.; Benda, V.
2007-12-01
A total of 44 CERN-designed cryogenic electrical feedboxes are needed to power the LHC superconducting magnets. The feedboxes include more than 1000 superconducting circuits fed by high-temperature superconductor and conventional current leads ranging from 120 A to 13 kA. In addition to providing the electrical current to the superconducting circuits, they also ensure specific mechanical and cryogenic functions for the LHC. The paper focuses on the main design aspects and related production operations and gives an overview of specific technologies employed. Results of the commissioning of the feedboxes of the first LHC sectors are presented.
Challenges and Plans for Injection and Beam Dump
NASA Astrophysics Data System (ADS)
Barnes, M.; Goddard, B.; Mertens, V.; Uythoven, J.
The injection and beam dumping systems of the LHC will need to be upgraded to comply with the requirements of operation with the HL-LHC beams. The elements of the injection system concerned are the fixed and movable absorbers which protect the LHC in case of an injection kicker error and the injection kickers themselves. The beam dumping system elements under study are the absorbers which protect the aperture in case of an asynchronous beam dump and the beam absorber block. The operational limits of these elements and the new developments in the context of the HL-LHC project are described.
Patently Absurd: The Ethical Implications of Software Patents
ERIC Educational Resources Information Center
Stark, Chris D.
2005-01-01
Since the mid-1980s, the percentage of the population in the United States owning a personal computer has grown from just over 8% to well over 50%, and nearly 60% of the population uses a computer at work. At the end of 2004, there were over 820 million personal computers in active use worldwide, and projections indicate that the number will…
Handbook of LHC Higgs Cross Sections: 4. Deciphering the Nature of the Higgs Sector
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Florian, D.
This Report summarizes the results of the activities of the LHC Higgs Cross Section Working Group in the period 2014-2016. The main goal of the working group was to present the state-of-the-art of Higgs physics at the LHC, integrating all new results that have appeared in the last few years. The first part compiles the most up-to-date predictions of Higgs boson production cross sections and decay branching ratios, parton distribution functions, and off-shell Higgs boson production and interference effects. The second part discusses the recent progress in Higgs effective field theory predictions, followed by the third part on pseudo-observables, simplified template cross section and fiducial cross section measurements, which give the baseline framework for Higgs boson property measurements. The fourth part deals with the beyond the Standard Model predictions of various benchmark scenarios of Minimal Supersymmetric Standard Model, extended scalar sector, Next-to-Minimal Supersymmetric Standard Model and exotic Higgs boson decays. This report follows three previous working-group reports: Handbook of LHC Higgs Cross Sections: 1. Inclusive Observables (CERN-2011-002), Handbook of LHC Higgs Cross Sections: 2. Differential Distributions (CERN-2012-002), and Handbook of LHC Higgs Cross Sections: 3. Higgs properties (CERN-2013-004). The current report serves as the baseline reference for Higgs physics in LHC Run 2 and beyond.
Introducing the LHC in the Classroom: An Overview of Education Resources Available
ERIC Educational Resources Information Center
Wiener, Gerfried J.; Woithe, Julia; Brown, Alexander; Jende, Konrad
2016-01-01
In the context of the recent re-start of CERN's Large Hadron Collider (LHC) and the challenge presented by unidentified falling objects (UFOs), we seek to facilitate the introduction of high energy physics in the classroom. Therefore, this paper provides an overview of the LHC and its operation, highlighting existing education resources, and…
Lead Ions and Coulomb's Law at the LHC (CERN)
ERIC Educational Resources Information Center
Cid-Vidal, Xabier; Cid, Ramon
2018-01-01
Although for most of the time the Large Hadron Collider (LHC) at CERN collides protons, for around one month every year lead ions are collided, to expand the diversity of the LHC research programme. Furthermore, in an effort not originally foreseen, proton-lead collisions are also taking place, with results of high interest to the physics…
Searching for supersymmetry at the LHC: Studies of sleptons and stops
NASA Astrophysics Data System (ADS)
Eckel, Jonathan Daniel
Searches for supersymmetry at the LHC have put stringent constraints on the strong production of squarks and gluinos. Current results exclude colored particles with masses up to roughly 1 TeV. To fully explore the discovery potential of the LHC, we study the challenging signals that are hidden by Standard Model backgrounds but have masses accessible at the LHC. These particles include the sleptons, with a weak production cross section, and stops that are hidden by large top-antitop backgrounds. In this dissertation, I explore the collider phenomenology of sleptons and stops at the LHC. Sleptons can be produced at the LHC either through cascade decays or via Drell-Yan pair production. For the cascade decay, we studied neutralino-chargino associated production, with the subsequent decay through on-shell sleptons resulting in a trilepton plus missing transverse energy signal. The invariant mass from the neutralino decay has a distinctive triangle shape with a sharp kinematic cutoff. We utilized this feature and obtained the effective cross section that is needed for a 5-sigma discovery of sleptons. We apply these results to the MSSM and find a discovery reach for left-handed sleptons which extends beyond the reach expected in usual Drell-Yan studies. Slepton pair production searches, on the other hand, have limited reach at the LHC. The slepton decay branching fractions, however, depend on the composition of the lightest supersymmetric particle (LSP). We extend the experimental analysis for data collected thus far to include different scenarios for the composition of the LSP. We find that the LHC slepton reach is enhanced by up to a factor of 2 for a non-Bino LSP. We present the 95% C.L. exclusion limits and 5-sigma discovery reach for sleptons at the 8 and 14 TeV LHC, considering Bino-, Wino-, or Higgsino-like LSPs. Current stop searches at the LHC focus on signals with top-antitop plus missing transverse energy. However, in many regions of SUSY parameter space, these decay modes are not dominant, leading to weakened experimental limits on stops. We identify stop decays that can have significant branching fractions to new final states, resulting in new signal channels to observe. We investigate stop pair production by considering the channel of stop to top-Higgs-LSP and stop to bottom-W-LSP, leading to a signal of 4 b-jets, 2 jets, 1 lepton and missing transverse energy. We present the 95% C.L. exclusion limits and 5-sigma discovery reach for stops at the 14 TeV LHC.
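The "distinctive triangle shape with a sharp kinematic cutoff" mentioned above has an endpoint fixed entirely by the masses in the two-step cascade (neutralino_2 -> slepton + lepton -> neutralino_1 + two leptons): m_ll_max = sqrt[(m_chi2^2 - m_sl^2)(m_sl^2 - m_chi1^2)] / m_sl. The sketch below evaluates this standard formula for a hypothetical mass spectrum; the masses are not taken from the dissertation.

```python
# Kinematic endpoint of the dilepton invariant mass in the cascade
# chi2 -> slepton + l -> chi1 + l + l (standard two-body formula):
#   m_ll_max = sqrt((m_chi2^2 - m_sl^2) * (m_sl^2 - m_chi1^2)) / m_sl
import math

m_chi2, m_slepton, m_chi1 = 400.0, 300.0, 200.0   # hypothetical masses in GeV

m_ll_max = math.sqrt((m_chi2**2 - m_slepton**2) * (m_slepton**2 - m_chi1**2)) / m_slepton
print(f"Dilepton invariant-mass edge at {m_ll_max:.1f} GeV")
```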
Król, M; Spangfort, M D; Huner, N P; Oquist, G; Gustafsson, P; Jansson, S
1995-01-01
Monospecific polyclonal antibodies have been raised against synthetic peptides derived from the primary sequences of different plant light-harvesting Chl a/b-binding (LHC) proteins. Together with other monospecific antibodies, these were used to quantify the levels of the 10 different LHC proteins in wild-type and chlorina f2 barley (Hordeum vulgare L.), grown under normal and intermittent light (ImL). Chlorina f2, grown under normal light, lacked Lhcb1 (type I LHC II) and Lhcb6 (CP24) and had reduced amounts of Lhcb2, Lhcb3 (types II and III LHC II), and Lhcb4 (CP 29). Chlorina f2 grown under ImL lacked all LHC proteins, whereas wild-type ImL plants contained Lhcb5 (CP 26) and a small amount of Lhcb2. The chlorina f2 ImL thylakoids were organized in large parallel arrays, but wild-type ImL thylakoids had appressed regions, indicating a possible role for Lhcb5 in grana stacking. Chlorina f2 grown under ImL contained considerable amounts of violaxanthin (2-3/reaction center), representing a pool of phototransformable xanthophyll cycle pigments not associated with LHC proteins. Chlorina f2 and the plants grown under ImL also contained early light-induced proteins (ELIPs), as monitored by western blotting. The levels of both ELIPs and xanthophyll cycle pigments increased during a 1-h high-light treatment, without accumulation of LHC proteins. These data are consistent with the hypothesis that ELIPs are pigment-binding proteins, and we suggest that ELIPs bind photoconvertible xanthophylls and replace "normal" LHC proteins under conditions of light stress. PMID:7748263
Monitoring of computing resource use of active software releases at ATLAS
NASA Astrophysics Data System (ADS)
Limosani, Antonio; ATLAS Collaboration
2017-10-01
The LHC is the world’s most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier-0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed in plots generated using Python visualization libraries and collected into pre-formatted auto-generated Web pages, which allow the ATLAS developer community to track the performance of their algorithms. This information is, however, preferentially channelled to domain leaders and developers through the use of JIRA and via reports given at ATLAS software meetings. Finally, we take a glimpse of the future by reporting on the expected CPU and RAM usage in benchmark workflows associated with the High Luminosity LHC and anticipate the ways performance monitoring will evolve to understand and benchmark future workflows.
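As a toy illustration of the kind of per-domain summary plot described above (not the actual PerfMon or MemoryMonitor output), the sketch below aggregates hypothetical CPU-time and memory measurements by software domain and renders them with matplotlib, in line with the text's note that plots are generated with Python visualization libraries.

```python
# Toy per-domain resource summary in the spirit of the monitoring pages
# described above. Domain names and numbers are hypothetical placeholders.
import matplotlib.pyplot as plt

cpu_seconds = {"Tracking": 520.0, "Calorimetry": 310.0, "MuonReco": 140.0, "I/O": 80.0}
rss_mb      = {"Tracking": 900.0, "Calorimetry": 650.0, "MuonReco": 300.0, "I/O": 150.0}

domains = list(cpu_seconds)
fig, (ax_cpu, ax_mem) = plt.subplots(1, 2, figsize=(10, 4))
ax_cpu.bar(domains, [cpu_seconds[d] for d in domains])
ax_cpu.set_ylabel("CPU time per event batch [s]")
ax_mem.bar(domains, [rss_mb[d] for d in domains])
ax_mem.set_ylabel("Resident memory [MB]")
fig.suptitle("Hypothetical per-domain resource usage")
fig.tight_layout()
fig.savefig("resource_usage.png")
```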
The PCIe-based readout system for the LHCb experiment
NASA Astrophysics Data System (ADS)
Cachemiche, J. P.; Duval, P. Y.; Hachon, F.; Le Gac, R.; Réthoré, F.
2016-02-01
The LHCb experiment is designed to study differences between particles and anti-particles as well as very rare decays in the beauty and charm sector at the LHC. The detector will be upgraded in 2019 in order to significantly increase its efficiency, by removing the first-level hardware trigger. The upgrade experiment will implement a trigger-less readout system in which all the data from every LHC bunch-crossing are transported to the computing farm over 12000 optical links without hardware filtering. The event building and event selection are carried out entirely in the farm. Another original feature of the system is that data transmitted through these fibres arrive directly to computers through a specially designed PCIe card called PCIe40. The same board handles the data acquisition flow and the distribution of fast and slow controls to the detector front-end electronics. It embeds one of the most powerful FPGAs currently available on the market with 1.2 million logic cells. The board has a bandwidth of 480 Gbits/s in both input and output over optical links and 100 Gbits/s over the PCI Express bus to the CPU. We will present how data circulate through the board and in the PC server for achieving the event building. We will focus on specific issues regarding the design of such a board with a very large FPGA, in particular in terms of power supply dimensioning and thermal simulations. The features of the board will be detailed and we will finally present the first performance measurements.
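A back-of-envelope reading of the figures quoted above: if one assumes, hypothetically, that each PCIe40 board serves 48 of the optical links, roughly 250 boards are needed to receive the 12000 fibres, with the aggregate capacities sketched below (these are capacity figures, not sustained rates).

```python
# Back-of-envelope capacity estimate for the trigger-less readout described
# above. The 48-links-per-board figure is an assumption made for illustration.
total_links = 12000               # optical links from the detector (from the text)
links_per_board = 48              # assumed number of links served by one PCIe40
optical_in_per_board_gbps = 480   # per-board optical input capacity (from the text)
pcie_per_board_gbps = 100         # per-board PCIe capacity to the CPU (from the text)

boards = total_links / links_per_board
print(f"Boards needed:           ~{boards:.0f}")
print(f"Aggregate optical input: ~{boards * optical_in_per_board_gbps / 1000:.0f} Tbit/s")
print(f"Aggregate PCIe to farm:  ~{boards * pcie_per_board_gbps / 1000:.0f} Tbit/s")
```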
Probing the Higgs self coupling via single Higgs production at the LHC
Degrassi, G.; Giardino, P. P.; Maltoni, F.; ...
2016-12-16
Here, we propose a method to determine the trilinear Higgs self coupling that is an alternative to the direct measurement of Higgs pair production total cross sections and differential distributions. The method relies on the effects that electroweak loops featuring an anomalous trilinear coupling would imprint on single Higgs production at the LHC. We first calculate these contributions to all the phenomenologically relevant Higgs production (ggF, VBF, WH, ZH, tt̄H) and decay (γγ, WW*/ZZ* → 4f, bb̄, ττ) modes at the LHC and then estimate the sensitivity to the trilinear coupling via a one-parameter fit to the single Higgs measurements at the LHC at 8 TeV. We also find that the bounds on the self coupling are already competitive with those from Higgs pair production and will be further improved in the current and next LHC runs.
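The one-parameter fit mentioned above amounts to scanning a single coupling modifier and minimising a chi-square built from the measured signal strengths. The sketch below shows that pattern with entirely hypothetical linearised dependences of the signal strengths on the modifier, toy measurements and toy uncertainties; it does not reproduce the paper's calculation.

```python
# Toy one-parameter chi-square scan over a trilinear-coupling modifier
# kappa_lambda. Coefficients, measurements and uncertainties are hypothetical.
import numpy as np

coeff = np.array([0.05, 0.03, 0.04])       # hypothetical d(mu)/d(kappa_lambda)
mu_meas = np.array([1.05, 0.95, 1.10])     # hypothetical measured signal strengths
sigma = np.array([0.10, 0.12, 0.15])       # hypothetical uncertainties

def chi2(kappa_lambda):
    mu_pred = 1.0 + coeff * (kappa_lambda - 1.0)   # linearised toy model
    return np.sum(((mu_meas - mu_pred) / sigma) ** 2)

scan = np.linspace(-20.0, 20.0, 2001)
values = np.array([chi2(k) for k in scan])
best = scan[np.argmin(values)]
allowed = scan[values - values.min() < 3.84]        # ~95% CL for one parameter
print(f"Best-fit kappa_lambda ~ {best:.1f}, "
      f"95% interval ~ [{allowed.min():.1f}, {allowed.max():.1f}]")
```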
Design approach for the development of a cryomodule for compact crab cavities for Hi-Lumi LHC
NASA Astrophysics Data System (ADS)
Pattalwar, Shrikant; Jones, Thomas; Templeton, Niklas; Goudket, Philippe; McIntosh, Peter; Wheelhouse, Alan; Burt, Graeme; Hall, Ben; Wright, Loren; Peterson, Tom
2014-01-01
A prototype Superconducting RF (SRF) cryomodule, comprising multiple compact crab cavities, is foreseen to realise a local crab crossing scheme for the "Hi-Lumi LHC", a project launched by CERN to increase the luminosity performance of the LHC. A cryomodule with two cavities will initially be installed and tested on the SPS drive accelerator at CERN to evaluate performance with high-intensity proton beams. A series of boundary conditions influence the design of the cryomodule prototype, arising from: the complexity of the cavity design, the requirement for multiple RF couplers, the close proximity to the second LHC beam pipe and the tight space constraints in the SPS and LHC tunnels. As a result, the design of the helium vessel and the cryomodule has become extremely challenging. This paper assesses some of the critical cryogenic and engineering design requirements and describes an optimised cryomodule solution for the evaluation tests on the SPS.
Boldt, Lynda; Yellowlees, David; Leggat, William
2012-01-01
The superfamily of light-harvesting complex (LHC) proteins comprises proteins with diverse functions in light-harvesting and photoprotection. LHC proteins bind chlorophyll (Chl) and carotenoids and include a family of LHCs that bind Chl a and c. Dinophytes (dinoflagellates) are predominantly Chl c binding algal taxa, bind peridinin or fucoxanthin as the primary carotenoid, and can possess a number of LHC subfamilies. Here we report 11 LHC sequences for the chlorophyll a-chlorophyll c2-peridinin protein complex (acpPC) subfamily isolated from Symbiodinium sp. C3, an ecologically important peridinin binding dinoflagellate taxon. Phylogenetic analysis of these proteins suggests the acpPC subfamily forms at least three clades within the Chl a/c binding LHC family: Clade 1 clusters with rhodophyte, cryptophyte and peridinin binding dinoflagellate sequences, Clade 2 with peridinin binding dinoflagellate sequences only, and Clade 3 with heterokontophytes, fucoxanthin and peridinin binding dinoflagellate sequences. PMID:23112815
Physics opportunities with a fixed target experiment at the LHC (AFTER@LHC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadjidakis, Cynthia; Anselmino, Mauro; Arnaldi, R.
By extracting the beam with a bent crystal or by using an internal gas target, the multi-TeV proton and lead LHC beams allow one to perform the most energetic fixed-target experiments (AFTER@LHC) and to study p+p and p+A collisions at √s_NN = 115 GeV and Pb+p and Pb+A collisions at √s_NN = 72 GeV. Such studies would address open questions in the domain of the nucleon and nucleus partonic structure at high-x, quark-gluon plasma and, by using longitudinally or transversally polarised targets, spin physics. In this paper, we discuss the physics opportunities of a fixed-target experiment at the LHC and we report on the possible technical implementations of a high-luminosity experiment. We finally present feasibility studies for Drell-Yan, open heavy-flavour and quarkonium production, with an emphasis on high-x and spin physics.
Cloud access to interoperable IVOA-compliant VOSpace storage
NASA Astrophysics Data System (ADS)
Bertocco, S.; Dowler, P.; Gaudet, S.; Major, B.; Pasian, F.; Taffoni, G.
2018-07-01
Handling, processing and archiving the huge amount of data produced by the new generation of experiments and instruments in Astronomy and Astrophysics are among the more exciting challenges to address in designing the future data management infrastructures and computing services. We investigated the feasibility of a data management and computation infrastructure, available world-wide, with the aim of merging the FAIR data management provided by IVOA standards with the efficiency and reliability of a cloud approach. Our work involved the Canadian Advanced Network for Astronomy Research (CANFAR) infrastructure and the European EGI federated cloud (EFC). We designed and deployed a pilot data management and computation infrastructure that provides IVOA-compliant VOSpace storage resources and wide access to interoperable federated clouds. In this paper, we detail the main user requirements covered, the technical choices and the implemented solutions and we describe the resulting Hybrid cloud Worldwide infrastructure, its benefits and limitations.
LHC collider phenomenology of minimal universal extra dimensions
NASA Astrophysics Data System (ADS)
Beuria, Jyotiranjan; Datta, AseshKrishna; Debnath, Dipsikha; Matchev, Konstantin T.
2018-05-01
We discuss the collider phenomenology of the model of Minimal Universal Extra Dimensions (MUED) at the Large Hadron Collider (LHC). We derive analytical results for all relevant strong pair-production processes of two level-1 Kaluza-Klein partners and use them to validate and correct the existing MUED implementation in the Fortran version of the PYTHIA event generator. We also develop a new implementation of the model in the C++ version of PYTHIA. We use our implementations in conjunction with the CHECKMATE package to derive the LHC bounds on MUED from a large number of published experimental analyses from Run 1 at the LHC.
What kind of sQGP is the matter created at RHIC and LHC?
NASA Astrophysics Data System (ADS)
Liao, Jinfeng
2011-10-01
One of the main discoveries at RHIC is the so-called "perfect fluid," and one of the most interesting things to see at the LHC is whether and how this "perfect fluid" property will change at much higher collision energies. I argue that these data will provide a unique opportunity to answer the theoretical question about the nature of the sQGP. I will discuss two very different scenarios for the QGP in the temperature range from RHIC to LHC: (1) the sQGP as a "see-saw" QGP of its electric and magnetic components, which is inspired by the deep and generic electric-magnetic duality in field theories; (2) the sQGP as a super-strong QGP, which may have a holographic dual in one form or another due to the strong coupling. The two scenarios predict different medium properties (viscosity, and opacity to hard probes) with increasing temperature from RHIC to LHC, therefore making them distinguishable in the upcoming LHC top-energy PbPb collisions. The first hints of a possible change in the created matter's structure in LHC 2.76 TeV collisions, as well as expectations for 5.5 TeV collisions, will be discussed. Supported under DOE Contract No. DE-AC02-98CH10886.
Crystal structure of plant light-harvesting complex shows the active, energy-transmitting state
Barros, Tiago; Royant, Antoine; Standfuss, Jörg; Dreuw, Andreas; Kühlbrandt, Werner
2009-01-01
Plants dissipate excess excitation energy as heat by non-photochemical quenching (NPQ). NPQ has been thought to resemble in vitro aggregation quenching of the major antenna complex, light harvesting complex of photosystem II (LHC-II). Both processes are widely believed to involve a conformational change that creates a quenching centre of two neighbouring pigments within the complex. Using recombinant LHC-II lacking the pigments implicated in quenching, we show that they have no particular role. Single crystals of LHC-II emit strong, orientation-dependent fluorescence with an emission maximum at 680 nm. The average lifetime of the main 680 nm crystal emission at 100 K is 1.31 ns, but only 0.39 ns for LHC-II aggregates under identical conditions. The strong emission and comparatively long fluorescence lifetimes of single LHC-II crystals indicate that the complex is unquenched, and that therefore the crystal structure shows the active, energy-transmitting state of LHC-II. We conclude that quenching of excitation energy in the light-harvesting antenna is due to the molecular interaction with external pigments in vitro or other pigment–protein complexes such as PsbS in vivo, and does not require a conformational change within the complex. PMID:19131972
Upgrade of the beam extraction system of the GTS-LHC electron cyclotron resonance ion source at CERN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toivanen, V., E-mail: ville.aleksi.toivanen@cern.ch; Bellodi, G.; Dimov, V.
2016-02-15
Linac3 is the first accelerator in the heavy ion injector chain of the Large Hadron Collider (LHC), providing multiply charged heavy ion beams for the CERN experimental program. The ion beams are produced with GTS-LHC, a 14.5 GHz electron cyclotron resonance ion source, operated in afterglow mode. Improvement of the GTS-LHC beam formation and beam transport along Linac3 is part of the upgrade program of the injector chain in preparation for the future high luminosity LHC. A mismatch between the ion beam properties in the ion source extraction region and the acceptance of the following Low Energy Beam Transport (LEBT) section has been identified as one of the factors limiting the Linac3 performance. The installation of a new focusing element, an einzel lens, into the GTS-LHC extraction region is foreseen as a part of the Linac3 upgrade, as well as a redesign of the first section of the LEBT. Details of the upgrade and results of a beam dynamics study of the extraction region and LEBT modifications will be presented.
1980-01-01
A highly purified chlorophyll a/b light-harvesting complex (chl a/b LHC; chl a/b ratio 1.2) was obtained from Triton-solubilized chloroplast membranes of pea and barley according to the method of Burke et al. (1978, Arch. Biochem. Biophys. 187: 252-263). Gel electrophoresis of the cation-precipitated chl a/b LHC from peas reveals the presence of four polypeptides in the 23- to 28-kdalton size range. Three of these peptides appear to be identical to those derived from re-electrophoresed CPII and CPII* bands. In freeze-fracture replicas, the cation-precipitated chl a/b LHC appears as a semicrystalline aggregate of membranous sheets containing closely spaced granules. Upon removal of the cations by dialysis, the aggregates break up into their constituent membranous sheets without changing their granular substructure. These membranous sheets can be resolubilized in 1.5% Triton X-100, and the chl a/b LHC particles then reconstituted into soybean lecithin liposomes. Freeze-fracture micrographs of the reconstituted chl a/b LHC vesicles suspended in a low salt medium reveal randomly dispersed approximately 80-Å particles on both concave and convex fracture faces as well as some crystalline particle arrays, presumably resulting from incompletely solubilized fragments of the membranous sheets. Based on the approximately 80-Å diameter of the particles, and on the assumption that one freeze-fracture particle represents the structural unit of one chl a/b LHC aggregate, a theoretical mol wt of approximately 200 kdalton has been calculated for the chl a/b LHC. Deep-etching and negative-staining techniques reveal that the chl a/b LHC particles are also exposed on the surface of the bilayer membranes. Addition of greater than or equal to 2 mM MgCl2 or greater than or equal to 60 mM NaCl to the reconstituted vesicles leads to their aggregation and, with divalent cations, to the formation of extensive membrane stacks. At the same time, the chl a/b LHC particles become clustered into the adhering membrane regions. Under these conditions the particles in adjacent membranes usually become precisely aligned. Evidence is presented to support the hypothesis that adhesion between the chl a/b LHC particles is mediated by hydrophobic interactions, and that the cations are needed to neutralize surface charges on the particles. PMID:7350170
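The ~200-kdalton estimate quoted above can be reproduced with a back-of-the-envelope calculation: treat the ~80-Å freeze-fracture particle as a sphere and multiply its volume by a typical protein density. The density value below (~1.35 g/cm³) is an assumption chosen for illustration, not a number from the paper, and the estimate ignores bound pigment, lipid and detergent.

```python
import math

DIAMETER_ANGSTROM = 80.0     # freeze-fracture particle diameter from the abstract
DENSITY_G_PER_CM3 = 1.35     # assumed typical protein density (illustrative only)
AVOGADRO = 6.022e23

radius_cm = (DIAMETER_ANGSTROM / 2.0) * 1e-8          # 1 Angstrom = 1e-8 cm
volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm ** 3   # sphere volume
mass_g = DENSITY_G_PER_CM3 * volume_cm3               # mass of one particle
mass_kda = mass_g * AVOGADRO / 1000.0                 # grams -> daltons -> kdaltons

print(f"estimated particle mass ~ {mass_kda:.0f} kDa")  # ~220 kDa, same order as ~200 kDa
```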
Bonente, Giulia; Ballottari, Matteo; Truong, Thuy B.; Morosinotto, Tomas; Ahn, Tae K.; Fleming, Graham R.; Niyogi, Krishna K.; Bassi, Roberto
2011-01-01
In photosynthetic organisms, feedback dissipation of excess absorbed light energy balances harvesting of light with metabolic energy consumption. This mechanism prevents photodamage caused by reactive oxygen species produced by the reaction of chlorophyll (Chl) triplet states with O2. Plants have been found to perform the heat dissipation in specific proteins, binding Chls and carotenoids (Cars), that belong to the Lhc family, while triggering of the process is performed by the PsbS subunit, needed for lumenal pH detection. PsbS is not found in algae, suggesting important differences in energy-dependent quenching (qE) machinery. Consistent with this suggestion, a different Lhc-like gene product, called LhcSR3 (formerly known as LI818) has been found to be essential for qE in Chlamydomonas reinhardtii. In this work, we report the production of two recombinant LhcSR isoforms from C. reinhardtii and their biochemical and spectroscopic characterization. We found the following: (i) LhcSR isoforms are Chl a/b– and xanthophyll-binding proteins, contrary to higher plant PsbS; (ii) the LhcSR3 isoform, accumulating in high light, is a strong quencher of Chl excited states, exhibiting a very fast fluorescence decay, with lifetimes below 100 ps, capable of dissipating excitation energy from neighbor antenna proteins; (iii) the LhcSR3 isoform is highly active in the transient formation of Car radical cation, a species proposed to act as a quencher in the heat dissipation process. Remarkably, the radical cation signal is detected at wavelengths corresponding to the Car lutein, rather than to zeaxanthin, implying that the latter, predominant in plants, is not essential; (iv) LhcSR3 is responsive to low pH, the trigger of non-photochemical quenching, since it binds the non-photochemical quenching inhibitor dicyclohexylcarbodiimide, and increases its energy dissipation properties upon acidification. This is the first report of an isolated Lhc protein constitutively active in energy dissipation in its purified form, opening the way to detailed molecular analysis. Owing to its protonatable residues and constitutive excitation energy dissipation, this protein appears to merge both pH-sensing and energy-quenching functions, accomplished respectively by PsbS and monomeric Lhcb proteins in plants. PMID:21267060
Improving Design Efficiency for Large-Scale Heterogeneous Circuits
NASA Astrophysics Data System (ADS)
Gregerson, Anthony
Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment. Together, these research efforts help to improve the efficiency and decrease the cost of developing the large-scale, heterogeneous circuits needed to enable large-scale applications in high-energy physics and other important areas.
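As an illustration of the information-aware idea described above (this is not the dissertation's actual algorithm), the sketch below estimates the empirical entropy of each signal that would cross a candidate cut from sampled bit traces, and scores candidate two-chip partitions by the total entropy of their cut signals, so that cuts crossing highly compressible (low-entropy) signals are preferred. The netlist, signal names and traces are invented for the example.

```python
import math
from collections import Counter
from itertools import combinations

def empirical_entropy(samples):
    """Shannon entropy (bits/sample) of an observed symbol stream."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical netlist: each net connects two blocks and carries a sampled trace.
nets = {
    ("sort", "merge"):     [0, 1, 0, 1, 1, 0, 1, 0],   # high-entropy signal
    ("sort", "filter"):    [0, 0, 0, 0, 1, 0, 0, 0],   # mostly idle -> compressible
    ("merge", "readout"):  [1, 1, 1, 1, 1, 1, 1, 1],   # constant -> free to cut
    ("filter", "readout"): [0, 1, 1, 0, 1, 1, 0, 1],
}
blocks = {"sort", "merge", "filter", "readout"}

def cut_cost(part_a):
    """Total entropy (a proxy for post-compression bandwidth) of the cut nets."""
    return sum(empirical_entropy(trace)
               for (u, v), trace in nets.items()
               if (u in part_a) != (v in part_a))

# Exhaustive scan over balanced 2-way partitions (fine for a toy example).
best = min((frozenset(c) for c in combinations(sorted(blocks), 2)), key=cut_cost)
print("chip A:", set(best), " chip B:", blocks - best,
      " cut entropy:", round(cut_cost(best), 2))
```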
First LHCb measurement with data from the LHC Run 2
NASA Astrophysics Data System (ADS)
Anderlini, L.; Amerio, S.
2017-01-01
LHCb has recently introduced a novel real-time detector alignment and calibration strategy for Run 2. Data collected at the start of each LHC fill are processed in a few minutes and used to update the alignment, while the calibration constants are evaluated for each run of data taking. An increase in the CPU and disk capacity of the event filter farm, combined with improvements to the reconstruction software, allows for efficient, exclusive selections already in the first stage of the High Level Trigger (HLT1), while the second stage, HLT2, performs complete, offline-quality event reconstruction. In Run 2, LHCb will collect the largest data sample of charm mesons ever recorded. Novel data processing and analysis techniques are required to maximise the physics potential of this data sample with the available computing resources, taking into account data preservation constraints. In this write-up, we describe the full analysis chain used to obtain important results analysing the data collected in proton-proton collisions in 2015, such as the J/ψ and open charm production cross-sections, and consider the further steps required to obtain real-time results after the LHCb upgrade.
NASA Astrophysics Data System (ADS)
Balcas, J.; Hendricks, T. W.; Kcira, D.; Mughal, A.; Newman, H.; Spiropulu, M.; Vlimant, J. R.
2017-10-01
The SDN Next Generation Integrated Architecture (SDN-NGenIA) project addresses some of the key challenges facing the present and next generations of science programs in HEP, astrophysics, and other fields, whose potential discoveries depend on their ability to distribute, process and analyze globally distributed Petascale to Exascale datasets. The SDN-NGenIA system under development by Caltech and partner HEP and network teams is focused on the coordinated use of network, computing and storage infrastructures, through a set of developments that build on the experience gained in recently completed and previous projects that use dynamic circuits with bandwidth guarantees to support major network flows, as demonstrated across the LHC Open Network Environment [1] and in large-scale demonstrations over the last three years, and recently integrated with the PhEDEx and Asynchronous Stage Out data management applications of the CMS experiment at the Large Hadron Collider. In addition to the general program goals of supporting the network needs of the LHC and other science programs with similar needs, a recent focus is the use of the Leadership HPC facility at Argonne National Lab (ALCF) for data-intensive applications.
Mechanical Design Studies of the MQXF Long Model Quadrupole for the HiLumi LHC
Pan, Heng; Anderssen, Eric; Ambrosio, Giorgio; ...
2016-12-20
The Large Hadron Collider Luminosity upgrade (HiLumi) program requires new low-β triplet quadrupole magnets, called MQXF, in the Interaction Region (IR) to increase the LHC peak and integrated luminosity. The MQXF magnets, designed and fabricated in collaboration between CERN and the U.S. LARP, will all have the same cross section. The MQXF long model, referred to as MQXFA, is a quadrupole using the Nb3Sn superconducting technology with a 150 mm aperture and a 4.2 m magnetic length and is the first long prototype of the final MQXF design. The MQXFA magnet is based on the previous LARP HQ and MQXFS designs. In this paper we present the baseline design of the MQXFA structure with detailed 3D numerical analysis. A detailed tolerance analysis of the baseline case has been performed by using a 3D finite element model, which allows fast computation of structures modelled with actual tolerances. Tolerance sensitivity of each component is discussed to verify the actual tolerances to be achieved by vendors. Finally, a tolerance stack-up analysis is presented at the end of this paper.
ACTS: from ATLAS software towards a common track reconstruction software
NASA Astrophysics Data System (ADS)
Gumpert, C.; Salzburger, A.; Kiehn, M.; Hrdinka, J.; Calace, N.; ATLAS Collaboration
2017-10-01
Reconstruction of charged particles’ trajectories is a crucial task for most particle physics experiments. The high instantaneous luminosity achieved at the LHC leads to a high number of proton-proton collisions per bunch crossing, which has put the track reconstruction software of the LHC experiments through a thorough test. Preserving track reconstruction performance under increasingly difficult experimental conditions, while keeping the usage of computational resources at a reasonable level, is an inherent problem for many HEP experiments. Exploiting concurrent algorithms and using multivariate techniques for track identification are the primary strategies to achieve that goal. Starting from current ATLAS software, the ACTS project aims to encapsulate track reconstruction software into a generic, framework- and experiment-independent software package. It provides a set of high-level algorithms and data structures for performing track reconstruction tasks as well as fast track simulation. The software is developed with special emphasis on thread-safety to support parallel execution of the code and data structures are optimised for vectorisation to speed up linear algebra operations. The implementation is agnostic to the details of the detection technologies and magnetic field configuration which makes it applicable to many different experiments.
Expected Performance of the LHC Synchrotron-Light Telescope (BSRT) and Abort-Gap Monitor (BSRA)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisher, Alan (SLAC)
2010-06-07
This Report presents calculations of the synchrotron light from proton and lead-ion beams in the LHC at all energies from 0.45 to 7 TeV. It computes the emission from three sources: the uniform-field region of the D3 dipole, the dipole's edge field, and the short undulator just upstream. Light emitted at or near visible wavelengths is assessed for making optical measurements of transverse beam profiles and for monitoring the emptiness of the abort gap in the fill pattern. There is sufficient light for both applications, although both species pass through energy ranges in the ramp with small photon counts. Effects limiting image resolution are examined, including geometric optics, depth of field, and diffraction. The Report also considers recent suggestions that the undulator, intended to supplement the dipole for low energies, should not be ramped off at high energies and perhaps should not be used at all. We conclude that the undulator is essential at low energy for both species, but that it is possible to leave the undulator on at the cost of some blurring at intermediate energies.
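For orientation on the scale of the dipole emission discussed above, the critical photon energy of synchrotron radiation, ε_c = 3ħcγ³/(2ρ), can be evaluated for a 7 TeV proton beam. The bending radius used below is that of an LHC main dipole (~2804 m) rather than the D3 dipole treated in the Report, so this is only an order-of-magnitude illustration.

```python
import math

HBARC_EV_M = 1.97327e-7      # hbar*c in eV*m
PROTON_MASS_GEV = 0.93827
BEAM_ENERGY_GEV = 7000.0
BENDING_RADIUS_M = 2804.0    # LHC main-dipole bending radius (illustrative choice)

gamma = BEAM_ENERGY_GEV / PROTON_MASS_GEV
critical_energy_ev = 3.0 * HBARC_EV_M * gamma**3 / (2.0 * BENDING_RADIUS_M)

# ~44 eV at 7 TeV; at injection (0.45 TeV) gamma^3 is ~3700x smaller, pushing the
# spectrum into the infrared, which is why the undulator is needed at low energy.
print(f"gamma = {gamma:.0f}, critical photon energy ~ {critical_energy_ev:.0f} eV")
```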
Jennings, Aaron A; Li, Zijian
2015-09-01
Surface soil contamination is a worldwide problem. Many regulatory jurisdictions attempt to control human exposures with regulatory guidance values (RGVs) that specify a soil's maximum allowable concentration. Pesticides are important soil contaminants because of their intentional toxicity and widespread surface soil application. Worldwide, at least 174 regulatory jurisdictions from 54 United Nations member states have published more than 19,400 pesticide RGVs for at least 739 chemically unique pesticides. This manuscript examines the variability of the guidance values that are applied worldwide to the original 2001 Stockholm Convention persistent organic pollutants (POP) pesticides (Aldrin, Chlordane, DDT, Dieldrin, Endrin, Heptachlor, Mirex, and Toxaphene) for which at least 1667 RGVs have been promulgated. Results indicate that the spans of the RGVs applied to each of these pesticides vary from 6.1 orders of magnitude for Toxaphene to 10.0 orders of magnitude for Mirex. The distribution of values across these value spans resembles the distribution of lognormal random variables, but also contain non-random value clusters. Approximately 40% of all the POP RGVs fall within uncertainty bounds computed from the U.S. Environmental Protection Agency (USEPA) RGV cancer risk model. Another 22% of the values fall within uncertainty bounds computed from the USEPA's non-cancer risk model, but the cancer risk calculations yield the binding (lowest) value for all POP pesticides except Endrin. The results presented emphasize the continued need to rationalize the RGVs applied worldwide to important soil contaminants. Copyright © 2015 Elsevier Ltd. All rights reserved.
Designing a Network and Systems Computing Curriculum: The Stakeholders and the Issues
ERIC Educational Resources Information Center
Tan, Grace; Venables, Anne
2010-01-01
Since 2001, there has been a dramatic decline in Information Technology and Computer Science student enrolments worldwide. As a consequence, many institutions have evaluated their offerings and revamped their programs to include units designed to capture students' interests and increase subsequent enrolment. Likewise, at Victoria University the…
Assessing Computer Literacy: A Validated Instrument and Empirical Results.
ERIC Educational Resources Information Center
Gabriel, Roy M.
1985-01-01
Describes development of a comprehensive computer literacy assessment battery for K-12 curriculum based on objectives of a curriculum implemented in the Worldwide Department of Defense Dependents Schools system. Test development and field test data are discussed and a correlational analysis which assists in interpretation of test results is…
ERIC Educational Resources Information Center
Wang, Li
2005-01-01
With the advent of networked computers and Internet technology, computer-based instruction has been widely used in language classrooms throughout the United States. Computer technologies have dramatically changed the way people gather information, conduct research and communicate with others worldwide. Considering the tremendous startup expenses,…
High-Throughput Computing on High-Performance Platforms: A Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oleynik, D.; Panitkin, S.; Turilli, Matteo
The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size resources. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan -- a DOE leadership facility -- in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons for how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Binetruy, Pierre
2009-09-17
What if the light at the end of the LHC tunnel were cosmic? In other words, what can the LHC contribute to our knowledge of the Universe? The rise in energy of particle accelerators allows us to better grasp the primordial universe, hot and dense. But in what sense do we say that the LHC reproduces conditions close to the Big Bang? What information does it give us about the content of the Universe? Is dark matter detectable at the LHC? Dark energy? Why is the antimatter accumulated at CERN so rare in the Universe? And if CERN has built its reputation on exploring the weak and strong forces at work within atoms and their nuclei, can the LHC give us information about the gravitational force that governs cosmic evolution? For some thirty years, our understanding of the universe on its largest scales and our grasp of its behaviour at the smallest distances have been intimately linked: how will the LHC put this unified vision to the experimental test? Open to all, free admission / Reservations at +41 (0)22 767 76 76
ERIC Educational Resources Information Center
Abawi, Karim; Gertiser, Lynn; Idris, Raqibat; Villar, José; Langer, Ana; Chatfield, Alison; Campana, Aldo
2017-01-01
Postpartum hemorrhage (PPH) is the leading cause of maternal mortality in most developing and low-income countries and the cause of one-quarter of maternal deaths worldwide. With appropriate and prompt care, these deaths can be prevented. With the current and rapidly developing research and worldwide access to information, a lack of knowledge of…
Worldwide OMEGA and Very Low Frequency (VLF) Transmitter Outages, January to December 1980.
1981-05-01
[OCR-damaged report abstract; recoverable content: a computer-compiled listing of worldwide OMEGA and Very Low Frequency (VLF) transmitter outages for the period January to December 1980 (report FAA-CT-81-26 / FAA-RD-81-29), covering stations such as GBR - Rugby, England (16.00 kHz) and NAA - Cutler, Maine.]
NASA Astrophysics Data System (ADS)
Rabemananajara, Tanjona R.; Horowitz, W. A.
2017-09-01
To make predictions for particle physics processes, one has to compute the cross section of the specific process, as this is what can be measured in a modern collider experiment such as the Large Hadron Collider (LHC) at CERN. It has proven extremely difficult to compute scattering amplitudes using conventional Feynman-diagram methods. Calculations with Feynman diagrams are realizations of a perturbative expansion, and one has to set up all topologically distinct diagrams for a given process up to a given order in the coupling of the theory, which quickly makes the calculation of scattering amplitudes unwieldy. Fortunately, the calculations can be simplified by considering helicity amplitudes, in particular the Maximally Helicity Violating (MHV) amplitudes. This can be extended to the formalism of on-shell recursion, which derives, in a much simpler way, the expression of a higher-point scattering amplitude from lower-point ones.
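For reference, the compactness that MHV amplitudes buy is captured by the Parke-Taylor formula for the colour-ordered tree-level amplitude of n gluons with exactly two negative-helicity legs i and j, quoted here up to coupling and phase conventions:

```latex
A_n^{\mathrm{MHV}}\!\left(1^+,\dots,i^-,\dots,j^-,\dots,n^+\right)
  = \frac{\langle i\,j\rangle^{4}}
         {\langle 1\,2\rangle\,\langle 2\,3\rangle\cdots\langle n\,1\rangle}
```

A single ratio of spinor products replaces the sum over all topologically distinct Feynman diagrams for this helicity configuration, and on-shell (BCFW-type) recursion builds higher-point amplitudes from such lower-point building blocks.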
A Dashboard for the Italian Computing in ALICE
NASA Astrophysics Data System (ADS)
Elia, D.; Vino, G.; Bagnasco, S.; Crescente, A.; Donvito, G.; Franco, A.; Lusso, S.; Mura, D.; Piano, S.; Platania, G.; ALICE Collaboration
2017-10-01
A dashboard devoted to the computing in the Italian sites for the ALICE experiment at the LHC has been deployed. A combination of different complementary monitoring tools is typically used in most of the Tier-2 sites: this makes it somewhat difficult to figure out at a glance the status of the site and to compare information extracted from different sources for debugging purposes. To overcome these limitations a dedicated ALICE dashboard has been designed and implemented in each of the ALICE Tier-2 sites in Italy: in particular, it provides a single, interactive and easily customizable graphical interface where heterogeneous data are presented. The dashboard is based on two main ingredients: an open source time-series database and a dashboard builder tool for visualizing time-series metrics. Various sensors, able to collect data from the multiple data sources, have also been written. A first version of a national computing dashboard has been implemented using a specific instance of the builder to gather data from all the local databases.
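The abstract does not name the specific database or builder, so the following is only a sketch of the "sensor" pattern it describes: sample a local quantity and push it over HTTP in an InfluxDB-style line protocol. The endpoint, database name, measurement and site tag below are hypothetical placeholders.

```python
import time
import urllib.request

TSDB_WRITE_URL = "http://tsdb.example.org:8086/write?db=site_monitoring"  # hypothetical endpoint

def push_metric(measurement, tags, value, ts_ns=None):
    """Send one point in InfluxDB line-protocol format: name,tags value timestamp."""
    ts_ns = ts_ns or time.time_ns()
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    line = f"{measurement},{tag_str} value={value} {ts_ns}"
    req = urllib.request.Request(TSDB_WRITE_URL, data=line.encode(), method="POST")
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status  # 204 indicates success for an InfluxDB 1.x write endpoint

if __name__ == "__main__":
    # Example sensor: report the number of running jobs at a (hypothetical) Tier-2 site;
    # a real sensor would scrape this from the batch system or storage element.
    running_jobs = 1234
    push_metric("alice_jobs", {"site": "INFN-BARI", "state": "running"}, running_jobs)
```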
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caola, Fabrizio; Melnikov, Kirill; Rontsch, Raoul
We compute the next-to-leading-order QCD corrections to the production of two Z-bosons in the annihilation of two gluons at the LHC. Being enhanced by a large gluon flux, these corrections provide a distinct and, potentially, the dominant part of the N3LO QCD contributions to Z-pair production in proton collisions. The gg → ZZ annihilation is a loop-induced process that receives the dominant contribution from loops of five light quarks, which are included in our computation in the massless approximation. We find that QCD corrections increase the gg → ZZ production cross section by O(50%-100%) depending on the values of the renormalization and factorization scales used in the leading-order computation and the collider energy. Furthermore, the large corrections to the gg → ZZ channel increase the pp → ZZ cross section by about 6% to 8%, exceeding the estimated theoretical uncertainty of the recent next-to-next-to-leading-order QCD calculation.
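The two percentages quoted in this abstract are mutually consistent; writing the loop-induced gg contribution as a fraction f of the total pp → ZZ cross section gives, roughly,

```latex
\frac{\Delta\sigma_{pp\to ZZ}}{\sigma_{pp\to ZZ}}
\;\approx\; f \times \frac{\Delta\sigma_{gg\to ZZ}}{\sigma_{gg\to ZZ}}
\;\;\Rightarrow\;\;
f \;\approx\; \frac{6\text{--}8\%}{50\text{--}100\%} \;\approx\; 0.1
```

i.e. in this picture the gg → ZZ channel accounts for roughly ten per cent of the inclusive ZZ rate.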
Graphical processors for HEP trigger systems
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.
2017-02-01
General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to employ GPUs as accelerators in offline computations. With the steady decrease of GPU latencies and the increase in link and memory throughputs, time is ripe for real-time applications using GPUs in high-energy physics data acquisition and trigger systems. We will discuss the use of online parallel computing on GPUs for synchronous low level trigger systems, focusing on tests performed on the trigger of the CERN NA62 experiment. Latencies of all components need analysing, networking being the most critical. To keep it under control, we envisioned NaNet, an FPGA-based PCIe Network Interface Card (NIC) enabling GPUDirect connection. Moreover, we discuss how specific trigger algorithms can be parallelised and thus benefit from a GPU implementation, in terms of increased execution speed. Such improvements are particularly relevant for the foreseen LHC luminosity upgrade where highly selective algorithms will be crucial to maintain sustainable trigger rates with very high pileup.
SUSY searches at the LHC with the ATLAS experiment
D'Onofrio, Monica
2017-12-18
First ATLAS searches for signals of Supersymmetry in proton-proton collisions at the LHC are presented. These searches are performed in various channels containing different lepton and jet multiplicities in the final states; the full data sample recorded in the 2010 LHC run, corresponding to an integrated luminosity of 35 pb⁻¹, has been analysed. Limits on squarks and gluinos are the most stringent to date.
Lansberg, J. P.; Anselmino, M.; Arnaldi, R.; ...
2016-11-19
Here we discuss the potential of AFTER@LHC to measure single-transverse-spin asymmetries in open-charm and bottomonium production. With a HERMES-like hydrogen polarised target, such measurements over a year can reach precisions close to the per cent level. This is particularly remarkable since these analyses can probably not be carried out anywhere else.
Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm
NASA Astrophysics Data System (ADS)
Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Twomey, M. S.; Zaytsev, A.
2014-06-01
With the LHC collider at CERN currently going through the period of Long Shutdown 1, there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM-based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. OpenStack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for the MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.
Implementation of the ATLAS trigger within the multi-threaded software framework AthenaMT
NASA Astrophysics Data System (ADS)
Wynne, Ben; ATLAS Collaboration
2017-10-01
We present an implementation of the ATLAS High Level Trigger, HLT, that provides parallel execution of trigger algorithms within the ATLAS multithreaded software framework, AthenaMT. This development will enable the ATLAS HLT to meet future challenges due to the evolution of computing hardware and upgrades of the Large Hadron Collider, LHC, and ATLAS Detector. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further, to up to 7.5 times the design value, in 2026 following LHC and ATLAS upgrades. This includes an upgrade of the ATLAS trigger architecture that will result in an increase in the HLT input rate by a factor of 4 to 10 compared to the current maximum rate of 100 kHz. The current ATLAS multiprocess framework, AthenaMP, manages a number of processes that each execute algorithms sequentially for different events. AthenaMT will provide a fully multi-threaded environment that will additionally enable concurrent execution of algorithms within an event. This has the potential to significantly reduce the memory footprint on future manycore devices. An additional benefit of the HLT implementation within AthenaMT is that it facilitates the integration of offline code into the HLT. The trigger must retain high rejection in the face of increasing numbers of pileup collisions. This will be achieved by greater use of offline algorithms that are designed to maximize the discrimination of signal from background. Therefore a unification of the HLT and offline reconstruction software environment is required. This has been achieved while at the same time retaining important HLT-specific optimisations that minimize the computation performed to reach a trigger decision. Such optimizations include early event rejection and reconstruction within restricted geometrical regions. We report on an HLT prototype in which the need for HLT-specific components has been reduced to a minimum. Promising results have been obtained with a prototype that includes the key elements of trigger functionality including regional reconstruction and early event rejection. We report on the first experience of migrating trigger selections to this new framework and present the next steps towards a full implementation of the ATLAS trigger.
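As a language-agnostic illustration of the intra-event parallelism described above (this is not AthenaMT code), the sketch below runs independent reconstruction "algorithms" for a single event concurrently in a thread pool and only schedules an algorithm once the data objects it declares as inputs have been produced. The algorithm names, inputs and outputs are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical algorithms: name -> (required inputs, produced output, callable).
def make_alg(label):
    def run(event, inputs):
        # Placeholder work: real algorithms would do clustering, tracking, etc.
        return f"{label}({', '.join(sorted(inputs))})"
    return run

ALGORITHMS = {
    "CaloClustering": ((),                     "clusters",  make_alg("clusters")),
    "TrackFinding":   ((),                     "tracks",    make_alg("tracks")),
    "ElectronID":     (("clusters", "tracks"), "electrons", make_alg("electrons")),
}

def process_event(event, max_workers=4):
    store, pending = {}, dict(ALGORITHMS)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        while pending:
            # Schedule every algorithm whose inputs are already in the event store.
            ready = {name: spec for name, spec in pending.items()
                     if all(key in store for key in spec[0])}
            futures = {pool.submit(spec[2], event, {k: store[k] for k in spec[0]}):
                       (name, spec[1]) for name, spec in ready.items()}
            for fut, (name, output_key) in futures.items():
                store[output_key] = fut.result()   # wait for this batch to finish
                del pending[name]
    return store

print(process_event({"id": 42}))
```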
A Study of ATLAS Grid Performance for Distributed Analysis
NASA Astrophysics Data System (ADS)
Panitkin, Sergey; Fine, Valery; Wenaus, Torre
2012-12-01
In the past two years the ATLAS Collaboration at the LHC has collected a large volume of data and published a number of groundbreaking papers. The Grid-based ATLAS distributed computing infrastructure played a crucial role in enabling timely analysis of the data. We will present a study of the performance and usage of the ATLAS Grid as a platform for physics analysis in 2011. This includes studies of general properties as well as timing properties of user jobs (wait time, run time, etc.). These studies are based on mining data archived by the PanDA workload management system.
Next-to-leading-order γγ + 2-jet production at the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bern, Z.; Dixon, L. J.; Febres Cordero, F.
We present next-to-leading-order QCD predictions for cross sections and for a comprehensive set of distributions in γγ+2-jet production at the Large Hadron Collider. We consider the contributions from loop amplitudes for two photons and four gluons, but we neglect top quarks. We use BlackHat together with SHERPA to carry out the computation. We use a Frixione cone isolation for the photons. We study standard sets of cuts on the jets and the photons and also sets of cuts appropriate for studying backgrounds to Higgs-boson production via vector-boson fusion.
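For context, the smooth-cone (Frixione) isolation mentioned above requires that, for every cone of radius r smaller than the isolation radius R around the photon, the hadronic transverse energy inside the cone stays below a smoothly vanishing bound; schematically (the paper's specific choices of ε, n and R are not reproduced here):

```latex
\sum_{i\,\in\,\mathrm{hadrons}} E_{T,i}\,\theta\!\left(r - R_{i\gamma}\right)
\;\le\;
\epsilon\, E_T^{\gamma}\left(\frac{1-\cos r}{1-\cos R}\right)^{n}
\qquad \text{for all } r < R
```

Because the allowed hadronic energy vanishes as r → 0, the criterion removes the collinear photon-quark fragmentation contribution while remaining infrared safe.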
3D detectors with high space and time resolution
NASA Astrophysics Data System (ADS)
Loi, A.
2018-01-01
For future high luminosity LHC experiments it will be important to develop new detector systems with increased space and time resolution and also better radiation hardness in order to operate in a high-luminosity environment. A possible technology which could give such performance is 3D silicon detectors. This work explores the possibility of a pixel geometry by designing and simulating different solutions, using Sentaurus Technology Computer Aided Design (TCAD) as the design and simulation tool, and analysing their performance. A key factor during the selection was the generated electric field and the carrier velocity inside the active area of the pixel.
Associated Higgs-W-boson production at hadron colliders: a fully exclusive QCD calculation at NNLO.
Ferrera, Giancarlo; Grazzini, Massimiliano; Tramontano, Francesco
2011-10-07
We consider QCD radiative corrections to standard model Higgs-boson production in association with a W boson in hadron collisions. We present a fully exclusive calculation up to next-to-next-to-leading order (NNLO) in QCD perturbation theory. To perform this NNLO computation, we use a recently proposed version of the subtraction formalism. Our calculation includes finite-width effects, the leptonic decay of the W boson with its spin correlations, and the decay of the Higgs boson into a b$\bar{b}$ pair. We present selected numerical results at the Tevatron and the LHC.
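The abstract does not name the subtraction formalism; for a transverse-momentum (qT) subtraction of the kind commonly used for such colour-singlet final states, the master formula can be written schematically as (labels illustrative):

```latex
d\sigma^{\mathrm{NNLO}}_{WH}
= \mathcal{H}^{\mathrm{NNLO}}_{WH}\otimes d\sigma^{\mathrm{LO}}_{WH}
+ \left[\, d\sigma^{\mathrm{NLO}}_{WH+\mathrm{jet}} - d\sigma^{\mathrm{CT}}_{\mathrm{NNLO}} \,\right]
```

where the counterterm dσ^CT cancels the singular behaviour of the WH+jet cross section as the transverse momentum of the WH system vanishes, leaving a finite, fully exclusive prediction.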
Next Generation Workload Management and Analysis System for Big Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
De, Kaushik
We report on the activities and accomplishments of a four-year project (a three-year grant followed by a one-year no cost extension) to develop a next generation workload management system for Big Data. The new system is based on the highly successful PanDA software developed for High Energy Physics (HEP) in 2005. PanDA is used by the ATLAS experiment at the Large Hadron Collider (LHC), and the AMS experiment at the space station. The program of work described here was carried out by two teams of developers working collaboratively at Brookhaven National Laboratory (BNL) and the University of Texas at Arlington (UTA). These teams worked closely with the original PanDA team – for the sake of clarity the work of the next generation team will be referred to as the BigPanDA project. Their work has led to the adoption of BigPanDA by the COMPASS experiment at CERN, and many other experiments and science projects worldwide.
Determination of the event collision time with the ALICE detector at the LHC
NASA Astrophysics Data System (ADS)
Adam, J.; Adamová, D.; Aggarwal, M. M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahmad, S.; Ahn, S. U.; Aiola, S.; Akindinov, A.; Alam, S. N.; Albuquerque, D. S. D.; Aleksandrov, D.; Alessandro, B.; Alexandre, D.; Alfaro Molina, R.; Alici, A.; Alkin, A.; Alme, J.; Alt, T.; Altinpinar, S.; Altsybeev, I.; Alves Garcia Prado, C.; An, M.; Andrei, C.; Andrews, H. A.; Andronic, A.; Anguelov, V.; Anson, C.; Antičić, T.; Antinori, F.; Antonioli, P.; Anwar, R.; Aphecetche, L.; Appelshäuser, H.; Arcelli, S.; Arnaldi, R.; Arnold, O. W.; Arsene, I. C.; Arslandok, M.; Audurier, B.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Badalà, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Balasubramanian, S.; Baldisseri, A.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barnaföldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Barth, K.; Bartke, J.; Bartsch, E.; Basile, M.; Bastid, N.; Basu, S.; Bathen, B.; Batigne, G.; Batista Camejo, A.; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Beltran, L. G. E.; Belyaev, V.; Bencedi, G.; Beole, S.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielčík, J.; Bielčíková, J.; Bilandzic, A.; Biro, G.; Biswas, R.; Biswas, S.; Bjelogrlic, S.; Blair, J. T.; Blau, D.; Blume, C.; Bock, F.; Bogdanov, A.; Boldizsár, L.; Bombara, M.; Bonora, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Botta, E.; Bourjau, C.; Braun-Munzinger, P.; Bregant, M.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buhler, P.; Buitron, S. A. I.; Buncic, P.; Busch, O.; Buthelezi, Z.; Butt, J. B.; Buxton, J. T.; Cabala, J.; Caffarri, D.; Caines, H.; Caliva, A.; Calvo Villar, E.; Camerini, P.; Carena, F.; Carena, W.; Carnesecchi, F.; Castillo Castellanos, J.; Castro, A. J.; Casula, E. A. R.; Ceballos Sanchez, C.; Cepila, J.; Cerello, P.; Cerkala, J.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chauvin, A.; Chelnokov, V.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Chibante Barroso, V.; Chinellato, D. D.; Cho, S.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Chung, S. U.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Conesa Balbastre, G.; Conesa del Valle, Z.; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Corrales Morales, Y.; Cortés Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Crkovská, J.; Crochet, P.; Cruz Albino, R.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danisch, M. C.; Danu, A.; Das, D.; Das, I.; Das, S.; Dash, A.; Dash, S.; De, S.; De Caro, A.; de Cataldo, G.; de Conti, C.; de Cuveland, J.; De Falco, A.; De Gruttola, D.; De Marco, N.; De Pasquale, S.; De Souza, R. D.; Deisting, A.; Deloff, A.; Deplano, C.; Dhankher, P.; Di Bari, D.; Di Mauro, A.; Di Nezza, P.; Di Ruzza, B.; Diaz Corchero, M. A.; Dietel, T.; Dillenseger, P.; Divià, R.; Djuvsland, Ø.; Dobrin, A.; Domenicis Gimenez, D.; Dönigus, B.; Dordic, O.; Drozhzhova, T.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Duggal, A. K.; Dupieux, P.; Ehlers, R. 
J.; Elia, D.; Endress, E.; Engel, H.; Epple, E.; Erazmus, B.; Erhardt, F.; Espagnon, B.; Esumi, S.; Eulisse, G.; Eum, J.; Evans, D.; Evdokimov, S.; Eyyubova, G.; Fabbietti, L.; Fabris, D.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Feliciello, A.; Feofilov, G.; Ferencei, J.; Fernández Téllez, A.; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Feuillard, V. J. G.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. M.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Francisco, A.; Frankenfeld, U.; Fronze, G. G.; Fuchs, U.; Furget, C.; Furs, A.; Fusco Girard, M.; Gaardhøje, J. J.; Gagliardi, M.; Gago, A. M.; Gajdosova, K.; Gallio, M.; Galvan, C. D.; Gangadharan, D. R.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Garg, K.; Garg, P.; Gargiulo, C.; Gasik, P.; Gauger, E. F.; Gay Ducati, M. B.; Germain, M.; Ghosh, P.; Ghosh, S. K.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glässel, P.; Goméz Coral, D. M.; Gomez Ramirez, A.; Gonzalez, A. S.; Gonzalez, V.; González-Zamora, P.; Gorbunov, S.; Görlich, L.; Gotovac, S.; Grabski, V.; Graczykowski, L. K.; Graham, K. L.; Greiner, L.; Grelli, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grion, N.; Gronefeld, J. M.; Grosse-Oetringhaus, J. F.; Grosso, R.; Gruber, L.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gunji, T.; Gupta, A.; Gupta, R.; Guzman, I. B.; Haake, R.; Hadjidakis, C.; Hamagaki, H.; Hamar, G.; Hamon, J. C.; Harris, J. W.; Harton, A.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Hellbär, E.; Helstrup, H.; Herghelegiu, A.; Herrera Corral, G.; Herrmann, F.; Hess, B. A.; Hetland, K. F.; Hillemanns, H.; Hippolyte, B.; Hladky, J.; Horak, D.; Hosokawa, R.; Hristov, P.; Hughes, C.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Ilkaev, R.; Inaba, M.; Ippolitov, M.; Irfan, M.; Isakov, V.; Islam, M. S.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jacak, B.; Jacazio, N.; Jacobs, P. M.; Jadhav, M. B.; Jadlovska, S.; Jadlovsky, J.; Jahnke, C.; Jakubowska, M. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Jimenez Bustamante, R. T.; Jones, P. G.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kang, J. H.; Kaplin, V.; Kar, S.; Karasu Uysal, A.; Karavichev, O.; Karavicheva, T.; Karayan, L.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Mohisin Khan, M.; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Khatun, A.; Khuntia, A.; Kileng, B.; Kim, D. W.; Kim, D. J.; Kim, D.; Kim, H.; Kim, J. S.; Kim, J.; Kim, M.; Kim, M.; Kim, S.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein, J.; Klein-Bösing, C.; Klewin, S.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobdaj, C.; Kofarago, M.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Kopcik, M.; Kour, M.; Kouzinopoulos, C.; Kovalenko, O.; Kovalenko, V.; Kowalski, M.; Koyithatta Meethaleveedu, G.; Králik, I.; Kravčáková, A.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kučera, V.; Kuhn, C.; Kuijer, P. G.; Kumar, A.; Kumar, J.; Kumar, L.; Kumar, S.; Kundu, S.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kushpil, S.; Kweon, M. J.; Kwon, Y.; La Pointe, S. L.; La Rocca, P.; Lagana Fernandes, C.; Lakomov, I.; Langoy, R.; Lapidus, K.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lazaridis, L.; Lea, R.; Leardini, L.; Lee, S.; Lehas, F.; Lehner, S.; Lehrbach, J.; Lemmon, R. 
C.; Lenti, V.; Leogrande, E.; León Monzón, I.; Lévai, P.; Li, S.; Li, X.; Lien, J.; Lietava, R.; Lindal, S.; Lindenstruth, V.; Lippmann, C.; Lisa, M. A.; Ljunggren, H. M.; Llope, W.; Lodato, D. F.; Loenne, P. I.; Loginov, V.; Loizides, C.; Lopez, X.; López Torres, E.; Lowe, A.; Luettig, P.; Lunardon, M.; Luparello, G.; Lupi, M.; Lutz, T. H.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Maldonado Cervantes, I.; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manko, V.; Manso, F.; Manzari, V.; Mao, Y.; Marchisone, M.; Mareš, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marín, A.; Markert, C.; Marquard, M.; Martin, N. A.; Martinengo, P.; Martínez, M. I.; Martínez García, G.; Martinez Pedreira, M.; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Mastroserio, A.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzilli, M.; Mazzoni, M. A.; Meddi, F.; Melikyan, Y.; Menchaca-Rocha, A.; Meninno, E.; Mercado Pérez, J.; Meres, M.; Mhlanga, S.; Miake, Y.; Mieskolainen, M. M.; Mikhaylov, K.; Milano, L.; Milosevic, J.; Mischke, A.; Mishra, A. N.; Mishra, T.; Miśkowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Molnar, L.; Montes, E.; Moreira De Godoy, D. A.; Moreno, L. A. P.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Mühlheim, D.; Muhuri, S.; Mukherjee, M.; Mulligan, J. D.; Munhoz, M. G.; Münning, K.; Munzer, R. H.; Murakami, H.; Murray, S.; Musa, L.; Musinsky, J.; Myers, C. J.; Naik, B.; Nair, R.; Nandi, B. K.; Nania, R.; Nappi, E.; Naru, M. U.; Natal da Luz, H.; Nattrass, C.; Navarro, S. R.; Nayak, K.; Nayak, R.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Negrao De Oliveira, R. A.; Nellen, L.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noris, J. C. C.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Ohlson, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Oliveira Da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Orava, R.; Oravec, M.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Ozdemir, M.; Pachmayer, Y.; Pacik, V.; Pagano, D.; Pagano, P.; Paić, G.; Pal, S. K.; Palni, P.; Pan, J.; Pandey, A. K.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, J.; Park, W. J.; Parmar, S.; Passfeld, A.; Paticchio, V.; Patra, R. N.; Paul, B.; Pei, H.; Peitzmann, T.; Peng, X.; Pereira Da Costa, H.; Peresunko, D.; Perez Lezama, E.; Peskov, V.; Pestov, Y.; Petráček, V.; Petrov, V.; Petrovici, M.; Petta, C.; Piano, S.; Pikna, M.; Pillot, P.; Pimentel, L. O. D. L.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Płoskoń, M.; Planinic, M.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Poppenborg, H.; Porteboeuf-Houssais, S.; Porter, J.; Pospisil, J.; Pozdniakov, V.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Qvigstad, H.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Rami, F.; Rana, D. B.; Raniwala, R.; Raniwala, S.; Räsänen, S. S.; Rascanu, B. T.; Rathee, D.; Ratza, V.; Ravasenga, I.; Read, K. F.; Redlich, K.; Rehman, A.; Reichelt, P.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Reygers, K.; Riabov, V.; Ricci, R. 
A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rodríguez Cahuantzi, M.; Røed, K.; Rogochaya, E.; Rohr, D.; Röhrich, D.; Ronchetti, F.; Ronflette, L.; Rosnet, P.; Rossi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Rubio Montero, A. J.; Rui, R.; Russo, R.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Saarinen, S.; Sadhu, S.; Sadovsky, S.; Šafařík, K.; Sahlmuller, B.; Sahoo, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Sandoval, A.; Sano, M.; Sarkar, D.; Sarkar, N.; Sarma, P.; Sas, M. H. P.; Scapparone, E.; Scarlassara, F.; Scharenberg, R. P.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schmidt, M.; Schukraft, J.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Šefčík, M.; Seger, J. E.; Sekiguchi, Y.; Sekihata, D.; Selyuzhenkov, I.; Senosi, K.; Senyukov, S.; Serradilla, E.; Sett, P.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shadura, O.; Shahoyan, R.; Shangaraev, A.; Sharma, A.; Sharma, A.; Sharma, M.; Sharma, M.; Sharma, N.; Sheikh, A. I.; Shigaki, K.; Shou, Q.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singhal, V.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Song, J.; Song, M.; Song, Z.; Soramel, F.; Sorensen, S.; Sozzi, F.; Spiriti, E.; Sputowska, I.; Srivastava, B. K.; Stachel, J.; Stan, I.; Stankus, P.; Stenlund, E.; Steyn, G.; Stiller, J. H.; Stocco, D.; Strmen, P.; Suaide, A. A. P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Suljic, M.; Sultanov, R.; Šumbera, M.; Sumowidagdo, S.; Suzuki, K.; Swain, S.; Szabo, A.; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Tabassam, U.; Takahashi, J.; Tambave, G. J.; Tanaka, N.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Muñoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thakur, D.; Thomas, D.; Tieulent, R.; Tikhonov, A.; Timmins, A. R.; Toia, A.; Tripathy, S.; Trogolo, S.; Trombetta, G.; Trubnikov, V.; Trzaska, W. H.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Umaka, E. N.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vala, M.; Van Der Maarel, J.; Van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vande Vyvre, P.; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vázquez Doce, O.; Vechernin, V.; Veen, A. M.; Velure, A.; Vercellin, E.; Vergara Limón, S.; Vernet, R.; Vértesi, R.; Vickovic, L.; Vigolo, S.; Viinikainen, J.; Vilakazi, Z.; Villalobos Baillie, O.; Villatoro Tello, A.; Vinogradov, A.; Vinogradov, L.; Virgili, T.; Vislavicius, V.; Vodopyanov, A.; Völkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Voscek, D.; Vranic, D.; Vrláková, J.; Wagner, B.; Wagner, J.; Wang, H.; Wang, M.; Watanabe, D.; Watanabe, Y.; Weber, M.; Weber, S. G.; Weiser, D. F.; Wessels, J. P.; Westerhoff, U.; Whitehead, A. M.; Wiechula, J.; Wikne, J.; Wilk, G.; Wilkinson, J.; Willems, G. A.; Williams, M. C. S.; Windelband, B.; Winn, M.; Witt, W. E.; Yalcin, S.; Yang, P.; Yano, S.; Yin, Z.; Yokoyama, H.; Yoo, I.-K.; Yoon, J. H.; Yurchenko, V.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. 
C.; Zaporozhets, S.; Zardoshti, N.; Zarochentsev, A.; Závada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zhalov, M.; Zhang, H.; Zhang, X.; Zhang, Y.; Zhang, C.; Zhang, Z.; Zhao, C.; Zhigareva, N.; Zhou, D.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zhu, J.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zinovjev, G.; Zmeskal, J.
2017-02-01
Particle identification is an important feature of the ALICE detector at the LHC. In particular, for particle identification via the time-of-flight technique, the precise determination of the event collision time represents an important ingredient of the quality of the measurement. In this paper, the different methods used for such a measurement in ALICE by means of the T0 and the TOF detectors are reviewed. Efficiencies, resolution and the improvement of the particle identification separation power of the methods used are presented for the different LHC colliding systems (pp, p-Pb and Pb-Pb) during the first period of data taking of LHC (RUN 1).
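The reason the event collision time t_ev matters for time-of-flight identification follows directly from the mass determination: for a track of momentum p and path length L reaching the TOF detector at measured time t_TOF, the flight time is t = t_TOF − t_ev and the squared mass is

```latex
m^{2} \;=\; \frac{p^{2}}{c^{2}}\left(\frac{c^{2}\,t^{2}}{L^{2}} - 1\right),
\qquad t = t_{\mathrm{TOF}} - t_{\mathrm{ev}}
```

so any uncertainty on t_ev propagates directly into the reconstructed mass and hence into the π/K/p separation power of the method.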
The landscape of W± and Z bosons produced in pp collisions up to LHC energies
NASA Astrophysics Data System (ADS)
Basso, Eduardo; Bourrely, Claude; Pasechnik, Roman; Soffer, Jacques
2017-10-01
We consider a selection of recent experimental results on electroweak W±, Z gauge boson production in pp collisions at BNL RHIC and CERN LHC energies in comparison to predictions of perturbative QCD calculations based on different sets of NLO parton distribution functions, including the statistical PDF model known from fits to the DIS data. We show that the current statistical PDF parametrization (fitted to the DIS data only) underestimates the LHC data on W±, Z gauge boson production cross sections at NLO by about 20%. This suggests that there is a need to refit the parameters of the statistical PDF including the latest LHC data.
Exploring: keys to a better understanding of matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellis, Jonathan R.
2011-02-14
Will the LHC upset theories of the infinitely small? Physicists would like the accelerator to shake the Standard Model. This theory of elementary particles and forces leaves many gray areas, and the LHC and its experiments have been designed to shed light on them.
Machine Protection with a 700 MJ Beam
NASA Astrophysics Data System (ADS)
Baer, T.; Schmidt, R.; Wenninger, J.; Wollmann, D.; Zerlauth, M.
After the high luminosity upgrade of the LHC, the stored energy per proton beam will increase by a factor of two as compared to the nominal LHC. Therefore, many damage studies need to be revisited to ensure a safe machine operation with the new beam parameters. Furthermore, new accelerator equipment like crab cavities might cause new failure modes, which are not sufficiently covered by the current machine protection system of the LHC. These failure modes have to be carefully studied and mitigated by new protection systems. Finally the ambitious goals for integrated luminosity delivered to the experiments during the era of HL-LHC require an increase of the machine availability without jeopardizing equipment protection.
The CMS High Granularity Calorimeter for the High Luminosity LHC
NASA Astrophysics Data System (ADS)
Sauvan, J.-B.
2018-02-01
The High Luminosity LHC (HL-LHC) will integrate 10 times more luminosity than the LHC, posing significant challenges for radiation tolerance and event pileup on detectors, especially for forward calorimetry, and foreshadowing the issues faced by future colliders. As part of its HL-LHC upgrade program, the CMS collaboration is designing a High Granularity Calorimeter to replace the existing endcap calorimeters. It features unprecedented transverse and longitudinal segmentation for both the electromagnetic (ECAL) and hadronic (HCAL) compartments. This will facilitate particle-flow calorimetry, where the fine structure of showers can be measured and used to enhance pileup rejection and particle identification, whilst still achieving good energy resolution. The ECAL and a large fraction of the HCAL will be based on hexagonal silicon sensors of 0.5-1 cm² cell size, with the remainder of the HCAL based on highly-segmented scintillators with silicon photomultiplier (SiPM) readout. The intrinsic high-precision timing capabilities of the silicon sensors will add an extra dimension to event reconstruction, especially in terms of pileup rejection.
Constraints on the gluon PDF from top quark pair production at hadron colliders
NASA Astrophysics Data System (ADS)
Czakon, Michal; Mangano, Michelangelo L.; Mitov, Alexander; Rojo, Juan
2013-07-01
Using the recently derived NNLO cross sections [1], we provide NNLO+NNLL theoretical predictions for top quark pair production based on all the available NNLO PDF sets, and compare them with the most precise LHC and Tevatron data. In this comparison we study in detail the PDF uncertainty and the scale, m_t and α_s dependence of the theoretical predictions for each PDF set. Next, we observe that top quark pair production provides a powerful direct constraint on the gluon PDF at large x, and include Tevatron and LHC top pair data consistently into a global NNLO PDF fit. We then explore the phenomenological consequences of the reduced gluon PDF uncertainties, by showing how they can improve predictions for Beyond the Standard Model processes at the LHC. Finally, we update to full NNLO+NNLL the theoretical predictions for the ratio of top quark cross sections between different LHC center of mass energies, as well as the cross sections for hypothetical heavy fourth-generation quark production at the LHC.
Solid state photosensitive devices which employ isolated photosynthetic complexes
Peumans, Peter; Forrest, Stephen R.
2009-09-22
Solid state photosensitive devices including photovoltaic devices are provided which comprise a first electrode and a second electrode in superposed relation; and at least one isolated Light Harvesting Complex (LHC) between the electrodes. Preferred photosensitive devices comprise an electron transport layer formed of a first photoconductive organic semiconductor material, adjacent to the LHC, disposed between the first electrode and the LHC; and a hole transport layer formed of a second photoconductive organic semiconductor material, adjacent to the LHC, disposed between the second electrode and the LHC. Solid state photosensitive devices of the present invention may comprise at least one additional layer of photoconductive organic semiconductor material disposed between the first electrode and the electron transport layer; and at least one additional layer of photoconductive organic semiconductor material, disposed between the second electrode and the hole transport layer. Methods of generating photocurrent are provided which comprise exposing a photovoltaic device of the present invention to light. Electronic devices are provided which comprise a solid state photosensitive device of the present invention.
None
2018-06-26
The LHC official inauguration will take place from 14h00 to 18h00, at Point 18 of the Laboratory, in the presence of the highest representatives from the member states of CERN and representatives from the other communities and authorities of the countries participating in the LHC adventure. 300 members of the international press are also expected, giving a total of 1500 guests. The ceremony will be broadcast live in the Laboratory's main conference rooms, via webcast and satellite TV (Eurovision). The LHC-fest will follow in the evening in the same place. Its purpose is to "thank all the actors (physicists, engineers, technicians and administrators) who took part in the design, construction, implementation and commissioning of this great enterprise." For obvious logistical reasons, it has been necessary to limit the number of invited guests to 3000, to include all members of personnel (blue badge holders), representatives of the LHC experiments and other users, as well as representatives from retired staff and industrial support.
The LHCf experiment at the LHC: Physics Goals and Status
NASA Astrophysics Data System (ADS)
Tricomi, A.; Adriani, O.; Bonechi, L.; Bongi, M.; Castellini, G.; D'Alessandro, R.; Faus, A.; Fukui, K.; Haguenauer, M.; Itow, Y.; Kasahara, K.; Macina, D.; Mase, T.; Masuda, K.; Matsubara, Y.; Menjo, H.; Mizuishi, M.; Muraki, Y.; Papini, P.; Perrot, A. L.; Ricciarini, S.; Sako, T.; Shimizu, Y.; Taki, K.; Tamura, T.; Torii, S.; Turner, W. C.; Velasco, J.; Viciani, A.; Yoshida, K.
2009-12-01
The LHCf experiment is the smallest of the six experiments installed at the Large Hadron Collider (LHC). While the general purpose detectors have been mainly designed to answer the open questions of elementary particle physics, LHCf has been designed as a fully dedicated astroparticle experiment at the LHC. Indeed, thanks to the excellent performance of its double arm calorimeters, LHCf will be able to measure the flux of neutral particles produced in p-p collisions at the LHC in the very forward region, thus providing invaluable help in the calibration of the air-shower Monte Carlo codes currently used for modeling cosmic-ray interactions in the Earth's atmosphere. Depending on the LHC machine schedule, LHCf will take data in an energy range from 900 GeV up to 14 TeV in the centre-of-mass system (equivalent to 10^17 eV in the laboratory frame), thus covering one of the most interesting and debated regions of the cosmic-ray spectrum, the region around and beyond the "knee".
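For readers who want to check the quoted equivalence, a fixed-target conversion (a sketch assuming a stationary proton target of mass m_p ≈ 0.938 GeV/c²) relates the centre-of-mass energy squared s to the laboratory-frame energy:
\[
E_{\text{lab}} \;\simeq\; \frac{s}{2\,m_p c^2}
= \frac{(14\ \text{TeV})^2}{2 \times 0.938\ \text{GeV}}
\approx 1.0 \times 10^{17}\ \text{eV},
\]
which places 14 TeV collisions around and beyond the cosmic-ray knee, consistent with the statement above.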
Numerical simulations of a proposed hollow electron beam collimator for the LHC upgrade at CERN.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Previtali, V.; Stancari, G.; Valishev, A.
2013-07-12
In the last years the LHC collimation system has been performing beyond expectations, providing the machine with a nearly perfect, efficient cleaning system [1]. Nonetheless, when trying to push the existing accelerators to - and over - their design limits, all the accelerator components are required to boost their performance. In particular, in view of the high luminosity frontier for the LHC, the increased intensity will require a more efficient cleaning system. In this framework innovative collimation solutions are under evaluation [2]: one option is the use of a hollow electron lens for beam halo cleaning. This work intends to study the applicability of a hollow electron lens for LHC collimation, by evaluating the case of the existing Tevatron e-lens applied to the nominal LHC 7 TeV beam. New e-lens operation modes are proposed here to enhance the standard electron-lens halo removal effect.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chlachidze, G.; et al.
2016-08-30
The US LHC Accelerator Research Program (LARP) and CERN combined their efforts in developing Nb3Sn magnets for the High-Luminosity LHC upgrade. The ultimate goal of this collaboration is to fabricate large-aperture Nb3Sn quadrupoles for the LHC interaction regions (IR). These magnets will replace the present 70 mm aperture NbTi quadrupole triplets, with an expected increase of the LHC peak luminosity by a factor of 5. Over the past decade LARP successfully fabricated and tested short and long models of 90 mm and 120 mm aperture Nb3Sn quadrupoles. Recently the first short model of the 150 mm diameter quadrupole MQXFS was built with coils fabricated both by LARP and CERN. The magnet performance was tested at Fermilab's vertical magnet test facility. This paper reports the test results, including the quench training at 1.9 K, ramp rate and temperature dependence studies.
Seesaw at Lhc Through Left-Right Symmetry
NASA Astrophysics Data System (ADS)
Senjanović, Goran
I argue that the LHC may shed light on the nature of neutrino mass through the probe of the seesaw mechanism. The smoking gun signature is lepton number violation through the production of same sign lepton pairs, a collider analogue of neutrinoless double beta decay. I discuss this in the context of left-right symmetric theories, which led originally to neutrino mass and the seesaw mechanism. A W_R gauge boson with a mass in the few TeV region could easily dominate neutrinoless double beta decay, and its discovery at the LHC would have spectacular signatures of parity restoration and lepton number violation. Moreover, the LHC can measure the masses of the right-handed neutrinos and the right-handed leptonic mixing matrix, which could in turn be used to predict the rates for neutrinoless double beta decay and lepton flavor violating processes. The LR scale at the LHC energies offers great hope of observing these low energy processes in the present and upcoming experiments.
Lincoln, Don
2018-01-16
The Large Hadron Collider or LHC is the world's biggest particle accelerator, but it can only get particles moving very quickly. To make measurements, scientists must employ particle detectors. There are four big detectors at the LHC: ALICE, ATLAS, CMS, and LHCb. In this video, Fermilab's Dr. Don Lincoln introduces us to these detectors and gives us an idea of each one's capabilities.
Closing in on the chargino contribution to the muon g -2 in the MSSM: Current LHC constraints
NASA Astrophysics Data System (ADS)
Hagiwara, Kaoru; Ma, Kai; Mukhopadhyay, Satyanarayan
2018-03-01
We revisit the current LHC constraints on the electroweak-ino sector parameters in the minimal supersymmetric standard model (MSSM) that are relevant to explaining the (g-2)_μ anomaly via the dominant chargino and muon sneutrino loop. Since the LHC bounds on electroweak-inos become weaker if they decay via an intermediate stau or a tau sneutrino instead of the first two generation sleptons, we perform a detailed analysis of the scenario with a bino as the lightest supersymmetric particle (LSP) and a light stau as the next-to-lightest one (NLSP). Even in this scenario, the chargino sector parameters in the MSSM that can account for the (g-2)_μ anomaly within 1σ are already found to be significantly constrained by the 8 TeV LHC and the available subset of the 13 TeV LHC limits. We also estimate the current LHC exclusions in the left-smuon (and/or left-selectron) NLSP scenario from multilepton searches, and further combine the constraints from the multitau and multilepton channels for a mass spectrum in which all three generations of sleptons are lighter than the chargino. In the latter two cases, small corners of the 1σ favored region for (g-2)_μ are still allowed at present.
Hydration Characteristics of Low-Heat Cement Substituted by Fly Ash and Limestone Powder.
Kim, Si-Jun; Yang, Keun-Hyeok; Moon, Gyu-Don
2015-09-01
This study proposed a new binder as an alternative to conventional cement to reduce the heat of hydration in mass concrete elements. As the main cementitious material, low-heat cement (LHC) was considered, and fly ash (FA), modified FA (MFA) processed by vibrator mill, and limestone powder (LP) were used as partial replacements of LHC. The addition of FA delayed the induction period in the hydration heat curve and the maximum heat flow value (q_max) increased compared with the LHC-based binder. As the proportion and fineness of the FA increased, the induction period of the hydration heat curve was extended, and q_max increased. The hydration production of Ca(OH)₂ was independent of the addition of FA or MFA up to an age of 7 days, beyond which the amount of Ca(OH)₂ gradually decreased owing to their pozzolanic reaction. In the case of LP being used as a supplementary cementitious material, the induction period of the hydration heat curve was reduced in comparison with the LHC-based binder, and monocarboaluminate was observed as a hydration product. The average pore size measured at an age of 28 days was smaller for LHC with FA or MFA than for 100% LHC.
NASA Astrophysics Data System (ADS)
Denz, R.; Gharib, A.; Hagedorn, D.
2004-06-01
For the protection of the LHC superconducting magnets, about 2100 specially developed by-pass diodes have been manufactured in industry and more than one thousand of these diodes have been mounted into stacks and tested in liquid helium. By-pass diode samples taken from the series production have been submitted to irradiation tests at cryogenic temperatures, together with some prototype diodes, up to an accumulated dose of about 2 kGy and neutron fluences up to about 3.0 × 10^13 n cm^-2, with and without intermediate warm-up to 300 K. The device characteristics of the diodes under forward bias and reverse bias have been measured at 77 K and at ambient temperature versus dose, and the results are presented. Using a thermo-electrical model and new estimates for the expected dose in the LHC, the expected lifetime of the by-pass diodes has been estimated for various positions in the LHC arcs. It turns out that for all of the by-pass diodes across the arc elements the radiation resistance is largely sufficient. In the dispersion suppressor regions of the LHC, a few diodes will require annual annealing during the LHC shutdowns, or those diodes may need to be replaced after some time.
One-family walking technicolor in light of LHC Run II
NASA Astrophysics Data System (ADS)
Matsuzaki, Shinya
2017-12-01
The LHC Higgs boson can be identified as the technidilaton, a composite scalar arising as a pseudo Nambu-Goldstone boson for the spontaneous breaking of scale symmetry in walking technicolor. One interesting candidate for walking technicolor is QCD with a large number of fermion flavors, in particular the one-family model with eight fermion flavors. The smallness of the technidilaton mass can be ensured by the generic walking feature, Miransky scaling, and the presence of the “anti-Veneziano limit” characteristic of the large-flavor walking scenario. To tell the standard-model Higgs from the technidilaton, one needs to wait for the precise determination of the Higgs couplings to the standard model particles, which is expected from the ongoing LHC Run II. In this talk the technidilaton phenomenology is summarized in comparison with the LHC Run-I data, with special emphasis placed on the presence of the anti-Veneziano limit supporting the lightness of the technidilaton. Besides the technidilaton, walking technicolor predicts a rich particle spectrum, such as technipions and technirho mesons, arising as composite particles formed by technifermions. The LHC phenomenology of these technihadrons and their discovery channels, which are smoking guns of walking technicolor accessible at the LHC Run II, are also discussed.
Upgrade of the ATLAS Hadronic Tile Calorimeter for the High Luminosity LHC
NASA Astrophysics Data System (ADS)
Tortajada, Ignacio Asensi
2018-01-01
A series of upgrades of the Large Hadron Collider (LHC) is envisaged towards a High Luminosity LHC (HL-LHC) delivering five times the LHC nominal instantaneous luminosity. The ATLAS Phase II upgrade, in 2024, will accommodate the upgrade of the detector and data acquisition system for the HL-LHC. The Tile Calorimeter (TileCal) will undergo a major replacement of its on- and off-detector electronics. In the new architecture, all signals will be digitized and then transferred directly to the off-detector electronics, where the signals will be reconstructed, stored, and sent to the first level of trigger at the rate of 40 MHz. This will provide better precision of the calorimeter signals used by the trigger system and will allow the development of more complex trigger algorithms. Changes to the electronics will also contribute to the reliability and redundancy of the system. Three different front-end options are presently being investigated for the upgrade, two of them based on ASICs, and a final solution will be chosen after extensive laboratory and test beam studies that are in progress. A hybrid demonstrator module is being developed using the new electronics while conserving compatibility with the current system. The status of these developments will be presented, including results from several tests with particle beams.
Design and implementation of a crystal collimation test stand at the Large Hadron Collider
NASA Astrophysics Data System (ADS)
Mirarchi, D.; Hall, G.; Redaelli, S.; Scandale, W.
2017-06-01
Future upgrades of the CERN Large Hadron Collider (LHC) demand improved cleaning performance of its collimation system. Very efficient collimation is required during regular operations at high intensities, because even a small amount of energy deposited on superconducting magnets can cause an abrupt loss of superconducting conditions (quench). The possibility to use a crystal-based collimation system represents an option for improving both cleaning performance and impedance compared to the present system. Before relying on crystal collimation for the LHC, a demonstration under LHC conditions (energy, beam parameters, etc.) and a comparison against the present system is considered mandatory. Thus, a prototype crystal collimation system has been designed and installed in the LHC during the Long Shutdown 1 (LS1), to perform feasibility tests during the Run 2 at energies up to 6.5 TeV. The layout is suitable for operation with proton as well as heavy ion beams. In this paper, the design constraints and the solutions proposed for this test stand for feasibility demonstration of crystal collimation at the LHC are presented. The expected cleaning performance achievable with this test stand, as assessed in simulations, is presented and compared to that of the present LHC collimation system. The first experimental observation of crystal channeling in the LHC at the record beam energy of 6.5 TeV has been obtained in 2015 using the layout presented (Scandale et al., Phys Lett B 758:129, 2016). First tests to measure the cleaning performance of this test stand have been carried out in 2016 and the detailed data analysis is still on-going.
Main improvements of LHC Cryogenics Operation during Run 2 (2015-2018)
NASA Astrophysics Data System (ADS)
Delprat, L.; Bradu, B.; Brodzinski, K.; Ferlin, G.; Hafi, K.; Herblin, L.; Rogez, E.; Suraci, A.
2017-12-01
After the successful Run 1 (2010-2012), the LHC entered its first Long Shutdown period (LS1, 2013-2014). During LS1 the LHC cryogenic system underwent a complete maintenance and consolidation program. The LHC resumed operation in 2015 with the beam energy increased from 4 TeV to 6.5 TeV. Prior to the new physics Run 2 (2015-2018), the LHC was progressively cooled down from ambient temperature to the 1.9 K operating temperature, and operation with beams resumed in April 2015. Operational margins on the cryogenic capacity were reduced compared to Run 1, mainly due to the higher-than-expected electron-cloud heat load coming from the increased beam energy and intensity. Maintaining and improving the cryogenic availability level required the implementation of a series of actions in order to deal with the observed heat loads. This paper describes the results of the process optimization and the update of the control system, allowing the adjustment of the non-isothermal heat load at 4.5-20 K and an optimized dynamic behaviour of the cryogenic system versus the electron-cloud thermal load. Effects of the new regulation settings applied for operation of the electrical distribution feed-boxes and inner triplets will be discussed. The efficiency of the preventive and corrective maintenance, as well as the benefits and issues of the present cryogenic system configuration for the Run 2 operational scenario, will be described. Finally, the overall availability results and helium management of the LHC cryogenic system during the 2015-2016 operational period will be presented.
Monoclonal antibodies to the light-harvesting chlorophyll a/b protein complex of photosystem II
1986-01-01
A collection of 17 monoclonal antibodies elicited against the light-harvesting chlorophyll a/b protein complex which serves photosystem II (LHC-II) of Pisum sativum shows six classes of binding specificity. Antibodies of two of the classes recognize a single polypeptide (the 28- or the 26-kD polypeptides), thereby suggesting that the two proteins are not derived from a common precursor. Other classes of antibodies cross-react with several polypeptides of LHC-II or with polypeptides of both LHC-II and the light-harvesting chlorophyll a/b polypeptides of photosystem I (LHC-I), indicating that there are structural similarities among the polypeptides of LHC-II and LHC-I. The evidence for protein processing by which the 26-, 25.5-, and 24.5-kD polypeptides are derived from a common precursor polypeptide is discussed. Binding studies using antibodies specific for individual LHC-II polypeptides were used to quantify the number of antigenic polypeptides in the thylakoid membrane. 27 copies of the 26-kD polypeptide and two copies of the 28-kD polypeptide were found per 400 chlorophylls. In the chlorina f2 mutant of barley, and in intermittent light-treated barley seedlings, the amount of the 26-kD polypeptide in the thylakoid membranes was greatly reduced, while the amount of 28-kD polypeptide was apparently not affected. We propose that stable insertion and assembly of the 28-kD polypeptide, unlike the 26-kD polypeptide, is not regulated by the presence of chlorophyll b. PMID:3528171
Tanaka, Ryouichi; Rothbart, Maxi; Oka, Seiko; Takabayashi, Atsushi; Takahashi, Kaori; Shibata, Masaru; Myouga, Fumiyoshi; Motohashi, Reiko; Shinozaki, Kazuo; Grimm, Bernhard
2010-01-01
The light-harvesting chlorophyll-binding (LHC) proteins are major constituents of eukaryotic photosynthetic machinery. In plants, six different groups of proteins, LHC-like proteins, share a conserved motif with LHC. Although the evolution of LHC and LHC-like proteins is proposed to be a key for the diversification of modern photosynthetic eukaryotes, our knowledge of the evolution and functions of LHC-like proteins is still limited. In this study, we aimed to understand specifically the function of one type of LHC-like proteins, LIL3 proteins, by analyzing Arabidopsis mutants lacking them. The Arabidopsis genome contains two gene copies for LIL3, LIL3:1 and LIL3:2. In the lil3:1/lil3:2 double mutant, the majority of chlorophyll molecules are conjugated with an unsaturated geranylgeraniol side chain. This mutant is also deficient in α-tocopherol. These results indicate that reduction of both the geranylgeraniol side chain of chlorophyll and geranylgeranyl pyrophosphate, which is also an essential intermediate of tocopherol biosynthesis, is compromised in the lil3 mutants. We found that the content of geranylgeranyl reductase responsible for these reactions was severely reduced in the lil3 double mutant, whereas the mRNA level for this enzyme was not significantly changed. We demonstrated an interaction of geranylgeranyl reductase with both LIL3 isoforms by using a split ubiquitin assay, bimolecular fluorescence complementation, and combined blue-native and SDS polyacrylamide gel electrophoresis. We propose that LIL3 is functionally involved in chlorophyll and tocopherol biosynthesis by stabilizing geranylgeranyl reductase. PMID:20823244
Computational Astrophysics Towards Exascale Computing and Big Data
NASA Astrophysics Data System (ADS)
Astsatryan, H. V.; Knyazyan, A. V.; Mickaelian, A. M.
2016-06-01
Traditionally, Armenia has held a leading position in both the computer science and Information Technology sector and the Astronomy and Astrophysics sector in the South Caucasus region and beyond. For instance, in recent years Information Technology (IT) has become one of the fastest growing industries of the Armenian economy (EIF 2013). The main objective of this article is to highlight the key activities that will spur Armenia to strengthen its computational astrophysics capacity, drawing on an analysis of current trends in e-Infrastructures worldwide.
A Fundamental Methodology for Designing Management Information Systems for Schools.
ERIC Educational Resources Information Center
Visscher, Adrie J.
Computer-assisted school information systems (SISs) are developed and used worldwide; however, the literature on strategies for their design and development is lacking. This paper presents the features of a fundamental approach to systems design that proved to be successful when developing SCHOLIS, a computer-assisted SIS for Dutch secondary…
Their World, Our World--Bridging the Divide
ERIC Educational Resources Information Center
Oldknow, Adrian
2009-01-01
Students worldwide are gaining access to powerful computing devices and services. They are learning from each other how to find and share information, how to carry out useful tasks such as editing images and video, where to find the best entertainment, etc. Classrooms, especially in the UK, are changing to make better access to computer technology…
The LHCb software and computing upgrade for Run 3: opportunities and challenges
NASA Astrophysics Data System (ADS)
Bozzi, C.; Roiser, S.; LHCb Collaboration
2017-10-01
The LHCb detector will be upgraded for LHC Run 3 and will be read out at 30 MHz, corresponding to the full inelastic collision rate, with major implications for the full software trigger and offline computing. If the current computing model and software framework are kept, the data storage capacity and computing power required to process data at this rate, and to generate and reconstruct equivalent samples of simulated events, will exceed the current capacity by at least one order of magnitude. A redesign of the software framework, including scheduling, the event model, the detector description and the conditions database, is needed to fully exploit the computing power of multi- and many-core architectures and coprocessors. Data processing and the analysis model will also change towards an early streaming of different data types, in order to limit storage resources, with further implications for the data analysis workflows. Fast simulation options will make it possible to obtain a reasonable parameterization of the detector response in considerably less computing time. Finally, the upgrade of LHCb will be a good opportunity to review and implement changes in the domains of software design, testing and review, and analysis workflow and preservation. In this contribution, activities and recent results in all the above areas are presented.
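As a rough illustration of the scale implied by a 30 MHz full-collision readout, the back-of-the-envelope sketch below (Python) multiplies the readout rate quoted above by an assumed average event size and live time; both assumptions are illustrative round numbers, not LHCb figures.
# Rough estimate of the raw data rate implied by a 30 MHz full-collision readout.
# The event size and yearly live time below are illustrative assumptions only.
readout_rate_hz = 30e6          # full inelastic collision rate read out (from the text)
event_size_bytes = 100e3        # assumed average raw event size (~100 kB, illustrative)
live_seconds_per_year = 5e6     # assumed LHC live time per year (illustrative)
rate_bytes_per_s = readout_rate_hz * event_size_bytes
volume_per_year = rate_bytes_per_s * live_seconds_per_year
print(f"instantaneous raw rate: {rate_bytes_per_s / 1e12:.1f} TB/s")
print(f"untriggered yearly volume: {volume_per_year / 1e18:.1f} EB")
Even allowing for aggressive real-time reduction, numbers of this order make clear why a redesigned software trigger and early streaming of reduced data formats are needed.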
Improvements in Empirical Modelling of the World-Wide Ionosphere
1986-10-31
The fact that the world-wide distribution of the basic inputs is far from being uniform forced Gallet and Jones to develop a special procedure for... made the result more reasonable, it had a disappointing effect on the latitudinal variation over the oceans. Feeling that shifting along circles of... range. Another vertical profile model is used in the Bent model [4], which is applied in NASA practice for computing the different effects of
Design of FPGA-based radiation tolerant quench detectors for LHC
NASA Astrophysics Data System (ADS)
Steckert, J.; Skoczen, A.
2017-04-01
The Large Hadron Collider (LHC) comprises many superconducting circuits. Most elements of these circuits require active protection. The functionality of the quench detectors was initially implemented as microcontroller based equipment. After the initial stage of the LHC operation with beams the introduction of a new type of quench detector began. This article presents briefly the main ideas and architectures applied to the design and the validation of FPGA-based quench detectors.
Loss Control and Collimation for the LHC
NASA Astrophysics Data System (ADS)
Burkhardt, H.
2005-06-01
The total energy stored in the LHC is expected to reach 360 MJ, which is about two orders of magnitude higher than in HERA or the Tevatron. Damage and quench protection in the LHC require a highly efficient and at the same time very robust collimation system. The currently planned system, the status of the project and the expected performance of the collimation system from injection up to operation with colliding beams will be presented.
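The 360 MJ figure can be reproduced from round, nominal LHC beam parameters (2808 bunches of 1.15 x 10^11 protons at 7 TeV per beam); the short Python sketch below is only a consistency check using those assumed design values.
# Stored energy of one nominal LHC proton beam, from round design numbers (assumed values).
EV_TO_J = 1.602e-19             # joules per electronvolt
n_bunches = 2808                # nominal number of bunches per beam
protons_per_bunch = 1.15e11     # nominal bunch population
beam_energy_ev = 7e12           # 7 TeV per proton
stored_energy_j = n_bunches * protons_per_bunch * beam_energy_ev * EV_TO_J
print(f"stored energy per beam: {stored_energy_j / 1e6:.0f} MJ")  # ~362 MJ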
Large Hadron Collider commissioning and first operation.
Myers, S
2012-02-28
A history of the commissioning and the very successful early operation of the Large Hadron Collider (LHC) is described. The accident that interrupted the first commissioning, its repair and the enhanced protection system put in place are fully described. The LHC beam commissioning and operational performance are reviewed for the period from 2010 to mid-2011. Preliminary plans for operation and future upgrades for the LHC are given for the short and medium term.
Exergy Analysis of the Cryogenic Helium Distribution System for the Large Hadron Collider (lhc)
NASA Astrophysics Data System (ADS)
Claudet, S.; Lebrun, Ph.; Tavian, L.; Wagner, U.
2010-04-01
The Large Hadron Collider (LHC) at CERN features the world's largest helium cryogenic system, spreading over the 26.7 km circumference of the superconducting accelerator. With a total equivalent capacity of 145 kW at 4.5 K, including 18 kW at 1.8 K, the LHC refrigerators produce an unprecedented exergetic load, which must be distributed efficiently to the magnets in the tunnel over the 3.3 km length of each of the eight independent sectors of the machine. We recall the main features of the LHC cryogenic helium distribution system at its different temperature levels and present its exergy analysis, which makes it possible to qualify second-principle efficiency and to identify the main remaining sources of irreversibility.
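As a hedged illustration of what "exergetic load" means in practice, the ideal (Carnot) input power required to extract a heat load Q at temperature T and reject it at an assumed ambient temperature T0 = 300 K is W_min = Q * (T0 - T) / T; a minimal Python sketch:
# Ideal (Carnot) input power for refrigeration loads at cryogenic temperatures.
def carnot_power_w(q_load_w, t_cold_k, t_ambient_k=300.0):
    """Minimum (reversible) power to absorb q_load_w at t_cold_k and reject it at t_ambient_k."""
    return q_load_w * (t_ambient_k - t_cold_k) / t_cold_k
print(f"18 kW at 1.8 K needs at least {carnot_power_w(18e3, 1.8) / 1e6:.1f} MW of ideal input power")
print(f"exergy factor: {carnot_power_w(1.0, 1.8):.0f} W/W at 1.8 K vs {carnot_power_w(1.0, 4.5):.0f} W/W at 4.5 K")
This is why the comparatively small 18 kW load at 1.8 K dominates the exergy budget of the system.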
Explorer : des clés pour mieux comprendre la matière
Ellis, Jonathan R.
2018-01-12
Will the LHC upset theories of the infinitely small? Physicists would like the accelerator to shake the standard model. This theory of elementary particles and forces leaves many gray areas. The LHC and its experiments have been designed to enlighten them.
Kurbanoglu, Serap; Boustany, Joumana
2018-01-01
This study reports the descriptive and inferential statistical findings of a survey of academic reading format preferences and behaviors of 10,293 tertiary students worldwide. The study hypothesized that country-based differences in schooling systems, socioeconomic development, culture or other factors might have an influence on preferred formats, print or electronic, for academic reading, as well as the learning engagement behaviors of students. The main findings are that country of origin has little to no relationship with or effect on reading format preferences of university students, and that the broad majority of students worldwide prefer to read academic course materials in print. The majority of participants report better focus and retention of information presented in print formats, and more frequently prefer print for longer texts. Additional demographic and post-hoc analysis suggests that format preference has a small relationship with academic rank. The relationship between task demands, format preferences and reading comprehension are discussed. Additional outcomes and implications for the fields of education, psychology, computer science, information science and human-computer interaction are considered. PMID:29847560
Vick, T J; Dodsworth, J A; Costa, K C; Shock, E L; Hedlund, B P
2010-03-01
A culture-independent community census was combined with chemical and thermodynamic analyses of three springs located within the Long Valley Caldera, Little Hot Creek (LHC) 1, 3, and 4. All three springs were approximately 80 °C, circumneutral, apparently anaerobic and had similar water chemistries. 16S rRNA gene libraries constructed from DNA isolated from spring sediment revealed moderately diverse but highly novel microbial communities. Over half of the phylotypes could not be grouped into known taxonomic classes. Bacterial libraries from LHC1 and LHC3 were predominantly species within the phyla Aquificae and Thermodesulfobacteria, while those from LHC4 were dominated by candidate phyla, including OP1 and OP9. Archaeal libraries from LHC3 contained large numbers of Archaeoglobales and Desulfurococcales, while LHC1 and LHC4 were dominated by Crenarchaeota unaffiliated with known orders. The heterogeneity in microbial populations could not easily be attributed to measurable differences in water chemistry, but may be determined by availability of trace amounts of oxygen to the spring sediments. Thermodynamic modeling predicted the most favorable reactions to be sulfur and nitrate respirations, yielding 40-70 kJ per mole of electrons transferred; however, levels of oxygen at or below our detection limit could result in aerobic respirations yielding up to 100 kJ per mole of electrons transferred. Important electron donors are predicted to be H₂, H₂S, S⁰, Fe²⁺ and CH₄, all of which yield similar energies when coupled to a given electron acceptor. The results indicate that springs associated with the Long Valley Caldera contain microbial populations that show some similarities both to springs in Yellowstone and springs in the Great Basin.
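For context, per-electron energy yields of the kind quoted above are typically obtained by evaluating the Gibbs energy of each candidate redox reaction under in situ conditions and normalizing by the number of electrons transferred; a standard, textbook-level (not study-specific) formulation is:
\[
\Delta G_r = \Delta G_r^{\circ} + RT\ln Q, \qquad
\Delta G_{e^-} = \frac{\Delta G_r}{n_{e^-}},
\]
so that reactions with different stoichiometries (e.g. sulfur, nitrate or oxygen respiration coupled to the listed electron donors) can be ranked on a common kJ per mole of electrons scale.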
New Physics Undercover at the LHC
NASA Astrophysics Data System (ADS)
Lou, Hou Keong
With the completion of 7 TeV and 8 TeV data taking at the Large Hadron Collider (LHC), the physics community witnessed one of the great triumphs of modern physics: the completion of the Standard Model (SM) as an effective theory. The final missing particle, the Higgs boson, was observed and its mass was measured. However, many theoretical questions remain unanswered. What is the source of electroweak symmetry breaking? What is the nature of dark matter? How does gravity fit into the picture? With no definitive hints of new physics at the LHC, we must consider the possibility that our search strategies need to be expanded. Conventional LHC searches focus on theoretically motivated scenarios, such as the Minimal Supersymmetric Standard Models and Little Higgs Theories. However, it is possible that new physics may be entirely different from what we might expect. In this thesis, we examine a variety of scenarios that lead to new physics undercover at the LHC. First we look at potential new physics hiding in Quantum Chromo-Dynamics backgrounds, which may be uncovered using jet substructure techniques in a data-driven way. Then we turn to new long-lived particles hiding in Higgs decay, which may lead to displaced vertices. Such a signal can be unearthed through a data-driven analysis. Then we turn to new physics with ``semi-visible jets'', which lead to missing momentum aligned with jet momentum. These events are vetoed in traditional searches and we demonstrate ways to uncover these signals. Lastly, we explore performance of future colliders in two case studies: Stops and Higgs Portal searches. We show that a 100 TeV collider will lead to significant improvements over 14 TeV LHC runs. Indeed, new physics may lie undercover at the LHC and future colliders, waiting to be discovered.
Predicted and Totally Unexpected in the Energy Frontier Opened by LHC
NASA Astrophysics Data System (ADS)
Zichichi, Antonino
2011-01-01
Opening lectures. Sid Coleman and Erice / A. Zichichi. Remembering Sidney Coleman / G.'t Hooft -- Predicted signals at LHC. From extra-dimensions: Multiple branes scenarios and their contenders / I. Antoniadis. Predicted signals at the LHC from technicolor / A. Martin. The one-parameter model at LHC / J. Maxin, E. Mayes and D. V. Nanopoulos. How supercritical string cosmology affects LHC / D. V. Nanopoulos. High scale physics connection to LHC data / P. Nath. Predicted signatures at the LHC from U(I) extensions of the standard model / P. Nath -- Hot theoretical topics. Progress on the ultraviolet finiteness of supergravity / Z. Bern. Status of supersymmetry: Foundations and applications / S. Ferrara and A. Marrani. Quantum gravity from dynamical triangulation / R. Loll. Status of superstring and M-theory / J. H. Schwarz. Some effects of instantons in QCD / G.'t Hooft. Crystalline gravity / G.'t Hooft -- QCD problems. Strongly coupled gauge theories / R. Kenway. Strongly interacting matter at high energy density / L. McLerran. Seminars on specialized topics. The nature and the mass of neutrinos. Majorana vs. Dirac / A. Bettini. The anomalous spin distributions in the nucleon / A. Deshpande. Results from PHENIX at RHIC / M. J. Tannenbaum -- Highlights from laboratories. Highlights from RHIC / Y. Akiba. News from the Gran Sasso Underground Laboratory / E. Coccia. Highlights from TRIUMF / N. S. Lockyer. Highlights from Superkamiokande / M. Koshiba. Highlights from Fermilab / P. J. Oddone. Highlights from IHEP / Y. Wang -- Special sessions for new talents. Fake supergravity and black hole evolution / A. Gnecchi. Track-based improvement in the jet transverse momentum resolution for ATLAS / Z. Marshall. Searches for supersymmetric dark matter with XENON / K. Ni. Running of Newton's constant and quantum gravitational effects / D. Reeb.
Tibiletti, Tania; Auroy, Pascaline; Peltier, Gilles; Caffarri, Stefano
2016-08-01
Photosynthetic organisms must respond to excess light in order to avoid photo-oxidative stress. In plants and green algae the fastest response to high light is non-photochemical quenching (NPQ), a process that allows the safe dissipation of the excess energy as heat. This phenomenon is triggered by the low luminal pH generated by photosynthetic electron transport. In vascular plants the main sensor of the low pH is the PsbS protein, while in the green alga Chlamydomonas reinhardtii LhcSR proteins appear to be exclusively responsible for this role. Interestingly, Chlamydomonas also possesses two PsbS genes, but so far the PsbS protein has not been detected and its biological function is unknown. Here, we reinvestigated the kinetics of gene expression and PsbS and LhcSR3 accumulation in Chlamydomonas during high light stress. We found that, unlike LhcSR3, PsbS accumulates very rapidly but only transiently. In order to determine the role of PsbS in NPQ and photoprotection in Chlamydomonas, we generated transplastomic strains expressing the algal or the Arabidopsis psbS gene optimized for plastid expression. Both PsbS proteins showed the ability to increase NPQ in Chlamydomonas wild-type and npq4 (lacking LhcSR3) backgrounds, but no clear photoprotection activity was observed. Quantification of PsbS and LhcSR3 in vivo indicates that PsbS is much less abundant than LhcSR3 during high light stress. Moreover, LhcSR3, unlike PsbS, also accumulates during other stress conditions. The possible role of PsbS in photoprotection is discussed. © 2016 American Society of Plant Biologists. All Rights Reserved.
Tibiletti, Tania; Auroy, Pascaline; Peltier, Gilles; Caffarri, Stefano
2016-01-01
Photosynthetic organisms must respond to excess light in order to avoid photo-oxidative stress. In plants and green algae the fastest response to high light is non-photochemical quenching (NPQ), a process that allows the safe dissipation of the excess energy as heat. This phenomenon is triggered by the low luminal pH generated by photosynthetic electron transport. In vascular plants the main sensor of the low pH is the PsbS protein, while in the green alga Chlamydomonas reinhardtii LhcSR proteins appear to be exclusively responsible for this role. Interestingly, Chlamydomonas also possesses two PsbS genes, but so far the PsbS protein has not been detected and its biological function is unknown. Here, we reinvestigated the kinetics of gene expression and PsbS and LhcSR3 accumulation in Chlamydomonas during high light stress. We found that, unlike LhcSR3, PsbS accumulates very rapidly but only transiently. In order to determine the role of PsbS in NPQ and photoprotection in Chlamydomonas, we generated transplastomic strains expressing the algal or the Arabidopsis psbS gene optimized for plastid expression. Both PsbS proteins showed the ability to increase NPQ in Chlamydomonas wild-type and npq4 (lacking LhcSR3) backgrounds, but no clear photoprotection activity was observed. Quantification of PsbS and LhcSR3 in vivo indicates that PsbS is much less abundant than LhcSR3 during high light stress. Moreover, LhcSR3, unlike PsbS, also accumulates during other stress conditions. The possible role of PsbS in photoprotection is discussed. PMID:27329221
The 11 T dipole for HL-LHC: Status and plan
Savary, F.; Barzi, E.; Bordini, B.; ...
2016-06-01
The upgrade of the Large Hadron Collider (LHC) collimation system includes additional collimators in the LHC lattice. The longitudinal space for these collimators will be created by replacing some of the LHC main dipoles with shorter but stronger dipoles compatible with the LHC lattice and main systems. The project plan comprises the construction of two cryo-assemblies, each containing two 11-T dipoles of 5.5-m length, for possible installation on either side of interaction point 2 of the LHC in the years 2018-2019 for ion operation, and the installation of two cryo-assemblies on either side of interaction point 7 of the LHC in the years 2023-2024 for proton operation. The development program conducted jointly by the Fermilab and CERN magnet groups is progressing well. The development activities carried out on the Fermilab side were concluded in the middle of 2015 with the fabrication and test of a 1-m-long two-in-one model, and those on the CERN side are ramping up with the construction of 2-m-long models and the preparation of the tooling for the fabrication of the first full-length prototype. The engineering design of the cryomagnet is well advanced, including the definition of the various interfaces, e.g., with the collimator, powering, protection, and vacuum systems. Several practice coils of 5.5-m length have already been fabricated. This paper describes the overall progress of the project, the final design of the cryomagnet, and the performance of the most recent models. Furthermore, the overall plan toward the fabrication of the series magnets for the two phases of the upgrade of the LHC collimation system is also presented.
None
2017-12-09
What if the light at the end of the LHC tunnel were cosmic? In other words, what can the LHC contribute to our knowledge of the Universe? The rise in energy of particle accelerators allows us to better understand the primordial universe, hot and dense. But in what sense do we say that the LHC reproduces conditions close to the Big Bang? What information does it give us about the content of the Universe? Is dark matter detectable at the LHC? Dark energy? Why is the antimatter accumulated at CERN so rare in the Universe? And if CERN has built its reputation on the exploration of the weak and strong forces at work within atoms and their nuclei, can the LHC give us information about the gravitational force that governs cosmic evolution? For some thirty years, our understanding of the universe at its largest scales and our grasp of its behaviour at the smallest distances have been intimately linked: how will the LHC test this unified vision experimentally? Open to all, free entry / Reservations at +41 (0)22 767 76 76
Quality of Life Comparing Dor and Toupet After Heller Myotomy for Achalasia
Tomasko, Jonathan M.; Augustin, Toms; Tran, Tung T.; Haluck, Randy S.; Rogers, Ann M.
2014-01-01
Background: Laparoscopic Heller cardiomyotomy (LHC) is standard therapy for achalasia. Traditionally, an antireflux procedure has accompanied the myotomy. This study was undertaken to compare quality-of-life outcomes between patients undergoing myotomy with Toupet versus Dor fundoplication. In addition, we investigated overall patient satisfaction after LHC in the treatment of achalasia. Methods: One hundred thirty-five patients who underwent LHC over a 13-year period were identified for inclusion. Symptoms queried included dysphagia, heartburn, and bloating using the Gastroesophageal Reflux Disease–Health-Related Quality of Life Scale and a second published scale for the assessment of gastroesophageal reflux disease and dysphagia symptoms. The patients' overall satisfaction after surgery was also rated. Data were compared on the basis of type of fundoplication. Symptom scores were analyzed using chi-square tests and Fisher's exact tests. Results: Sixty-three patients completed the survey (47%). There were no perioperative deaths or reoperations. The mean length of stay was 2.8 days. The mean operative time for LHC with Toupet fundoplication was 137.3 ± 30.91 minutes and for LHC with Dor fundoplication was 111.5 ± 32.44 minutes (P = .006). There was no difference with respect to the incidence or severity of postoperative heartburn, dysphagia, or bloating. Overall satisfaction with Toupet fundoplication was 87.5% and with Dor fundoplication was 93.8% (P > .999). Conclusions: LHC with either Toupet or Dor fundoplication gave excellent patient satisfaction. Postoperative symptoms of heartburn and dysphagia were equivalent when comparing LHC with either antireflux procedure. Dor and Toupet fundoplication were found to have equivalent outcomes in the short term. We prefer Dor to Toupet fundoplication because of its decreased need for extensive dissection and better mucosal protection. PMID:25392612
Probing Higgs-radion mixing in warped models through complementary searches at the LHC and the ILC
NASA Astrophysics Data System (ADS)
Frank, Mariana; Huitu, Katri; Maitra, Ushoshi; Patra, Monalisa
2016-09-01
We consider the Higgs-radion mixing in the context of warped space extradimensional models with custodial symmetry and investigate the prospects of detecting the mixed radion. Custodial symmetries allow the Kaluza-Klein excitations to be lighter and protect Zbb̄ to be in agreement with experimental constraints. We perform a complementary study of discovery reaches of the Higgs-radion mixed state at the 13 and 14 TeV LHC and at the 500 and 1000 GeV International Linear Collider (ILC). We carry out a comprehensive analysis of the most significant production and decay modes of the mixed radion in the 80 GeV-1 TeV mass range and indicate the parameter space that can be probed at the LHC and the ILC. There exists a region of the parameter space which can be probed, at the LHC, through the diphoton channel even for a relatively low luminosity of 50 fb⁻¹. The reach of the four-lepton final state in probing the parameter space is also studied in the context of the 14 TeV LHC, for a luminosity of 1000 fb⁻¹. At the ILC, with an integrated luminosity of 500 fb⁻¹, we analyze the Z-radion associated production and the WW fusion production, followed by the radion decay into bb̄ and W⁺W⁻. The WW fusion production is favored over the Z-radion associated channel in probing regions of the parameter space beyond the LHC reach. The complementary study at the LHC and the ILC is useful both for the discovery of the radion and the understanding of its mixing sector.
Final Technical Report for ``Paths to Discovery at the LHC : Dark Matter and Track Triggering"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hahn, Kristian
Particle Dark Matter (DM) is perhaps the most compelling and experimentally well-motivated new physics scenario anticipated at the Large Hadron Collider (LHC). The DE-SC0014073 award allowed the PI to define and pursue a path to the discovery of Dark Matter in Run-2 of the LHC with the Compact Muon Solenoid (CMS) experiment. CMS can probe regions of Dark Matter phase space that direct and indirect detection experiments are unable to constrain. The PI's team initiated the exploration of these regions, searching specifically for the associated production of Dark Matter with top quarks. The effort focuses on the high-yield, hadronic decays of W bosons produced in top decay, which provide the highest sensitivity to DM produced through low-mass spin-0 mediators. The group developed identification algorithms that achieve high efficiency and purity in the selection of hadronic top decays, and analysis techniques that provide powerful signal discrimination in Run-2. The ultimate reach of new physics searches with CMS will be established at the high-luminosity LHC (HL-LHC). To fully realize the sensitivity the HL-LHC promises, CMS must minimize the impact of soft, inelastic (“pileup”) interactions on the real-time “trigger” system the experiment uses for data refinement. Charged particle trajectory information (“tracking”) will be essential for pileup mitigation at the HL-LHC. The award allowed the PI's team to develop firmware-based data delivery and track fitting algorithms for an unprecedented, real-time tracking trigger to sustain the experiment's sensitivity to new physics in the next decade.
Cognitive Function Before and After Left Heart Catheterization.
Scott, David A; Evered, Lisbeth; Maruff, Paul; MacIsaac, Andrew; Maher, Sarah; Silbert, Brendan S
2018-03-10
Hospital procedures have been associated with cognitive change in older patients. This study aimed to document the prevalence of mild cognitive impairment in individuals undergoing left heart catheterization (LHC) before the procedure and the incidence of cognitive decline to 3 months afterwards. We conducted a prospective, observational, clinical investigation of elderly participants undergoing elective LHC. Cognition was assessed using a battery of written tests and a computerized cognitive battery before the LHC and then at 3 months afterwards. The computerized tests were also administered at 24 hours (or discharge) and 7 days after LHC. A control group of 51 community participants was recruited to calculate cognitive decline using the Reliable Change Index. Of 437 participants, mild cognitive impairment was identified in 226 (51.7%) before the procedure. Computerized tests detected an incidence of cognitive decline of 10.0% at 24 hours and 7.5% at 7 days. At 3 months, written tests detected an incidence of cognitive decline of 13.1% and computerized tests detected an incidence of 8.5%. Cognitive decline at 3 months using written tests was associated with increasing age, whereas computerized tests showed cognitive decline was associated with baseline amnestic mild cognitive impairment, diabetes mellitus, and prior coronary stenting. More than half the patients aged >60 years presenting for LHC have mild cognitive impairment. LHC is followed by cognitive decline in 8% to 13% of individuals at 3 months after the procedure. Subtle cognitive decline both before and after LHC is common and may have important clinical implications. URL: www.anzctr.org.au. Unique identifier: ACTRN12607000051448. © 2018 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
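The Reliable Change Index mentioned above compares an individual's pre-to-post change with the variability of change observed in the control group. A minimal Python sketch of one common control-adjusted formulation follows; the study may use a different variant, and the numbers below are purely illustrative.
import numpy as np

def reliable_change_index(pre, post, control_pre, control_post):
    """Control-adjusted RCI: patient change, corrected for the mean practice effect
    in controls and scaled by the standard deviation of control change scores.
    One common formulation; not necessarily the exact variant used in this study."""
    patient_change = post - pre
    control_change = np.asarray(control_post) - np.asarray(control_pre)
    return (patient_change - control_change.mean()) / control_change.std(ddof=1)

# Illustrative toy data: decline is often flagged when RCI falls below about -1.645 (one-sided 95%).
rci = reliable_change_index(pre=50, post=44, control_pre=[49, 52, 48, 51], control_post=[50, 53, 49, 51])
print(round(float(rci), 2))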
Quality of life comparing dor and toupet after heller myotomy for achalasia.
Tomasko, Jonathan M; Augustin, Toms; Tran, Tung T; Haluck, Randy S; Rogers, Ann M; Lyn-Sue, Jerome R
2014-01-01
Laparoscopic Heller cardiomyotomy (LHC) is standard therapy for achalasia. Traditionally, an antireflux procedure has accompanied the myotomy. This study was undertaken to compare quality-of-life outcomes between patients undergoing myotomy with Toupet versus Dor fundoplication. In addition, we investigated overall patient satisfaction after LHC in the treatment of achalasia. One hundred thirty-five patients who underwent LHC over a 13-year period were identified for inclusion. Symptoms queried included dysphagia, heartburn, and bloating using the Gastroesophageal Reflux Disease-Health-Related Quality of Life Scale and a second published scale for the assessment of gastroesophageal reflux disease and dysphagia symptoms. The patients' overall satisfaction after surgery was also rated. Data were compared on the basis of type of fundoplication. Symptom scores were analyzed using chi-square tests and Fisher's exact tests. Sixty-three patients completed the survey (47%). There were no perioperative deaths or reoperations. The mean length of stay was 2.8 days. The mean operative time for LHC with Toupet fundoplication was 137.3±30.91 minutes and for LHC with Dor fundoplication was 111.5±32.44 minutes (P=.006). There was no difference with respect to the incidence or severity of postoperative heartburn, dysphagia, or bloating. Overall satisfaction with Toupet fundoplication was 87.5% and with Dor fundoplication was 93.8% (P>.999). LHC with either Toupet or Dor fundoplication gave excellent patient satisfaction. Postoperative symptoms of heartburn and dysphagia were equivalent when comparing LHC with either antireflux procedure. Dor and Toupet fundoplication were found to have equivalent outcomes in the short term. We prefer Dor to Toupet fundoplication because of its decreased need for extensive dissection and better mucosal protection.
Takahashi, Kaori; Takabayashi, Atsushi; Tanaka, Ayumi; Tanaka, Ryouichi
2014-01-01
The light-harvesting complex (LHC) constitutes the major light-harvesting antenna of photosynthetic eukaryotes. LHC contains a characteristic sequence motif, termed LHC motif, consisting of 25–30 mostly hydrophobic amino acids. This motif is shared by a number of transmembrane proteins from oxygenic photoautotrophs that are termed light-harvesting-like (LIL) proteins. To gain insights into the functions of LIL proteins and their LHC motifs, we functionally characterized a plant LIL protein, LIL3. This protein has been shown previously to stabilize geranylgeranyl reductase (GGR), a key enzyme in phytol biosynthesis. It is hypothesized that LIL3 functions to anchor GGR to membranes. First, we conjugated the transmembrane domain of LIL3 or that of ascorbate peroxidase to GGR and expressed these chimeric proteins in an Arabidopsis mutant lacking LIL3 protein. As a result, the transgenic plants restored phytol-synthesizing activity. These results indicate that GGR is active as long as it is anchored to membranes, even in the absence of LIL3. Subsequently, we addressed the question why the LHC motif is conserved in the LIL3 sequences. We modified the transmembrane domain of LIL3, which contains the LHC motif, by substituting its conserved amino acids (Glu-171, Asn-174, and Asp-189) with alanine. As a result, the Arabidopsis transgenic plants partly recovered the phytol-biosynthesizing activity. However, in these transgenic plants, the LIL3-GGR complexes were partially dissociated. Collectively, these results indicate that the LHC motif of LIL3 is involved in the complex formation of LIL3 and GGR, which might contribute to the GGR reaction. PMID:24275650
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogt, R.
We explore the effects of shadowing on inclusive J/ψ and Υ(1S) production at AFTER@LHC. We also present the rates as a function of p_T and rapidity for p+Pb and Pb+p collisions in the proposed AFTER@LHC rapidity acceptance.
Vogt, R.
2015-01-01
We explore the effects of shadowing on inclusive J/ψ and Υ(1S) production at AFTER@LHC. We also present the rates as a function of p_T and rapidity for p+Pb and Pb+p collisions in the proposed AFTER@LHC rapidity acceptance.
The TOTEM DAQ based on the Scalable Readout System (SRS)
NASA Astrophysics Data System (ADS)
Quinto, Michele; Cafagna, Francesco S.; Fiergolski, Adrian; Radicioni, Emilio
2018-02-01
The TOTEM (TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC) experiment at the LHC has been designed to measure the total proton-proton cross-section and to study elastic and diffractive scattering at LHC energies. In order to cope with the increased machine luminosity and the higher statistics required by the extension of the TOTEM physics program approved for the LHC Run 2 phase, the previous VME-based data acquisition system has been replaced with a new one based on the Scalable Readout System. The system features an aggregated data throughput of 2 GB/s towards the online storage system. This makes it possible to sustain a maximum trigger rate of ~24 kHz, to be compared with the 1 kHz rate of the previous system. The trigger rate is further improved by implementing zero-suppression and second-level hardware algorithms in the Scalable Readout System. The new system fulfils the requirements for increased efficiency, providing higher bandwidth and increasing the purity of the recorded data. Moreover, full compatibility has been guaranteed with the legacy front-end hardware, as well as with the DAQ interface of the CMS experiment and with the LHC Timing, Trigger and Control distribution system. In this contribution we describe in detail the architecture of the full system and its performance as measured during the commissioning phase at the LHC Interaction Point.
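As a quick consistency check on the quoted figures, the 2 GB/s throughput and ~24 kHz maximum trigger rate imply an average accepted event size of roughly 80 kB; the arithmetic, in Python (both inputs taken from the abstract, the rest is simple division):
# Average event size implied by the quoted TOTEM SRS throughput and trigger rate.
throughput_bytes_per_s = 2e9   # 2 GB/s aggregated throughput (from the abstract)
trigger_rate_hz = 24e3         # ~24 kHz maximum sustained trigger rate (from the abstract)
avg_event_size_kb = throughput_bytes_per_s / trigger_rate_hz / 1e3
print(f"average accepted event size: ~{avg_event_size_kb:.0f} kB")  # ~83 kB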
High-Luminosity Large Hadron Collider (HL-LHC) : Preliminary Design Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apollinari, G.; Béjar Alonso, I.; Brüning, O.
2015-12-17
The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. Since opening up a new energy frontier for exploration in 2010, it has gathered a global user community of about 7,000 scientists working in fundamental particle physics and the physics of hadronic matter at extreme temperature and density. To sustain and extend its discovery potential, the LHC will need a major upgrade in the 2020s. This will increase its luminosity (rate of collisions) by a factor of five beyond the original design value and the integrated luminosity (total collisions created) by a factor of ten. The LHC is already a highly complex and exquisitely optimised machine, so this upgrade must be carefully conceived and will require about ten years to implement. The new configuration, known as High Luminosity LHC (HL-LHC), will rely on a number of key innovations that push accelerator technology beyond its present limits. Among these are cutting-edge 11-12 tesla superconducting magnets, compact superconducting cavities for beam rotation with ultra-precise phase control, new technology and physical processes for beam collimation, and 300 metre-long high-power superconducting links with negligible energy dissipation. The present document describes the technologies and components that will be used to realise the project and is intended to serve as the basis for the detailed engineering design of HL-LHC.
Andersson, Ulrica; Heddad, Mounia; Adamska, Iwona
2003-01-01
The superfamily of light-harvesting chlorophyll a/b-binding (Lhc) proteins in higher plants and green algae is composed of more than 20 different antenna proteins associated either with photosystem I (PSI) or photosystem II (PSII). Several distant relatives of this family with conserved chlorophyll-binding residues and proposed photoprotective functions are induced transiently under various stress conditions. Whereas “classical” Lhc proteins contain three-transmembrane α-helices, their distant relatives span the membrane with between one and four transmembrane segments. Here, we report the identification and isolation of a novel member of the Lhc family from Arabidopsis with one predicted transmembrane α-helix closely related to helix I of Lhc protein from PSI (Lhca4) that we named Ohp2 (for a second one-helix protein of Lhc family described from higher plants). We showed that the Ohp2 gene expression is triggered by light stress and that the Ohp2 transcript and protein accumulated in a light intensity-dependent manner. Other stress conditions did not up-regulate the expression of the Ohp2 gene. Localization studies revealed that Ohp2 is associated with PSI under low- or high-light conditions. Because all stress-induced Lhc relatives reported so far were found in PSII, we propose that the accumulation of Ohp2 might represent a novel photoprotective strategy induced within PSI in response to light stress. PMID:12805611
An Overview of the Needs of Technology in Language Testing in Spain
ERIC Educational Resources Information Center
Garcia Laborda, Jesus; Magal Royo, Teresa; Barcena Madera, Elena
2015-01-01
Over the past few years, computer-based language testing has become prevalent worldwide. The number of institutions that use computers as the main means of delivery has increased dramatically. Every day, many students face tests used for well-known high-stakes decisions, which imply the knowledge and ability to use technology to provide evidence of language…
Cultural and Global Linkages of Emotional Support through Online Support Groups.
ERIC Educational Resources Information Center
Gary, Juneau Mahan
Computer technology is altering the way people cope with emotional distress. Computers enable people worldwide and from all cultural groups to give and receive emotional support when it may be culturally stigmatizing to seek face-to-face support or when support services are limited or non-existent. Online support groups attract a broad range of…
ERIC Educational Resources Information Center
Carapina, Mia; Boticki, Ivica
2015-01-01
This paper analyses mobile computer-supported collaborative learning in elementary education worldwide, focusing on technology trends for the period from 2009 to 2014. The results present the types of devices used to support collaborative activities, their distribution per user (1:1 or 1:m), and whether students are learning through or around…
2005-03-01
computing equipment, the idea of computer security has also become embedded in our society. Ever since the Michelangelo virus of 1992, when… Bibliography: The Worldwide Michelangelo Virus Scare of 1992. Retrieved February 2, 2004 from http://www.vmyths.com/fas/fas_inc/inc1.cfm Allen, J
ERIC Educational Resources Information Center
Chebrolu, Shankar Babu
2010-01-01
Against the backdrop of new economic realities, one of the larger forces affecting businesses worldwide is cloud computing, whose benefits include agility, time to market, time to capability, reduced cost, renewed focus on the core, and strategic partnership with the business. Cloud computing can potentially transform a majority of the…
Gender Equity in Advertising on the World-Wide Web: Can it be Found?
ERIC Educational Resources Information Center
Kramer, Kevin M.; Knupfer, Nancy Nelson
Recent attention to gender equity in computer environments, as well as in print-based and televised advertising for technological products, suggests that gender bias in the computer environment continues. This study examined gender messages within World Wide Web advertisements, specifically the type and number of visual images used in Web banner…
The ALICE Experiment at CERN Lhc:. Status and First Results
NASA Astrophysics Data System (ADS)
Vercellin, Ermanno
The ALICE experiment is aimed at studying the properties of the hot and dense matter produced in heavy-ion collisions at LHC energies. In the first years of LHC operation the ALICE physics program will be focused on Pb-Pb and p-p collisions. The latter, on top of their intrinsic interest, will provide the necessary baseline for heavy-ion data. After its installation and a long commissioning with cosmic rays, in late fall 2009 ALICE participated (very successfully) in the first LHC run, by collecting data in p-p collisions at c.m. energy 900 GeV. After a short stop during winter, LHC operations have been resumed; the machine is now able to accelerate proton beams up to 3.5 TeV and ALICE has undertaken the data taking campaign at 7 TeV c.m. energy. After an overview of the ALICE physics goals and a short description of the detector layout, the ALICE performance in p-p collisions will be presented. The main physics results achieved so far will be highlighted as well as the main aspects of the ongoing data analysis.
Probing U(1) extensions of the MSSM at the LHC Run I and in dark matter searches
NASA Astrophysics Data System (ADS)
Bélanger, G.; Da Silva, J.; Laa, U.; Pukhov, A.
2015-09-01
The U(1) extended supersymmetric standard model (UMSSM) can accommodate a Higgs boson at 125 GeV without relying on large corrections from the top/stop sector. After imposing LHC results on the Higgs sector, on B-physics and on new particle searches, as well as dark matter constraints, we show that this model offers two viable dark matter candidates, the right-handed (RH) sneutrino or the neutralino. Limits on supersymmetric partners from LHC simplified-model searches are imposed using SModelS and allow for light squarks and gluinos. Moreover, the upper limit on the relic abundance often favours scenarios with long-lived particles. Searches for a Z′ at the LHC remain the most unambiguous probes of this model. Interestingly, the D-term contributions to the sfermion masses make it possible to explain the anomalous magnetic moment of the muon in specific corners of the parameter space with light smuons or left-handed (LH) sneutrinos. We finally emphasize the interplay between direct searches for dark matter and LHC simplified-model searches.
Testing the Muon g-2 Anomaly at the LHC
Freitas, Ayres; Lykken, Joseph; Kell, Stefan; ...
2014-05-29
The long-standing difference between the experimental measurement and the standard-model prediction for the muon's anomalous magnetic moment, $a_\mu = (g_\mu - 2)/2$, may be explained by the presence of new weakly interacting particles with masses of a few hundred GeV. Particles of this kind can generally be directly produced at the LHC, and thus they may already be constrained by existing data. In this work, we investigate this connection between $a_\mu$ and the LHC in a model-independent approach, by introducing one or two new fields beyond the standard model with spin and weak isospin up to one. For each case, we identify the preferred parameter space for explaining the discrepancy in $a_\mu$ and derive bounds using data from LEP and the 8-TeV LHC run. Furthermore, we estimate how these limits could be improved with the 14-TeV LHC. We find that the 8-TeV results already rule out a subset of our simplified models, while almost all viable scenarios can be tested conclusively with 14-TeV data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoynev, S.; et al.
The development of Nb3Sn quadrupole magnets for the High-Luminosity LHC upgrade is a joint venture between the US LHC Accelerator Research Program (LARP) and CERN, with the goal of fabricating large-aperture quadrupoles for the LHC interaction regions (IR). The inner triplet (low-β) NbTi quadrupoles in the IR will be replaced by the stronger Nb3Sn magnets, boosting the LHC program with a 10-fold increase in integrated luminosity after the foreseen upgrades. Previously LARP conducted successful tests of short and long models with up to 120 mm aperture. The first short 150 mm aperture quadrupole model MQXFS1 was assembled with coils fabricated by both CERN and LARP. The magnet demonstrated strong performance at Fermilab's vertical magnet test facility, reaching the LHC operating limits. This paper reports the latest results from MQXFS1 tests with changed pre-stress levels. The overall magnet performance, including quench training and memory, ramp rate and temperature dependence, is also summarized.
NASA Astrophysics Data System (ADS)
Young, Andrew J.; Phillip, Denise M.; Hashimoto, Hideki
2002-12-01
The binding of xanthophylls to the main light-harvesting complex (LHC) of higher plants has been studied using the technique of in vitro reconstitution. This demonstrated that the carotenoid diol lactucaxanthin (native to many LHC) would not support the assembly of LHC whilst other diols, notably zeaxanthin and lutein, would. Analysis of the most stable forms of the carotenoid end-groups found in xanthophylls native to higher plant LHC (as determined by theoretical calculations) revealed profound differences in the adiabatic potential energy curves for the C5-C6-C7-C8 torsion angle for the ε end-groups in lactucaxanthin (6-s-trans), in comparison to carotenoids possessing a 3-hydroxy β end-group (zeaxanthin; 6-s-cis), a 3-hydroxy-4-keto β end-group (astaxanthin, 6-s-cis) or a 3-hydroxy-5,6-epoxy end-group (violaxanthin, distorted 6-s-cis). The ε end-groups of the other carotenoids studied were 6-s-trans. We examine the possible relationship between carotenoid ring-to-chain conformation and binding to LHC.
Modeling radiation damage to pixel sensors in the ATLAS detector
NASA Astrophysics Data System (ADS)
Ducourthial, A.
2018-03-01
Silicon pixel detectors are at the core of the current and planned upgrade of the ATLAS detector at the Large Hadron Collider (LHC). As the closest detector component to the interaction point, these detectors will be subject to a significant amount of radiation over their lifetime: prior to the High-Luminosity LHC (HL-LHC) [1], the innermost layers will receive a fluence in excess of 10^15 n_eq/cm^2, and the HL-LHC detector upgrades must cope with an order of magnitude higher fluence integrated over their lifetimes. Simulating radiation damage is essential in order to make accurate predictions for current and future detector performance that will enable searches for new particles and forces as well as precision measurements of Standard Model particles such as the Higgs boson. We present a digitization model that includes radiation damage effects on the ATLAS pixel sensors for the first time. In addition to thoroughly describing the setup, we present first predictions for basic pixel cluster properties alongside early studies with LHC Run 2 proton-proton collision data.
Exotic lepton searches via bound state production at the LHC
NASA Astrophysics Data System (ADS)
Barrie, Neil D.; Kobakhidze, Archil; Liang, Shelley; Talia, Matthew; Wu, Lei
2018-06-01
Heavy long-lived multi-charged leptons (MCLs) are predicted by various new physics models. These hypothetical MCLs can form bound states, owing to their high electric charges and long lifetimes. In this work, we propose a novel strategy of searching for MCLs through their bound-state production and decays. By utilising LHC 8 TeV data in searching for resonances in the diphoton channel, we exclude masses of isospin-singlet heavy leptons with electric charge |q| ≥ 6 (in units of the electron charge) below ∼1.2 TeV, which is much stronger than the corresponding 8 TeV LHC bounds from analysing the high ionisation and the long time-of-flight of MCLs. Utilising the current 13 TeV LHC diphoton channel measurements, the bound can be extended to exclude MCL masses up to ∼1.6 TeV for |q| ≥ 6. Also, we demonstrate that the conventional LHC limits from searching for MCLs produced via Drell-Yan processes can be enhanced by including the contribution of photon fusion processes.
Strontium 90: Estimation of Worldwide Deposition.
Volchok, H. L.
1964-09-25
The relation between the worldwide deposition of strontium-90, as calculated by many investigators over the last decade, and that observed in rainfall in New York City has been relatively constant. On the average, for each millicurie of strontium-90 per square mile deposited in New York City, 0.055 megacurie has been deposited on the earth's total surface. Cumulative deposits of strontium-90 on the earth's surface at various intervals over the last 10 years have been computed from this ratio. From the mean quarterly fraction of the annual strontium-90 fallout in New York City for the last 9 years, the worldwide deposition of this nuclide, equal to 2.48 megacuries, is predicted for 1964.
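The scaling described above lends itself to a one-line calculation; the following sketch simply applies the quoted 0.055 megacurie-per-(mCi/mi²) ratio, with the New York City deposition value chosen as a hypothetical input rather than taken from the paper.

```python
# Sketch of the NYC-to-worldwide scaling quoted in the abstract: each millicurie of
# strontium-90 per square mile deposited in New York City corresponds, on average,
# to 0.055 megacurie deposited over the earth's total surface.
RATIO_MCURIE_PER_MCI_PER_SQ_MILE = 0.055

def worldwide_deposition_megacuries(nyc_mci_per_sq_mile: float) -> float:
    """Estimate worldwide Sr-90 deposition (megacuries) from the NYC deposition."""
    return nyc_mci_per_sq_mile * RATIO_MCURIE_PER_MCI_PER_SQ_MILE

# Hypothetical NYC deposition of 45 mCi per square mile:
print(worldwide_deposition_megacuries(45.0))  # 2.475 Mcurie, the order of the 1964 figure quoted
```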
Total Top-Quark Pair-Production Cross Section at Hadron Colliders Through O(α_S^4)
NASA Astrophysics Data System (ADS)
Czakon, Michał; Fiedler, Paul; Mitov, Alexander
2013-06-01
We compute the next-to-next-to-leading order (NNLO) quantum chromodynamics (QCD) correction to the total cross section for the reaction gg→tt¯+X. Together with the partonic channels we computed previously, the result derived in this Letter completes the set of NNLO QCD corrections to the total top pair-production cross section at hadron colliders. Supplementing the fixed order results with soft-gluon resummation with next-to-next-to-leading logarithmic accuracy, we estimate that the theoretical uncertainty of this observable due to unknown higher order corrections is about 3% at the LHC and 2.2% at the Tevatron. We observe a good agreement between the standard model predictions and the available experimental measurements. The very high theoretical precision of this observable allows a new level of scrutiny in parton distribution functions and new physics searches.
Total top-quark pair-production cross section at hadron colliders through O(α_S^4).
Czakon, Michał; Fiedler, Paul; Mitov, Alexander
2013-06-21
We compute the next-to-next-to-leading order (NNLO) quantum chromodynamics (QCD) correction to the total cross section for the reaction gg → tt + X. Together with the partonic channels we computed previously, the result derived in this Letter completes the set of NNLO QCD corrections to the total top pair-production cross section at hadron colliders. Supplementing the fixed order results with soft-gluon resummation with next-to-next-to-leading logarithmic accuracy, we estimate that the theoretical uncertainty of this observable due to unknown higher order corrections is about 3% at the LHC and 2.2% at the Tevatron. We observe a good agreement between the standard model predictions and the available experimental measurements. The very high theoretical precision of this observable allows a new level of scrutiny in parton distribution functions and new physics searches.
aMCfast: automation of fast NLO computations for PDF fits
NASA Astrophysics Data System (ADS)
Bertone, Valerio; Frederix, Rikkert; Frixione, Stefano; Rojo, Juan; Sutton, Mark
2014-08-01
We present the interface between MadGraph5_aMC@NLO, a self-contained program that calculates cross sections up to next-to-leading order accuracy in an automated manner, and APPLgrid, a code that parametrises such cross sections in the form of look-up tables which can be used for the fast computations needed in the context of PDF fits. The main characteristic of this interface, which we dub aMCfast, is that it, too, is fully automated, which removes the need to extract manually the process-specific information for additional physics processes, as is the case with other matrix-element calculators, and renders it straightforward to include any new process in the PDF fits. We demonstrate this by studying several cases which are easily measured at the LHC, have a good constraining power on PDFs, and some of which were previously unavailable in the form of a fast interface.
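The central idea behind such fast interfaces, precomputing perturbative weights on an interpolation grid once so that any candidate PDF can be convolved with them cheaply during a fit, can be illustrated with a toy sketch. The grid, weights and PDF below are invented for illustration and do not use the APPLgrid or aMCfast APIs.

```python
import numpy as np

# Toy look-up table: weights W[i, j] would be filled once from an NLO computation
# at fixed interpolation nodes in (x, Q^2); re-convolving with a new PDF is then cheap.
x_nodes = np.logspace(-4, 0, 50)     # momentum-fraction nodes (illustrative)
q2_nodes = np.logspace(1, 5, 20)     # scale nodes in GeV^2 (illustrative)
weights = np.random.default_rng(0).random((x_nodes.size, q2_nodes.size))

def toy_pdf(x, q2):
    """Stand-in parton density; in a real fit this is the candidate PDF being tested."""
    return (1.0 - x) ** 3 / x

def fast_cross_section(weights, pdf):
    """Cross-section estimate as a weighted sum of PDF values at the grid nodes."""
    return float(np.sum(weights * pdf(x_nodes[:, None], q2_nodes[None, :])))

print(fast_cross_section(weights, toy_pdf))  # cheap to re-evaluate for every trial PDF
```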
NASA Astrophysics Data System (ADS)
Delle Fratte, C.; Kennedy, J. A.; Kluth, S.; Mazzaferro, L.
2015-12-01
In a grid computing infrastructure, tasks such as continuous upgrades, service installations and software deployments are part of an admin's daily work. In such an environment, tools to help with the management, provisioning and monitoring of the deployed systems and services have become crucial. As experiments such as those at the LHC increase in scale, the computing infrastructure also becomes larger and more complex. Moreover, today's admins increasingly work within teams that share responsibilities and tasks. Such a scaled-up situation requires tools that not only simplify the workload on administrators but also enable them to work seamlessly in teams. In this paper we present our experience of managing the Max Planck Institute Tier-2 using Puppet and Gitolite in a cooperative way to help the system administrators in their daily work. In addition to describing the Puppet-Gitolite system, best practices and customizations will also be shown.
Bruning, Oliver
2018-05-23
Overview of the operation and upgrade plans for the machine, and of the upgrade studies and task forces. The Chamonix 2010 discussions led to five new task forces: planning for a long shutdown in 2012 for splice consolidation; long-term consolidation planning for the injector complex; an SPS upgrade task force (accelerated program for the SPS upgrade); a PSB upgrade and its implications for the PS (e.g. radiation); and the LHC High Luminosity project (investigating planning for one upgrade by 2018-2020). In addition, a dedicated study was launched on doubling the beam energy in the LHC (HE-LHC).
Black Holes and the Large Hadron Collider
NASA Astrophysics Data System (ADS)
Roy, Arunava
2011-12-01
The European Center for Nuclear Research or CERN's Large Hadron Collider (LHC) has caught our attention partly due to the film "Angels and Demons." In the movie, an antimatter bomb attack on the Vatican is foiled by the protagonist. Perhaps just as controversial is the formation of mini black holes (BHs). Recently, the American Physical Society [1] website featured an article on BH formation at the LHC [2]. This article examines some aspects of mini BHs and explores the possibility of their detection at the LHC.
BPM calibration-independent LHC optics correction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calaga, R.; Tomas, R.; Giovannozzi, M.
2007-06-25
The tight mechanical aperture of the LHC imposes severe constraints on both the beta and dispersion beating. Robust techniques to compensate these errors are critical for the operation of high-intensity beams in the LHC. We present simulations using realistic errors from magnet measurements and alignment tolerances in the presence of BPM noise. The correction studies reveal that the use of BPM-calibration-independent and model-independent observables is a key ingredient to accomplish optics correction. Experiments at RHIC to verify the algorithms for optics correction are also presented.
Charged-particle multiplicity at LHC energies
Grosse-Oetringhaus, Jan Fiete
2018-05-24
The talk presents the measurement of the pseudorapidity density and the multiplicity distribution with ALICE at the achieved LHC energies of 0.9 and 2.36 TeV. An overview of multiplicity measurements prior to the LHC is given and the related theoretical concepts are briefly discussed. The analysis procedure is presented and the systematic uncertainties are detailed. The applied acceptance corrections and the treatment of diffraction are discussed. The results are compared with model predictions. The validity of KNO scaling in restricted phase-space regions is revisited.
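For reference, KNO (Koba-Nielsen-Olesen) scaling, the hypothesis revisited in the last sentence, states that the charged-particle multiplicity distribution collapses onto an energy-independent curve once rescaled by its mean; a standard way of writing it is

$$\langle n \rangle \, P_n(\sqrt{s}) \;\simeq\; \Psi\!\left(\frac{n}{\langle n \rangle}\right),$$

where P_n is the probability of producing n charged particles at centre-of-mass energy √s and Ψ is a universal function.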
NASA Astrophysics Data System (ADS)
Nayak, Gouranga C.
2017-12-01
Recently we have proved the factorization of NRQCD S-wave heavy quarkonium production at all orders in the coupling constant. In this paper we extend this to prove the factorization of infrared divergences in χ_{cJ} production from a color-singlet $c\bar{c}$ pair in non-equilibrium QCD at RHIC and LHC at all orders in the coupling constant. This can be relevant to the study of the quark-gluon plasma at RHIC and LHC.
Uncertainties on exclusive diffractive Higgs boson and jet production at the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dechambre, A.; CEA/IRFU/Service de physique des particules, CEA/Saclay; Kepka, O.
2011-03-01
Two theoretical descriptions of exclusive diffractive jet and Higgs production at the LHC were implemented into the FPMC generator: the Khoze, Martin, Ryskin model and the Cudell, Hernandez, Ivanov, Dechambre exclusive model. We then study their theoretical uncertainties. We compare their predictions to the CDF measurement and discuss the possibility of constraining exclusive Higgs production at the LHC with early measurements of exclusive jets. We show that the present theoretical uncertainties can be reduced with such data by a factor of 5.
Challenges and Plans for the Proton Injectors
NASA Astrophysics Data System (ADS)
Garoby, R.
The flexibility of the LHC injectors combined with multiple longitudinal beam gymnastics have significantly contributed to the excellent performance of the LHC during its first run, delivering beam with twice the ultimate brightness with 50 ns bunch spacing. To meet the requirements of the High Luminosity LHC, 25 ns bunch spacing is required, the intensity per bunch at injection has to double and brightness shall almost triple. Extensive hardware modifications or additions are therefore necessary in all accelerators of the injector complex, as well as new beam gymnastics.
Modelling and measurements of bunch profiles at the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papadopoulou, S.; Antoniou, F.; Argyropoulos, T.
The bunch profiles in the LHC are often observed to be non-Gaussian, both at Flat Bottom (FB) and Flat Top (FT) energies. Especially at FT, an evolution of the tail population in time is observed. In this respect, the Monte-Carlo Software for IBS and Radiation effects (SIRE) is used to track different types of beam distributions. The impact of the distribution shape on the evolution of bunch characteristics is studied. The results are compared with observations from the LHC Run 2 data.
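To make the notion of non-Gaussian tails concrete, one common check is to fit a Gaussian to a measured longitudinal profile and inspect the population outside the core; the sketch below uses a synthetic, heavy-tailed profile rather than LHC data, and the functional form and numbers are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# Synthetic "measured" profile with over-populated tails plus a little noise.
rng = np.random.default_rng(2)
t = np.linspace(-2.0, 2.0, 200)                                   # ns, illustrative
profile = 1.0 / (1.0 + (t / 0.45) ** 2) ** 2 + 0.01 * rng.normal(size=t.size)

popt, _ = curve_fit(gaussian, t, profile, p0=[1.0, 0.0, 0.4])
core = np.abs(t) < 2.0 * abs(popt[2])
print("fraction of the profile outside 2 sigma:", profile[~core].sum() / profile.sum())
```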
Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava
Faced with physical and energy density limitations on clock speed, contemporary microprocessor designers have increasingly turned to on-chip parallelism for performance gains. Examples include the Intel Xeon Phi, GPGPUs, and similar technologies. Algorithms should accordingly be designed with ample amounts of fine-grained parallelism if they are to realize the full performance of the hardware. This requirement can be challenging for algorithms that are naturally expressed as a sequence of small-matrix operations, such as the Kalman filter methods widely in use in high-energy physics experiments. In the High-Luminosity Large Hadron Collider (HL-LHC), for example, one of the dominant computational problems is expected to be finding and fitting charged-particle tracks during event reconstruction; today, the most common track-finding methods are those based on the Kalman filter. Experience at the LHC, both in the trigger and offline, has shown that these methods are robust and provide high physics performance. Previously we reported the significant parallel speedups that resulted from our efforts to adapt Kalman-filter-based tracking to many-core architectures such as Intel Xeon Phi. Here we report on how effectively those techniques can be applied to more realistic detector configurations and event complexity.
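The kind of fine-grained parallelism the authors describe, applying the same small-matrix Kalman update to many track candidates at once, can be sketched in a few lines of NumPy; the state dimension, measurement model and numbers below are toy choices and are not taken from the paper's software.

```python
import numpy as np

def batched_kalman_update(x, P, z, H, R):
    """One Kalman measurement update applied to N track candidates simultaneously.

    x: (N, s) states, P: (N, s, s) covariances, z: (N, m) measurements,
    H: (m, s) projection matrix, R: (m, m) measurement noise.
    """
    S = H @ P @ H.T + R                          # (N, m, m) innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # (N, s, m) Kalman gain
    residual = z - x @ H.T                       # (N, m)
    x_new = x + np.einsum("nsm,nm->ns", K, residual)
    P_new = P - K @ H @ P
    return x_new, P_new

# Toy batch: 10000 tracks with a 5-parameter state and a 2D hit measurement.
N, s, m = 10_000, 5, 2
x = np.zeros((N, s))
P = np.tile(np.eye(s), (N, 1, 1))
H = np.zeros((m, s)); H[0, 0] = H[1, 1] = 1.0
R = 0.01 * np.eye(m)
z = np.random.default_rng(1).normal(size=(N, m))
x, P = batched_kalman_update(x, P, z, H, R)
```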
Spark and HPC for High Energy Physics Data Analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sehrish, Saba; Kowalkowski, Jim; Paterno, Marc
A full High Energy Physics (HEP) data analysis is divided into multiple data reduction phases. Processing within these phases is extremely time consuming, therefore intermediate results are stored in files held in mass storage systems and referenced as part of large datasets. This processing model limits what can be done with interactive data analytics. Growth in the size and complexity of experimental datasets, along with emerging big data tools, is beginning to cause changes to the traditional ways of doing data analyses. Use of big data tools for HEP analysis looks promising, mainly because extremely large HEP datasets can be represented and held in memory across a system, and accessed interactively by encoding an analysis using high-level programming abstractions. The mainstream tools, however, are not designed for scientific computing or for exploiting the available HPC platform features. We use an example from the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) in Geneva, Switzerland. The LHC is the highest energy particle collider in the world. Our use case focuses on searching for new types of elementary particles explaining Dark Matter in the universe. We use HDF5 as our input data format, and Spark to implement the use case. We show the benefits and limitations of using Spark with HDF5 on Edison at NERSC.
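A minimal sketch of this style of workflow, loading columns from HDF5 into memory and handing them to Spark for an interactive selection, is shown below; the file name, dataset keys and cut values are hypothetical and this is not the authors' analysis code.

```python
import h5py
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("toy-hep-analysis").getOrCreate()

# Read two hypothetical columns from an HDF5 file into memory.
with h5py.File("events.h5", "r") as f:            # hypothetical input file
    met = np.asarray(f["missing_et"])             # hypothetical dataset names
    n_jets = np.asarray(f["n_jets"])

# Distribute the events and apply a simple selection interactively.
events = spark.sparkContext.parallelize(list(zip(met.tolist(), n_jets.tolist())))
selected = events.filter(lambda ev: ev[0] > 200.0 and ev[1] >= 2)
print("events passing the cut:", selected.count())
spark.stop()
```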
A phenomenological study on the production of Higgs bosons in the cSMCS model at the LHC
NASA Astrophysics Data System (ADS)
Darvishi, N.; Masouminia, M. R.
2017-10-01
In the present work, we intend to predict the production rates of the Higgs bosons in the simplest extension of the Standard Model (SM) by a neutral complex singlet (cSMCS). This model has an additional source of CP violation and provides a strong enough first-order electroweak phase transition to generate the baryon asymmetry of the universe (BAU). The scalar spectrum of the cSMCS includes three neutral Higgs particles, with the lightest one considered to be the 125 GeV Higgs boson found at the LHC. The SM-like Higgs boson comes mostly from the SM-like SU(2) doublet, with a small correction from the singlet. To predict the production rates of the Higgs bosons, we use a conventional effective LO QCD framework and the unintegrated parton distribution functions (UPDF) of Kimber-Martin-Ryskin (KMR). We first compute the SM Higgs production cross-section and compare the results to the existing theoretical calculations from different frameworks as well as the experimental data from the CMS and ATLAS collaborations. It is shown that our framework is capable of producing sound predictions for these high-energy QCD events in the SM. Afterwards we present our predictions for the Higgs boson production in the cSMCS.
Electrical properties study under radiation of the 3D-open-shell-electrode detector
NASA Astrophysics Data System (ADS)
Liu, Manwen; Li, Zheng
2018-05-01
Since the 3D-Open-Shell-Electrode Detector (3DOSED) has been proposed and its structure optimized, it is important to study the 3DOSED's electrical properties to determine the detector's working performance, especially in heavy radiation environments such as the Large Hadron Collider (LHC) and its upgrade, the High-Luminosity LHC (HL-LHC), at CERN. In this work, full 3D technology computer-aided design (TCAD) simulations have been performed on this novel silicon detector structure. Simulated detector properties include the electric field distribution, the electric potential distribution, current-voltage (I-V) characteristics, capacitance-voltage (C-V) characteristics, charge collection properties, and the full depletion voltage. Through the analysis of calculations and simulation results, we find that the 3DOSED's electric field and potential distributions are very uniform, even in the tiny region near the shell openings, with little perturbation. The novel detector fits the design purpose of collecting charges generated by particles/light well, with a well-defined funnel-shaped electric potential distribution that makes these charges drift towards the central collection electrode. Furthermore, by analyzing the I-V, C-V, charge collection properties and full depletion voltage, we can expect that the novel detector will perform well, even in heavy radiation environments.
ALICE HLT Run 2 performance overview.
NASA Astrophysics Data System (ADS)
Krzewicki, Mikolaj; Lindenstruth, Volker;
2017-10-01
For the LHC Run 2 the ALICE HLT architecture was consolidated to comply with the upgraded ALICE detector readout technology. The software framework was optimized and extended to cope with the increased data load. Online calibration of the TPC using the online tracking capabilities of the ALICE HLT was deployed. Offline calibration code was adapted to run both online and offline, and the HLT framework was extended to support this. The performance of this scheme is important for Run 3 related developments. An additional data transport approach was developed using the ZeroMQ library, forming at the same time a test bed for the new data flow model of the O2 system, where further development of this concept is ongoing. This messaging technology was used to implement the calibration feedback loop augmenting the existing, graph-oriented HLT transport framework. Utilising the online reconstruction of many detectors, a new asynchronous monitoring scheme was developed to allow real-time monitoring of the physics performance of the ALICE detector, on top of the new messaging scheme for both internal and external communication. Spare computing resources comprising the production and development clusters are run as a Tier-2 GRID site using an OpenStack-based setup. The development cluster is running continuously; the production cluster contributes resources opportunistically during periods of LHC inactivity.
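ZeroMQ-based transports of the kind mentioned above follow standard messaging patterns; a minimal pyzmq publish/subscribe sketch of a calibration feedback loop is given below, with the endpoint and payload invented for illustration (this is not the ALICE HLT code).

```python
import json
import zmq

context = zmq.Context()

# Producer side: publish a freshly computed calibration object.
pub = context.socket(zmq.PUB)
pub.bind("tcp://*:5556")                                   # hypothetical endpoint
pub.send_string(json.dumps({"tpc_drift_velocity": 2.58}))  # hypothetical payload

# Consumer side (normally a separate process): subscribe and apply the update.
sub = context.socket(zmq.SUB)
sub.connect("tcp://localhost:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "")                   # no topic filtering
# calibration = json.loads(sub.recv_string())              # blocks until a message arrives
```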
Achieving production-level use of HEP software at the Argonne Leadership Computing Facility
NASA Astrophysics Data System (ADS)
Uram, T. D.; Childers, J. T.; LeCompte, T. J.; Papka, M. E.; Benjamin, D.
2015-12-01
HEP's demand for computing resources has grown beyond the capacity of the Grid, and these demands will accelerate with the higher energy and luminosity planned for Run II. Mira, the ten petaFLOPs supercomputer at the Argonne Leadership Computing Facility, is a potentially significant compute resource for HEP research. Through an award of fifty million hours on Mira, we have delivered millions of events to LHC experiments by establishing the means of marshaling jobs through serial stages on local clusters, and parallel stages on Mira. We are running several HEP applications, including Alpgen, Pythia, Sherpa, and Geant4. Event generators, such as Sherpa, typically have a split workload: a small scale integration phase, and a second, more scalable, event-generation phase. To accommodate this workload on Mira we have developed two Python-based Django applications, Balsam and ARGO. Balsam is a generalized scheduler interface which uses a plugin system for interacting with scheduler software such as HTCondor, Cobalt, and TORQUE. ARGO is a workflow manager that submits jobs to instances of Balsam. Through these mechanisms, the serial and parallel tasks within jobs are executed on the appropriate resources. This approach and its integration with the PanDA production system will be discussed.
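The plugin idea behind a generalized scheduler interface such as Balsam can be sketched as follows; the class and method names are invented for illustration and do not reproduce the actual Balsam or ARGO APIs.

```python
from abc import ABC, abstractmethod

class SchedulerPlugin(ABC):
    """Generic interface a workflow manager can target, independent of the batch system."""

    @abstractmethod
    def submit(self, script_path: str) -> str:
        """Submit a job script and return a scheduler-specific job id."""

class CobaltPlugin(SchedulerPlugin):
    def submit(self, script_path: str) -> str:
        # Real code would shell out to the Cobalt 'qsub' command here.
        return f"cobalt-{script_path}"

class HTCondorPlugin(SchedulerPlugin):
    def submit(self, script_path: str) -> str:
        # Real code would call 'condor_submit' here.
        return f"condor-{script_path}"

def run_workflow(stages: list[str], scheduler: SchedulerPlugin) -> list[str]:
    """A workflow manager only ever sees the generic interface, so serial integration
    stages and parallel event-generation stages can be routed to different resources."""
    return [scheduler.submit(stage) for stage in stages]

print(run_workflow(["integrate.sh", "generate_events.sh"], CobaltPlugin()))
```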
Graphics Processors in HEP Low-Level Trigger Systems
NASA Astrophysics Data System (ADS)
Ammendola, Roberto; Biagioni, Andrea; Chiozzi, Stefano; Cotta Ramusino, Angelo; Cretaro, Paolo; Di Lorenzo, Stefano; Fantechi, Riccardo; Fiorini, Massimiliano; Frezza, Ottorino; Lamanna, Gianluca; Lo Cicero, Francesca; Lonardo, Alessandro; Martinelli, Michele; Neri, Ilaria; Paolucci, Pier Stanislao; Pastorelli, Elena; Piandani, Roberto; Pontisso, Luca; Rossetti, Davide; Simula, Francesco; Sozzi, Marco; Vicini, Piero
2016-11-01
Usage of Graphics Processing Units (GPUs) in the so called general-purpose computing is emerging as an effective approach in several fields of science, although so far applications have been employing GPUs typically for offline computations. Taking into account the steady performance increase of GPU architectures in terms of computing power and I/O capacity, the real-time applications of these devices can thrive in high-energy physics data acquisition and trigger systems. We will examine the use of online parallel computing on GPUs for the synchronous low-level trigger, focusing on tests performed on the trigger system of the CERN NA62 experiment. To successfully integrate GPUs in such an online environment, latencies of all components need analysing, networking being the most critical. To keep it under control, we envisioned NaNet, an FPGA-based PCIe Network Interface Card (NIC) enabling GPUDirect connection. Furthermore, it is assessed how specific trigger algorithms can be parallelized and thus benefit from a GPU implementation, in terms of increased execution speed. Such improvements are particularly relevant for the foreseen Large Hadron Collider (LHC) luminosity upgrade where highly selective algorithms will be essential to maintain sustainable trigger rates with very high pileup.
Managing a tier-2 computer centre with a private cloud infrastructure
NASA Astrophysics Data System (ADS)
Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara
2014-06-01
In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.
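Exposing an EC2-compatible API, as mentioned in the last sentence, means that standard cloud clients can drive such a private cloud directly; a hedged sketch using boto3 follows, in which the endpoint, credentials, image ID and instance type are all placeholders.

```python
import boto3

# Point a standard EC2 client at the private cloud's EC2-compatible endpoint.
ec2 = boto3.client(
    "ec2",
    endpoint_url="https://cloud.example.org:4567",  # placeholder endpoint
    aws_access_key_id="PLACEHOLDER",
    aws_secret_access_key="PLACEHOLDER",
    region_name="site-local",
)

# Start one worker-node VM from a pre-built, contextualised image.
response = ec2.run_instances(
    ImageId="ami-00000000",       # placeholder image id
    InstanceType="m1.large",      # placeholder flavour
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```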
Z -Boson Production in Association with a Jet at Next-To-Next-To-Leading Order in Perturbative QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boughezal, Radja; Campbell, John; Ellis, R. Keith
2016-04-01
We present the first complete calculation of Z-boson production in association with a jet in hadronic collisions through next-to-next-to-leading order in perturbative QCD. Our computation uses the recently proposed N-jettiness subtraction scheme to regulate the infrared divergences that appear in the real-emission contributions. We present phenomenological results for 13 TeV proton-proton collisions with fully realistic fiducial cuts on the final-state particles. The remaining theoretical uncertainties after the inclusion of our calculations are at the percent level, making the Z + jet channel ready for precision studies at the LHC run II.
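For context, the N-jettiness event shape on which this subtraction scheme is based is commonly defined (conventions for the normalisation vary between papers) as

$$\mathcal{T}_N \;=\; \sum_k \min_i \left\{ \frac{2\, q_i \cdot p_k}{Q_i} \right\},$$

where the sum runs over final-state parton momenta p_k, the q_i are the beam and jet reference directions, and the Q_i are normalisation scales; the region above a small cut in \mathcal{T}_N is computed with standard NLO methods, while the singular \mathcal{T}_N → 0 region is handled analytically.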
Evolution of user analysis on the grid in ATLAS
NASA Astrophysics Data System (ADS)
Dewhurst, A.; Legger, F.; ATLAS Collaboration
2017-10-01
More than one thousand physicists analyse data collected by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN through 150 computing facilities around the world. Efficient distributed analysis requires optimal resource usage and the interplay of several factors: robust grid and software infrastructures, and system capability to adapt to different workloads. The continuous automatic validation of grid sites and the user support provided by a dedicated team of expert shifters have been proven to provide a solid distributed analysis system for ATLAS users. Typical user workflows on the grid, and their associated metrics, are discussed. Measurements of user job performance and typical requirements are also shown.
Recent developments in user-job management with Ganga
NASA Astrophysics Data System (ADS)
Currie, R.; Elmsheuser, J.; Fay, R.; Owen, P. H.; Richards, A.; Slater, M.; Sutcliffe, W.; Williams, M.
2015-12-01
The Ganga project was originally developed for use by LHC experiments and has been used extensively throughout Run 1 in both LHCb and ATLAS. This document describes some of the most recent developments within the Ganga project. There have been improvements in the handling of large-scale computational tasks in the form of a new GangaTasks infrastructure. Improvements in file handling through a new IGangaFile interface make handling files largely transparent to the end user. In addition to this, the performance and usability of Ganga have both been addressed through the development of a new queues system that allows for parallel processing of job-related tasks.
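For readers unfamiliar with Ganga, the flavour of its Python job interface is roughly as follows; this is a simplified sketch to be typed at the interactive ganga prompt, and the available applications and backends differ between sites and releases.

```python
# Inside a Ganga session (the interactive 'ganga' prompt provides these classes).
j = Job(name="toy-analysis")
j.application = Executable(exe="/bin/echo", args=["hello from the grid"])
j.backend = Local()      # swap for a grid backend such as Dirac() where configured
j.submit()

print(j.status)          # e.g. 'submitted', later 'completed'
```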
The complete NLO corrections to dijet hadroproduction
NASA Astrophysics Data System (ADS)
Frederix, R.; Frixione, S.; Hirschi, V.; Pagani, D.; Shao, H.-S.; Zaro, M.
2017-04-01
We study the production of jets in hadronic collisions, by computing all contributions proportional to α_S^n α^m, with n + m = 2 and n + m = 3. These correspond to leading and next-to-leading order results, respectively, for single-inclusive and dijet observables in a perturbative expansion that includes both QCD and electroweak effects. We discuss issues relevant to the definition of hadronic jets in the context of electroweak corrections, and present sample phenomenological predictions for the 13-TeV LHC. We find that both the leading and next-to-leading order contributions largely respect the relative hierarchy established by the respective coupling-constant combinations.
The complete NLO corrections to dijet hadroproduction
Frederix, R.; Frixione, S.; Hirschi, V.; ...
2017-04-12
We study the production of jets in hadronic collisions, by computing all contributions proportional to α_S^n α^m, with n + m = 2 and n + m = 3. These correspond to leading and next-to-leading order results, respectively, for single-inclusive and dijet observables in a perturbative expansion that includes both QCD and electroweak effects. We discuss issues relevant to the definition of hadronic jets in the context of electroweak corrections, and present sample phenomenological predictions for the 13-TeV LHC. We find that both the leading and next-to-leading order contributions largely respect the relative hierarchy established by the respective coupling-constant combinations.
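Spelling out the power counting used in this abstract: at Born level (n + m = 2) and at next-to-leading order (n + m = 3) the contributing coupling combinations are

$$\mathcal{O}(\alpha_S^2),\ \mathcal{O}(\alpha_S\,\alpha),\ \mathcal{O}(\alpha^2) \qquad\text{and}\qquad \mathcal{O}(\alpha_S^3),\ \mathcal{O}(\alpha_S^2\,\alpha),\ \mathcal{O}(\alpha_S\,\alpha^2),\ \mathcal{O}(\alpha^3),$$

with the first term of each set corresponding to the usual pure-QCD leading- and next-to-leading-order results.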
The GeantV project: Preparing the future of simulation
Amadio, G.; Apostolakis, J.; Bandieramonte, M.; ...
2015-12-23
Detector simulation is consuming at least half of the HEP computing cycles, and even so, experiments have to take hard decisions on what to simulate, as their needs greatly surpass the availability of computing resources. New experiments still in the design phase such as FCC, CLIC and ILC as well as upgraded versions of the existing LHC detectors will push further the simulation requirements. Since the increase in computing resources is not likely to keep pace with our needs, it is therefore necessary to explore innovative ways of speeding up simulation in order to sustain the progress of High Energy Physics. The GeantV project aims at developing a high performance detector simulation system integrating fast and full simulation that can be ported on different computing architectures, including CPU accelerators. After more than two years of R&D the project has produced a prototype capable of transporting particles in complex geometries exploiting micro-parallelism, SIMD and multithreading. Portability is obtained via C++ template techniques that allow the development of machine-independent computational kernels. Furthermore, a set of tables derived from Geant4 for cross sections and final states provides a realistic shower development and, having been ported into a Geant4 physics list, can be used as a basis for a direct performance comparison.
The long journey to the Higgs boson and beyond at the LHC: Emphasis on ATLAS
NASA Astrophysics Data System (ADS)
Jenni, Peter
2016-09-01
The journey in search for the Higgs boson with the ATLAS and CMS experiments at the Large Hadron Collider (LHC) at CERN started more than two decades ago. But the first discussions motivating the LHC project dream date back even further into the 1980s. This article will recall some of these early historical considerations, mention some of the LHC machine milestones and achievements, focus as an example of a technological challenge on the unique ATLAS superconducting magnet system, and then give an account of the physics results so far, leading to, and featuring particularly, the Higgs boson results, and sketching finally prospects for the future. With its emphasis on the ATLAS experiment it is complementary to the preceding article by Tejinder S. Virdee which focused on the CMS experiment.
The Long Journey to the Higgs Boson and Beyond at the LHC Part II: Emphasis on ATLAS
NASA Astrophysics Data System (ADS)
Jenni, Peter
The journey in search for the Higgs boson with the ATLAS and CMS experiments at the Large Hadron Collider (LHC) at CERN started more than two decades ago. But the first discussions motivating the LHC project dream date back even further into the 1980s. This article will recall some of these early historical considerations, mention some of the LHC machine milestones and achievements, focus as an example of a technological challenge on the unique ATLAS superconducting magnet system, and then give an account of the physics results so far, leading to, and featuring particularly, the Higgs boson results, and sketching finally prospects for the future. With its emphasis on the ATLAS experiment it is complementary to the preceding article by Tejinder S. Virdee which focused on the CMS experiment.
Slepton pair production at the LHC in NLO+NLL with resummation-improved parton densities
NASA Astrophysics Data System (ADS)
Fiaschi, Juri; Klasen, Michael
2018-03-01
Novel PDFs taking into account resummation-improved matrix elements, albeit only in the fit of a reduced data set, allow for consistent NLO+NLL calculations of slepton pair production at the LHC. We apply a factorisation method to this process that minimises the effect of the data set reduction, avoids the problem of outlier replicas in the NNPDF method for PDF uncertainties and preserves the reduction of the scale uncertainty. For Run II of the LHC, and for left-handed selectron/smuon, right-handed stau and maximally mixed stau production, we confirm that the consistent use of threshold-improved PDFs partially compensates the resummation contributions in the matrix elements. Together with the reduction of the scale uncertainty at NLO+NLL, the described method further increases the reliability of slepton pair production cross sections at the LHC.
The Lhc Collider:. Status and Outlook to Operation
NASA Astrophysics Data System (ADS)
Schmidt, Rüdiger
2006-04-01
For the LHC to provide particle physics with proton-proton collisions at a centre-of-mass energy of 14 TeV and a luminosity of 10^34 cm^-2 s^-1, the machine will operate with high-field dipole magnets using NbTi superconductors cooled to below the lambda point of helium. In order to reach design performance, the LHC requires both the use of existing technologies pushed to their limits and the application of novel technologies. The construction follows a decade of intensive R&D and technical validation of major collider sub-systems. This paper will focus on the required LHC performance and on the implications for the technologies used. The consequences of the unprecedented quantity of energy stored in both magnets and beams will be discussed. A brief outlook to operation and its consequences for machine protection will be given.
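As a reminder of where the quoted design luminosity comes from, the standard expression for the luminosity of a collider with Gaussian bunches, in the notation commonly used in the LHC design literature, is approximately

$$L \;=\; \frac{N_b^{2}\, n_b\, f_{\mathrm{rev}}\, \gamma_r}{4\pi\, \varepsilon_n\, \beta^{*}}\; F ,$$

where N_b is the number of protons per bunch, n_b the number of bunches per beam, f_rev the revolution frequency, γ_r the relativistic gamma factor, ε_n the normalised transverse emittance, β* the betatron function at the interaction point, and F a geometric reduction factor accounting for the crossing angle.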
The LHCb trigger and its upgrade
NASA Astrophysics Data System (ADS)
Dziurda, A.; LHCb Trigger Group
2016-07-01
The current LHCb trigger system consists of a hardware level, which reduces the LHC inelastic collision rate of 30 MHz to a rate at which the entire detector can be read out. In a second level, implemented in a farm of 20k parallel-processing CPUs, the event rate is reduced to about 5 kHz. We review the performance of the LHCb trigger system during Run I of the LHC. Special attention is given to the use of multivariate analyses in the High Level Trigger. The major bottleneck for hadronic decays is the hardware trigger. LHCb plans a major upgrade of the detector and DAQ system in the LHC shutdown of 2018, enabling a purely software-based trigger to process the full 30 MHz of inelastic collisions delivered by the LHC. We demonstrate that the planned architecture will be able to meet this challenge.
NASA Astrophysics Data System (ADS)
Senkin, Sergey
2018-01-01
The ATLAS Collaboration has started a vast programme of upgrades in the context of the high-luminosity LHC (HL-LHC), foreseen for 2024. We present here one of the front-end readout options, an ASIC called FATALIC, proposed for the HL-LHC upgrade of the ATLAS Tile Calorimeter. Based on a 130 nm CMOS technology, FATALIC performs the complete signal processing, including amplification, shaping and digitisation. We describe the full characterisation of FATALIC and also the Optimal Filtering signal reconstruction method adapted to fully exploit the FATALIC three-range layout. Additionally, we present the resolution performance of the whole chain measured using the charge injection system designed for calibration. Finally, we discuss the results of the signal reconstruction applied to real data collected during a preliminary beam test at CERN.
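The Optimal Filtering reconstruction mentioned above estimates the pulse amplitude as a fixed linear combination of the digitised samples; the schematic sketch below uses made-up weights and samples (the real method is calibrated per channel and also reconstructs the signal time and pedestal).

```python
import numpy as np

def optimal_filter_amplitude(samples, weights, pedestal=0.0):
    """Amplitude estimate A = sum_i a_i * (s_i - pedestal) from digitised samples."""
    return float(np.dot(weights, np.asarray(samples, dtype=float) - pedestal))

# Illustrative 7-sample pulse (ADC counts) and weights; not real calibration constants.
samples = [51, 62, 140, 230, 180, 95, 60]
weights = [-0.10, 0.00, 0.25, 0.55, 0.25, 0.05, 0.00]
print(optimal_filter_amplitude(samples, weights, pedestal=50.0))
```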