Michael Ernst
2017-12-09
As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide, Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,
DOE Office of Scientific and Technical Information (OSTI.GOV)
De, K; Jha, S; Klimentov, A
2016-01-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the Mira supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes.
This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for the ATLAS experiment since September 2015. We will present our current accomplishments with running the PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities' infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
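The light-weight MPI wrapper idea in the abstract above can be illustrated with a short sketch: each MPI rank claims a disjoint, round-robin slice of single-threaded payloads, so a single batch-queue allocation drives many independent jobs in parallel. This is a minimal simulation of the partitioning pattern, not the actual PanDA pilot code; all names are assumptions.

```python
# Sketch of the light-weight MPI wrapper pattern: one batch allocation,
# many single-threaded payloads, each MPI rank running a disjoint subset.
# Names are illustrative, not the real PanDA pilot wrapper.

def partition_for_rank(jobs, rank, size):
    """Round-robin slice of the job list owned by one MPI rank."""
    return jobs[rank::size]

def run_rank(jobs, rank, size):
    """What one rank executes inside the allocation."""
    done = []
    for job in partition_for_rank(jobs, rank, size):
        # The real wrapper would exec the Monte-Carlo payload binary here.
        done.append(job)
    return done

jobs = [f"evgen_{i}" for i in range(10)]
size = 4  # e.g. four single-threaded payloads per multi-core worker node
covered = [j for r in range(size) for j in run_rank(jobs, r, size)]
assert sorted(covered) == sorted(jobs)  # every payload runs exactly once
```

In a real MPI program, `rank` and `size` would come from `MPI.COMM_WORLD`; the point of the sketch is only that the slices are disjoint and jointly cover all payloads.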
How Data Becomes Physics: Inside the RACF
Ernst, Michael; Rind, Ofer; Rajagopalan, Srini; Lauret, Jerome; Pinkenburg, Chris
2018-06-22
The RHIC & ATLAS Computing Facility (RACF) at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory sits at the center of a global computing network. It connects more than 2,500 researchers around the world with the data generated by millions of particle collisions taking place each second at Brookhaven Lab's Relativistic Heavy Ion Collider (RHIC, a DOE Office of Science User Facility for nuclear physics research), and the ATLAS experiment at the Large Hadron Collider in Europe. Watch this video to learn how the people and computing resources of the RACF serve these scientists to turn petabytes of raw data into physics discoveries.
NASA Astrophysics Data System (ADS)
Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.
2016-10-01
The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for the ATLAS experiment since September 2015.
We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing
NASA Astrophysics Data System (ADS)
Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.
2015-05-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, O(10^3) users, and ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data’ (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is under way to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system.
We will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.
Fine grained event processing on HPCs with the ATLAS Yoda system
NASA Astrophysics Data System (ADS)
Calafiura, Paolo; De, Kaushik; Guan, Wen; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Tsulaia, Vakhtang; Van Gemmeren, Peter; Wenaus, Torre
2015-12-01
High performance computing facilities present unique challenges and opportunities for HEP event processing. The massive scale of many HPC systems means that fractionally small utilization can yield large returns in processing throughput. Parallel applications which can dynamically and efficiently fill any scheduling opportunities the resource presents benefit both the facility (maximal utilization) and the (compute-limited) science. The ATLAS Yoda system provides this capability to HEP-like event processing applications by implementing event-level processing in an MPI-based master-client model that integrates seamlessly with the more broadly scoped ATLAS Event Service. Fine grained, event level work assignments are intelligently dispatched to parallel workers to sustain full utilization on all cores, with outputs streamed off to destination object stores in near real time with similarly fine granularity, such that processing can proceed until termination with full utilization. The system offers the efficiency and scheduling flexibility of preemption without requiring the application actually support or employ check-pointing. We will present the new Yoda system, its motivations, architecture, implementation, and applications in ATLAS data processing at several US HPC centers.
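The master-client, event-level dispatch pattern that the Yoda abstract describes can be sketched without MPI: a master holds fine-grained event ranges and hands them to idle clients until the work is exhausted. This is a pure-Python simulation of the pattern under stated assumptions; all class and method names are illustrative, not the actual Yoda API.

```python
from collections import deque

# Illustrative sketch of event-level master-client dispatch in the style of
# the ATLAS Yoda system. The real system runs the master and clients as MPI
# ranks; here the pattern is simulated with direct calls.

class Master:
    def __init__(self, n_events, range_size):
        # Pre-compute fine-grained event ranges [start, end).
        self.ranges = deque(
            (start, min(start + range_size, n_events))
            for start in range(0, n_events, range_size)
        )
        self.completed = []

    def next_range(self, client_id):
        """Hand the next fine-grained work assignment to an idle client."""
        return self.ranges.popleft() if self.ranges else None

    def report_done(self, client_id, event_range):
        # In Yoda, per-range outputs are streamed to an object store here.
        self.completed.append(event_range)

def run_client(master, client_id):
    while True:
        r = master.next_range(client_id)
        if r is None:
            break  # no work left: the client drains and exits
        master.report_done(client_id, r)

master = Master(n_events=100, range_size=8)
for cid in range(4):  # 4 clients pulling work until exhaustion
    run_client(master, cid)
assert sum(end - start for start, end in master.completed) == 100
```

Because clients pull work rather than receiving a fixed share, all cores stay busy until the very last range, which is the utilization property the abstract emphasizes.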
Exploring JavaScript and ROOT technologies to create Web-based ATLAS analysis and monitoring tools
NASA Astrophysics Data System (ADS)
Sánchez Pineda, A.
2015-12-01
We explore the potential of current web applications to create online interfaces that allow the visualization, interaction and real cut-based physics analysis and monitoring of processes through a web browser. The project consists of the initial development of web-based and cloud computing services to allow students and researchers to perform fast and very useful cut-based analysis in a browser, reading and using real data and official Monte-Carlo simulations stored in ATLAS computing facilities. Several tools are considered: ROOT, JavaScript and HTML. Our study case is the current cut-based H → ZZ → llqq analysis of the ATLAS experiment. Preliminary but satisfactory results have been obtained online.
Integration of Panda Workload Management System with supercomputers
NASA Astrophysics Data System (ADS)
De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.
2016-09-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes.
This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
Evolution of user analysis on the grid in ATLAS
NASA Astrophysics Data System (ADS)
Dewhurst, A.; Legger, F.; ATLAS Collaboration
2017-10-01
More than one thousand physicists analyse data collected by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN through 150 computing facilities around the world. Efficient distributed analysis requires optimal resource usage and the interplay of several factors: robust grid and software infrastructures, and system capability to adapt to different workloads. The continuous automatic validation of grid sites and the user support provided by a dedicated team of expert shifters have been proven to provide a solid distributed analysis system for ATLAS users. Typical user workflows on the grid, and their associated metrics, are discussed. Measurements of user job performance and typical requirements are also shown.
Integration of Titan supercomputer at OLCF with ATLAS Production System
NASA Astrophysics Data System (ADS)
Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single-node workloads in parallel on Titan's multi-core worker nodes. It enables running standard ATLAS production jobs on unused resources (backfill) on Titan. The system has already allowed ATLAS to collect millions of core-hours per month on Titan and to execute hundreds of thousands of jobs, while simultaneously improving Titan's utilization efficiency. We will discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy.
The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
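The backfill mode described in the Titan abstract amounts to a sizing decision: the pilot queries how many nodes are idle and for how long, then requests a job that fits entirely inside that gap so it cannot delay the next scheduled leadership-class job. A minimal sketch of that decision, with all thresholds and names assumed for illustration rather than taken from the actual implementation:

```python
# Illustrative backfill sizing in the spirit of the Titan integration.
# Thresholds (min_nodes, walltime cap) are placeholder assumptions.

def size_backfill_job(free_nodes, gap_minutes, min_nodes=15, walltime_minutes=120):
    """Return (nodes, walltime_minutes) for a backfill submission, or None
    when the current scheduling gap is too small to be worth a pilot."""
    if free_nodes < min_nodes or gap_minutes <= 0:
        return None
    # Never request more wall time than the gap allows: the backfill job
    # must finish before the next scheduled large job starts.
    return free_nodes, min(walltime_minutes, gap_minutes)

assert size_backfill_job(300, 45) == (300, 45)    # short gap caps wall time
assert size_backfill_job(300, 600) == (300, 120)  # long gap caps at default
assert size_backfill_job(5, 600) is None          # too few idle nodes
```

The key property is that the request is shaped by the machine's idle state rather than by the workload, which is what lets opportunistic jobs raise utilization without interfering with the facility's own schedule.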
Operating Dedicated Data Centers - Is It Cost-Effective?
NASA Astrophysics Data System (ADS)
Ernst, M.; Hogue, R.; Hollowell, C.; Strecker-Kellog, W.; Wong, A.; Zaytsev, A.
2014-06-01
The advent of cloud computing centres such as Amazon's EC2 and Google's Computing Engine has elicited comparisons with dedicated computing clusters. Discussions on appropriate usage of cloud resources (both academic and commercial) and costs have ensued. This presentation discusses a detailed analysis of the costs of operating and maintaining the RACF (RHIC and ATLAS Computing Facility) compute cluster at Brookhaven National Lab and compares them with the cost of cloud computing resources under various usage scenarios. An extrapolation of likely future cost effectiveness of dedicated computing resources is also presented.
Improving ATLAS grid site reliability with functional tests using HammerCloud
NASA Astrophysics Data System (ADS)
Elmsheuser, Johannes; Legger, Federica; Medrano Llamas, Ramon; Sciacca, Gianfranco; van der Ster, Dan
2012-12-01
With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short lightweight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site performance. Sites that fail or are unable to run the tests are automatically excluded from the PanDA brokerage system, preventing user or production jobs from being sent to problematic sites.
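The automatic exclusion step in the abstract above reduces to a simple rule: aggregate each site's recent functional-test results and keep only sites above a success threshold, treating a site that cannot run the tests at all as failing. A hedged sketch of that rule (the threshold, window, and names are illustrative, not HammerCloud's actual policy):

```python
# Sketch of functional-test-driven site exclusion. The 80% threshold and
# the data layout are assumptions for illustration only.

def broker_eligible_sites(test_results, min_success_rate=0.8):
    """test_results maps site name -> list of recent booleans (True = pass).
    A site with no results is treated as failing, since being unable to run
    the tests is itself grounds for exclusion."""
    eligible = []
    for site, results in test_results.items():
        if results and sum(results) / len(results) >= min_success_rate:
            eligible.append(site)
    return sorted(eligible)

results = {
    "SITE_A": [True, True, True, True],    # 100% -> kept in brokerage
    "SITE_B": [True, False, False, True],  # 50%  -> excluded
    "SITE_C": [],                          # no tests ran -> excluded
}
assert broker_eligible_sites(results) == ["SITE_A"]
```

In production such a rule would also need hysteresis (re-admitting sites once tests pass again), which the abstract implies but this sketch omits.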
Argonne Physics Division - ATLAS
Strategic Plan (2014). Guy Savard, Director of ATLAS. Welcome to ATLAS, the Argonne Tandem Linac Accelerator System. The ATLAS mission statement and strategic plan guide the operation of the facility; the strategic plan defines the facility's main goals and is aligned with the US Nuclear Physics long-range plan.
Computational and mathematical methods in brain atlasing.
Nowinski, Wieslaw L
2017-12-01
Brain atlases have a wide range of use from education to research to clinical applications. Mathematical methods as well as computational methods and tools play a major role in the process of brain atlas building and developing atlas-based applications. Computational methods and tools cover three areas: dedicated editors for brain model creation, brain navigators supporting multiple platforms, and atlas-assisted specific applications. Mathematical methods in atlas building and developing atlas-aided applications deal with problems in image segmentation, geometric body modelling, physical modelling, atlas-to-scan registration, visualisation, interaction and virtual reality. Here I give an overview of computational and mathematical methods in atlas building and developing atlas-assisted applications, and share my contribution to and experience in this field.
Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud
NASA Astrophysics Data System (ADS)
Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde
2014-06-01
The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
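The dynamic-allocation extension to Torque described above boils down to a scaling decision: grow the virtual cluster when jobs queue up, within the OpenStack quota. A minimal sketch of that decision under stated assumptions (one job per core, and all parameter names invented for illustration, not the actual Torque extension):

```python
# Sketch of the dynamic VM-allocation decision for an elastic Torque cluster
# on OpenStack. Policy and names are illustrative assumptions.

def vms_to_launch(queued_jobs, running_vms, cores_per_vm, vm_quota):
    """Number of additional VM instances to boot for the current backlog,
    assuming one single-core job per core and a hard quota on instances."""
    needed = -(-queued_jobs // cores_per_vm)   # ceiling division
    headroom = max(vm_quota - running_vms, 0)  # never exceed the quota
    return min(max(needed, 0), headroom)

# 33 queued jobs on 8-core VMs need 5 instances; quota allows it.
assert vms_to_launch(queued_jobs=33, running_vms=2, cores_per_vm=8, vm_quota=10) == 5
# Near the quota, the launch request is clamped.
assert vms_to_launch(queued_jobs=33, running_vms=9, cores_per_vm=8, vm_quota=10) == 1
```

The corresponding scale-down path (retiring idle VMs and unmounting their file systems) would mirror this logic; the abstract's Puppet-configured base image is what makes the booted instances interchangeable.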
ATLAS Distributed Computing Experience and Performance During the LHC Run-2
NASA Astrophysics Data System (ADS)
Filipčič, A.;
2017-10-01
ATLAS Distributed Computing during LHC Run-1 was challenged by steadily increasing computing, storage and network requirements. In addition, the complexity of processing task workflows and their associated data management requirements led to a new paradigm in the ATLAS computing model for Run-2, accompanied by extensive evolution and redesign of the workflow and data management systems. The new systems were put into production at the end of 2014, and gained robustness and maturity during 2015 data taking. ProdSys2, the new request and task interface; JEDI, the dynamic job execution engine developed as an extension to PanDA; and Rucio, the new data management system, form the core of Run-2 ATLAS distributed computing engine. One of the big changes for Run-2 was the adoption of the Derivation Framework, which moves the chaotic CPU and data intensive part of the user analysis into the centrally organized train production, delivering derived AOD datasets to user groups for final analysis. The effectiveness of the new model was demonstrated through the delivery of analysis datasets to users just one week after data taking, by completing the calibration loop, Tier-0 processing and train production steps promptly. The great flexibility of the new system also makes it possible to execute part of the Tier-0 processing on the grid when Tier-0 resources experience a backlog during high data-taking periods. The introduction of the data lifetime model, where each dataset is assigned a finite lifetime (with extensions possible for frequently accessed data), was made possible by Rucio. Thanks to this the storage crises experienced in Run-1 have not reappeared during Run-2. In addition, the distinction between Tier-1 and Tier-2 disk storage, now largely artificial given the quality of Tier-2 resources and their networking, has been removed through the introduction of dynamic ATLAS clouds that group the storage endpoint nucleus and its close-by execution satellite sites. 
All stable ATLAS sites are now able to store unique or primary copies of the datasets. ATLAS Distributed Computing is further evolving to speed up request processing by introducing network awareness, using machine learning and optimisation of the latencies during the execution of the full chain of tasks. The Event Service, a new workflow and job execution engine, is designed around check-pointing at the level of event processing to use opportunistic resources more efficiently. ATLAS has been extensively exploring possibilities of using computing resources extending beyond conventional grid sites in the WLCG fabric to deliver as many computing cycles as possible and thereby enhance the significance of the Monte-Carlo samples to deliver better physics results. The exploitation of opportunistic resources was at an early stage throughout 2015, at the level of 10% of the total ATLAS computing power, but in the next few years it is expected to deliver much more. In addition, demonstrating the ability to use an opportunistic resource can lead to securing ATLAS allocations on the facility, hence the importance of this work goes beyond merely the initial CPU cycles gained. In this paper, we give an overview and compare the performance, development effort, flexibility and robustness of the various approaches.
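The data lifetime model mentioned above (each dataset gets a finite lifetime, extended for frequently accessed data) can be captured in a few lines. This is a minimal sketch of the policy shape only; the lifetime values, method names, and extension rule are assumptions, not Rucio's actual configuration or API:

```python
from datetime import datetime, timedelta

# Minimal sketch of a dataset-lifetime policy: datasets expire after a fixed
# period, access pushes expiry forward, expired datasets become deletable.
# All numbers and names are illustrative assumptions.

class Dataset:
    def __init__(self, name, created, lifetime_days=90):
        self.name = name
        self.expires = created + timedelta(days=lifetime_days)

    def touch(self, when, extension_days=90):
        """Access extends the lifetime, but never shortens it."""
        self.expires = max(self.expires, when + timedelta(days=extension_days))

    def deletable(self, now):
        return now >= self.expires

t0 = datetime(2016, 1, 1)
ds = Dataset("data15_13TeV.AOD", created=t0)
assert not ds.deletable(t0 + timedelta(days=89))
assert ds.deletable(t0 + timedelta(days=90))
ds.touch(t0 + timedelta(days=89))  # accessed just before expiry
assert not ds.deletable(t0 + timedelta(days=120))
```

The property worth noting is the one the abstract credits for avoiding Run-1-style storage crises: unused data ages out automatically, while popular data stays pinned by its own access pattern.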
ATLAS tile calorimeter cesium calibration control and analysis software
NASA Astrophysics Data System (ADS)
Solovyanov, O.; Solodkov, A.; Starchenko, E.; Karyukhin, A.; Isaev, A.; Shalanda, N.
2008-07-01
An online control system to calibrate and monitor the ATLAS barrel hadronic calorimeter (TileCal) with a movable radioactive source, driven by liquid flow, is described. To read out and control the system, online software has been developed using ATLAS TDAQ components such as DVS (Diagnostic and Verification System) to verify the hardware before running, IS (Information Server) for data and status exchange between networked computers, and DDC (DCS to DAQ Connection) to connect to the PVSS-based slow control systems of the Tile Calorimeter, high voltage and low voltage. A system of scripting facilities, based on the Python language, is used to handle all the calibration and monitoring processes, from the hardware level to final data storage, including various abnormal situations. A Qt-based graphical user interface to display the status of the calibration system during the cesium source scan is described. The software for analysis of the detector response, using online data, is discussed. The performance of the system and first experience from the ATLAS pit are presented.
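The detector-response analysis boils down to comparing each channel's mean source response against a nominal value. A minimal sketch in Python (the language the calibration scripting uses), with channel naming and the tolerance as illustrative assumptions rather than the real TileCal code:

```python
import statistics

def analyze_scan(readings, nominal, tolerance=0.05):
    """Flag channels whose mean response to the Cs source drifts from
    the nominal value by more than `tolerance` (fractional drift).

    Simplified stand-in for the cesium-scan response analysis; the
    channel names and the 5% threshold are illustrative assumptions.
    """
    flagged = {}
    for channel, samples in readings.items():
        response = statistics.mean(samples)
        drift = abs(response - nominal) / nominal
        if drift > tolerance:
            flagged[channel] = round(drift, 3)
    return flagged
```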
The OSG open facility: A sharing ecosystem
Jayatilaka, B.; Levshina, T.; Rynge, M.; ...
2015-12-23
The Open Science Grid (OSG) ties together individual experiments' computing power, connecting their resources to create a large, robust computing grid. This computing infrastructure started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero. In the years since, the OSG has broadened its focus to also address the needs of other US researchers and increased delivery of Distributed High Throughput Computing (DHTC) to users from a wide variety of disciplines via the OSG Open Facility. Presently, the Open Facility delivers about 100 million computing wall hours per year to researchers who are not already associated with the owners of the computing sites. This is primarily accomplished by harvesting and organizing the temporarily unused capacity (i.e. opportunistic cycles) from the sites in the OSG. Using these methods, OSG resource providers and scientists share computing hours with researchers in many other fields to enable their science, striving to make sure that this computing power is used with maximal efficiency. Furthermore, we believe that expanded access to DHTC is an essential tool for scientific innovation, and work continues on expanding this service.
AGIS: Integration of new technologies used in ATLAS Distributed Computing
NASA Astrophysics Data System (ADS)
Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria
2017-10-01
The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (such as the central BDII, GOCDB and MyOSG), AGIS defines the relations between the experiment-specific resources in use and the physical distributed computing capabilities. Having been in production throughout LHC Run-1, AGIS became the central information system for Distributed Computing in ATLAS and continues to evolve to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving to fit newer requirements from the ADC community. In this note, we describe the evolution and recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing: flexible utilization of opportunistic Cloud and HPC resources, ObjectStore services integration for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, the unified storage protocols declaration required for PanDA Pilot site movers, and others. Improvements of the information model and general updates are also shown; in particular, we explain how collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.
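The kind of topology lookup clients make against AGIS can be illustrated with a toy catalogue. The record fields, queue names and storage endpoints below are assumptions for illustration, not the actual AGIS schema:

```python
# Illustrative topology records in the spirit of AGIS: experiment-level
# queues mapped onto physical sites, clouds and storage endpoints.
# All names and fields here are hypothetical examples.
QUEUES = {
    "BNL_PROD":   {"site": "BNL-ATLAS", "cloud": "US",
                   "resource_type": "grid", "storage": "BNL_DATADISK"},
    "ORNL_Titan": {"site": "ORNL", "cloud": "US",
                   "resource_type": "hpc", "storage": "BNL_DATADISK"},
}

def queues_in_cloud(cloud, resource_type=None):
    """Return queue names in a given cloud, optionally filtered by
    resource type (grid/hpc/cloud), mimicking the topology queries
    clients make against a central information catalogue."""
    return sorted(q for q, rec in QUEUES.items()
                  if rec["cloud"] == cloud
                  and (resource_type is None
                       or rec["resource_type"] == resource_type))
```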
AGIS: The ATLAS Grid Information System
NASA Astrophysics Data System (ADS)
Anisenkov, A.; Di Girolamo, A.; Klimentov, A.; Oleynik, D.; Petrosyan, A.; Atlas Collaboration
2014-06-01
ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet the ATLAS requirements of petabyte-scale data operations. In this paper we describe the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.
Juno at the Vertical Integration Facility
2011-08-03
At Space Launch Complex 41, the Juno spacecraft, enclosed in an Atlas payload fairing, was transferred into the Vertical Integration Facility where it was positioned on top of the Atlas rocket stacked inside.
AGIS: The ATLAS Grid Information System
NASA Astrophysics Data System (ADS)
Anisenkov, Alexey; Belov, Sergey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander
2012-12-01
ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet the ATLAS requirements of petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the whole ATLAS Grid, as needed by ATLAS Distributed Computing applications and services.
AGIS: Evolution of Distributed Computing information system for ATLAS
NASA Astrophysics Data System (ADS)
Anisenkov, A.; Di Girolamo, A.; Alandes, M.; Karavakis, E.
2015-12-01
ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. The model has evolved since the first period of LHC data taking (Run-1) in order to cope with the new challenges of the upcoming Run-2. In this paper we describe the evolution and recent developments of the ATLAS Grid Information System (AGIS), developed in order to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.
Oklahoma Center for High Energy Physics (OCHEP)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandi, S; Strauss, M J; Snow, J
2012-02-29
The DOE EPSCoR implementation grant, with support from the State of Oklahoma and from the three universities, Oklahoma State University, the University of Oklahoma and Langston University, resulted in the establishment of the Oklahoma Center for High Energy Physics (OCHEP) in 2004. Currently, OCHEP continues to flourish as a vibrant hub for research in experimental and theoretical particle physics and an educational center in the State of Oklahoma. All goals of the original proposal were successfully accomplished. These include the foundation of a new experimental particle physics group at OSU, the establishment of a Tier 2 computing facility for Large Hadron Collider (LHC) and Tevatron data analysis at OU, and the organization of a vital particle physics research center in Oklahoma based on the resources of the three universities. OSU has hired two tenure-track faculty members with initial support from the grant funds; both positions are now supported through the OSU budget. This new HEP experimental group at OSU has established itself as a full member of the Fermilab D0 Collaboration and the LHC ATLAS experiment and has secured external funds from the DOE and the NSF. These funds currently support 2 graduate students, 1 postdoctoral fellow, and 1 part-time engineer. The grant initiated the creation of a Tier 2 computing facility at OU as part of the Southwest Tier 2 facility, and a permanent Research Scientist was hired at OU to maintain and run the facility. Permanent support for this position has now been provided through the OU university budget. OCHEP represents a successful model of cooperation among several universities, establishing a critical mass of manpower, computing and hardware resources. This has increased Oklahoma's impact in all areas of HEP: theory, experiment, and computation.
The Center personnel are involved in cutting edge research in experimental, theoretical, and computational aspects of High Energy Physics, with research areas ranging from the search for new phenomena at the Fermilab Tevatron and the CERN Large Hadron Collider to theoretical modeling, computer simulation, detector development and testing, and physics analysis. OCHEP faculty members participating on the D0 collaboration at the Fermilab Tevatron and on the ATLAS collaboration at the CERN LHC have made a major impact on the Standard Model (SM) Higgs boson search, top quark studies, B physics studies, and measurements of Quantum Chromodynamics (QCD) phenomena. The OCHEP Grid computing facility consists of a large computer cluster which is playing a major role in data analysis and Monte Carlo production for both the D0 and ATLAS experiments. Theoretical efforts are devoted to new ideas in Higgs boson physics, extra dimensions, neutrino masses and oscillations, Grand Unified Theories, supersymmetric models, dark matter, and nonperturbative quantum field theory. Theory members are making major contributions to the understanding of phenomena being explored at the Tevatron and the LHC. They have proposed new models for Higgs bosons, and have suggested new signals for extra dimensions and for the search for supersymmetric particles. During the seven-year period when OCHEP was partially funded through the DOE EPSCoR implementation grant, OCHEP members published over 500 refereed journal articles and made over 200 invited presentations at major conferences. The Center is also involved in education and outreach activities by offering summer research programs for high school teachers and college students, and organizing summer workshops for high school teachers, sometimes coordinating with the Quarknet programs at OSU and OU. The details of the Center can be found at http://ochep.phy.okstate.edu.
Mixing HTC and HPC Workloads with HTCondor and Slurm
NASA Astrophysics Data System (ADS)
Hollowell, C.; Barnett, J.; Caramarcu, C.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, A.
2017-10-01
Traditionally, the RHIC/ATLAS Computing Facility (RACF) at Brookhaven National Laboratory (BNL) has only maintained High Throughput Computing (HTC) resources for our HEP/NP user community. We’ve been using HTCondor as our batch system for many years, as this software is particularly well suited for managing HTC processor farm resources. Recently, the RACF has also begun to design and administer some High Performance Computing (HPC) systems for a multidisciplinary user community at BNL. In this paper, we’ll discuss our experiences using HTCondor and Slurm in an HPC context, and our facility’s attempts to allow our HTC and HPC processing farms/clusters to make opportunistic use of each other’s computing resources.
The informatics of a C57BL/6J mouse brain atlas.
MacKenzie-Graham, Allan; Jones, Eagle S; Shattuck, David W; Dinov, Ivo D; Bota, Mihail; Toga, Arthur W
2003-01-01
The Mouse Atlas Project (MAP) aims to produce a framework for organizing and analyzing the large volumes of neuroscientific data produced by the proliferation of genetically modified animals. Atlases provide an invaluable aid in understanding the impact of genetic manipulations by providing a standard for comparison. We use a digital atlas as the hub of an informatics network, correlating imaging data, such as structural imaging and histology, with text-based data, such as nomenclature, connections, and references. We generated brain volumes using magnetic resonance microscopy (MRM), classical histology, and immunohistochemistry, and registered them into a common and defined coordinate system. Specially designed viewers were developed in order to visualize multiple datasets simultaneously and to coordinate between textual and image data. Researchers can navigate through the brain interchangeably, in either a text-based or image-based representation that automatically updates information as they move. The atlas also allows the independent entry of other types of data, the facile retrieval of information, and the straightforward display of images. In conjunction with centralized servers, image and text data can be kept current and can decrease the burden on individual researchers' computers. A comprehensive framework that encompasses many forms of information in the context of anatomic imaging holds tremendous promise for producing new insights. The atlas and associated tools can be found at http://www.loni.ucla.edu/MAP.
NASA Astrophysics Data System (ADS)
Panitkin, Sergey; Barreiro Megino, Fernando; Caballero Bejar, Jose; Benjamin, Doug; Di Girolamo, Alessandro; Gable, Ian; Hendrix, Val; Hover, John; Kucharczyk, Katarzyna; Medrano Llamas, Ramon; Love, Peter; Ohman, Henrik; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Walker, Rodney; Zaytsev, Alexander; Atlas Collaboration
2014-06-01
The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss what ATLAS has learned through its collaboration with leading commercial and academic cloud providers.
High-Performance Scalable Information Service for the ATLAS Experiment
NASA Astrophysics Data System (ADS)
Kolos, S.; Boutsioukis, G.; Hauser, R.
2012-12-01
The ATLAS[1] experiment is operated by a highly distributed computing system which constantly produces large volumes of status information, used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on an online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. To handle such non-physics data, the Information Service (IS) facility has been developed within the scope of the ATLAS Trigger and Data Acquisition (TDAQ)[2] project. The IS provides a high-performance, scalable solution for information exchange in a distributed environment. In the course of an ATLAS data taking session the IS handles about a hundred gigabytes of information, constantly updated at intervals varying from a second to a few tens of seconds. The IS provides access to any information item on request as well as distributing notifications to all information subscribers. In the latter case IS subscribers receive information within a few milliseconds after it was updated. The IS can handle arbitrary types of information, including histograms produced by the HLT applications, and provides C++, Java and Python APIs. The Information Service is a unique source of information for the majority of the online monitoring analysis and GUI applications used to control and monitor the ATLAS experiment. The Information Service provides streaming functionality allowing efficient replication of all or part of the managed information. This functionality is used to duplicate a subset of the ATLAS monitoring data to the CERN public network with a latency of a few milliseconds, allowing efficient real-time monitoring of the data taking from outside the protected ATLAS network.
Each information item in the IS has an associated URL which can be used to access that item online via the HTTP protocol. This functionality is used by many online monitoring applications which can run in a web browser, providing real-time monitoring information about the ATLAS experiment across the globe. This paper describes the design and implementation of the IS and presents performance results taken in the ATLAS operational environment.
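The publish/subscribe pattern at the heart of the IS can be sketched in a few lines. This is a toy in-process model of update-and-notify, not the real distributed service behind the C++, Java and Python APIs:

```python
class InformationService:
    """Minimal publish/subscribe sketch of the IS pattern: providers
    update named items and subscribers are notified on each update.
    Illustrative only; the real IS is a distributed TDAQ service."""

    def __init__(self):
        self._items = {}
        self._subscribers = {}

    def subscribe(self, name, callback):
        """Register a callback to be invoked whenever `name` is updated."""
        self._subscribers.setdefault(name, []).append(callback)

    def update(self, name, value):
        """Store the new value and notify all subscribers of this item."""
        self._items[name] = value
        for cb in self._subscribers.get(name, []):
            cb(name, value)  # notification is synchronous in this toy model

    def get(self, name):
        """On-request access to an information item."""
        return self._items[name]
```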
Automating ATLAS Computing Operations using the Site Status Board
NASA Astrophysics Data System (ADS)
Andreeva, J.; Borrego Iglesias, C.; Campana, S.; Di Girolamo, A.; Dzhunov, I.; Espinal Curull, X.; Gayazov, S.; Magradze, E.; Nowotka, M. M.; Rinaldi, L.; Saiz, P.; Schovancova, J.; Stewart, G. A.; Wright, M.
2012-12-01
The automation of operations is essential to reduce manpower costs and improve the reliability of the system. The Site Status Board (SSB) is a framework which allows Virtual Organizations to monitor their computing activities at distributed sites and to evaluate site performance. The ATLAS experiment intensively uses the SSB for the distributed computing shifts, for estimating data processing and data transfer efficiencies at a particular site, and for implementing automatic exclusion of sites from computing activities, in case of potential problems. The ATLAS SSB provides a real-time aggregated monitoring view and keeps the history of the monitoring metrics. Based on this history, usability of a site from the perspective of ATLAS is calculated. The paper will describe how the SSB is integrated in the ATLAS operations and computing infrastructure and will cover implementation details of the ATLAS SSB sensors and alarm system, based on the information in the SSB. It will demonstrate the positive impact of the use of the SSB on the overall performance of ATLAS computing activities and will overview future plans.
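The site usability derived from metric history can be sketched as a time-weighted fraction of "OK" status. The status values and the simple weighting below are illustrative assumptions, not the exact ATLAS SSB algorithm:

```python
def site_usability(history):
    """Compute a site's usability as the time-weighted fraction of
    'OK' status in its metric history.

    `history` is a list of (duration_hours, status) pairs, in the
    spirit of the metric history the SSB keeps. Illustrative only.
    """
    total = sum(duration for duration, _ in history)
    ok = sum(duration for duration, status in history if status == "OK")
    return ok / total if total else 0.0
```

A site exceeding some usability threshold would stay in production; one falling below it would be a candidate for automatic exclusion.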
ATLAS@Home: Harnessing Volunteer Computing for HEP
NASA Astrophysics Data System (ADS)
Adam-Bourdarios, C.; Cameron, D.; Filipčič, A.; Lancon, E.; Wu, W.; ATLAS Collaboration
2015-12-01
A recent common theme in HEP computing is the exploitation of opportunistic resources in order to provide the maximum statistics possible for Monte Carlo simulation. Volunteer computing has been used over the last few years in many other scientific fields and by CERN itself to run simulations of the LHC beams. The ATLAS@Home project was started to allow volunteers to run simulations of collisions in the ATLAS detector. So far many thousands of members of the public have signed up to contribute their spare CPU cycles to ATLAS, and there is potential for volunteer computing to provide a significant fraction of ATLAS computing resources. Here we describe the design of the project, the lessons learned so far and the future plans.
Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation
NASA Astrophysics Data System (ADS)
Anisenkov, A. V.
2018-03-01
In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the Worldwide LHC Computing Grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and providing a high degree of their accessibility (hundreds of petabytes). The paper considers the ATLAS Grid Information System (AGIS), used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in unifying the description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, X. G.; Kim, Y. S.; Choi, K. Y.
2012-07-01
A SBO (station blackout) experiment named SBO-01 was performed at the full-pressure IET (Integral Effect Test) facility ATLAS (Advanced Test Loop for Accident Simulation), which is scaled down from the APR1400 (Advanced Power Reactor 1400 MWe). In this study, the transient of SBO-01 is discussed and subdivided into three phases: the SG fluid loss phase, the RCS fluid loss phase, and the core coolant depletion and core heatup phase. In addition, the typical phenomena in the SBO-01 test - SG dryout, natural circulation, core coolant boiling, the PRZ becoming full, core heat-up - are identified. Furthermore, the SBO-01 test is reproduced by a MARS code calculation with the ATLAS model, which represents the ATLAS test facility. The experimental and calculated transients are then compared and discussed. The comparison reveals malfunctions of equipment: SG leakage through the SG MSSV and a measurement error of the loop flow meter. As the ATLAS model is validated against the experimental results, it can be further employed to investigate other possible SBO scenarios and to study scaling distortions in the ATLAS. (authors)
Workflow Management Systems for Molecular Dynamics on Leadership Computers
NASA Astrophysics Data System (ADS)
Wells, Jack; Panitkin, Sergey; Oleynik, Danila; Jha, Shantenu
Molecular Dynamics (MD) simulations play an important role in a range of disciplines from Material Science to Biophysical systems and account for a large fraction of cycles consumed on computing resources. Increasingly, science problems require the successful execution of "many" MD simulations as opposed to a single MD simulation. There is a need to provide scalable and flexible approaches to the execution of the workload. We present preliminary results on the Titan computer at the Oak Ridge Leadership Computing Facility that demonstrate a general capability to manage workload execution agnostic of a specific MD simulation kernel or execution pattern, and in a manner that integrates disparate grid-based and supercomputing resources. Our results build upon our extensive experience of distributed workload management in the high-energy physics ATLAS project using PanDA (Production and Distributed Analysis System), coupled with recent conceptual advances in our understanding of workload management on heterogeneous resources. We will discuss how we will generalize these initial capabilities towards a more production level service on DOE leadership resources. This research is sponsored by US DOE/ASCR and used resources of the OLCF computing facility.
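The many-task pattern described here, many independent simulations executed kernel-agnostically, can be sketched with a thread pool standing in for the workload manager (function names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def run_ensemble(tasks, worker, max_workers=4):
    """Run many independent MD-style tasks concurrently and collect
    their results in submission order.

    `worker` is any callable taking one task description, so the
    executor is agnostic of the simulation kernel. A toy stand-in
    for the many-task workload pattern a system like PanDA manages.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map() preserves task order while running tasks concurrently
        return list(pool.map(worker, tasks))
```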
Locations and attributes of wind turbines in Colorado, 2009
Carr, Natasha B.; Diffendorfer, Jay E.; Fancher, Tammy S.; Latysh, Natalie E.; Leib, Kenneth J.; Matherne, Anne-Marie; Turner, Christine
2011-01-01
The Colorado wind-turbine data series provides geospatial data for all wind turbines established within the State as of August 2009. Attributes specific to each turbine include: turbine location, manufacturer and model, rotor diameter, hub height, rotor height, potential megawatt output, land ownership, and county. Wind energy facility data for each turbine include: facility name, facility power capacity, number of turbines associated with each facility to date, facility developer, facility ownership, year the facility went online, and development status of wind facility. Turbine locations were derived from August 2009 1-meter true-color aerial photographs produced by the National Agriculture Imagery Program; the photographs have a positional accuracy of about + or - 5 meters. The location of turbines under construction during August 2009 likely will be less accurate than the location of existing turbines. This data series contributes to an Online Interactive Energy Atlas currently (2011) in development by the U.S. Geological Survey. The Energy Atlas will synthesize data on existing and potential energy development in Colorado and New Mexico and will include additional natural resource data layers. This information may be used by decisionmakers to evaluate and compare the potential benefits and tradeoffs associated with different energy development strategies or scenarios. Interactive maps, downloadable data layers, comprehensive metadata, and decision-support tools will be included in the Energy Atlas. The format of the Energy Atlas will facilitate the integration of information about energy with key terrestrial and aquatic resources for evaluating resource values and minimizing risks from energy development.
Locations and attributes of wind turbines in New Mexico, 2009
Carr, Natasha B.; Diffendorfer, Jay E.; Fancher, Tammy S.; Latysh, Natalie E.; Leib, Kenneth J.; Matherne, Anne-Marie; Turner, Christine
2011-01-01
The New Mexico wind-turbine data series provides geospatial data for all wind turbines established within the State as of August 2009. Attributes specific to each turbine include: turbine location, manufacturer and model, rotor diameter, hub height, rotor height, potential megawatt output, land ownership, and county. Wind energy facility data for each turbine include: facility name, facility power capacity, number of turbines associated with each facility to date, facility developer, facility ownership, year the facility went online, and development status of wind facility. Turbine locations were derived from 1-meter August 2009 true-color aerial photographs produced by the National Agriculture Imagery Program; the photographs have a positional accuracy of about + or - 5 meters. The location of turbines under construction during August 2009 likely will be less accurate than the location of existing turbines. This data series contributes to an Online Interactive Energy Atlas currently (2011) in development by the U.S. Geological Survey. The Energy Atlas will synthesize data on existing and potential energy development in Colorado and New Mexico and will include additional natural resource data layers. This information may be used by decisionmakers to evaluate and compare the potential benefits and tradeoffs associated with different energy development strategies or scenarios. Interactive maps, downloadable data layers, comprehensive metadata, and decision-support tools will be included in the Energy Atlas. The format of the Energy Atlas will facilitate the integration of information about energy with key terrestrial and aquatic resources for evaluating resource values and minimizing risks from energy development.
HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.
2017-10-01
PanDA, the Production and Distributed Analysis workload management system, has been developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available for bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even when run on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run on a distributed computing environment powered by PanDA. To run the pipeline, we split input files into chunks which are processed separately on different nodes as independent PALEOMIX inputs, and finally merge the output files; this closely mirrors how ATLAS processes and simulates its data. We dramatically decreased the total walltime through automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid can reduce payload execution time for mammoth DNA samples from weeks to days.
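The split/run/merge pattern described for the PALEOMIX adaptation can be sketched as follows; chunking by a fixed record count is an illustrative assumption:

```python
def split_into_chunks(records, chunk_size):
    """Split an input record list into fixed-size chunks that can be
    submitted as independent jobs, in the split/run/merge pattern
    described for the PALEOMIX adaptation. Illustrative only."""
    return [records[i:i + chunk_size]
            for i in range(0, len(records), chunk_size)]

def merge_outputs(chunk_outputs):
    """Concatenate per-chunk outputs back into a single result,
    preserving chunk order (the merge step)."""
    merged = []
    for out in chunk_outputs:
        merged.extend(out)
    return merged
```

Each chunk becomes a separate job input; because the chunks are independent, a workload manager can (re)submit and broker them freely before the final merge.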
Two-stage atlas subset selection in multi-atlas based image segmentation.
Zhao, Tingting; Ruan, Dan
2015-06-01
Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved at low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with a significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance.
The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
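The cost saving in the two-stage scheme comes from running full-fledged registration only on the augmented subset rather than on every atlas. A minimal sketch of the selection logic, assuming hypothetical cheap_score and refined_score relevance functions as stand-ins for the preliminary and refined metrics (this is an illustration, not the authors' implementation):

```python
def two_stage_select(atlases, cheap_score, refined_score, n_augmented, n_fusion):
    # Stage 1: rank every atlas with the inexpensive preliminary metric and
    # keep an augmented subset large enough that the truly relevant atlases
    # survive with high probability.
    augmented = sorted(atlases, key=cheap_score, reverse=True)[:n_augmented]
    # Stage 2: run the expensive refined metric (full-fledged registration)
    # only on the augmented subset, then keep the final fusion set.
    return sorted(augmented, key=refined_score, reverse=True)[:n_fusion]
```

The abstract's inference model would govern the choice of n_augmented; here it is simply a parameter.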
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thayer, K.J.
The past year has seen several of the Physics Division's new research projects reach major milestones with first successful experiments and results: the atomic physics station in the Basic Energy Sciences Research Center at the Argonne Advanced Photon Source was used in first high-energy, high-brilliance x-ray studies in atomic and molecular physics; the Short Orbit Spectrometer in Hall C at the Thomas Jefferson National Accelerator Facility (TJNAF), which the Argonne medium-energy nuclear physics group was responsible for, was used extensively in the first round of experiments at TJNAF; at ATLAS, several new beams of radioactive isotopes were developed and used in studies of nuclear physics and nuclear astrophysics; the new ECR ion source at ATLAS was completed and first commissioning tests indicate excellent performance characteristics; Quantum Monte Carlo calculations of mass-8 nuclei were performed for the first time with realistic nucleon-nucleon interactions using state-of-the-art computers, including Argonne's massively parallel IBM SP. At the same time other future projects are well under way: preparations for the move of Gammasphere to ATLAS in September 1997 have progressed as planned. These new efforts are embedded in, or flowing from, the vibrant ongoing research program described in some detail in this report: nuclear structure and reactions with heavy ions; measurements of reactions of astrophysical interest; studies of nucleon and sub-nucleon structures using leptonic probes at intermediate and high energies; atomic and molecular structure with high-energy x-rays. The experimental efforts are being complemented with efforts in theory, from QCD to nucleon-meson systems to the structure and reactions of nuclei. Finally, the operation of ATLAS as a national users facility has achieved a new milestone, with 5,800 hours beam on target for experiments during the past fiscal year.
Volunteer Computing Experience with ATLAS@Home
NASA Astrophysics Data System (ADS)
Adam-Bourdarios, C.; Bianchi, R.; Cameron, D.; Filipčič, A.; Isacchini, G.; Lançon, E.; Wu, W.
2017-10-01
ATLAS@Home is a volunteer computing project which allows the public to contribute to computing for the ATLAS experiment through their home or office computers. The project has grown continuously since its creation in mid-2014 and now counts almost 100,000 volunteers. The combined volunteers’ resources make up a sizeable fraction of overall resources for ATLAS simulation. This paper takes stock of the experience gained so far and describes the next steps in the evolution of the project. These improvements include running natively on Linux to ease deployment on, for example, university clusters; using multiple cores inside one task to reduce the memory requirements; and running different types of workload, such as event generation. In addition to technical details, the success of ATLAS@Home as an outreach tool is evaluated.
The OSG Open Facility: an on-ramp for opportunistic scientific computing
NASA Astrophysics Data System (ADS)
Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.
2017-10-01
The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.
The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jayatilaka, B.; Levshina, T.; Sehgal, C.
The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.
EnviroAtlas: Exploring Ecosystem Services and Biodiversity Data for the Nation.
EnviroAtlas is an online collection of interactive tools and spatially explicit data allowing users to explore the many benefits people receive from nature. The purpose of EnviroAtlas is to provide better access to consistently derived ecosystems and socio-economic data to facil...
Common Accounting System for Monitoring the ATLAS Distributed Computing Resources
NASA Astrophysics Data System (ADS)
Karavakis, E.; Andreeva, J.; Campana, S.; Gayazov, S.; Jezequel, S.; Saiz, P.; Sargsyan, L.; Schovancova, J.; Ueda, I.; Atlas Collaboration
2014-06-01
This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources, either generic or ATLAS-specific. This set of tools provides high-quality, scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.
National Transportation Atlas Databases : 2002
DOT National Transportation Integrated Search
2002-01-01
The National Transportation Atlas Databases 2002 (NTAD2002) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...
National Transportation Atlas Databases : 2010
DOT National Transportation Integrated Search
2010-01-01
The National Transportation Atlas Databases 2010 (NTAD2010) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...
National Transportation Atlas Databases : 2006
DOT National Transportation Integrated Search
2006-01-01
The National Transportation Atlas Databases 2006 (NTAD2006) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...
National Transportation Atlas Databases : 2005
DOT National Transportation Integrated Search
2005-01-01
The National Transportation Atlas Databases 2005 (NTAD2005) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...
National Transportation Atlas Databases : 2008
DOT National Transportation Integrated Search
2008-01-01
The National Transportation Atlas Databases 2008 (NTAD2008) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...
National Transportation Atlas Databases : 2003
DOT National Transportation Integrated Search
2003-01-01
The National Transportation Atlas Databases 2003 (NTAD2003) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...
National Transportation Atlas Databases : 2014
DOT National Transportation Integrated Search
2014-01-01
The National Transportation Atlas Databases 2014 (NTAD2014) is a set of nationwide geographic datasets of transportation facilities, transportation networks, associated infrastructure, and other political and administrative entities. These da...
National Transportation Atlas Databases : 2004
DOT National Transportation Integrated Search
2004-01-01
The National Transportation Atlas Databases 2004 (NTAD2004) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...
National Transportation Atlas Databases : 2009
DOT National Transportation Integrated Search
2009-01-01
The National Transportation Atlas Databases 2009 (NTAD2009) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...
National Transportation Atlas Databases : 2007
DOT National Transportation Integrated Search
2007-01-01
The National Transportation Atlas Databases 2007 (NTAD2007) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...
National Transportation Atlas Databases : 2012
DOT National Transportation Integrated Search
2012-01-01
The National Transportation Atlas Databases 2012 (NTAD2012) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...
National Transportation Atlas Databases : 2015
DOT National Transportation Integrated Search
2015-01-01
The National Transportation Atlas Databases 2015 (NTAD2015) is a set of nationwide geographic datasets of transportation facilities, transportation networks, associated infrastructure, and other political and administrative entities. These da...
National Transportation Atlas Databases : 2011
DOT National Transportation Integrated Search
2011-01-01
The National Transportation Atlas Databases 2011 (NTAD2011) is a set of nationwide geographic databases of transportation facilities, transportation networks, and associated infrastructure. These datasets include spatial information for transportatio...
The role of dedicated data computing centers in the age of cloud computing
NASA Astrophysics Data System (ADS)
Caramarcu, Costin; Hollowell, Christopher; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr
2017-10-01
Brookhaven National Laboratory (BNL) anticipates significant growth in scientific programs with large computing and data storage needs in the near future and has recently reorganized support for scientific computing to meet these needs. A key component is the enhanced role of the RHIC-ATLAS Computing Facility (RACF) in support of high-throughput and high-performance computing (HTC and HPC) at BNL. This presentation discusses the evolving role of the RACF at BNL, in light of its growing portfolio of responsibilities and its increasing integration with cloud (academic and for-profit) computing activities. We also discuss BNL’s plan to build a new computing center to support the new responsibilities of the RACF and present a summary of the cost benefit analysis done, including the types of computing activities that benefit most from a local data center vs. cloud computing. This analysis is partly based on an updated cost comparison of Amazon EC2 computing services and the RACF, which was originally conducted in 2012.
National Transportation Atlas Databases : 2013
DOT National Transportation Integrated Search
2013-01-01
The National Transportation Atlas Databases 2013 (NTAD2013) is a set of nationwide geographic datasets of transportation facilities, transportation networks, associated infrastructure, and other political and administrative entities. These datasets i...
The ATLAS multi-user upgrade and potential applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mustapha, B.; Nolen, J. A.; Savard, G.
With the recent integration of the CARIBU-EBIS charge breeder into the ATLAS accelerator system to provide for more pure and efficient charge breeding of radioactive beams, a multi-user upgrade of the ATLAS facility is being proposed to serve multiple users simultaneously. ATLAS was the first superconducting ion linac in the world and is the US DOE low-energy Nuclear Physics National User Facility. The proposed upgrade will take advantage of the continuous-wave nature of ATLAS and the pulsed nature of the EBIS charge breeder in order to simultaneously accelerate two beams with very close mass-to-charge ratios; one stable from the existing ECR ion source and one radioactive from the newly commissioned EBIS charge breeder. In addition to enhancing the nuclear physics program, beam extraction at different points along the linac will open up the opportunity for other potential applications; for instance, material irradiation studies at ~1 MeV/u and isotope production at ~6 MeV/u or at the full ATLAS energy of ~15 MeV/u. The concept and proposed implementation of the ATLAS multi-user upgrade will be presented. Future plans to enhance the flexibility of this upgrade will also be presented.
The ATLAS multi-user upgrade and potential applications
NASA Astrophysics Data System (ADS)
Mustapha, B.; Nolen, J. A.; Savard, G.; Ostroumov, P. N.
2017-12-01
With the recent integration of the CARIBU-EBIS charge breeder into the ATLAS accelerator system to provide for more pure and efficient charge breeding of radioactive beams, a multi-user upgrade of the ATLAS facility is being proposed to serve multiple users simultaneously. ATLAS was the first superconducting ion linac in the world and is the US DOE low-energy Nuclear Physics National User Facility. The proposed upgrade will take advantage of the continuous-wave nature of ATLAS and the pulsed nature of the EBIS charge breeder in order to simultaneously accelerate two beams with very close mass-to-charge ratios; one stable from the existing ECR ion source and one radioactive from the newly commissioned EBIS charge breeder. In addition to enhancing the nuclear physics program, beam extraction at different points along the linac will open up the opportunity for other potential applications; for instance, material irradiation studies at ~1 MeV/u, isotope production and radiobiological studies at ~6 MeV/u and at the full ATLAS energy of ~15 MeV/u. The concept and proposed implementation of the ATLAS multi-user upgrade will be discussed. Future plans to enhance the flexibility of this upgrade will be presented.
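Interleaving two beams through the same CW linac, as described above, hinges on the beams having nearly identical mass-to-charge ratios, since both share RF and magnet settings tuned for a single A/q. A small helper illustrates the constraint; the specific ion pairs and any tolerance are hypothetical examples, not values from the abstract:

```python
def mass_to_charge_mismatch(a1, q1, a2, q2):
    """Relative difference between the A/q ratios of two ion beams.
    Simultaneous acceleration requires this difference to be tiny,
    because both beams ride the machine tune for a single A/q."""
    r1, r2 = a1 / q1, a2 / q2
    return abs(r1 - r2) / r1

# Hypothetical illustration: a stable 36Ar(8+) beam (A/q = 4.50) and a
# radioactive 144Ba(32+) beam (A/q = 4.50) match exactly, while a
# 40Ar(9+) / 143Ba(32+) pairing would be off by about half a percent.
```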
Two-stage atlas subset selection in multi-atlas based image segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu
2015-06-15
Purpose: Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, the proposed scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance.
Conclusions: The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
Commissioning of a CERN Production and Analysis Facility Based on xrootd
NASA Astrophysics Data System (ADS)
Campana, Simone; van der Ster, Daniel C.; Di Girolamo, Alessandro; Peters, Andreas J.; Duellmann, Dirk; Coelho Dos Santos, Miguel; Iven, Jan; Bell, Tim
2011-12-01
The CERN facility hosts the Tier-0 of the four LHC experiments, but as part of WLCG it also offers a platform for production activities and user analysis. The CERN CASTOR storage technology has been extensively tested and utilized for LHC data recording and exporting to external sites according to the experiments' computing models. On the other hand, to accommodate Grid data processing activities and, more importantly, chaotic user analysis, it was realized that additional functionality was needed, including a different throttling mechanism for file access. This paper will describe the xrootd-based CERN production and analysis facility for the ATLAS experiment, and in particular the experiment's use case and data access scenario, the xrootd redirector setup on top of the CASTOR storage system, the commissioning of the system, and real-life experience with data processing and data analysis.
National Transportation Atlas Databases : 1999
DOT National Transportation Integrated Search
1999-01-01
The National Transportation Atlas Databases -- 1999 (NTAD99) is a set of national geographic databases of transportation facilities. These databases include geospatial information for transportation modal networks and intermodal terminals, and re...
National Transportation Atlas Databases : 2001
DOT National Transportation Integrated Search
2001-01-01
The National Transportation Atlas Databases-2001 (NTAD-2001) is a set of national geographic databases of transportation facilities. These databases include geospatial information for transportation modal networks and intermodal terminals and related...
National Transportation Atlas Databases : 1996
DOT National Transportation Integrated Search
1996-01-01
The National Transportation Atlas Databases -- 1996 (NTAD96) is a set of national geographic databases of transportation facilities. These databases include geospatial information for transportation modal networks and intermodal terminals, and re...
National Transportation Atlas Databases : 2000
DOT National Transportation Integrated Search
2000-01-01
The National Transportation Atlas Databases-2000 (NTAD-2000) is a set of national geographic databases of transportation facilities. These databases include geospatial information for transportation modal networks and intermodal terminals and related...
National Transportation Atlas Databases : 1997
DOT National Transportation Integrated Search
1997-01-01
The National Transportation Atlas Databases -- 1997 (NTAD97) is a set of national geographic databases of transportation facilities. These databases include geospatial information for transportation modal networks and intermodal terminals, and re...
ATLAS with CARIBU: A laboratory portrait
Pardo, Richard C.; Savard, Guy; Janssens, Robert V. F.
2016-03-21
The Argonne Tandem Linac Accelerator System (ATLAS) is the world's first superconducting accelerator for projectiles heavier than the electron. This unique system is a U.S. Department of Energy (DOE) national user research facility open to scientists from all over the world. It is located within the Physics Division at Argonne National Laboratory and is one of five large scientific user facilities at the laboratory.
NASA Astrophysics Data System (ADS)
Read, A.; Taga, A.; O-Saada, F.; Pajchel, K.; Samset, B. H.; Cameron, D.
2008-07-01
Computing and storage resources connected by the NorduGrid ARC middleware in the Nordic countries, Switzerland and Slovenia are a part of the ATLAS computing Grid. This infrastructure is being commissioned with the ongoing ATLAS Monte Carlo simulation production in preparation for the commencement of data taking in 2008. The unique non-intrusive architecture of ARC, its straightforward interplay with the ATLAS Production System via the Dulcinea executor, and its performance during the commissioning exercise are described. ARC support for flexible and powerful end-user analysis within the GANGA distributed analysis framework is also shown. Whereas the storage solution for this Grid was earlier based on a large, distributed collection of GridFTP servers, the ATLAS computing design includes a structured SRM-based system with a limited number of storage endpoints. The characteristics, integration and performance of the old and new storage solutions are presented. Although the hardware resources in this Grid are quite modest, it has provided more than double the agreed contribution to ATLAS production with an efficiency above 95% during long periods of stable operation.
TU-AB-BRA-02: An Efficient Atlas-Based Synthetic CT Generation Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, X
2016-06-15
Purpose: A major obstacle for MR-only radiotherapy is the need to generate an accurate synthetic CT (sCT) from MR image(s) of a patient for the purposes of dose calculation and DRR generation. We propose here an accurate and efficient atlas-based sCT generation method, which has a computation speed largely independent of the number of atlases used. Methods: Atlas-based sCT generation requires a set of atlases with co-registered CT and MR images. Unlike existing methods that align each atlas to the new patient independently, we first create an average atlas and pre-align every atlas to the average atlas space. When a new patient arrives, we compute only one deformable image registration to align the patient MR image to the average atlas, which indirectly aligns the patient to all pre-aligned atlases. A patch-based non-local weighted fusion is performed in the average atlas space to generate the sCT for the patient, which is then warped back to the original patient space. We further adapt a PatchMatch algorithm that can quickly find top matches between patches of the patient image and all atlas images, which makes the patch fusion step also independent of the number of atlases used. Results: Nineteen brain tumour patients with both CT and T1-weighted MR images are used as testing data and a leave-one-out validation is performed. Each sCT generated is compared against the original CT image of the same patient on a voxel-by-voxel basis. The proposed method produces a mean absolute error (MAE) of 98.6±26.9 HU overall. The accuracy is comparable with a conventional implementation scheme, but the computation time is reduced from over an hour to four minutes. Conclusion: An average atlas space patch fusion approach can produce highly accurate sCT estimations very efficiently. Further validation on dose computation accuracy and using a larger patient cohort is warranted. The author is a full time employee of Elekta, Inc.
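The core of the patch fusion step above is weighting each atlas's CT value by how well its MR patch matches the patient's. A heavily simplified 2D sketch, assuming numpy arrays already aligned in a common space and restricted to same-location patches (no PatchMatch search or deformable registration, so this is an illustration of the weighting idea only, not the paper's method):

```python
import numpy as np

def fuse_sct(target_mr, atlas_mrs, atlas_cts, radius=1, h=0.05):
    """For each voxel, weight the co-located atlas CT values by the
    similarity of the surrounding MR patches (Gaussian of the mean
    squared difference), then take the weighted average."""
    r = radius
    pad = lambda im: np.pad(np.asarray(im, dtype=float), r, mode="edge")
    tp = pad(target_mr)                  # padded target MR
    mps = [pad(m) for m in atlas_mrs]    # padded atlas MRs
    out = np.zeros(np.shape(target_mr), dtype=float)
    H, W = out.shape
    for i in range(H):
        for j in range(W):
            tpatch = tp[i:i + 2 * r + 1, j:j + 2 * r + 1]
            weights, values = [], []
            for m, ct in zip(mps, atlas_cts):
                d = np.mean((m[i:i + 2 * r + 1, j:j + 2 * r + 1] - tpatch) ** 2)
                weights.append(np.exp(-d / h))  # similar patch -> high weight
                values.append(ct[i, j])
            weights = np.array(weights)
            out[i, j] = np.dot(weights, values) / weights.sum()
    return out
```

The parameter h (bandwidth of the similarity kernel) is an assumption here; the paper's efficiency gain comes from doing this fusion once in the average atlas space.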
Brain transcriptome atlases: a computational perspective.
Mahfouz, Ahmed; Huisman, Sjoerd M H; Lelieveldt, Boudewijn P F; Reinders, Marcel J T
2017-05-01
The immense complexity of the mammalian brain is largely reflected in the underlying molecular signatures of its billions of cells. Brain transcriptome atlases provide valuable insights into gene expression patterns across different brain areas throughout the course of development. Such atlases allow researchers to probe the molecular mechanisms which define neuronal identities, neuroanatomy, and patterns of connectivity. Despite the immense effort put into generating such atlases, to answer fundamental questions in neuroscience, an even greater effort is needed to develop methods to probe the resulting high-dimensional multivariate data. We provide a comprehensive overview of the various computational methods used to analyze brain transcriptome atlases.
National Transportation Atlas Databases : 1998
DOT National Transportation Integrated Search
1998-01-01
The North American Transportation Atlas Data - 1998 (NORTAD) is a set of geographic data sets for transportation facilities in Canada, Mexico, and the United States. These data sets include geospatial information for transportation modal networks...
NASA Astrophysics Data System (ADS)
Belyaev, A.; Berezhnaya, A.; Betev, L.; Buncic, P.; De, K.; Drizhuk, D.; Klimentov, A.; Lazin, Y.; Lyalin, I.; Mashinistov, R.; Novikov, A.; Oleynik, D.; Polyakov, A.; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tkachenko, I.; Yasnopolskiy, L.
2015-12-01
The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by higher LHC energies from April 2015 (LHC Run 2). The need for simulation, data processing and analysis would overwhelm the expected capacity of the grid infrastructure computing facilities deployed by the Worldwide LHC Computing Grid (WLCG). To meet this challenge, the integration of opportunistic resources into the LHC computing model is highly important. The Tier-1 facility at the Kurchatov Institute (NRC-KI) in Moscow is a part of WLCG and will process, simulate and store up to 10% of the total data obtained from the ALICE, ATLAS and LHCb experiments. In addition, the Kurchatov Institute has supercomputers with a peak performance of 0.12 PFLOPS. The delegation of even a fraction of these supercomputing resources to LHC computing will notably increase the total capacity. In 2014, development of a portal combining the Tier-1 and a supercomputer at the Kurchatov Institute was started to provide common interfaces and storage. The portal will be used not only for HENP experiments, but also by other data- and compute-intensive sciences such as biology (genome sequencing analysis) and astrophysics (cosmic-ray analysis, antimatter and dark matter searches, etc.).
Radial Inflow Turboexpander Redesign
DOE Office of Scientific and Technical Information (OSTI.GOV)
William G. Price
2001-09-24
Steamboat Envirosystems, LLC (SELC) was awarded a grant in accordance with the DOE Enhanced Geothermal Systems Project Development. Atlas-Copco Rotoflow (ACR), a radial expansion turbine manufacturer, was responsible for the manufacturing of the turbine and the creation of the new computer program. SB Geo, Inc. (SBG), the facility operator, monitored and assisted ACR's activities as well as provided installation and startup assistance. The primary scope of the project is the redesign of an axial flow turbine to a radial inflow turboexpander to provide increased efficiency and reliability at an existing facility. In addition to the increased efficiency and reliability, the redesign includes an improved reduction gear design, an improved shaft seal design, an upgraded control system, and greater flexibility of application.
GOES-R Atlas V Centaur Lift and Mate
2016-10-31
The United Launch Alliance Atlas V Centaur second stage is lifted up for transfer into the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The Geostationary Operational Environmental Satellite (GOES-R) will launch aboard the Atlas V rocket in November. GOES-R is the first satellite in a series of next-generation NOAA GOES Satellites.
The ATLAS Tier-3 in Geneva and the Trigger Development Facility
NASA Astrophysics Data System (ADS)
Gadomski, S.; Meunier, Y.; Pasche, P.; Baud, J.-P.; ATLAS Collaboration
2011-12-01
The ATLAS Tier-3 farm at the University of Geneva provides storage and processing power for analysis of ATLAS data. In addition the facility is used for development, validation and commissioning of the High Level Trigger of ATLAS [1]. The latter purpose leads to additional requirements on the availability of latest software and data, which will be presented. The farm is also a part of the WLCG [2], and is available to all members of the ATLAS Virtual Organization. The farm currently provides 268 CPU cores and 177 TB of storage space. A grid Storage Element, implemented with the Disk Pool Manager software [3], is available and integrated with the ATLAS Distributed Data Management system [4]. The batch system can be used directly by local users, or with a grid interface provided by NorduGrid ARC middleware [5]. In this article we will present the use cases that we support, as well as the experience with the software and the hardware we are using. Results of I/O benchmarking tests, which were done for our DPM Storage Element and for the NFS servers we are using, will also be presented.
2009-04-27
CAPE CANAVERAL, Fla. –– The Atlas V first stage is being transferred from the hangar at the Atlas Space Operations Facility to the Vertical Integration Facility near Cape Canaveral Air Force Station's Launch Complex 41. The Atlas V/Centaur is the launch vehicle for the Lunar Reconnaissance Orbiter, or LRO. The orbiter will carry seven instruments to provide scientists with detailed maps of the lunar surface and enhance our understanding of the moon's topography, lighting conditions, mineralogical composition and natural resources. Information gleaned from LRO will be used to select safe landing sites, determine locations for future lunar outposts and help mitigate radiation dangers to astronauts. Launch of LRO is targeted no earlier than June 2. Photo credit: NASA/Kim Shiflett
2009-04-27
CAPE CANAVERAL, Fla. –– The Atlas V first stage is moved from the hangar at the Atlas Space Operations Facility. It is going to the Vertical Integration Facility near Cape Canaveral Air Force Station's Launch Complex 41. The Atlas V/Centaur is the launch vehicle for the Lunar Reconnaissance Orbiter, or LRO. The orbiter will carry seven instruments to provide scientists with detailed maps of the lunar surface and enhance our understanding of the moon's topography, lighting conditions, mineralogical composition and natural resources. Information gleaned from LRO will be used to select safe landing sites, determine locations for future lunar outposts and help mitigate radiation dangers to astronauts. Launch of LRO is targeted no earlier than June 2. Photo credit: NASA/Kim Shiflett
FLOOR PLAN Dyess Air Force Base, Atlas F Missile ...
FLOOR PLAN - Dyess Air Force Base, Atlas F Missile Site S-8, Launch Control Center (LCC), Approximately 3 miles east of Winters, 500 feet southwest of Highway 17700, northwest of Launch Facility, Winters, Runnels County, TX
SECTION B-B, FLOOR PLAN Dyess Air Force Base, Atlas ...
SECTION B-B, FLOOR PLAN - Dyess Air Force Base, Atlas F Missile Site S-8, Launch Facility, Approximately 3 miles east of Winters, 500 feet southwest of Highway 1770, center of complex, Winters, Runnels County, TX
Dyess Air Force Base, Atlas F Missile Site S-8, Launch ...
Dyess Air Force Base, Atlas F Missile Site S-8, Launch Control Center (LCC), Approximately 3 miles east of Winters, 500 feet southwest of Highway 17700, northwest of Launch Facility, Winters, Runnels County, TX
Multi-atlas pancreas segmentation: Atlas selection based on vessel structure.
Karasawa, Ken'ichi; Oda, Masahiro; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Chu, Chengwen; Zheng, Guoyan; Rueckert, Daniel; Mori, Kensaku
2017-07-01
Automated organ segmentation from medical images is an indispensable component for clinical applications such as computer-aided diagnosis (CAD) and computer-assisted surgery (CAS). We utilize a multi-atlas segmentation scheme, which has recently been used in different approaches in the literature to achieve more accurate and robust segmentation of anatomical structures in computed tomography (CT) volume data. Among abdominal organs, the pancreas has large inter-patient variability in its position, size and shape. Moreover, the CT intensity of the pancreas closely resembles adjacent tissues, rendering its segmentation a challenging task. Due to this, conventional intensity-based atlas selection for pancreas segmentation often fails to select atlases that are similar in pancreas position and shape to those of the unlabeled target volume. In this paper, we propose a new atlas selection strategy based on vessel structure around the pancreatic tissue and demonstrate its application to a multi-atlas pancreas segmentation. Our method utilizes vessel structure around the pancreas to select atlases with high pancreatic resemblance to the unlabeled volume. Also, we investigate two types of applications of the vessel structure information to the atlas selection. Our segmentations were evaluated on 150 abdominal contrast-enhanced CT volumes. The experimental results showed that our approach can segment the pancreas with an average Jaccard index of 66.3% and an average Dice overlap coefficient of 78.5%.
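The Jaccard index and Dice coefficient reported in this abstract are the standard overlap measures between a predicted segmentation and a reference mask. A minimal sketch of both metrics (the function and array names are illustrative, not from the paper):

```python
import numpy as np

def jaccard_and_dice(pred: np.ndarray, truth: np.ndarray):
    """Overlap between two binary masks of the same shape.

    Jaccard = |A ∩ B| / |A ∪ B|;  Dice = 2|A ∩ B| / (|A| + |B|).
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Convention: two empty masks overlap perfectly.
    jaccard = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return float(jaccard), float(dice)
```

Note that Dice is always at least as large as Jaccard for the same pair of masks, which is consistent with the 66.3% / 78.5% figures quoted above.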
SECTION A-A, AXONOMETRIC Dyess Air Force Base, Atlas F ...
SECTION A-A, AXONOMETRIC - Dyess Air Force Base, Atlas F Missile Site S-8, Launch Control Center (LCC), Approximately 3 miles east of Winters, 500 feet southwest of Highway 17700, northwest of Launch Facility, Winters, Runnels County, TX
GOES-R Atlas V Centaur Lift and Mate
2016-10-31
Operations are underway to stack the United Launch Alliance Atlas V Centaur second stage onto the first stage in the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The Geostationary Operational Environmental Satellite (GOES-R) will launch aboard the Atlas V rocket in November. GOES-R is the first satellite in a series of next-generation NOAA GOES Satellites.
GOES-R Atlas V Centaur Lift and Mate
2016-10-31
A close-up view of the United Launch Alliance Atlas V Centaur second stage as it travels to the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The Geostationary Operational Environmental Satellite (GOES-R) will launch aboard the Atlas V rocket in November. GOES-R is the first satellite in a series of next-generation NOAA GOES Satellites.
GOES-R Atlas V Centaur Lift and Mate
2016-10-31
The United Launch Alliance Atlas V Centaur second stage has been lifted up and transferred into the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The Geostationary Operational Environmental Satellite (GOES-R) will launch aboard the Atlas V rocket in November. GOES-R is the first satellite in a series of next-generation NOAA GOES Satellites.
GOES-R Atlas V Centaur Lift and Mate
2016-10-31
United Launch Alliance team members assist as operations begin to lift the Atlas V Centaur second stage into the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The Geostationary Operational Environmental Satellite (GOES-R) will launch aboard the Atlas V rocket in November. GOES-R is the first satellite in a series of next-generation NOAA GOES Satellites.
GOES-R Atlas V Centaur Lift and Mate
2016-10-31
The United Launch Alliance Atlas V Centaur second stage is lifted up by crane for transfer into the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The Geostationary Operational Environmental Satellite (GOES-R) will launch aboard the Atlas V rocket in November. GOES-R is the first satellite in a series of next-generation NOAA GOES Satellites.
GOES-R Atlas V Centaur Lift and Mate
2016-10-31
The United Launch Alliance Atlas V Centaur second stage has been mated to the first stage in the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The Geostationary Operational Environmental Satellite (GOES-R) will launch aboard the Atlas V rocket in November. GOES-R is the first satellite in a series of next-generation NOAA GOES Satellites.
Expected Performance of the ATLAS Experiment - Detector, Trigger and Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aad, G.; Abat, E.; Abbott, B.
2011-11-28
The Large Hadron Collider (LHC) at CERN promises a major step forward in the understanding of the fundamental nature of matter. The ATLAS experiment is a general-purpose detector for the LHC, whose design was guided by the need to accommodate the wide spectrum of possible physics signatures. The major remit of the ATLAS experiment is the exploration of the TeV mass scale, where groundbreaking discoveries are expected. The focus is on the investigation of electroweak symmetry breaking and, linked to this, the search for the Higgs boson, as well as the search for physics beyond the Standard Model. In this report a detailed examination of the expected performance of the ATLAS detector is provided, with a major aim being to investigate the experimental sensitivity to a wide range of measurements and potential observations of new physical processes. An earlier summary of the expected capabilities of ATLAS was compiled in 1999 [1]. A survey of physics capabilities of the CMS detector was published in [2]. The design of the ATLAS detector has now been finalised, and its construction and installation have been completed [3]. An extensive test-beam programme was undertaken. Furthermore, the simulation and reconstruction software code and frameworks have been completely rewritten. Revisions incorporated reflect improved detector modelling as well as major technical changes to the software technology. Greatly improved understanding of calibration and alignment techniques, and their practical impact on performance, is now in place. The studies reported here are based on full simulations of the ATLAS detector response. A variety of event generators were employed. The simulation and reconstruction of these large event samples thus provided an important operational test of the new ATLAS software system.
In addition, the processing was distributed world-wide over the ATLAS Grid facilities and hence provided an important test of the ATLAS computing system - this is the origin of the expression 'CSC studies' ('computing system commissioning'), which is occasionally referred to in these volumes. The work reported generally assumes that the detector is fully operational, and in this sense represents an idealised detector: establishing the best performance of the ATLAS detector with LHC proton-proton collisions is a challenging task for the future. The results summarised here therefore represent the best estimate of ATLAS capabilities before real operational experience of the full detector with beam. Unless otherwise stated, simulations also do not include the effect of additional interactions in the same or other bunch-crossings, and the effect of neutron background is neglected. Thus simulations correspond to the low-luminosity performance of the ATLAS detector. This report is broadly divided into two parts: firstly the performance for identification of physics objects is examined in detail, followed by an assessment of the performance of the trigger system. This part is subdivided into chapters surveying the capabilities for charged particle tracking, electron/photon, muon and tau identification, jet and missing transverse energy reconstruction, b-tagging algorithms and performance, and finally the trigger system performance. In each chapter of the report, there is a further subdivision into shorter notes describing different aspects studied. The second major subdivision of the report addresses physics measurement capabilities, and new physics search sensitivities.
Individual chapters in this part discuss ATLAS physics capabilities in Standard Model QCD and electroweak processes, in the top quark sector, in b-physics, in searches for Higgs bosons, in supersymmetry searches, and finally in searches for other new particles predicted in more exotic models.
Image database for digital hand atlas
NASA Astrophysics Data System (ADS)
Cao, Fei; Huang, H. K.; Pietka, Ewa; Gilsanz, Vicente; Dey, Partha S.; Gertych, Arkadiusz; Pospiech-Kurkowska, Sywia
2003-05-01
Bone age assessment is a procedure frequently performed in pediatric patients to evaluate growth disorders. A commonly used method is atlas matching by a visual comparison of a hand radiograph with a small reference set from the old Greulich-Pyle atlas. We have developed a new digital hand atlas with a large set of clinically normal hand images of diverse ethnic groups. In this paper, we will present our system design and implementation of the digital atlas database to support computer-aided atlas matching for bone age assessment. The system consists of a hand atlas image database, a computer-aided diagnostic (CAD) software module for image processing and atlas matching, and a Web user interface. Users can use a Web browser to push DICOM images, directly or indirectly from PACS, to the CAD server for a bone age assessment. Quantitative features on the examined image, which reflect the skeletal maturity, are then extracted and compared with patterns from the atlas image database to assess the bone age. The digital atlas method, built on a large image database and current Internet technology, provides an alternative to supplement or replace the traditional one for a quantitative, accurate and cost-effective assessment of bone age.
Digital hand atlas and computer-aided bone age assessment via the Web
NASA Astrophysics Data System (ADS)
Cao, Fei; Huang, H. K.; Pietka, Ewa; Gilsanz, Vicente
1999-07-01
A frequently used method of bone age assessment is atlas matching by a radiological examination of a hand image against a reference set of atlas patterns of normal standards. We are in the process of developing a digital hand atlas with a large standard set of normal hand and wrist images that reflect skeletal maturity, race and sex differences, and current child development. The digital hand atlas will be used for computer-aided bone age assessment via the Web. We have designed and partially implemented a computer-aided diagnostic (CAD) system for Web-based bone age assessment. The system consists of a digital hand atlas, a relational image database and a Web-based user interface. The digital atlas is based on a large standard set of normal hand and wrist images with extracted bone objects and quantitative features. The image database uses content-based indexing to organize the hand images and their attributes and presents them to users in a structured way. The Web-based user interface allows users to interact with the hand image database from browsers. Users can use a Web browser to push a clinical hand image to the CAD server for a bone age assessment. Quantitative features on the examined image, which reflect the skeletal maturity, will be extracted and compared with patterns from the atlas database to assess the bone age. The relevant reference images and the final assessment report will be sent back to the user's browser via the Web. The digital atlas will remove the disadvantages of the currently out-of-date one and allow the bone age assessment to be computerized and done conveniently via the Web. In this paper, we present the system design and Web-based client-server model for computer-assisted bone age assessment and our initial implementation of the digital atlas database.
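The matching step this abstract describes — comparing quantitative features of the examined image against stored atlas patterns — can be sketched as a nearest-neighbour lookup over feature vectors. The function, distance choice, and data layout below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def match_bone_age(features, atlas_features, atlas_ages):
    """Return the bone age of the atlas pattern whose feature vector
    is closest (Euclidean distance) to the examined image's features.

    features       : 1-D array of quantitative features for the query image
    atlas_features : 2-D array, one row per atlas pattern
    atlas_ages     : bone age label for each atlas row
    """
    distances = np.linalg.norm(atlas_features - features, axis=1)
    return atlas_ages[int(np.argmin(distances))]
```

In practice such a system would return several nearby reference images rather than a single age, as the abstract's mention of "relevant reference images" suggests.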
GOES-R Atlas V Solid Rocket Motor (SRM) Lift and Mate
2016-10-27
Inside the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida, the solid rocket motor is mated to the United Launch Alliance Atlas V rocket for its upcoming launch. NOAA's Geostationary Operational Environmental Satellite (GOES-R) will launch aboard the Atlas V rocket this month. GOES-R is the first satellite in a series of next-generation NOAA GOES Satellites.
GOES-R Atlas V Solid Rocket Motor (SRM) Lift and Mate
2016-10-27
Inside the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida, the solid rocket motor is being mated to the United Launch Alliance Atlas V rocket for its upcoming launch. NOAA's Geostationary Operational Environmental Satellite (GOES-R) will launch aboard the Atlas V rocket this month. GOES-R is the first satellite in a series of next-generation NOAA GOES Satellites.
GOES-R Atlas V Solid Rocket Motor (SRM) Lift and Mate
2016-10-27
The solid rocket motor is lifted on its transporter for mating to the United Launch Alliance Atlas V rocket in the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. NOAA's Geostationary Operational Environmental Satellite (GOES-R) will launch aboard the Atlas V rocket this month. GOES-R is the first satellite in a series of next-generation NOAA GOES Satellites.
Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment
NASA Astrophysics Data System (ADS)
Ritsch, E.; Atlas Collaboration
2014-06-01
The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production currently consumes the largest share of the computing resources in use by ATLAS. In this document we describe the plans to overcome the computing resource limitations for large-scale Monte Carlo production in the ATLAS Experiment for Run 2, and beyond. A number of fast detector simulation, digitization and reconstruction techniques are discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.
Evolution of the ATLAS distributed computing system during the LHC long shutdown
NASA Astrophysics Data System (ADS)
Campana, S.; Atlas Collaboration
2014-06-01
The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.
Locations and attributes of wind turbines in Colorado, 2011
Carr, Natasha B.; Diffendorfer, James E.; Fancher, Tammy; Hawkins, Sarah J.; Latysh, Natalie; Leib, Kenneth J.; Matherne, Anne Marie
2013-01-01
This dataset represents an update to U.S. Geological Survey Data Series 597. Locations and attributes of wind turbines in Colorado, 2009 (available at http://pubs.usgs.gov/ds/597/). This updated Colorado wind turbine Data Series provides geospatial data for all 1,204 wind turbines established within the State of Colorado as of September 2011, an increase of 297 wind turbines from 2009. Attributes specific to each turbine include: turbine location, manufacturer and model, rotor diameter, hub height, rotor height, potential megawatt output, land ownership, county, and development status of the wind turbine. Wind energy facility data for each turbine include: facility name, facility power capacity, number of turbines associated with each facility to date, facility developer, facility ownership, and year the facility went online. The locations of turbines are derived from 1-meter true-color aerial photographs produced by the National Agriculture Imagery Program (NAIP); the photographs have a positional accuracy of about ±5 meters. Locations of turbines constructed during or prior to August 2009 are based on August 2009 NAIP imagery and turbine locations constructed after August 2009 were based on September 2011 NAIP imagery. The location of turbines under construction during September 2011 likely will be less accurate than the location of existing turbines. This data series contributes to an Online Interactive Energy Atlas developed by the U.S. Geological Survey (http://my.usgs.gov/eerma/). The Energy Atlas synthesizes data on existing and potential energy development in Colorado and New Mexico and includes additional natural resource data layers. This information may be used by decisionmakers to evaluate and compare the potential benefits and tradeoffs associated with different energy development strategies or scenarios. Interactive maps, downloadable data layers, comprehensive metadata, and decision-support tools also are included in the Energy Atlas. 
The format of the Energy Atlas is designed to facilitate the integration of information about energy with key terrestrial and aquatic resources for evaluating resource values and minimizing risks from energy development.
Locations and attributes of wind turbines in New Mexico, 2011
Carr, Natasha B.; Diffendorfer, James B.; Fancher, Tammy; Hawkins, Sarah J.; Latysh, Natalie; Leib, Kenneth J.; Matherne, Anne Marie
2013-01-01
This dataset represents an update to U.S. Geological Survey Data Series 596. Locations and attributes of wind turbines in New Mexico, 2009 (available at http://pubs.usgs.gov/ds/596/). This updated New Mexico wind turbine Data Series provides geospatial data for all 562 wind turbines established within the State of New Mexico as of June 2011, an increase of 155 wind turbines from 2009. Attributes specific to each turbine include: turbine location, manufacturer and model, rotor diameter, hub height, rotor height, potential megawatt output, land ownership, county, and development status of the wind turbine. Wind energy facility data for each turbine include: facility name, facility power capacity, number of turbines associated with each facility to date, facility developer, facility ownership, and year the facility went online. The locations of turbines are derived from 1-meter true-color aerial photographs produced by the National Agriculture Imagery Program (NAIP); the photographs have a positional accuracy of about ±5 meters. The locations of turbines constructed during or prior to August 2009 are based on August 2009 NAIP imagery, and locations of turbines constructed after August 2009 are based on June 2011 NAIP imagery. The location of turbines under construction during June 2011 likely will be less accurate than the location of existing turbines. This data series contributes to an Online Interactive Energy Atlas developed by the U.S. Geological Survey (http://my.usgs.gov/eerma/). The Energy Atlas synthesizes data on existing and potential energy development in Colorado and New Mexico and includes additional natural resource data layers. This information may be used by decisionmakers to evaluate and compare the potential benefits and tradeoffs associated with different energy development strategies or scenarios. Interactive maps, downloadable data layers, comprehensive metadata, and decision-support tools also are included in the Energy Atlas.
The format of the Energy Atlas is designed to facilitate the integration of information about energy with key terrestrial and aquatic resources for evaluating resource values and minimizing risks from energy development.
Exploiting opportunistic resources for ATLAS with ARC CE and the Event Service
NASA Astrophysics Data System (ADS)
Cameron, D.; Filipčič, A.; Guan, W.; Tsulaia, V.; Walker, R.; Wenaus, T.;
2017-10-01
With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from those that comprise the Grid computing of most experiments, therefore exploiting them requires a change in strategy for the experiment. They may be highly restrictive in what can be run or in connections to the outside world, or tolerate opportunistic usage only on condition that tasks may be terminated without warning. The Advanced Resource Connector Computing Element (ARC CE) with its nonintrusive architecture is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the ATLAS Event Service (AES) primarily to address the issue of jobs that can be terminated at any point when opportunistic computing capacity is needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in a restrictive environment. In addition to the technical details, results from deployment of this solution in the SuperMUC HPC centre in Munich are shown.
Multi-threaded ATLAS simulation on Intel Knights Landing processors
NASA Astrophysics Data System (ADS)
Farrell, Steven; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea; ATLAS Collaboration
2017-10-01
The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases with the first phase online at the end of 2015 and the second phase now online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with details on its multi-threaded design. Then, we will present a performance analysis of the application on KNL devices and compare it to a traditional x86 platform to demonstrate the capabilities of the architecture and evaluate the benefits of utilizing KNL platforms like Cori for ATLAS production.
Discovery through maps: Exploring real-world applications of ecosystem services
Background/Question/Methods U.S. EPA’s EnviroAtlas provides a collection of interactive tools and resources for exploring ecosystem goods and services. The purpose of EnviroAtlas is to provide better access to consistently derived ecosystems and socio-economic data to facil...
SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Ruan, D
2015-06-15
Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and high computation burden from extensive atlas collection, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved comparable segmentation accuracy as the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy.
The benefit in both complexity and performance is expected to be most pronounced with large-scale heterogeneous data.
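The two successive selection steps described in this abstract — a cheap preliminary ranking over the whole collection, then a full-fledged ranking over the surviving subset only — reduce to a simple shortlist-then-refine pattern. A hedged sketch, where `cheap_score` and `full_score` stand in for the preliminary and refined relevance metrics (the actual metrics involve registration and are not shown here):

```python
def two_stage_select(target, atlases, cheap_score, full_score, m, k):
    """Two-stage atlas selection: shortlist with an inexpensive relevance
    metric, then re-rank only the shortlist with the expensive metric.

    m : augmented-subset size (survivors of the preliminary stage), m >= k
    k : desired fusion set size
    Higher scores mean more relevant atlases.
    """
    # Stage 1: rank the whole collection with the cheap metric.
    shortlist = sorted(atlases, key=lambda a: cheap_score(target, a),
                       reverse=True)[:m]
    # Stage 2: full-fledged scoring on the m survivors only,
    # avoiding expensive registrations against the whole collection.
    return sorted(shortlist, key=lambda a: full_score(target, a),
                  reverse=True)[:k]
```

The paper's contribution is the inference model used to choose `m` so that the truly relevant atlases survive stage 1 with high probability; the sketch above takes `m` as a given parameter.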
GOES-R Atlas V Solid Rocket Motor (SRM) Lift and Mate
2016-10-27
The solid rocket motor has been lifted to the vertical position and moved into the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida for mating to the United Launch Alliance Atlas V rocket. NOAA's Geostationary Operational Environmental Satellite (GOES-R) will launch aboard the Atlas V rocket this month. GOES-R is the first satellite in a series of next-generation NOAA GOES Satellites.
GOES-R Atlas V Solid Rocket Motor (SRM) Lift and Mate
2016-10-27
Preparations are underway to lift the solid rocket motor up from its transporter for mating to the United Launch Alliance Atlas V rocket in the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. NOAA's Geostationary Operational Environmental Satellite (GOES-R) will launch aboard the Atlas V rocket this month. GOES-R is the first satellite in a series of next-generation NOAA GOES Satellites.
GOES-R Atlas V Solid Rocket Motor (SRM) Lift and Mate
2016-10-27
The solid rocket motor has been lifted to the vertical position for mating to the United Launch Alliance Atlas V rocket in the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. NOAA's Geostationary Operational Environmental Satellite (GOES-R) will launch aboard the Atlas V rocket this month. GOES-R is the first satellite in a series of next-generation NOAA GOES Satellites.
GOES-R Atlas V Solid Rocket Motor (SRM) Lift and Mate
2016-10-27
Technicians with United Launch Alliance (ULA) assist as the solid rocket motor is mated to the ULA Atlas V rocket in the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. NOAA's Geostationary Operational Environmental Satellite (GOES-R) will launch aboard the Atlas V rocket this month. GOES-R is the first satellite in a series of next-generation NOAA GOES Satellites.
GOES-R Atlas V Solid Rocket Motor (SRM) Lift and Mate
2016-10-27
Technicians with United Launch Alliance (ULA) monitor the progress as the solid rocket motor is mated to the ULA Atlas V rocket in the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. NOAA's Geostationary Operational Environmental Satellite (GOES-R) will launch aboard the Atlas V rocket this month. GOES-R is the first satellite in a series of next-generation NOAA GOES Satellites.
Distributing and storing data efficiently by means of special datasets in the ATLAS collaboration
NASA Astrophysics Data System (ADS)
Köneke, Karsten; ATLAS Collaboration
2011-12-01
With the start of the LHC physics program, the ATLAS experiment started to record vast amounts of data. This data has to be distributed and stored on the world-wide computing grid in a smart way in order to enable an effective and efficient analysis by physicists. This article describes how the ATLAS collaboration chose to create specialized reduced datasets in order to efficiently use computing resources and facilitate physics analyses.
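The reduced datasets described above are built by dropping whole events that fail a selection ("skimming") and dropping unneeded per-event information ("slimming"). The toy sketch below illustrates the idea only; the field names and selection are invented and this is not ATLAS software.

```python
# Toy sketch of "skimming" and "slimming" a dataset (hypothetical fields).

def skim_and_slim(events, selection, keep_fields):
    """Keep only events passing `selection`, and only `keep_fields` per event."""
    reduced = []
    for event in events:
        if selection(event):  # skimming: drop whole events
            reduced.append({k: event[k] for k in keep_fields})  # slimming
    return reduced

events = [
    {"pt": 45.0, "eta": 0.3, "n_jets": 4, "raw_cells": [0.1] * 8},
    {"pt": 12.0, "eta": 1.9, "n_jets": 1, "raw_cells": [0.2] * 8},
]
analysis_set = skim_and_slim(events, lambda e: e["pt"] > 20.0, ["pt", "n_jets"])
```

Because the bulky `raw_cells` payload never reaches the reduced set, many analyses can run over far less data than the full recorded sample.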
Integrating Retraction Modeling Into an Atlas-Based Framework for Brain Shift Prediction
Chen, Ishita; Ong, Rowena E.; Simpson, Amber L.; Sun, Kay; Thompson, Reid C.
2015-01-01
In recent work, an atlas-based statistical model for brain shift prediction, which accounts for uncertainty in the intraoperative environment, has been proposed. Previous work reported in the literature using this technique did not account for local deformation caused by surgical retraction. It is challenging to precisely localize the retractor location prior to surgery and the retractor is often moved in the course of the procedure. This paper proposes a technique that involves computing the retractor-induced brain deformation in the operating room through an active model solve and linearly superposing the solution with the precomputed deformation atlas. As a result, the new method takes advantage of the atlas-based framework’s accounting for uncertainties while also incorporating the effects of retraction with minimal intraoperative computing. This new approach was tested using simulation and phantom experiments. The results showed an improvement in average shift correction from 50% (ranging from 14 to 81%) for gravity atlas alone to 80% using the active solve retraction component (ranging from 73 to 85%). This paper presents a novel yet simple way to integrate retraction into the atlas-based brain shift computation framework. PMID:23864146
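The key step above is a linear superposition: a combination of precomputed atlas deformation fields plus a retraction field solved in the operating room. A minimal NumPy sketch of that arithmetic follows; the field shapes, weights, and values are made up for illustration and carry no clinical meaning.

```python
import numpy as np

def total_shift(atlas_fields, weights, retraction_field):
    """Weighted combination of precomputed atlas deformations, linearly
    superposed with the intraoperatively solved retraction field."""
    combined = sum(w * f for w, f in zip(weights, atlas_fields))
    return combined + retraction_field

grid = (4, 4, 4, 3)  # tiny displacement field: 4x4x4 nodes, 3 components each
atlas_fields = [np.full(grid, 0.5), np.full(grid, 1.0)]  # precomputed offline
retraction = np.full(grid, 0.25)                         # solved in the OR
shift = total_shift(atlas_fields, [0.4, 0.6], retraction)
# every node displacement: 0.4*0.5 + 0.6*1.0 + 0.25 = 1.05
```

Because only the small retraction solve happens intraoperatively, the expensive atlas computation stays offline, which is the efficiency the paper emphasizes.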
ATLAS Large Scale Thin Gap Chambers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soha, Aria
This is a technical scope of work (TSW) between the Fermi National Accelerator Laboratory (Fermilab) and the experimenters of the ATLAS sTGC New Small Wheel collaboration who have committed to participate in beam tests to be carried out during the FY2014 Fermilab Test Beam Facility program.
Atlas V OA-7 LVOS Atlas Booster on Stand
2017-02-22
The first stage of the United Launch Alliance (ULA) Atlas V rocket is lifted by crane to vertical as it is moved into the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The rocket is being prepared for Orbital ATK's seventh commercial resupply mission, CRS-7, to the International Space Station. Orbital ATK's CYGNUS pressurized cargo module is scheduled to launch atop ULA's Atlas V rocket from Pad 41 on March 19, 2017. CYGNUS will deliver thousands of pounds of supplies, equipment and scientific research materials to the space station.
Nuclear Computational Low Energy Initiative (NUCLEI)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, Sanjay K.
This is the final report for the University of Washington for the NUCLEI SciDAC-3 project. The NUCLEI project, as defined by the scope of work, will develop, implement and run codes for large-scale computations of many topics in low-energy nuclear physics. Physics to be studied include the properties of nuclei and nuclear decays, nuclear structure and reactions, and the properties of nuclear matter. The computational techniques to be used include Quantum Monte Carlo, Configuration Interaction, Coupled Cluster, and Density Functional methods. The research program will emphasize areas of high interest to current and possible future DOE nuclear physics facilities, including ATLAS and FRIB (nuclear structure and reactions, and nuclear astrophysics), TJNAF (neutron distributions in nuclei, few body systems, and electroweak processes), NIF (thermonuclear reactions), MAJORANA and FNPB (neutrino-less double-beta decay and physics beyond the Standard Model), and LANSCE (fission studies).
NASA Technical Reports Server (NTRS)
Mogilevsky, M.
1973-01-01
The Category A computer systems at KSC (A1 and A2) which perform scientific and business/administrative operations are described. This data division is responsible for scientific requirements supporting Saturn, Atlas/Centaur, Titan/Centaur, Titan III, and Delta vehicles, and includes real-time functions, the Apollo-Soyuz Test Project (ASTP), and the Space Shuttle. The work is performed chiefly on the GE-635 (A1) system located in the Central Instrumentation Facility (CIF). The A1 system can perform computations and process data in three modes: (1) real-time critical mode; (2) real-time batch mode; and (3) batch mode. The Division's IBM-360/50 (A2) system, also at the CIF, performs business/administrative data processing such as personnel, procurement, reliability, financial management and payroll, real-time inventory management, GSE accounting, preventive maintenance, and integrated launch vehicle modification status.
GOES-R Atlas V Solid Rocket Motor (SRM) Lift and Mate
2016-10-27
The solid rocket motor has been lifted to the vertical position on its transporter for mating to the United Launch Alliance Atlas V rocket in the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. NOAA's Geostationary Operational Environmental Satellite (GOES-R) will launch aboard the Atlas V rocket this month. GOES-R is the first satellite in a series of next-generation NOAA GOES Satellites.
Locations and attributes of utility-scale solar power facilities in Colorado and New Mexico, 2011
Ignizio, Drew A.; Carr, Natasha B.
2012-01-01
The data series consists of polygonal boundaries for utility-scale solar power facilities (both photovoltaic and concentrating solar power) located within Colorado and New Mexico as of December 2011. Attributes captured for each facility include the following: facility name, size/production capacity (in MW), type of solar technology employed, location, state, operational status, year the facility came online, and source identification information. Facility locations and perimeters were derived from 1-meter true-color aerial photographs (2011) produced by the National Agriculture Imagery Program (NAIP); the photographs have a positional accuracy of about ±5 meters (accessed from the NAIP GIS service: http://gis.apfo.usda.gov/arcgis/services). Solar facility perimeters represent the full extent of each solar facility site, unless otherwise noted. When visible, linear features such as fences or road lines were used to delineate the full extent of the solar facility. All related equipment including buildings, power substations, and other associated infrastructure were included within the solar facility. If solar infrastructure was indistinguishable from adjacent infrastructure, or if solar panels were installed on existing building tops, only the solar collecting equipment was digitized. The "Polygon" field indicates whether the "equipment footprint" or the full "site outline" was digitized. The spatial accuracy of features that represent site perimeters or an equipment footprint is estimated at +/- 10 meters. Facilities under construction or not fully visible in the NAIP imagery at the time of digitization (December 2011) are represented by an approximate site outline based on the best available information and documenting materials. The spatial accuracy of these facilities cannot be estimated without more up-to-date imagery – users are advised to consult more recent imagery as it becomes available. 
The "Status" field provides information about the operational status of each facility as of December 2011. This data series contributes to an Online Interactive Energy Atlas currently in development by the U.S. Geological Survey. The Energy Atlas will synthesize data on existing and potential energy development in Colorado and New Mexico and will include additional natural resource data layers. This information may be used by decision makers to evaluate and compare the potential benefits and tradeoffs associated with different energy development strategies or scenarios. Interactive maps, downloadable data layers, metadata, and decision support tools will be included in the Energy Atlas. The format of the Energy Atlas will facilitate the integration of information about energy with key terrestrial and aquatic resources for evaluating resource values and minimizing risks from energy development activities.
NASA Astrophysics Data System (ADS)
Calafiura, Paolo; Leggett, Charles; Seuster, Rolf; Tsulaia, Vakhtang; Van Gemmeren, Peter
2015-12-01
AthenaMP is a multi-process version of the ATLAS reconstruction, simulation and data analysis framework Athena. By leveraging Linux fork and copy-on-write mechanisms, it allows for sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted to optimize the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service) which allows the running of AthenaMP inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example: Shared Event Queue and Shared Distributor of Event Tokens) and the usage of AthenaMP in the diversity of ATLAS event processing workloads on various computing resources: Grid, opportunistic resources and HPC.
2011-07-27
CAPE CANAVERAL, Fla. -- At Space Launch Complex 41, the Atlas rocket stacked inside the Vertical Integration Facility stands ready to receive the Juno spacecraft, enclosed in an Atlas payload fairing. The spacecraft was prepared for launch in the Astrotech Space Operations' payload processing facility in Titusville, Fla. The fairing will protect the spacecraft from the impact of aerodynamic pressure and heating during ascent and will be jettisoned once the spacecraft is outside the Earth's atmosphere. Juno is scheduled to launch Aug. 5 aboard a United Launch Alliance Atlas V rocket from Cape Canaveral Air Force Station in Florida. The solar-powered spacecraft will orbit Jupiter's poles 33 times to find out more about the gas giant's origins, structure, atmosphere and magnetosphere and investigate the existence of a solid planetary core. For more information, visit www.nasa.gov/juno. Photo credit: NASA/Cory Huston
Overview of ATLAS PanDA Workload Management
NASA Astrophysics Data System (ADS)
Maeno, T.; De, K.; Wenaus, T.; Nilsson, P.; Stewart, G. A.; Walker, R.; Stradling, A.; Caballero, J.; Potekhin, M.; Smith, D.; ATLAS Collaboration
2011-12-01
The Production and Distributed Analysis System (PanDA) plays a key role in the ATLAS distributed computing infrastructure. All ATLAS Monte-Carlo simulation and data reprocessing jobs pass through the PanDA system. We will describe how PanDA manages job execution on the grid using dynamic resource estimation and data replication together with intelligent brokerage in order to meet the scaling and automation requirements of ATLAS distributed computing. PanDA is also the primary ATLAS system for processing user and group analysis jobs, bringing further requirements for quick, flexible adaptation to the rapidly evolving analysis use cases of the early data-taking phase, in addition to the high reliability, robustness and usability needed to provide efficient and transparent utilization of the grid for analysis users. We will describe how PanDA meets ATLAS requirements, the evolution of the system in light of operational experience, how the system has performed during the first LHC data-taking phase and plans for the future.
Overview of ATLAS PanDA Workload Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maeno T.; De K.; Wenaus T.
2011-01-01
The Production and Distributed Analysis System (PanDA) plays a key role in the ATLAS distributed computing infrastructure. All ATLAS Monte-Carlo simulation and data reprocessing jobs pass through the PanDA system. We will describe how PanDA manages job execution on the grid using dynamic resource estimation and data replication together with intelligent brokerage in order to meet the scaling and automation requirements of ATLAS distributed computing. PanDA is also the primary ATLAS system for processing user and group analysis jobs, bringing further requirements for quick, flexible adaptation to the rapidly evolving analysis use cases of the early data-taking phase, in addition to the high reliability, robustness and usability needed to provide efficient and transparent utilization of the grid for analysis users. We will describe how PanDA meets ATLAS requirements, the evolution of the system in light of operational experience, how the system has performed during the first LHC data-taking phase and plans for the future.
Main steam line break accident simulation of APR1400 using the model of ATLAS facility
NASA Astrophysics Data System (ADS)
Ekariansyah, A. S.; Deswandri; Sunaryo, Geni R.
2018-02-01
A main steam line break simulation for the APR1400, an advanced PWR design, has been performed using the RELAP5 code. The simulation was conducted on a model of the thermal-hydraulic test facility called ATLAS, which represents a scaled-down facility of the APR1400 design. The main steam line break event is described in an open-access safety report document, whose initial conditions and assumptions for the analysis were utilized in performing the simulation and analysis of the selected parameters. The objective of this work was to conduct a benchmark activity by comparing the simulation results of the CESEC-III code, a conservative-approach code, with the results of RELAP5 as a best-estimate code. Based on the simulation results, a general similarity in the behavior of selected parameters was observed between the two codes. However, the degree of accuracy still needs further research and analysis by comparison with other best-estimate codes. Uncertainties arising from the ATLAS model should be minimized by taking into account much more specific data in developing the APR1400 model.
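One simple way to quantify the code-to-code agreement discussed above is to compare a parameter trace from the two codes on a common time base and report the largest relative deviation. The traces below are invented numbers, not APR1400 or ATLAS data.

```python
def max_relative_deviation(reference, candidate):
    """Largest |candidate - reference| / |reference| over a common time base."""
    return max(abs(c - r) / abs(r) for r, c in zip(reference, candidate))

cesec = [7.0, 6.5, 5.9, 5.2]   # conservative-code trace (made up)
relap5 = [7.0, 6.4, 5.8, 5.3]  # best-estimate-code trace (made up)
deviation = max_relative_deviation(cesec, relap5)
```

A single scalar like this is only a first screen; a real benchmark would also compare event timings and integral quantities.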
Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm
NASA Astrophysics Data System (ADS)
Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Twomey, M. S.; Zaytsev, A.
2014-06-01
With the LHC collider at CERN currently going through the period of Long Shutdown 1 there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. Openstack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for the MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.
ATLAS, an integrated structural analysis and design system. Volume 1: ATLAS user's guide
NASA Technical Reports Server (NTRS)
Dreisbach, R. L. (Editor)
1979-01-01
Some of the many analytical capabilities provided by the ATLAS Version 4.0 System are described in the logical sequence in which model-definition data are prepared and the subsequent computer job is executed. The example data presented and the fundamental technical considerations that are highlighted can be used as guides during the problem-solving process. This guide does not describe the details of the ATLAS capabilities, but introduces the new user to ATLAS at the level from which the complete array of capabilities described in the ATLAS User's Manual can be exploited fully.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farbin, Amir
2015-07-15
This is the final report for the DOE Early Career Research Program grant titled "Model-Independent Dark-Matter Searches at the ATLAS Experiment and Applications of Many-core Computing to High Energy Physics".
Integration of the Chinese HPC Grid in ATLAS Distributed Computing
NASA Astrophysics Data System (ADS)
Filipčič, A.;
2017-10-01
Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to users through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute of High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-1A and ERA. These two centers have been the pilots for ATLAS Monte Carlo simulation in SCEAPI and have been providing CPU power since fall 2015.
A Multiatlas Segmentation Using Graph Cuts with Applications to Liver Segmentation in CT Scans
2014-01-01
An atlas-based segmentation approach is presented that combines low-level operations, an affine probabilistic atlas, and a multiatlas-based segmentation. The proposed combination provides highly accurate segmentation due to registrations and atlas selections based on the regions of interest (ROIs) and coarse segmentations. Our approach shares the following common elements between the probabilistic atlas and multiatlas segmentation: (a) the spatial normalisation and (b) the segmentation method, which is based on minimising a discrete energy function using graph cuts. The method is evaluated for the segmentation of the liver in computed tomography (CT) images. Low-level operations define a ROI around the liver from an abdominal CT. We generate a probabilistic atlas using an affine registration based on geometry moments from manually labelled data. Next, a coarse segmentation of the liver is obtained from the probabilistic atlas with low computational effort. Then, a multiatlas segmentation approach improves the accuracy of the segmentation. Both the atlas selections and the nonrigid registrations of the multiatlas approach use a binary mask defined by coarse segmentation. We experimentally demonstrate that this approach performs better than atlas selections and nonrigid registrations in the entire ROI. The segmentation results are comparable to those obtained by human experts and to other recently published results. PMID:25276219
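The multiatlas stage above fuses several propagated label maps into one segmentation. The paper itself minimises a graph-cut energy; the simplest baseline for that fusion step, shown below purely to illustrate the idea, is per-voxel majority voting over aligned label maps.

```python
from collections import Counter

def majority_vote(label_maps):
    """Fuse aligned label maps: each voxel takes its most frequent label."""
    fused = []
    for votes in zip(*label_maps):
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# three toy "atlases" voting on 5 voxels (1 = liver, 0 = background)
maps = [[1, 1, 0, 0, 1],
        [1, 0, 0, 1, 1],
        [1, 1, 1, 0, 0]]
segmentation = majority_vote(maps)
```

Energy-minimisation fusion improves on this baseline mainly by adding spatial smoothness terms, so neighbouring voxels are no longer decided independently.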
Dill, Vanderson; Klein, Pedro Costa; Franco, Alexandre Rosa; Pinho, Márcio Sarroglia
2018-04-01
Current state-of-the-art methods for whole and subfield hippocampus segmentation use pre-segmented templates, also known as atlases, in the pre-processing stages. Typically, the input image is registered to the template, which provides prior information for the segmentation process. Using a single standard atlas increases the difficulty in dealing with individuals who have a brain anatomy that is morphologically different from the atlas, especially in older brains. To increase the segmentation precision in these cases, without any manual intervention, multiple atlases can be used. However, registration to many templates leads to a high computational cost. Researchers have proposed to use an atlas pre-selection technique based on meta-information followed by the selection of an atlas based on image similarity. Unfortunately, this method also presents a high computational cost due to the image-similarity process. Thus, it is desirable to pre-select a smaller number of atlases as long as this does not impact on the segmentation quality. To pick out an atlas that provides the best registration, we evaluate the use of three meta-information parameters (medical condition, age range, and gender) to choose the atlas. In this work, 24 atlases were defined and each is based on the combination of the three meta-information parameters. These atlases were used to segment 352 volumes from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Hippocampus segmentation with each of these atlases was evaluated and compared to reference segmentations of the hippocampus, which are available from ADNI. The use of atlas selection by meta-information led to a significant gain in the Dice similarity coefficient, which reached 0.68 ± 0.11, compared to 0.62 ± 0.12 when using only the standard MNI152 atlas. Statistical analysis showed that the three meta-information parameters provided a significant improvement in the segmentation accuracy.
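The pre-selection step above reduces to a lookup keyed on the three meta-information parameters, falling back to a standard template when no matching atlas exists. The atlas table, file names, and field encoding below are invented placeholders, not the ADNI study's actual data.

```python
# Hypothetical subset of a 24-entry atlas table keyed on
# (medical condition, age range, gender).
ATLASES = {
    ("alzheimers", "70-79", "F"): "atlas_ad_70s_f.nii",
    ("control", "70-79", "F"): "atlas_cn_70s_f.nii",
    ("control", "60-69", "M"): "atlas_cn_60s_m.nii",
}

def select_atlas(condition, age, gender, fallback="MNI152.nii"):
    """Choose an atlas by meta-information; fall back to a standard template."""
    decade = f"{(age // 10) * 10}-{(age // 10) * 10 + 9}"
    return ATLASES.get((condition, decade, gender), fallback)

chosen = select_atlas("control", 74, "F")
```

Because the lookup costs nothing, this avoids the expensive image-similarity comparison against every candidate atlas that the paper identifies as the bottleneck.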
Mean PB To Failure - Initial results from a long-term study of disk storage patterns at the RACF
NASA Astrophysics Data System (ADS)
Caramarcu, C.; Hollowell, C.; Rao, T.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, S. A.
2015-12-01
The RACF (RHIC-ATLAS Computing Facility) has operated a large, multi-purpose dedicated computing facility since the mid-1990s, serving a worldwide, geographically diverse scientific community that is a major contributor to various HEPN projects. A central component of the RACF is the Linux-based worker node cluster that is used for both computing and data storage purposes. It currently has nearly 50,000 computing cores and over 23 PB of storage capacity distributed over 12,000+ (non-SSD) disk drives. The majority of the 12,000+ disk drives provide a cost-effective solution for dCache/XRootD-managed storage, and a key concern is the reliability of this solution over the lifetime of the hardware, particularly as the number of disk drives and the storage capacity of individual drives grow. We report initial results of a long-term study to measure lifetime PB read/written to disk drives in the worker node cluster. We discuss the historical disk drive mortality rate, disk drive manufacturers' published MPTF (Mean PB to Failure) data, and how they are correlated with our results. The results help the RACF understand the productivity and reliability of its storage solutions and have implications for other highly available storage systems (NFS, GPFS, CVMFS, etc.) with large I/O requirements.
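The "Mean PB to Failure" bookkeeping described above amounts to accumulating lifetime bytes read and written per drive and averaging the totals of the drives that failed. The fleet records below are invented for illustration.

```python
PB = 10 ** 15  # petabyte, decimal convention

def mean_pb_to_failure(drives):
    """Average lifetime PB (read + written) over the failed drives only."""
    failed = [d for d in drives if d["failed"]]
    total = sum(d["bytes_read"] + d["bytes_written"] for d in failed)
    return total / len(failed) / PB

fleet = [
    {"bytes_read": 3 * PB, "bytes_written": 1 * PB, "failed": True},
    {"bytes_read": 2 * PB, "bytes_written": 2 * PB, "failed": True},
    {"bytes_read": 9 * PB, "bytes_written": 4 * PB, "failed": False},
]
mptf = mean_pb_to_failure(fleet)  # (4 + 4) / 2 = 4.0 PB
```

Surviving drives are deliberately excluded here, which biases the estimate low; a careful study would treat them as censored observations.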
2011-07-27
CAPE CANAVERAL, Fla. -- At Space Launch Complex 41, the Juno spacecraft, enclosed in an Atlas payload fairing, nears the top of the Vertical Integration Facility where it will be positioned on top of the Atlas rocket already stacked inside. The spacecraft was prepared for launch in the Astrotech Space Operations' payload processing facility in Titusville, Fla. The fairing will protect the spacecraft from the impact of aerodynamic pressure and heating during ascent and will be jettisoned once the spacecraft is outside the Earth's atmosphere. Juno is scheduled to launch Aug. 5 aboard a United Launch Alliance Atlas V rocket from Cape Canaveral Air Force Station in Florida. The solar-powered spacecraft will orbit Jupiter's poles 33 times to find out more about the gas giant's origins, structure, atmosphere and magnetosphere and investigate the existence of a solid planetary core. For more information, visit www.nasa.gov/juno. Photo credit: NASA/Cory Huston
Multi-atlas learner fusion: An efficient segmentation approach for large-scale data.
Asman, Andrew J; Huo, Yuankai; Plassard, Andrew J; Landman, Bennett A
2015-12-01
We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected, learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min - a 270× speedup - by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that under the MLF framework the large-scale data model significantly improves the segmentation over the small-scale model, and (5) indicate that the MLF framework has comparable performance to state-of-the-art multi-atlas segmentation algorithms without using non-local information.
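The "project into a low-dimensional space, then select locally appropriate examples" step above can be sketched with a plain PCA (via SVD) and a nearest-neighbour lookup. The training "images" here are random stand-ins, and this is a sketch of the selection idea only, not the MLF implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(20, 50))    # 20 training "images", 50 voxels each

mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:3]                  # keep a 3-dimensional representation
train_low = (train - mean) @ components.T

def project(image):
    """Map one image into the shared low-dimensional space."""
    return (image - mean) @ components.T

def nearest_training_index(image):
    """Pick the most locally appropriate training example for a new target."""
    distances = np.linalg.norm(train_low - project(image), axis=1)
    return int(np.argmin(distances))

idx = nearest_training_index(train[7])  # a training image selects itself
```

Selecting in the low-dimensional space is what lets the framework skip deformable registrations entirely, which is the source of the reported speedup.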
Consolidation of cloud computing in ATLAS
NASA Astrophysics Data System (ADS)
Taylor, Ryan P.; Domingues Cordeiro, Cristovao Jose; Giordano, Domenico; Hover, John; Kouba, Tomas; Love, Peter; McNab, Andrew; Schovancova, Jaroslava; Sobie, Randall; ATLAS Collaboration
2017-10-01
Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.
NASA Astrophysics Data System (ADS)
Sánchez-Martínez, V.; Borges, G.; Borrego, C.; del Peso, J.; Delfino, M.; Gomes, J.; González de la Hoz, S.; Pacheco Pages, A.; Salt, J.; Sedov, A.; Villaplana, M.; Wolters, H.
2014-06-01
In this contribution we describe the performance of the Iberian (Spain and Portugal) ATLAS cloud during the first LHC running period (March 2010-January 2013) in the context of the GRID Computing and Data Distribution Model. The evolution of the resources for CPU, disk and tape in the Iberian Tier-1 and Tier-2s is summarized. The data distribution over all ATLAS destinations is shown, focusing on the number of files transferred and the size of the data. The status and distribution of simulation and analysis jobs within the cloud are discussed. The Distributed Analysis tools used to perform physics analysis are explained as well. Cloud performance in terms of the availability and reliability of its sites is discussed. The effect of the changes in the ATLAS Computing Model on the cloud is analyzed. Finally, the readiness of the Iberian Cloud towards the first Long Shutdown (LS1) is evaluated and an outline of the foreseen actions to take in the coming years is given. The shutdown will be a good opportunity to improve and evolve the ATLAS Distributed Computing system to prepare for the future challenges of the LHC operation.
ATLAS and LHC computing on CRAY
NASA Astrophysics Data System (ADS)
Sciacca, F. G.; Haug, S.; ATLAS Collaboration
2017-10-01
Access and exploitation of large-scale computing resources, such as those offered by general-purpose HPC centres, is one important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG T2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large Cray systems at the Swiss National Supercomputing Centre CSCS. These systems not only offer very efficient hardware, cooling and highly competent operators, but also have large backfill potential due to their size and multidisciplinary usage, and potential gains due to economies of scale. Technical solutions, performance, expected return and future plans are discussed.
ATLAS Distributed Computing Monitoring tools during the LHC Run I
NASA Astrophysics Data System (ADS)
Schovancová, J.; Campana, S.; Di Girolamo, A.; Jézéquel, S.; Ueda, I.; Wenaus, T.; Atlas Collaboration
2014-06-01
This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for the real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During LHC Run I a significant development effort was invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements, and re-usability of the visualization bits across the different tools. A rich family of filtering and searching options enhancing the available user interfaces comes naturally with the separation of the data and visualization layers. With a variety of reliable monitoring data accessible through standardized interfaces, the possibility of automating actions under well-defined conditions correlating multiple data sources has become feasible. In this contribution we also discuss the automated exclusion of degraded resources and their automated recovery in various activities.
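The data-layer/visualization-layer split described above can be illustrated in a few lines: the data layer exposes records in one predefined format (JSON here), and any visualization consumes that format without knowing the data source. The site names, fields, and threshold rule below are invented.

```python
import json

def data_layer(records):
    """Expose monitoring records in a fixed, predefined JSON schema."""
    return json.dumps([{"site": r[0], "jobs_failed": r[1]} for r in records])

def visualization_layer(payload, threshold):
    """One reusable 'visualization bit': sites exceeding a failure threshold."""
    rows = json.loads(payload)
    return [row["site"] for row in rows if row["jobs_failed"] > threshold]

payload = data_layer([("SITE_A", 3), ("SITE_B", 42)])
degraded = visualization_layer(payload, 10)
```

Because the schema is fixed, the same payload can feed dashboards, alerting, or the automated exclusion of degraded resources without any change to the data layer.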
Nowinski, Wieslaw L; Thirunavuukarasuu, Arumugam; Ananthasubramaniam, Anand; Chua, Beng Choon; Qian, Guoyu; Nowinska, Natalia G; Marchenko, Yevgen; Volkau, Ihar
2009-10-01
Preparation of tests and assessment of students by the instructor are time-consuming. We address these two tasks in neuroanatomy education by employing a digital media application with a three-dimensional (3D), interactive, fully segmented, and labeled brain atlas. The anatomical and vascular models in the atlas are linked to Terminologia Anatomica. Because the cerebral models are fully segmented and labeled, our approach enables automatic and random atlas-derived generation of questions to test location and naming of cerebral structures. This is done in four steps: test individualization by the instructor, test taking by the students at their convenience, automatic student assessment by the application, and communication of the individual assessment to the instructor. A computer-based application with an interactive 3D atlas and a preliminary mobile-based application were developed to realize this approach. The application works in two test modes: instructor and student. In the instructor mode, the instructor customizes the test by setting the scope of testing and student performance criteria, which takes a few seconds. In the student mode, the student is tested and automatically assessed. Self-testing is also feasible at any time and pace. Our approach is automatic with respect to both test generation and student assessment. It is also objective, rapid, and customizable. We believe that this approach is novel from computer-based, mobile-based, and atlas-assisted standpoints.
ATLAS, an integrated structural analysis and design system. Volume 4: Random access file catalog
NASA Technical Reports Server (NTRS)
Gray, F. P., Jr. (Editor)
1979-01-01
A complete catalog is presented for the random access files used by the ATLAS integrated structural analysis and design system. ATLAS consists of several technical computation modules which output data matrices to corresponding random access files. A description of the matrices written on these files is contained herein.
InSight Atlas V Fairing Rotate to Vertical
2018-02-07
In the Astrotech facility at Vandenberg Air Force Base in California, the payload fairing for the United Launch Alliance (ULA) Atlas V for NASA's upcoming Interior Exploration using Seismic Investigations, Geodesy and Heat Transport, or InSight, mission to land on Mars is lifted to the vertical position. InSight is the first mission to explore the Red Planet's deep interior. It will investigate processes that shaped the rocky planets of the inner solar system including Earth. Liftoff atop a ULA Atlas V rocket is scheduled for May 5, 2018.
NASA Astrophysics Data System (ADS)
Dewhurst, A.; Legger, F.
2015-12-01
The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are running daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We report on the impact such changes have on the DA infrastructure, describe the new DA components, and include recent performance measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, X; Gao, H; Sharp, G
2015-06-15
Purpose: The delineation of targets and organs-at-risk is a critical step during image-guided radiation therapy, for which manual contouring is the gold standard. However, it is often time-consuming and may suffer from intra- and inter-rater variability. The purpose of this work is to investigate automated atlas-based segmentation. Methods: The automatic segmentation here is based on mutual information (MI), with the atlas from the Public Domain Database for Computational Anatomy (PDDCA) with manually drawn contours. Using the Dice coefficient (DC) as the quantitative measure of segmentation accuracy, we perform leave-one-out cross-validations for all PDDCA images sequentially, during which the other images are registered to each chosen image and DC is computed between the registered contour and ground truth. Six strategies, including MI, are evaluated as image similarity measures, with MI performing best. Then, given a target image to be segmented and an atlas, automatic segmentation consists of: (a) an affine registration step for image positioning; (b) the active demons registration method to register the atlas to the target image; (c) the computation of MI values between the deformed atlas and the target image; (d) the weighted image fusion of the three deformed atlas images with the highest MI values to form the segmented contour. Results: MI was found to be the best among the six studied strategies in the sense that it had the highest positive correlation between the similarity measure (e.g., MI values) and DC. For automated segmentation, the weighted image fusion of the three deformed atlas images with the highest MI values provided the highest DC among the four proposed strategies. Conclusion: MI has the highest correlation with DC, and is therefore an appropriate choice for post-registration atlas selection in atlas-based segmentation.
Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
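The post-registration steps (c) and (d) described in the abstract above can be sketched as follows. This is a minimal illustration with hypothetical function names (`mutual_information`, `fuse_top_atlases`), assuming intensity images and binary label maps as NumPy arrays; it is not the authors' actual implementation.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two images, estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b, shape (1, bins)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def fuse_top_atlases(target, atlases, labels, k=3):
    """(c) score each deformed atlas by MI with the target, then
    (d) fuse the label maps of the k best atlases with MI-proportional weights."""
    scores = np.array([mutual_information(target, a) for a in atlases])
    top = np.argsort(scores)[::-1][:k]
    w = scores[top] / scores[top].sum()
    vote = sum(wi * (labels[i] > 0) for wi, i in zip(w, top))
    return (vote >= 0.5).astype(np.uint8), top
```

The weighted majority vote stands in for whatever fusion rule the authors used; the MI estimator itself is the standard histogram-based one.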
Zhuang, Xiahai; Bai, Wenjia; Song, Jingjing; Zhan, Songhua; Qian, Xiaohua; Shi, Wenzhe; Lian, Yanyun; Rueckert, Daniel
2015-07-01
Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making the automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance are limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and sizes of the atlas database on the segmentation performance. Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases were selected for label fusion, according to the authors' proposed atlas ranking criterion which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276, p < 0.01), locally weighted voting (0.905 ± 0.0247, p < 0.01), and probabilistic patch-based fusion (0.909 ± 0.0249, p < 0.01). 
In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve with higher WHS Dice scores compared to the conventional schemes (p < 0.03). In the atlas database study, the authors showed that the MAS using larger atlas databases generated better performance curves than the MAS using smaller ones, indicating larger atlas databases could produce more accurate segmentation. The authors have developed a new MAS framework for automatic WHS of CTA and investigated alternative implementations of MAS. With the proposed atlas ranking algorithm and joint label fusion, the MAS scheme is able to generate accurate segmentation within practically acceptable computation time. This method can be useful for the development of new clinical applications of cardiac CT.
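The atlas ranking criterion described above, the conditional entropy of the target image given the propagated atlas labeling, can be sketched as follows. This is a minimal NumPy illustration with hypothetical function names, assuming an intensity image and integer label maps; it is not the authors' code.

```python
import numpy as np

def conditional_entropy(target, atlas_labels, bins=32):
    """H(I | L) = H(I, L) - H(L), estimated from a joint histogram of the
    target intensities I and the propagated atlas labels L.
    Lower values indicate an atlas whose labeling better predicts the target."""
    n_labels = int(atlas_labels.max()) + 1
    joint, _, _ = np.histogram2d(target.ravel(),
                                 atlas_labels.ravel().astype(float),
                                 bins=[bins, n_labels])
    pxy = joint / joint.sum()
    pl = pxy.sum(axis=0)                                  # marginal over labels
    h_joint = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    h_labels = -np.sum(pl[pl > 0] * np.log(pl[pl > 0]))
    return float(h_joint - h_labels)

def rank_atlases(target, propagated_labelings):
    """Return atlas indices ordered best-first (lowest conditional entropy)."""
    scores = [conditional_entropy(target, lab) for lab in propagated_labelings]
    return sorted(range(len(scores)), key=scores.__getitem__)
```

The selected subset would then be passed on to a label fusion step such as the joint label fusion used in the paper.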
Energy Frontier Research With ATLAS: Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, John; Black, Kevin; Ahlen, Steve
2016-06-14
The Boston University (BU) group is playing key roles across the ATLAS experiment: in detector operations, the online trigger, the upgrade, computing, and physics analysis. Our team has been critical to the maintenance and operations of the muon system since its installation. During Run 1 we led the muon trigger group, and that responsibility continues into Run 2. BU maintains and operates the ATLAS Northeast Tier 2 computing center. We are actively engaged in the analysis of ATLAS data from Run 1 and Run 2. Physics analyses we have contributed to include Standard Model measurements (W and Z cross sections, t\bar{t} differential cross sections, WWW^* production), evidence for the Higgs boson decaying to \tau^+\tau^-, and searches for new phenomena (technicolor, Z' and W', vector-like quarks, dark matter).
Integration of PanDA workload management system with Titan supercomputer at OLCF
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.
2015-12-01
The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA the new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC, under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher, by accepting the manuscript for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
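The light-weight wrapper idea above, running many single-threaded payloads in parallel across a node allocation, can be illustrated by the rank-to-payload mapping below. This is a hypothetical sketch, not the actual pilot code: a real wrapper would obtain `rank` and `size` from MPI (e.g. `mpi4py`'s `COMM_WORLD.Get_rank()` / `Get_size()`) and launch each payload as a subprocess.

```python
def assign_payloads(rank, size, payloads):
    """Round-robin slice of the payload list for one MPI rank:
    rank r of `size` ranks runs payloads r, r+size, r+2*size, ...
    so a single multi-rank batch job covers the whole list exactly once."""
    return payloads[rank::size]
```

For example, with 4 ranks and 10 payload jobs, rank 1 would run jobs 1, 5 and 9, while the union over all ranks covers every job exactly once.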
Atlas2 Cloud: a framework for personal genome analysis in the cloud
Evani, Uday S; Challis, Danny; Yu, Jin; Jackson, Andrew R; Paithankar, Sameer; Bainbridge, Matthew N; Jakkamsetti, Adinarayana; Pham, Peter; Coarfa, Cristian; Milosavljevic, Aleksandar; Yu, Fuli
2012-01-01
Background Until recently, sequencing has primarily been carried out in large genome centers which have invested heavily in developing the computational infrastructure that enables genomic sequence analysis. The recent advancements in next generation sequencing (NGS) have led to a wide dissemination of sequencing technologies and data, to highly diverse research groups. It is expected that clinical sequencing will become part of diagnostic routines shortly. However, limited accessibility to computational infrastructure and high quality bioinformatic tools, and the demand for personnel skilled in data analysis and interpretation remains a serious bottleneck. To this end, the cloud computing and Software-as-a-Service (SaaS) technologies can help address these issues. Results We successfully enabled the Atlas2 Cloud pipeline for personal genome analysis on two different cloud service platforms: a community cloud via the Genboree Workbench, and a commercial cloud via the Amazon Web Services using Software-as-a-Service model. We report a case study of personal genome analysis using our Atlas2 Genboree pipeline. We also outline a detailed cost structure for running Atlas2 Amazon on whole exome capture data, providing cost projections in terms of storage, compute and I/O when running Atlas2 Amazon on a large data set. Conclusions We find that providing a web interface and an optimized pipeline clearly facilitates usage of cloud computing for personal genome analysis, but for it to be routinely used for large scale projects there needs to be a paradigm shift in the way we develop tools, in standard operating procedures, and in funding mechanisms. PMID:23134663
Production experience with the ATLAS Event Service
NASA Astrophysics Data System (ADS)
Benjamin, D.; Calafiura, P.; Childers, T.; De, K.; Guan, W.; Maeno, T.; Nilsson, P.; Tsulaia, V.; Van Gemmeren, P.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The ATLAS Event Service (AES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real time delivery of fine grained workloads to running payload applications which process dispatched events or event ranges and immediately stream the outputs to highly scalable Object Stores. Thanks to its agile and flexible architecture the AES is currently being used by grid sites for assigning low priority workloads to otherwise idle computing resources; similarly harvesting HPC resources in an efficient back-fill mode; and massively scaling out to the 50-100k concurrent core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC) and the Google Compute Engine, and a growing number of HPC platforms. After briefly reviewing the concept and the architecture of the Event Service, we will report the status and experience gained in AES commissioning and production operations on supercomputers, and our plans for extending ES application beyond Geant4 simulation to other workflows, such as reconstruction and data analysis.
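The fine-grained dispatch described above hinges on splitting a job into event ranges that can be handed out to running payloads one at a time. A minimal sketch of such a splitter (hypothetical name and signature, not the AES API):

```python
def event_ranges(total_events, range_size):
    """Split a job of `total_events` events into contiguous, inclusive
    (first_event, last_event) ranges of at most `range_size` events each,
    suitable for dispatching to workers one range at a time."""
    return [(start, min(start + range_size, total_events) - 1)
            for start in range(0, total_events, range_size)]
```

Each range can then be processed independently and its output streamed to an object store, which is what makes short-lived, preemptible resources usable: losing a worker costs at most one in-flight range rather than a whole job.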
High-Throughput Computing on High-Performance Platforms: A Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oleynik, D; Panitkin, S; Matteo, Turilli
The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan -- a DOE leadership computing facility -- in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons for how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.
Contributions to the NUCLEI SciDAC-3 Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogner, Scott; Nazarewicz, Witek
This is the Final Report for Michigan State University for the NUCLEI SciDAC-3 project. The NUCLEI project, as defined by its scope of work, has developed, implemented and run codes for large-scale computations of many topics in low-energy nuclear physics. Physics studied included the properties of nuclei and nuclear decays, nuclear structure and reactions, and the properties of nuclear matter. The computational techniques used included Configuration Interaction, Coupled Cluster, and Density Functional methods. The research program emphasized areas of high interest to current and possible future DOE nuclear physics facilities, including ATLAS at ANL and FRIB at MSU (nuclear structure and reactions, and nuclear astrophysics), TJNAF (neutron distributions in nuclei, few-body systems, and electroweak processes), NIF (thermonuclear reactions), MAJORANA and FNPB (neutrinoless double-beta decay and physics beyond the Standard Model), and LANSCE (fission studies).
OA-7 Atlas V Centaur mate to Booster
2017-02-23
The Centaur upper stage of the United Launch Alliance (ULA) Atlas V rocket arrives at the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The Centaur stage is lifted and mated to the first stage booster. The rocket is being prepared for Orbital ATK's seventh commercial resupply mission, CRS-7, to the International Space Station. Orbital ATK's Cygnus pressurized cargo module is scheduled to launch atop ULA's Atlas V rocket from Pad 41 on March 19, 2017. Cygnus will deliver 7,600 pounds of supplies, equipment and scientific research materials to the space station.
InSight Atlas V Fairing Rotate to Vertical
2018-02-07
In the Astrotech facility at Vandenberg Air Force Base in California, technicians and engineers inspect the payload fairing for the United Launch Alliance (ULA) Atlas V for NASA's upcoming Interior Exploration using Seismic Investigations, Geodesy and Heat Transport, or InSight, mission to land on Mars after it was lifted to the vertical position. InSight is the first mission to explore the Red Planet's deep interior. It will investigate processes that shaped the rocky planets of the inner solar system including Earth. Liftoff atop a ULA Atlas V rocket is scheduled for May 5, 2018.
Lee, Junghoon; Carass, Aaron; Jog, Amod; Zhao, Can; Prince, Jerry L
2017-02-01
Accurate CT synthesis, sometimes called electron density estimation, from MRI is crucial for successful MRI-based radiotherapy planning and dose computation. Existing CT synthesis methods are able to synthesize normal tissues but are unable to accurately synthesize abnormal tissues (i.e., tumor), thus providing a suboptimal solution. We propose a multi-atlas-based hybrid synthesis approach that combines multi-atlas registration and patch-based synthesis to accurately synthesize both normal and abnormal tissues. Multi-parametric atlas MR images are registered to the target MR images by multi-channel deformable registration, from which the atlas CT images are deformed and fused by locally-weighted averaging using a structural similarity measure (SSIM). Synthetic MR images are also computed from the registered atlas MRIs by using the same weights used for the CT synthesis; these are compared to the target patient MRIs allowing for the assessment of the CT synthesis fidelity. Poor synthesis regions are automatically detected based on the fidelity measure and refined by a patch-based synthesis. The proposed approach was tested on brain cancer patient data, and showed a noticeable improvement for the tumor region.
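The locally weighted averaging step described above can be sketched as follows. For brevity this stand-in uses a per-voxel Gaussian intensity-difference weight instead of the structural similarity measure (SSIM) used in the paper, and the function name and signature are hypothetical:

```python
import numpy as np

def fuse_atlas_cts(target_mr, atlas_mrs, atlas_cts, sigma=0.1):
    """Locally weighted fusion of deformed atlas CTs. Each voxel's weight for
    atlas i decays with the local MR dissimilarity between the target MR and
    the registered atlas MR (a simple stand-in for SSIM-based weighting)."""
    weights = [np.exp(-((target_mr - mr) ** 2) / (2 * sigma ** 2))
               for mr in atlas_mrs]
    wsum = np.sum(weights, axis=0) + 1e-12          # avoid division by zero
    return sum(w * ct for w, ct in zip(weights, atlas_cts)) / wsum
```

Voxels where no atlas MR resembles the target (all weights small) are exactly the low-fidelity regions the paper flags for refinement by patch-based synthesis.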
A whole brain atlas with sub-parcellation of cortical gyri using resting fMRI
NASA Astrophysics Data System (ADS)
Joshi, Anand A.; Choi, Soyoung; Sonkar, Gaurav; Chong, Minqi; Gonzalez-Martinez, Jorge; Nair, Dileep; Shattuck, David W.; Damasio, Hanna; Leahy, Richard M.
2017-02-01
The new hybrid-BCI-DNI atlas is a high-resolution MPRAGE, single-subject atlas, constructed using both anatomical and functional information to guide the parcellation of the cerebral cortex. Anatomical labeling was performed manually on coronal single-slice images guided by sulcal and gyral landmarks to generate the original (non-hybrid) BCI-DNI atlas. Functional sub-parcellations of the gyral ROIs were then generated from 40 minimally preprocessed resting fMRI datasets from the HCP database. Gyral ROIs were transferred from the BCI-DNI atlas to the 40 subjects using the HCP grayordinate space as a reference. For each subject, each gyral ROI was subdivided using the fMRI data by applying spectral clustering to a similarity matrix computed from the fMRI time-series correlations between each vertex pair. The sub-parcellations were then transferred back to the original cortical mesh to create the sub-parcellated hBCI-DNI atlas with a total of 67 cortical regions per hemisphere. To assess the stability of the gyral subdivisions, a separate set of 60 HCP datasets was processed as follows: 1) coregistration of the structural scans to the hBCI-DNI atlas; 2) coregistration of the anatomical BCI-DNI atlas without functional subdivisions, followed by sub-parcellation of each subject's resting fMRI data as described above. We then computed consistency between the anatomically-driven delineation of each gyral subdivision and that obtained per subject using individual fMRI data. The gyral sub-parcellations generated by atlas-based registration show variable but generally good overlap of the confidence intervals with the resting fMRI-based subdivisions. These consistency measures will provide a quantitative measure of the reliability of each subdivision to users of the atlas.
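The per-gyrus sub-parcellation step above (spectral clustering of a correlation-based similarity matrix) can be sketched as follows. This is a minimal NumPy-only illustration with a tiny built-in k-means and a hypothetical function name, not the authors' pipeline:

```python
import numpy as np

def subparcellate(timeseries, k, seed=0):
    """Subdivide one ROI into k functional subregions.
    timeseries: (n_vertices, n_timepoints) resting-fMRI signals.
    Builds a correlation-based similarity matrix, embeds the vertices with the
    k smallest eigenvectors of the normalized graph Laplacian, and clusters
    the row-normalized embedding with a tiny k-means."""
    sim = np.clip(np.corrcoef(timeseries), 0, None)   # keep positive correlations
    d = sim.sum(axis=1)
    lap = np.eye(len(sim)) - sim / np.sqrt(np.outer(d, d))  # normalized Laplacian
    _, vecs = np.linalg.eigh(lap)
    emb = vecs[:, :k]
    emb = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12)
    rng = np.random.default_rng(seed)
    centers = emb[rng.choice(len(emb), size=k, replace=False)]
    for _ in range(50):                               # tiny k-means
        assign = np.argmin(((emb[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([emb[assign == j].mean(axis=0) if np.any(assign == j)
                            else centers[j] for j in range(k)])
    return assign
```

Two groups of vertices sharing distinct time courses separate cleanly under this scheme, which is the behaviour the atlas construction relies on.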
Enhancing atlas based segmentation with multiclass linear classifiers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sdika, Michaël, E-mail: michael.sdika@creatis.insa-lyon.fr
Purpose: To present a method to enrich atlases for atlas-based segmentation. Such enriched atlases can then be used as a single atlas or within a multiatlas framework. Methods: In this paper, machine learning techniques have been used to enhance the atlas-based segmentation approach. The enhanced atlas defined in this work is a pair composed of a gray level image alongside an image of multiclass classifiers with one classifier per voxel. Each classifier embeds local information from the whole training dataset that allows for the correction of some systematic errors in the segmentation and accounts for possible local registration errors. The authors also propose to use these images of classifiers within a multiatlas framework: results produced by a set of such local classifier atlases can be combined using a label fusion method. Results: Experiments have been made on the in vivo images of the IBSR dataset and a comparison has been made with several state-of-the-art methods such as FreeSurfer and the multiatlas nonlocal patch-based method of Coupé or Rousseau. These experiments show that the method is competitive with state-of-the-art methods while having a low computational cost. Further enhancement has also been obtained with a multiatlas version of the method. It is also shown that, in this case, nonlocal fusion is unnecessary, so the multiatlas fusion can be done efficiently. Conclusions: The single atlas version has similar quality to state-of-the-art multiatlas methods but with the computational cost of a naive single atlas segmentation. The multiatlas version offers an improvement in quality and can be done efficiently without a nonlocal strategy.
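The "enriched atlas" idea above (a gray-level image paired with one multiclass classifier per voxel) can be illustrated with a deliberately simple per-voxel nearest-class-mean classifier. The authors' classifiers embed richer local information from the training set, so this hypothetical class is only a structural sketch:

```python
import numpy as np

class VoxelClassifierAtlas:
    """Per-voxel multiclass classifiers trained from registered training
    images and label maps: here, a nearest-class-mean rule on intensity."""

    def fit(self, images, label_maps):
        images = np.stack(images).astype(float)   # (n_train, *shape)
        labels = np.stack(label_maps)
        self.classes = np.unique(labels)
        means = []
        for c in self.classes:
            mask = labels == c
            cnt = mask.sum(axis=0)
            # per-voxel mean intensity of class c; NaN where c never occurs
            m = np.where(cnt > 0,
                         (images * mask).sum(axis=0) / np.maximum(cnt, 1),
                         np.nan)
            means.append(m)
        self.means = np.stack(means)              # (n_classes, *shape)
        return self

    def predict(self, target):
        dist = np.abs(self.means - target[None])
        dist = np.where(np.isnan(dist), np.inf, dist)  # absent classes never win
        return self.classes[np.argmin(dist, axis=0)]
```

Several such classifier atlases could then be combined with a label fusion method, as the abstract proposes for the multiatlas setting.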
2016-11-09
Enclosed in its payload fairing, NOAA's Geostationary Operational Environmental Satellite (GOES-R) is mated to the United Launch Alliance Atlas V Centaur upper stage in the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The satellite will launch aboard the Atlas V rocket in November. GOES-R is the first satellite in a series of next-generation NOAA GOES Satellites.
25. VIEW OF ATLAS CONTROL CONSOLE NEAR NORTHEAST CORNER OF ...
25. VIEW OF ATLAS CONTROL CONSOLE NEAR NORTHEAST CORNER OF SLC-3W CONTROL ROOM. CONSOLE INCLUDES TELEVISION CONTROL, FACILITIES, AND VEHICLE (MISSILE) POWER PANELS. FROM LEFT TO RIGHT IN BACKGROUND: MILITARY-TIME CLOCK, BASE OF BUNKER PERISCOPE, AND STAIRS TO ESCAPE TUNNEL. - Vandenberg Air Force Base, Space Launch Complex 3, Launch Operations Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
Cortical bone thickening in Type A posterior atlas arch defects: experimental report.
Sanchis-Gimeno, Juan A; Llido, Susanna; Guede, David; Martinez-Soriano, Francisco; Ramon Caeiro, Jose; Blanco-Perez, Esther
2017-03-01
To date, no information about the cortical bone microstructural properties of atlas vertebrae with posterior arch defects has been reported. The aim was to test whether there is increased cortical bone thickening in atlases with Type A posterior atlas arch defects in an experimental model. Micro-computed tomography (micro-CT) study of cadaveric atlas vertebrae. We analyzed the cortical bone thickness, the cortical volume, and the medullary volume (SkyScan 1172; Bruker micro-CT NV, Kontich, Belgium) in cadaveric dry vertebrae with a Type A atlas arch defect and normal control vertebrae. The micro-CT study revealed significant differences in cortical bone thickness (p=.005), cortical volume (p=.003), and medullary volume (p=.009) between the normal and the Type A vertebrae. Type A congenital atlas arch defects present a cortical bone thickening that may play a protective role against atlas fractures.
PanDA: Exascale Federation of Resources for the ATLAS Experiment at the LHC
NASA Astrophysics Data System (ADS)
Barreiro Megino, Fernando; Caballero Bejar, Jose; De, Kaushik; Hover, John; Klimentov, Alexei; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Petrosyan, Artem; Wenaus, Torre
2016-02-01
After a scheduled maintenance and upgrade period, the world's largest and most powerful machine - the Large Hadron Collider (LHC) - is about to enter its second run at unprecedented energies. In order to exploit the scientific potential of the machine, the experiments at the LHC face computational challenges with enormous data volumes that need to be analysed by thousands of physics users and compared to simulated data. Given diverse funding constraints, the computational resources for the LHC have been deployed in a worldwide mesh of data centres, connected to each other through Grid technologies. The PanDA (Production and Distributed Analysis) system was developed in 2005 for the ATLAS experiment on top of this heterogeneous infrastructure to seamlessly integrate the computational resources and give the users the feeling of a unique system. Since its origins, PanDA has evolved together with upcoming computing paradigms in and outside HEP, such as changes in the networking model, Cloud Computing and HPC. It currently runs steadily on up to 200 thousand simultaneous cores (limited by the available resources for ATLAS), with up to two million aggregated jobs per day, and processes over an exabyte of data per year. The success of PanDA in ATLAS is triggering its widespread adoption and testing by other experiments. In this contribution we will give an overview of the PanDA components and focus on the new features and upcoming challenges that are relevant to the next decade of distributed computing workload management using PanDA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhuang, Xiahai, E-mail: zhuangxiahai@sjtu.edu.cn; Qian, Xiaohua; Bai, Wenjia
Purpose: Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making the automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance are limited. This work provides a detailed comparison study evaluatingmore » the impacts of label fusion, atlas ranking, and sizes of the atlas database on the segmentation performance. Methods: Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases were selected for label fusion, according to the authors’ proposed atlas ranking criterion which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. Results: The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276, p < 0.01), locally weighted voting (0.905 ± 0.0247, p < 0.01), and probabilistic patch-based fusion (0.909 ± 0.0249, p < 0.01). 
In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve with higher WHS Dice scores compared to the conventional schemes (p < 0.03). In the atlas database study, the authors showed that the MAS using larger atlas databases generated better performance curves than the MAS using smaller ones, indicating larger atlas databases could produce more accurate segmentation. Conclusions: The authors have developed a new MAS framework for automatic WHS of CTA and investigated alternative implementations of MAS. With the proposed atlas ranking algorithm and joint label fusion, the MAS scheme is able to generate accurate segmentation within practically acceptable computation time. This method can be useful for the development of new clinical applications of cardiac CT.
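The ranking criterion in the abstract above, scoring each atlas by the conditional entropy of the target image given the propagated atlas labeling, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation; the histogram binning and the helper names (`conditional_entropy`, `rank_atlases`) are assumptions.

```python
import numpy as np

def conditional_entropy(target, atlas_labels, bins=32):
    """H(target intensities | propagated atlas labels), in bits.

    A low value means the propagated labeling explains the target
    image well, so that atlas is ranked as more relevant.
    """
    t = np.digitize(target.ravel(), np.histogram_bin_edges(target, bins))
    lab = atlas_labels.ravel()
    joint = np.zeros((t.max() + 1, lab.max() + 1))
    np.add.at(joint, (t, lab), 1)                    # joint histogram
    p_joint = joint / joint.sum()
    p_label = p_joint.sum(axis=0)                    # marginal over labels
    with np.errstate(divide="ignore", invalid="ignore"):
        h = p_joint * np.log2(p_joint / p_label)     # p(t,l) log p(t|l)
    return -np.nansum(h)

def rank_atlases(target, propagated_labelings):
    """Return atlas indices sorted from most to least relevant."""
    scores = [conditional_entropy(target, lab) for lab in propagated_labelings]
    return np.argsort(scores)                        # ascending entropy
```

A labeling that tracks the target intensities yields a lower conditional entropy than an uninformative one, so it sorts first.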
A geochemical atlas of North Carolina, USA
Reid, J.C.
1993-01-01
A geochemical atlas of North Carolina, U.S.A., was prepared using National Uranium Resource Evaluation (NURE) stream-sediment data. Before termination of the NURE program, sampling of nearly the entire state (48,666 square miles of land area) was completed and geochemical analyses were obtained. The NURE data are applicable to mineral exploration, agriculture, waste disposal siting issues, health, and environmental studies. Applications in state government include resource surveys to assist mineral exploration by identifying geochemical anomalies and areas of mineralization. Agriculture seeks to identify areas with favorable (or unfavorable) conditions for plant growth, disease, and crop productivity. Trace elements such as cobalt, copper, chromium, iron, manganese, zinc, and molybdenum must be present within narrow ranges in soils for optimum growth and productivity. Trace elements as a contributing factor to disease are of concern to health professionals. Industry can use pH and conductivity data for water samples to site facilities which require specific water quality. The North Carolina NURE database consists of stream-sediment samples, groundwater samples, and stream-water analyses. The statewide database consists of 6,744 stream-sediment sites, 5,778 groundwater sample sites, and 295 stream-water sites. Neutron activation analyses were provided for U, Br, Cl, F, Mn, Na, Al, V, Dy in groundwater and stream water, and for U, Th, Hf, Ce, Fe, Mn, Na, Sc, Ti, V, Al, Dy, Eu, La, Sm, Yb, and Lu in stream sediments. Supplemental analyses by other techniques were reported on U (extractable), Ag, As, Ba, Be, Ca, Co, Cr, Cu, K, Li, Mg, Mo, Nb, Ni, P, Pb, Se, Sn, Sr, W, Y, and Zn for 4,619 stream-sediment samples. A small subset of 334 stream samples was analyzed for gold. The goal of the atlas was to make available the statewide NURE data with minimal interpretation to enable prospective users to modify and manipulate the data for their end use. 
The atlas provides only a very general indication of geochemical distribution patterns and should not be used for site-specific studies. The atlas maps for each element were computer-generated at the state's geographic information system (Center for Geographic Information and Analysis [CGIA]). The Division of Statistics and Information Services provided input files. The maps in the atlas are point maps. Each sample is represented by a symbol generally corresponding to a quartile class. Other reports will transmit sample and analytical data for state regions. Data are tentatively planned to be available on disks in spreadsheet format for personal computers. During the second phase of this project, stream-sediment samples are being assigned to state geologic map unit names using a GIS system to determine background and anomaly values. Subsequent publications will make this geochemical data and accompanying interpretations available to a wide spectrum of interdisciplinary users. © 1993.
Evolution of the ATLAS PanDA workload management system for exascale computational science
NASA Astrophysics Data System (ADS)
Maeno, T.; De, K.; Klimentov, A.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.; Yu, D.; Atlas Collaboration
2014-06-01
An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data-intensive scientific applications. The Alpha Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated at a very large scale the value of automated dynamic brokering of diverse workloads across distributed computing resources. The next generation of PanDA will allow other data-intensive sciences and a wider exascale community employing a variety of computing platforms to benefit from ATLAS's experience and proven tools.
Federated data storage and management infrastructure
NASA Astrophysics Data System (ADS)
Zarochentsev, A.; Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Hristov, P.
2016-10-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate a growth of storage needs by at least an order of magnitude; this will require new approaches in data storage organization and data handling. In our project we address the fundamental problem of designing an architecture to integrate distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. We have prototyped a federated storage for Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and the Grid for the ALICE and ATLAS experiments. We will present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within National Academic facilities for High Energy and Nuclear Physics as well as for other data-intensive science applications, such as bio-informatics.
Iglesias, Juan Eugenio; Augustinack, Jean C; Nguyen, Khoa; Player, Christopher M; Player, Allison; Wright, Michelle; Roy, Nicole; Frosch, Matthew P; McKee, Ann C; Wald, Lawrence L; Fischl, Bruce; Van Leemput, Koen
2015-07-15
Automated analysis of MRI data of the subregions of the hippocampus requires computational atlases built at a higher resolution than those that are typically used in current neuroimaging studies. Here we describe the construction of a statistical atlas of the hippocampal formation at the subregion level using ultra-high resolution, ex vivo MRI. Fifteen autopsy samples were scanned at 0.13 mm isotropic resolution (on average) using customized hardware. The images were manually segmented into 13 different hippocampal substructures using a protocol specifically designed for this study; precise delineations were made possible by the extraordinary resolution of the scans. In addition to the subregions, manual annotations for neighboring structures (e.g., amygdala, cortex) were obtained from a separate dataset of in vivo, T1-weighted MRI scans of the whole brain (1 mm resolution). The manual labels from the in vivo and ex vivo data were combined into a single computational atlas of the hippocampal formation with a novel atlas building algorithm based on Bayesian inference. The resulting atlas can be used to automatically segment the hippocampal subregions in structural MRI images, using an algorithm that can analyze multimodal data and adapt to variations in MRI contrast due to differences in acquisition hardware or pulse sequences. The applicability of the atlas, which we are releasing as part of FreeSurfer (version 6.0), is demonstrated with experiments on three different publicly available datasets with different types of MRI contrast.
The results show that the atlas and companion segmentation method: 1) can segment T1 and T2 images, as well as their combination, 2) can replicate findings on mild cognitive impairment based on high-resolution T2 data, and 3) can discriminate between Alzheimer's disease subjects and elderly controls with 88% accuracy in standard resolution (1 mm) T1 data, significantly outperforming the atlas in FreeSurfer version 5.3 (86% accuracy) and classification based on whole hippocampal volume (82% accuracy). Copyright © 2015. Published by Elsevier Inc.
Itazawa, Tomoko; Tamaki, Yukihisa; Komiyama, Takafumi; Nishimura, Yasumasa; Nakayama, Yuko; Ito, Hiroyuki; Ohde, Yasuhisa; Kusumoto, Masahiko; Sakai, Shuji; Suzuki, Kenji; Watanabe, Hirokazu; Asamura, Hisao
2017-01-01
The purpose of this study was to develop a consensus-based computed tomographic (CT) atlas that defines lymph node stations in radiotherapy for lung cancer based on the lymph node map of the International Association for the Study of Lung Cancer (IASLC). A project group in the Japanese Radiation Oncology Study Group (JROSG) initially prepared a draft of the atlas in which lymph node Stations 1–11 were illustrated on axial CT images. Subsequently, a joint committee of the Japan Lung Cancer Society (JLCS) and the Japanese Society for Radiation Oncology (JASTRO) was formulated to revise this draft. The committee consisted of four radiation oncologists, four thoracic surgeons and three thoracic radiologists. The draft prepared by the JROSG project group was intensively reviewed and discussed at four meetings of the committee over several months. Finally, we proposed definitions for the regional lymph node stations and the consensus-based CT atlas. This atlas was approved by the Board of Directors of JLCS and JASTRO. This resulted in the first official CT atlas for defining regional lymph node stations in radiotherapy for lung cancer authorized by the JLCS and JASTRO. In conclusion, the JLCS–JASTRO consensus-based CT atlas, which conforms to the IASLC lymph node map, was established. PMID:27609192
TDRS-M: Atlas V 2nd Stage Erection/Off-site Vertical Integration (OVI)
2017-07-13
A United Launch Alliance Atlas V Centaur upper stage arrives at the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. United Launch Alliance team members monitor the operation progress as the Centaur upper stage is lifted and mated to the Atlas V booster in the vertical position. The rocket is scheduled to help launch the Tracking and Data Relay Satellite, TDRS-M. It will be the latest spacecraft destined for the agency's constellation of communications satellites that allows nearly continuous contact with orbiting spacecraft ranging from the International Space Station and Hubble Space Telescope to the array of scientific observatories. Liftoff atop the ULA Atlas V rocket is scheduled to take place from Cape Canaveral's Space Launch Complex 41 in early August.
PanDA Pilot Submission using Condor-G: Experience and Improvements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao X.; Hover John; Wlodek Tomasz
2011-01-01
PanDA (Production and Distributed Analysis) is the workload management system of the ATLAS experiment, used to run managed production and user analysis jobs on the grid. As a late-binding, pilot-based system, the maintenance of a smooth and steady stream of pilot jobs to all grid sites is critical for PanDA operation. The ATLAS Computing Facility (ACF) at BNL, as the ATLAS Tier-1 center in the US, operates the pilot submission systems for the US. This is done using the PanDA 'AutoPilot' scheduler component, which submits pilot jobs via Condor-G, a grid job scheduling system developed at the University of Wisconsin-Madison. In this paper, we discuss the operation and performance of the Condor-G pilot submission at BNL, with emphasis on the challenges and issues encountered in the real grid production environment. With the close collaboration of the Condor and PanDA teams, the scalability and stability of the overall system have been greatly improved over the last year. We review improvements made to Condor-G resulting from this collaboration, including isolation of site-based issues by running a separate Gridmanager for each remote site, introduction of the 'Nonessential' job attribute to allow Condor to optimize its behavior for the specific character of pilot jobs, better understanding and handling of the Gridmonitor process, as well as better scheduling in the PanDA pilot scheduler component. We will also cover the monitoring of the health of the system.
Congenital bipartite atlas with hypodactyly in a dog: clinical, radiographic and CT findings.
Wrzosek, M; Płonek, M; Zeira, O; Bieżyński, J; Kinda, W; Guziński, M
2014-07-01
A three-year-old Border collie was diagnosed with a bipartite atlas and bilateral forelimb hypodactyly. The dog showed signs of acute, non-progressive neck pain, general stiffness and right thoracic limb non-weight-bearing lameness. Computed tomography imaging revealed a bipartite atlas with abaxial vertical bone proliferation, which was the cause of the clinical signs. In addition, bilateral hypodactyly of the second and fifth digits was incidentally found. This report suggests that hypodactyly may be associated with atlas malformations. © 2014 British Small Animal Veterinary Association.
GOES-S Countdown to T-Zero, Episode 4: Ready to Roll
2018-02-28
NOAA's GOES-S is encapsulated in its payload fairing inside Astrotech Space Operations in Titusville, Florida, and transported to the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station. It was hoisted up and secured to the United Launch Alliance Atlas V rocket. GOES-S, the next in a series of advanced weather satellites, launched aboard the Atlas V on March 1, 2018.
NASA Technical Reports Server (NTRS)
Dreisbach, R. L. (Editor)
1979-01-01
The input data and execution control statements for the ATLAS integrated structural analysis and design system are described. It is operational on the Control Data Corporation (CDC) 6600/CYBER computers in a batch mode or in a time-shared mode via interactive graphic or text terminals. ATLAS is a modular system of computer codes with common executive and data base management components. The system provides an extensive set of general-purpose technical programs with analytical capabilities including stiffness, stress, loads, mass, substructuring, strength design, unsteady aerodynamics, vibration, and flutter analyses. The sequence and mode of execution of selected program modules are controlled via a common user-oriented language.
Automating usability of ATLAS Distributed Computing resources
NASA Astrophysics Data System (ADS)
Tupputi, S. A.; Di Girolamo, A.; Kouba, T.; Schovancová, J.; Atlas Collaboration
2014-06-01
The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both tasks of providing global monitoring and performing automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of storage area monitoring and central management at all levels. This review has involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows the status of storage resources to be monitored with fine time granularity and automatic actions to be taken in foreseen cases, such as automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up on problems, where and when needed. In this work we show SAAB's working principles and features. We also present the decrease in human interactions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
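The abstract does not spell out SAAB's inference algorithm; as a hedged illustration of history-based blacklisting, a minimal sliding-window rule might look like the sketch below. The class name, window size, and thresholds are all invented for the example.

```python
from collections import deque

class StorageAreaMonitor:
    """Toy history-based blacklisting in the spirit of SAAB.

    Keeps a sliding window of test outcomes per storage area and
    blacklists an area when its recent failure rate is too high;
    the area is whitelisted again once the failure rate drops.
    """
    def __init__(self, window=10, blacklist_at=0.8, whitelist_at=0.2):
        self.window = window
        self.blacklist_at = blacklist_at
        self.whitelist_at = whitelist_at
        self.history = {}        # area -> deque of bools (True = test passed)
        self.blacklisted = set()

    def record(self, area, passed):
        """Record one monitoring test outcome; return blacklist status."""
        h = self.history.setdefault(area, deque(maxlen=self.window))
        h.append(passed)
        fail_rate = 1 - sum(h) / len(h)
        if fail_rate >= self.blacklist_at:
            self.blacklisted.add(area)
        elif fail_rate <= self.whitelist_at:
            self.blacklisted.discard(area)
        return area in self.blacklisted
```

Using two thresholds (hysteresis) avoids an area flapping in and out of the blacklist on a single borderline test result.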
The evolution of computer monitoring of real time data during the Atlas Centaur launch countdown
NASA Technical Reports Server (NTRS)
Thomas, W. F.
1981-01-01
In the last decade, improvements in computer technology have provided new 'tools' for controlling and monitoring critical missile systems. In this connection, computers have gradually taken a larger role in monitoring all flight and ground systems on the Atlas Centaur. The wide-body Centaur, which will be launched in the Space Shuttle cargo bay, will use computers to an even greater extent. It is planned to use the wide-body Centaur to boost the Galileo spacecraft toward Jupiter in 1985. The critical systems which must be monitored prior to liftoff are examined. Computers have now been programmed to monitor all critical parameters continuously. At this time, there are two separate computer systems used to monitor these parameters.
Sex-Related Differences in the Developmental Morphology of the Atlas: A Computed Tomography Study.
Asukai, Mitsuru; Fujita, Tomotada; Suzuki, Daisuke; Nishida, Tatsuya; Ohishi, Tsuyoshi; Matsuyama, Yukihiro
2018-05-15
A retrospective study. To elucidate sex-related differences in the age at synchondrosis closure, the normative size of the atlas, and the ossification patterns of the atlas in Japanese children. The atlas develops from three ossification centers during childhood. The anterior and posterior synchondroses, which separate the ossification centers, mimic fracture lines on computed tomography (CT). Sex-related differences in age-dependent morphological changes of the atlas in a large sample have not been reported. This study analyzed data from 688 subjects (449 boys) between 0 and 18 years old who underwent CT examination of the head and/or neck between January 2010 and July 2016. The age at synchondrosis closure; the anteroposterior outer, inner, and spinal canal widths of the atlas; and variations of the ossification centers were examined. Anterior synchondroses closed by 10 years in boys and by 7 years in girls. Significantly earlier closure of the anterior synchondroses was observed in girls than in boys (P < 0.05 at 4 and 5 years old). The posterior synchondrosis closed by 6 years in boys and by 5 years in girls. The outer, inner, and spinal canal widths increased up to 10 to 15 years in both sexes, although all three parameters in girls peaked 3 years earlier than those in boys. All parameters in boys were significantly larger than those in girls, except in the 10- to 12-year-old age category. Two or more ossification centers in the anterior arch were observed in 18.3% of subjects, and 6% had midline ossification centers in the posterior arch of the atlas. Distinct sex-related differences in the age at anterior synchondrosis closure and the size of the atlas were observed in Japanese children. Knowledge of the morphological features of the atlas could help distinguish fractures from synchondroses. Level of Evidence: 3.
ULA's Atlas V for Boeing's Orbital Flight Test
2017-10-24
The Atlas V rocket that will launch Boeing’s CST-100 Starliner spacecraft on the company’s uncrewed Orbital Flight Test for NASA’s Commercial Crew Program is coming together inside a United Launch Alliance facility in Decatur, Alabama. The flight test is intended to prove the design of the integrated space system prior to the Crew Flight Test. These events are part of NASA’s required certification process as the company works to regularly fly astronauts to and from the International Space Station. Boeing's Starliner will launch on the United Launch Alliance Atlas V rocket from Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida.
Atlas Fractures and Atlas Osteosynthesis: A Comprehensive Narrative Review.
Kandziora, Frank; Chapman, Jens R; Vaccaro, Alexander R; Schroeder, Gregory D; Scholz, Matti
2017-09-01
Most atlas fractures are the result of compression forces. They are often combined with fractures of the axis, especially of the odontoid process. Multiple classification systems for atlas fractures have been described. For an adequate diagnosis, computed tomography is mandatory. To distinguish between a stable and an unstable atlas injury, it is necessary to evaluate the integrity of the transverse atlantal ligament (TAL) by magnetic resonance imaging and to classify the TAL lesion. Studies comparing conservative and operative management of unstable atlas fractures are unfortunately not available in the literature; neither are studies comparing different operative treatment strategies. Hence all treatment recommendations are based on low-level evidence. Most atlas fractures are stable and will be successfully managed by immobilization in a soft/hard collar. Unstable atlas fractures may be treated conservatively by halo fixation, but nowadays more and more surgeons prefer surgery because of the potential discomfort and complications of halo traction. Atlas fractures with a midsubstance ligamentous disruption of the TAL or severe bony ligamentous avulsion can be treated by a C1/2 fusion. Unstable atlas fractures with moderate bony ligamentous avulsion may be treated by atlas osteosynthesis. Although the evidence for the different treatment strategies for atlas fractures is low, atlas osteosynthesis has the potential to change treatment philosophies. The reasons for this are described in this review.
Microscopic heavy-ion theory. Final Report. February 2014-June 2015
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ernst, David J.; Oberacker, Volker E.; Umar, A. Sait
The Vanderbilt nuclear theory group conducts research in the areas of low-energy nuclear reactions and in neutrino oscillations. Specifically, we study the dynamics of nuclear reactions microscopically, in particular for neutron-rich nuclei which will be accessible with current and future radioactive ion beam facilities. The neutrino work concentrates on constructing computational tools for analyzing neutrino oscillation data. The most important of these is the analysis of the Super-K atmospheric data. Our research concentrates on the following topics which are part of the DOE Long-Range Plan: STUDIES OF LOW-ENERGY REACTIONS OF EXOTIC NUCLEI (Professors Umar and Oberacker), including sub-barrier fusion cross sections, capture cross sections for superheavy element production, and nuclear astrophysics applications. Our theory project is strongly connected to experiments at RIB facilities around the world, including NSCL-FRIB (MSU) and ATLAS-CARIBU (Argonne). PHENOMENOLOGY OF NEUTRINO OSCILLATIONS (Prof. Ernst), extracting information from existing neutrino oscillation experiments and proposing possible future experiments in order to better understand the oscillation phenomenon.
Lynch, Rod; Pitson, Graham; Ball, David; Claude, Line; Sarrut, David
2013-01-01
To develop a reproducible definition for each mediastinal lymph node station based on the new TNM classification for lung cancer. This paper proposes an atlas based on the new international lymph node map used in the seventh edition of the TNM classification for lung cancer. Four radiation oncologists and 1 diagnostic radiologist were involved in the project to put forward a reproducible radiologic description of the lung lymph node stations. The International Association for the Study of Lung Cancer lymph node definitions for Stations 1 to 11 have been described and illustrated on axial computed tomographic scan images using a certified radiotherapy planning system. This atlas will assist both diagnostic radiologists and radiation oncologists in accurately defining the lymph node stations on computed tomographic scans in patients diagnosed with lung cancer. Copyright © 2013 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
A four-dimensional motion field atlas of the tongue from tagged and cine magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Xing, Fangxu; Prince, Jerry L.; Stone, Maureen; Wedeen, Van J.; El Fakhri, Georges; Woo, Jonghye
2017-02-01
Representation of human tongue motion using three-dimensional vector fields over time can be used to better understand tongue function during speech, swallowing, and other lingual behaviors. To characterize the inter-subject variability of the tongue's shape and motion of a population carrying out one of these functions it is desirable to build a statistical model of the four-dimensional (4D) tongue. In this paper, we propose a method to construct a spatio-temporal atlas of tongue motion using magnetic resonance (MR) images acquired from fourteen healthy human subjects. First, cine MR images revealing the anatomical features of the tongue are used to construct a 4D intensity image atlas. Second, tagged MR images acquired to capture internal motion are used to compute a dense motion field at each time frame using a phase-based motion tracking method. Third, motion fields from each subject are pulled back to the cine atlas space using the deformation fields computed during the cine atlas construction. Finally, a spatio-temporal motion field atlas is created to show a sequence of mean motion fields and their inter-subject variation. The quality of the atlas was evaluated by deforming cine images in the atlas space. Comparison between deformed and original cine images showed high correspondence. The proposed method provides a quantitative representation to observe the commonality and variability of the tongue motion field for the first time, and shows potential in evaluation of common properties such as strains and other tensors based on motion fields.
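The final step described above, averaging the pulled-back motion fields and measuring their inter-subject variation, can be illustrated with a toy computation. The array shapes and the use of displacement-magnitude standard deviation as the variability measure are assumptions for the sketch, not the paper's exact choices.

```python
import numpy as np

def motion_field_atlas(fields):
    """Average subject motion fields already aligned to atlas space.

    `fields` has shape (n_subjects, n_frames, X, Y, Z, 3): one 3D
    displacement vector per voxel, per time frame, per subject.
    Returns the sequence of mean motion fields and a per-voxel
    inter-subject variability (std of displacement magnitude).
    """
    fields = np.asarray(fields, dtype=float)
    mean_field = fields.mean(axis=0)              # (n_frames, X, Y, Z, 3)
    magnitudes = np.linalg.norm(fields, axis=-1)  # (n_subjects, n_frames, X, Y, Z)
    variability = magnitudes.std(axis=0)          # (n_frames, X, Y, Z)
    return mean_field, variability
```

The mean field shows the common motion pattern across subjects; the variability map highlights regions where tongue motion differs most between individuals.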
2011-11-17
CAPE CANAVERAL, Fla. -- The Atlas V rocket set to launch NASA's Mars Science Laboratory (MSL) mission is illuminated inside the Vertical Integration Facility at Space Launch Complex 41, where employees have gathered to hoist the spacecraft's multi-mission radioisotope thermoelectric generator (MMRTG). The generator will be lifted up to the top of the rocket and installed on the MSL spacecraft, encapsulated within the payload fairing. The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat produced by this natural decay will provide constant power through the day and night during all seasons. MSL's components include a car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Heat emitted by the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 25 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Dimitri Gerondidakis
2011-11-17
CAPE CANAVERAL, Fla. -- Enclosed in the protective mesh container known as the "gorilla cage," the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission is hoisted up beside the Atlas V rocket standing in the Vertical Integration Facility at Space Launch Complex 41. The generator will be installed on the MSL spacecraft, encapsulated within the payload fairing. Photo credit: NASA/Dimitri Gerondidakis
Fusion set selection with surrogate metric in multi-atlas based image segmentation
NASA Astrophysics Data System (ADS)
Zhao, Tingting; Ruan, Dan
2016-02-01
Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts with investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogates and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation.
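The surrogate-based selection step can be sketched as follows, assuming the atlases have already been registered to the target. Pearson correlation stands in here for whatever image-similarity surrogate is actually used in a given pipeline; the function names and the fixed fusion-set size are illustrative.

```python
import numpy as np

def surrogate_score(target, atlas_image):
    """Image-similarity surrogate: absolute Pearson correlation of
    intensities between the target and a registered atlas image.
    A stand-in for the inaccessible 'oracle' geometric agreement."""
    return abs(np.corrcoef(target.ravel(), atlas_image.ravel())[0, 1])

def select_fusion_set(target, atlas_images, k):
    """Pick the k atlases with the highest surrogate similarity,
    i.e. the 'most relevant' atlases to pass on to label fusion."""
    scores = np.array([surrogate_score(target, a) for a in atlas_images])
    return np.argsort(scores)[::-1][:k]
```

In a real system, `k` would itself be chosen from the surrogate score distribution (the paper's probabilistic analysis) rather than fixed in advance.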
2016-11-09
A view from high up inside the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. A crane lifts the payload fairing containing NOAA's Geostationary Operational Environmental Satellite (GOES-R) for mating to the United Launch Alliance Atlas V Centaur upper stage. The satellite will launch aboard the Atlas V rocket in November. GOES-R is the first satellite in a series of next-generation NOAA GOES Satellites.
MRI-based treatment planning with pseudo CT generated through atlas registration.
Uh, Jinsoo; Merchant, Thomas E; Li, Yimei; Li, Xingyu; Hua, Chiaho
2014-05-01
To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787-0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%-98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). 
The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found. MRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs.
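Two of the pseudo-CT schemes compared in the abstract above can be sketched in a few lines: the bulk baseline that assigns the Hounsfield unit of water to the whole volume, and the arithmetic mean of the deformed atlas CTs. Voxel values and the flat-list image representation are illustrative assumptions.

```python
# Minimal sketch of two pseudo-CT synthesis schemes from the abstract:
# (1) bulk assignment of the HU of water, (2) voxel-wise arithmetic mean
# over deformed atlas CT images. Images are flat voxel lists.

WATER_HU = 0.0  # Hounsfield unit of water

def bulk_water_pseudo_ct(n_voxels):
    """Baseline scheme: assign the HU of water to every voxel."""
    return [WATER_HU] * n_voxels

def mean_pseudo_ct(deformed_atlas_cts):
    """Arithmetic mean scheme: average the deformed atlas CTs per voxel."""
    return [sum(v) / len(v) for v in zip(*deformed_atlas_cts)]

atlas_cts = [
    [40.0, 1000.0, -990.0],   # soft tissue, bone, air seen through atlas 1
    [30.0,  900.0, -1000.0],  # the same voxels seen through atlas 2
]
print(mean_pseudo_ct(atlas_cts))   # → [35.0, 950.0, -995.0]
print(bulk_water_pseudo_ct(3))     # → [0.0, 0.0, 0.0]
```

The paper's finding that the mean of multiple atlases beats both a single atlas and the water baseline corresponds to the averaging step suppressing per-atlas registration errors.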
Three-dimensional Talairach-Tournoux brain atlas
NASA Astrophysics Data System (ADS)
Fang, Anthony; Nowinski, Wieslaw L.; Nguyen, Bonnie T.; Bryan, R. Nick
1995-04-01
The Talairach-Tournoux Stereotaxic Atlas of the human brain is a frequently consulted resource in stereotaxic neurosurgery and computer-based neuroradiology. Its primary application lies in the 2-D analysis and interpretation of neurological images. However, for the analysis and visualization of shapes and forms, accurate mensuration of volumes, or 3-D model matching, a 3-D representation of the atlas is essential. This paper proposes and describes a 3-D geometric extension of the atlas, along with its difficulties. We introduce a `zero-potential' surface smoothing technique, along with a space-dependent convolution kernel and space-dependent normalization. The mesh-based atlas structures are hierarchically organized and anatomically conform to the original atlas. Structures and their constituents can be independently selected and manipulated in real time within an integrated system. The extended atlas may be navigated by itself, or interactively registered with patient data via the proportional grid system (piecewise linear) transformation. Visualization of the geometric atlas along with patient data gives a remarkable visual `feel' for the biological structures, not usually perceivable to the untrained eye in conventional 2-D atlas-to-image analysis.
Itazawa, Tomoko; Tamaki, Yukihisa; Komiyama, Takafumi; Nishimura, Yasumasa; Nakayama, Yuko; Ito, Hiroyuki; Ohde, Yasuhisa; Kusumoto, Masahiko; Sakai, Shuji; Suzuki, Kenji; Watanabe, Hirokazu; Asamura, Hisao
2017-01-01
The purpose of this study was to develop a consensus-based computed tomographic (CT) atlas that defines lymph node stations in radiotherapy for lung cancer based on the lymph node map of the International Association for the Study of Lung Cancer (IASLC). A project group in the Japanese Radiation Oncology Study Group (JROSG) initially prepared a draft of the atlas in which lymph node Stations 1-11 were illustrated on axial CT images. Subsequently, a joint committee of the Japan Lung Cancer Society (JLCS) and the Japanese Society for Radiation Oncology (JASTRO) was formulated to revise this draft. The committee consisted of four radiation oncologists, four thoracic surgeons and three thoracic radiologists. The draft prepared by the JROSG project group was intensively reviewed and discussed at four meetings of the committee over several months. Finally, we proposed definitions for the regional lymph node stations and the consensus-based CT atlas. This atlas was approved by the Board of Directors of JLCS and JASTRO. This resulted in the first official CT atlas for defining regional lymph node stations in radiotherapy for lung cancer authorized by the JLCS and JASTRO. In conclusion, the JLCS-JASTRO consensus-based CT atlas, which conforms to the IASLC lymph node map, was established. © The Author 2016. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
2015 TRI National Analysis: Toxics Release Inventory Releases at Various Summary Levels
The TRI National Analysis is EPA's annual interpretation of TRI data at various summary levels. It highlights how toxic chemical wastes were managed, where toxic chemicals were released and how the 2015 TRI data compare to data from previous years. This dataset reports US state, county, large aquatic ecosystem, metro/micropolitan statistical area, and facility level statistics from 2015 TRI releases, including information on: number of 2015 TRI facilities in the geographic area and their releases (total, water, air, land); population information, including populations living within 1 mile of TRI facilities (total, minority, in poverty); and Risk Screening Environmental Indicators (RSEI) model related pounds, toxicity-weighted pounds, and RSEI score. The source of administrative boundary data is the 2013 cartographic boundary shapefiles. Location of facilities is provided by EPA's Facility Registry Service (FRS). Large Aquatic Ecosystems boundaries were dissolved from the hydrologic unit boundaries and codes for the United States, Puerto Rico, and the U.S. Virgin Islands. It was revised for inclusion in the National Atlas of the United States of America (November 2002), and updated to match the streams file created by the USGS National Mapping Division (NMD) for the National Atlas of the United States of America.
A Study of ATLAS Grid Performance for Distributed Analysis
NASA Astrophysics Data System (ADS)
Panitkin, Sergey; Fine, Valery; Wenaus, Torre
2012-12-01
In the past two years the ATLAS Collaboration at the LHC has collected a large volume of data and published a number of groundbreaking papers. The Grid-based ATLAS distributed computing infrastructure played a crucial role in enabling timely analysis of the data. We present a study of the performance and usage of the ATLAS Grid as a platform for physics analysis in 2011. This includes studies of general properties as well as timing properties of user jobs (wait time, run time, etc.). These studies are based on mining of data archived by the PanDA workload management system.
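The job timing properties mentioned above (wait time, run time) reduce to simple timestamp arithmetic over archived job records; a sketch, with field names that are illustrative stand-ins rather than the actual PanDA archive schema:

```python
# Sketch of deriving per-job wait and run times from archived timestamps.
# The record keys ("submitted", "started", "finished") are hypothetical.

from datetime import datetime

def job_timing(record, fmt="%Y-%m-%d %H:%M:%S"):
    """Return (wait_seconds, run_seconds) for one archived job record."""
    submitted = datetime.strptime(record["submitted"], fmt)
    started = datetime.strptime(record["started"], fmt)
    finished = datetime.strptime(record["finished"], fmt)
    wait = (started - submitted).total_seconds()   # time spent queued
    run = (finished - started).total_seconds()     # time spent executing
    return wait, run

job = {
    "submitted": "2011-06-01 10:00:00",
    "started":   "2011-06-01 10:20:00",
    "finished":  "2011-06-01 11:20:00",
}
print(job_timing(job))  # → (1200.0, 3600.0)
```

Aggregating such pairs over millions of archived jobs gives the wait-time and run-time distributions the study examines.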
ATLAS, an integrated structural analysis and design system. Volume 2: System design document
NASA Technical Reports Server (NTRS)
Erickson, W. J. (Editor)
1979-01-01
ATLAS is a structural analysis and design system, operational on the Control Data Corporation 6600/CYBER computers. The overall system design, the design of the individual program modules, and the routines in the ATLAS system library are described. The overall design is discussed in terms of system architecture, executive function, data base structure, user program interfaces and operational procedures. The program module sections include detailed code description, common block usage and random access file usage. The description of the ATLAS program library includes all information needed to use these general purpose routines.
The Common Cryogenic Test Facility for the ATLAS Barrel and End-Cap Toroid Magnets
NASA Astrophysics Data System (ADS)
Delruelle, N.; Haug, F.; Junker, S.; Passardi, G.; Pengo, R.; Pirotte, O.
2004-06-01
The large ATLAS toroidal superconducting magnet, made of the Barrel and two End-Caps, needs extensive surface testing of the individual components prior to their final assembly in the underground cavern of the LHC. A cryogenic test facility specifically designed for sequentially cooling the eight coils making up the Barrel Toroid (BT) has been fully commissioned and is now ready for final acceptance of these magnets. This facility, originally designed for testing the 46-ton BT coils individually, will be upgraded to allow the acceptance tests of the two End-Caps, each with a 160-ton cold mass. The integrated system mainly comprises a 1.2 kW@4.5 K refrigerator, a 10 kW liquid-nitrogen precooler, two cryostats housing liquid-helium centrifugal pumps of 80 g/s and 600 g/s nominal flow respectively, and specific instrumentation to measure the thermal performance of the magnets. This paper describes the overall facility with particular emphasis on the cryogenic features adopted to match the specific requirements of the magnets in the various operating scenarios.
NASA Astrophysics Data System (ADS)
Lee, Junghoon; Carass, Aaron; Jog, Amod; Zhao, Can; Prince, Jerry L.
2017-02-01
Accurate CT synthesis, sometimes called electron density estimation, from MRI is crucial for successful MRI-based radiotherapy planning and dose computation. Existing CT synthesis methods are able to synthesize normal tissues but are unable to accurately synthesize abnormal tissues (i.e., tumor), thus providing a suboptimal solution. We propose a multi-atlas-based hybrid synthesis approach that combines multi-atlas registration and patch-based synthesis to accurately synthesize both normal and abnormal tissues. Multi-parametric atlas MR images are registered to the target MR images by multi-channel deformable registration, from which the atlas CT images are deformed and fused by locally-weighted averaging using a structural similarity measure (SSIM). Synthetic MR images are also computed from the registered atlas MRIs by using the same weights used for the CT synthesis; these are compared to the target patient MRIs, allowing for the assessment of the CT synthesis fidelity. Poor synthesis regions are automatically detected based on the fidelity measure and refined by a patch-based synthesis. The proposed approach was tested on brain cancer patient data, and showed a noticeable improvement for the tumor region.
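The fusion step described above, locally-weighted averaging of deformed atlas CTs, can be sketched generically. The paper weights by SSIM; here a much simpler inverse squared MR intensity difference stands in for the local weight, purely to show the shape of the computation, and all inputs are illustrative.

```python
# Hedged sketch of locally-weighted atlas CT fusion. Assumption: inverse
# squared MR intensity difference replaces the paper's SSIM-based weight.
# Images are flat voxel lists; one weight is computed per voxel per atlas.

def fuse_ct(target_mr, atlas_mrs, atlas_cts, eps=1e-6):
    """Per-voxel weighted average of atlas CTs; weight = local MR agreement."""
    fused = []
    for i, t in enumerate(target_mr):
        weights = [1.0 / ((mr[i] - t) ** 2 + eps) for mr in atlas_mrs]
        total = sum(weights)
        fused.append(sum(w * ct[i] for w, ct in zip(weights, atlas_cts)) / total)
    return fused

target_mr = [100.0]
atlas_mrs = [[100.0], [200.0]]   # atlas 1 matches the target, atlas 2 does not
atlas_cts = [[50.0], [500.0]]
print(fuse_ct(target_mr, atlas_mrs, atlas_cts))  # ≈ [50.0]: matching atlas dominates
```

Reusing the same weights to fuse the atlas MRs, as the paper does, yields a synthetic MR whose disagreement with the real target MR flags poorly synthesized regions.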
Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT
NASA Astrophysics Data System (ADS)
Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi
2017-05-01
Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.
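The evaluation above scores each organ with the Dice similarity coefficient; a minimal sketch on voxel-label lists (the organ names and labelings are made up for illustration):

```python
# Sketch of the per-organ Dice similarity coefficient used to compare an
# automatic segmentation against a manual one. Inputs are flat label lists.

def dice(labels_a, labels_b, organ):
    """Dice coefficient for one organ: 2|A∩B| / (|A| + |B|)."""
    a = {i for i, v in enumerate(labels_a) if v == organ}
    b = {i for i, v in enumerate(labels_b) if v == organ}
    return 2.0 * len(a & b) / (len(a) + len(b))

auto   = ["bg", "heart", "heart", "liver", "liver", "bg"]
manual = ["bg", "heart", "liver", "liver", "liver", "bg"]
print(dice(auto, manual, "heart"))  # → 0.6666666666666666
print(dice(auto, manual, "liver"))  # → 0.8
```

A Dice of 1.0 means perfect voxel-wise agreement for that organ; the leave-one-out evaluation in the paper computes this per organ per canine dataset.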
Lung lobe segmentation based on statistical atlas and graph cuts
NASA Astrophysics Data System (ADS)
Nimura, Yukitaka; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku
2012-03-01
This paper presents a novel method that can extract lung lobes by utilizing a probability atlas and multilabel graph cuts. Information about pulmonary structures plays a very important role in deciding treatment strategy and in surgical planning. The human lungs are divided into five anatomical regions, the lung lobes. Precise segmentation and recognition of lung lobes are indispensable tasks in computer-aided diagnosis and computer-aided surgery systems. Many methods for lung lobe segmentation have been proposed; however, they target only normal cases and therefore cannot extract the lung lobes in abnormal cases, such as COPD cases. To extract lung lobes in abnormal cases, this paper proposes a lung lobe segmentation method based on a probability atlas of lobe location and multilabel graph cuts. The process consists of three components: normalization based on the patient's physique, probability atlas generation, and segmentation based on graph cuts. We applied this method to six cases of chest CT images, including COPD cases. The Jaccard index was 79.1%.
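The probability-atlas component above can be sketched as a per-voxel lookup: after normalizing the patient to atlas space, each voxel takes its most probable lobe label as the prior that the graph-cut stage would then refine. This is a simplified stand-in, assuming the atlas is already in the patient's normalized space, with hypothetical lobe names.

```python
# Sketch of the probability-atlas prior: per voxel, pick the lobe with the
# highest atlas probability. The graph-cut refinement is omitted; lobe
# names (RUL/RML/RLL) and probabilities are illustrative.

def atlas_prior_labels(prob_atlas):
    """prob_atlas: per-voxel dict of lobe -> probability; return argmax labels."""
    return [max(p, key=p.get) for p in prob_atlas]

prob_atlas = [
    {"RUL": 0.7, "RML": 0.2, "RLL": 0.1},  # voxel near the lung apex
    {"RUL": 0.1, "RML": 0.6, "RLL": 0.3},  # voxel in the middle region
]
print(atlas_prior_labels(prob_atlas))  # → ['RUL', 'RML']
```

In the full method these atlas probabilities would feed the unary term of the multilabel graph cut rather than being used directly as the final labels.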
TDRS-M Atlas V 1st Stage Erection Launch Vehicle on Stand
2017-07-12
A United Launch Alliance Atlas V first stage is lifted at the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The rocket is scheduled to launch the Tracking and Data Relay Satellite, TDRS-M. It will be the latest spacecraft destined for the agency's constellation of communications satellites that allows nearly continuous contact with orbiting spacecraft ranging from the International Space Station and Hubble Space Telescope to the array of scientific observatories. Liftoff atop the ULA Atlas V rocket is scheduled to take place from Cape Canaveral's Space Launch Complex 41 on Aug. 3, 2017 at 9:02 a.m. EDT.
Evaluation of atlas-based auto-segmentation software in prostate cancer patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenham, Stuart, E-mail: stuart.greenham@ncahs.health.nsw.gov.au; Dean, Jenna; Fu, Cheuk Kuen Kenneth
2014-09-15
The performance and limitations of an atlas-based auto-segmentation software package (ABAS; Elekta Inc.) were evaluated using male pelvic anatomy as the area of interest. Contours from 10 prostate patients were selected to create atlases in ABAS. The contoured regions of interest were created manually to align with published guidelines and included the prostate, bladder, rectum, femoral heads and external patient contour. Twenty-four clinically treated prostate patients were auto-contoured using a randomised selection of two, four, six, eight or ten atlases. The concordance between the manually drawn and computer-generated contours was evaluated statistically using Pearson's product–moment correlation coefficient (r) and clinically in a validated qualitative evaluation. In the latter evaluation, six radiation therapists classified the degree of agreement for each structure using seven clinically appropriate categories. The ABAS software generated clinically acceptable contours for the bladder, rectum, femoral heads and external patient contour. For these structures, ABAS-generated volumes were highly correlated with the manually drawn ‘as treated’ volumes; for four atlases, for example, bladder r = 0.988 (P < 0.001), rectum r = 0.739 (P < 0.001) and left femoral head r = 0.560 (P < 0.001). The poorest results were seen for the prostate (r = 0.401, P < 0.05) (four atlases); however, this was attributed to the comparison prostate volume being contoured on magnetic resonance imaging (MRI) rather than computed tomography (CT) data. For all structures, increasing the number of atlases did not consistently improve accuracy. ABAS-generated contours are clinically useful for a range of structures in the male pelvis. Clinically appropriate volumes were created, but editing of some contours was inevitably required. The ideal number of atlases needed to improve automatically generated contours is yet to be determined.
Construction of 4D high-definition cortical surface atlases of infants: Methods and applications.
Li, Gang; Wang, Li; Shi, Feng; Gilmore, John H; Lin, Weili; Shen, Dinggang
2015-10-01
In neuroimaging, cortical surface atlases play a fundamental role for spatial normalization, analysis, visualization, and comparison of results across individuals and different studies. However, existing cortical surface atlases created for adults are not suitable for infant brains during the first two postnatal years, which is the most dynamic period of postnatal structural and functional development of the highly-folded cerebral cortex. Therefore, spatiotemporal cortical surface atlases for infant brains are highly desired yet still lacking for accurate mapping of early dynamic brain development. To bridge this significant gap, leveraging our infant-dedicated computational pipeline for cortical surface-based analysis and the unique longitudinal infant MRI dataset acquired in our research center, in this paper, we construct the first spatiotemporal (4D) high-definition cortical surface atlases for the dynamic developing infant cortical structures at seven time points, including 1, 3, 6, 9, 12, 18, and 24 months of age, based on 202 serial MRI scans from 35 healthy infants. For this purpose, we develop a novel method to ensure the longitudinal consistency and unbiasedness to any specific subject and age in our 4D infant cortical surface atlases. Specifically, we first compute the within-subject mean cortical folding by unbiased groupwise registration of longitudinal cortical surfaces of each infant. Then we establish longitudinally-consistent and unbiased inter-subject cortical correspondences by groupwise registration of the geometric features of within-subject mean cortical folding across all infants. Our 4D surface atlases capture both longitudinally-consistent dynamic mean shape changes and the individual variability of cortical folding during early brain development. 
Experimental results on two independent infant MRI datasets show that using our 4D infant cortical surface atlases as templates leads to significantly improved accuracy for spatial normalization of cortical surfaces across infant individuals, in comparison to the infant surface atlases constructed without longitudinal consistency and also the FreeSurfer adult surface atlas. Moreover, based on our 4D infant surface atlases, for the first time, we reveal the spatially-detailed, region-specific correlation patterns of the dynamic cortical developmental trajectories between different cortical regions during early brain development. Copyright © 2015 Elsevier B.V. All rights reserved.
MRIVIEW: An interactive computational tool for investigation of brain structure and function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ranken, D.; George, J.
MRIVIEW is a software system which uses image processing and visualization to provide neuroscience researchers with an integrated environment for combining functional and anatomical information. Key features of the software include semi-automated segmentation of volumetric head data and an interactive coordinate reconciliation method which utilizes surface visualization. The current system is a precursor to a computational brain atlas. We describe features this atlas will incorporate, including methods under development for visualizing brain functional data obtained from several different research modalities.
PD2P: PanDA Dynamic Data Placement for ATLAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maeno, T.; De, K.; Panitkin, S.
2012-12-13
The PanDA (Production and Distributed Analysis) system plays a key role in the ATLAS distributed computing infrastructure. PanDA is the ATLAS workload management system for processing all Monte-Carlo (MC) simulation and data reprocessing jobs in addition to user and group analysis jobs. The PanDA Dynamic Data Placement (PD2P) system has been developed to cope with difficulties of data placement for ATLAS. We will describe the design of the new system, its performance during the past year of data taking, dramatic improvements it has brought about in the efficient use of storage and processing resources, and plans for the future.
Advanced technologies for scalable ATLAS conditions database access on the grid
NASA Astrophysics Data System (ADS)
Basset, R.; Canali, L.; Dimitrov, G.; Girone, M.; Hawkings, R.; Nevski, P.; Valassi, A.; Vaniachine, A.; Viegas, F.; Walker, R.; Wong, A.
2010-04-01
During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic workflows, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of the required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads that can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This was achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect the scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions DB data access is limited by the disk I/O throughput. An unacceptable side effect of disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions DB data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak-load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library first sends a pilot query to the database server.
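The pilot-query idea sketched in the abstract can be illustrated as a client-side throttle: probe the server with a cheap query first, and delay the real workload while the measured latency indicates overload. Everything here, the thresholds, the SQL strings, and the query runner, is a hypothetical stand-in, not the actual ATLAS utility library.

```python
# Hedged sketch of client-side peak-load avoidance via a pilot query.
# Assumptions: run_query is any callable executing SQL; the latency
# threshold, backoff schedule, and query text are illustrative.

import time

def query_with_pilot(run_query, pilot_sql="SELECT 1",
                     max_latency=0.5, backoff=2.0, retries=5):
    """Send a cheap pilot first; defer the real query while the server is slow."""
    for attempt in range(retries):
        t0 = time.monotonic()
        run_query(pilot_sql)                    # cheap probe of server load
        if time.monotonic() - t0 <= max_latency:
            return run_query("SELECT payload FROM conditions")  # real query
        time.sleep(backoff * (attempt + 1))     # server busy: back off
    raise RuntimeError("database overloaded; giving up")

# Toy in-memory runner standing in for a real database connection.
calls = []
def fake_runner(sql):
    calls.append(sql)
    return "ok"

print(query_with_pilot(fake_runner))  # → ok
print(calls)  # → ['SELECT 1', 'SELECT payload FROM conditions']
```

The design point is that many clients backing off on slow pilots flattens the peak load seen by the server, the same reasoning behind pilot job submission on the Grid.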
Development, deployment and operations of ATLAS databases
NASA Astrophysics Data System (ADS)
Vaniachine, A. V.; Schmitt, J. G. v. d.
2008-07-01
In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services.
MARS: a mouse atlas registration system based on a planar x-ray projector and an optical camera
NASA Astrophysics Data System (ADS)
Wang, Hongkai; Stout, David B.; Taschereau, Richard; Gu, Zheng; Vu, Nam T.; Prout, David L.; Chatziioannou, Arion F.
2012-10-01
This paper introduces a mouse atlas registration system (MARS), composed of a stationary top-view x-ray projector and a side-view optical camera, coupled to a mouse atlas registration algorithm. This system uses the x-ray and optical images to guide a fully automatic co-registration of a mouse atlas with each subject, in order to provide anatomical reference for small animal molecular imaging systems such as positron emission tomography (PET). To facilitate the registration, a statistical atlas that accounts for inter-subject anatomical variations was constructed based on 83 organ-labeled mouse micro-computed tomography (CT) images. The statistical shape model and conditional Gaussian model techniques were used to register the atlas with the x-ray image and optical photo. The accuracy of the atlas registration was evaluated by comparing the registered atlas with the organ-labeled micro-CT images of the test subjects. The results showed excellent registration accuracy of the whole-body region, and good accuracy for the brain, liver, heart, lungs and kidneys. In its implementation, the MARS was integrated with a preclinical PET scanner to deliver combined PET/MARS imaging, and to facilitate atlas-assisted analysis of the preclinical PET images.
PC as Physics Computer for LHC?
NASA Astrophysics Data System (ADS)
Jarp, Sverre; Simmins, Antony; Tang, Hong; Yaari, R.
In the last five years, we have seen RISC workstations take over the computing scene that was once controlled by mainframes and supercomputers. In this paper we argue that the same phenomenon might happen again. We describe a project, active since March of this year in the Physics Data Processing group of CERN's CN division, in which ordinary desktop PCs running Windows (NT and 3.11) have been used to create an environment for running large LHC batch jobs (initially the DICE simulation job of ATLAS). The problems encountered in porting both the CERN library and the specific ATLAS codes are described, together with some encouraging benchmark results when compared to existing RISC workstations in use by the ATLAS collaboration. The issues of establishing the batch environment (batch monitor, staging software, etc.) are also covered. Finally, a quick extrapolation of the commodity computing power available in the future is touched upon, to indicate what kind of cost envelope could be sufficient for the simulation farms required by the LHC experiments.
Improved ATLAS HammerCloud Monitoring for Local Site Administration
NASA Astrophysics Data System (ADS)
Böhler, M.; Elmsheuser, J.; Hönig, F.; Legger, F.; Mancinelli, V.; Sciacca, G.
2015-12-01
Every day hundreds of tests are run on the Worldwide LHC Computing Grid for the ATLAS and CMS experiments in order to evaluate the performance and reliability of the different computing sites. All this activity is steered, controlled, and monitored by the HammerCloud testing infrastructure. Sites with failing functionality tests are automatically excluded from the ATLAS computing grid; it is therefore essential to provide a detailed and well-organized web interface for the local site administrators so that they can easily spot and promptly solve site issues. Additional functionality has been developed to extract and visualize the most relevant information. The site administrators can now be pointed easily to major site issues which lead to site blacklisting, as well as to possible minor issues that are usually not conspicuous enough to warrant the blacklisting of a specific site but can still cause undesired effects such as a non-negligible job failure rate. This paper summarizes the different developments and optimizations of the HammerCloud web interface and gives an overview of typical use cases.
ATLAS, an integrated structural analysis and design system. Volume 5: System demonstration problems
NASA Technical Reports Server (NTRS)
Samuel, R. A. (Editor)
1979-01-01
One of a series of documents describing the ATLAS System for structural analysis and design is presented. A set of problems is described that demonstrate the various analysis and design capabilities of the ATLAS System proper as well as capabilities available by means of interfaces with other computer programs. Input data and results for each demonstration problem are discussed. Results are compared to theoretical solutions or experimental data where possible. Listings of all input data are included.
Scaling up ATLAS Event Service to production levels on opportunistic computing platforms
NASA Astrophysics Data System (ADS)
Benjamin, D.; Caballero, J.; Ernst, M.; Guan, W.; Hover, J.; Lesny, D.; Maeno, T.; Nilsson, P.; Tsulaia, V.; van Gemmeren, P.; Vaniachine, A.; Wang, F.; Wenaus, T.; ATLAS Collaboration
2016-10-01
Continued growth in public cloud and HPC resources is on track to exceed the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, the Edison Cray XC30 supercomputer, backfill at Tier-2 and Tier-3 sites, opportunistic resources at the Open Science Grid (OSG), and the ATLAS High Level Trigger farm between data-taking periods. Because of specific aspects of opportunistic resources such as preemptive job scheduling and data I/O, their efficient usage requires workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.
GOES-R Atlas V Solid Rocket Motor (SRM) Lift and Mate
2016-10-27
A United Launch Alliance (ULA) technician inspects the solid rocket motor for the ULA Atlas V rocket on its transporter near the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The solid rocket motor will be lifted and mated to the rocket in preparation for the launch of NOAA's Geostationary Operational Environmental Satellite (GOES-R) this month. GOES-R is the first satellite in a series of next-generation NOAA GOES Satellites.
InSight Atlas V Fairing Arrival, Offload, and Unbagging
2018-01-31
The United Launch Alliance (ULA) payload fairing for NASA's upcoming Interior Exploration using Seismic Investigations, Geodesy and Heat Transport, or InSight, mission to land on Mars has just arrived at the Astrotech facility at Vandenberg Air Force Base in California. InSight is the first mission to explore the Red Planet's deep interior. It will investigate processes that shaped the rocky planets of the inner solar system including Earth. Liftoff atop a ULA Atlas V rocket is scheduled for May 5, 2018.
InSight Atlas V Fairing Arrival, Offload, and Unbagging
2018-01-31
In the Astrotech facility at Vandenberg Air Force Base in California, technicians remove protective wrapping from the United Launch Alliance (ULA) payload fairing for NASA's upcoming Interior Exploration using Seismic Investigations, Geodesy and Heat Transport, or InSight, spacecraft designed to land on Mars. InSight is the first mission to explore the Red Planet's deep interior. It will investigate processes that shaped the rocky planets of the inner solar system including Earth. Liftoff atop a ULA Atlas V rocket is scheduled for May 5, 2018.
Glance Information System for ATLAS Management
NASA Astrophysics Data System (ADS)
Grael, F. F.; Maidantchik, C.; Évora, L. H. R. A.; Karam, K.; Moraes, L. O. F.; Cirilli, M.; Nessi, M.; Pommès, K.; ATLAS Collaboration
2011-12-01
The ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers, and computer scientists, plus 700 students participate. The management of this teamwork involves several aspects such as institute contributions, employment records, members' appointments, authors' lists, preparation and publication of papers, and speaker nominations. Previously, most of the information was accessible only to a limited group, and developers had to face problems such as different terminology, diverse data modeling, heterogeneous databases and differing user needs. Moreover, the systems were not designed to handle new requirements. Maintenance has to be an easy task because of the experiment's long lifetime and professional turnover. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the database independently of its technology and modeling. Relying on Glance, a group of systems was built to support ATLAS management and operation aspects: ATLAS Membership, ATLAS Appointments, ATLAS Speakers, ATLAS Analysis Follow-Up, ATLAS Conference Notes, ATLAS Thesis, ATLAS Traceability and DSS Alarms Viewer. This paper presents an overview of the Glance information framework and describes the privilege mechanism developed to grant different levels of access for each member and system.
Van de Velde, Joris; Wouters, Johan; Vercauteren, Tom; De Gersem, Werner; Achten, Eric; De Neve, Wilfried; Van Hoof, Tom
2015-12-23
The present study aimed to measure the effect of a morphometric atlas selection strategy on the accuracy of multi-atlas-based BP autosegmentation using the commercially available software package ADMIRE® and to determine the optimal number of selected atlases to use. Autosegmentation accuracy was measured by comparing all generated automatic BP segmentations with anatomically validated gold standard segmentations that were developed using cadavers. Twelve cadaver computed tomography (CT) atlases were included in the study. One atlas was selected as a patient in ADMIRE®, and multi-atlas-based BP autosegmentation was first performed with a group of morphometrically preselected atlases. In this group, the atlases were selected on the basis of similarity in shoulder protraction position with the patient. The number of selected atlases used started at two and increased up to eight. Subsequently, a group of randomly chosen, non-selected atlases was used. In this second group, every possible combination of 2 to 8 random atlases was used for multi-atlas-based BP autosegmentation. For both groups, the average Dice similarity coefficient (DSC), Jaccard index (JI) and inclusion index (INI) were calculated, measuring the similarity of the generated automatic BP segmentations and the gold standard segmentation. Similarity indices of both groups were compared using an independent-sample t-test, and the optimal number of selected atlases was investigated using an equivalence trial. For each number of atlases, the average similarity indices of the morphometrically selected atlas group were significantly higher than those of the random group (p < 0.05). In this study, the highest similarity indices were achieved using multi-atlas autosegmentation with 6 selected atlases (average DSC = 0.598; average JI = 0.434; average INI = 0.733). Morphometric atlas selection on the basis of the protraction position of the patient significantly improves multi-atlas-based BP autosegmentation accuracy. 
In this study, the optimal number of selected atlases used was six, but for definitive conclusions about the optimal number of atlases and to improve the autosegmentation accuracy for clinical use, more atlases need to be included.
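The three overlap measures used in the study above (DSC, JI, INI) can be sketched for binary masks as follows. The inclusion-index definition used here (overlap divided by the automatic-segmentation volume) is an assumption, since the abstract does not define it:

```python
import numpy as np

def overlap_metrics(auto_mask, gold_mask):
    """Overlap metrics between a binary automatic segmentation and a
    gold-standard segmentation (boolean arrays of identical shape)."""
    a = np.asarray(auto_mask, dtype=bool)
    g = np.asarray(gold_mask, dtype=bool)
    inter = np.logical_and(a, g).sum()
    union = np.logical_or(a, g).sum()
    dsc = 2.0 * inter / (a.sum() + g.sum())  # Dice similarity coefficient
    ji = inter / union                        # Jaccard index
    ini = inter / a.sum()                     # inclusion index (assumed definition)
    return dsc, ji, ini
```

For 3D CT segmentations the masks would be boolean volumes; the arithmetic is identical.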
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munson, C.P.; Benage, J.F. Jr.; Taylor, A.J.
Atlas is a high-current (~30 MA peak, with a current risetime of ~4.5 µs), high-energy (E_stored = 24 MJ, E_load = 3-6 MJ) pulsed power facility being constructed at Los Alamos National Laboratory, with a scheduled completion date in the year 2000. When operational, this facility will provide a platform for experiments in high-pressure shocks (> 20 Mbar), adiabatic compression (ρ/ρ₀ > 5, P > 10 Mbar), high magnetic fields (~2,000 T), high strain and strain rates (ε > 200%, dε/dt ~ 10⁴ to 10⁶ s⁻¹), hydrodynamic instabilities of materials in turbulent regimes, magnetized target fusion, equation of state, and strongly coupled plasmas. For the strongly coupled plasma experiments, an auxiliary capacitor bank will be used to generate a moderate-density (< 0.1 solid), relatively cold (~1 eV) plasma by ohmic heating of a conducting material of interest such as titanium. This target plasma will be compressed against a central column containing diagnostic instrumentation by a cylindrical conducting liner that is driven radially inward by current from the main Atlas capacitor bank. The plasma is predicted to reach densities of ~1.1 times solid, ion and electron temperatures of ~10 eV, and pressures of ~4-5 Mbar. This is a density/temperature regime which is expected to experience strong coupling, but only partial degeneracy. X-ray radiography is planned for measurements of the material density at discrete times during the experiments; diamond Raman measurements are anticipated for determination of the pressure. In addition, a neutron resonance spectroscopic technique is being evaluated for possible determination of the temperature (through low-percentage doping of the titanium with a suitable resonant material). 
Initial target plasma formation experiments are being planned on an existing pulsed power facility at LANL and will be completed before the start of operation of Atlas.
2009-04-27
CAPE CANAVERAL, Fla. –– At the Vertical Integration Facility on Cape Canaveral Air Force Station's Launch Complex 41, the Atlas V first stage is being raised to a vertical position. The Atlas will be lifted into the VIF. The Atlas V/Centaur is the launch vehicle for the Lunar Reconnaissance Orbiter, or LRO. The orbiter will carry seven instruments to provide scientists with detailed maps of the lunar surface and enhance our understanding of the moon's topography, lighting conditions, mineralogical composition and natural resources. Information gleaned from LRO will be used to select safe landing sites, determine locations for future lunar outposts and help mitigate radiation dangers to astronauts. Launch of LRO is targeted no earlier than June 2. Photo credit: NASA/Kim Shiflett
2009-04-27
CAPE CANAVERAL, Fla. –– At the Vertical Integration Facility on Cape Canaveral Air Force Station's Launch Complex 41, cranes are attached to the Atlas V first stage to raise it to vertical. The Atlas will be lifted into the VIF. The Atlas V/Centaur is the launch vehicle for the Lunar Reconnaissance Orbiter, or LRO. The orbiter will carry seven instruments to provide scientists with detailed maps of the lunar surface and enhance our understanding of the moon's topography, lighting conditions, mineralogical composition and natural resources. Information gleaned from LRO will be used to select safe landing sites, determine locations for future lunar outposts and help mitigate radiation dangers to astronauts. Launch of LRO is targeted no earlier than June 2. Photo credit: NASA/Kim Shiflett
2012-07-13
CAPE CANAVERAL, Fla. - At Launch Complex 41 at Cape Canaveral Air Force Station in Florida, the first stage of the United Launch Alliance Atlas V rocket has been moved into the Vertical Integration Facility. The Atlas V is being prepared for the Radiation Belt Storm Probes, or RBSP, mission. NASA’s RBSP mission will help us understand the sun’s influence on Earth and near-Earth space by studying the Earth’s radiation belts on various scales of space and time. RBSP will begin its mission of exploration of Earth’s Van Allen radiation belts and the extremes of space weather after its launch aboard an Atlas V rocket. Launch is targeted for Aug. 23. For more information, visit http://www.nasa.gov/rbsp. Photo credit: NASA/Cory Huston
ATPP: A Pipeline for Automatic Tractography-Based Brain Parcellation
Li, Hai; Fan, Lingzhong; Zhuo, Junjie; Wang, Jiaojian; Zhang, Yu; Yang, Zhengyi; Jiang, Tianzi
2017-01-01
There is a longstanding effort to parcellate the brain into areas based on micro-structural, macro-structural, or connectional features, forming various brain atlases. Among them, connectivity-based parcellation has gained much emphasis, especially with the considerable progress of multimodal magnetic resonance imaging over the past two decades. The recently published Brainnetome Atlas is such an atlas, following the framework of connectivity-based parcellation. However, in the construction of the atlas, the deluge of high-resolution multimodal MRI data and the time-consuming computation pose challenges, and publicly available tools dedicated to parcellation are still in short supply. In this paper, we present an integrated open-source pipeline (https://www.nitrc.org/projects/atpp), named the Automatic Tractography-based Parcellation Pipeline (ATPP), that realizes the parcellation framework with automatic processing and massively parallel computing. ATPP has a powerful and flexible command-line version that takes multiple regions of interest as input, as well as a user-friendly graphical user interface version for parcellating a single region of interest. We demonstrate the two versions by parcellating two brain regions, the left precentral gyrus and the middle frontal gyrus, on two independent datasets. In addition, ATPP has been successfully utilized and fully validated in a variety of brain regions and in the human Brainnetome Atlas, showing its capacity to greatly facilitate brain parcellation. PMID:28611620
Automated method for structural segmentation of nasal airways based on cone beam computed tomography
NASA Astrophysics Data System (ADS)
Tymkovych, Maksym Yu.; Avrunin, Oleg G.; Paliy, Victor G.; Filzow, Maksim; Gryshkov, Oleksandr; Glasmacher, Birgit; Omiotek, Zbigniew; DzierŻak, RóŻa; Smailova, Saule; Kozbekova, Ainur
2017-08-01
This work addresses the problem of segmenting human nasal airways using cone beam computed tomography. We propose a specialized approach to structured segmentation of the nasal airways that uses spatial information and symmetrization of the structures. The proposed stages can be used to construct a virtual three-dimensional model of the nasal airways and to produce full-scale personalized atlases. We built a virtual model of the nasal airways that can be used for constructing specialized medical atlases and for aerodynamics research.
Charting molecular free-energy landscapes with an atlas of collective variables
NASA Astrophysics Data System (ADS)
Hashemian, Behrooz; Millán, Daniel; Arroyo, Marino
2016-11-01
Collective variables (CVs) are a fundamental tool to understand molecular flexibility, to compute free energy landscapes, and to enhance sampling in molecular dynamics simulations. However, identifying suitable CVs is challenging, and is increasingly addressed with systematic data-driven manifold learning techniques. Here, we provide a flexible framework to model molecular systems in terms of a collection of locally valid and partially overlapping CVs: an atlas of CVs. The specific motivation for such a framework is to enhance the applicability and robustness of CVs based on manifold learning methods, which fail in the presence of periodicities in the underlying conformational manifold. More generally, using an atlas of CVs rather than a single chart may help us better describe different regions of conformational space. We develop the statistical mechanics foundation for our multi-chart description and propose an algorithmic implementation. The resulting atlas of data-based CVs are then used to enhance sampling and compute free energy surfaces in two model systems, alanine dipeptide and β-D-glucopyranose, whose conformational manifolds have toroidal and spherical topologies.
New separators at the ATLAS facility
NASA Astrophysics Data System (ADS)
Back, Birger; Agfa Collaboration; Airis Team
2015-10-01
Two new separators are being built for the ATLAS facility. The Argonne Gas-Filled Analyzer (AGFA) is a novel design consisting of a single quadrupole and a multipole magnet that has both dipole and quadrupole field components. The design allows for placing Gammasphere at the target position while providing a solid angle of ~ 22 msr for capturing recoil products emitted at zero degrees. This arrangement enables studies of prompt gamma ray emission from weakly populated trans-fermium nuclei and those near the doubly-magic N = Z = 50 shell closure measured in coincidence with the recoils registered by AGFA. The Argonne In-flight Radioactive Ion Separator (AIRIS) is a magnetic chicane that will be installed immediately downstream of the last ATLAS cryostat and serve to separate radioactive ion beams generated in flight at an upstream high intensity production target. These beams will be further purified by a downstream RF sweeper and transported into a number of target stations including HELIOS, the Enge spectrograph, the FMA and Gammasphere. This talk will present the status of these two projects. This work was supported by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357.
Applying graph theory to protein structures: an atlas of coiled coils.
Heal, Jack W; Bartlett, Gail J; Wood, Christopher W; Thomson, Andrew R; Woolfson, Derek N
2018-05-02
To understand protein structure, folding and function fully and to design proteins de novo reliably, we must learn from natural protein structures that have been characterised experimentally. The number of protein structures available is large and growing exponentially, which makes this task challenging. Indeed, computational resources are becoming increasingly important for classifying and analysing this resource. Here, we use tools from graph theory to define an atlas classification scheme for automatically categorising certain protein substructures. Focusing on the α-helical coiled coils, which are ubiquitous protein-structure and protein-protein interaction motifs, we present a suite of computational resources designed for analysing these assemblies. iSOCKET enables interactive analysis of side-chain packing within proteins to identify coiled coils automatically and with considerable user control. Applying a graph theory-based atlas classification scheme to structures identified by iSOCKET gives the Atlas of Coiled Coils, a fully automated, updated overview of extant coiled coils. The utility of this approach is illustrated with the first formal classification of an emerging subclass of coiled coils called α-helical barrels. Furthermore, in the Atlas, the known coiled-coil universe is presented alongside a partial enumeration of the 'dark matter' of coiled-coil structures; i.e., those coiled-coil architectures that are theoretically possible but have not been observed to date, and thus present defined targets for protein design. iSOCKET is available as part of the open-source GitHub repository associated with this work (https://github.com/woolfson-group/isocket). This repository also contains all the data generated when classifying the protein graphs. The Atlas of Coiled Coils is available at: http://coiledcoils.chm.bris.ac.uk/atlas/app.
NASA Astrophysics Data System (ADS)
Fernandez, James Reza; Zhang, Aifeng; Vachon, Linda; Tsao, Sinchai
2008-03-01
Bone age assessment is most commonly performed with the use of the Greulich and Pyle (G&P) book atlas, which was developed in the 1950s. The population of the United States is not as homogeneous as the Caucasian population of the Greulich and Pyle atlas of the 1950s, especially in the Los Angeles, California area. A digital hand atlas (DHA) based on 1,390 hand images of children of different racial backgrounds (Caucasian, African American, Hispanic, and Asian) aged 0-18 years was collected from Children's Hospital Los Angeles. Statistical analysis discovered that significant discrepancies exist between Hispanic children and the G&P atlas standard. To validate the usage of the DHA as a clinical standard, diagnostic radiologists performed reads on Hispanic pediatric hand and wrist computed radiography images using either the G&P pediatric radiographic atlas or the Children's Hospital Los Angeles Digital Hand Atlas (DHA) as reference. The order in which the atlases were used (G&P followed by DHA or vice versa) for each image was prepared before actual reading began. Statistical analysis of the results was then performed to determine whether a discrepancy exists between the two readings.
One registration multi-atlas-based pseudo-CT generation for attenuation correction in PET/MRI.
Arabi, Hossein; Zaidi, Habib
2016-10-01
The outcome of a detailed assessment of various strategies for atlas-based whole-body bone segmentation from magnetic resonance imaging (MRI) was exploited to select the optimal parameters and setting, with the aim of proposing a novel one-registration multi-atlas (ORMA) pseudo-CT generation approach. The proposed approach consists of only one online registration between the target and reference images, regardless of the number of atlas images (N), while for the remaining atlas images, the pre-computed transformation matrices to the reference image are used to align them to the target image. The performance characteristics of the proposed method were evaluated and compared with conventional atlas-based attenuation map generation strategies (direct registration of the entire atlas images followed by voxel-wise weighting (VWW) and arithmetic averaging atlas fusion). To this end, four different positron emission tomography (PET) attenuation maps were generated via arithmetic averaging and VWW scheme using both direct registration and ORMA approaches as well as the 3-class attenuation map obtained from the Philips Ingenuity TF PET/MRI scanner commonly used in the clinical setting. The evaluation was performed based on the accuracy of extracted whole-body bones by the different attenuation maps and by quantitative analysis of resulting PET images compared to CT-based attenuation-corrected PET images serving as reference. The comparison of validation metrics regarding the accuracy of extracted bone using the different techniques demonstrated the superiority of the VWW atlas fusion algorithm achieving a Dice similarity measure of 0.82 ± 0.04 compared to arithmetic averaging atlas fusion (0.60 ± 0.02), which uses conventional direct registration. Application of the ORMA approach modestly compromised the accuracy, yielding a Dice similarity measure of 0.76 ± 0.05 for ORMA-VWW and 0.55 ± 0.03 for ORMA-averaging. 
The results of quantitative PET analysis followed the same trend with less significant differences in terms of SUV bias, whereas massive improvements were observed compared to PET images corrected for attenuation using the 3-class attenuation map. The maximum absolute bias achieved by the VWW and VWW-ORMA methods was 6.4 ± 5.5 in the lung and 7.9 ± 4.8 in the bone, respectively. The proposed algorithm is capable of generating decent attenuation maps. The quantitative analysis revealed a good correlation between PET images corrected for attenuation using the proposed pseudo-CT generation approach and the corresponding CT images. The computational time is reduced by a factor of 1/N at the expense of a modest decrease in quantitative accuracy, thus allowing us to achieve a reasonable compromise between computing time and quantitative performance.
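The ORMA idea above, replacing N online registrations with one online registration plus N pre-computed atlas-to-reference transforms, amounts, for affine registrations, to composing transformation matrices. A minimal sketch with 4x4 homogeneous matrices follows; the function names are illustrative, and real atlas registrations are deformable and far more involved:

```python
import numpy as np

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix, standing in for a rigid/affine
    registration result in this sketch."""
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

def orma_transforms(t_ref_to_target, t_atlas_to_ref_list):
    """ORMA: compose the single online registration (reference -> target)
    with each pre-computed atlas -> reference transform, so every atlas is
    mapped to the target without registering it directly."""
    return [t_ref_to_target @ t for t in t_atlas_to_ref_list]
```

Only `t_ref_to_target` must be computed online; the per-atlas matrices are computed once, offline, which is where the 1/N reduction in computational time comes from.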
Interactive 3D visualization tools for stereotactic atlas-based functional neurosurgery
NASA Astrophysics Data System (ADS)
St. Jean, Philippe; Kasrai, Reza; Clonda, Diego; Sadikot, Abbas F.; Evans, Alan C.; Peters, Terence M.
1998-06-01
Many of the critical basal ganglia structures are not distinguishable on anatomical magnetic resonance imaging (MRI) scans, even though they differ in functionality. In order to provide the neurosurgeon with this missing information, a deformable volumetric atlas of the basal ganglia has been created from the Shaltenbrand and Wahren atlas of cryogenic slices. The volumetric atlas can be non-linearly deformed to an individual patient's MRI. To facilitate the clinical use of the atlas, a visualization platform has been developed for pre- and intra-operative use which permits manipulation of the merged atlas and MRI data sets in two- and three-dimensional views. The platform includes graphical tools which allow the visualization of projections of the leukotome and other surgical tools with respect to the atlas data, as well as pre- registered images from any other imaging modality. In addition, a graphical interface has been designed to create custom virtual lesions using computer models of neurosurgical tools for intra-operative planning. To date 17 clinical cases have been successfully performed using the described system.
Zaffino, Paolo; Ciardo, Delia; Raudaschl, Patrik; Fritscher, Karl; Ricotti, Rosalinda; Alterio, Daniela; Marvaso, Giulia; Fodor, Cristiana; Baroni, Guido; Amato, Francesco; Orecchia, Roberto; Jereczek-Fossa, Barbara Alicja; Sharp, Gregory C; Spadea, Maria Francesca
2018-05-22
Multi Atlas Based Segmentation (MABS) uses a database of atlas images, and an atlas selection process chooses an atlas subset for registration and voting. In the current state of the art, atlases are chosen according to a similarity criterion between the target subject and each atlas in the database. In this paper, we propose a new concept for atlas selection that relies on selecting the best performing group of atlases rather than the group of highest scoring individual atlases. Experiments were performed using CT images of 50 patients, with contours of the brainstem and parotid glands. The dataset was randomly split into two groups: 20 volumes were used as the atlas database and 30 served as target subjects for testing. Classic oracle selection, where atlases are chosen by the highest Dice Similarity Coefficient (DSC) with the target, was performed. This was compared to oracle group selection, where all combinations of atlas subgroups were considered and scored by computing the DSC with the target subject. Subsequently, Convolutional Neural Networks (CNNs) were designed to predict the best group of atlases. The results were also compared with a selection strategy based on Normalized Mutual Information (NMI). Oracle group selection proved significantly better than classic oracle selection (p < 10^-5). Atlas group selection led to a median ± interquartile DSC of 0.740 ± 0.084, 0.718 ± 0.086 and 0.670 ± 0.097 for the brainstem and left/right parotid glands respectively, outperforming NMI selection (0.676 ± 0.113, 0.632 ± 0.104 and 0.606 ± 0.118; p < 0.001) as well as classic oracle selection. The implemented methodology is a proof of principle that selecting atlases by considering the performance of the entire group, instead of each single atlas, leads to higher segmentation accuracy, outperforming even the current oracle strategy. This finding opens a new discussion about the most appropriate atlas selection criterion for MABS.
© 2018 Institute of Physics and Engineering in Medicine.
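The two oracle strategies compared in the abstract above can be sketched in a few lines. This is an illustrative toy (segmentations as integer voxel sets, majority-vote fusion), not the paper's implementation:

```python
from itertools import combinations

def dice(a, b):
    """Dice Similarity Coefficient between two voxel sets: 2|A∩B| / (|A|+|B|)."""
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

def majority_vote(masks):
    """Fuse candidate segmentations: keep voxels labelled by more than half the atlases."""
    counts = {}
    for m in masks:
        for v in m:
            counts[v] = counts.get(v, 0) + 1
    return {v for v, c in counts.items() if c > len(masks) / 2}

def classic_oracle(atlases, target, k):
    """Classic oracle: the top-k atlases ranked by individual DSC with the target."""
    return sorted(atlases, key=lambda m: dice(m, target), reverse=True)[:k]

def group_oracle(atlases, target, k):
    """Group oracle: the k-subset whose fused mask scores the highest DSC."""
    return max(combinations(atlases, k),
               key=lambda g: dice(majority_vote(g), target))
```

The key difference is that `group_oracle` scores the fused output of each candidate subgroup, so it can prefer complementary atlases over individually high-scoring but redundant ones.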
A multipurpose computing center with distributed resources
NASA Astrophysics Data System (ADS)
Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.
2017-10-01
The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs a WLCG Tier-2 center for the ALICE and ATLAS experiments; the same group of services is used by the astroparticle physics projects the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA). The OSG stack is installed for the NOvA experiment. Other user groups directly use the local batch system. Storage capacity is distributed across several locations. The DPM servers used by ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated at the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for ATLAS and the PAO is extended by resources of CESNET, the Czech National Grid Initiative representative. Those resources are in Plzeň and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can access these resources with the standard ATLAS tools, in the same way as the local storage, without noticing the geographical distribution. The computing clusters LUNA and EXMAG, dedicated mostly to users from the Solid State Physics departments, offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum, with a distributed batch system based on TORQUE with a custom scheduler. The clusters are installed remotely by the MetaCentrum team, and a local contact helps only when needed. Users from the IoP have exclusive access to only a part of these two clusters and enjoy higher priorities on the rest (1500 cores in total), which can also be used by any user of MetaCentrum. IoP researchers can also use distant resources located in several towns of the Czech Republic, with a capacity of more than 12000 cores in total.
ATLAS user analysis on private cloud resources at GoeGrid
NASA Astrophysics Data System (ADS)
Glaser, F.; Nadal Serrano, J.; Grabowski, J.; Quadt, A.
2015-12-01
User analysis job demands can exceed the available computing resources, especially before major conferences, and ATLAS physics results can potentially be delayed by the lack of resources. For these reasons, cloud research and development activities are now included in the ATLAS computing model, which has been extended to use resources from commercial and private cloud providers to satisfy the demand. However, most of these activities focus on Monte Carlo production jobs, extending the resources at Tier-2. To evaluate the suitability of the cloud-computing model for user analysis jobs, we developed a framework to launch an ATLAS user analysis cluster in a cloud infrastructure on demand, and evaluated two solutions. The first solution is entirely integrated in the Grid infrastructure using the same mechanism already in use at Tier-2: a designated PanDA queue is monitored, and additional worker nodes are launched in a cloud environment and assigned to a corresponding HTCondor queue according to demand. In this way, the use of cloud resources is completely transparent to the user. However, with this approach, submitted user analysis jobs can still suffer a delay from waiting in the queue, and the deployed infrastructure lacks customizability. Our second solution therefore offers the possibility to easily deploy a fully private, customizable analysis cluster on private cloud resources belonging to the university.
Job optimization in ATLAS TAG-based distributed analysis
NASA Astrophysics Data System (ADS)
Mambelli, M.; Cranshaw, J.; Gardner, R.; Maeno, T.; Malon, D.; Novak, M.
2010-04-01
The ATLAS experiment is projected to collect over one billion events per year during the first few years of operation. The efficient selection of events for various physics analyses across all appropriate samples presents a significant technical challenge. The ATLAS computing infrastructure leverages the Grid to tackle analysis across large samples by organizing data into a hierarchical structure and exploiting distributed computing to carry out the computations. This includes events at different stages of processing: RAW, ESD (Event Summary Data), AOD (Analysis Object Data) and DPD (Derived Physics Data). Event Level Metadata Tags (TAGs) contain information about each event, stored using multiple technologies accessible via POOL and various web services. This allows users to apply selection cuts on quantities of interest across the entire sample to compile a subset of events appropriate for their analysis. This paper describes new methods for organizing jobs using TAG criteria to analyze ATLAS data. It further compares different access patterns to the event data and explores ways to partition the workload for event selection and analysis. Here analysis is defined as a broader set of event processing tasks, including event selection and reduction operations ("skimming", "slimming" and "thinning") as well as DPD making. Specifically, it compares analysis with direct access to the events (AOD and ESD data) to access mediated by different TAG-based event selections. We then compare different ways of splitting the processing to maximize performance.
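The TAG-based workflow described above, where cuts are applied to event-level metadata before any event data is read and the surviving events are then partitioned into jobs, can be sketched as follows. The record fields and cut values are illustrative only, not the actual ATLAS TAG schema:

```python
def select_events(tags, cuts):
    """Apply selection cuts to event-level metadata (TAGs) without touching event data."""
    return [t for t in tags if all(pred(t) for pred in cuts)]

def partition_for_jobs(selected, events_per_job):
    """Split the selected events into job-sized chunks for distributed processing."""
    return [selected[i:i + events_per_job]
            for i in range(0, len(selected), events_per_job)]

# Hypothetical TAG records: run/event numbers plus quantities of interest
tags = [
    {"run": 1, "event": 10, "n_muons": 2, "met": 45.0},
    {"run": 1, "event": 11, "n_muons": 0, "met": 80.0},
    {"run": 2, "event": 12, "n_muons": 1, "met": 60.0},
]
cuts = [lambda t: t["n_muons"] >= 1, lambda t: t["met"] > 50.0]
picked = select_events(tags, cuts)
jobs = partition_for_jobs(picked, 100)
```

Only the events passing all cuts are then fetched from AOD/ESD files, which is the efficiency gain that TAG-mediated access aims for.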
Large scale digital atlases in neuroscience
NASA Astrophysics Data System (ADS)
Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.
2014-03-01
Imaging in neuroscience has revolutionized our current understanding of brain structure and architecture, and increasingly of brain function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining these data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and, in addition to atlases of the human brain, includes high-quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging, as well as gene expression data, modern digital atlases employ probabilistic and multimodal techniques, along with sophisticated visualization software, to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge, and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project, a genome-wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large-scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.
Cyberinfrastructure for the digital brain: spatial standards for integrating rodent brain atlases
Zaslavsky, Ilya; Baldock, Richard A.; Boline, Jyl
2014-01-01
Biomedical research entails the capture and analysis of massive data volumes, and new discoveries arise from data integration and mining. This is only possible if data can be mapped onto a common framework, such as the genome for genomic data. In neuroscience, the framework is intrinsically spatial and based on a number of paper atlases, which cannot meet today's data-intensive analysis and integration challenges. A scalable and extensible software infrastructure that is standards-based but open to novel data and resources is required for integrating information such as signal distributions, gene expression, neuronal connectivity, electrophysiology, anatomy, and developmental processes. Therefore, the International Neuroinformatics Coordinating Facility (INCF) initiated the development of a spatial framework for neuroscience data integration with an associated Digital Atlasing Infrastructure (DAI). A prototype implementation of this infrastructure for the rodent brain is reported here. The infrastructure is based on a collection of reference spaces to which data are mapped at the required resolution, such as the Waxholm Space (WHS), a 3D reconstruction of the brain generated using high-resolution, multi-channel microMRI. The core standards of the digital atlasing service-oriented infrastructure include the Waxholm Markup Language (WaxML), an XML schema expressing a uniform information model for key elements such as coordinate systems, transformations, points of interest (POIs), labels, and annotations; and Atlas Web Services, interfaces for querying and updating atlas data. The services return WaxML-encoded documents with information about capabilities, spatial reference systems (SRSs) and structures, and execute coordinate transformations and POI-based requests. Key elements of the INCF-DAI cyberinfrastructure have been prototyped for both mouse and rat brain atlas sources, including the Allen Mouse Brain Atlas, the UCSD Cell-Centered Database, and the Edinburgh Mouse Atlas Project.
PMID:25309417
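The coordinate transformations such atlas services execute reduce, in the simplest linear case, to applying a 4x4 affine matrix to a POI in homogeneous coordinates. The sketch below assumes a hypothetical voxel-to-WHS transform (scale plus translation) and is not part of the INCF-DAI API:

```python
def apply_affine(matrix, point):
    """Map a point of interest (POI) from one spatial reference system to another
    via a 4x4 affine transform in homogeneous coordinates."""
    x, y, z = point
    h = (x, y, z, 1.0)
    return tuple(sum(matrix[r][c] * h[c] for c in range(4)) for r in range(3))

# Hypothetical transform: 0.25 mm/voxel isotropic scaling plus a translated origin
voxel_to_whs = [
    [0.25, 0.0,  0.0,  -5.0],
    [0.0,  0.25, 0.0, -10.0],
    [0.0,  0.0,  0.25, -2.5],
    [0.0,  0.0,  0.0,   1.0],
]
whs_point = apply_affine(voxel_to_whs, (20, 40, 10))  # this voxel maps to the WHS origin
```

Chaining such transforms (subject space to template space to WHS) is what lets a POI annotated in one atlas be queried against data registered to another.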
ERIC Educational Resources Information Center
Pareja-Lora, Antonio; Arús-Hita, Jorge; Read, Timothy; Rodríguez-Arancón, Pilar; Calle-Martínez, Cristina; Pomposo, Lourdes; Martín-Monje, Elena; Bárcena, Elena
2013-01-01
In this short paper, we present some initial work on Mobile Assisted Language Learning (MALL) undertaken by the ATLAS research group. ATLAS embraced this multidisciplinary field cutting across Mobile Learning and Computer Assisted Language Learning (CALL) as a natural step in their quest to find learning formulas for professional English that…
The future of PanDA in ATLAS distributed computing
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.
2015-12-01
Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.
Computation of a high-resolution MRI 3D stereotaxic atlas of the sheep brain.
Ella, Arsène; Delgadillo, José A; Chemineau, Philippe; Keller, Matthieu
2017-02-15
The sheep model was first used in the fields of animal reproduction and veterinary science, and was later utilized in fundamental and preclinical studies. For more than a decade, magnetic resonance (MR) studies performed on this model have been increasingly reported, especially in the field of neuroscience. To support MR translational neuroscience research, a brain template and an atlas are necessary. We have recently generated the first complete T1-weighted (T1W) and T2W MR population average images (or templates) of in vivo sheep brains. In this study, we 1) defined a 3D stereotaxic coordinate system for the previously established in vivo population average templates; 2) used the deformation fields obtained during optimized nonlinear registrations to compute nonlinear tissue prior probability maps (nlTPMs) of cerebrospinal fluid (CSF), gray matter (GM), and white matter (WM); 3) delineated 25 external and 28 internal sheep brain structures by segmenting both templates and nlTPMs; and 4) annotated and labeled these structures using an existing histological atlas. We built a high-quality, high-resolution 3D atlas of average in vivo sheep brains linked to a reference stereotaxic space. The atlas and nlTPMs, together with the previously computed T1W and T2W in vivo sheep brain templates, provide a complete imaging data set that can be imported into other imaging software and used as standardized tools for neuroimaging studies or other neuroscience methods, such as image registration, image segmentation, identification of brain structures, implantation of recording devices, or neuronavigation. J. Comp. Neurol. 525:676-692, 2017. © 2016 Wiley Periodicals, Inc.
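The tissue prior probability maps (nlTPMs) described above can be illustrated with a minimal sketch: after the subjects are nonlinearly registered to the template, the prior for a tissue class at a voxel is the fraction of subjects labelled with that class there. The label codes and toy volumes are illustrative assumptions, not from the paper:

```python
def tissue_probability_map(label_volumes, tissue):
    """Voxelwise prior probability of a tissue class: the fraction of
    spatially normalised subjects whose label at that voxel equals `tissue`."""
    n = len(label_volumes)
    size = len(label_volumes[0])
    return [sum(1 for vol in label_volumes if vol[i] == tissue) / n
            for i in range(size)]

# Three hypothetical subjects, flattened 4-voxel label volumes
# (0 = background, 1 = GM, 2 = WM, 3 = CSF)
subjects = [
    [1, 1, 2, 3],
    [1, 2, 2, 3],
    [1, 1, 2, 0],
]
gm_prior = tissue_probability_map(subjects, 1)
```

Maps like `gm_prior` then serve as the spatial priors that segmentation algorithms combine with image intensities.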
Associated production of a quarkonium and a Z boson at one loop in a quark-hadron-duality approach
NASA Astrophysics Data System (ADS)
Lansberg, Jean-Philippe; Shao, Hua-Sheng
2016-10-01
In view of the large discrepancy between the ATLAS data at √s = 8 TeV and theoretical predictions of the Single Parton Scattering (SPS) contributions to the associated production of a prompt J/ψ and a Z boson, we evaluate the corresponding cross section at one-loop accuracy (Next-to-Leading Order, NLO) in a quark-hadron-duality approach, also known as the Colour-Evaporation Model (CEM). This work is motivated by (i) the extremely disparate predictions based on the existing NRQCD fits, conjugated with the absence of a full NLO NRQCD computation, and (ii) our belief that such an evaluation provides a likely upper limit of the SPS cross section. In addition to these theory improvements, we argue that the ATLAS estimate of the Double Parton Scattering (DPS) yield may be underestimated by a factor as large as 3, which then reduces the size of the SPS yield extracted from the ATLAS data. Our NLO SPS evaluation also allows us to set an upper limit on σ_eff, which drives the size of the DPS yield. Overall, the discrepancy between theory and experiment may be smaller than expected, which calls for further analyses by ATLAS and CMS, for which we provide predictions, and for full NLO computations in other models. As an interesting side product of our analysis, we have performed the first NLO computation of dσ/dP_T for prompt single-J/ψ production in the CEM, from which we have fit the CEM non-perturbative parameter at NLO using the most recent ATLAS data.
2017-03-17
United Launch Alliance (ULA) technicians monitor the progress as the payload fairing containing the Orbital ATK Cygnus pressurized cargo module is lowered onto the Centaur upper stage, or second stage, of the ULA Atlas V rocket in the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The Orbital ATK CRS-7 commercial resupply services mission to the International Space Station is scheduled to launch atop the Atlas V from pad 41. Cygnus will deliver 7,600 pounds of supplies, equipment and scientific research materials to the space station.
TDRS-M Spacecraft Encapsulation
2017-08-02
Inside the Astrotech facility in Titusville, Florida, NASA's Tracking and Data Relay Satellite, TDRS-M, is encapsulated into ULA's Atlas V payload fairing. TDRS-M is the latest spacecraft destined for the agency's constellation of communications satellites that allows nearly continuous contact with orbiting spacecraft ranging from the International Space Station and Hubble Space Telescope to the array of scientific observatories. Liftoff atop a United Launch Alliance Atlas V rocket is scheduled to take place from Space Launch Complex 41 at Cape Canaveral Air Force Station at 8:03 a.m. EDT Aug. 18, 2017.
GOES-S Atlas V Centaur Stage OVI
2018-02-08
At the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida, a Centaur upper stage is mated to a United Launch Alliance Atlas V rocket that will boost NOAA's Geostationary Operational Environmental Satellite-S, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V First Stage Booster Lift to Vertical On Stand (LV
2018-01-31
A crane lifts a United Launch Alliance Atlas V first stage into the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The rocket will be positioned on its launcher to boost the Geostationary Operational Environmental Satellite, or GOES-S. It will be the second in a series of four advanced geostationary weather satellites and will significantly improve the detection and observation of environmental phenomena that directly affect public safety. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V First Stage Booster Lift to Vertical On Stand (LV
2018-01-31
A crane lifts a United Launch Alliance Atlas V first stage at the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The rocket will be positioned on its launcher to boost the Geostationary Operational Environmental Satellite, or GOES-S. It will be the second in a series of four advanced geostationary weather satellites and will significantly improve the detection and observation of environmental phenomena that directly affect public safety. GOES-S is slated to launch March 1, 2018.
2017-03-17
The payload fairing containing the Orbital ATK Cygnus pressurized cargo module is lifted by crane at the United Launch Alliance (ULA) Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The payload will be hoisted up and mated to the ULA Atlas V rocket. The Orbital ATK CRS-7 commercial resupply services mission to the International Space Station is scheduled to launch atop the Atlas V from pad 41. Cygnus will deliver 7,600 pounds of supplies, equipment and scientific research materials to the space station.
Atlas_V_OA-7_Payload_Mate_to_Booster
2017-03-17
The payload fairing containing the Orbital ATK Cygnus pressurized cargo module is lifted and mated onto the Centaur upper stage, or second stage, of the United Launch Alliance (ULA) rocket in the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The Orbital ATK CRS-7 commercial resupply services mission to the International Space Station is scheduled to launch atop the Atlas V from pad 41. Cygnus will deliver 7,600 pounds of supplies, equipment and scientific research materials to the space station.
2017-03-17
The payload fairing containing the Orbital ATK Cygnus pressurized cargo module is hoisted up by crane at the United Launch Alliance (ULA) Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The payload will be mated to the ULA Atlas V rocket. The Orbital ATK CRS-7 commercial resupply services mission to the International Space Station is scheduled to launch atop the Atlas V from pad 41. Cygnus will deliver 7,600 pounds of supplies, equipment and scientific research materials to the space station.
Commercial Applications Multispectral Sensor System
NASA Technical Reports Server (NTRS)
Birk, Ronald J.; Spiering, Bruce
1993-01-01
NASA's Office of Commercial Programs is funding a multispectral sensor system to be used in the development of remote sensing applications. The Airborne Terrestrial Applications Sensor (ATLAS) is designed to provide versatility in acquiring spectral and spatial information. The ATLAS system will be a test bed for the development of specifications for airborne and spaceborne remote sensing instrumentation for dedicated applications. This objective requires spectral coverage from the visible through thermal infrared wavelengths; variable spatial resolution from 2 to 25 meters; high geometric and geo-location accuracy; on-board radiometric calibration; digital recording; and optimized performance for minimized cost, size, and weight. ATLAS is scheduled to be available in the third quarter of 1992 for acquisition of data for applications such as environmental monitoring, facilities management, geographic information system database development, and mineral exploration.
Lunar Orbiter 4 - Photographic Mission Summary. Volume 1
NASA Technical Reports Server (NTRS)
1968-01-01
Photographic summary report of the Lunar Orbiter 4 mission. The fourth of five Lunar Orbiter spacecraft was successfully launched from Launch Complex 13 at the Air Force Eastern Test Range by an Atlas-Agena launch vehicle at 22:25 GMT on May 4, 1967. Tracking data from the Cape Kennedy and Grand Bahama tracking stations were used to control and guide the launch vehicle during Atlas powered flight. The Agena-spacecraft combination was boosted to the proper coast ellipse by the Atlas booster prior to separation. Final maneuvering and acceleration to the velocity required to maintain the 100-nautical-mile-altitude Earth orbit were controlled by the preset on-board Agena computer. In addition, the Agena computer determined the maneuver and engine-burn period required to inject the spacecraft on the cislunar trajectory 20 minutes after launch. Tracking data from the downrange stations and the Johannesburg, South Africa station were used to monitor the boost trajectory.
The benefits of the Atlas of Human Cardiac Anatomy website for the design of cardiac devices.
Spencer, Julianne H; Quill, Jason L; Bateman, Michael G; Eggen, Michael D; Howard, Stephen A; Goff, Ryan P; Howard, Brian T; Quallich, Stephen G; Iaizzo, Paul A
2013-11-01
This paper describes how the Atlas of Human Cardiac Anatomy website can be used to improve cardiac device design throughout the process of development. The Atlas is a free-access website featuring novel images of both functional and fixed human cardiac anatomy from over 250 human heart specimens. This website provides numerous educational tutorials on anatomy, physiology and various imaging modalities. For instance, the 'device tutorial' provides examples of devices that were either present at the time of in vitro reanimation or were subsequently delivered, including leads, catheters, valves, annuloplasty rings and stents. Another section of the website displays 3D models of the vasculature, blood volumes and/or tissue volumes reconstructed from computed tomography and magnetic resonance images of various heart specimens. The website shares library images, video clips and computed tomography and MRI DICOM files in honor of the generous gifts received from donors and their families.
Lunar Orbiter 5. Photographic Mission Summary. Volume 1
NASA Technical Reports Server (NTRS)
1968-01-01
Selected photographs and mission summary of Lunar Orbiter 5. The last of five Lunar Orbiter spacecraft was successfully launched from Launch Complex 13 at the Air Force Eastern Test Range by an Atlas-Agena launch vehicle at 22:33 GMT on August 1, 1967. Tracking data from the Cape Kennedy and Grand Bahama tracking stations were used to control and guide the launch vehicle during Atlas powered flight. The Agena-spacecraft combination was boosted to the proper coast ellipse by the Atlas booster prior to separation. Final maneuvering and acceleration to the velocity required to maintain the 100-nautical-mile-altitude Earth orbit were controlled by the preset on-board Agena computer. In addition, the Agena computer determined the maneuver and engine-burn period required to inject the spacecraft on the cislunar trajectory about 33 minutes after launch. Tracking data from the downrange stations and the Johannesburg, South Africa station were used to monitor the boost trajectory.
Diffeomorphic Sulcal Shape Analysis on the Cortex
Joshi, Shantanu H.; Cabeen, Ryan P.; Joshi, Anand A.; Sun, Bo; Dinov, Ivo; Narr, Katherine L.; Toga, Arthur W.; Woods, Roger P.
2014-01-01
We present a diffeomorphic approach for constructing intrinsic shape atlases of sulci on the human cortex. Sulci are represented as square-root velocity functions of continuous open curves in ℝ³, and their shapes are studied as functional representations of an infinite-dimensional sphere. This spherical manifold has some advantageous properties: it is equipped with a Riemannian metric on the tangent space and facilitates computational analyses and correspondences between sulcal shapes. Sulcal shape mapping is achieved by computing geodesics in the quotient space of shapes modulo scales, translations, rigid rotations and reparameterizations. The resulting sulcal shape atlas preserves important local geometry inherently present in the sample population. The sulcal shape atlas is integrated in a cortical registration framework and exhibits better geometric matching compared to the conventional Euclidean method. We demonstrate experimental results for sulcal shape mapping, cortical surface registration, and sulcal classification for two different surface extraction protocols for separate subject populations. PMID:22328177
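The square-root velocity representation used above has a compact discrete form: for a curve c, q(t) = ċ(t)/√|ċ(t)|. The sketch below assumes simple forward differences on a sampled curve and is only an illustration of that definition, not the authors' pipeline:

```python
import math

def srvf(curve):
    """Square-root velocity function of a discretized open curve in R^3:
    q_i = v_i / sqrt(|v_i|), where v_i is the forward-difference velocity."""
    qs = []
    for p0, p1 in zip(curve, curve[1:]):
        v = [b - a for a, b in zip(p0, p1)]           # velocity between samples
        speed = math.sqrt(sum(x * x for x in v))      # |v_i|
        if speed == 0.0:
            qs.append([0.0, 0.0, 0.0])                # stationary point maps to zero
        else:
            s = math.sqrt(speed)
            qs.append([x / s for x in v])
    return qs
```

Once sulci are mapped to SRVFs, the L2 distance between the (scale-normalised) representations gives the elastic shape metric on which the geodesic computations are built.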
Lunar Orbiter 3 - Photographic Mission Summary
NASA Technical Reports Server (NTRS)
1968-01-01
Systems performance, lunar photography, and launch operations of the Lunar Orbiter 3 photographic mission. The third of five Lunar Orbiter spacecraft was successfully launched from Launch Complex 13 at the Air Force Eastern Test Range by an Atlas-Agena launch vehicle at 01:17 GMT on February 5, 1967. Tracking data from the Cape Kennedy and Grand Bahama tracking stations were used to control and guide the launch vehicle during Atlas powered flight. The Agena-spacecraft combination was boosted to the proper coast ellipse by the Atlas booster prior to separation. Final maneuvering and acceleration to the velocity required to maintain the 100-nautical-mile-altitude Earth orbit were controlled by the preset on-board Agena computer. In addition, the Agena computer determined the maneuver and engine-burn period required to inject the spacecraft on the cislunar trajectory 20 minutes after launch. Tracking data from the downrange stations and the Johannesburg, South Africa station were used to monitor the entire boost trajectory.
NASA Astrophysics Data System (ADS)
Zhang, Weidong; Liu, Jiamin; Yao, Jianhua; Summers, Ronald M.
2013-03-01
Segmentation of the musculature is very important for accurate organ segmentation, analysis of body composition, and localization of tumors in the muscle. In research fields of computer assisted surgery and computer-aided diagnosis (CAD), muscle segmentation in CT images is a necessary pre-processing step. This task is particularly challenging due to the large variability in muscle structure and the overlap in intensity between muscle and internal organs. This problem has not been solved completely, especially for all of the thoracic, abdominal and pelvic regions. We propose an automated system to segment the musculature on CT scans. The method combines an atlas-based model, an active contour model and prior segmentation of fat and bones. First, body contour, fat and bones are segmented using existing methods. Second, atlas-based models are pre-defined using anatomic knowledge at multiple key positions in the body to handle the large variability in muscle shape. Third, the atlas model is refined using active contour models (ACM) that are constrained using the pre-segmented bone and fat. Before refinement with ACM, the initialized atlas model of the next slice is updated using the previous atlas. The muscle is segmented using thresholding and smoothed in 3D volume space. Thoracic, abdominal and pelvic CT scans were used to evaluate our method, and five key position slices for each case were selected and manually labeled as the reference. Compared with the reference ground truth, the overlap ratio of true positives is 91.1%+/-3.5%, and that of false positives is 5.5%+/-4.2%.
Monitoring of computing resource use of active software releases at ATLAS
NASA Astrophysics Data System (ADS)
Limosani, Antonio; ATLAS Collaboration
2017-10-01
The LHC is the world’s most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has the demand for computing resources needed for event reconstruction. We report on the evolution of resource usage, in terms of CPU and RAM, in key ATLAS offline reconstruction workflows at the Tier-0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded beyond event reconstruction to include all workflows, from Monte Carlo generation through to end-user physics analysis. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processes in jobs. Resource consumption is broken down into software domains and displayed in plots generated using Python visualization libraries, collected into pre-formatted, auto-generated web pages that allow the ATLAS developer community to track the performance of their algorithms. This information is preferentially channelled to domain leaders and developers through the use of JIRA and via reports given at ATLAS software meetings. Finally, we take a glimpse of the future by reporting on the expected CPU and RAM usage in benchmark workflows associated with the High Luminosity LHC, and anticipate the ways performance monitoring will evolve to understand and benchmark future workflows.
Localized-atlas-based segmentation of breast MRI in a decision-making framework.
Fooladivanda, Aida; Shokouhi, Shahriar B; Ahmadinejad, Nasrin
2017-03-01
Breast-region segmentation is an important step for density estimation and Computer-Aided Diagnosis (CAD) systems in Magnetic Resonance Imaging (MRI). Detection of breast-chest wall boundary is often a difficult task due to similarity between gray-level values of fibroglandular tissue and pectoral muscle. This paper proposes a robust breast-region segmentation method which is applicable for both complex cases with fibroglandular tissue connected to the pectoral muscle, and simple cases with high contrast boundaries. We present a decision-making framework based on geometric features and support vector machine (SVM) to classify breasts in two main groups, complex and simple. For complex cases, breast segmentation is done using a combination of intensity-based and atlas-based techniques; however, only intensity-based operation is employed for simple cases. A novel atlas-based method, that is called localized-atlas, accomplishes the processes of atlas construction and registration based on the region of interest (ROI). Atlas-based segmentation is performed by relying on the chest wall template. Our approach is validated using a dataset of 210 cases. Based on similarity between automatic and manual segmentation results, the proposed method achieves Dice similarity coefficient, Jaccard coefficient, total overlap, false negative, and false positive values of 96.3, 92.9, 97.4, 2.61 and 4.77%, respectively. The localization error of the breast-chest wall boundary is 1.97 mm, in terms of averaged deviation distance. The achieved results prove that the suggested framework performs the breast segmentation with negligible errors and efficient computational time for different breasts from the viewpoints of size, shape, and density pattern.
Hadadi, Noushin; Hafner, Jasmin; Shajkofci, Adrian; Zisaki, Aikaterini; Hatzimanikatis, Vassily
2016-10-21
Because the complexity of metabolism cannot be intuitively understood or analyzed, computational methods are indispensable for studying biochemistry and deepening our understanding of cellular metabolism to promote new discoveries. We used the computational framework BNICE.ch along with cheminformatic tools to assemble the whole theoretical reactome from the known metabolome through expansion of the known biochemistry presented in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. We constructed the ATLAS of Biochemistry, a database of all theoretical biochemical reactions based on known biochemical principles and compounds. ATLAS includes more than 130 000 hypothetical enzymatic reactions that connect two or more KEGG metabolites through novel enzymatic reactions that have never been reported to occur in living organisms. Moreover, ATLAS reactions integrate 42% of KEGG metabolites that are not currently present in any KEGG reaction into one or more novel enzymatic reactions. The generated repository of information is organized in a Web-based database ( http://lcsb-databases.epfl.ch/atlas/ ) that allows the user to search for all possible routes from any substrate compound to any product. The resulting pathways involve known and novel enzymatic steps that may indicate unidentified enzymatic activities and provide potential targets for protein engineering. Our approach of introducing novel biochemistry into pathway design and associated databases will be important for synthetic biology and metabolic engineering.
Dashboard Task Monitor for Managing ATLAS User Analysis on the Grid
NASA Astrophysics Data System (ADS)
Sargsyan, L.; Andreeva, J.; Jha, M.; Karavakis, E.; Kokoszkiewicz, L.; Saiz, P.; Schovancova, J.; Tuckett, D.; Atlas Collaboration
2014-06-01
The organization of the distributed user analysis on the Worldwide LHC Computing Grid (WLCG) infrastructure is one of the most challenging tasks among the computing activities at the Large Hadron Collider. The Experiment Dashboard offers a solution that not only monitors but also manages (kill, resubmit) user tasks and jobs via a web interface. The ATLAS Dashboard Task Monitor provides analysis users with a tool that is independent of the operating system and Grid environment. This contribution describes the functionality of the application and its implementation details, in particular authentication, authorization and audit of the management operations.
NASA Astrophysics Data System (ADS)
Ohene-Kwofie, Daniel; Otoo, Ekow
2015-10-01
The ATLAS detector, operated at the Large Hadron Collider (LHC) at CERN, records proton-proton collisions every 50 ns, resulting in a sustained data flow of up to PB/s. The upgraded Tile Calorimeter of the ATLAS experiment will sustain about 5 PB/s of digital throughput. These massive data rates require extremely fast data capture and processing. Although there has been a steady increase in the processing speed of CPUs/GPGPUs assembled for high-performance computing, the rate of data input and output, even under parallel I/O, has not kept up with the general increase in computing speeds. The problem, then, is whether one can implement an I/O subsystem infrastructure capable of meeting the computational speeds of the advanced computing systems at the petascale and exascale level. We propose a system architecture that leverages the Partitioned Global Address Space (PGAS) model of computing to maintain an in-memory data store for the Processing Unit (PU) of the upgraded electronics of the Tile Calorimeter, which is proposed for use as a high-throughput general-purpose co-processor to the sROD of the upgraded Tile Calorimeter. The physical memory of the PUs is aggregated into a large global logical address space using RDMA-capable interconnects such as PCI-Express to enhance data processing throughput.
2011-09-08
CAPE CANAVERAL, Fla. -- The Vertical Integration Facility is reflected in the water standing near the facility at Space Launch Complex 41 on Cape Canaveral Air Force Station following the arrival of the first stage of the Atlas V rocket for NASA's Mars Science Laboratory (MSL) mission. A United Launch Alliance Atlas V-541 configuration will be used to loft MSL into space. Curiosity’s 10 science instruments are designed to search for evidence on whether Mars has had environments favorable to microbial life, including chemical ingredients for life. The unique rover will use a laser to look inside rocks and release its gasses so that the rover’s spectrometer can analyze and send the data back to Earth. MSL is scheduled to launch Nov. 25 with a window extending to Dec. 18 and arrival at Mars Aug. 2012. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Cory Huston
2013-08-21
CAPE CANAVERAL, Fla. – Inside the Payload Hazardous Servicing Facility at NASA's Kennedy Space Center in Florida, a technician inspects a cell from one of the electricity-producing solar arrays for the Mars Atmosphere and Volatile Evolution, or MAVEN, spacecraft. MAVEN is being prepared for its scheduled launch in November from Cape Canaveral Air Force Station, Fla. atop a United Launch Alliance Atlas V rocket. Positioned in an orbit above the Red Planet, MAVEN will study the upper atmosphere of Mars in unprecedented detail. For more information, visit: http://www.nasa.gov/mission_pages/maven/main/index.html Photo credit: NASA/Jim Grossmann
2013-08-21
CAPE CANAVERAL, Fla. – Inside the Payload Hazardous Servicing Facility at NASA's Kennedy Space Center in Florida, a technician repairs a cell from one of the electricity-producing solar arrays for the Mars Atmosphere and Volatile Evolution, or MAVEN, spacecraft. MAVEN is being prepared for its scheduled launch in November from Cape Canaveral Air Force Station, Fla. atop a United Launch Alliance Atlas V rocket. Positioned in an orbit above the Red Planet, MAVEN will study the upper atmosphere of Mars in unprecedented detail. For more information, visit: http://www.nasa.gov/mission_pages/maven/main/index.html Photo credit: NASA/Jim Grossmann
2013-08-21
CAPE CANAVERAL, Fla. – Inside the Payload Hazardous Servicing Facility at NASA's Kennedy Space Center in Florida, a technician cleans a cell from one of the electricity-producing solar arrays for the Mars Atmosphere and Volatile Evolution, or MAVEN, spacecraft. MAVEN is being prepared for its scheduled launch in November from Cape Canaveral Air Force Station, Fla. atop a United Launch Alliance Atlas V rocket. Positioned in an orbit above the Red Planet, MAVEN will study the upper atmosphere of Mars in unprecedented detail. For more information, visit: http://www.nasa.gov/mission_pages/maven/main/index.html Photo credit: NASA/Jim Grossmann
2013-08-21
CAPE CANAVERAL, Fla. – Inside the Payload Hazardous Servicing Facility at NASA's Kennedy Space Center in Florida, a technician inspects a cell from one of the electricity-producing solar arrays for the Mars Atmosphere and Volatile Evolution, or MAVEN, spacecraft. MAVEN is being prepared for its scheduled launch in November from Cape Canaveral Air Force Station, Fla. atop a United Launch Alliance Atlas V rocket. Positioned in an orbit above the Red Planet, MAVEN will study the upper atmosphere of Mars in unprecedented detail. For more information, visit: http://www.nasa.gov/mission_pages/maven/main/index.html Photo credit: NASA/Jim Grossmann
Generating patient specific pseudo-CT of the head from MR using atlas-based regression
NASA Astrophysics Data System (ADS)
Sjölund, J.; Forsberg, D.; Andersson, M.; Knutsson, H.
2015-01-01
Radiotherapy planning and attenuation correction of PET images require simulation of radiation transport. The necessary physical properties are typically derived from computed tomography (CT) images, but in some cases, including stereotactic neurosurgery and combined PET/MR imaging, only magnetic resonance (MR) images are available. With these applications in mind, we describe how a realistic, patient-specific, pseudo-CT of the head can be derived from anatomical MR images. We refer to the method as atlas-based regression, because of its similarity to atlas-based segmentation. Given a target MR and an atlas database comprising MR and CT pairs, atlas-based regression works by registering each atlas MR to the target MR, applying the resulting displacement fields to the corresponding atlas CTs and, finally, fusing the deformed atlas CTs into a single pseudo-CT. We use a deformable registration algorithm known as the Morphon and augment it with a certainty mask that allows tailoring of the influence that certain regions have on the registration. Moreover, we propose a novel method of fusion, wherein the collection of deformed CTs is iteratively registered to their joint mean; we find that the resulting mean CT becomes more similar to the target CT. However, the voxelwise median provided even better results, at least as good as earlier work that required special MR imaging techniques. This makes atlas-based regression a good candidate for clinical use.
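The fusion step described above is simple to state in code. The sketch below (an illustration, not the authors' implementation) assumes each atlas CT has already been deformed into the target's space and takes the voxelwise median, which the abstract reports outperformed even the iteratively registered mean:

```python
import numpy as np

def fuse_pseudo_ct(deformed_cts):
    """Fuse deformed atlas CTs (list of equally shaped arrays of Hounsfield
    units, all warped to the target MR space) into one pseudo-CT by taking
    the voxelwise median across atlases."""
    return np.median(np.stack(deformed_cts, axis=0), axis=0)
```

The median's robustness to outliers is what makes it attractive here: a single badly registered atlas cannot drag a voxel's value far, unlike with a mean.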
Blesa, Manuel; Serag, Ahmed; Wilkinson, Alastair G; Anblagan, Devasuda; Telford, Emma J; Pataky, Rozalia; Sparrow, Sarah A; Macnaught, Gillian; Semple, Scott I; Bastin, Mark E; Boardman, James P
2016-01-01
Neuroimage analysis pipelines rely on parcellated atlases generated from healthy individuals to provide anatomic context to structural and diffusion MRI data. Atlases constructed using adult data introduce bias into studies of early brain development. We aimed to create a neonatal brain atlas of healthy subjects that can be applied to multi-modal MRI data. Structural and diffusion 3T MRI scans were acquired soon after birth from 33 typically developing neonates born at term (mean postmenstrual age at birth 39(+5) weeks, range 37(+2)-41(+6)). An adult brain atlas (SRI24/TZO) was propagated to the neonatal data using temporal registration via childhood templates with dense temporal samples (NIH Pediatric Database), with the final atlas (Edinburgh Neonatal Atlas, ENA33) constructed using the Symmetric Group Normalization (SyGN) method. After this step, the computed final transformations were applied to T2-weighted data, and fractional anisotropy, mean diffusivity, and tissue segmentations to provide a multi-modal atlas with 107 anatomical regions; a symmetric version was also created to facilitate studies of laterality. Volumes of each region of interest were measured to provide reference data from normal subjects. Because this atlas is generated from step-wise propagation of adult labels through intermediate time points in childhood, it may serve as a useful starting point for modeling brain growth during development.
Encoding probabilistic brain atlases using Bayesian inference.
Van Leemput, Koen
2009-06-01
This paper addresses the problem of creating probabilistic brain atlases from manually labeled training data. Probabilistic atlases are typically constructed by counting the relative frequency of occurrence of labels in corresponding locations across the training images. However, such an "averaging" approach generalizes poorly to unseen cases when the number of training images is limited, and provides no principled way of aligning the training datasets using deformable registration. In this paper, we generalize the generative image model implicitly underlying standard "average" atlases, using mesh-based representations endowed with an explicit deformation model. Bayesian inference is used to infer the optimal model parameters from the training data, leading to a simultaneous group-wise registration and atlas estimation scheme that encompasses standard averaging as a special case. We also use Bayesian inference to compare alternative atlas models in light of the training data, and show how this leads to a data compression problem that is intuitive to interpret and computationally feasible. Using this technique, we automatically determine the optimal amount of spatial blurring, the best deformation field flexibility, and the most compact mesh representation. We demonstrate, using 2-D training datasets, that the resulting models are better at capturing the structure in the training data than conventional probabilistic atlases. We also present experiments of the proposed atlas construction technique in 3-D, and show the resulting atlases' potential in fully-automated, pulse sequence-adaptive segmentation of 36 neuroanatomical structures in brain MRI scans.
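The conventional "average" atlas that this paper generalizes can be sketched directly: the probability of a label at each voxel is its relative frequency across the co-registered, manually labeled training volumes. The function name below is illustrative, not from the paper.

```python
import numpy as np

def average_atlas(label_maps, n_labels):
    """Probabilistic atlas by counting: label_maps is an (S, ...) array of
    integer label volumes from S co-registered training subjects; returns an
    (n_labels, ...) array of per-voxel relative label frequencies."""
    label_maps = np.asarray(label_maps)
    return np.stack([(label_maps == k).mean(axis=0) for k in range(n_labels)])
```

With few subjects these counted frequencies are exactly the sparse, poorly generalizing estimates the paper's mesh-based generative model with Bayesian inference is designed to improve on.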
Amoroso, N; Errico, R; Bruno, S; Chincarini, A; Garuccio, E; Sensi, F; Tangaro, S; Tateo, A; Bellotti, R
2015-11-21
In this study we present a novel fully automated Hippocampal Unified Multi-Atlas-Networks (HUMAN) algorithm for the segmentation of the hippocampus in structural magnetic resonance imaging. In multi-atlas approaches atlas selection is of crucial importance for the accuracy of the segmentation. Here we present an optimized method based on the definition of a small peri-hippocampal region to target the atlas learning with linear and non-linear embedded manifolds. All atlases were co-registered to a data driven template resulting in a computationally efficient method that requires only one test registration. The optimal atlases identified were used to train dedicated artificial neural networks whose labels were then propagated and fused to obtain the final segmentation. To quantify data heterogeneity and protocol inherent effects, HUMAN was tested on two independent data sets provided by the Alzheimer's Disease Neuroimaging Initiative and the Open Access Series of Imaging Studies. HUMAN is accurate and achieves state-of-the-art performance (Dice_ADNI = 0.929 ± 0.003 and Dice_OASIS = 0.869 ± 0.002). It is also a robust method that remains stable when applied to the whole hippocampus or to sub-regions (patches). HUMAN also compares favorably with a basic multi-atlas approach and a benchmark segmentation tool such as FreeSurfer.
NASA Astrophysics Data System (ADS)
Amoroso, N.; Errico, R.; Bruno, S.; Chincarini, A.; Garuccio, E.; Sensi, F.; Tangaro, S.; Tateo, A.; Bellotti, R.; the Alzheimer's Disease Neuroimaging Initiative
2015-11-01
In this study we present a novel fully automated Hippocampal Unified Multi-Atlas-Networks (HUMAN) algorithm for the segmentation of the hippocampus in structural magnetic resonance imaging. In multi-atlas approaches atlas selection is of crucial importance for the accuracy of the segmentation. Here we present an optimized method based on the definition of a small peri-hippocampal region to target the atlas learning with linear and non-linear embedded manifolds. All atlases were co-registered to a data driven template resulting in a computationally efficient method that requires only one test registration. The optimal atlases identified were used to train dedicated artificial neural networks whose labels were then propagated and fused to obtain the final segmentation. To quantify data heterogeneity and protocol inherent effects, HUMAN was tested on two independent data sets provided by the Alzheimer’s Disease Neuroimaging Initiative and the Open Access Series of Imaging Studies. HUMAN is accurate and achieves state-of-the-art performance (Dice_ADNI = 0.929 ± 0.003 and Dice_OASIS = 0.869 ± 0.002). It is also a robust method that remains stable when applied to the whole hippocampus or to sub-regions (patches). HUMAN also compares favorably with a basic multi-atlas approach and a benchmark segmentation tool such as FreeSurfer.
NASA Astrophysics Data System (ADS)
Filipcic, A.; Haug, S.; Hostettler, M.; Walker, R.; Weber, M.
2015-12-01
The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing centre, was the highest ranked European system on TOP500 in 2014, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular a custom made integration to the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, a partial GPU acceleration of the Geant4 detector simulations has been implemented.
FlyAtlas: database of gene expression in the tissues of Drosophila melanogaster
Robinson, Scott W.; Herzyk, Pawel; Dow, Julian A. T.; Leader, David P.
2013-01-01
The FlyAtlas resource contains data on the expression of the genes of Drosophila melanogaster in different tissues (currently 25: 17 adult and 8 larval) obtained by hybridization of messenger RNA to Affymetrix Drosophila Genome 2 microarrays. The microarray probe sets cover 13 250 Drosophila genes, detecting 12 533 in an unambiguous manner. The data underlying the original web application (http://flyatlas.org) have been restructured into a relational database and a Java servlet written to provide a new web interface, FlyAtlas 2 (http://flyatlas.gla.ac.uk/), which allows several additional queries. Users can retrieve data for individual genes or for groups of genes belonging to the same or related ontological categories. Assistance in selecting valid search terms is provided by an Ajax ‘autosuggest’ facility that polls the database as the user types. Searches can also focus on particular tissues, and data can be retrieved for the most highly expressed genes, for genes of a particular category with above-average expression or for genes with the greatest difference in expression between the larval and adult stages. A novel facility allows the database to be queried with a specific gene to find other genes with a similar pattern of expression across the different tissues. PMID:23203866
FlyAtlas: database of gene expression in the tissues of Drosophila melanogaster.
Robinson, Scott W; Herzyk, Pawel; Dow, Julian A T; Leader, David P
2013-01-01
The FlyAtlas resource contains data on the expression of the genes of Drosophila melanogaster in different tissues (currently 25: 17 adult and 8 larval) obtained by hybridization of messenger RNA to Affymetrix Drosophila Genome 2 microarrays. The microarray probe sets cover 13,250 Drosophila genes, detecting 12,533 in an unambiguous manner. The data underlying the original web application (http://flyatlas.org) have been restructured into a relational database and a Java servlet written to provide a new web interface, FlyAtlas 2 (http://flyatlas.gla.ac.uk/), which allows several additional queries. Users can retrieve data for individual genes or for groups of genes belonging to the same or related ontological categories. Assistance in selecting valid search terms is provided by an Ajax 'autosuggest' facility that polls the database as the user types. Searches can also focus on particular tissues, and data can be retrieved for the most highly expressed genes, for genes of a particular category with above-average expression or for genes with the greatest difference in expression between the larval and adult stages. A novel facility allows the database to be queried with a specific gene to find other genes with a similar pattern of expression across the different tissues.
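The similar-expression query described above can be illustrated with a small sketch. This is not FlyAtlas's server-side code; the data and ranking metric (Pearson correlation of per-tissue expression profiles) are assumptions for demonstration.

```python
import numpy as np

def most_similar_genes(expr, gene, top_n=3):
    """Rank other genes by Pearson correlation of their expression profiles
    across tissues. expr maps gene name -> per-tissue expression vector."""
    target = np.asarray(expr[gene], dtype=float)
    scores = {}
    for name, profile in expr.items():
        if name == gene:
            continue  # do not match a gene against itself
        scores[name] = np.corrcoef(target, np.asarray(profile, float))[0, 1]
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

For example, a gene whose expression doubles in step with the query gene across all tissues would rank first, while one with the opposite pattern would rank last.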
NASA Astrophysics Data System (ADS)
Barreiro, F. H.; Borodin, M.; De, K.; Golubkov, D.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Padolski, S.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The second generation of the ATLAS Production System, called ProdSys2, is a distributed workload manager that runs hundreds of thousands of jobs daily, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as Grid, clouds, supercomputers and volunteer computers. The system dynamically assigns a group of jobs (a task) to a group of geographically distributed computing resources. Dynamic assignment and resource utilization is one of the major features of the system; it did not exist in the earliest versions of the production system, where the Grid resources topology was predefined using national and/or geographical patterns. The Production System has a sophisticated job fault-recovery mechanism, which efficiently allows multi-terabyte tasks to run without human intervention. We have implemented a “train” model and open-ended production, which allow tasks to be submitted automatically as soon as a new set of data is available and allow physics-group data processing and analysis to be chained with the experiment's central production. We present an overview of the ATLAS Production System and the features and architecture of its major components: task definition, web user interface and monitoring. We describe the important design decisions and lessons learned from operational experience during the first year of LHC Run 2. We also report the performance of the designed system and how various workflows, such as data (re)processing, Monte Carlo and physics-group production, and user analysis, are scheduled and executed within one production system on heterogeneous computing resources.
Kim, Jeong Rye; Shim, Woo Hyun; Yoon, Hee Mang; Hong, Sang Hyup; Lee, Jin Seong; Cho, Young Ah; Kim, Sangki
2017-12-01
The purpose of this study is to evaluate the accuracy and efficiency of a new automatic software system for bone age assessment and to validate its feasibility in clinical practice. A Greulich-Pyle method-based deep-learning technique was used to develop the automatic software system for bone age determination. Using this software, bone age was estimated from left-hand radiographs of 200 patients (3-17 years old) using first-rank bone age (software only), computer-assisted bone age (two radiologists with software assistance), and Greulich-Pyle atlas-assisted bone age (two radiologists with Greulich-Pyle atlas assistance only). The reference bone age was determined by the consensus of two experienced radiologists. First-rank bone ages determined by the automatic software system showed a 69.5% concordance rate and significant correlations with the reference bone age (r = 0.992; p < 0.001). Concordance rates increased with the use of the automatic software system for both reviewer 1 (63.0% for Greulich-Pyle atlas-assisted bone age vs 72.5% for computer-assisted bone age) and reviewer 2 (49.5% for Greulich-Pyle atlas-assisted bone age vs 57.5% for computer-assisted bone age). Reading times were reduced by 18.0% and 40.0% for reviewers 1 and 2, respectively. The automatic software system showed reliably accurate bone age estimations and appeared to enhance efficiency by reducing reading times without compromising diagnostic accuracy.
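The concordance rates reported above reduce to simple counting. The sketch below assumes an exact-match criterion between estimate and reference consensus; the paper's precise matching rule may differ, and the function name is illustrative.

```python
import numpy as np

def concordance_rate(estimated, reference):
    """Percentage of cases whose estimated bone age exactly matches the
    reference consensus bone age."""
    est = np.asarray(estimated)
    ref = np.asarray(reference)
    return 100.0 * (est == ref).mean()
```

Under this reading, a 69.5% first-rank concordance on 200 patients means the software's top-ranked bone age matched the radiologists' consensus in 139 of the 200 cases.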
Multi-atlas segmentation of subcortical brain structures via the AutoSeg software pipeline
Wang, Jiahui; Vachet, Clement; Rumple, Ashley; Gouttard, Sylvain; Ouziel, Clémentine; Perrot, Emilie; Du, Guangwei; Huang, Xuemei; Gerig, Guido; Styner, Martin
2014-01-01
Automated segmentation and labeling of individual brain anatomical regions in MRI is challenging, due to the issue of individual structural variability. Although atlas-based segmentation has shown its potential for both tissue and structure segmentation, due to the inherent natural variability as well as disease-related changes in MR appearance, a single atlas image is often inappropriate to represent the full population of datasets processed in a given neuroimaging study. As an alternative to single-atlas segmentation, the use of multiple atlases alongside label fusion techniques has been introduced, using a set of individual “atlases” that encompasses the expected variability in the studied population. In our study, we proposed a multi-atlas segmentation scheme with a novel graph-based atlas selection technique. We first paired and co-registered all atlases and the subject MR scans. A directed graph with edge weights based on intensity and shape similarity between all MR scans is then computed. The set of neighboring templates is selected via clustering of the graph. Finally, weighted majority voting is employed to create the final segmentation over the selected atlases. This multi-atlas segmentation scheme is used to extend a single-atlas-based segmentation toolkit entitled AutoSeg, which is an open-source, extensible C++ based software pipeline employing BatchMake for its pipeline scripting, developed at the Neuro Image Research and Analysis Laboratories of the University of North Carolina at Chapel Hill. AutoSeg performs N4 intensity inhomogeneity correction, rigid registration to a common template space, automated brain tissue classification based skull-stripping, and the multi-atlas segmentation. The multi-atlas-based AutoSeg has been evaluated on subcortical structure segmentation with a testing dataset of 20 adult brain MRI scans and 15 atlas MRI scans. The AutoSeg achieved mean Dice coefficients of 81.73% for the subcortical structures.
PMID:24567717
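The weighted majority voting that closes the pipeline above can be sketched as follows. This is an illustration of the technique, not the AutoSeg C++ code; the per-atlas weights (e.g. the graph similarity scores) are assumed inputs.

```python
import numpy as np

def weighted_majority_vote(labels, weights, n_labels):
    """Fuse candidate segmentations by weighted voting.
    labels: (A, ...) integer label maps from A selected atlases.
    weights: length-A sequence of per-atlas weights.
    Returns the per-voxel label with the highest total weighted vote."""
    labels = np.asarray(labels)
    votes = np.zeros((n_labels,) + labels.shape[1:])
    for lab, w in zip(labels, weights):
        for k in range(n_labels):
            votes[k] += w * (lab == k)   # each atlas casts weight w for its label
    return votes.argmax(axis=0)
```

With equal weights this reduces to plain majority voting; similarity-based weights let atlases that better resemble the subject dominate the fused result.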
Chen, Yasheng; Juttukonda, Meher; Su, Yi; Benzinger, Tammie; Rubin, Brian G.; Lee, Yueh Z.; Lin, Weili; Shen, Dinggang; Lalush, David
2015-01-01
Purpose To develop a positron emission tomography (PET) attenuation correction method for brain PET/magnetic resonance (MR) imaging by estimating pseudo computed tomographic (CT) images from T1-weighted MR and atlas CT images. Materials and Methods In this institutional review board–approved and HIPAA-compliant study, PET/MR/CT images were acquired in 20 subjects after obtaining written consent. A probabilistic air segmentation and sparse regression (PASSR) method was developed for pseudo CT estimation. Air segmentation was performed with assistance from a probabilistic air map. For nonair regions, the pseudo CT numbers were estimated via sparse regression by using atlas MR patches. The mean absolute percentage error (MAPE) on PET images was computed as the normalized mean absolute difference in PET signal intensity between a method and the reference standard continuous CT attenuation correction method. Friedman analysis of variance and Wilcoxon matched-pairs tests were performed for statistical comparison of MAPE between the PASSR method and Dixon segmentation, CT segmentation, and population averaged CT atlas (mean atlas) methods. Results The PASSR method yielded a mean MAPE ± standard deviation of 2.42% ± 1.0, 3.28% ± 0.93, and 2.16% ± 1.75, respectively, in the whole brain, gray matter, and white matter, which were significantly lower than the Dixon, CT segmentation, and mean atlas values (P < .01). Moreover, 68.0% ± 16.5, 85.8% ± 12.9, and 96.0% ± 2.5 of whole-brain volume had within ±2%, ±5%, and ±10% percentage error by using PASSR, respectively, which was significantly higher than other methods (P < .01). Conclusion PASSR outperformed the Dixon, CT segmentation, and mean atlas methods by reducing PET error owing to attenuation correction. © RSNA, 2014 PMID:25521778
NASA Astrophysics Data System (ADS)
McKee, Shawn;
2017-10-01
Networks have played a critical role in high-energy physics (HEP), enabling us to access and effectively utilize globally distributed resources to meet the needs of our physicists. Because of their importance in enabling our grid computing infrastructure, many physicists have taken leading roles in research and education (R&E) networking, participating in, and even convening, network-related meetings and research programs with the broader networking community worldwide. This has led to HEP benefiting from excellent global networking capabilities for little to no direct cost. However, as other science domains ramp up their need for similar networking, it becomes less clear that this situation will continue unchanged. What this means for ATLAS in particular needs to be understood. ATLAS has evolved its computing model since the LHC started, based upon its experience with using globally distributed resources. The most significant theme of those changes has been increased reliance upon, and use of, its networks. We will report on a number of networking initiatives in ATLAS, including participation in the global perfSONAR network monitoring and measuring efforts of WLCG and OSG, the collaboration with the LHCOPN/LHCONE effort, the integration of network awareness into PanDA, the use of the evolving ATLAS analytics framework to better understand our networks, and the changes in our DDM system to allow remote access to data. We will also discuss new efforts underway that are exploring the inclusion and use of software-defined networks (SDN) and how ATLAS might benefit from: orchestration and optimization of distributed data access and data movement; better control of workflows, end to end; prioritization of time-critical versus normal tasks; and improvements in the efficiency of resource usage.
SU-F-T-405: Development of a Rapid Cardiac Contouring Tool Using Landmark-Driven Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelletier, C; Jung, J; Mosher, E
2016-06-15
Purpose: This study aims to develop a tool to rapidly delineate cardiac substructures for use in dosimetry for large-scale clinical trials or epidemiological investigations. The goal is to produce a system that can semi-automatically delineate nine cardiac structures to a reasonable accuracy within a couple of minutes. Methods: The cardiac contouring tool employs a Most Similar Atlas method, in which a selection criterion is used to pre-select the most similar model to the patient from a library of pre-defined atlases. Sixty contrast-enhanced cardiac computed tomography angiography (CTA) scans (30 male and 30 female) were manually contoured to serve as the atlas library. For each CTA, 12 structures were delineated. The Kabsch algorithm was used to compute the optimum rotation and translation matrices between the patient and atlas. The minimum root mean squared distance between the patient and atlas after transformation was used to select the most similar atlas. An initial study using 10 CTA sets was performed to assess system feasibility. A leave-one-patient-out evaluation was performed, and fit criteria were calculated to evaluate the fit accuracy compared to manual contours. Results: For the pilot study, mean Dice indices of 0.895 were achieved for the whole heart, 0.867 for the ventricles, and 0.802 for the atria. In addition, mean distance was measured via the chord length distribution (CLD) between ground truth and the atlas structures for the four coronary arteries. The mean CLD for all coronary arteries was below 14 mm, with the left circumflex artery showing the best agreement (7.08 mm). Conclusion: The cardiac contouring tool is able to delineate cardiac structures with reasonable accuracy in less than 90 seconds. Pilot data indicate that the system is able to delineate the whole heart and ventricles within reasonable accuracy using even a limited library. We are extending the atlas set to 60 adult males and females in total.
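The Kabsch algorithm named in this abstract computes the optimal rigid rotation and translation between corresponding landmark sets. A minimal NumPy sketch follows; the function name and interface are illustrative, not the tool's actual code:

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rotation R and translation t mapping point set P onto Q
    (rows are corresponding landmarks), minimizing the RMS distance."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0] * (len(H) - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Applying the recovered (R, t) to the patient landmarks and measuring the residual RMS distance to each atlas gives the most-similar-atlas selection criterion described above.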
Information system to manage anatomical knowledge and image data about brain
NASA Astrophysics Data System (ADS)
Barillot, Christian; Gibaud, Bernard; Montabord, E.; Garlatti, S.; Gauthier, N.; Kanellos, I.
1994-09-01
This paper reports on the first results obtained in a project aiming to develop a computerized system to manage knowledge about brain anatomy. The emphasis is put on the design of a knowledge base that includes a symbolic model of cerebral anatomical structures (grey nuclei, cortical structures such as gyri and sulci, ventricles, vessels, etc.) and of hypermedia facilities allowing the user to retrieve and display information associated with the objects (texts, drawings, images). Atlas plates digitized from a stereotactic atlas are also used to provide natural and effective means of communication between the user and the system.
Astrophysics experiments with radioactive beams at ATLAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Back, B. B.; Clark, J. A.; Pardo, R. C.
Reactions involving short-lived nuclei play an important role in nuclear astrophysics, especially in explosive scenarios that occur in novae, supernovae, or X-ray bursts. This article describes the nuclear astrophysics program with radioactive ion beams at the ATLAS accelerator at Argonne National Laboratory. The CARIBU facility as well as recent improvements to the in-flight technique are discussed. New detectors that are important for studies of the rapid proton- or rapid neutron-capture processes are described. Finally, we briefly mention plans for future upgrades to enhance the intensity, purity, and range of in-flight and CARIBU beams.
2017-03-17
The Orbital ATK Cygnus pressurized cargo module, enclosed in its payload fairing and secured on a KAMAG transporter, is transported from the Payload Hazardous Servicing Facility at NASA's Kennedy Space Center in Florida to the Space Launch Complex 41 at Cape Canaveral Air Force Station, for mating to the United Launch Alliance (ULA) Atlas V rocket. The Orbital ATK CRS-7 commercial resupply services mission to the International Space Station is scheduled to launch atop the Atlas V from pad 41. Cygnus will deliver 7,600 pounds of supplies, equipment and scientific research materials to the space station.
GOES-S Atlas V Centaur Stage Transport to VIF
2018-02-08
The Centaur upper stage that will help launch NOAA's Geostationary Operational Environmental Satellite-S, or GOES-S, arrives at the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The Centaur will be mated to a United Launch Alliance Atlas V booster. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V Centaur Stage OVI
2018-02-08
At the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida, a crane lifts a Centaur upper stage for mating to a United Launch Alliance Atlas V rocket that will boost NOAA's Geostationary Operational Environmental Satellite-S, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V Last SRB Lift to Booster
2018-02-07
At the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida, solid rocket boosters (SRBs) have been mated to a United Launch Alliance Atlas V first stage. The SRBs will help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V First Stage Booster Lift to Vertical On Stand (LV
2018-01-31
A technician adjusts a crane that will lift a United Launch Alliance Atlas V first stage at the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The rocket will be positioned on its launcher to boost the Geostationary Operational Environmental Satellite, or GOES-S. It will be the second in a series of four advanced geostationary weather satellites and will significantly improve the detection and observation of environmental phenomena that directly affect public safety. GOES-S is slated to launch March 1, 2018.
Astronaut Ellen Ochoa in small life raft during training
1994-06-28
S94-37520 (28 June 1994) --- Astronaut Ellen Ochoa, STS-66 payload commander, secures herself in a small life raft during an emergency bailout training exercise in the Johnson Space Center's (JSC) Weightless Environment Training Facility (WET-F). Making her second flight in space, Ochoa will join four other NASA astronauts and a European mission specialist for a week and a half in space aboard the Space Shuttle Atlantis in support of the Atmospheric Laboratory for Applications and Science (ATLAS-3) mission. Ochoa was a mission specialist on the ATLAS-2 mission in April of 1993.
2017-08-09
A crane is used to lift the payload fairing containing NASA's Tracking and Data Relay Satellite (TDRS-M) at the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. TDRS-M will be stacked atop the United Launch Alliance Atlas V Centaur upper stage. TDRS-M will be the latest spacecraft destined for the agency's constellation of communications satellites that allows nearly continuous contact with orbiting spacecraft ranging from the International Space Station and Hubble Space Telescope to the array of scientific observatories. Liftoff atop the ULA Atlas V rocket is scheduled for Aug. 18, 2017.
GOES-S Atlas V Centaur Stage OVI
2018-02-08
At the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida, technicians and engineers monitor progress as a Centaur upper stage is mated to a United Launch Alliance Atlas V rocket that will boost NOAA's Geostationary Operational Environmental Satellite-S, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V Last SRB Lift to Booster
2018-02-07
At the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida, a solid rocket booster (SRB) is mated to a United Launch Alliance Atlas V first stage. The SRB will help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V First SRB Mate to Booster
2018-02-01
A solid rocket booster (SRB) is lifted for mating to a United Launch Alliance Atlas V first stage in the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The SRB will help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V First SRB Mate to Booster
2018-02-01
A solid rocket booster (SRB) is prepared for mating to a United Launch Alliance Atlas V first stage in the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The SRB will help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V First SRB Mate to Booster
2018-02-01
A crane lifts a solid rocket booster (SRB) for mating to a United Launch Alliance Atlas V first stage in the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The SRB will help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V First SRB Mate to Booster
2018-02-01
A solid rocket booster (SRB) is mated to a United Launch Alliance Atlas V first stage in the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The SRB will help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V Last SRB Lift to Booster
2018-02-07
Technicians and engineers prepare to mate a solid rocket booster (SRB) to a United Launch Alliance Atlas V first stage in the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The SRB will help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V Last SRB Lift to Booster
2018-02-07
At the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida, a solid rocket booster (SRB) is prepared for mating to a United Launch Alliance Atlas V first stage. The SRB will help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V Last SRB Lift to Booster
2018-02-07
At the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida, a solid rocket booster (SRB) is lifted for mating to a United Launch Alliance Atlas V first stage. The SRB will help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V Last SRB Lift to Booster
2018-02-07
At the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida, a solid rocket booster (SRB) is mated to a United Launch Alliance Atlas V first stage. The SRB will help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V Last SRB Lift to Booster
2018-02-07
At the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida, technicians support operations to mate a solid rocket booster (SRB) to a United Launch Alliance Atlas V first stage. The SRB will help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V Last SRB Lift to Booster
2018-02-07
At the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida, technicians support operations to mate a solid rocket booster (SRB) to a United Launch Alliance Atlas V first stage. The SRB will help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
Putzer, David; Moctezuma, Jose Luis; Nogler, Michael
2017-11-01
An increasing number of orthopaedic surgeons are using computer aided planning tools for bone removal applications. The aim of the study was to consolidate a set of generic functions to be used for a 3D computer assisted planning or simulation. A limited subset of 30 surgical procedures was analyzed and verified in 243 surgical procedures of a surgical atlas. Fourteen generic functions to be used in 3D computer assisted planning and simulations were extracted. Our results showed that the average procedure comprises 14 ± 10 (SD) steps with ten different generic planning steps and four generic bone removal steps. In conclusion, the study shows that with a limited number of 14 planning functions it is possible to perform 243 surgical procedures out of Campbell's Operative Orthopedics atlas. The results may be used as a basis for versatile generic intraoperative planning software.
Morphometric Atlas Selection for Automatic Brachial Plexus Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van de Velde, Joris, E-mail: joris.vandevelde@ugent.be; Department of Radiotherapy, Ghent University, Ghent; Wouters, Johan
Purpose: The purpose of this study was to determine the effects of atlas selection, based on different morphometric parameters, on the accuracy of automatic brachial plexus (BP) segmentation for radiation therapy planning. The segmentation accuracy was measured by comparing all of the generated automatic segmentations with anatomically validated gold standard atlases developed using cadavers. Methods and Materials: Twelve cadaver computed tomography (CT) atlases (3 males, 9 females; mean age: 73 years) were included in the study. One atlas was selected to serve as a patient, and the other 11 atlases were registered separately onto this “patient” using deformable image registration. This procedure was repeated for every atlas as a patient. Next, the Dice and Jaccard similarity indices and inclusion index were calculated for every registered BP against the original gold standard BP. In parallel, differences in several morphometric parameters that may influence the BP segmentation accuracy were measured for the different atlases. Specific brachial plexus-related, CT-visible bony points were used to define the morphometric parameters. Subsequently, correlations between the similarity indices and morphometric parameters were calculated. Results: A clear negative correlation between the difference in protraction-retraction distance and the similarity indices was observed (mean Pearson correlation coefficient = −0.546). All of the other investigated Pearson correlation coefficients were weak. Conclusions: Differences in the shoulder protraction-retraction position between the atlas and the patient during planning CT influence the BP autosegmentation accuracy. A greater difference in the protraction-retraction distance between the atlas and the patient reduces the accuracy of the automatic BP segmentation result.
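The Dice and Jaccard similarity indices used throughout these segmentation studies are standard overlap measures on binary masks. A minimal sketch (our own helper, purely illustrative):

```python
import numpy as np

def overlap_indices(seg_a, seg_b):
    """Dice and Jaccard similarity between two binary segmentations."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())        # 2|A∩B| / (|A|+|B|)
    jaccard = inter / np.logical_or(a, b).sum()     # |A∩B| / |A∪B|
    return dice, jaccard
```

Both range from 0 (no overlap) to 1 (identical masks); Dice is always at least as large as Jaccard for non-empty overlap.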
ATLAS computing on Swiss Cloud SWITCHengines
NASA Astrophysics Data System (ADS)
Haug, S.; Sciacca, F. G.; ATLAS Collaboration
2017-10-01
Consolidation towards more computing at flat budgets, beyond what pure chip technology can offer, is a requirement for the full scientific exploitation of the future data from the Large Hadron Collider at CERN in Geneva. One consolidation measure is to exploit cloud infrastructures whenever they are financially competitive. We report on the technical solutions used and the performance achieved running simulation tasks for the ATLAS experiment on SWITCHengines, a new infrastructure-as-a-service offering to Swiss academia by the National Research and Education Network SWITCH. While the solutions and performance are general, the financial considerations and policies, on which we also report, are country specific.
Browsing Software of the Visible Korean Data Used for Teaching Sectional Anatomy
ERIC Educational Resources Information Center
Shin, Dong Sun; Chung, Min Suk; Park, Hyo Seok; Park, Jin Seo; Hwang, Sung Bae
2011-01-01
The interpretation of computed tomographs (CTs) and magnetic resonance images (MRIs) to diagnose clinical conditions requires basic knowledge of sectional anatomy. Sectional anatomy has traditionally been taught using sectioned cadavers, atlases, and/or computer software. The computer software commonly used for this subject is practical and…
A Computer-Based Atlas of Global Instrumental Climate Data (DB1003)
Bradley, Raymond S.; Ahern, Linda G.; Keimig, Frank T.
1994-01-01
Color-shaded and contoured images of global, gridded instrumental data have been produced as a computer-based atlas. Each image simultaneously depicts anomaly maps of surface temperature, sea-level pressure, 500-mbar geopotential heights, and percentages of reference-period precipitation. Monthly, seasonal, and annual composites are available in either cylindrical equidistant or northern and southern hemisphere polar projections. Temperature maps are available from 1854 to 1991, precipitation from 1851 to 1989, sea-level pressure from 1899 to 1991, and 500-mbar heights from 1946 to 1991. The source of data for the temperature images is Jones et al.'s global gridded temperature anomalies. The precipitation images were derived from Eischeid et al.'s global gridded precipitation percentages. Grids from the Data Support Section, National Center for Atmospheric Research (NCAR) were the sources for the sea-level-pressure and 500-mbar geopotential-height images. All images are in GIF files (1024 × 822 pixels, 256 colors) and can be displayed on many different computer platforms. Each annual subdirectory contains 141 images, each seasonal subdirectory contains 563 images, and each monthly subdirectory contains 1656 images. The entire atlas requires approximately 340 MB of disk space, but users may retrieve any number of images at one time.
Liyanage, Kishan Andre; Steward, Christopher; Moffat, Bradford Armstrong; Opie, Nicholas Lachlan; Rind, Gil Simon; John, Sam Emmanuel; Ronayne, Stephen; May, Clive Newton; O'Brien, Terence John; Milne, Marjorie Eileen; Oxley, Thomas James
2016-01-01
Segmentation is the process of partitioning an image into subdivisions and can be applied to medical images to isolate anatomical or pathological areas for further analysis. This process can be done manually or automated through the use of image-processing software. Atlas-based segmentation automates this process by the use of a pre-labelled template and a registration algorithm. We developed an ovine brain atlas that can be used as a model for neurological conditions such as Parkinson's disease and focal epilepsy. Seventeen female Corriedale ovine brains were imaged in vivo in a 1.5T (low-resolution) MRI scanner. Thirteen of the low-resolution images were combined using a template construction algorithm to form a low-resolution template. The template was labelled to form an atlas and tested by comparing manual with atlas-based segmentations against the remaining four low-resolution images. The comparisons were in the form of similarity metrics used in previous segmentation research. Dice Similarity Coefficients were utilised to determine the degree of overlap between eight independent, manual and atlas-based segmentations, with values ranging from 0 (no overlap) to 1 (complete overlap). For 7 of these 8 segmented areas, we achieved a Dice Similarity Coefficient of 0.5-0.8. The amygdala was difficult to segment due to its variable location and similar intensity to surrounding tissues, resulting in Dice Coefficients of 0.0-0.2. We developed a low-resolution ovine brain atlas with eight clinically relevant areas labelled. This brain atlas performed comparably to prior human atlases described in the literature and to intra-observer error, providing an atlas that can be used to guide further research using ovine brains as a model; it is hosted online for public access.
NASA Astrophysics Data System (ADS)
Daryanani, Aditya; Dangi, Shusil; Ben-Zikri, Yehuda Kfir; Linte, Cristian A.
2016-03-01
Magnetic Resonance Imaging (MRI) is a standard-of-care imaging modality for cardiac function assessment and guidance of cardiac interventions thanks to its high image quality and lack of exposure to ionizing radiation. Cardiac health parameters such as left ventricular volume, ejection fraction, myocardial mass, thickness, and strain can be assessed by segmenting the heart from cardiac MRI images. Furthermore, the segmented pre-operative anatomical heart models can be used to precisely identify regions of interest to be treated during minimally invasive therapy. Hence, the use of accurate and computationally efficient segmentation techniques is critical, especially for intra-procedural guidance applications that rely on the peri-operative segmentation of subject-specific datasets without delaying the procedure workflow. Atlas-based segmentation incorporates prior knowledge of the anatomy of interest from expertly annotated image datasets. Typically, the ground truth atlas label is propagated to a test image using a combination of global and local registration. The high computational cost of non-rigid registration motivated us to obtain an initial segmentation using global transformations based on an atlas of the left ventricle from a population of patient MRI images and to refine it using a well-developed technique based on graph cuts. Here we quantitatively compare the segmentations obtained from the global and global-plus-local atlases, refined using graph-cut-based techniques, with the expert segmentations according to several similarity metrics, including the Dice correlation coefficient, Jaccard coefficient, Hausdorff distance, and mean absolute distance error.
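Of the metrics listed here, the Hausdorff distance is the least obvious to compute by hand. A brute-force sketch over boundary point sets follows (illustrative only; practical implementations typically use distance transforms or k-d trees for speed):

```python
import numpy as np

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between two point sets
    (N x d and M x d arrays): the largest distance from any point
    in one set to the nearest point in the other set."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    # Pairwise Euclidean distances via broadcasting (fine for small sets).
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Unlike the overlap coefficients, this metric is sensitive to single outlier points on the segmentation boundary, which is why it is often reported alongside the mean absolute distance error.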
Subcortical structure segmentation using probabilistic atlas priors
NASA Astrophysics Data System (ADS)
Gouttard, Sylvain; Styner, Martin; Joshi, Sarang; Smith, Rachel G.; Cody Hazlett, Heather; Gerig, Guido
2007-03-01
The segmentation of the subcortical structures of the brain is required for many forms of quantitative neuroanatomic analysis. The volumetric and shape parameters of structures such as the lateral ventricles, putamen, caudate, hippocampus, pallidus, and amygdala are employed to characterize a disease or its evolution. This paper presents a fully automatic segmentation of these structures via non-rigid registration of a probabilistic atlas prior, alongside a comprehensive validation. Our approach is based on an unbiased diffeomorphic atlas with probabilistic spatial priors built from a training set of MR images with corresponding manual segmentations. The atlas building computes an average image along with transformation fields mapping each training case to the average image. These transformation fields are applied to the manually segmented structures of each case in order to obtain a probabilistic map on the atlas. When applying the atlas for automatic structural segmentation, an MR image is first intensity inhomogeneity corrected, skull stripped, and intensity calibrated to the atlas. Then the atlas image is registered to the image using an affine followed by a deformable registration matching the gray level intensity. Finally, the registration transformation is applied to the probabilistic map of each structure, which is then thresholded at 0.5 probability. Using manual segmentations for comparison, measures of volumetric differences show high correlation with our results. Furthermore, the Dice coefficient, which quantifies the volumetric overlap, is higher than 62% for all structures and is close to 80% for the basal ganglia. The intraclass correlation coefficient computed on these same datasets shows a good inter-method correlation of the volumetric measurements. Using a dataset of a single patient scanned 10 times on 5 different scanners, reliability is shown with a coefficient of variance of less than 2 percent over the whole dataset.
Overall, these validation and reliability studies show that our method accurately and reliably segments almost all structures. Only the hippocampus and amygdala segmentations exhibit relatively low correlation with the manual segmentation in at least one of the validation studies, although they still show appropriate Dice overlap coefficients.
Computing shifts to monitor ATLAS distributed computing infrastructure and operations
NASA Astrophysics Data System (ADS)
Adam, C.; Barberis, D.; Crépé-Renaudin, S.; De, K.; Fassi, F.; Stradling, A.; Svatos, M.; Vartapetian, A.; Wolters, H.
2017-10-01
The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run 2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts’ workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run 1, this task was accomplished by a person of the expert team called the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run 2. The CRC position was proposed to cover some of the AMOD's former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help with the training of future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing ADC in relevant meetings. The CRC also facilitates communication between the ADC experts team and the other ADC shifters. These include the Distributed Analysis Support Team (DAST), which is the first point of contact for addressing all distributed analysis questions, and the ATLAS Distributed Computing Shifters (ADCoS), who check and report problems in central services, sites, Tier-0 export, data transfers, and production tasks. Finally, the CRC looks at the level of ADC activities on a weekly or monthly timescale to ensure that ADC resources are used efficiently.
Implementation of the ATLAS trigger within the multi-threaded software framework AthenaMT
NASA Astrophysics Data System (ADS)
Wynne, Ben; ATLAS Collaboration
2017-10-01
We present an implementation of the ATLAS High Level Trigger, HLT, that provides parallel execution of trigger algorithms within the ATLAS multithreaded software framework, AthenaMT. This development will enable the ATLAS HLT to meet future challenges due to the evolution of computing hardware and upgrades of the Large Hadron Collider, LHC, and ATLAS Detector. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further, to up to 7.5 times the design value, in 2026 following LHC and ATLAS upgrades. This includes an upgrade of the ATLAS trigger architecture that will result in an increase in the HLT input rate by a factor of 4 to 10 compared to the current maximum rate of 100 kHz. The current ATLAS multiprocess framework, AthenaMP, manages a number of processes that each execute algorithms sequentially for different events. AthenaMT will provide a fully multi-threaded environment that will additionally enable concurrent execution of algorithms within an event. This has the potential to significantly reduce the memory footprint on future manycore devices. An additional benefit of the HLT implementation within AthenaMT is that it facilitates the integration of offline code into the HLT. The trigger must retain high rejection in the face of increasing numbers of pileup collisions. This will be achieved by greater use of offline algorithms that are designed to maximize the discrimination of signal from background. Therefore a unification of the HLT and offline reconstruction software environment is required. This has been achieved while at the same time retaining important HLT-specific optimizations that minimize the computation performed to reach a trigger decision. Such optimizations include early event rejection and reconstruction within restricted geometrical regions. We report on an HLT prototype in which the need for HLT-specific components has been reduced to a minimum.
Promising results have been obtained with a prototype that includes the key elements of trigger functionality, such as regional reconstruction and early event rejection. We report on the first experience of migrating trigger selections to this new framework and present the next steps towards a full implementation of the ATLAS trigger.
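The intra-event concurrency that AthenaMT adds on top of AthenaMP's per-event parallelism can be illustrated with a minimal sketch in plain Python (this is not Athena code; the algorithm names, event contents, and the trigger threshold are invented for illustration): two independent algorithms run concurrently for one event, and a dependent decision step waits on both results.

```python
# Toy sketch of intra-event algorithm concurrency, in the spirit of AthenaMT.
# Not ATLAS code: algorithm names, event data, and threshold are invented.
from concurrent.futures import ThreadPoolExecutor

def tracking(event):
    """Independent algorithm 1: no dependency on calorimetry."""
    return event["hits"] * 2

def calorimetry(event):
    """Independent algorithm 2: no dependency on tracking."""
    return event["cells"] + 1

def trigger_decision(trk, calo):
    """Dependent step: requires both results before it can run."""
    return trk + calo > 10

event = {"hits": 3, "cells": 5}
with ThreadPoolExecutor(max_workers=2) as pool:
    f_trk = pool.submit(tracking, event)       # scheduled concurrently
    f_calo = pool.submit(calorimetry, event)   # scheduled concurrently
    decision = trigger_decision(f_trk.result(), f_calo.result())
print(decision)  # 6 + 6 = 12 > 10 -> True
```

A real scheduler would build this dependency graph from declared data handles rather than explicit `submit` calls, which is what enables early event rejection: once the decision is reached, remaining algorithms for that event need not run.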
A Computer-Based Atlas of a Rat Dissection.
ERIC Educational Resources Information Center
Quentin-Baxter, Megan; Dewhurst, David
1990-01-01
A hypermedia computer program that uses text, graphics, sound, and animation with associative information linking techniques to teach the functional anatomy of a rat is described. The program includes a nonintimidating tutor, to which the student may turn. (KR)
Atlasmaker: A Grid-based Implementation of the Hyperatlas
NASA Astrophysics Data System (ADS)
Williams, R.; Djorgovski, S. G.; Feldmann, M. T.; Jacob, J.
2004-07-01
The Atlasmaker project is using Grid technology, in combination with NVO interoperability, to create new knowledge resources in astronomy. The product is a multi-faceted, multi-dimensional, scientifically trusted image atlas of the sky, made by federating many different surveys at different wavelengths, times, resolutions, polarizations, etc. The Atlasmaker software does resampling and mosaicking of image collections, and is well-suited to operate with the Hyperatlas standard. Requests can be satisfied via on-demand computations or by accessing a data cache. Computed data is stored in a distributed virtual file system, such as the Storage Resource Broker (SRB). We expect these atlases to be a new and powerful paradigm for knowledge extraction in astronomy, as well as a magnificent way to build educational resources. The system is being incorporated into the data analysis pipeline of the Palomar-Quest synoptic survey, and is being used to generate all-sky atlases from the 2MASS, SDSS, and DPOSS surveys for joint object detection.
NASA Astrophysics Data System (ADS)
Gehrcke, Jan-Philip; Kluth, Stefan; Stonjek, Stefan
2010-04-01
We show how the ATLAS offline software is ported on the Amazon Elastic Compute Cloud (EC2). We prepare an Amazon Machine Image (AMI) on the basis of the standard ATLAS platform Scientific Linux 4 (SL4). Then an instance of the SL4 AMI is started on EC2 and we install and validate a recent release of the ATLAS offline software distribution kit. The installed software is archived as an image on the Amazon Simple Storage Service (S3) and can be quickly retrieved and connected to new SL4 AMI instances using the Amazon Elastic Block Store (EBS). ATLAS jobs can then configure against the release kit using the ATLAS configuration management tool (cmt) in the standard way. The output of jobs is exported to S3 before the SL4 AMI is terminated. Job status information is transferred to the Amazon SimpleDB service. The whole process of launching instances of our AMI, starting, monitoring and stopping jobs and retrieving job output from S3 is controlled from a client machine using python scripts implementing the Amazon EC2/S3 API via the boto library working together with small scripts embedded in the SL4 AMI. We report our experience with setting up and operating the system using standard ATLAS job transforms.
The ATLAS EventIndex: data flow and inclusion of other metadata
NASA Astrophysics Data System (ADS)
Barberis, D.; Cárdenas Zárate, S. E.; Favareto, A.; Fernandez Casani, A.; Gallas, E. J.; Garcia Montoro, C.; Gonzalez de la Hoz, S.; Hrivnac, J.; Malon, D.; Prokoshin, F.; Salt, J.; Sanchez, J.; Toebbicke, R.; Yuan, R.; ATLAS Collaboration
2016-10-01
The ATLAS EventIndex is the catalogue of the event-related metadata for the information collected from the ATLAS detector. The basic unit of this information is the event record, containing the event identification parameters, pointers to the files containing this event as well as trigger decision information. The main use case for the EventIndex is event picking, as well as data consistency checks for large production campaigns. The EventIndex employs the Hadoop platform for data storage and handling, as well as a messaging system for the collection of information. The information for the EventIndex is collected both at Tier-0, when the data are first produced, and from the Grid, when various types of derived data are produced. The EventIndex uses various types of auxiliary information from other ATLAS sources for data collection and processing: trigger tables from the condition metadata database (COMA), dataset information from the data catalogue AMI and the Rucio data management system and information on production jobs from the ATLAS production system. The ATLAS production system is also used for the collection of event information from the Grid jobs. EventIndex developments started in 2012 and in the middle of 2015 the system was commissioned and started collecting event metadata, as a part of ATLAS Distributed Computing operations.
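The event-picking use case can be sketched as a lookup keyed on the event identification parameters. This toy model uses a plain in-memory dict rather than the Hadoop-based store described above, and the run numbers, file names, and record layout are all invented for illustration:

```python
# Toy model of EventIndex event picking: map event identification
# parameters to a pointer into the file holding that event.
# All record contents here are invented; the real system stores this
# mapping in Hadoop and includes trigger decision information as well.

index = {
    (266904, 1, 187): {"file": "data15.AOD.pool.root", "offset": 42},
    (266904, 1, 201): {"file": "data15.AOD.pool.root", "offset": 97},
}

def pick_event(run, lumiblock, event_no):
    """Return the storage pointer for one event, or None if unknown."""
    return index.get((run, lumiblock, event_no))

rec = pick_event(266904, 1, 187)
print(rec["file"], rec["offset"])  # -> data15.AOD.pool.root 42
```

The same mapping, scanned in bulk rather than probed per event, supports the data-consistency-check use case: comparing the set of indexed events against what a production campaign was expected to produce.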
Ground breaking at Astrotech for a new facility
NASA Technical Reports Server (NTRS)
1999-01-01
Dirt flies during a ground-breaking ceremony to kick off Astrotech Space Operations' construction of a new satellite preparation facility to support the Delta IV, Boeing's winning entrant in the Air Force Evolved Expendable Launch Vehicle (EELV) Program. Wielding shovels are (from left to right) Tom Alexico; Chet Lee, chairman, Astrotech Space Operations; Gen. Forrest McCartney, vice president, Launch Operations, Lockheed Martin; Richard Murphy, director, Delta Launch Operations, The Boeing Company; Keith Wendt; Toby Voltz; Loren Shriver, deputy director, Launch & Payload Processing, Kennedy Space Center; Truman Scarborough, Brevard County commissioner; U.S. Representative 15th Congressional District David Weldon; Ron Swank; and watching the action at right is George Baker, president, Astrotech Space Operations. Astrotech is located in Titusville, Fla. It is a wholly owned subsidiary of SPACEHAB, Inc., and has been awarded a 10-year contract to provide payload processing services for The Boeing Company. The facility will enable Astrotech to support the full range of satellite sizes planned for launch aboard Delta II, III and IV launch vehicles, as well as the Atlas V, Lockheed Martin's entrant in the EELV Program. The Atlas V will be used to launch satellites for government, including NASA, and commercial customers.
2011-07-14
CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, the trailer transporting the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission arrives at the RTG storage facility (RTGF). The MMRTG is returning to the RTGF following a fit check on MSL's Curiosity rover in the Payload Hazardous Servicing Facility (PHSF). The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. MSL's components include a compact car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 25 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Troy Cryder
2011-07-14
CAPE CANAVERAL, Fla. -- At the RTG storage facility (RTGF) at NASA's Kennedy Space Center in Florida, preparations are under way to offload the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission from the MMRTG trailer. The MMRTG is returning to the RTGF following a fit check on MSL's Curiosity rover in the Payload Hazardous Servicing Facility (PHSF). The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. MSL's components include a compact car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 25 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Troy Cryder
2011-07-14
CAPE CANAVERAL, Fla. -- The multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission is uncovered in the high bay of the RTG storage facility (RTGF) at NASA's Kennedy Space Center in Florida. The MMRTG was returned to the RTGF following a fit check on MSL's Curiosity rover in the Payload Hazardous Servicing Facility (PHSF). The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. MSL's components include a compact car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 25 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Troy Cryder
2011-07-14
CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission, secured inside the MMRTG trailer, makes its way between the Payload Hazardous Servicing Facility (PHSF) and the RTG storage facility. The MMRTG is being moved following a fit check on MSL's Curiosity rover in the PHSF. The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. MSL's components include a compact car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 25 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Troy Cryder
2009-08-11
CAPE CANAVERAL, Fla. – At the Astrotech Space Operations facility in Titusville, Fla., workers in the control room monitor the data on computer screens from the movement of the high-gain antenna on the Solar Dynamics Observatory, or SDO. The SDO is undergoing performance testing. All of the spacecraft science instruments are being tested in their last major evaluation before launch. SDO is the first space weather research network mission in NASA's Living With a Star Program. The spacecraft's long-term measurements will give solar scientists in-depth information about changes in the sun's magnetic field and insight into how they affect Earth. In preparation for launch, engineers will perform a battery of comprehensive tests to ensure SDO can withstand the stresses and vibrations of the launch itself, as well as what it will encounter in the space environment after launch. Liftoff on an Atlas V rocket is scheduled for Dec. 4. Photo credit: NASA/Jack Pfaller
Off-line commissioning of EBIS and plans for its integration into ATLAS and CARIBU.
Ostroumov, P N; Barcikowski, A; Dickerson, C A; Mustapha, B; Perry, A; Sharamentov, S I; Vondrasek, R C; Zinkann, G
2016-02-01
An Electron Beam Ion Source Charge Breeder (EBIS-CB) has been developed at Argonne to breed radioactive beams from the CAlifornium Rare Isotope Breeder Upgrade (CARIBU) facility at Argonne Tandem Linac Accelerator System (ATLAS). The EBIS-CB will replace the existing ECR charge breeder to increase the intensity and significantly improve the purity of reaccelerated radioactive ion beams. The CARIBU EBIS-CB has been successfully commissioned offline with an external singly charged cesium ion source. The performance of the EBIS fully meets the specifications to breed rare isotope beams delivered from CARIBU. The EBIS is being relocated and integrated into ATLAS and CARIBU. A long electrostatic beam transport system including two 180° bends in the vertical plane has been designed. The commissioning of the EBIS and the beam transport system in their permanent location will start at the end of this year.
TDRS-L spacecraft lift to mate on Atlas V
2014-01-13
CAPE CANAVERAL, Fla. – At Cape Canaveral Air Force Station's Vertical Integration Facility at Launch Complex 41, NASA's Tracking and Data Relay Satellite, or TDRS-L, spacecraft is lifted for mounting atop a United Launch Alliance Atlas V rocket. The TDRS-L satellite will be a part of the second of three next-generation spacecraft designed to ensure vital operational continuity for the NASA Space Network. It is scheduled to launch from Cape Canaveral's Space Launch Complex 41 atop a United Launch Alliance Atlas V rocket on Jan. 23, 2014. The current Tracking and Data Relay Satellite system consists of eight in-orbit satellites distributed to provide near continuous information relay contact with orbiting spacecraft ranging from the International Space Station and Hubble Space Telescope to the array of scientific observatories. For more information, visit: http://www.nasa.gov/mission_pages/tdrs/home/index.html Photo credit: NASA/Dimitri Gerondidakis
TDRS-L spacecraft lift to mate on Atlas V
2014-01-13
CAPE CANAVERAL, Fla. – At Cape Canaveral Air Force Station's Vertical Integration Facility at Launch Complex 41, NASA's Tracking and Data Relay Satellite, or TDRS-L, spacecraft is moved into position for mating atop a United Launch Alliance Atlas V rocket. The TDRS-L satellite will be a part of the second of three next-generation spacecraft designed to ensure vital operational continuity for the NASA Space Network. It is scheduled to launch from Cape Canaveral's Space Launch Complex 41 atop a United Launch Alliance Atlas V rocket on Jan. 23, 2014. The current Tracking and Data Relay Satellite system consists of eight in-orbit satellites distributed to provide near continuous information relay contact with orbiting spacecraft ranging from the International Space Station and Hubble Space Telescope to the array of scientific observatories. For more information, visit: http://www.nasa.gov/mission_pages/tdrs/home/index.html Photo credit: NASA/Dimitri Gerondidakis
TDRS-L spacecraft lift to mate on Atlas V
2014-01-13
CAPE CANAVERAL, Fla. – At Cape Canaveral Air Force Station's Vertical Integration Facility at Launch Complex 41, NASA's Tracking and Data Relay Satellite, or TDRS-L, spacecraft has been mated atop a United Launch Alliance Atlas V rocket. The TDRS-L satellite will be a part of the second of three next-generation spacecraft designed to ensure vital operational continuity for the NASA Space Network. It is scheduled to launch from Cape Canaveral's Space Launch Complex 41 atop a United Launch Alliance Atlas V rocket on Jan. 23, 2014. The current Tracking and Data Relay Satellite system consists of eight in-orbit satellites distributed to provide near continuous information relay contact with orbiting spacecraft ranging from the International Space Station and Hubble Space Telescope to the array of scientific observatories. For more information, visit: http://www.nasa.gov/mission_pages/tdrs/home/index.html Photo credit: NASA/Dimitri Gerondidakis
A Kalman Filtering Perspective for Multiatlas Segmentation
Gao, Yi; Zhu, Liangjia; Cates, Joshua; MacLeod, Rob S.; Bouix, Sylvain; Tannenbaum, Allen
2016-01-01
In multiatlas segmentation, one typically registers several atlases to the novel image, and their respective segmented label images are transformed and fused to form the final segmentation. In this work, we provide a new dynamical system perspective for multiatlas segmentation, inspired by the following fact: the transformation that aligns the current atlas to the novel image can not only be computed by direct registration but also be inferred from the transformation that aligns the previous atlas to the image together with the transformation between the two atlases. This process is similar to the global positioning system on a vehicle, which obtains its position both by querying the satellites and by employing the previous location and velocity; neither answer is perfect in isolation. A dynamical system scheme is needed to combine the two pieces of information; for example, a Kalman filtering scheme can be used. Accordingly, in this work, a Kalman multiatlas segmentation is proposed to stabilize the global/affine registration step. The contributions of this work are twofold. First, it provides a new dynamical system perspective on standard independent multiatlas registrations, solved by Kalman filtering. Second, with very little extra computation, it can be combined with most existing multiatlas segmentation schemes for better registration/segmentation accuracy. PMID:26807162
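The fusion of the two transform estimates can be sketched as a one-dimensional Kalman update (a minimal sketch: the scalar "translation" state, variances, and all numeric values are illustrative stand-ins for the paper's actual affine formulation):

```python
# Minimal sketch of the Kalman fusion idea: combine a predicted transform
# (previous atlas-to-image transform composed with the known atlas-to-atlas
# transform) with a measured one (direct registration). The 1-D state and
# all numbers below are illustrative, not the method's real affine state.

def kalman_fuse(pred, pred_var, meas, meas_var):
    """One Kalman update step: blend prediction and measurement."""
    gain = pred_var / (pred_var + meas_var)   # Kalman gain
    est = pred + gain * (meas - pred)         # fused estimate
    est_var = (1.0 - gain) * pred_var         # uncertainty shrinks
    return est, est_var

pred, pred_var = 4.8, 2.0   # propagated estimate (less trusted here)
meas, meas_var = 5.4, 1.0   # direct registration (more trusted here)

est, est_var = kalman_fuse(pred, pred_var, meas, meas_var)
print(round(est, 2), round(est_var, 2))  # -> 5.2 0.67
```

The fused value lands between the two inputs, weighted toward the lower-variance one, and the posterior variance is smaller than either input variance, which is the stabilizing effect the abstract describes.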
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondrasek, R.; Levand, A.; Pardo, R.
2012-02-15
The Californium Rare Ion Breeder Upgrade (CARIBU) of the Argonne National Laboratory ATLAS facility will provide low-energy and reaccelerated neutron-rich radioactive beams for the nuclear physics program. A 70 mCi ²⁵²Cf source produces fission fragments which are thermalized and collected by a helium gas catcher into a low-energy particle beam with a charge of 1+ or 2+. An electron cyclotron resonance (ECR) ion source functions as a charge breeder in order to raise the ion charge sufficiently for acceleration in the ATLAS linac. The final CARIBU configuration will utilize a 1 Ci ²⁵²Cf source to produce radioactive beams with intensities up to 10⁶ ions/s for use in the ATLAS facility. The ECR charge breeder has been tested with stable beam injection and has achieved charge breeding efficiencies of 3.6% for ²³Na⁸⁺, 15.6% for ⁸⁴Kr¹⁷⁺, and 13.7% for ⁸⁵Rb¹⁹⁺ with typical breeding times of 10 ms/charge state. For the first radioactive beams, a charge breeding efficiency of 11.7% has been achieved for ¹⁴³Cs²⁷⁺ and 14.7% for ¹⁴³Ba²⁷⁺. The project has been commissioned with a radioactive beam of ¹⁴³Ba²⁷⁺ accelerated to 6.1 MeV/u. In order to take advantage of its lower residual contamination, an EBIS charge breeder will replace the ECR charge breeder in the next two years. The advantages and disadvantages of the two techniques are compared taking into account the requirements of the next generation radioactive beam facilities.
The Center for Computational Biology: resources, achievements, and challenges
Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott
2011-01-01
The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques, to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators include the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains. PMID:22081221
The Center for Computational Biology: resources, achievements, and challenges.
Toga, Arthur W; Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott
2012-01-01
The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques, to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators include the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains.
Department of Defense Atlas/Data Abstract for the United States and Selected Areas. Fiscal Year 1992
1993-01-01
Publication DIOR/L-03-92. The document comprises an introduction, maps and statistical tables for each state and selected areas (e.g. New York, North Carolina, North Dakota, Ohio), and listings of top contractors (e.g. Zimmermann Basil Corp JV, 34,679, Operation/Ammunition Facilities; Morrison Knudsen Corp, 30,923, Facilities Operations Support Services).
White matter atlas of the human spinal cord with estimation of partial volume effect.
Lévy, S; Benhamou, M; Naaman, C; Rainville, P; Callot, V; Cohen-Adad, J
2015-10-01
Template-based analysis has proven to be an efficient, objective and reproducible way of extracting relevant information from multi-parametric MRI data. Using common atlases, it is possible to quantify MRI metrics within specific regions without the need for manual segmentation. This method is therefore free from user-bias and amenable to group studies. While template-based analysis is common procedure for the brain, there is currently no atlas of the white matter (WM) spinal pathways. The goals of this study were: (i) to create an atlas of the white matter tracts compatible with the MNI-Poly-AMU template and (ii) to propose methods to quantify metrics within the atlas that account for partial volume effect. The WM atlas was generated by: (i) digitalizing an existing WM atlas from a well-known source (Gray's Anatomy), (ii) registering this atlas to the MNI-Poly-AMU template at the corresponding slice (C4 vertebral level), (iii) propagating the atlas throughout all slices of the template (C1 to T6) using regularized diffeomorphic transformations and (iv) computing partial volume values for each voxel and each tract. Several approaches were implemented and validated to quantify metrics within the atlas, including weighted-average and Gaussian mixture models. Proof-of-concept application was done in five subjects for quantifying magnetization transfer ratio (MTR) in each tract of the atlas. The resulting WM atlas showed consistent topological organization and smooth transitions along the rostro-caudal axis. The median MTR across tracts was 26.2. Significant differences were detected across tracts, vertebral levels and subjects, but not across laterality (right-left). Among the different tested approaches to extract metrics, the maximum a posteriori showed highest performance with respect to noise, inter-tract variability, tract size and partial volume effect. 
This new WM atlas of the human spinal cord overcomes the biases associated with manual delineation and partial volume effect. Combined with multi-parametric data, the atlas can be applied to study demyelination and degeneration in diseases such as multiple sclerosis and will facilitate the conduction of longitudinal and multi-center studies. Copyright © 2015 Elsevier Inc. All rights reserved.
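The simplest of the proposed extraction approaches, the weighted average, can be sketched directly: each voxel contributes to a tract's metric in proportion to its partial volume fraction for that tract. The voxel values and fractions below are made-up toy numbers, and the real study compared this against Gaussian mixture and maximum a posteriori estimators:

```python
# Sketch of partial-volume-weighted metric extraction: a voxel that is only
# 10% inside a tract contributes 10% as much as a fully interior voxel.
# MTR values and partial volume fractions are invented toy data.

def weighted_average(values, fractions):
    """Partial-volume-weighted mean of a metric within one tract."""
    num = sum(v * f for v, f in zip(values, fractions))
    den = sum(fractions)
    return num / den

mtr = [25.0, 27.0, 30.0]  # MTR in voxels overlapping the tract
pv = [1.0, 0.5, 0.1]      # tract partial volume fraction per voxel
print(weighted_average(mtr, pv))  # -> 25.9375
```

A plain (unweighted) mean of the same voxels would be 27.33, pulled upward by the boundary voxel that barely belongs to the tract, which is exactly the partial volume bias the weighting corrects.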
Wang, Hongzhi; Yushkevich, Paul A.
2013-01-01
Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion, which combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won first place in the 2012 MICCAI Multi-Atlas Labeling Challenge and were among the top performers in the 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight Toolkit-based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools through applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far. PMID:24319427
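The baseline weighted-voting step can be sketched at a single voxel (a minimal sketch with invented labels and weights; the joint label fusion method described above goes further by modeling correlated errors between atlases, which this sketch deliberately omits):

```python
# Sketch of spatially varying weighted voting at one voxel: each registered
# atlas votes for its transferred label, weighted by a local atlas-target
# intensity similarity. Labels and weights below are invented toy data.
from collections import defaultdict

def weighted_vote(labels, weights):
    """Return the label with the largest total weight at one voxel."""
    score = defaultdict(float)
    for lab, w in zip(labels, weights):
        score[lab] += w
    return max(score, key=score.get)

# Three atlases vote at a single voxel.
atlas_labels = [1, 2, 1]        # labels transferred by registration
atlas_weights = [0.2, 0.5, 0.4] # local intensity-similarity weights
print(weighted_vote(atlas_labels, atlas_weights))  # label 1 wins: 0.6 > 0.5
```

Note that the most confident single atlas (weight 0.5) is outvoted by two weaker atlases that agree; the joint formulation would additionally discount those two if their errors tend to co-occur.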
Poster - 32: Atlas Selection for Automated Segmentation of Pelvic CT for Prostate Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallawi, Abrar; Farrell, Tom; Diamond, Kevin-Ro
2016-08-15
Atlas-based segmentation has recently been evaluated for use in prostate radiotherapy. In a typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on final segmentation accuracy. Several anatomical parameters were measured to indicate the overall prostate and body shape; all of these measurements were obtained on CT images. A brute-force procedure was first performed for a training dataset of 20 patients using image registration to pair subjects with similar contours; each subject served as a target image to which all remaining 19 images were affinely registered. The overlap between the prostate and femoral heads was quantified for each pair using the Dice Similarity Coefficient (DSC). Finally, an atlas selection procedure was designed, relying on the computation of a similarity score defined as a weighted sum of differences between the target and atlas subject anatomical measurements. The algorithm's ability to predict the most similar atlas was excellent, achieving mean DSCs of 0.78 ± 0.07 and 0.90 ± 0.02 for the CTV and either femoral head. The proposed atlas selection yielded 0.72 ± 0.11 and 0.87 ± 0.03 for the CTV and either femoral head. The DSCs obtained with the proposed selection method were slightly lower than the maximums established using brute force, but this does not include potential improvements expected with deformable registration. The proposed atlas selection method provides reasonable segmentation accuracy.
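The DSC used as the overlap measure in both the brute-force reference and the evaluation above has a direct set formulation. A minimal sketch (the voxel coordinates below are toy data, not from the study):

```python
# Dice Similarity Coefficient between two segmentations represented as
# sets of voxel coordinates: DSC = 2|A ∩ B| / (|A| + |B|).
# The contours below are invented 2x2-ish toy masks.

def dice(a, b):
    """Overlap between two voxel sets, from 0 (disjoint) to 1 (identical)."""
    a, b = set(a), set(b)
    return 2.0 * len(a & b) / (len(a) + len(b))

auto = {(0, 0), (0, 1), (1, 0), (1, 1)}    # propagated contour
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}  # reference contour
print(dice(auto, manual))  # 2*3 / (4+4) -> 0.75
```

Identical masks give 1.0 and disjoint masks give 0.0, so the reported values (e.g. 0.90 for the femoral heads vs. 0.78 for the CTV) can be read directly as fractions of agreement.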
Yamahata, Hitoshi; Hirano, Hirofumi; Yamaguchi, Satoshi; Mori, Masanao; Niiro, Tadaaki; Tokimura, Hiroshi; Arita, Kazunori
2017-09-15
The spinal canal diameter (SCD) is one of the most studied factors for the assessment of cervical spinal canal stenosis. The inner anteroposterior diameter (IAP), the SCD, and the cross-sectional area (CSA) of the atlas have been used to evaluate the size of the atlas in patients with atlas hypoplasia, a rare form of developmental spinal canal stenosis; however, there is little information on their relationship. The aim of this study was to identify the most useful parameter for depicting the size of the atlas. The CSA, the IAP, and the SCD were measured on computed tomography (CT) images at the C1 level of 213 patients and compared in this retrospective study. All three parameters increased with increasing patient height and weight. There was a strong correlation between IAP and SCD (r = 0.853) or CSA (r = 0.822), while the correlation between SCD and CSA (r = 0.695) was weaker than that between IAP and CSA. Partial correlation analysis showed that IAP was positively correlated with SCD (r = 0.687) and CSA (r = 0.612) when CSA or SCD, respectively, was controlled. SCD was negatively correlated with CSA when IAP was controlled (r = -0.21). The IAP can thus serve in place of the CSA for evaluating the size of the atlas ring, whereas the SCD does not track the CSA well. As patient height and weight affect the size of the atlas, analysis of the spinal canal at the C1 level should take physiologic patient data into account.
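The partial correlation analysis reported above (e.g. IAP vs. SCD controlling for CSA) uses the standard first-order partial correlation formula, which can be sketched as below; the measurement values are made-up toy data, not the study's dataset.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, controlling for z:
    (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))"""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Toy data standing in for IAP, SCD, and CSA measurements (arbitrary units)
iap = [14.0, 15.2, 16.1, 17.0, 18.3]
scd = [13.1, 14.0, 15.5, 16.2, 17.9]
csa = [5.0, 5.4, 6.1, 6.0, 7.2]
print(round(partial_corr(iap, scd, csa), 3))
```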
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallawi, A; Farrell, T; Diamond, K
2014-08-15
Automated atlas-based segmentation has recently been evaluated for use in planning prostate cancer radiotherapy. In the typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on the final segmentation accuracy. Prostate length (PL), right femoral head diameter (RFHD), and left femoral head diameter (LFHD) were measured in CT images of 20 patients. Each subject was then taken as the target image to which all remaining 19 images were affinely registered. For each pair of registered images, the overlap between prostate and femoral head contours was quantified using the Dice Similarity Coefficient (DSC). Finally, we designed an atlas selection strategy that computed the ratio of PL (prostate segmentation), RFHD (right femur segmentation), and LFHD (left femur segmentation) between the target subject and each subject in the atlas database. Five atlas subjects yielding ratios nearest to one were then selected for further analysis. RFHD and LFHD were excellent parameters for atlas selection, achieving a mean femoral head DSC of 0.82 ± 0.06. PL had a moderate ability to select the most similar prostate, with a mean DSC of 0.63 ± 0.18. The DSCs obtained with the proposed selection method were slightly lower than the maxima established using brute force, but this does not include potential improvements expected with deformable registration. Atlas selection based on PL for the prostate and femoral diameter for the femoral heads provides reasonable segmentation accuracy.
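The ratio-based selection step can be sketched as below. The patient identifiers and prostate-length values are invented for illustration; in the abstract the same ranking is applied separately per measurement (PL, RFHD, LFHD).

```python
def select_atlases(target_value, atlas_values, k=5):
    """Pick the k atlases whose measurement ratio (target/atlas) is nearest to 1.

    atlas_values: dict mapping atlas id -> anatomical measurement (e.g. PL in mm)
    """
    ranked = sorted(atlas_values,
                    key=lambda a: abs(target_value / atlas_values[a] - 1.0))
    return ranked[:k]

# Hypothetical prostate lengths (mm) for six atlas subjects
pl = {"p01": 38.0, "p02": 45.0, "p03": 41.0, "p04": 52.0, "p05": 40.5, "p06": 33.0}
print(select_atlases(40.0, pl, k=3))  # → ['p05', 'p03', 'p01']
```

The three subjects whose PL ratio deviates least from one are returned, mirroring the paper's "ratios nearest to one" criterion with k = 5 in the actual study.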
2005-09-29
KENNEDY SPACE CENTER, FLA. - On the Shuttle Landing Facility at NASA Kennedy Space Center, the Atlas V fairing halves for the New Horizons spacecraft have been offloaded from the Russian cargo plane (background). The fairing halves will be transported to Astrotech Space Operations in Titusville. The fairing later will be placed around the New Horizons spacecraft in the Payload Hazardous Service Facility. A fairing protects a spacecraft during launch and flight through the atmosphere. Once in space, it is jettisoned. The Lockheed Martin Atlas V is the launch vehicle for the New Horizons spacecraft, which is designed to make the first reconnaissance of Pluto and Charon - a "double planet" and the last planet in our solar system to be visited by spacecraft. The mission will then visit one or more objects in the Kuiper Belt region beyond Neptune. New Horizons is scheduled to launch in January 2006, swing past Jupiter for a gravity boost and scientific studies in February or March 2007, and reach Pluto and its moon, Charon, in July 2015.
2005-09-29
KENNEDY SPACE CENTER, FLA. - On the Shuttle Landing Facility at NASA Kennedy Space Center, one of the Atlas V fairing halves for the New Horizons spacecraft is offloaded from the Russian cargo plane. The fairing halves will be transported to Astrotech Space Operations in Titusville. The fairing later will be placed around the New Horizons spacecraft in the Payload Hazardous Service Facility. A fairing protects a spacecraft during launch and flight through the atmosphere. Once in space, it is jettisoned. The Lockheed Martin Atlas V is the launch vehicle for the New Horizons spacecraft, which is designed to make the first reconnaissance of Pluto and Charon - a "double planet" and the last planet in our solar system to be visited by spacecraft. The mission will then visit one or more objects in the Kuiper Belt region beyond Neptune. New Horizons is scheduled to launch in January 2006, swing past Jupiter for a gravity boost and scientific studies in February or March 2007, and reach Pluto and its moon, Charon, in July 2015.
NASA Astrophysics Data System (ADS)
Forney, Anne M.; Walters, W. B.; Sethi, J.; Chiara, C. J.; Harker, J.; Janssens, R. V. F.; Zhu, S.; Carpenter, M.; Alcorta, M.; Gürdal, G.; Hoffman, C. R.; Kay, B. P.; Kondev, F. G.; Lauristen, T.; Lister, C. J.; McCutchan, E. A.; Rogers, A. M.; Seweryniak, D.
2017-01-01
Owing to the importance of the structure of 76Ge in interpreting double β decay studies, the structures of adjacent nuclei have been of considerable interest. Recently reported features for the structures of 72,74,76Ge indicate both shape coexistence and triaxiality. New data for the excited states of 78Ge will be reported arising from Gammasphere studies of multinucleon transfer reactions between a 76Ge beam and thick heavy targets at the ATLAS facility at Argonne National Laboratory. The previously known yrast band is extended to higher spins, candidate levels for a triaxial sequence have been observed, and the associated staggering determined. The staggering in 78Ge found in this work is not in agreement with theoretical work. Candidates for negative-parity states and seniority-four states will be discussed. This material is based upon work supported by the U.S. DOE under DE-AC02-06CH11357 and DE-FG02-94ER40834. Resources of ANL's ATLAS setup, a DOE Office of Science user facility, were used.
2005-09-29
KENNEDY SPACE CENTER, FLA. - A Russian cargo plane sits on the Shuttle Landing Facility at NASA Kennedy Space Center with the Atlas V fairing for the New Horizons spacecraft inside. The two fairing halves will be removed, loaded onto trucks and transported to Astrotech Space Operations in Titusville. The fairing later will be placed around the New Horizons spacecraft in the Payload Hazardous Service Facility. A fairing protects a spacecraft during launch and flight through the atmosphere. Once in space, it is jettisoned. The Lockheed Martin Atlas V is the launch vehicle for the New Horizons spacecraft, which is designed to make the first reconnaissance of Pluto and Charon - a "double planet" and the last planet in our solar system to be visited by spacecraft. The mission will then visit one or more objects in the Kuiper Belt region beyond Neptune. New Horizons is scheduled to launch in January 2006, swing past Jupiter for a gravity boost and scientific studies in February or March 2007, and reach Pluto and its moon, Charon, in July 2015.
Digital hand atlas for web-based bone age assessment: system design and implementation
NASA Astrophysics Data System (ADS)
Cao, Fei; Huang, H. K.; Pietka, Ewa; Gilsanz, Vicente
2000-04-01
A frequently used assessment method of skeletal age is atlas matching by a radiological examination of a hand image against a small set of Greulich-Pyle patterns of normal standards. The method however can lead to significant deviation in age assessment, due to a variety of observers with different levels of training. The Greulich-Pyle atlas based on middle upper class white populations in the 1950s, is also not fully applicable for children of today, especially regarding the standard development in other racial groups. In this paper, we present our system design and initial implementation of a digital hand atlas and computer-aided diagnostic (CAD) system for Web-based bone age assessment. The digital atlas will remove the disadvantages of the currently out-of-date one and allow the bone age assessment to be computerized and done conveniently via Web. The system consists of a hand atlas database, a CAD module and a Java-based Web user interface. The atlas database is based on a large set of clinically normal hand images of diverse ethnic groups. The Java-based Web user interface allows users to interact with the hand image database form browsers. Users can use a Web browser to push a clinical hand image to the CAD server for a bone age assessment. Quantitative features on the examined image, which reflect the skeletal maturity, is then extracted and compared with patterns from the atlas database to assess the bone age.
2009-04-27
CAPE CANAVERAL, Fla. –– The Atlas V first stage arrives at the Vertical Integration Facility on Cape Canaveral Air Force Station's Launch Complex 41. The Atlas V/Centaur is the launch vehicle for the Lunar Reconnaissance Orbiter, or LRO. The orbiter will carry seven instruments to provide scientists with detailed maps of the lunar surface and enhance our understanding of the moon's topography, lighting conditions, mineralogical composition and natural resources. Information gleaned from LRO will be used to select safe landing sites, determine locations for future lunar outposts and help mitigate radiation dangers to astronauts. Launch of LRO is targeted no earlier than June 2. Photo credit: NASA/Kim Shiflett
2009-04-27
CAPE CANAVERAL, Fla. –– On Cape Canaveral Air Force Station's Launch Complex 41, the Atlas V first stage is being moved into the Vertical Integration Facility. The Atlas V/Centaur is the launch vehicle for the Lunar Reconnaissance Orbiter, or LRO. The orbiter will carry seven instruments to provide scientists with detailed maps of the lunar surface and enhance our understanding of the moon's topography, lighting conditions, mineralogical composition and natural resources. Information gleaned from LRO will be used to select safe landing sites, determine locations for future lunar outposts and help mitigate radiation dangers to astronauts. Launch of LRO is targeted no earlier than June 2. Photo credit: NASA/Kim Shiflett
GOES-S Atlas V First SRB Mate to Booster
2018-02-01
Technicians and engineers offload a solid rocket booster (SRB) that just arrived at the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The SRB will be mated to a United Launch Alliance Atlas V first stage to help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V First SRB Mate to Booster
2018-02-01
A solid rocket booster (SRB) is offloaded from a transport vehicle at the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The SRB will be mated to a United Launch Alliance Atlas V first stage to help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V Centaur Stage Transport to VIF
2018-02-08
The Centaur upper stage that will help launch NOAA's Geostationary Operational Environmental Satellite-S, or GOES-S, departs the Delta Operations Center for the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The Centaur then will be mated to a United Launch Alliance Atlas V booster. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V First SRB Mate to Booster
2018-02-01
Technicians and engineers assist as a crane lifts a solid rocket booster (SRB) for mating to a United Launch Alliance Atlas V first stage in the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The SRB will help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
2018-02-28
A United Launch Alliance Atlas V rocket exits the Vertical Integration Facility on its way to the launch pad at Space Launch Complex 41 at Cape Canaveral Air Force Station. The launch vehicle will send the National Oceanic and Atmospheric Administration's, or NOAA's, Geostationary Operational Environmental Satellite, or GOES-S, into orbit. The GOES series is designed to significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to lift off at 5:02 p.m. EST on March 1, 2018 aboard a United Launch Alliance Atlas V rocket.
GOES-S Atlas V First SRB Mate to Booster
2018-02-01
A technician prepares to offload a solid rocket booster (SRB) that just arrived at the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The SRB will be mated to a United Launch Alliance Atlas V first stage to help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V First SRB Mate to Booster
2018-02-01
Technicians prepare to offload a solid rocket booster (SRB) that just arrived at the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The SRB will be mated to a United Launch Alliance Atlas V first stage to help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V Last SRB Lift to Booster
2018-02-07
At the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida, a solid rocket booster (SRB) is lifted by a crane for mating to a United Launch Alliance Atlas V first stage. The SRB will help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V Last SRB Lift to Booster
2018-02-07
A technician monitors activity as a solid rocket booster (SRB) is prepared for mating to a United Launch Alliance Atlas V first stage at the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The SRB will help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V Last SRB Lift to Booster
2018-02-07
In the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida, a solid rocket booster (SRB) is lifted by a crane for mating to a United Launch Alliance Atlas V first stage. The SRB will help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
GOES-S Atlas V First SRB Mate to Booster
2018-02-01
A transport vehicle carrying a solid rocket booster (SRB) arrives at the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. The SRB will be mated to a United Launch Alliance Atlas V first stage to help boost NOAA's Geostationary Operational Environmental Satellite, or GOES-S, to orbit. GOES-S is the second in a series of four advanced geostationary weather satellites that will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and the nation's economic health and prosperity. GOES-S is slated to launch March 1, 2018.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myerson, Robert J.; Garofalo, Michael C.; El Naqa, Issam
2009-07-01
Purpose: To develop a Radiation Therapy Oncology Group (RTOG) atlas of the elective clinical target volume (CTV) definitions to be used for planning pelvic intensity-modulated radiotherapy (IMRT) for anal and rectal cancers. Methods and Materials: The Gastrointestinal Committee of the RTOG established a task group (the nine physician co-authors) to develop this atlas. They responded to a questionnaire concerning three elective CTVs (CTVA: internal iliac, presacral, and perirectal nodal regions for both anal and rectal case planning; CTVB: external iliac nodal region for anal case planning and for selected rectal cases; CTVC: inguinal nodal region for anal case planning and for select rectal cases), and to outline these areas on individual computed tomographic images. The imaging files were shared via the Advanced Technology Consortium. A program developed by one of the co-authors (I.E.N.) used binomial maximum-likelihood estimates to generate a 95% group consensus contour. The computer-estimated consensus contours were then reviewed by the group and modified to provide a final contouring consensus atlas. Results: The panel achieved consensus CTV definitions to be used as guidelines for the adjuvant therapy of rectal cancer and definitive therapy for anal cancer. The most important difference from similar atlases for gynecologic or genitourinary cancer is mesorectal coverage. Detailed target volume contouring guidelines and images are discussed. Conclusion: This report serves as a template for the definition of the elective CTVs to be used in IMRT planning for anal and rectal cancers, as part of prospective RTOG trials.
Diagnostic workstation for digital hand atlas in bone age assessment
NASA Astrophysics Data System (ADS)
Cao, Fei; Huang, H. K.; Pietka, Ewa; Gilsanz, Vicente; Ominsky, Steven
1998-06-01
Bone age assessment by a radiological examination of a hand and wrist image is a procedure frequently performed in pediatric patients to evaluate growth disorders, determine growth potential in children and monitor therapy effects. The assessment method currently used in radiological diagnosis is based on atlas matching of the diagnosed hand image with the reference set of atlas patterns, which was developed in the 1950s and is not fully applicable for children of today. We intend to implement a diagnostic workstation for creating a new reference set of clinically normal images which will serve as a digital atlas and can be used for computer-assisted bone age assessment. In this paper, we present the initial data-collection and system setup phase of this five-year research program. We describe the system design, user interface implementation and software tool development for collection, visualization, management and processing of clinically normal hand and wrist images.
ATLAS EventIndex monitoring system using the Kibana analytics and visualization platform
NASA Astrophysics Data System (ADS)
Barberis, D.; Cárdenas Zárate, S. E.; Favareto, A.; Fernandez Casani, A.; Gallas, E. J.; Garcia Montoro, C.; Gonzalez de la Hoz, S.; Hrivnac, J.; Malon, D.; Prokoshin, F.; Salt, J.; Sanchez, J.; Toebbicke, R.; Yuan, R.; ATLAS Collaboration
2016-10-01
The ATLAS EventIndex is a data catalogue system that stores event-related metadata for all (real and simulated) ATLAS events, on all processing stages. As it consists of different components that depend on other applications (such as distributed storage, and different sources of information) we need to monitor the conditions of many heterogeneous subsystems, to make sure everything is working correctly. This paper describes how we gather information about the EventIndex components and related subsystems: the Producer-Consumer architecture for data collection, health parameters from the servers that run EventIndex components, EventIndex web interface status, and the Hadoop infrastructure that stores EventIndex data. This information is collected, processed, and then displayed using CERN service monitoring software based on the Kibana analytic and visualization package, provided by CERN IT Department. EventIndex monitoring is used both by the EventIndex team and ATLAS Distributed Computing shifts crew.
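The Producer-Consumer data collection mentioned above can be illustrated with a generic in-process sketch. This is not the EventIndex monitoring code; the subsystem names, record fields, and queue-with-sentinel design are all invented for the example.

```python
import queue
import threading
import time

def producer(q, source):
    """Hypothetical collector: polls one monitored subsystem and enqueues
    health records, then signals completion with a None sentinel."""
    for i in range(3):
        q.put({"source": source, "seq": i, "ts": time.time()})
    q.put(None)

def consumer(q, n_producers, sink):
    """Drains the queue until every producer's sentinel has been seen."""
    done = 0
    while done < n_producers:
        rec = q.get()
        if rec is None:
            done += 1
        else:
            sink.append(rec)

q = queue.Queue()
sink = []
sources = ["hadoop", "web_ui", "servers"]  # invented subsystem names
threads = [threading.Thread(target=producer, args=(q, s)) for s in sources]
for t in threads:
    t.start()
consumer(q, len(sources), sink)
for t in threads:
    t.join()
print(len(sink))  # → 9
```

Decoupling collection (producers) from processing (the consumer) through a queue is the same pattern the paper's architecture uses at larger scale, with the queue replaced by a distributed transport and the sink by the Kibana-backed monitoring store.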
NASA Astrophysics Data System (ADS)
Bianchi, R. M.; Boudreau, J.; Konstantinidis, N.; Martyniuk, A. C.; Moyse, E.; Thomas, J.; Waugh, B. M.; Yallup, D. P.; ATLAS Collaboration
2017-10-01
In the early days, HEP experiments used photographic images both to record and store experimental data and to illustrate their findings. As the experiments evolved, they needed new ways to visualize their data, and with the availability of computer graphics, software packages to display event data and the detector geometry began to be developed. Here, an overview of the usage of event display tools in HEP is presented. The case of the ATLAS experiment is then considered in more detail, and two widely used event display packages, Atlantis and VP1, are presented, focusing on the software technologies they employ, as well as their strengths, differences and usage in the experiment: from physics analysis to detector development, and from online monitoring to outreach and communication. The other ATLAS visualization tools are also briefly presented, and future development plans and improvements in the ATLAS event display packages are discussed.
The ATLAS Simulation Infrastructure
Aad, G.; Abbott, B.; Abdallah, J.; ...
2010-09-25
The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions, through packages simulating the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. In this paper, that infrastructure is discussed, including that supporting the detector description, interfacing the event generation, and combining the GEANT4 simulation of the response of the individual detectors. Also described are the tools allowing the software validation, performance testing, and the validation of the simulated output against known physics processes.
NASA Technical Reports Server (NTRS)
Warren, W. H., Jr.
1984-01-01
The machine-readable version of the Atlas as it is currently being distributed from the Astronomical Data Center is described. The data were obtained with the Oke multichannel scanner on the 5-meter Hale reflector for purposes of synthesizing galaxy spectra, and the digitized Atlas contains normalized spectral energy distributions, computed colors, scan line and continuum indices for 175 selected stars covering the complete ranges of spectral type and luminosity class. The documentation includes a byte-by-byte format description, a table of the indigenous characteristics of the magnetic tape file, and a sample listing of logical records exactly as they are recorded on the tape.
Nowinski, Wieslaw L; Thaung, Thant Shoon Let; Chua, Beng Choon; Yi, Su Hnin Wut; Ngai, Vincent; Yang, Yili; Chrzan, Robert; Urbanik, Andrzej
2015-05-15
Although the adult human skull is a complex and multifunctional structure, its 3D, complete, realistic, and stereotactic atlas has not yet been created. This work addresses the construction of a 3D interactive atlas of the adult human skull spatially correlated with the brain, cranial nerves, and intracranial vasculature. The process of atlas construction included computed tomography (CT) high-resolution scan acquisition, skull extraction, skull parcellation, 3D disarticulated bone surface modeling, 3D model simplification, brain-skull registration, 3D surface editing, 3D surface naming and color-coding, integration of the CT-derived 3D bony models with the existing brain atlas, and validation. The virtual skull model created is complete with all 29 bones, including the auditory ossicles (being among the smallest bones). It contains all typical bony features and landmarks. The created skull model is superior to the existing skull models in terms of completeness, realism, and integration with the brain along with blood vessels and cranial nerves. This skull atlas is valuable for medical students and residents to easily get familiarized with the skull and surrounding anatomy with a few clicks. The atlas is also useful for educators to prepare teaching materials. It may potentially serve as a reference aid in the reading and operating rooms. Copyright © 2015 Elsevier B.V. All rights reserved.
Discovery through maps: Exploring real-world applications of ...
Background/Question/Methods U.S. EPA’s EnviroAtlas provides a collection of interactive tools and resources for exploring ecosystem goods and services. The purpose of EnviroAtlas is to provide better access to consistently derived ecosystem and socio-economic data to facilitate decision-making while also providing data for research and education. EnviroAtlas tools and resources are well suited for educational use, as they encourage systems thinking, cover a broad range of topics, are freely available, and do not require specialized software. Using EnviroAtlas requires only a computer and an internet connection, making it a useful tool for community planning, education, and decision-making at multiple scales. To help users understand how EnviroAtlas resources may be used in different contexts, we provide example use cases. These use cases highlight real-world issues that EnviroAtlas data, in conjunction with other available data or resources, may be used to address. Here we present three use cases that approach incorporating ecosystem services into decision-making in different contexts: 1) minimizing the negative impacts of excessive summer heat due to urbanization in Portland, Oregon; 2) selecting a pilot route for a community greenway; and 3) reducing nutrient loading through a regional manure transport program. Results/Conclusions EnviroAtlas use cases provide step-by-step approaches for using maps and data to address real-wo
Dynamic updating atlas for heart segmentation with a nonlinear field-based model.
Cai, Ken; Yang, Rongqian; Yue, Hongwei; Li, Lihua; Ou, Shanxing; Liu, Feng
2017-09-01
Segmentation of cardiac computed tomography (CT) images is an effective method for assessing the dynamic function of the heart and lungs. In the atlas-based heart segmentation approach, the quality of segmentation usually relies upon atlas images, and the selection of those reference images is a key step. The optimal goal in this selection process is to have the reference images as close to the target image as possible. This study proposes an atlas dynamic update algorithm using a scheme of nonlinear deformation field. The proposed method is based on the features among double-source CT (DSCT) slices. The extraction of these features will form a base to construct an average model and the created reference atlas image is updated during the registration process. A nonlinear field-based model was used to effectively implement a 4D cardiac segmentation. The proposed segmentation framework was validated with 14 4D cardiac CT sequences. The algorithm achieved an acceptable accuracy (1.0-2.8 mm). Our proposed method that combines a nonlinear field-based model and dynamic updating atlas strategies can provide an effective and accurate way for whole heart segmentation. The success of the proposed method largely relies on the effective use of the prior knowledge of the atlas and the similarity explored among the to-be-segmented DSCT sequences. Copyright © 2016 John Wiley & Sons, Ltd.
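As a greatly simplified stand-in for the dynamic atlas update step (the paper uses a nonlinear deformation-field model, which is not reproduced here), a running-mean update of the reference atlas image after each newly registered subject can be sketched as:

```python
import numpy as np

def update_atlas(atlas_mean, new_image, n_seen):
    """Running-mean update of a reference atlas image.

    atlas_mean: current average image (already registered to a common space)
    new_image:  newly registered image to fold into the average
    n_seen:     how many images the current mean already averages
    """
    return atlas_mean + (new_image - atlas_mean) / (n_seen + 1)

# Two-voxel toy 'images'; the atlas starts as a single subject's image
atlas = np.array([100.0, 120.0])
atlas = update_atlas(atlas, np.array([110.0, 130.0]), n_seen=1)
print(atlas)  # → [105. 125.], the mean of the two images
```

The actual method additionally deforms each contribution through the estimated nonlinear field before averaging, so the consensus stays anatomically sharp rather than blurring.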
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peressutti, D; Schipaanboord, B; Kadir, T
Purpose: To investigate the effectiveness of atlas selection methods for improving atlas-based auto-contouring in radiotherapy planning. Methods: 275 clinically delineated H&N cases were employed as an atlas database from which atlases would be selected. A further 40 previously contoured cases were used as test patients against which atlas selection could be performed and evaluated. 26 variations of selection methods proposed in the literature and used in commercial systems were investigated. Atlas selection methods comprised either global or local image similarity measures, computed after rigid or deformable registration, combined with direct atlas search or with an intermediate template image. Workflow Box (Mirada-Medical, Oxford, UK) was used for all auto-contouring. Results on brain, brainstem, parotids and spinal cord were compared to random selection, a fixed set of 10 "good" atlases, and optimal selection by an "oracle" with knowledge of the ground truth. The Dice score and the average ranking with respect to the "oracle" were employed to assess the performance of the top 10 atlases selected by each method. Results: The fixed set of "good" atlases outperformed all of the atlas-patient image similarity-based selection methods (mean Dice 0.715 c.f. 0.603 to 0.677). In general, methods based on exhaustive comparison of local similarity measures showed better average Dice scores (0.658 to 0.677) than either template-image methods (0.655 to 0.672) or global similarity measures (0.603 to 0.666). The performance of image-based selection methods was found to be only slightly better than random selection (0.645). Dice scores given relate to the left parotid, but similar patterns were observed for all organs. Conclusion: Intuitively, atlas selection based on the patient CT is expected to improve auto-contouring performance.
However, it was found that published approaches performed only marginally better than random selection, while use of a fixed set of representative atlases showed favourable performance. This research was funded via InnovateUK Grant 600277 as part of Eurostars Grant E!9297. DP, BS, MG, TK are employees of Mirada Medical Ltd.
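The Dice score used above to rank candidate atlases can be computed directly from binary masks. A minimal NumPy sketch, with illustrative function names that are not part of any commercial system:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def rank_atlases(candidates: dict, reference: np.ndarray, top_k: int = 10):
    """Rank candidate atlas labelmaps by Dice overlap with a reference contour."""
    scores = {name: dice(mask, reference) for name, mask in candidates.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

An "oracle" selection corresponds to calling `rank_atlases` with the ground-truth contour as the reference; image-similarity selection replaces the Dice ranking with an image-based surrogate.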
Zhou, Jinghao; Yan, Zhennan; Lasio, Giovanni; Huang, Junzhou; Zhang, Baoshe; Sharma, Navesh; Prado, Karl; D'Souza, Warren
2015-12-01
To resolve challenges in image segmentation in oncologic patients with severely compromised lung, we propose an automated right lung segmentation framework that uses a robust, atlas-based active volume model with a sparse shape composition prior. The robust atlas is achieved by combining the atlas with the output of sparse shape composition. Thoracic computed tomography images (n=38) from patients with lung tumors were collected. The right lung in each scan was manually segmented to build a reference training dataset against which the performance of the automated segmentation method was assessed. The quantitative results of this proposed segmentation method with sparse shape composition achieved mean Dice similarity coefficient (DSC) of (0.72, 0.81) with 95% CI, mean accuracy (ACC) of (0.97, 0.98) with 95% CI, and mean relative error (RE) of (0.46, 0.74) with 95% CI. Both qualitative and quantitative comparisons suggest that this proposed method can achieve better segmentation accuracy with less variance than other atlas-based segmentation methods in the compromised lung segmentation. Published by Elsevier Ltd.
Improving vertebra segmentation through joint vertebra-rib atlases
NASA Astrophysics Data System (ADS)
Wang, Yinong; Yao, Jianhua; Roth, Holger R.; Burns, Joseph E.; Summers, Ronald M.
2016-03-01
Accurate spine segmentation allows for improved identification and quantitative characterization of abnormalities of the vertebra, such as vertebral fractures. However, in existing automated vertebra segmentation methods on computed tomography (CT) images, leakage into nearby bones such as ribs occurs due to the close proximity of these visibly intense structures in a 3D CT volume. To reduce this error, we propose the use of joint vertebra-rib atlases to improve the segmentation of vertebrae via multi-atlas joint label fusion. Segmentation was performed and evaluated on CTs containing 106 thoracic and lumbar vertebrae from 10 pathological and traumatic spine patients on an individual vertebra level basis. Vertebra atlases produced errors where the segmentation leaked into the ribs. The use of joint vertebra-rib atlases produced a statistically significant increase in the Dice coefficient from 92.5 ± 3.1% to 93.8 ± 2.1% for the left and right transverse processes, and a decrease in the mean and max surface distance from 0.75 ± 0.60 mm and 8.63 ± 4.44 mm to 0.30 ± 0.27 mm and 3.65 ± 2.87 mm, respectively.
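The mean and max surface distances reported above measure how far a segmentation boundary strays from the ground-truth boundary. A brute-force 2D sketch (one-sided distances, hypothetical helper names, not the authors' implementation):

```python
import numpy as np

def boundary_points(mask: np.ndarray) -> np.ndarray:
    """Voxels of `mask` that touch at least one background neighbour (4-connectivity, 2D)."""
    m = mask.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return np.argwhere(m & ~interior)

def surface_distances(seg: np.ndarray, gt: np.ndarray, spacing: float = 1.0):
    """Mean and max distance from each segmentation boundary point to the
    nearest ground-truth boundary point, scaled by the voxel spacing (mm)."""
    s, g = boundary_points(seg), boundary_points(gt)
    d = np.sqrt(((s[:, None, :] - g[None, :, :]) ** 2).sum(-1)).min(axis=1) * spacing
    return d.mean(), d.max()
```

For production use one would compute this with a distance transform rather than the O(n²) pairwise distances shown here.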
An atlas of the prenatal mouse brain: gestational day 14.
Schambra, U B; Silver, J; Lauder, J M
1991-11-01
A prenatal atlas of the mouse brain is presently unavailable and is needed for studies of normal and abnormal development, using techniques including immunocytochemistry and in situ hybridization. This atlas will be especially useful for researchers studying transgenic and mutant mice. This collection of photomicrographs and corresponding drawings of Gestational Day (GD) 14 mouse brain sections is an excerpt from a larger atlas encompassing GD 12-18. In composing this atlas, available published studies on the developing rodent brain were consulted to aid in the detailed labeling of embryonic brain structures. C57Bl/6J mice were mated for 1 h, and the presence of a copulation plug was designated as GD 0. GD 14 embryos were perfused transcardially with 4% paraformaldehyde in 0.1 M phosphate buffer and embedded in paraffin. Serial sections (10 microns thick) were cut through whole heads in sagittal and horizontal planes. They were stained with hematoxylin and eosin and photographed. Magnifications were 43X and 31X for the horizontal and sagittal sections, respectively. Photographs were traced and line drawings prepared using Adobe Illustrator on a Macintosh computer.
Benardini, James N; La Duc, Myron T; Ballou, David; Koukol, Robert
2014-01-01
On November 26, 2011, the Mars Science Laboratory (MSL) launched from Florida's Cape Canaveral Air Force Station aboard an Atlas V 541 rocket, taking its first step toward exploring the past habitability of Mars' Gale Crater. Because microbial contamination could profoundly impact the integrity of the mission, and compliance with international treaty was a necessity, planetary protection measures were implemented on all MSL hardware to verify that bioburden levels complied with NASA regulations. The cleanliness of the Atlas V payload fairing (PLF) and associated ground support systems used to launch MSL were also evaluated. By applying proper recontamination countermeasures early and often in the encapsulation process, the PLF was kept extremely clean and was shown to pose little threat of recontaminating the enclosed MSL flight system upon launch. Contrary to prelaunch estimates that assumed that the interior PLF spore burden ranged from 500 to 1000 spores/m², the interior surfaces of the Atlas V PLF were extremely clean, housing a mere 4.65 spores/m². Reported here are the practices and results of the campaign to implement and verify planetary protection measures on the Atlas V launch vehicle and associated ground support systems used to launch MSL. All these facilities and systems were very well kept and exceeded the levels of cleanliness and rigor required in launching the MSL payload.
Aerospace Test Facilities at NASA LeRC Plumbrook
NASA Technical Reports Server (NTRS)
1992-01-01
An overview of the facilities and research being conducted at LeRC's Plumbrook field station is given. The video highlights four main structures and explains their uses. The Space Power Facility is the world's largest space environment simulation chamber, where spacebound hardware is tested in simulations of the vacuum and extreme heat and cold of the space plasma environment. This facility was used to prepare Atlas 1 rockets to ferry CRRES into orbit; it will also be used to test space nuclear electric power generation systems. The Spacecraft Propulsion Research Facility allows rocket vehicles to be hot fired in a simulated space environment. In the Cryogenic Propellant Tank Facility, researchers are developing technology for storing and transferring liquid hydrogen in space. There is also a Hypersonic Wind Tunnel which can perform flow tests with winds up to Mach 7.
Khan, Arshad M.; Perez, Jose G.; Wells, Claire E.; Fuentes, Olac
2018-01-01
The rat has arguably the most widely studied brain among all animals, with numerous reference atlases for rat brain having been published since 1946. For example, many neuroscientists have used the atlases of Paxinos and Watson (PW, first published in 1982) or Swanson (S, first published in 1992) as guides to probe or map specific rat brain structures and their connections. Despite nearly three decades of contemporaneous publication, no independent attempt has been made to establish a basic framework that allows data mapped in PW to be placed in register with S, or vice versa. Such data migration would allow scientists to accurately contextualize neuroanatomical data mapped exclusively in only one atlas with data mapped in the other. Here, we provide a tool that allows levels from any of the seven published editions of atlases comprising three distinct PW reference spaces to be aligned to atlas levels from any of the four published editions representing S reference space. This alignment is based on registration of the anteroposterior stereotaxic coordinate (z) measured from the skull landmark, Bregma (β). Atlas level alignments performed along the z axis using one-dimensional Cleveland dot plots were in general agreement with alignments obtained independently using a custom-made computer vision application that utilized the scale-invariant feature transform (SIFT) and Random Sample Consensus (RANSAC) operation to compare regions of interest in photomicrographs of Nissl-stained tissue sections from the PW and S reference spaces. We show that z-aligned point source data (unpublished hypothalamic microinjection sites) can be migrated from PW to S space to a first-order approximation in the mediolateral and dorsoventral dimensions using anisotropic scaling of the vector-formatted atlas templates, together with expert-guided relocation of obvious outliers in the migrated datasets. 
The migrated data can be contextualized with other datasets mapped in S space, including neuronal cell bodies, axons, and chemoarchitecture, to generate data-constrained hypotheses that would be difficult to formulate otherwise. The alignment strategies provided in this study constitute a basic starting point for first-order, user-guided data migration between PW and S reference spaces along three dimensions that is potentially extensible to other spatial reference systems for the rat brain. PMID:29765309
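The first-order z-axis alignment described above amounts to a nearest-coordinate lookup between atlas level tables. A sketch of the idea; the level numbers and Bregma z coordinates below are placeholders, not the published PW or S tables:

```python
# Hypothetical Bregma-referenced z coordinates (mm, negative = caudal to Bregma).
# These values are illustrative only, not taken from Paxinos/Watson or Swanson.
PW_LEVELS = {("PW", 20): -1.80, ("PW", 21): -2.12, ("PW", 22): -2.56}
S_LEVELS = {("S", 26): -1.78, ("S", 27): -2.00, ("S", 28): -2.45}

def align_level(level, source, target):
    """Map an atlas level to the target-atlas level whose Bregma z
    coordinate is closest (z-axis-only, first-order alignment)."""
    z = source[level]
    return min(target, key=lambda lvl: abs(target[lvl] - z))
```

Mediolateral and dorsoventral migration would then be handled separately, e.g. by the anisotropic scaling of atlas templates the authors describe.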
Development of a computerized atlas of neonatal surgery
NASA Astrophysics Data System (ADS)
Gill, Brijesh S.; Hardin, William D., Jr.
1995-05-01
Digital imaging is an evolving technology with significant potential for enhancing medical education and practice. Current teaching methodologies still rely on the time-honored traditions of group lectures, small group discussions, and clinical preceptorships. Educational content and value are variable. Utilization of electronic media is in its infancy but offers significant potential for enhancing if not replacing current teaching methodologies. This report details our experience with the creation of an interactive atlas on neonatal surgical conditions. The photographic atlas has been one of the classic tools of practice, reference, and especially of education in surgery. The major limitations on current atlases all stem from the fact that they are produced in book form. The limiting factors in the inclusion of large numbers of images in these volumes include the desire to limit the physical size of the book and the costs associated with high quality color reproduction of print images. The structure of the atlases usually makes them reference tools, rather than teaching tools. We have digitized a large number of clinical images dealing with the diagnosis and surgical management of all of the most common neonatal surgical conditions. The flexibility of the computer presentation environment allows the images to be organized in a number of different ways. In addition to a standard captioned atlas, the user may choose to review case histories of several of the more common conditions in neonates, complete with presenting conditions, imaging studies, surgery and pathology. Use of the computer offers the ability to choose multiple views of the images, including comparison views and transparent overlays that point out important anatomical and histopathological structures, and the ability to perform user self-tests. 
This atlas thus takes advantage of several aspects of data management unique to computerized digital imaging, particularly the ability to combine all aspects of medical imaging related to a single case for easy retrieval. This facet unique to digital imaging makes it the obvious choice for new methods of teaching such complex subjects as the clinical management of neonatal surgical conditions. We anticipate that many more subjects in the surgical, pathologic, and radiologic realms will eventually be presented in a similar manner.
Federated data storage system prototype for LHC experiments and data intensive science
NASA Astrophysics Data System (ADS)
Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.
2017-10-01
Rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) has prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area face the task of uniting their resources for future productive work, while at the same time supporting large physics collaborations. In our project we address the fundamental problem of designing a computing architecture that integrates distributed storage resources for LHC experiments and other data-intensive science applications and provides access to data from heterogeneous computing facilities. Studies include development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations, such as read/write/transfer and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests, including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns, and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and to changes in computing style, for instance how a bioinformatics program running on a supercomputer can read and write data from the federated storage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram
Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.
Ballanger, Bénédicte; Tremblay, Léon; Sgambato-Faure, Véronique; Beaudoin-Gobert, Maude; Lavenne, Franck; Le Bars, Didier; Costes, Nicolas
2013-08-15
MRI templates and digital atlases are needed for automated and reproducible quantitative analysis of non-human primate PET studies. Segmenting brain images via multiple atlases outperforms single-atlas labelling in humans. We present a set of atlases manually delineated on brain MRI scans of the monkey Macaca fascicularis. We use this multi-atlas dataset to evaluate two automated methods in terms of accuracy, robustness and reliability in segmenting brain structures on MRI and extracting regional PET measures. Twelve individual Macaca fascicularis high-resolution 3DT1 MR images were acquired. Four individual atlases were created by manually drawing 42 anatomical structures, including cortical and sub-cortical structures, white matter regions, and ventricles. To create the MRI template, we first chose one MRI to define a reference space, and then performed a two-step iterative procedure: affine registration of individual MRIs to the reference MRI, followed by averaging of the twelve resampled MRIs. Automated segmentation in native space was obtained in two ways: 1) Maximum probability atlases were created by decision fusion of two to four individual atlases in the reference space, and transformation back into the individual native space (MAXPROB)(.) 2) One to four individual atlases were registered directly to the individual native space, and combined by decision fusion (PROPAG). Accuracy was evaluated by computing the Dice similarity index and the volume difference. The robustness and reproducibility of PET regional measurements obtained via automated segmentation was evaluated on four co-registered MRI/PET datasets, which included test-retest data. Dice indices were always over 0.7 and reached maximal values of 0.9 for PROPAG with all four individual atlases. There was no significant mean volume bias. The standard deviation of the bias decreased significantly when increasing the number of individual atlases. 
MAXPROB performed better when increasing the number of atlases used. When all four atlases were used for the MAXPROB creation, the accuracy of morphometric segmentation approached that of the PROPAG method. PET measures extracted either via automatic methods or via the manually defined regions were strongly correlated, with no significant regional differences between methods. Intra-class correlation coefficients for test-retest data were over 0.87. Compared to single atlas extractions, multi-atlas methods improve the accuracy of region definition. They also perform comparably to manually defined regions for PET quantification. Multiple atlases of Macaca fascicularis brains are now available and allow reproducible and simplified analyses. Copyright © 2013 Elsevier Inc. All rights reserved.
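The decision-fusion step shared by the MAXPROB and PROPAG strategies can be illustrated with a per-voxel majority vote over co-registered labelmaps. A minimal NumPy sketch, not the authors' code:

```python
import numpy as np

def majority_vote(labelmaps):
    """Fuse co-registered integer labelmaps by per-voxel majority vote.
    Ties are resolved in favour of the lowest label, as np.argmax does."""
    stack = np.stack(labelmaps)  # shape: (n_atlases, *image_shape)
    n_labels = int(stack.max()) + 1
    # Count, for each label, how many atlases voted for it at each voxel.
    votes = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)])
    return votes.argmax(axis=0)
```

As the abstract reports, accuracy tends to improve as more individual atlases contribute votes, since independent registration errors average out.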
Poynton, Clare B; Chen, Kevin T; Chonde, Daniel B; Izquierdo-Garcia, David; Gollub, Randy L; Gerstner, Elizabeth R; Batchelor, Tracy T; Catana, Ciprian
2014-01-01
We present a new MRI-based attenuation correction (AC) approach for integrated PET/MRI systems that combines both segmentation- and atlas-based methods by incorporating dual-echo ultra-short echo-time (DUTE) and T1-weighted (T1w) MRI data and a probabilistic atlas. Segmented atlases were constructed from CT training data using a leave-one-out framework and combined with T1w, DUTE, and CT data to train a classifier that computes the probability of air/soft tissue/bone at each voxel. This classifier was applied to segment the MRI of the subject of interest, and attenuation maps (μ-maps) were generated by assigning specific linear attenuation coefficients (LACs) to each tissue class. The μ-maps generated with this "Atlas-T1w-DUTE" approach were compared to those obtained from DUTE data using a previously proposed method. For validation of the segmentation results, segmented CT μ-maps were considered the "silver standard"; the segmentation accuracy was assessed qualitatively and quantitatively through calculation of the Dice similarity coefficient (DSC). Relative change (RC) maps between the CT and MRI-based attenuation-corrected PET volumes were also calculated for a global voxel-wise assessment of the reconstruction results. The μ-maps obtained using the Atlas-T1w-DUTE classifier agreed well with those derived from CT; the mean DSCs for the Atlas-T1w-DUTE-based μ-maps across all subjects were higher than those for DUTE-based μ-maps; the atlas-based μ-maps also showed a lower percentage of misclassified voxels across all subjects. RC maps from the atlas-based technique also demonstrated improvement in the PET data compared to the DUTE method, both globally and regionally.
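The final step of any segmentation-based AC method, assigning an LAC to each tissue class, is in effect a lookup table applied to the labelmap. A sketch with illustrative 511 keV coefficients; the class labels and values are assumptions for demonstration, not the paper's table:

```python
import numpy as np

# Illustrative linear attenuation coefficients at 511 keV (cm^-1);
# placeholder values, not taken from the paper.
LAC = {0: 0.0,     # air
       1: 0.0975,  # soft tissue
       2: 0.151}   # bone

def mu_map(segmentation: np.ndarray) -> np.ndarray:
    """Build an attenuation map by assigning each tissue class its LAC."""
    lut = np.array([LAC[k] for k in sorted(LAC)])
    return lut[segmentation]  # integer labelmap indexes the lookup table
```

The probabilistic classifier in the paper decides the per-voxel class; once the labelmap exists, μ-map generation reduces to this indexing step.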
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Mary, E-mail: maryfeng@umich.ed; Moran, Jean M.; Koelling, Todd
2011-01-01
Purpose: Cardiac toxicity is an important sequela of breast radiotherapy. However, the relationship between dose to cardiac structures and subsequent toxicity has not been well defined, partially due to variations in substructure delineation, which can lead to inconsistent dose reporting and the failure to detect potential correlations. Here we have developed a heart atlas and evaluated its effect on contour accuracy and concordance. Methods and Materials: A detailed cardiac computed tomography scan atlas was developed jointly by cardiology, cardiac radiology, and radiation oncology. Seven radiation oncologists were recruited to delineate the whole heart, left main and left anterior descending interventricular branches, and right coronary arteries on four cases before and after studying the atlas. Contour accuracy was assessed by percent overlap with gold standard atlas volumes. The concordance index was also calculated. Standard radiation fields were applied. Doses to observer-contoured cardiac structures were calculated and compared with gold standard contour doses. Pre- and post-atlas values were analyzed using a paired t test. Results: The cardiac atlas significantly improved contour accuracy and concordance. Percent overlap and concordance index of observer-contoured cardiac and gold standard volumes were 2.3-fold improved for all structures (p < 0.002). After application of the atlas, reported mean doses to the whole heart, left main artery, left anterior descending interventricular branch, and right coronary artery were within 0.1, 0.9, 2.6, and 0.6 Gy, respectively, of gold standard doses. Conclusions: This validated University of Michigan cardiac atlas may serve as a useful tool in future studies assessing cardiac toxicity and in clinical trials which include dose volume constraints to the heart.
Discriminative confidence estimation for probabilistic multi-atlas label fusion.
Benkarim, Oualid M; Piella, Gemma; González Ballester, Miguel Angel; Sanroma, Gerard
2017-12-01
Quantitative neuroimaging analyses often rely on the accurate segmentation of anatomical brain structures. In contrast to manual segmentation, automatic methods offer reproducible outputs and provide scalability to study large databases. Among existing approaches, multi-atlas segmentation has recently shown to yield state-of-the-art performance in automatic segmentation of brain images. It consists in propagating the labelmaps from a set of atlases to the anatomy of a target image using image registration, and then fusing these multiple warped labelmaps into a consensus segmentation on the target image. Accurately estimating the contribution of each atlas labelmap to the final segmentation is a critical step for the success of multi-atlas segmentation. Common approaches to label fusion either rely on local patch similarity, probabilistic statistical frameworks or a combination of both. In this work, we propose a probabilistic label fusion framework based on atlas label confidences computed at each voxel of the structure of interest. Maximum likelihood atlas confidences are estimated using a supervised approach, explicitly modeling the relationship between local image appearances and segmentation errors produced by each of the atlases. We evaluate different spatial pooling strategies for modeling local segmentation errors. We also present a novel type of label-dependent appearance features based on atlas labelmaps that are used during confidence estimation to increase the accuracy of our label fusion. Our approach is evaluated on the segmentation of seven subcortical brain structures from the MICCAI 2013 SATA Challenge dataset and the hippocampi from the ADNI dataset. Overall, our results indicate that the proposed label fusion framework achieves superior performance to state-of-the-art approaches in the majority of the evaluated brain structures and shows more robustness to registration errors. Copyright © 2017 Elsevier B.V. All rights reserved.
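The core of confidence-weighted label fusion can be sketched as voting in which each atlas's vote at a voxel is scaled by its local confidence. A minimal illustration; the supervised estimation of the confidences themselves, the central contribution of the paper, is omitted here:

```python
import numpy as np

def weighted_label_fusion(labelmaps, confidences):
    """Fuse warped atlas labelmaps using per-atlas, per-voxel confidence
    weights in [0, 1]: each atlas votes for its label with its confidence."""
    stack = np.stack(labelmaps)   # (n_atlases, *image_shape), integer labels
    conf = np.stack(confidences)  # same shape, confidence weights
    n_labels = int(stack.max()) + 1
    # Accumulate confidence-weighted votes per label, then pick the winner.
    votes = np.stack([((stack == l) * conf).sum(axis=0) for l in range(n_labels)])
    return votes.argmax(axis=0)
```

Uniform confidences reduce this to plain majority voting; the paper's contribution is learning voxel-wise confidences from local image appearance and past segmentation errors.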
Nonlocal atlas-guided multi-channel forest learning for human brain labeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Guangkai; Gao, Yaozong; Wu, Guorong
Purpose: It is important for many quantitative brain studies to label meaningful anatomical regions in MR brain images. However, due to high complexity of brain structures and ambiguous boundaries between different anatomical regions, the anatomical labeling of MR brain images is still quite a challenging task. In many existing label fusion methods, appearance information is widely used. However, since local anatomy in the human brain is often complex, the appearance information alone is limited in characterizing each image point, especially for identifying the same anatomical structure across different subjects. Recent progress in computer vision suggests that the context features can be very useful in identifying an object from a complex scene. In light of this, the authors propose a novel learning-based label fusion method by using both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image). Methods: In particular, the authors employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and target labels (i.e., corresponding to certain anatomical structures). Specifically, at each of the iterations, the random forest will output tentative labeling maps of the target image, from which the authors compute spatial label context features and then use in combination with original appearance features of the target image to refine the labeling. Moreover, to accommodate the high inter-subject variations, the authors further extend their learning-based label fusion to a multi-atlas scenario, i.e., they train a random forest for each atlas and then obtain the final labeling result according to the consensus of results from all atlases. Results: The authors have comprehensively evaluated their method on both public LONI LPBA40 and IXI datasets.
To quantitatively evaluate the labeling accuracy, the authors use the dice similarity coefficient to measure the overlap degree. Their method achieves average overlaps of 82.56% on 54 regions of interest (ROIs) and 79.78% on 80 ROIs, respectively, which significantly outperform the baseline method (random forests), with the average overlaps of 72.48% on 54 ROIs and 72.09% on 80 ROIs, respectively. Conclusions: The proposed methods have achieved the highest labeling accuracy, compared to several state-of-the-art methods in the literature.
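The spatial label-context features described above, labels sampled from a tentative labelmap at fixed offsets around each voxel, can be sketched as one-hot encodings of shifted labelmaps. An illustrative 2D version; the offset set and encoding are assumptions for demonstration, not the authors' exact feature design:

```python
import numpy as np

def context_features(labelmap: np.ndarray, offsets, n_labels: int):
    """For each voxel, gather the tentative label at each (dy, dx) offset
    and one-hot encode it, giving (h, w, len(offsets) * n_labels) features."""
    pad = max(max(abs(dy), abs(dx)) for dy, dx in offsets)
    padded = np.pad(labelmap, pad, mode="edge")  # replicate border labels
    h, w = labelmap.shape
    feats = []
    for dy, dx in offsets:
        shifted = padded[pad + dy: pad + dy + h, pad + dx: pad + dx + w]
        feats.append(np.eye(n_labels)[shifted])  # one-hot, shape (h, w, n_labels)
    return np.concatenate(feats, axis=-1)
```

In the paper's iterative scheme, these context features would be concatenated with appearance features and fed back into the random forest to refine the labeling at each round.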
How to keep the Grid full and working with ATLAS production and physics jobs
NASA Astrophysics Data System (ADS)
Pacheco Pagés, A.; Barreiro Megino, F. H.; Cameron, D.; Fassi, F.; Filipcic, A.; Di Girolamo, A.; González de la Hoz, S.; Glushkov, I.; Maeno, T.; Walker, R.; Yang, W.; ATLAS Collaboration
2017-10-01
The ATLAS production system provides the infrastructure to process the millions of events collected during LHC Run 1 and the first two years of Run 2 using grid, cloud, and high-performance computing resources. In this contribution we address the strategies and improvements that have been implemented in the production system to optimize performance and achieve the highest efficiency of the available resources from an operational perspective, focusing on recent developments.
Development, Validation and Integration of the ATLAS Trigger System Software in Run 2
NASA Astrophysics Data System (ADS)
Keyes, Robert; ATLAS Collaboration
2017-10-01
The trigger system of the ATLAS detector at the LHC is a combination of hardware, firmware, and software, associated to various sub-detectors that must seamlessly cooperate in order to select one collision of interest out of every 40,000 delivered by the LHC every millisecond. These proceedings discuss the challenges, organization and work flow of the ongoing trigger software development, validation, and deployment. The goal of this development is to ensure that the most up-to-date algorithms are used to optimize the performance of the experiment. The goal of the validation is to ensure the reliability and predictability of the software performance. Integration tests are carried out to ensure that the software deployed to the online trigger farm during data-taking runs as desired. Trigger software is validated by emulating online conditions using a benchmark run and mimicking the reconstruction that occurs during normal data-taking. This exercise is computationally demanding and thus runs on the ATLAS high performance computing grid with high priority. Performance metrics ranging from low-level memory and CPU requirements to distributions and efficiencies of high-level physics quantities are visualized and validated by a range of experts. This is a multifaceted critical task that ties together many aspects of the experimental effort and thus directly influences the overall performance of the ATLAS experiment.
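The selection factor quoted above implies the overall rate reduction the trigger must deliver; the arithmetic can be made explicit (the ~1 kHz output rate is derived from the figures in the text, not stated there):

```python
# Trigger rate arithmetic implied by "one out of every 40,000 per millisecond".
crossings_per_ms = 40_000                    # bunch crossings per millisecond
input_rate_hz = crossings_per_ms * 1000      # 40 MHz collision rate
output_rate_hz = input_rate_hz / 40_000      # ~1 kHz after selection
rejection_factor = input_rate_hz / output_rate_hz
```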
AXONOMETRIC, LAUNCH DOOR AND DOOR CYLINDER, LAUNCH PLATFORM ROLLER GUIDE, ...
AXONOMETRIC, LAUNCH DOOR AND DOOR CYLINDER, LAUNCH PLATFORM ROLLER GUIDE, CRIB SUSPENSION SHOCK STRUT, LAUNCH PLATFORM - Dyess Air Force Base, Atlas F Missile Site S-8, Launch Facility, Approximately 3 miles east of Winters, 500 feet southwest of Highway 1770, center of complex, Winters, Runnels County, TX
CAVEman: Standardized anatomical context for biomedical data mapping.
Turinsky, Andrei L; Fanea, Elena; Trinh, Quang; Wat, Stephen; Hallgrímsson, Benedikt; Dong, Xiaoli; Shu, Xueling; Stromer, Julie N; Hill, Jonathan W; Edwards, Carol; Grosenick, Brenda; Yajima, Masumi; Sensen, Christoph W
2008-01-01
The authors have created a software system called the CAVEman, for the visual integration and exploration of heterogeneous anatomical and biomedical data. The CAVEman can be applied for both education and research tasks. The main component of the system is a three-dimensional digital atlas of the adult male human anatomy, structured according to the nomenclature of Terminologia Anatomica. The underlying data-indexing mechanism uses standard ontologies to map a range of biomedical data types onto the atlas. The CAVEman system is now used to visualize genetic processes in the context of the human anatomy and to facilitate visual exploration of the data. Through the use of Java software, the atlas-based system is portable to virtually any computer environment, including personal computers and workstations. Existing Java tools for biomedical data analysis have been incorporated into the system. The affordability of virtual-reality installations has increased dramatically over the last several years. This creates new opportunities for educational scenarios that model important processes in a patient's body, including gene expression patterns, metabolic activity, the effects of interventions such as drug treatments, and eventually surgical simulations.
Li, Lin; Cazzell, Mary; Babawale, Olajide; Liu, Hanli
2016-10-01
Atlas-guided diffuse optical tomography (atlas-DOT) is a computational means to image changes in cortical hemodynamic signals during human brain activities. Graph theory analysis (GTA) is a network analysis tool commonly used in functional neuroimaging to study brain networks. Atlas-DOT has not been analyzed with GTA to derive large-scale brain connectivity/networks based on near-infrared spectroscopy (NIRS) measurements. We introduced an automated voxel classification (AVC) method that facilitated the use of GTA with atlas-DOT images by grouping unequal-sized finite element voxels into anatomically meaningful regions of interest within the human brain. The overall approach included volume segmentation, AVC, and cross-correlation. To demonstrate the usefulness of AVC, we applied reproducibility analysis to resting-state functional connectivity measurements conducted from 15 young adults in a two-week period. We also quantified and compared changes in several brain network metrics between young and older adults, which were in agreement with those reported by a previous positron emission tomography study. Overall, this study demonstrated that AVC is a useful means for facilitating integration or combination of atlas-DOT with GTA and thus for quantifying NIRS-based, voxel-wise resting-state functional brain networks.
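The graph theory analysis (GTA) step described above can be sketched as follows: ROI time series are cross-correlated, the correlation matrix is thresholded into a binary adjacency matrix, and simple network metrics are computed. The synthetic time series, threshold value, and choice of metrics are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal GTA sketch: ROI correlations -> binary network -> network metrics.
import numpy as np

rng = np.random.default_rng(1)
n_roi, n_t = 8, 300
shared = rng.normal(size=n_t)                # common "resting-state" signal
ts = np.array([shared + rng.normal(0, 1.0 + 0.2 * i, n_t) for i in range(n_roi)])

corr = np.corrcoef(ts)                       # ROI x ROI correlation matrix
adj = (np.abs(corr) > 0.3) & ~np.eye(n_roi, dtype=bool)  # thresholded edges

degree = adj.sum(axis=1)                     # connections per ROI
density = adj.sum() / (n_roi * (n_roi - 1))  # fraction of possible edges
```

In the paper's setting, the ROIs would be the anatomically meaningful regions produced by the AVC step rather than synthetic channels.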
[The brain in stereotaxic coordinates (a textbook for colleges)].
Budantsev, A Iu; Kisliuk, O S; Shul'govskiĭ, V V; Rykunov, D S; Iarkov, A V
1993-01-01
The present textbook is directed toward students of universities and medical colleges, young scientists, and practicing doctors who use the stereotaxic method. The Paxinos and Watson stereotaxic rat brain atlas (1982) is the basis of the textbook. The atlas has been transformed into a computer-based educational program with seven laboratory exercises, including insertion of an electrode into the brain, microelectrophoresis, microinjection of drugs into the brain, electrolytic destruction of brain structures, and local brain superfusion. The laboratory exercises are designed so that students not only practice the stereotaxic method but also model simple problems involving stereotaxic surgery in the deep structures of the brain. The textbook is intended to run on IBM PC/AT computers. The volume of the textbook is 1.7 Mbytes.
NASA Astrophysics Data System (ADS)
Kiyan, D.; Jones, A. G.; Fullea, J.; Hogg, C.; Ledo, J.; Sinischalchi, A.; Campanya, J.; Picasso Phase II Team
2010-12-01
The Atlas System of Morocco is an intra-continental mountain belt extending for more than 2,000 km along the NW African plate with a predominant NE-SW trend. The System comprises three main branches: the High Atlas, the Middle Atlas, and the Anti Atlas. We present the results of a very recent multi-institutional magnetotelluric (MT) experiment across the Atlas Mountains region that started in September 2009 and ended in February 2010, comprising acquisition of broadband and long-period MT data. The experiment consisted of two profiles: (1) a N-S oriented profile crossing the Middle Atlas through the Central High Atlas to the east and (2) a NE-SW profile crossing the western High Atlas towards the Anti Atlas to the west. The MT measurements are part of the PICASSO (Program to Investigate Convective Alboran Sea System Overturn) and the concomitant TopoMed (Plate re-organization in the western Mediterranean: Lithospheric causes and topographic consequences - an ESF EUROCORES TOPO-EUROPE project) projects, to develop a better understanding of the internal structure and evolution of the crust and lithosphere of the Atlas Mountains. The MT data have been processed with robust remote reference methods and subjected to comprehensive strike and dimensionality analysis. Two clearly depth-differentiated strike directions are apparent for crustal (5-35 km) and lithospheric (50-150 km) depth ranges. These two orientations are roughly consistent with the NW-SE Africa-Eurasia convergence acting since the late Cretaceous, and the NNE-SSW Middle Atlas, where Miocene to recent alkaline volcanism is present. Two-dimensional (2-D) smooth electrical resistivity models were computed independently for both 50 degrees and 20 degrees E of N strike directions. At the crustal scale, our preliminary results reveal a middle to lower-crustal conductive layer stretching from the Middle Atlas southward towards the High Moulouya basin.
The most resistive (and therefore potentially thickest) lithosphere is found beneath the Central High Atlas. The inversion results are to be tested against other geophysical observables (i.e. topography, geoid and gravity anomalies, surface heat flow and seismic velocities) using the software package LitMod. This software combines petrological and geophysical modelling of the lithosphere and sub-lithospheric upper mantle within an internally consistent thermodynamic-geophysical framework, where all relevant properties are functions of temperature, pressure and composition.
He, Baorong; Yan, Liang; Zhao, Qinpeng; Chang, Zhen; Hao, Dingjun
2014-12-01
Most atlas fractures can be effectively treated nonoperatively with external immobilization unless there is an injury to the transverse atlantal ligament. Surgical stabilization is most commonly achieved using a posterior approach with fixation of C1-C2 or C0-C2, but these treatments usually result in loss of the normal motion of the C1-C2 and C0-C1 joints. To clinically validate feasibility, safety, and value of open reduction and fixation using an atlas polyaxial lateral mass screw-plate construct in unstable atlas fractures. Retrospective review of patients who sustained unstable atlas fractures treated with polyaxial lateral mass screw-plate construct. Twenty-two patients with unstable atlas fractures who underwent posterior atlas polyaxial lateral mass screw-plate fixation were analyzed. Visual analog scale, neurologic status, and radiographs for fusion. From January 2011 to September 2012, 22 patients with unstable atlas fractures were treated with this technique. Patients' charts and radiographs were reviewed. Bone fusion, internal fixation placement, and integrity of spinal cord and vertebral arteries were assessed via intraoperative and follow-up imaging. Neurologic function, range of motion, and pain levels were assessed clinically on follow-up. All patients were followed up from 12 to 32 months, with an average of 22.5±18.0 months. A total of 22 plates were placed, and all 44 screws were inserted into the atlas lateral masses. The mean duration of the procedure was 86 minutes, and the average estimated blood loss was 120 mL. Computed tomography scans 9 months after surgery confirmed that fusion was achieved in all cases. There was no screw or plate loosening or breakage in any patient. All patients had well-preserved range of motion. No vascular or neurologic complication was noted, and all patients had a good clinical outcome. 
An open reduction and posterior internal fixation with atlas polyaxial lateral mass screw-plate is a safe and effective surgical option in the treatment of unstable atlas fractures. This technique can provide immediate reduction and preserve C1-C2 motion. Copyright © 2014 Elsevier Inc. All rights reserved.
The FOSS GIS Workbench on the GFZ Load Sharing Facility compute cluster
NASA Astrophysics Data System (ADS)
Löwe, P.; Klump, J.; Thaler, J.
2012-04-01
Compute clusters can be used as GIS workbenches: their wealth of resources allows us to take on geocomputation tasks that exceed the limitations of smaller systems. Harnessing these capabilities requires a Geographic Information System (GIS) able to utilize the available cluster configuration/architecture, with a sufficient degree of user friendliness to allow for wide application. In this paper we report on the first successful porting of GRASS GIS, the oldest and largest Free and Open Source Software (FOSS) GIS project, onto a compute cluster using Platform Computing's Load Sharing Facility (LSF). In 2008, GRASS 6.3 was installed on the GFZ compute cluster, which at that time comprised 32 nodes. Interaction with the GIS was limited to the command-line interface, which required further development to encapsulate the GRASS GIS business layer and facilitate its use by users not familiar with GRASS GIS. During the summer of 2011, multiple versions of GRASS GIS (6.4, 6.5, and 7.0) were installed on the upgraded GFZ compute cluster, now consisting of 234 nodes with 480 CPUs providing 3084 cores. The GFZ compute cluster currently offers 19 different processing queues with varying hardware capabilities and priorities, allowing for fine-grained scheduling and load balancing. After successful testing of core GIS functionalities, including the graphical user interface, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues; these mechanisms are based on earlier work by Neteler et al. (2008). A first application of the new GIS functionality was the generation of maps of simulated tsunamis in the Mediterranean Sea for the Tsunami Atlas of the FP-7 TRIDEC Project (www.tridec-online.eu), for which up to 500 processing nodes were used in parallel. Further trials included the processing of geometrically complex problems requiring significant amounts of processing time.
The GIS cluster successfully completed all of these tasks, with processing times of up to a full 20 CPU-days. The deployment of GRASS GIS on a compute cluster allows our users to tackle GIS tasks previously out of reach of single workstations. In addition, this GRASS GIS cluster implementation will be made available to other users at GFZ in the course of 2012. It will thus become a research utility in the sense of "Software as a Service" (SaaS) and can be seen as our first step towards building a GFZ corporate cloud service.
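Deploying a scripted geocomputation task onto an LSF processing queue typically means generating a batch script with `#BSUB` directives and piping it to `bsub`. The sketch below only builds such a script; the queue name, job name, and the GRASS invocation (GRASS 7's `--exec` batch mode) are assumptions, since the actual GFZ queue names and commands are not given in the text.

```python
# Sketch: generate an LSF submission script for a batch GRASS GIS command.
def lsf_job_script(queue, job_name, grass_cmd):
    """Return the text of an LSF job script wrapping a GRASS command."""
    return "\n".join([
        "#!/bin/sh",
        f"#BSUB -q {queue}",     # target processing queue
        f"#BSUB -J {job_name}",  # job name for scheduler bookkeeping
        "#BSUB -n 1",            # a single-core geocomputation task
        grass_cmd,
    ]) + "\n"

script = lsf_job_script(
    "gis_queue",                 # hypothetical queue name
    "tsunami_map",               # hypothetical job name
    "grass /data/grassdb/loc/mapset --exec r.info map=bathymetry",
)
# Submission would then be: bsub < job.sh
```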
NASA Astrophysics Data System (ADS)
Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh
2016-03-01
The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
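The dose-accumulation step described above reduces to masking a voxel dose map with a segmentation label volume and comparing mean/peak organ doses between automated and expert contours. The arrays, units, and the one-voxel boundary shift below are synthetic placeholders, not the study's data.

```python
# Sketch: mean and peak organ dose from a dose map plus a label volume,
# and the percent error introduced by a boundary-shifted auto-segmentation.
import numpy as np

rng = np.random.default_rng(2)
dose = rng.uniform(0.0, 30.0, size=(16, 16, 16))   # dose per voxel (e.g. mGy)

expert = np.zeros((16, 16, 16), dtype=int)
expert[4:12, 4:12, 4:12] = 1                       # expert contour, organ label 1
auto = np.roll(expert, shift=1, axis=0)            # auto contour, shifted 1 voxel

def organ_dose(dose_map, labels, organ):
    """Mean and peak dose over all voxels carrying the given organ label."""
    voxels = dose_map[labels == organ]
    return voxels.mean(), voxels.max()

mean_e, peak_e = organ_dose(dose, expert, 1)
mean_a, peak_a = organ_dose(dose, auto, 1)
pct_err = 100 * abs(mean_a - mean_e) / mean_e      # boundary errors average out
```

The small `pct_err` here illustrates the paper's hypothesis: random errors at organ boundaries largely cancel when averaging dose over the whole organ.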
2009-04-27
CAPE CANAVERAL, Fla. –– When the Atlas V first stage is raised to vertical, it will be lifted into the Vertical Integration Facility on Cape Canaveral Air Force Station's Launch Complex 41. The Atlas V/Centaur is the launch vehicle for the Lunar Reconnaissance Orbiter, or LRO. The orbiter will carry seven instruments to provide scientists with detailed maps of the lunar surface and enhance our understanding of the moon's topography, lighting conditions, mineralogical composition and natural resources. Information gleaned from LRO will be used to select safe landing sites, determine locations for future lunar outposts and help mitigate radiation dangers to astronauts. Launch of LRO is targeted no earlier than June 2. Photo credit: NASA/Kim Shiflett
TDRS-M Departure from Astrotech and Transport to VIF Pad 41
2017-08-09
Enclosed in its payload fairing, NASA's Tracking and Data Relay Satellite (TDRS-M) is transported from the Astrotech Space Operations Facility in Titusville, Florida, to the Vertical Integration Facility at Space Launch Complex 41 at Cape Canaveral Air Force Station. TDRS-M will be stacked atop the United Launch Alliance Atlas V Centaur upper stage. It will be the latest spacecraft destined for the agency's constellation of communications satellites that allows nearly continuous contact with orbiting spacecraft ranging from the International Space Station and Hubble Space Telescope to the array of scientific observatories. Liftoff atop the ULA Atlas V rocket is scheduled to take place from Cape Canaveral's Space Launch Complex 41 on Aug. 18, 2017.
Multi-Axis Space Inertia Test Facility inside the Altitude Wind Tunnel
1960-04-21
The Multi-Axis Space Test Inertial Facility (MASTIF) in the Altitude Wind Tunnel at the National Aeronautics and Space Administration (NASA) Lewis Research Center. Although the Mercury astronaut training and mission planning were handled by the Space Task Group at Langley Research Center, NASA Lewis played an important role in the program, beginning with the Big Joe launch. Big Joe was a singular attempt early in the program to use a full-scale Atlas booster and simulate the reentry of a mockup Mercury capsule without actually placing it in orbit. A unique three-axis gimbal rig was built inside Lewis’ Altitude Wind Tunnel to test Big Joe’s attitude controls. The control system was vital since the capsule would burn up on reentry if it were not positioned correctly. The mission was intended to assess the performance of the Atlas booster, the reliability of the capsule’s attitude control system and beryllium heat shield, and the capsule recovery process. The September 9, 1959 launch was a success for the control system and heatshield. Only a problem with the Atlas booster kept the mission from being a perfect success. The MASTIF was modified in late 1959 to train Project Mercury pilots to bring a spinning spacecraft under control. An astronaut was secured in a foam couch in the center of the rig. The rig then spun on three axes from 2 to 50 rotations per minute. Small nitrogen gas thrusters were used by the astronauts to bring the MASTIF under control.
2011-07-14
CAPE CANAVERAL, Fla. -- The multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission is positioned behind mobile plexiglass radiation shields in the high bay of the RTG storage facility (RTGF) at NASA's Kennedy Space Center in Florida. The MMRTG was returned to the RTGF following a fit check on MSL's Curiosity rover in the Payload Hazardous Servicing Facility (PHSF). The generator will remain in the RTGF until it is moved to the pad for integration on the rover. The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. MSL's components include a compact car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 25 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Troy Cryder
2011-07-12
CAPE CANAVERAL, Fla. -- Workers dressed in clean room attire, known as bunny suits, transfer the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission on its holding base from the airlock of the Payload Hazardous Servicing Facility (PHSF) into the facility's high bay. In the high bay, the MMRTG temporarily will be installed on the MSL rover, Curiosity, for a fit check but will be installed on the rover for launch at the pad. The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. Curiosity, MSL's car-sized rover, has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is planned for Nov. 25 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Cory Huston
Object-color-signal prediction using wraparound Gaussian metamers.
Mirzaei, Hamidreza; Funt, Brian
2014-07-01
In 2009, Alexander Logvinenko introduced an object-color atlas based on idealized reflectances called rectangular metamers. For a given color signal, the atlas specifies a unique reflectance that is metameric to it under the given illuminant. The atlas is complete and illuminant invariant, but not possible to implement in practice. He later introduced a parametric representation of the object-color atlas based on smoother "wraparound Gaussian" functions. In this paper, these wraparound Gaussians are used to predict illuminant-induced color signal changes: the method determines the wraparound-Gaussian reflectance that is metameric to a given color signal and then computationally "relights" that reflectance to determine what its color signal would be under any other illuminant. Since that reflectance is in the metamer set, the prediction is also physically realizable, which cannot be guaranteed for predictions obtained via von Kries scaling. Testing on Munsell spectra and a multispectral image shows that the proposed method outperforms predictions based on both von Kries scaling and the Bradford transform.
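For contrast with the relighting approach, the von Kries baseline mentioned above predicts the color signal under a new illuminant by independently rescaling each cone channel by the ratio of the illuminants' white responses. The 3-vectors below are synthetic illustrations, not data from the paper.

```python
# Von Kries scaling: per-channel diagonal adaptation of cone responses.
import numpy as np

def von_kries(signal_a, white_a, white_b):
    """Map a cone-response triple from illuminant A to illuminant B."""
    scale = np.asarray(white_b) / np.asarray(white_a)  # per-channel gains
    return np.asarray(signal_a) * scale

white_a = np.array([0.9, 1.0, 1.1])   # cone responses to illuminant A's white
white_b = np.array([1.1, 1.0, 0.8])   # cone responses to illuminant B's white
sig = np.array([0.4, 0.5, 0.3])       # a surface's cone responses under A
pred = von_kries(sig, white_a, white_b)
```

Unlike the metamer-relighting prediction, nothing constrains `pred` to be the response of a physically realizable reflectance, which is the shortcoming the paper targets.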
Boeing CST-100 Heat Shield Testing
2017-05-31
A heat shield is used during separation test activities with Boeing's Starliner structural test article. The test article is undergoing rigorous qualification testing at the company's Huntington Beach Facility in California. Boeing’s CST-100 Starliner will launch on the Atlas V rocket to the International Space Station as part of NASA’s Commercial Crew Program.
2016-08-22
An Air Force C-5 Galaxy transport plane approaches the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida to deliver the GOES-R spacecraft for launch processing. The GOES series are weather satellites operated by NOAA to enhance forecasts. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
2016-08-23
The GOES-R spacecraft is secured on its work stand inside the Astrotech payload processing facility in Titusville, Florida, near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA Geostationary Operational Environmental Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
GOES-R Uncrating and Move to Vertical
2016-08-23
The GOES-R spacecraft stands vertically inside the Astrotech payload processing facility in Titusville, Florida, near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA Geostationary Operational Environmental Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
Alternative Fuels Data Center: Georgia Transportation Data for Alternative
Atlas from the National Renewable Energy Laboratory. Case studies: "Municipality with a Fleet Commits to Alternative Fuels for the Long Haul" (Jan. 27, 2017); "Workplace Facilities Charges Up Tenants and Property Managers" (Jan. 1, 2015); "DeKalb County Turns
System-of-Systems Technology-Portfolio-Analysis Tool
NASA Technical Reports Server (NTRS)
O'Neil, Daniel; Mankins, John; Feingold, Harvey; Johnson, Wayne
2012-01-01
Advanced Technology Life-cycle Analysis System (ATLAS) is a system-of-systems technology-portfolio-analysis software tool. ATLAS affords capabilities to (1) compare estimates of the mass and cost of an engineering system based on competing technological concepts; (2) estimate life-cycle costs of an outer-space-exploration architecture for a specified technology portfolio; (3) collect data on state-of-the-art and forecasted technology performance, and on operations and programs; and (4) calculate an index of the relative programmatic value of a technology portfolio. ATLAS facilitates analysis by providing a library of analytical spreadsheet models for a variety of systems. A single analyst can assemble a representation of a system of systems from the models and build a technology portfolio. Each system model estimates mass, and life-cycle costs are estimated by a common set of cost models. Other components of ATLAS include graphical-user-interface (GUI) software, algorithms for calculating the aforementioned index, a technology database, a report generator, and a form generator for creating the GUI for the system models. At the time of this reporting, ATLAS is a prototype, embodied in Microsoft Excel and several thousand lines of Visual Basic for Applications code that runs on both Windows and Macintosh computers.
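The text does not give ATLAS's actual index formula, so the following is a purely hypothetical illustration of the kind of portfolio-ranking calculation item (4) describes: a normalized, lower-is-better score over the mass and cost estimates produced by the system models. Every name and weight here is an assumption.

```python
# Hypothetical portfolio-value index (illustrative only; not ATLAS's formula).
def portfolio_index(portfolios, w_mass=0.5, w_cost=0.5):
    """Score portfolios in [0, 1]; lighter and cheaper scores higher."""
    max_mass = max(p["mass_kg"] for p in portfolios)
    max_cost = max(p["cost_musd"] for p in portfolios)
    return {
        p["name"]: 1.0 - (w_mass * p["mass_kg"] / max_mass
                          + w_cost * p["cost_musd"] / max_cost)
        for p in portfolios
    }

scores = portfolio_index([
    {"name": "baseline", "mass_kg": 12000, "cost_musd": 900},
    {"name": "advanced", "mass_kg": 9000, "cost_musd": 1100},
])
```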
2013-08-09
CAPE CANAVERAL, Fla. – Inside the Payload Hazardous Servicing Facility at NASA's Kennedy Space Center in Florida, technicians prepare a thermal blanket for installation on the MAVEN spacecraft's parabolic high gain antenna. MAVEN stands for Mars Atmosphere and Volatile Evolution. The antenna will communicate vast amounts of data to Earth during the mission. MAVEN is being prepared inside the facility for its scheduled November launch aboard a United Launch Alliance Atlas V rocket to Mars. Positioned in an orbit above the Red Planet, MAVEN will study the upper atmosphere of Mars in unprecedented detail. Photo credit: NASA/Jim Grossmann
2013-08-09
CAPE CANAVERAL, Fla. – Inside the Payload Hazardous Servicing Facility at NASA's Kennedy Space Center in Florida, technicians install a thermal blanket on the parabolic high gain antenna of the Mars Atmosphere and Volatile Evolution, or MAVEN spacecraft. The antenna will communicate vast amounts of data to Earth during the mission. MAVEN is being prepared inside the facility for its scheduled November launch aboard a United Launch Alliance Atlas V rocket to Mars. Positioned in an orbit above the Red Planet, MAVEN will study the upper atmosphere of Mars in unprecedented detail. Photo credit: NASA/Jim Grossmann
2013-08-09
CAPE CANAVERAL, Fla. – Inside the Payload Hazardous Servicing Facility at NASA's Kennedy Space Center in Florida, technicians apply tape to the thermal blanket for the MAVEN spacecraft's parabolic high gain antenna. MAVEN stands for Mars Atmosphere and Volatile Evolution. The antenna will communicate vast amounts of data to Earth during the mission. MAVEN is being prepared inside the facility for its scheduled November launch aboard a United Launch Alliance Atlas V rocket to Mars. Positioned in an orbit above the Red Planet, MAVEN will study the upper atmosphere of Mars in unprecedented detail. Photo credit: NASA/Jim Grossmann
The EPTN consensus-based atlas for CT- and MR-based contouring in neuro-oncology.
Eekers, Daniëlle Bp; In 't Ven, Lieke; Roelofs, Erik; Postma, Alida; Alapetite, Claire; Burnet, Neil G; Calugaru, Valentin; Compter, Inge; Coremans, Ida E M; Høyer, Morton; Lambrecht, Maarten; Nyström, Petra Witt; Romero, Alejandra Méndez; Paulsen, Frank; Perpar, Ana; de Ruysscher, Dirk; Renard, Laurette; Timmermann, Beate; Vitek, Pavel; Weber, Damien C; van der Weide, Hiske L; Whitfield, Gillian A; Wiggenraad, Ruud; Troost, Esther G C
2018-03-13
To create a digital, online atlas for organs at risk (OAR) delineation in neuro-oncology based on high-quality computed tomography (CT) and magnetic resonance (MR) imaging. CT and 3 Tesla (3T) MR images (slice thickness 1 mm with intravenous contrast agent) were obtained from the same patient and subsequently fused. In addition, a 7T MR without intravenous contrast agent was obtained from a healthy volunteer. Based on discussion between experienced radiation oncologists, the clinically relevant organs at risk (OARs) to be included in the atlas for neuro-oncology were determined, excluding typical head and neck OARs previously published. The draft atlas was delineated by a senior radiation oncologist, 2 residents in radiation oncology, and a senior neuro-radiologist incorporating relevant available literature. The proposed atlas was then critically reviewed and discussed by European radiation oncologists until consensus was reached. The online atlas includes one CT-scan at two different window settings and one MR scan (3T) showing the OARs in axial, coronal and sagittal view. This manuscript presents the three-dimensional descriptions of the fifteen consensus OARs for neuro-oncology. Among these is a new OAR relevant for neuro-cognition, the posterior cerebellum (illustrated on 7T MR images). In order to decrease inter- and intra-observer variability in delineating OARs relevant for neuro-oncology and thus derive consistent dosimetric data, we propose this atlas to be used in photon and particle therapy. The atlas is available online at www.cancerdata.org and will be updated whenever required. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Lunar Orbiter II - Photographic Mission Summary
NASA Technical Reports Server (NTRS)
1967-01-01
Lunar Orbiter II photography of landing sites, and spacecraft systems performance. The second of five Lunar Orbiter spacecraft was successfully launched from Launch Complex 13 at the Air Force Eastern Test Range by an Atlas-Agena launch vehicle at 23:21 GMT on November 6, 1966. Tracking data from the Cape Kennedy and Grand Bahama tracking stations were used to control and guide the launch vehicle during Atlas powered flight. The Agena-spacecraft combination was maneuvered into a 100-nautical-mile-altitude Earth orbit by the preset on-board Agena computer. In addition, the Agena computer determined the maneuver and engine-burn period required to inject the spacecraft on the cislunar trajectory 20 minutes after launch. Tracking data from the downrange stations and the Johannesburg, South Africa station were used to monitor the entire boost trajectory.
Ionization ratios and elemental abundances in the atmosphere of 68 Tauri
NASA Astrophysics Data System (ADS)
Aouina, A.; Monier, R.
2017-12-01
We have derived the ionization ratios of twelve elements in the atmosphere of the star 68 Tauri (HD 27962) using an ATLAS9 model atmosphere with 72 layers computed for the effective temperature and surface gravity of the star. We then computed a grid of synthetic spectra generated by SYNSPEC49 based on an ATLAS9 model atmosphere in order to model one high resolution spectrum secured by one of us (RM) with the échelle spectrograph SOPHIE at Observatoire de Haute Provence. We could determine the abundances of several elements in their dominant ionization stage, including those defining the Am phenomenon. We thus provide new abundance determinations for 68 Tauri using updated accurate atomic data retrieved from the NIST database which extend previous abundance works.
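The ionization ratios the abstract derives come from the stellar-atmosphere model (ATLAS9/SYNSPEC in the paper); the underlying physics is the standard Saha equation. A minimal illustrative sketch, where the example temperature, electron density, and partition-function ratio are assumptions, not values from the paper:

```python
import math

def saha_ratio(T, n_e, chi_eV, g_ratio=1.0):
    """Ratio n_(i+1)/n_i of adjacent ionization stages (Saha equation).

    T: temperature [K]; n_e: electron number density [m^-3];
    chi_eV: ionization energy [eV]; g_ratio: partition-function ratio.
    """
    k = 1.380649e-23        # Boltzmann constant [J/K]
    m_e = 9.1093837015e-31  # electron mass [kg]
    h = 6.62607015e-34      # Planck constant [J s]
    eV = 1.602176634e-19    # 1 eV in joules
    thermal = (2.0 * math.pi * m_e * k * T / h**2) ** 1.5
    return (2.0 / n_e) * g_ratio * thermal * math.exp(-chi_eV * eV / (k * T))

# Hydrogen at a temperature/density loosely typical of an Am-star photosphere
r = saha_ratio(T=9000.0, n_e=1e20, chi_eV=13.6)
```

The ratio grows with temperature (more thermal ionization) and falls with electron density (more recombination), which is why the dominant stage of each element depends on the adopted model atmosphere.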
Probabilistic atlas based labeling of the cerebral vessel tree
NASA Astrophysics Data System (ADS)
Van de Giessen, Martijn; Janssen, Jasper P.; Brouwer, Patrick A.; Reiber, Johan H. C.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke
2015-03-01
Preoperative imaging of the cerebral vessel tree is essential for planning therapy on intracranial stenoses and aneurysms. Usually, a magnetic resonance angiography (MRA) or computed tomography angiography (CTA) is acquired from which the cerebral vessel tree is segmented. Accurate analysis is helped by the labeling of the cerebral vessels, but labeling is non-trivial due to anatomical topological variability and missing branches due to acquisition issues. In recent literature, labeling the cerebral vasculature around the Circle of Willis has mainly been approached as a graph-based problem. The most successful method, however, requires the definition of all possible permutations of missing vessels, which limits application to subsets of the tree and ignores spatial information about the vessel locations. This research aims to perform labeling using probabilistic atlases that model spatial vessel and label likelihoods. A cerebral vessel tree is aligned to a probabilistic atlas and subsequently each vessel is labeled by computing the maximum label likelihood per segment from label-specific atlases. The proposed method was validated on 25 segmented cerebral vessel trees. Labeling accuracies were close to 100% for large vessels, but dropped to 50-60% for small vessels that were only present in less than 50% of the set. With this work we showed that using solely spatial information of the vessel labels, vessel segments from stable vessels (>50% presence) were reliably classified. This spatial information will form the basis for a future labeling strategy with a very loose topological model.
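The per-segment decision described above, picking the label whose spatial atlas assigns the highest likelihood along the segment, can be sketched as follows (a generic illustration; array names, shapes, and the toy labels are assumptions, not the authors' implementation):

```python
import numpy as np

def label_segment(points, label_atlases):
    """Assign the label whose probabilistic atlas gives the highest mean
    likelihood over the segment's centerline points.

    points: (N, 3) integer voxel coordinates of one vessel segment
    label_atlases: dict name -> 3D array of per-voxel label likelihoods
    """
    scores = {
        name: atlas[points[:, 0], points[:, 1], points[:, 2]].mean()
        for name, atlas in label_atlases.items()
    }
    return max(scores, key=scores.get)

# Toy example: two 4x4x4 likelihood maps for hypothetical labels
ica = np.zeros((4, 4, 4)); ica[0, :, :] = 0.9  # "ICA" likely near x=0
mca = np.zeros((4, 4, 4)); mca[3, :, :] = 0.9  # "MCA" likely near x=3
seg = np.array([[0, 1, 1], [0, 2, 2]])
best = label_segment(seg, {"ICA": ica, "MCA": mca})  # "ICA"
```

Because the score uses only spatial likelihoods, no topological model of the Circle of Willis is needed, which is exactly the property the abstract highlights for handling missing branches.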
Developing an educational curriculum for EnviroAtlas ...
EnviroAtlas is a web-based tool developed by the EPA and its partners that provides interactive tools and resources for users to explore the benefits that people receive from nature, often referred to as ecosystem goods and services. Ecosystem goods and services are important to human health and well-being. Using EnviroAtlas, users can access, view, and analyze diverse information to better understand the potential impacts of decisions. EnviroAtlas provides two primary tools, the Interactive Map and the Eco-Health Relationship Browser. EnviroAtlas integrates geospatial data from a variety of sources so that users can visualize the impacts of decision-making on ecosystems. The Interactive Map allows users to investigate various ecosystem elements (e.g., land cover, pollution, and community development) and compare them across localities in the United States. A key strength of the Interactive Map is that it requires no specialized mapping software; only a computer and an internet connection. As such, it can be used as a powerful educational tool. The Eco-Health Relationship Browser is also a web-based, highly interactive tool that uses existing scientific literature to visually demonstrate the connections between the environment and human health. As an ASPPH/EPA Fellow with a background in environmental science and secondary science education, I am currently developing an educational curriculum to support the EnviroAtlas to
High-Performance Computing User Facility | Computational Science | NREL
The High-Performance Computing (HPC) User Facility at NREL houses the Peregrine supercomputer and the Gyrfalcon Mass Storage System, and provides information on these systems and how to access them.
An automatic approach for 3D registration of CT scans
NASA Astrophysics Data System (ADS)
Hu, Yang; Saber, Eli; Dianat, Sohail; Vantaram, Sreenath Rao; Abhyankar, Vishwas
2012-03-01
CT (Computed tomography) is a widely employed imaging modality in the medical field. Normally, a volume of CT scans is prescribed by a doctor when a specific region of the body (typically neck to groin) is suspected of being abnormal. The doctors are required to make professional diagnoses based upon the obtained datasets. In this paper, we propose an automatic registration algorithm that helps healthcare personnel to automatically align corresponding scans from 'Study' to 'Atlas'. The proposed algorithm is capable of aligning both 'Atlas' and 'Study' into the same resolution through 3D interpolation. After retrieving the scanned slice volume in the 'Study' and the corresponding volume in the original 'Atlas' dataset, a 3D cross correlation method is used to identify and register various body parts.
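The 3D cross-correlation step can be sketched with an FFT-based search for the peak correlation offset. This is a generic illustration, not the authors' implementation; it assumes a pure integer translation and periodic boundaries:

```python
import numpy as np

def find_shift_3d(reference, moving):
    """Integer translation that best aligns `moving` to `reference`,
    found at the peak of the FFT-based 3D cross-correlation."""
    corr = np.fft.ifftn(np.fft.fftn(reference) * np.conj(np.fft.fftn(moving))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the midpoint correspond to negative (wrapped) shifts
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

# Recover a known offset applied to a toy volume
vol = np.zeros((16, 16, 16))
vol[4:8, 4:8, 4:8] = 1.0
shifted = np.roll(vol, shift=(2, -3, 1), axis=(0, 1, 2))
offset = find_shift_3d(shifted, vol)  # (2, -3, 1)
```

In practice the 'Study' and 'Atlas' volumes would first be interpolated to a common resolution, as the abstract describes, before the correlation peak is meaningful.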
New experimental results in atlas-based brain morphometry
NASA Astrophysics Data System (ADS)
Gee, James C.; Fabella, Brian A.; Fernandes, Siddharth E.; Turetsky, Bruce I.; Gur, Ruben C.; Gur, Raquel E.
1999-05-01
In a previous meeting, we described a computational approach to MRI morphometry, in which a spatial warp mapping a reference or atlas image into anatomic alignment with the subject is first inferred. Shape differences with respect to the atlas are then studied by calculating the pointwise Jacobian determinant for the warp, which provides a measure of the change in differential volume about a point in the reference as it transforms to its corresponding position in the subject. In this paper, the method is used to analyze sex differences in the shape and size of the corpus callosum in an ongoing study of a large population of normal controls. The preliminary results of the current analysis support findings in the literature that have observed the splenium to be larger in females than in males.
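The pointwise Jacobian determinant described above can be computed directly from a sampled warp by finite differences. A minimal numpy sketch (the `(3, X, Y, Z)` array layout is an assumption for illustration):

```python
import numpy as np

def jacobian_determinant(warp):
    """Pointwise Jacobian determinant of a sampled 3D warp.

    warp: (3, X, Y, Z) array giving the warped position of each voxel.
    Returns an (X, Y, Z) array of local differential-volume factors."""
    # grads[i][j] = d(warp_i)/d(axis_j), via central finite differences
    grads = np.stack([np.gradient(warp[i], axis=(0, 1, 2)) for i in range(3)])
    # Reorder to a 3x3 Jacobian matrix per voxel, then take determinants
    J = np.einsum('ij...->...ij', grads)
    return np.linalg.det(J)

# Sanity check: the identity warp has unit Jacobian everywhere
X, Y, Z = np.meshgrid(np.arange(8), np.arange(8), np.arange(8), indexing='ij')
identity = np.stack([X, Y, Z]).astype(float)
det = jacobian_determinant(identity)  # ~1.0 at every voxel
```

A determinant above 1 marks local expansion of the atlas toward the subject, below 1 local contraction, which is the quantity the morphometric comparison is built on.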
Data mining for average images in a digital hand atlas
NASA Astrophysics Data System (ADS)
Zhang, Aifeng; Cao, Fei; Pietka, Ewa; Liu, Brent J.; Huang, H. K.
2004-04-01
Bone age assessment is a procedure performed in pediatric patients to quickly evaluate parameters of maturation and growth from a left hand and wrist radiograph. Pietka and Cao have developed a computer-aided diagnosis (CAD) method of bone age assessment based on a digital hand atlas. The aim of this paper is to extend their work by automatically selecting the best representative image from a group of normal children based on specific bony features that reflect skeletal maturity. The group can be of any ethnic origin and gender, from one year to 18 years old, in the digital atlas. This best representative image is defined as the "average" image of the group, which can augment Pietka and Cao's method to facilitate the bone age assessment process.
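One natural reading of the "average" image is the group member whose feature vector lies nearest the group mean. The sketch below is a hedged illustration of that idea; the actual maturity features and selection criterion are the authors', and the names and toy values here are assumptions:

```python
import numpy as np

def most_representative(features):
    """Index of the "average" image: the member whose feature vector
    (e.g., phalangeal/carpal maturity measures) is closest to the
    group mean in Euclidean distance.

    features: (n_images, n_features) array"""
    features = np.asarray(features, dtype=float)
    mean = features.mean(axis=0)
    return int(np.argmin(np.linalg.norm(features - mean, axis=1)))

# Toy group of three images with two hypothetical features each;
# the outlier (row 2) pulls the mean but cannot win the argmin.
feats = np.array([[1.0, 2.0], [1.1, 2.1], [5.0, 9.0]])
idx = most_representative(feats)  # 1
```

Choosing an actual group member, rather than synthesizing a mean image, keeps the reference anatomically realistic, which matters when it is shown alongside a patient's radiograph.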
Centralized Fabric Management Using Puppet, Git, and GLPI
NASA Astrophysics Data System (ADS)
Smith, Jason A.; De Stefano, John S., Jr.; Fetzko, John; Hollowell, Christopher; Ito, Hironori; Karasawa, Mizuki; Pryor, James; Rao, Tejas; Strecker-Kellogg, William
2012-12-01
Managing the infrastructure of a large and complex data center can be extremely difficult without taking advantage of recent technological advances in administrative automation. Puppet is a seasoned open-source tool that is designed for enterprise class centralized configuration management. At the RHIC and ATLAS Computing Facility (RACF) at Brookhaven National Laboratory, we use Puppet along with Git, GLPI, and some custom scripts as part of our centralized configuration management system. In this paper, we discuss how we use these tools for centralized configuration management of our servers and services, change management requiring authorized approval of production changes, a complete version controlled history of all changes made, separation of production, testing and development systems using puppet environments, semi-automated server inventory using GLPI, and configuration change monitoring and reporting using the Puppet dashboard. We will also discuss scalability and performance results from using these tools on a 2,000+ node cluster and 400+ infrastructure servers with an administrative staff of approximately 25 full-time employees (FTEs).
2011-11-17
CAPE CANAVERAL, Fla. -- In the Vertical Integration Facility at Space Launch Complex-41 on Cape Canaveral Air Force Station, spacecraft technicians install the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission on the Curiosity rover. The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. Curiosity, MSL's car-sized rover, has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is scheduled for Nov. 25. For more information, visit http://www.nasa.gov/msl. Photo credit: Department of Energy/Idaho National Laboratory
NASA Astrophysics Data System (ADS)
Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.
2012-12-01
Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
Poynton, Clare B; Chen, Kevin T; Chonde, Daniel B; Izquierdo-Garcia, David; Gollub, Randy L; Gerstner, Elizabeth R; Batchelor, Tracy T; Catana, Ciprian
2014-01-01
We present a new MRI-based attenuation correction (AC) approach for integrated PET/MRI systems that combines both segmentation- and atlas-based methods by incorporating dual-echo ultra-short echo-time (DUTE) and T1-weighted (T1w) MRI data and a probabilistic atlas. Segmented atlases were constructed from CT training data using a leave-one-out framework and combined with T1w, DUTE, and CT data to train a classifier that computes the probability of air/soft tissue/bone at each voxel. This classifier was applied to segment the MRI of the subject of interest and attenuation maps (μ-maps) were generated by assigning specific linear attenuation coefficients (LACs) to each tissue class. The μ-maps generated with this “Atlas-T1w-DUTE” approach were compared to those obtained from DUTE data using a previously proposed method. For validation of the segmentation results, segmented CT μ-maps were considered the “silver standard”; the segmentation accuracy was assessed qualitatively and quantitatively through calculation of the Dice similarity coefficient (DSC). Relative change (RC) maps between the CT and MRI-based attenuation corrected PET volumes were also calculated for a global voxel-wise assessment of the reconstruction results. The μ-maps obtained using the Atlas-T1w-DUTE classifier agreed well with those derived from CT; the mean DSCs for the Atlas-T1w-DUTE-based μ-maps across all subjects were higher than those for DUTE-based μ-maps; the atlas-based μ-maps also showed a lower percentage of misclassified voxels across all subjects. RC maps from the atlas-based technique also demonstrated improvement in the PET data compared to the DUTE method, both globally as well as regionally. PMID:24753982
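The Dice similarity coefficient used for the quantitative assessment is a standard overlap measure; a minimal sketch (the toy masks are illustrative, not data from the study):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two 4x4 masks of 8 voxels each, overlapping on 4 voxels
m1 = np.zeros((4, 4), dtype=bool); m1[:2] = True
m2 = np.zeros((4, 4), dtype=bool); m2[1:3] = True
# dice(m1, m2) == 2*4 / (8+8) == 0.5
```

A DSC of 1 indicates identical segmentations and 0 no overlap, so comparing mean DSCs against the segmented-CT "silver standard" directly ranks the μ-map methods.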
Evolution of the ATLAS Nightly Build System
NASA Astrophysics Data System (ADS)
Undrus, A.
2012-12-01
The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. For over 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verification of patches to existing software, and migration to new platforms and compilers for ATLAS code that currently contains 2200 packages with 4 million C++ and 1.4 million python scripting lines written by about 1000 developers. Recent development was focused on the integration of ATLAS Nightly Build and Installation systems. The nightly releases are distributed and validated and some are transformed into stable releases used for data processing worldwide. The ATLAS Nightly System is managed by the NICOS control tool on a computing farm with 50 powerful multiprocessor nodes. NICOS provides the fully automated framework for the release builds, testing, and creation of distribution kits. The ATN testing framework of the Nightly System runs unit and integration tests in parallel suites, fully utilizing the resources of multi-core machines, and provides the first results even before compilations complete. The NICOS error detection system is based on several techniques and classifies the compilation and test errors according to their severity. It is periodically tuned to place greater emphasis on certain software defects by highlighting the problems on NICOS web pages and sending automatic e-mail notifications to responsible developers. These and other recent developments will be presented and future plans will be described.
Efficient multi-atlas abdominal segmentation on clinically acquired CT with SIMPLE context learning.
Xu, Zhoubing; Burke, Ryan P; Lee, Christopher P; Baucom, Rebeccah B; Poulose, Benjamin K; Abramson, Richard G; Landman, Bennett A
2015-08-01
Abdominal segmentation on clinically acquired computed tomography (CT) has been a challenging problem given the inter-subject variance of human abdomens and complex 3-D relationships among organs. Multi-atlas segmentation (MAS) provides a potentially robust solution by leveraging label atlases via image registration and statistical fusion. We posit that the efficiency of atlas selection requires further exploration in the context of substantial registration errors. The selective and iterative method for performance level estimation (SIMPLE) method is a MAS technique integrating atlas selection and label fusion that has proven effective for prostate radiotherapy planning. Herein, we revisit atlas selection and fusion techniques for segmenting 12 abdominal structures using clinically acquired CT. Using a re-derived SIMPLE algorithm, we show that performance on multi-organ classification can be improved by accounting for exogenous information through Bayesian priors (so called context learning). These innovations are integrated with the joint label fusion (JLF) approach to reduce the impact of correlated errors among selected atlases for each organ, and a graph cut technique is used to regularize the combined segmentation. In a study of 100 subjects, the proposed method outperformed other comparable MAS approaches, including majority vote, SIMPLE, JLF, and the Wolz locally weighted vote technique. The proposed technique provides consistent improvement over state-of-the-art approaches (median improvement of 7.0% and 16.2% in DSC over JLF and Wolz, respectively) and moves toward efficient segmentation of large-scale clinically acquired CT data for biomarker screening, surgical navigation, and data mining. Copyright © 2015 Elsevier B.V. All rights reserved.
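Among the comparison methods, majority vote is the simplest label-fusion baseline. The sketch below shows that baseline only, not the SIMPLE, JLF, or context-learning machinery; the array layout and toy label maps are assumptions:

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse registered atlas label maps by per-voxel majority vote.

    label_maps: (n_atlases, ...) integer label volumes, all already
    registered to the target image."""
    stack = np.asarray(label_maps)
    n_labels = int(stack.max()) + 1
    # Count votes per label at each voxel, then take the winning label
    votes = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy 2x2 "atlases" voting over labels {0, 1, 2}
maps = np.array([
    [[0, 1], [2, 2]],
    [[0, 1], [1, 2]],
    [[0, 0], [1, 2]],
])
fused = majority_vote(maps)  # [[0, 1], [1, 2]]
```

Majority vote weights every atlas equally, which is exactly the weakness that atlas selection (SIMPLE) and correlation-aware fusion (JLF) address in the paper.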
Is Greulich and Pyle atlas still a good reference for bone age assessment?
NASA Astrophysics Data System (ADS)
Zhang, Aifeng; Tsao, Sinchai; Sayre, James W.; Gertych, Arkadiusz; Liu, Brent J.; Huang, H. K.
2007-03-01
The most commonly used method for bone age assessment in clinical practice is the book atlas matching method developed by Greulich and Pyle in the 1950s. Due to changes in both population diversity and nutrition in the United States, this atlas may no longer be a good reference. An updated data set becomes crucial to improve the bone age assessment process. Therefore, a digital hand atlas was built with 1,100 children's hand images, along with patient information and radiologists' readings, of normal Caucasian (CAU), African American (BLK), Hispanic (HIS), and Asian (ASI) males (M) and females (F) with ages ranging from 0 - 18 years. This data was collected from Children's Hospital Los Angeles. A computer-aided-diagnosis (CAD) method has been developed based on features extracted from phalangeal regions of interest (ROIs) and carpal bone ROIs from this digital hand atlas. Using the data collected along with the Greulich and Pyle Atlas-based readings and CAD results, this paper addresses this question: "Do different ethnicities and gender have different bone growth patterns?" To help with data analysis, a novel web-based visualization tool was developed to demonstrate bone growth diversity amongst differing gender and ethnic groups using data collected from the Digital Atlas. The application effectively demonstrates a discrepancy of bone growth pattern amongst different populations based on race and gender. It also has the capability of helping a radiologist determine the normality of skeletal development of a particular patient by visualizing his or her chronological age, radiologist reading, and CAD-assessed bone age relative to the accuracy of the Greulich and Pyle method.
NASA Astrophysics Data System (ADS)
Dowling, J. A.; Burdett, N.; Greer, P. B.; Sun, J.; Parker, J.; Pichler, P.; Stanwell, P.; Chandra, S.; Rivest-Hénault, D.; Ghose, S.; Salvado, O.; Fripp, J.
2014-03-01
Our group have been developing methods for MRI-alone prostate cancer radiation therapy treatment planning. To assist with clinical validation of the workflow we are investigating a cloud platform solution for research purposes. Benefits of cloud computing can include increased scalability, performance and extensibility while reducing total cost of ownership. In this paper we demonstrate the generation of DICOM-RT directories containing an automatic average atlas based electron density image and fast pelvic organ contouring from whole pelvis MR scans.
Although the MYC oncogene has been implicated in cancer, a systematic assessment of alterations of MYC, related transcription factors, and co-regulatory proteins, forming the proximal MYC network (PMN), across human cancers is lacking. Using computational approaches, we define genomic and proteomic features associated with MYC and the PMN across the 33 cancers of The Cancer Genome Atlas. Pan-cancer, 28% of all samples had at least one of the MYC paralogs amplified.
GOES-R Uncrating and Move to Vertical
2016-08-23
Team members remove a protective plastic covering from the GOES-R spacecraft inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA Geostationary Operational Environmental Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
GOES-R Uncrating and Move to Vertical
2016-08-23
The shipping container is lifted off the GOES-R spacecraft inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA Geostationary Operational Environmental Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
GOES-R Uncrating and Move to Vertical
2016-08-23
An overhead crane moves the GOES-R spacecraft toward its work stand inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA Geostationary Operational Environmental Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
GOES-R Uncrating and Move to Vertical
2016-08-23
The GOES-R spacecraft is revealed following its uncrating inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA Geostationary Operational Environmental Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
2016-10-21
The two halves of the payload fairing are fully closed around the Geostationary Operational Environmental Satellite (GOES-R) inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, A.; Hambro, L.; Hooper, K.
2008-07-01
This paper will present the history of the Atlas 36 and Titan 40 Space Launch Complexes (SLC), the facility assessment process, demolition planning, recycle methodology, and actual facility demolition that resulted in a 40% reduction in baseline cost. These two SLCs launched hundreds of payloads into space from Cape Canaveral Air Force Station (AFS), Florida. The Atlas-Centaur family of rockets could lift small- to medium-size satellites designed for communications, weather, or military use, placing them with near pinpoint accuracy into their intended orbits. The larger Titan family was relied upon for heavier lifting needs, including launching military satellites as well as interplanetary probes. But despite their efficiency and cost-effectiveness, the Titan rockets, as well as earlier generation Atlas models, were retired in 2005. Concerns about potential environmental health hazards from PCBs and lead-based paint chipping off the facilities also contributed to the Air Force's decision in 2005 to dismantle and demolish the Atlas and Titan missile-launching systems. Lockheed Martin secured the complex following the final launch, removed equipment, and turned over the site to the Air Force for decommissioning and demolition (D and D). AMEC was retained by the Air Force to perform demolition planning and facility D and D in 2004. AMEC began with a review of historical information, interviews with past operations personnel, and a 100% facility assessment of over 100 structures. There were numerous support buildings that, due to their age, contained asbestos-containing material (ACM), PCB-impacted material, and universal waste that had to be identified and removed prior to demolition. Environmental testing had revealed that the 36B mobile support tower (MST) exceeded the TSCA standard for polychlorinated biphenyls (PCBs) in paint (50 ppm), as did the high bay sections of the Titan Vertical Integration Building (VIB).
Thus, while most of the steel structures could be completely recycled, about one-third of the 36B MST and the affected areas of the VIB were to be consigned to an on-site regulated waste landfill. In all, it is estimated that approximately 10,000,000 kg (11,000 tons) of PCB-coated steel will be land-filled and 23,000,000 kg (25,000 tons) will be recycled. Revenue from recycling the steel and other materials funded additional demolition; finding ways to maximize the recycle value of materials therefore became a key factor in the pre-demolition characterization and implementation strategy. This paper will present the following: - Critical elements in demolition planning while working at an active launch facility; - Characterization and strategy to maximize steel recycling; - Waste disposition strategy to maximize recycle/reuse and minimize disposal; - Recycle options available at DOD installations that provide additional funds for demolition; - Innovation in demolition methodologies for large structures - explosive demolition and large-scale dismantlement; - Health and safety aspects of explosive demolition and large-scale dismantlement. In conclusion: the Cape Canaveral AFS Demolition Program has been a great success due to the integration of multiple operations and contractors working together to determine the most cost-effective demolition methods. It is estimated that through extensive pre-planning, working with CCAFS representatives, and maximizing the recycle credits of various materials, primarily steel, the government will be able to complete what was base-lined as a $30 M demolition program for less than $20 M. Other factors included a competitive subcontractor environment in which subcontractors were encouraged with incentives to maximize recycle/reuse of material and to propose creative demolition solutions. Also, overlapping multiple demolition tasks at multiple facilities allowed for a reduction in field oversight. (authors)
NASA Astrophysics Data System (ADS)
Luo, Ma; Frisken, Sarah F.; Weis, Jared A.; Clements, Logan W.; Unadkat, Prashin; Thompson, Reid C.; Golby, Alexandra J.; Miga, Michael I.
2017-03-01
The quality of brain tumor resection surgery is dependent on the spatial agreement between the preoperative image and the intraoperative anatomy. However, brain shift compromises the aforementioned alignment. Currently, the clinical standard to monitor brain shift is intraoperative magnetic resonance (iMR). While iMR provides a better understanding of brain shift, its cost and encumbrance are considerations for medical centers. Hence, we are developing a model-based method that can be a complementary technology to address brain shift in standard resections, with resource-intensive cases as referrals for iMR facilities. Our strategy constructs a deformation `atlas' containing potential deformation solutions derived from a biomechanical model that account for variables such as cerebrospinal fluid drainage and mannitol effects. Volumetric deformation is estimated with an inverse approach that determines the optimal combinatory `atlas' solution fit to best match measured surface deformation. Accordingly, the preoperative image is updated based on the computed deformation field. This study is the latest development to validate our methodology with iMR. Briefly, preoperative and intraoperative MR images of 2 patients were acquired. Homologous surface points were selected on preoperative and intraoperative scans as measurements of surface deformation and used to drive the inverse problem. To assess the model accuracy, subsurface shift of targets between preoperative and intraoperative states was measured and compared to model prediction. Considering subsurface shift above 3 mm, the proposed strategy provides an average shift correction of 59% across the 2 cases. While further improvements in both the model and the ability to validate with iMR are desired, the results reported are encouraging.
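The inverse step, fitting a combination of precomputed atlas deformation solutions to measured surface displacements, can be caricatured as a least-squares problem. The paper's actual solver and any constraints on the combination are not specified here; this unconstrained fit and the toy data are assumptions:

```python
import numpy as np

def fit_atlas_combination(basis, measured):
    """Least-squares weights for a combination of precomputed deformation
    solutions ("atlas" modes) that best matches measured surface shift.

    basis: (n_modes, n_points * 3) surface displacement of each
           precomputed model solution, sampled at the measured points
    measured: (n_points * 3,) measured surface displacement vector
    Returns (weights, reconstructed displacement)."""
    A = np.asarray(basis, dtype=float).T  # (n_points * 3, n_modes)
    w, *_ = np.linalg.lstsq(A, measured, rcond=None)
    return w, A @ w

# Toy check: a measured shift built from two of three random modes
rng = np.random.default_rng(1)
modes = rng.normal(size=(3, 12))
measured = 0.5 * modes[0] + 0.25 * modes[1]
w, recon = fit_atlas_combination(modes, measured)  # w ~ [0.5, 0.25, 0.0]
```

Once the weights are found from the sparse surface measurements, the same combination applied to the full volumetric atlas solutions yields the deformation field used to update the preoperative image.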
Hangout with CERN: a direct conversation with the public
NASA Astrophysics Data System (ADS)
Rao, Achintya; Goldfarb, Steven; Kahle, Kate
2016-04-01
Hangout with CERN refers to a weekly, half-hour-long, topical webcast hosted at CERN. The aim of the programme is threefold: (i) to provide a virtual tour of various locations and facilities at CERN, (ii) to discuss the latest scientific results from the laboratory, and, most importantly, (iii) to engage in conversation with the public and answer their questions. For each "episode", scientists gather around webcam-enabled computers at CERN and partner institutes/universities, connecting to one another using the Google+ social network's "Hangouts" tool. The show is structured as a conversation mediated by a host, usually a scientist, and viewers can ask questions of the experts in real time through a Twitter hashtag or YouTube comments. The history of Hangout with CERN can be traced back to ICHEP 2012, where several physicists crowded in front of a laptop connected to Google+, using a "Hangout On Air" webcast to explain to the world the importance of the discovery of the Higgs-like boson, announced just two days before at the same conference. Hangout with CERN has also drawn inspiration from two existing outreach endeavours: (i) ATLAS Virtual Visits, which connected remote visitors with scientists in the ATLAS Control Room via video conference, and (ii) the Large Hangout Collider, in which CMS scientists gave underground tours via Hangouts to groups of schools and members of the public around the world. In this paper, we discuss the role of Hangout with CERN as a bi-directional outreach medium and an opportunity to train scientists in effective communication.
JAtlasView: a Java atlas-viewer for browsing biomedical 3D images and atlases.
Feng, Guangjie; Burton, Nick; Hill, Bill; Davidson, Duncan; Kerwin, Janet; Scott, Mark; Lindsay, Susan; Baldock, Richard
2005-03-09
Many three-dimensional (3D) images are routinely collected in biomedical research and a number of digital atlases with associated anatomical and other information have been published. A number of tools are available for viewing this data, ranging from commercial visualization packages to freely available, typically system-architecture-dependent, solutions. Here we discuss an atlas viewer implemented to run on any workstation using the architecture-neutral Java programming language. We report the development of a freely available Java-based viewer for 3D image data, describe the structure and functionality of the viewer, and explain how automated tools can be developed to manage the Java Native Interface code. The viewer allows arbitrary re-sectioning of the data and interactive browsing through the volume. With appropriately formatted data, for example as provided for the Electronic Atlas of the Developing Human Brain, a 3D surface view and anatomical browsing is available. The interface is developed in Java with Java3D providing the 3D rendering. For efficiency the image data is manipulated using the Woolz image-processing library provided as a dynamically linked module for each machine architecture. We conclude that Java provides an appropriate environment for efficient development of these tools and that techniques exist to allow computationally efficient image-processing libraries to be integrated relatively easily.
An Example-Based Multi-Atlas Approach to Automatic Labeling of White Matter Tracts
Yoo, Sang Wook; Guevara, Pamela; Jeong, Yong; Yoo, Kwangsun; Shin, Joseph S.; Mangin, Jean-Francois; Seong, Joon-Kyung
2015-01-01
We present an example-based multi-atlas approach for classifying white matter (WM) tracts into anatomic bundles. Our approach exploits expert-provided example data to automatically classify the WM tracts of a subject. Multiple atlases are constructed to model the example data from multiple subjects in order to reflect the individual variability of bundle shapes and trajectories over subjects. For each example subject, an atlas is maintained to allow the example data of a subject to be added or deleted flexibly. A voting scheme is proposed to facilitate the multi-atlas exploitation of example data. For conceptual simplicity, we adopt the same metrics in both example data construction and WM tract labeling. Due to the huge number of WM tracts in a subject, it is time-consuming to label each WM tract individually. Thus, the WM tracts are grouped according to their shape similarity, and WM tracts within each group are labeled simultaneously. To further enhance the computational efficiency, we implemented our approach on the graphics processing unit (GPU). Through nested cross-validation we demonstrated that our approach yielded high classification performance. The average sensitivities for bundles in the left and right hemispheres were 89.5% and 91.0%, respectively, and their average false discovery rates were 14.9% and 14.2%, respectively. PMID:26225419
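The multi-atlas voting scheme can be illustrated with a minimal sketch: each atlas proposes a bundle label for a tract, and the proposals are fused by (optionally weighted) majority vote. The function and weighting below are hypothetical stand-ins, not the paper's actual similarity metric:

```python
from collections import Counter

def fuse_labels(candidate_labels, weights=None):
    """Majority-vote fusion of per-atlas bundle labels for one tract.

    candidate_labels: one proposed label per atlas.
    weights: optional per-atlas vote weights (e.g. inverse shape
    distance); uniform weights if omitted. Purely illustrative.
    """
    weights = weights or [1.0] * len(candidate_labels)
    votes = Counter()
    for label, w in zip(candidate_labels, weights):
        votes[label] += w
    return votes.most_common(1)[0][0]

# Three atlases vote on the same tract; the majority label wins.
print(fuse_labels(["arcuate", "arcuate", "cingulum"]))  # arcuate
```

Adding or deleting an example subject then simply adds or removes one voter, which is what makes the per-subject atlases easy to maintain.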
Data federation strategies for ATLAS using XRootD
NASA Astrophysics Data System (ADS)
Gardner, Robert; Campana, Simone; Duckeck, Guenter; Elmsheuser, Johannes; Hanushevsky, Andrew; Hönig, Friedrich G.; Iven, Jan; Legger, Federica; Vukotic, Ilija; Yang, Wei; Atlas Collaboration
2014-06-01
In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances come integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks, and a dedicated set of tools provides high-granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and then to globally distributed storage resources. We describe programmatic testing of various federation access modes, including direct access over the wide area network and staging of remote data files to local disk. To support job-brokering decisions, a time-dependent cost-of-data-access matrix is computed, taking into account network performance and key site performance factors. The system's response to production-scale physics analysis workloads, either from individual end-users or ATLAS analysis services, is discussed.
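A toy version of such a cost-of-data-access matrix might look as follows. The cost formula, weights, and metric values are invented for illustration, and the site names are only examples; the real matrix is built from measured network and site performance data:

```python
def access_cost(rtt_ms, throughput_mbps, site_load):
    """Toy cost score for reading data at one site from one source:
    lower is cheaper. The formula and weights are purely illustrative,
    not the actual ATLAS brokering cost."""
    return rtt_ms / 10.0 + 1000.0 / max(throughput_mbps, 1.0) + 5.0 * site_load

def build_cost_matrix(sites, sources, metrics):
    """metrics[(site, source)] -> (rtt_ms, throughput_mbps, site_load)."""
    return {
        (site, src): access_cost(*metrics[(site, src)])
        for site in sites
        for src in sources
    }

# Hypothetical measurements: a regional source vs. a transatlantic one.
metrics = {
    ("MWT2", "AGLT2"): (8.0, 800.0, 0.2),
    ("MWT2", "CERN"): (110.0, 200.0, 0.1),
}
costs = build_cost_matrix(["MWT2"], ["AGLT2", "CERN"], metrics)
print(costs[("MWT2", "AGLT2")] < costs[("MWT2", "CERN")])  # True
```

A broker would periodically rebuild such a matrix (hence "time-dependent") and prefer the cheapest viable source for each job.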
FOSS GIS on the GFZ HPC cluster: Towards a service-oriented Scientific Geocomputation Environment
NASA Astrophysics Data System (ADS)
Loewe, P.; Klump, J.; Thaler, J.
2012-12-01
High-performance compute clusters can be used as geocomputation workbenches. Their wealth of resources enables us to take on geocomputation tasks which exceed the limitations of smaller systems. These general capabilities can be harnessed via tools such as a Geographic Information System (GIS), provided they are able to utilize the available cluster configuration/architecture and provide a sufficient degree of user friendliness to allow for wide application. While server-level computing is clearly not sufficient for the growing number of data- or computation-intense tasks undertaken, these tasks do not come close to the requirements needed for access to "top shelf" national cluster facilities. Until recently, such geocomputation research was therefore effectively barred by a lack of access to adequate resources. In this paper we report on the experiences gained by providing GRASS GIS as a software service on an HPC compute cluster at the German Research Centre for Geosciences using Platform Computing's Load Sharing Facility (LSF). GRASS GIS is the oldest and largest Free Open Source (FOSS) GIS project. During ramp-up in 2011, multiple versions of GRASS GIS (v 6.4.2, 6.5 and 7.0) were installed on the HPC compute cluster, which currently consists of 234 nodes with 480 CPUs providing 3084 cores. Nineteen different processing queues with varying hardware capabilities and priorities are provided, allowing for fine-grained scheduling and load balancing. After successful initial testing, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues. The mechanisms are based on earlier work by NETELER et al. (2008) and allow all 3084 cores to be used for GRASS-based geocomputation work. However, in practice applications are limited to the resources assigned to their respective queue.
Applications of the new GIS functionality so far comprise hydrological analysis, remote sensing, and the generation of maps of simulated tsunamis in the Mediterranean Sea for the Tsunami Atlas of the FP-7 TRIDEC Project (www.tridec-online.eu). This included the processing of complex problems requiring significant amounts of processing time, up to a full 20 CPU-days. This GRASS GIS-based service is provided as a research utility in the sense of "Software as a Service" (SaaS) and is a first step towards a GFZ corporate cloud service.
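Deploying a scripted geocomputation task onto an LSF queue reduces, at its core, to a `bsub` submission. A minimal Python wrapper is sketched below; the queue name `geo_q` and the script path are placeholders, not the actual GFZ queue or workload:

```python
import shlex
import subprocess

def submit_grass_job(script_path, queue, cores=1, dry_run=True):
    """Build (and optionally run) a bsub submission for a scripted
    GRASS GIS task. -q selects the queue, -n the number of slots,
    -o the output file. Queue names are site-specific; 'geo_q' in the
    example call is a placeholder."""
    cmd = ["bsub", "-q", queue, "-n", str(cores),
           "-o", script_path + ".out", "bash", script_path]
    if dry_run:
        # Return the command line instead of submitting, for inspection.
        return " ".join(shlex.quote(c) for c in cmd)
    return subprocess.run(cmd, check=True,
                          capture_output=True, text=True).stdout

print(submit_grass_job("tsunami_map.sh", "geo_q", cores=20))
```

Per-queue limits then enforce the "fewer resources as assigned to their respective queue" behaviour described above.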
ATLAS DBM Module Qualification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soha, Aria; Gorisek, Andrej; Zavrtanik, Marko
2014-06-18
This is a technical scope of work (TSW) between the Fermi National Accelerator Laboratory (Fermilab) and the experimenters of Jozef Stefan Institute, CERN, and University of Toronto who have committed to participate in beam tests to be carried out during the 2014 Fermilab Test Beam Facility program. Chemical Vapour Deposition (CVD) diamond has a number of properties that make it attractive for high energy physics detector applications. Its large band-gap (5.5 eV) and large displacement energy (42 eV/atom) make it a material that is inherently radiation tolerant with very low leakage currents and high thermal conductivity. CVD diamond is being investigated by the RD42 Collaboration for use very close to LHC interaction regions, where the most extreme radiation conditions are found. This document builds on that work and proposes a highly spatially segmented diamond-based luminosity monitor to complement the time-segmented ATLAS Beam Conditions Monitor (BCM) so that, when the Minimum Bias Trigger Scintillators (MBTS) and LUCID (LUminosity measurement using a Cherenkov Integrating Detector) have difficulty functioning, the ATLAS luminosity measurement is not compromised.
A digital rat atlas of sectional anatomy
NASA Astrophysics Data System (ADS)
Yu, Li; Liu, Qian; Bai, Xueling; Liao, Yinping; Luo, Qingming; Gong, Hui
2006-09-01
This paper describes a digital rat atlas of sectional anatomy made by milling. Two healthy Sprague-Dawley (SD) rats weighing 160-180 g were used for the generation of this atlas. The rats were depilated completely, then euthanized by CO2. One was prepared via vascular perfusion; the other was directly frozen at -85 °C for over 24 hours. After that, the frozen specimens were transferred into iron molds for embedding. A 3% gelatin solution colored blue was used to fill the molds and then frozen at -85 °C for one or two days. The frozen specimen-blocks were subsequently sectioned on the cryosection-milling machine in a plane oriented approximately transverse to the long axis of the body. The surface of the specimen-blocks was imaged by a scanner and digitized into a 4,600 x 2,580 x 24-bit array by a computer. Finally, 9,475 sectional images (arterial vessels not perfused) and 1,646 sectional images (arterial vessels perfused) were captured, bringing the volume of the digital atlas to 369.35 Gbyte. This digital rat atlas covers the whole rat, and the rat arterial vessels are also presented. We have reconstructed this atlas. The information from the two-dimensional (2-D) images of serial sections and the three-dimensional (3-D) surface model shows that the digital rat atlas we constructed is of high quality. This work lays the foundation for deeper study of the digital rat.
Monitoring Geothermal Features in Yellowstone National Park with ATLAS Multispectral Imagery
NASA Technical Reports Server (NTRS)
Spruce, Joseph; Berglund, Judith
2000-01-01
The National Park Service (NPS) must produce an Environmental Impact Statement for each proposed development in the vicinity of known geothermal resource areas (KGRAs) in Yellowstone National Park (YNP). In addition, the NPS monitors indicator KGRAs for environmental quality and is still in the process of mapping many geothermal areas. The NPS currently maps geothermal features with field survey techniques. High-resolution aerial multispectral remote sensing in the visible, NIR, SWIR, and thermal spectral regions could enable YNP geothermal features to be mapped more quickly and in greater detail. In response, Yellowstone Ecosystems Studies, in partnership with NASA's Commercial Remote Sensing Program, is conducting a study on the use of Airborne Terrestrial Applications Sensor (ATLAS) multispectral data for monitoring geothermal features in the Upper Geyser Basin. ATLAS data were acquired at 2.5 meter resolution on August 17, 2000. These data were processed into land cover classifications and relative temperature maps. For sufficiently large features, the ATLAS data can map geothermal areas in terms of geyser pools and hot springs, plus multiple categories of geothermal runoff that are apparently indicative of temperature gradients and microbial matting communities. In addition, the ATLAS maps clearly identify geyserite areas. The thermal bands contributed to classification success and to the computation of relative temperature. With masking techniques, one can assess the influence of geothermal features on the Firehole River. Preliminary results appear to confirm ATLAS data utility for mapping and monitoring geothermal features. Future work will include classification refinement and additional validation.
Fast Simulation of Electromagnetic Showers in the ATLAS Calorimeter: Frozen Showers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barberio, E.; /Melbourne U.; Boudreau, J.
2011-11-29
One of the most time-consuming processes in simulating pp interactions in the ATLAS detector at the LHC is the simulation of electromagnetic showers in the calorimeter. In order to speed up the event simulation, several parametrisation methods are available in ATLAS. In this paper we present a short description of the frozen shower technique, together with some recent benchmarks and a comparison with full simulation. The expected high rate of proton-proton collisions in the ATLAS detector at the LHC requires large samples of simulated (Monte Carlo) events to study various physics processes. A detailed simulation of particle reactions ('full simulation') in the ATLAS detector is based on GEANT4 and is very accurate. However, due to the complexity of the detector, high particle multiplicity and GEANT4 itself, the average CPU time spent simulating a typical QCD event in a pp collision is 20 or more minutes on modern computers. During detector simulation the largest share of time is spent in the calorimeters (up to 70%), most of which is required for electromagnetic particles in the electromagnetic (EM) part of the calorimeters. This is the motivation for fast simulation approaches which reduce the simulation time without affecting the accuracy. Several fast simulation methods available within the ATLAS simulation framework (the standard Athena-based simulation program) are discussed here, with the focus on the novel frozen shower library (FS) technique. The results obtained with FS are presented as well.
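Stripped to its essence, the frozen-shower idea is a lookup table: pre-simulated showers are substituted for full GEANT4 tracking of low-energy EM particles. The toy sketch below uses invented energy binning and contents; the real library is binned in more variables (energy, position, direction) and stores actual calorimeter hits:

```python
import bisect

class FrozenShowerLibrary:
    """Toy frozen-shower lookup. Showers pre-simulated once are stored
    per energy bin and reused in place of full simulation. Binning and
    payloads here are illustrative, not the ATLAS library format."""

    def __init__(self, energy_bin_edges_mev, showers):
        self.edges = energy_bin_edges_mev    # ascending bin lower edges
        self.showers = showers               # one stored shower per bin

    def lookup(self, energy_mev):
        # Find the bin whose lower edge is <= energy and return its shower.
        i = bisect.bisect_right(self.edges, energy_mev) - 1
        return self.showers[max(i, 0)]

lib = FrozenShowerLibrary(
    [0, 100, 500],
    [["hits<100MeV"], ["hits100-500"], ["hits>500"]])
print(lib.lookup(250))  # ['hits100-500']
```

The time saving comes from replacing minutes of particle-by-particle tracking with one table lookup plus a transform of the stored hits onto the particle's actual position.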
Benchmarking the ATLAS software through the Kit Validation engine
NASA Astrophysics Data System (ADS)
De Salvo, Alessandro; Brasolin, Franco
2010-04-01
The measurement of the experiment software performance is a very important metric in order to choose the most effective resources to be used and to discover the bottlenecks of the code implementation. In this work we present the benchmark techniques used to measure the ATLAS software performance through the ATLAS offline testing engine Kit Validation and the online portal Global Kit Validation. The performance measurements, the data collection, and the online analysis and display of the results will be presented. The results of the measurements on different platforms and architectures will be shown, giving a full report on the CPU power and memory consumption of the Monte Carlo generation, simulation, digitization and reconstruction of the most CPU-intensive channels. The impact of multi-core computing on the ATLAS software performance will also be presented, comparing the behavior of different architectures when increasing the number of concurrent processes. The benchmark techniques described in this paper have been used in the HEPiX group since the beginning of 2008 to help define the performance metrics for High Energy Physics applications, based on the real experiment software.
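The per-job metrics such a benchmark collects (wall time, CPU time, peak memory) can be gathered with standard operating-system facilities. The sketch below is a generic Unix measurement harness, not Kit Validation's own instrumentation:

```python
import resource
import time

def benchmark(fn, *args):
    """Measure wall time, CPU time, and peak RSS of one workload.
    A generic harness for the kinds of per-job metrics a benchmark
    engine collects (illustrative sketch only)."""
    t0, c0 = time.perf_counter(), time.process_time()
    fn(*args)
    wall = time.perf_counter() - t0
    cpu = time.process_time() - c0
    # ru_maxrss: peak resident set size of this process (KB on Linux).
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return {"wall_s": wall, "cpu_s": cpu, "peak_rss_kb": peak}

# A CPU-bound stand-in for a simulation or reconstruction job.
stats = benchmark(lambda n: sum(i * i for i in range(n)), 100_000)
print(sorted(stats))  # ['cpu_s', 'peak_rss_kb', 'wall_s']
```

Running the same harness at increasing concurrency levels is one way to probe the multi-core scaling behaviour the abstract describes.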
Carter, D
1996-01-01
The Canada Center for Remote Sensing, in collaboration with the International Development Research Center, is developing an electronic atlas of Agenda 21, the Earth Summit action plan. This initiative promises to ease access for researchers and practitioners implementing the Agenda 21 action plan, which in its pilot study will focus on biological diversity. Known as the Biodiversity Volume of the Electronic Atlas of Agenda 21 (ELADA 21), this computer software will contain information and data on biodiversity, genetics, species, ecosystems, and ecosystem services. Specifically, it includes several country studies and documentation, as well as interactive scenarios linking biodiversity to socioeconomic issues. ELADA 21 will empower countries and agencies to report on and better manage biodiversity and related information. The atlas can be used to develop and test various scenarios and to exchange information within the South and with industrialized countries. At present, ELADA 21 has generated interest and is becoming more widely available. The challenge confronting the project team, however, is to find the atlas a permanent home: a country or agency willing to assume responsibility for maintaining, upgrading, and updating the software.
Efficient Multi-Atlas Registration using an Intermediate Template Image
Dewey, Blake E.; Carass, Aaron; Blitz, Ari M.; Prince, Jerry L.
2017-01-01
Multi-atlas label fusion is an accurate but time-consuming method of labeling the human brain. Using an intermediate image as a registration target can allow researchers to reduce time constraints by storing the deformations required of the atlas images. In this paper, we investigate the effect of registration through an intermediate template image on multi-atlas label fusion and propose a novel registration technique to counteract the negative effects of through-template registration. We show that overall computation time can be decreased dramatically with minimal impact on final label accuracy and time can be exchanged for improved results in a predictable manner. We see almost complete recovery of Dice similarity over a simple through-template registration using the corrected method and still maintain a 3–4 times speed increase. Further, we evaluate the effectiveness of this method on brains of patients with normal-pressure hydrocephalus, where abnormal brain shape presents labeling difficulties, specifically the ventricular labels. Our correction method creates substantially better ventricular labeling than traditional methods and maintains the speed increase seen in healthy subjects. PMID:28943702
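The speed-up described comes from composing a stored atlas-to-template deformation with a single template-to-subject registration, so the expensive per-atlas half is computed once and reused. A 1-D, nearest-neighbour sketch of that composition follows; real registrations compose dense 3-D deformation fields with interpolation, and the field values below are invented:

```python
def compose(disp_atlas_to_template, disp_template_to_subject):
    """Compose two 1-D displacement fields: atlas -> template -> subject.

    For each atlas voxel x, follow its stored displacement into template
    space, then add the template->subject displacement found there.
    Nearest-neighbour lookup stands in for proper interpolation.
    """
    n = len(disp_atlas_to_template)
    composed = []
    for x in range(n):
        t = x + disp_atlas_to_template[x]          # landing point in template
        t_idx = min(max(int(round(t)), 0), n - 1)  # clamp to the grid
        composed.append(disp_atlas_to_template[x]
                        + disp_template_to_subject[t_idx])
    return composed

a2t = [1, 1, 0, -1]        # stored once per atlas
t2s = [0, 2, 2, 1]         # computed once per subject
print(compose(a2t, t2s))   # [3, 3, 2, 1]
```

The paper's correction step then compensates for the error this through-template shortcut introduces relative to direct atlas-to-subject registration.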
2011-06-27
CAPE CANAVERAL, Fla. -- Workers transport NASA's Juno spacecraft from Astrotech's Payload Processing Facility in Titusville, Fla., to the Hazardous Processing Facility for fueling. The spacecraft will be loaded with the propellant necessary for orbit maneuvers and the attitude control system. Juno is scheduled to launch aboard a United Launch Alliance Atlas V rocket from Cape Canaveral, Fla., Aug. 5. The solar-powered spacecraft will orbit Jupiter's poles 33 times to find out more about the gas giant's origins, structure, atmosphere and magnetosphere and investigate the existence of a solid planetary core. For more information visit: www.nasa.gov/juno. Photo credit: NASA/Troy Cryder
2011-06-27
CAPE CANAVERAL, Fla. -- Workers prepare to transport NASA's Juno spacecraft from Astrotech's Payload Processing Facility in Titusville, Fla., to the Hazardous Processing Facility for fueling. The spacecraft will be loaded with the propellant necessary for orbit maneuvers and the attitude control system. Juno is scheduled to launch aboard a United Launch Alliance Atlas V rocket from Cape Canaveral, Fla., Aug. 5. The solar-powered spacecraft will orbit Jupiter's poles 33 times to find out more about the gas giant's origins, structure, atmosphere and magnetosphere and investigate the existence of a solid planetary core. For more information visit: www.nasa.gov/juno. Photo credit: NASA/Troy Cryder
The ATLAS Event Service: A new approach to event processing
NASA Astrophysics Data System (ADS)
Calafiura, P.; De, K.; Guan, W.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Tsulaia, V.; Van Gemmeren, P.; Wenaus, T.
2015-12-01
The ATLAS Event Service (ES) implements a new fine grained approach to HEP event processing, designed to be agile and efficient in exploiting transient, short-lived resources such as HPC hole-filling, spot market commercial clouds, and volunteer computing. Input and output control and data flows, bookkeeping, monitoring, and data storage are all managed at the event level in an implementation capable of supporting ATLAS-scale distributed processing throughputs (about 4M CPU-hours/day). Input data flows utilize remote data repositories with no data locality or pre-staging requirements, minimizing the use of costly storage in favor of strongly leveraging powerful networks. Object stores provide a highly scalable means of remotely storing the quasi-continuous, fine grained outputs that give ES based applications a very light data footprint on a processing resource, and ensure negligible losses should the resource suddenly vanish. We will describe the motivations for the ES system, its unique features and capabilities, its architecture and the highly scalable tools and technologies employed in its implementation, and its applications in ATLAS processing on HPCs, commercial cloud resources, volunteer computing, and grid resources. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
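The event-level granularity the ES introduces can be sketched as a toy loop: each event range is processed and its output shipped to an object store immediately, so a vanishing node loses at most the in-flight range. The function, event format, and in-memory "object store" below are illustrative stand-ins, not the PanDA/ES interfaces:

```python
import json

def process_event_ranges(ranges, read_event, object_store):
    """Fine-grained event processing: per-range output and bookkeeping.

    ranges: list of (range_id, first_event, last_event) tuples.
    read_event: callable fetching one event's payload (stands in for a
    remote read with no pre-staging).
    object_store: dict standing in for a remote object store.
    """
    done = []
    for range_id, first, last in ranges:
        # "Process" each event in the range (toy transformation).
        out = [read_event(e).upper() for e in range(first, last + 1)]
        # Ship the per-range output immediately; losses on preemption
        # are limited to the range currently being processed.
        object_store[range_id] = json.dumps(out)
        done.append(range_id)                 # event-range bookkeeping
    return done

store = {}
ids = process_event_ranges([("r1", 0, 1), ("r2", 2, 2)],
                           lambda e: f"evt{e}", store)
print(ids, sorted(store))  # ['r1', 'r2'] ['r1', 'r2']
```

The quasi-continuous, fine-grained outputs are exactly why the data footprint on the processing resource stays light.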
Wagner, Maximilian E H; Gellrich, Nils-Claudius; Friese, Karl-Ingo; Becker, Matthias; Wolter, Franz-Erich; Lichtenstein, Juergen T; Stoetzer, Marcus; Rana, Majeed; Essig, Harald
2016-01-01
Objective determination of the orbital volume is important in the diagnostic process and in evaluating the efficacy of medical and/or surgical treatment of orbital diseases. Tools designed to measure orbital volume with computed tomography (CT) often cannot be used with cone beam CT (CBCT) because of inferior tissue representation, although CBCT has the benefit of greater availability and lower patient radiation exposure. Therefore, a model-based segmentation technique is presented as a new method for measuring orbital volume and compared to alternative techniques. Both eyes from thirty subjects with no known orbital pathology who had undergone CBCT as a part of routine care were evaluated (n = 60 eyes). Orbital volume was measured with manual, atlas-based, and model-based segmentation methods. Volume measurements, volume determination time, and usability were compared between the three methods. Differences in means were tested for statistical significance using two-tailed Student's t tests. Neither atlas-based (26.63 ± 3.15 mm³) nor model-based (26.87 ± 2.99 mm³) measurements were significantly different from manual volume measurements (26.65 ± 4.0 mm³). However, the time required to determine orbital volume was significantly longer for manual measurements (10.24 ± 1.21 min) than for atlas-based (6.96 ± 2.62 min, p < 0.001) or model-based (5.73 ± 1.12 min, p < 0.001) measurements. All three orbital volume measurement methods examined can accurately measure orbital volume, although atlas-based and model-based methods seem to be more user-friendly and less time-consuming. The new model-based technique achieves fully automated segmentation results, whereas all atlas-based segmentations at least required manipulations to the anterior closing. Additionally, model-based segmentation can provide reliable orbital volume measurements when CT image quality is poor.
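The comparison of paired measurements (two methods applied to the same orbits) can be reproduced in outline with a stdlib-only paired t statistic. The sample values below are invented, and the paper's exact test setup may differ:

```python
import math
import statistics

def paired_t(a, b):
    """Paired two-tailed t statistic and degrees of freedom for two
    measurement methods applied to the same subjects. Illustrative
    sketch; compare the statistic against a t table for the p-value."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    t = statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))
    return t, n - 1

# Hypothetical per-orbit volumes from two methods (not the paper's data).
manual = [26.1, 27.3, 25.8, 26.9]
model = [26.0, 27.5, 25.9, 26.8]
t, df = paired_t(manual, model)
print(df)  # 3
```

A small |t| at the given degrees of freedom corresponds to the "no significant difference" finding reported for the volume measurements.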
Design, Results, Evolution and Status of the ATLAS Simulation at Point1 Project
NASA Astrophysics Data System (ADS)
Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Fazio, D.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Sedov, A.; Twomey, M. S.; Wang, F.; Zaytsev, A.
2015-12-01
During the LHC Long Shutdown 1 (LS1) period, which started in 2013, the Simulation at Point1 (Sim@P1) project takes advantage, in an opportunistic way, of the TDAQ (Trigger and Data Acquisition) HLT (High-Level Trigger) farm of the ATLAS experiment. This farm provides more than 1300 compute nodes, which are particularly suited for running event generation and Monte Carlo production jobs that are mostly CPU and not I/O bound. It is capable of running up to 2700 Virtual Machines (VMs) each with 8 CPU cores, for a total of up to 22000 parallel jobs. This contribution gives a review of the design, the results, and the evolution of the Sim@P1 project, operating a large scale OpenStack based virtualized platform deployed on top of the ATLAS TDAQ HLT farm computing resources. During LS1, Sim@P1 was one of the most productive ATLAS sites: it delivered more than 33 million CPU-hours and it generated more than 1.1 billion Monte Carlo events. The design aspects are presented: the virtualization platform exploited by Sim@P1 avoids interferences with TDAQ operations and it guarantees the security and the usability of the ATLAS private network. The cloud mechanism allows the separation of the needed support on both infrastructural (hardware, virtualization layer) and logical (Grid site support) levels. This paper focuses on the operational aspects of such a large system during the upcoming LHC Run 2 period: simple, reliable, and efficient tools are needed to quickly switch from Sim@P1 to TDAQ mode and back, to exploit the resources when they are not used for the data acquisition, even for short periods. The evolution of the central OpenStack infrastructure is described, as it was upgraded from the Folsom to the Icehouse release, including the scalability issues addressed.
NASA Astrophysics Data System (ADS)
Kazarov, A.; Lehmann Miotto, G.; Magnoni, L.
2012-06-01
The Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment at CERN is the infrastructure responsible for collecting and transferring ATLAS experimental data from the detectors to the mass storage system. It relies on a large, distributed computing environment, including thousands of computing nodes with thousands of applications running concurrently. In such a complex environment, information analysis is fundamental for controlling application behavior, error reporting and operational monitoring. During data-taking runs, streams of messages sent by applications via the message reporting system, together with data published from applications via information services, are the main sources of knowledge about the correctness of running operations. The flow of data produced (at an average rate of O(1-10 kHz)) is constantly monitored by experts to detect problems or misbehavior. This requires strong competence and experience in understanding and discovering problems and root causes, and often the meaningful information is not in a single message or update, but in the aggregated behavior over a certain time-line. The AAL project aims to reduce the manpower needed and to assure a constantly high quality of problem detection by automating most of the monitoring tasks and providing real-time correlation of data-taking and system metrics. This project combines technologies coming from different disciplines; in particular it leverages an Event Driven Architecture to unify the flow of data from the ATLAS infrastructure, a Complex Event Processing (CEP) engine for correlation of events, and a message-oriented architecture for component integration. The project is composed of two main components: a core processing engine, responsible for correlation of events through expert-defined queries, and a web-based front-end to present real-time information and interact with the system.
All components work in a loosely coupled, event-based architecture, with a message broker centralizing all communication between modules. The result is an intelligent system able to extract and compute relevant information from the flow of operational data to provide real-time feedback to human experts, who can promptly react when needed. The paper presents the design and implementation of the AAL project, together with the results of its usage as an automated monitoring assistant for the ATLAS data-taking infrastructure.
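A CEP-style correlation rule of the kind such an engine evaluates can be sketched as a sliding-window check over the message stream. The rule, threshold, window, and event format below are invented for illustration; the AAL engine runs expert-defined queries on a real CEP engine:

```python
from collections import deque

def correlate(stream, window_s=10.0, threshold=3):
    """Toy sliding-window rule: alert when `threshold` ERROR messages
    from the same node arrive within `window_s` seconds. The meaningful
    signal is in the aggregated behaviour, not any single message."""
    recent = {}                           # node -> deque of error timestamps
    alerts = []
    for ts, node, level in stream:        # events in timestamp order
        if level != "ERROR":
            continue
        q = recent.setdefault(node, deque())
        q.append(ts)
        while q and ts - q[0] > window_s: # expire timestamps outside window
            q.popleft()
        if len(q) >= threshold:
            alerts.append((node, ts))
    return alerts

events = [(0, "pc-a", "ERROR"), (1, "pc-a", "ERROR"), (2, "pc-b", "INFO"),
          (3, "pc-a", "ERROR"), (20, "pc-a", "ERROR")]
print(correlate(events))  # [('pc-a', 3)]
```

In the real system such alerts would be published through the message broker to the web front-end rather than returned as a list.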
GOES-R ITAR Photos for Media Day
2016-09-26
The Geostationary Operational Environmental Satellite (GOES-R) is undergoing final launch preparations prior to fueling inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
GOES-R Uncrating and Move to Vertical
2016-08-23
The GOES-R spacecraft is inspected after being uncrated and raised to vertical inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA Geostationary Operational Environmental Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
2016-09-15
The Geostationary Operational Environmental Satellite (GOES-R) is lifted to the vertical position on an “up-ender” inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
2016-10-21
Team members with United Launch Alliance (ULA) prepare the Geostationary Operational Environmental Satellite (GOES-R) for encapsulation in the payload fairing inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a ULA Atlas V rocket in November.
GOES-R Uncrating and Move to Vertical
2016-08-23
Team members monitor progress as the GOES-R spacecraft is lifted from horizontal to vertical inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA Geostationary Operational Environmental Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
GOES-R Uncrating and Move to Vertical
2016-08-23
Team members monitor progress as the GOES-R spacecraft is raised to vertical inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA Geostationary Operational Environmental Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
2016-09-26
Team members with United Launch Alliance (ULA) inspect the first half of the fairing for the Geostationary Operational Environmental Satellite (GOES-R) inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a ULA Atlas V rocket in November.
2016-09-15
The Geostationary Operational Environmental Satellite (GOES-R) is raised to the vertical position on an “up-ender” inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
2016-08-23
Team members monitor progress as an overhead crane lowers the GOES-R spacecraft into its work stand inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA Geostationary Operational Environmental Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
2016-08-23
Team members monitor progress as an overhead crane lowers the GOES-R spacecraft toward its work stand inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA Geostationary Operational Environmental Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
2016-08-23
An overhead crane lifts the GOES-R spacecraft to move it into its work stand inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA Geostationary Operational Environmental Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
2016-08-23
An overhead crane is positioned to move the GOES-R spacecraft into its work stand inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA Geostationary Operational Environmental Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
2016-09-15
The Geostationary Operational Environmental Satellite (GOES-R) has been secured in the vertical position on an “up-ender” inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
2016-08-22
A truck with a specialized transporter drives away from an Air Force C-5 Galaxy transport plane at the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida to deliver the GOES-R spacecraft for launch processing. The GOES series are weather satellites operated by NOAA to enhance forecasts. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
2016-09-15
Team members are securing the Geostationary Operational Environmental Satellite (GOES-R) in the vertical position on an “up-ender” inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
Extreme I/O on HPC for HEP using the Burst Buffer at NERSC
NASA Astrophysics Data System (ADS)
Bhimji, Wahid; Bard, Debbie; Burleigh, Kaylan; Daley, Chris; Farrell, Steve; Fasel, Markus; Friesen, Brian; Gerhardt, Lisa; Liu, Jialin; Nugent, Peter; Paul, Dave; Porter, Jeff; Tsulaia, Vakho
2017-10-01
In recent years there has been increasing use of HPC facilities for HEP experiments. This has initially focused on less I/O-intensive workloads such as generator-level or detector simulation. We now demonstrate the efficient running of I/O-heavy analysis workloads on HPC facilities at NERSC, for the ATLAS and ALICE LHC collaborations as well as astronomical image analysis for DESI and BOSS. To do this we exploit a new 900 TB NVRAM-based storage system recently installed at NERSC, termed a Burst Buffer. This is a novel approach to HPC storage that builds on-demand filesystems on all-SSD hardware placed on the high-speed network of the new Cori supercomputer. We describe the hardware and software involved in this system, and give an overview of its capabilities, before focusing in detail on how the ATLAS, ALICE and astronomical workflows were adapted to work on this system. We describe these modifications and the resulting performance results, including comparisons to other filesystems. We demonstrate that we can meet the challenging I/O requirements of HEP experiments and scale to many thousands of cores accessing a single shared storage system.
Improved charge breeding efficiency of light ions with an electron cyclotron resonance ion source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondrasek, R.; Kutsaev, Sergey; Delahaye, P.
2012-11-15
The Californium Rare Isotope Breeder Upgrade is a new radioactive beam facility for the Argonne Tandem Linac Accelerator System (ATLAS). The facility utilizes a 252Cf fission source coupled with an electron cyclotron resonance ion source to provide radioactive beam species for the ATLAS experimental program. The californium fission fragment distribution provides nuclei in the mid-mass range which are difficult to extract from production targets using the isotope separation on line technique and are not well populated by low-energy fission of uranium. To date the charge breeding program has focused on optimizing these mid-mass beams, achieving high charge breeding efficiencies for both gaseous and solid species, including 14.7% for the radioactive species 143Ba27+. In an effort to better understand the charge breeding mechanism, we have recently focused on the low-mass species sodium and potassium, which up to now have been difficult to charge breed efficiently. Unprecedented charge breeding efficiencies of 10.1% for 23Na7+ and 17.9% for 39K10+ were obtained by injecting stable Na+ and K+ beams from a surface ionization source.
Development of an AMS method to study oceanic circulation characteristics using cosmogenic 39Ar
Collon, P.H.; Bichler, M.; Caggiano, J.; Cecil, L.D.; El Masri, Y.; Golser, R.; Jiang, C.L.; Heinz, A.; Henderson, D.; Kutschera, W.; Lehmann, B.E.; Leleux, P.; Loosli, H.H.; Pardo, R.C.; Paul, M.; Rehm, K.E.; Schlosser, P.; Scott, R.H.; Smethie, W.M.; Vondrasek, R.
2004-01-01
Initial experiments at the ATLAS facility [Nucl. Instr. and Meth. B 92 (1994) 241] resulted in a clear detection of the cosmogenic 39Ar signal at the natural level. The present paper summarizes the recent developments of 39Ar AMS measurements at ATLAS: the use of an electron cyclotron resonance (ECR) positive ion source equipped with a special quartz liner to reduce 39K background, the development of a gas handling system for small-volume argon samples, the acceleration of 39Ar8+ ions to 232 MeV, and the final separation of 39Ar from 39K in a gas-filled spectrograph. The first successful AMS measurements of 39Ar in ocean water samples from the South Atlantic Ventilation Experiment (SAVE) are reported. Published by Elsevier B.V.
2005-12-17
KENNEDY SPACE CENTER, FLA. - A Lockheed Martin Atlas V launch vehicle in the Vertical Integration Facility awaits the arrival of New Horizons at Complex 41 on Cape Canaveral Air Force Station. New Horizons carries seven scientific instruments that will characterize the global geology and geomorphology of Pluto and its moon Charon, map their surface compositions and temperatures, and examine Pluto's complex atmosphere. After that, flybys of Kuiper Belt objects from even farther in the solar system may be undertaken in an extended mission. New Horizons is the first mission in NASA's New Frontiers program of medium-class planetary missions. The spacecraft, designed for NASA by the Johns Hopkins University Applied Physics Laboratory in Laurel, Md., will launch aboard a Lockheed Martin Atlas V rocket and fly by Pluto and Charon as early as summer 2015.
2005-12-17
KENNEDY SPACE CENTER, FLA. - New Horizons arrives at the Vertical Integration Facility at Complex 41 on Cape Canaveral Air Force Station where buildup of its Lockheed Martin Atlas V launch vehicle is complete. New Horizons carries seven scientific instruments that will characterize the global geology and geomorphology of Pluto and its moon Charon, map their surface compositions and temperatures, and examine Pluto's complex atmosphere. After that, flybys of Kuiper Belt objects from even farther in the solar system may be undertaken in an extended mission. New Horizons is the first mission in NASA's New Frontiers program of medium-class planetary missions. The spacecraft, designed for NASA by the Johns Hopkins University Applied Physics Laboratory in Laurel, Md., will launch aboard a Lockheed Martin Atlas V rocket and fly by Pluto and Charon as early as summer 2015.
2005-12-17
KENNEDY SPACE CENTER, FLA. - The fairing enclosing New Horizons arrives at the top of a Lockheed Martin Atlas V launch vehicle in the Vertical Integration Facility at Complex 41 on Cape Canaveral Air Force Station. New Horizons carries seven scientific instruments that will characterize the global geology and geomorphology of Pluto and its moon Charon, map their surface compositions and temperatures, and examine Pluto's complex atmosphere. After that, flybys of Kuiper Belt objects from even farther in the solar system may be undertaken in an extended mission. New Horizons is the first mission in NASA's New Frontiers program of medium-class planetary missions. The spacecraft, designed for NASA by the Johns Hopkins University Applied Physics Laboratory in Laurel, Md., will launch aboard a Lockheed Martin Atlas V rocket and fly by Pluto and Charon as early as summer 2015.
NASA Astrophysics Data System (ADS)
Rynge, M.; Juve, G.; Kinney, J.; Good, J.; Berriman, B.; Merrihew, A.; Deelman, E.
2014-05-01
In this paper, we describe how to leverage cloud resources to generate large-scale mosaics of the galactic plane in multiple wavelengths. Our goal is to generate a 16-wavelength infrared Atlas of the Galactic Plane at a common spatial sampling of 1 arcsec, processed so that the images appear to have been measured with a single instrument. This will be achieved by using the Montage image mosaic engine to process observations from the 2MASS, GLIMPSE, MIPSGAL, MSX and WISE datasets, over a wavelength range of 1 μm to 24 μm, and by using the Pegasus Workflow Management System to manage the workload. When complete, the Atlas will be made available to the community as a data product. We are generating images that cover ±180° in Galactic longitude and ±20° in Galactic latitude, to the extent permitted by the spatial coverage of each dataset. Each image will be 5° x 5° in size (including an overlap of 1° with neighboring tiles), resulting in an atlas of 1,001 images with a final size of about 50 TB. This paper focuses on the computational challenges, solutions, and lessons learned in producing the Atlas. To manage the computation we use the Pegasus Workflow Management System, a mature, highly fault-tolerant system, now in release 4.2.2, that has found wide applicability across many science disciplines. A scientific workflow describes the dependencies between tasks; in most cases the workflow is expressed as a directed acyclic graph, where the nodes are tasks and the edges denote the task dependencies. A defining property of a scientific workflow is that it manages the data flow between tasks. Applied to the galactic plane project, each 5° x 5° mosaic is a Pegasus workflow. Pegasus is used to fetch the source images, execute the image mosaicking steps of Montage, and store the final outputs in a storage system. As these workflows are very I/O intensive, care has to be taken when choosing the infrastructure on which to execute them.
In our setup, we chose to use dynamically provisioned compute clusters running on the Amazon Elastic Compute Cloud (EC2). All our instances use the same base image, which is configured to come up as a master node by default. The master node is a central instance from which the workflow can be managed. Additional worker instances are provisioned and configured to accept work assignments from the master node. The system allows workers to be added or removed in an ad hoc fashion, and can be run in large configurations. To date we have performed 245,000 CPU hours of computing and generated 7,029 images totaling 30 TB. With the current setup, the runtime for the whole project would be 340,000 CPU hours. Using spot m2.4xlarge instances, the cost would be approximately $5,950. Using faster AWS instances, such as cc2.8xlarge, could potentially decrease the total CPU hours and further reduce the compute costs. The paper explores these tradeoffs.
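The quoted cost estimate follows from simple arithmetic; the per-hour spot price below is back-calculated from the figures in the text (an m2.4xlarge exposes 8 vCPUs), not taken from an AWS price list.

```python
# Back-of-envelope reproduction of the cost estimate in the text.
total_cpu_hours = 340_000          # projected CPU hours for the whole Atlas
vcpus_per_instance = 8             # m2.4xlarge (8 vCPUs)
instance_hours = total_cpu_hours / vcpus_per_instance

# Spot price implied by the quoted $5,950 total (an assumption for
# illustration, not an official AWS price):
spot_price_per_hour = 0.14
cost = instance_hours * spot_price_per_hour

print(f"{instance_hours:,.0f} instance-hours at ${spot_price_per_hour}/h -> ${cost:,.0f}")
```

The same arithmetic explains why larger instance types can help: a cc2.8xlarge packs more (and faster) cores per instance-hour, so fewer instance-hours are billed for the same CPU-hour budget.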
NASA Astrophysics Data System (ADS)
McIntosh, Chris; Purdie, Thomas G.
2017-01-01
Automating the radiotherapy treatment planning process is a technically challenging problem. The majority of automated approaches have focused on customizing and inferring dose volume objectives to be used in plan optimization. In this work we outline a multi-patient atlas-based dose prediction approach that learns to predict the dose-per-voxel for a novel patient directly from the computed tomography planning scan, without the requirement of specifying any objectives. Our method learns to automatically select the most effective atlases for a novel patient, and then maps the dose from those atlases onto the novel patient. We extend our previous work to include a conditional random field that optimizes a joint distribution prior, balancing the complementary goals of an accurately spatially distributed dose while still adhering to the desired dose volume histograms. The resulting distribution can then be used for inverse planning with a new spatial dose objective, or to create typical dose volume objectives for the canonical optimization pipeline. We investigated six treatment sites (633 patients for training and 113 patients for testing) and evaluated the mean absolute difference in all DVHs for the clinical and predicted dose distributions. The results on average are favorable in comparison to our previous approach (1.91 versus 2.57). Comparing our method with and without atlas selection further validates that atlas selection improved dose prediction on average in whole breast (0.64 versus 1.59), prostate (2.13 versus 4.07) and rectum (1.46 versus 3.29), while it is less important in breast cavity (0.79 versus 0.92) and lung (1.33 versus 1.27), for which there is high conformity and minimal dose shaping. In CNS brain, atlas selection has the potential to be impactful (3.65 versus 5.09), but selecting the ideal atlas is the most challenging.
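The atlas-selection idea can be pictured with a deliberately simplified sketch: pick the training patients whose CT-derived features are closest to the novel patient and average their dose distributions. The feature vectors, the Euclidean similarity metric and the plain average are all stand-ins; the paper learns the selection and refines the mapped dose with a conditional random field rather than averaging.

```python
import numpy as np

def predict_dose(novel_ct_features, atlases, k=3):
    """Toy atlas-based dose prediction: select the k atlases whose CT
    features are closest to the novel patient and average their dose
    distributions (illustrative; not the learned selection of the paper)."""
    feats = np.array([a["ct_features"] for a in atlases])
    dists = np.linalg.norm(feats - novel_ct_features, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k best atlases
    return np.mean([atlases[i]["dose"] for i in nearest], axis=0)
```

Even this crude scheme conveys why selection matters for sites with strong dose shaping: averaging over dissimilar atlases blurs exactly the spatial structure the plan needs.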
NASA Astrophysics Data System (ADS)
Avolio, G.; Corso Radu, A.; Kazarov, A.; Lehmann Miotto, G.; Magnoni, L.
2012-12-01
The Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment is a very complex distributed computing system, composed of more than 20,000 applications running on more than 2,000 computers. The TDAQ Controls system has to guarantee the smooth and synchronous operation of all the TDAQ components and has to provide the means to minimize the downtime of the system caused by runtime failures. During data-taking runs, streams of information messages sent or published by running applications are the main sources of knowledge about the correctness of running operations. The huge flow of operational monitoring data produced is constantly monitored by experts in order to detect problems or misbehaviours. Given the scale of the system and the rates of data to be analyzed, automating the system's functionality in the areas of operational monitoring, system verification, error detection and recovery is a strong requirement. To accomplish its objective, the Controls system includes high-level components based on advanced software technologies, namely a rule-based Expert System and Complex Event Processing engines. The chosen techniques make it possible to formalize, store and reuse the knowledge of experts, and thus to assist the shifters in the ATLAS control room during data-taking activities.
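The rule-based part of such a Controls system can be pictured as a tiny forward-matching rule set: facts about the system are tested against expert-written conditions, and matching rules propose recovery actions. The rules, fact names and thresholds below are invented for illustration and are not taken from the TDAQ Expert System.

```python
# Toy rule-based recovery engine in the spirit of an expert system:
# each rule pairs a condition on the current facts with an action.
# All rule contents here are invented examples.
RULES = [
    (lambda f: f.get("app_state") == "DEAD" and f.get("restart_count", 0) < 3,
     "restart_application"),
    (lambda f: f.get("app_state") == "DEAD" and f.get("restart_count", 0) >= 3,
     "notify_shifter"),
]

def recover(facts):
    """Return the recovery actions whose conditions match the facts."""
    return [action for condition, action in RULES if condition(facts)]
```

The value of formalizing rules this way is that the expert's knowledge survives shift changes: the same condition fires identically at 3 a.m. as in a review meeting.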
Atlas of neuroanatomy with radiologic correlation and pathologic illustration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dublin, A.B.; Dublin, W.B.
1982-01-01
This atlas correlates gross neuroanatomic specimens with radiographs and computed tomographic scans. Pathologic specimens and radiographs are displayed in a similar manner. The first chapter, on embryology, shows the development of the telencephalon, diencephalon, mesencephalon, and metencephalon through a series of overlays. The anatomical section shows the surface of the brain, the ventricles and their adjacent structures, and the vascular system. CT anatomy is demonstrated by correlating CT scans with pathologic brain specimens cut in the axial plane. Pathologic changes associated with congenital malformations, infections, injuries, tumors, and other causes are demonstrated in the last six chapters.
NASA Astrophysics Data System (ADS)
Kreyling, Daniel; Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus
2018-03-01
The Extrapolar SWIFT model is a fast ozone chemistry scheme for interactive calculation of the extrapolar stratospheric ozone layer in coupled general circulation models (GCMs). In contrast to the widely used prescribed ozone, the SWIFT ozone layer interacts with the model dynamics and can respond to atmospheric variability or climatological trends. The Extrapolar SWIFT model employs a repro-modelling approach, in which algebraic functions are used to approximate the numerical output of a full stratospheric chemistry and transport model (ATLAS). The full model solves a coupled chemical differential equation system with 55 initial and boundary conditions (mixing ratio of various chemical species and atmospheric parameters). Hence the rate of change of ozone over 24 h is a function of 55 variables. Using covariances between these variables, we can find linear combinations in order to reduce the parameter space to the following nine basic variables: latitude, pressure altitude, temperature, overhead ozone column and the mixing ratio of ozone and of the ozone-depleting families (Cly, Bry, NOy and HOy). We will show that these nine variables are sufficient to characterize the rate of change of ozone. An automated procedure fits a polynomial function of fourth degree to the rate of change of ozone obtained from several simulations with the ATLAS model. One polynomial function is determined per month, which yields the rate of change of ozone over 24 h. A key aspect for the robustness of the Extrapolar SWIFT model is to include a wide range of stratospheric variability in the numerical output of the ATLAS model, also covering atmospheric states that will occur in a future climate (e.g. temperature and meridional circulation changes or reduction of stratospheric chlorine loading). For validation purposes, the Extrapolar SWIFT model has been integrated into the ATLAS model, replacing the full stratospheric chemistry scheme.
Simulations with SWIFT in ATLAS have proven that the systematic error is small and does not accumulate during the course of a simulation. In the context of a 10-year simulation, the ozone layer simulated by SWIFT shows a stable annual cycle, with inter-annual variations comparable to the ATLAS model. The application of Extrapolar SWIFT requires the evaluation of polynomial functions with 30-100 terms. Computers can currently calculate such polynomial functions at thousands of model grid points in seconds. SWIFT provides the desired numerical efficiency and computes the ozone layer 10^4 times faster than the chemistry scheme in the ATLAS CTM.
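The repro-modelling idea (replacing an expensive model with a fitted polynomial) can be sketched in one dimension. SWIFT fits fourth-degree polynomials in nine variables to the ATLAS model's 24 h ozone tendencies; the toy below fits a single input variable against a stand-in function, since the real model output is not reproduced here.

```python
import numpy as np

# 1-D sketch of repro-modelling: approximate an "expensive" model with a
# fourth-degree polynomial fitted to sampled output. The sine function is
# a stand-in for the full chemistry model's 24 h ozone tendency.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)            # stand-in for one input variable
full_model = np.sin(2 * x)             # stand-in for the full model output

coeffs = np.polyfit(x, full_model, deg=4)   # the offline fitting step

def swift_like(xnew):
    """Cheap polynomial surrogate evaluated in place of the full model."""
    return np.polyval(coeffs, xnew)
```

Evaluating such a polynomial at thousands of grid points is a handful of multiply-adds per point, which is the source of the speedup over integrating the full chemical differential equation system.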
2011-07-14
CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, a forklift lifts the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission into the MMRTG trailer. The MMRTG is enclosed in a mesh container, known as the "gorilla cage," which protects it during transport and allows any excess heat generated to dissipate into the air. The MMRTG is being moved to the RTG storage facility following a fit check on MSL's Curiosity rover in the Payload Hazardous Servicing Facility (PHSF). The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. MSL's components include a compact car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 25 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Troy Cryder
2011-07-12
CAPE CANAVERAL, Fla. -- In the high bay of the RTG storage facility at NASA's Kennedy Space Center in Florida, the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission is enclosed in a protective mesh container, known as the "gorilla cage," for transport to the Payload Hazardous Servicing Facility (PHSF). The cage protects the MMRTG and allows any excess heat generated to dissipate into the air. In the PHSF, the MMRTG temporarily will be installed on the MSL rover, Curiosity, for a fit check but will be installed on the rover for launch at the pad. The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. Curiosity, MSL's car-sized rover, has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is planned for Nov. 25 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Cory Huston
2011-07-14
CAPE CANAVERAL, Fla. -- In the airlock of the Payload Hazardous Servicing Facility (PHSF) at NASA's Kennedy Space Center in Florida, the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission awaits transport to the RTG storage facility. The MMRTG is enclosed in a mesh container, known as the "gorilla cage," which protects it during transport and allows any excess heat generated to dissipate into the air. The MMRTG was in the PHSF for a fit check on MSL's Curiosity rover. The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. MSL's components include a compact car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 25 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Troy Cryder
2011-07-14
CAPE CANAVERAL, Fla. -- A forklift transfers the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission from the airlock of the Payload Hazardous Servicing Facility (PHSF) at NASA's Kennedy Space Center in Florida to the MMRTG trailer. The MMRTG is enclosed in a mesh container, known as the "gorilla cage," which protects it during transport and allows any excess heat generated to dissipate into the air. The MMRTG is being moved to the RTG storage facility following a fit check on MSL's Curiosity rover in the PHSF. The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. MSL's components include a compact car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 25 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Troy Cryder
2011-07-14
CAPE CANAVERAL, Fla. -- A forklift moves the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission into the high bay of the RTG storage facility (RTGF) at NASA's Kennedy Space Center in Florida. The MMRTG is enclosed in a mesh container, known as the "gorilla cage," which protects it during transport and allows any excess heat generated to dissipate into the air. The MMRTG is returning to the RTGF following a fit check on MSL's Curiosity rover in the Payload Hazardous Servicing Facility (PHSF). The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. MSL's components include a compact car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 25 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Troy Cryder
2011-07-14
CAPE CANAVERAL, Fla. -- A forklift moves the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission from the MMRTG trailer to the RTG storage facility (RTGF) at NASA's Kennedy Space Center in Florida. The MMRTG is enclosed in a mesh container, known as the "gorilla cage," which protects it during transport and allows any excess heat generated to dissipate into the air. The MMRTG is returning to the RTGF following a fit check on MSL's Curiosity rover in the Payload Hazardous Servicing Facility (PHSF). The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. MSL's components include a compact car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 25 from Space Launch Complex 41 on Cape Canaveral Air Force Station. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Troy Cryder
2011-07-14
CAPE CANAVERAL, Fla. -- A forklift carrying the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission backs away from the airlock of the Payload Hazardous Servicing Facility (PHSF) at NASA's Kennedy Space Center in Florida. The MMRTG is being moved to the RTG storage facility following a fit check on MSL's Curiosity rover in the PHSF. Photo credit: NASA/Troy Cryder
2011-07-14
CAPE CANAVERAL, Fla. -- Department of Energy workers park the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission in the high bay of the RTG storage facility (RTGF) at NASA's Kennedy Space Center in Florida. Photo credit: NASA/Troy Cryder
2011-07-14
CAPE CANAVERAL, Fla. -- In the high bay of the RTG storage facility (RTGF) at NASA's Kennedy Space Center in Florida, the mesh container enclosing the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission is lifted from around the MMRTG. The cage is being removed following the return of the MMRTG to the RTGF from a fit check on MSL's Curiosity rover in the Payload Hazardous Servicing Facility (PHSF). Photo credit: NASA/Troy Cryder
2011-07-14
CAPE CANAVERAL, Fla. -- A forklift approaches the airlock of the Payload Hazardous Servicing Facility (PHSF) at NASA's Kennedy Space Center in Florida, where the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission awaits transport to the RTG storage facility. The MMRTG was in the PHSF for a fit check on MSL's Curiosity rover. Photo credit: NASA/Troy Cryder
2011-07-14
CAPE CANAVERAL, Fla. -- A forklift moves into position to lift the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission from the floor of the Payload Hazardous Servicing Facility (PHSF) airlock at NASA's Kennedy Space Center in Florida. The MMRTG is being transported to the RTG storage facility following a fit check on MSL's Curiosity rover in the PHSF. Photo credit: NASA/Troy Cryder
2011-07-14
CAPE CANAVERAL, Fla. -- The multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission is lifted from the MMRTG trailer at the RTG storage facility (RTGF) at NASA's Kennedy Space Center in Florida. Photo credit: NASA/Troy Cryder
2011-07-13
CAPE CANAVERAL, Fla. -- In the airlock of the Payload Hazardous Servicing Facility at NASA's Kennedy Space Center in Florida, Department of Energy employees prepare the support base of the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission for installation of the mesh container, known as the "gorilla cage." The cage, in the background at right, protects the MMRTG during transport and allows any excess heat generated to dissipate into the air. Transport of the MMRTG to the RTG storage facility follows the completion of the MMRTG fit check on the Curiosity rover. Photo credit: NASA/Kim Shiflett
2011-07-14
CAPE CANAVERAL, Fla. -- A forklift moves into position to lift the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission from the floor of the Payload Hazardous Servicing Facility (PHSF) airlock at NASA's Kennedy Space Center in Florida. Photo credit: NASA/Troy Cryder
2011-07-14
CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, a forklift lifts the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission into the MMRTG trailer. The MMRTG is being moved to the RTG storage facility following a fit check on MSL's Curiosity rover in the Payload Hazardous Servicing Facility (PHSF). Photo credit: NASA/Troy Cryder
2011-07-12
CAPE CANAVERAL, Fla. -- Outside the RTG storage facility at NASA's Kennedy Space Center in Florida, the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission, enclosed in the protective mesh container known as the "gorilla cage," is strapped down inside the MMRTG trailer for transport to the Payload Hazardous Servicing Facility (PHSF). In the PHSF, the MMRTG will be installed temporarily on the MSL rover, Curiosity, for a fit check; the flight installation will take place at the launch pad. Photo credit: NASA/Cory Huston
2011-07-14
CAPE CANAVERAL, Fla. -- At NASA's Kennedy Space Center in Florida, preparations are under way to secure the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission in the MMRTG trailer. Photo credit: NASA/Troy Cryder
2011-07-12
CAPE CANAVERAL, Fla. -- Workers dressed in clean room attire, known as bunny suits, transfer the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission on its holding base through the airlock doors of the Payload Hazardous Servicing Facility (PHSF) into the facility's high bay. There, the MMRTG will be installed temporarily on the MSL rover, Curiosity (in the background, at right), for a fit check using the MMRTG integration cart (in the background, at left); the flight installation will take place at the launch pad. Photo credit: NASA/Cory Huston