Sample records for open computing facility

  1. The OSG Open Facility: an on-ramp for opportunistic scientific computing

    NASA Astrophysics Data System (ADS)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.

    2017-10-01

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.
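
    Jobs typically reach these opportunistically available resources through HTCondor-style submission. The sketch below is a minimal, generic HTCondor submit description of the kind used for OSG-style DHTC workloads; the executable, file names, resource requests, and project name are hypothetical placeholders, not taken from the paper.

    ```
    # Minimal, hypothetical HTCondor submit description for an
    # opportunistic high-throughput workload (all names are placeholders).
    executable      = run_analysis.sh
    arguments       = $(Process)
    output          = job.$(Cluster).$(Process).out
    error           = job.$(Cluster).$(Process).err
    log             = job.$(Cluster).log

    # Modest resource requests improve the chance of matching opportunistic slots.
    request_cpus    = 1
    request_memory  = 2 GB
    request_disk    = 4 GB

    # Accounting label used by OSG to attribute opportunistic usage.
    +ProjectName    = "MyOSGProject"

    queue 100
    ```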

  2. The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  3. Sustaining and Extending the Open Science Grid: Science Innovation on a PetaScale Nationwide Facility (DE-FC02-06ER41436) SciDAC-2 Closeout Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron; Shank, James; Ernst, Michael

    Under this SciDAC-2 grant the project’s goal was to stimulate new discoveries by providing scientists with effective and dependable access to an unprecedented national distributed computational facility: the Open Science Grid (OSG). We proposed to achieve this through the work of the Open Science Grid Consortium: a unique hands-on multi-disciplinary collaboration of scientists, software developers and providers of computing resources. Together the stakeholders in this consortium sustain and use a shared distributed computing environment that transforms simulation and experimental science in the US. The OSG consortium is an open collaboration that actively engages new research communities. We operate an open facility that brings together a broad spectrum of compute, storage, and networking resources and interfaces to other cyberinfrastructures, including the US XSEDE (previously TeraGrid), the European Grids for ESciencE (EGEE), as well as campus and regional grids. We leverage middleware provided by computer science groups, facility IT support organizations, and computing programs of application communities for the benefit of consortium members and the US national CI.

  4. The OSG open facility: A sharing ecosystem

    DOE PAGES

    Jayatilaka, B.; Levshina, T.; Rynge, M.; ...

    2015-12-23

    The Open Science Grid (OSG) ties together individual experiments’ computing power, connecting their resources to create a large, robust computing grid. This computing infrastructure started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero. In the years since, the OSG has broadened its focus to also address the needs of other US researchers and increased delivery of Distributed High-Throughput Computing (DHTC) to users from a wide variety of disciplines via the OSG Open Facility. Presently, the Open Facility delivers about 100 million computing wall hours per year to researchers who are not already associated with the owners of the computing sites. This is primarily accomplished by harvesting and organizing the temporarily unused capacity (i.e., opportunistic cycles) from the sites in the OSG. Using these methods, OSG resource providers and scientists share computing hours with researchers in many other fields to enable their science, striving to ensure that this computing power is used with maximal efficiency. Furthermore, we believe that expanded access to DHTC is an essential tool for scientific innovation, and work continues on expanding this service.

  5. MIP models for connected facility location: A theoretical and computational study

    PubMed Central

    Gollowitzer, Stefan; Ljubić, Ivana

    2011-01-01

    This article comprises the first theoretical and computational study on mixed integer programming (MIP) models for the connected facility location problem (ConFL). ConFL combines facility location and Steiner trees: given a set of customers, a set of potential facility locations and some inter-connection nodes, ConFL searches for the minimum-cost way of assigning each customer to exactly one open facility, and connecting the open facilities via a Steiner tree. The costs needed for building the Steiner tree, facility opening costs and the assignment costs need to be minimized. We model ConFL using seven compact and three mixed integer programming formulations of exponential size. We also show how to transform ConFL into the Steiner arborescence problem. A full hierarchy between the models is provided. For two exponential size models we develop a branch-and-cut algorithm. An extensive computational study is based on two benchmark sets of randomly generated instances with up to 1300 nodes and 115,000 edges. We empirically compare the presented models with respect to the quality of obtained bounds and the corresponding running time. We report optimal values for all but 16 instances for which the obtained gaps are below 0.6%. PMID:25009366
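
    For orientation, a schematic statement of the ConFL objective described above can be written as follows. The notation is generic and not necessarily the authors' exact formulation: binary variables z_i for opening facility i, x_ij for assigning customer j to facility i, and y_e for including edge e in the Steiner tree.

    ```latex
    % Schematic ConFL objective and assignment constraints (generic notation,
    % not the authors' exact formulation). Connectivity of the open facilities
    % via a Steiner tree is enforced by additional constraints, e.g. cut sets.
    \min \sum_{i \in F} f_i z_i + \sum_{i \in F}\sum_{j \in C} a_{ij} x_{ij} + \sum_{e \in E} c_e y_e
    \quad \text{s.t.} \quad
    \sum_{i \in F} x_{ij} = 1 \;\; \forall j \in C, \qquad
    x_{ij} \le z_i \;\; \forall i \in F,\, j \in C, \qquad
    x_{ij}, z_i, y_e \in \{0,1\},
    ```

    where f_i are facility opening costs, a_ij assignment costs, and c_e Steiner tree edge costs, matching the three cost terms minimized in the abstract.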

  6. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
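
    The pattern of booting a contextualized worker VM on such a cloud can be illustrated with the OpenStack SDK for Python. This is only a rough sketch under assumed names (the cloud entry, image, flavor, and network names are hypothetical); it is not the custom Torque/Puppet tooling described in the paper.

    ```python
    # Rough sketch: boot a worker VM on an OpenStack cloud using openstacksdk.
    # All names below (cloud entry, image, flavor, network) are hypothetical.
    import openstack

    conn = openstack.connect(cloud="research-cloud")  # entry in clouds.yaml

    image = conn.compute.find_image("sl6-worker-base")
    flavor = conn.compute.find_flavor("m1.large")
    network = conn.network.find_network("tier2-net")

    # In practice a cloud-init/user-data script would trigger Puppet to
    # configure the node and join it to the dynamic Torque cluster; that
    # site-specific contextualization step is omitted here.
    server = conn.compute.create_server(
        name="worker-001",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    conn.compute.wait_for_server(server)
    print("Worker ready:", server.name)
    ```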

  7. Scientific Computing Strategic Plan for the Idaho National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiting, Eric Todd

    Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory’s (INL’s) challenge and charge, and is central to INL’s ongoing success. Computing is an essential part of INL’s future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase-like data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.

  8. INTERIOR; DETAIL OF ANTENNA TRUNK OPENING, LOOKING EAST. Naval ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    INTERIOR; DETAIL OF ANTENNA TRUNK OPENING, LOOKING EAST. - Naval Computer & Telecommunications Area Master Station, Eastern Pacific, Radio Transmitter Facility Lualualei, Helix House No. 2, Base of Radio Antenna Structure No. 427, Makaha, Honolulu County, HI

  9. Aeroacoustic Simulation of a Nose Landing Gear in an Open Jet Facility Using FUN3D

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Lockard, David P.; Khorrami, Mehdi R.; Carlson, Jan-Renee

    2012-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley's closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as UFAFF. The unstructured-grid flow solver, FUN3D, developed at NASA Langley Research Center, is used to compute the unsteady flow field for this configuration. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions compare favorably with the measured data. Unsteady flowfield data obtained from the FUN3D code are used as input to a Ffowcs Williams-Hawkings noise propagation code to compute the sound pressure levels at microphones placed in the farfield. Significant improvement in predicted noise levels is obtained when the flowfield data from the open jet UFAFF simulations is used as compared to the case using flowfield data from the closed-wall BART configuration.

  10. INTERIOR; VIEW OF ANTENNA TRUNK OPENING AND ENTRY DOOR, LOOKING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    INTERIOR; VIEW OF ANTENNA TRUNK OPENING AND ENTRY DOOR, LOOKING EAST SOUTHEAST. - Naval Computer & Telecommunications Area Master Station, Eastern Pacific, Radio Transmitter Facility Lualualei, Helix House No. 2, Base of Radio Antenna Structure No. 427, Makaha, Honolulu County, HI

  11. A comparison of parent satisfaction in an open-bay and single-family room neonatal intensive care unit.

    PubMed

    Stevens, Dennis C; Helseth, Carol C; Khan, M Akram; Munson, David P; Reid, E J

    2011-01-01

    The purpose of this research was to test the hypothesis that parental satisfaction with neonatal intensive care is greater in a single-family room facility as compared with a conventional open-bay neonatal intensive care unit (NICU). This investigation was a prospective cohort study comparing satisfaction survey results for parents who responded to a commercially available parent NICU satisfaction survey following the provision of NICU care in open-bay and single-family room facilities. A subset of 16 items indicative of family-centered care was also computed and compared for these two NICU facilities. Parents whose babies received care in the single-family room facility gave significantly more favorable survey responses regarding the NICU environment, overall assessment of care, and total survey score than did parents of neonates in the open-bay facility. With the exception of the section on nursing, in which scores in both facilities were high, nonsignificant improvements in median scores were noted for the sections on delivery, physicians, discharge planning, and personal issues. The total median item score for family-centered care was significantly greater in the single-family room than in the open-bay facility. Parental satisfaction with care in the single-family room NICU was improved in comparison with the traditional open-bay NICU. The single-family room environment appears more conducive to the provision of family-centered care. Improved parental satisfaction with care and the potential for enhanced family-centered care need to be considered in decisions made regarding the configuration of NICU facilities in the future.

  12. Managing a tier-2 computer centre with a private cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-06-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.

  13. Teachers Training through Distance Mode in Allama Iqbal Open University (AIOU) Pakistan: A Case Study

    ERIC Educational Resources Information Center

    Jumani, Nabi Bux; Rahman, Fazalur; Chishti, Saeedul Hasan; Malik, Samina

    2011-01-01

    Allama Iqbal Open University (AIOU) is the first Open University in Asia and was established in 1974 on the model of the UKOU. AIOU uses different media for the delivery of instruction. It has a well-established Institute of Educational Technology which has radio and TV production facilities and advanced-level work in computer technology. AIOU offers…

  14. Closed Loop Experiment Manager (CLEM)-An Open and Inexpensive Solution for Multichannel Electrophysiological Recordings and Closed Loop Experiments.

    PubMed

    Hazan, Hananel; Ziv, Noam E

    2017-01-01

    There is growing need for multichannel electrophysiological systems that record from and interact with neuronal systems in near real-time. Such systems are needed, for example, for closed loop, multichannel electrophysiological/optogenetic experimentation in vivo and in a variety of other neuronal preparations, or for developing and testing neuro-prosthetic devices, to name a few. Furthermore, there is a need for such systems to be inexpensive, reliable, user-friendly, easy to set up, open and expandable, and to possess long life cycles in the face of rapidly changing computing environments. Finally, they should provide powerful, yet reasonably easy-to-implement facilities for developing closed-loop protocols for interacting with neuronal systems. Here, we survey commercial and open source systems that address these needs to varying degrees. We then present our own solution, which we refer to as Closed Loop Experiments Manager (CLEM). CLEM is an open source, soft real-time, Microsoft Windows desktop application that is based on a single generic personal computer (PC) and an inexpensive, general-purpose data acquisition board. CLEM provides a fully functional, user-friendly graphical interface, possesses facilities for recording, presenting and logging electrophysiological data from up to 64 analog channels, and facilities for controlling external devices, such as stimulators, through digital and analog interfaces. Importantly, it includes facilities for running closed-loop protocols written in any programming language that can generate dynamic link libraries (DLLs). We describe the application, its architecture and facilities. We then demonstrate, using networks of cortical neurons growing on multielectrode arrays (MEAs), that despite its reliance on generic hardware, its performance is appropriate for flexible, closed-loop experimentation at the neuronal network level.

  15. Closed Loop Experiment Manager (CLEM)—An Open and Inexpensive Solution for Multichannel Electrophysiological Recordings and Closed Loop Experiments

    PubMed Central

    Hazan, Hananel; Ziv, Noam E.

    2017-01-01

    There is growing need for multichannel electrophysiological systems that record from and interact with neuronal systems in near real-time. Such systems are needed, for example, for closed loop, multichannel electrophysiological/optogenetic experimentation in vivo and in a variety of other neuronal preparations, or for developing and testing neuro-prosthetic devices, to name a few. Furthermore, there is a need for such systems to be inexpensive, reliable, user-friendly, easy to set up, open and expandable, and to possess long life cycles in the face of rapidly changing computing environments. Finally, they should provide powerful, yet reasonably easy-to-implement facilities for developing closed-loop protocols for interacting with neuronal systems. Here, we survey commercial and open source systems that address these needs to varying degrees. We then present our own solution, which we refer to as Closed Loop Experiments Manager (CLEM). CLEM is an open source, soft real-time, Microsoft Windows desktop application that is based on a single generic personal computer (PC) and an inexpensive, general-purpose data acquisition board. CLEM provides a fully functional, user-friendly graphical interface, possesses facilities for recording, presenting and logging electrophysiological data from up to 64 analog channels, and facilities for controlling external devices, such as stimulators, through digital and analog interfaces. Importantly, it includes facilities for running closed-loop protocols written in any programming language that can generate dynamic link libraries (DLLs). We describe the application, its architecture and facilities. We then demonstrate, using networks of cortical neurons growing on multielectrode arrays (MEAs), that despite its reliance on generic hardware, its performance is appropriate for flexible, closed-loop experimentation at the neuronal network level. PMID:29093659
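
    To illustrate the DLL-based closed-loop mechanism in general terms, the sketch below loads a user-supplied protocol library and calls an exported function on each incoming data buffer. The library name and exported function signature are hypothetical assumptions for illustration; CLEM's actual plug-in interface is not reproduced here.

    ```python
    # Hypothetical illustration of a DLL-based closed-loop protocol hook:
    # a host process loads a user-built library and calls it once per buffer.
    # The library name and exported signature are assumptions, not CLEM's API.
    import ctypes
    import numpy as np

    lib = ctypes.CDLL("./my_protocol.dll")  # user-supplied closed-loop protocol

    # Assumed export: int on_buffer(const double* samples, int n_samples,
    #                               int n_channels, int* stim_command_out)
    lib.on_buffer.argtypes = [
        ctypes.POINTER(ctypes.c_double),
        ctypes.c_int,
        ctypes.c_int,
        ctypes.POINTER(ctypes.c_int),
    ]
    lib.on_buffer.restype = ctypes.c_int

    def process_buffer(samples: np.ndarray) -> int:
        """Pass one acquisition buffer (n_samples x n_channels) to the DLL
        and return the stimulation command it requests (0 = none)."""
        flat = np.ascontiguousarray(samples, dtype=np.float64)
        stim = ctypes.c_int(0)
        lib.on_buffer(
            flat.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
            flat.shape[0],
            flat.shape[1],
            ctypes.byref(stim),
        )
        return stim.value

    # Example usage with a dummy 1000-sample, 64-channel buffer:
    # command = process_buffer(np.zeros((1000, 64)))
    ```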

  16. On the Integration of Remote Experimentation into Undergraduate Laboratories--Pedagogical Approach

    ERIC Educational Resources Information Center

    Esche, Sven K.

    2005-01-01

    This paper presents an Internet-based open approach to laboratory instruction. In this article, the author talks about an open laboratory approach using a multi-user multi-device remote facility. This approach involves both the direct contact with the computer-controlled laboratory setup of interest with the students present in the laboratory…

  17. A Benders based rolling horizon algorithm for a dynamic facility location problem

    DOE PAGES

    Marufuzzaman, Mohammad; Gedik, Ridvan; Roni, Mohammad S.

    2016-06-28

    This study presents a well-known capacitated dynamic facility location problem (DFLP) that satisfies the customer demand at a minimum cost by determining the time period for opening, closing, or retaining an existing facility in a given location. To solve this challenging NP-hard problem, this paper develops a unique hybrid solution algorithm that combines a rolling horizon algorithm with an accelerated Benders decomposition algorithm. Extensive computational experiments are performed on benchmark test instances to evaluate the hybrid algorithm’s efficiency and robustness in solving the DFLP problem. Computational results indicate that the hybrid Benders-based rolling horizon algorithm consistently offers high-quality feasible solutions in a much shorter computational time period than the standalone rolling horizon and accelerated Benders decomposition algorithms in the experimental range.
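
    The rolling horizon idea can be sketched independently of the solver details: the planning horizon is split into overlapping windows, each window's subproblem is solved (here by a stub standing in for the accelerated Benders decomposition), and the decisions of the earliest periods are fixed before rolling forward. This is a generic illustration, not the authors' implementation.

    ```python
    # Generic rolling-horizon skeleton for a dynamic facility location problem.
    # `solve_window` is a stub standing in for the (accelerated Benders) solve
    # of one window's subproblem; everything here is illustrative only.
    from typing import Dict, List, Tuple


    def solve_window(periods: List[int],
                     fixed: Dict[Tuple[int, int], int]) -> Dict[Tuple[int, int], int]:
        """Return open/close decisions {(facility, period): 0/1} for the given
        periods, respecting decisions already fixed in earlier windows."""
        # Placeholder: a real implementation would build and solve a MIP here.
        return {(f, t): fixed.get((f, t), 0) for f in range(3) for t in periods}


    def rolling_horizon(n_periods: int, window: int, step: int) -> Dict[Tuple[int, int], int]:
        fixed: Dict[Tuple[int, int], int] = {}
        start = 0
        while start < n_periods:
            periods = list(range(start, min(start + window, n_periods)))
            decisions = solve_window(periods, fixed)
            # Freeze only the first `step` periods of this window; later periods
            # are re-optimized when the horizon rolls forward.
            for (f, t), v in decisions.items():
                if t < start + step:
                    fixed[(f, t)] = v
            start += step
        return fixed


    if __name__ == "__main__":
        plan = rolling_horizon(n_periods=12, window=4, step=2)
        print(f"fixed {len(plan)} facility-period decisions")
    ```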

  18. The open science grid

    NASA Astrophysics Data System (ADS)

    Pordes, Ruth; OSG Consortium; Petravick, Don; Kramer, Bill; Olson, Doug; Livny, Miron; Roy, Alain; Avery, Paul; Blackburn, Kent; Wenaus, Torre; Würthwein, Frank; Foster, Ian; Gardner, Rob; Wilde, Mike; Blatecky, Alan; McGee, John; Quick, Rob

    2007-07-01

    The Open Science Grid (OSG) provides a distributed facility where the Consortium members provide guaranteed and opportunistic access to shared computing and storage resources. OSG provides support for and evolution of the infrastructure through activities that cover operations, security, software, troubleshooting, addition of new capabilities, and support for existing and engagement with new communities. The OSG SciDAC-2 project provides specific activities to manage and evolve the distributed infrastructure and support its use. The innovative aspects of the project are the maintenance and performance of a collaborative (shared & common) petascale national facility over tens of autonomous computing sites, for many hundreds of users, transferring terabytes of data a day, executing tens of thousands of jobs a day, and providing robust and usable resources for scientific groups of all types and sizes. More information can be found at the OSG web site: www.opensciencegrid.org.

  19. Lightweight scheduling of elastic analysis containers in a competitive cloud environment: a Docked Analysis Facility for ALICE

    NASA Astrophysics Data System (ADS)

    Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.

    2015-12-01

    During the last few years, several Grid computing centres chose virtualization as a better way to manage diverse use cases with self-consistent environments on the same bare infrastructure. The maturity of control interfaces (such as OpenNebula and OpenStack) opened the possibility to easily change the amount of resources assigned to each use case by simply turning on and off virtual machines. Some of those private clouds use, in production, copies of the Virtual Analysis Facility, a fully virtualized and self-contained batch analysis cluster capable of expanding and shrinking automatically upon need: however, resource starvation occurs frequently as expansion has to compete with other virtual machines running long-living batch jobs. Such batch nodes cannot relinquish their resources in a timely fashion: the more jobs they run, the longer it takes to drain them and shut off, and making one-job virtual machines introduces a non-negligible virtualization overhead. By improving several components of the Virtual Analysis Facility we have realized an experimental “Docked” Analysis Facility for ALICE, which leverages containers instead of virtual machines for providing performance and security isolation. We will present the techniques we have used to address practical problems, such as software provisioning through CVMFS, as well as our considerations on the maturity of containers for High Performance Computing. As the abstraction layer is thinner, our Docked Analysis Facilities may feature a more fine-grained sizing, down to single-job node containers: we will show how this approach will positively impact automatic cluster resizing by deploying lightweight pilot containers instead of replacing central queue polls.

  20. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.

    2014-06-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It allows resources to be dynamically and efficiently allocated to any application and the virtual machines to be tailored to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily while minimizing downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.

  1. Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozacik, Stephen

    Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.

  2. OpenID connect as a security service in Cloud-based diagnostic imaging systems

    NASA Astrophysics Data System (ADS)

    Ma, Weina; Sartipi, Kamran; Sharghi, Hassan; Koff, David; Bak, Peter

    2015-03-01

    The evolution of cloud computing is driving the next generation of diagnostic imaging (DI) systems. Cloud-based DI systems are able to deliver better services to patients without being constrained to their own physical facilities. However, privacy and security concerns have been consistently regarded as the major obstacle to the adoption of cloud computing by healthcare domains. Furthermore, traditional computing models and interfaces employed by DI systems are not ready for accessing diagnostic images through mobile devices. RESTful is an ideal technology for provisioning both mobile services and cloud computing. OpenID Connect, combining OpenID and OAuth together, is an emerging REST-based federated identity solution. It is one of the most promising open standards to potentially become the de facto standard for securing cloud computing and mobile applications, and has even been called the "Kerberos of the Cloud". We introduce OpenID Connect as an identity and authentication service in cloud-based DI systems and propose enhancements that allow this technology to be incorporated within a distributed enterprise environment. The objective of this study is to offer solutions for secure radiology image sharing among DI-r (Diagnostic Imaging Repository) and heterogeneous PACS (Picture Archiving and Communication Systems) as well as mobile clients in the cloud ecosystem. By using OpenID Connect as an open-source identity and authentication service, deploying DI-r and PACS to private or community clouds should achieve a security level equivalent to that of the traditional computing model.
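
    The core OpenID Connect interaction relied on here is the standard authorization-code flow: the client exchanges an authorization code at the provider's token endpoint for ID and access tokens. The sketch below shows that exchange with the requests library; the issuer URL, client credentials, code, and redirect URI are hypothetical placeholders.

    ```python
    # Sketch of the standard OpenID Connect authorization-code token exchange.
    # Issuer URL, client credentials, code and redirect URI are placeholders.
    import requests

    ISSUER = "https://idp.example.org"            # hypothetical OpenID provider
    CLIENT_ID = "di-r-mobile-client"
    CLIENT_SECRET = "change-me"
    REDIRECT_URI = "https://di-r.example.org/callback"

    # 1. Discover the provider's endpoints (standard OIDC discovery document).
    config = requests.get(f"{ISSUER}/.well-known/openid-configuration").json()

    # 2. Exchange the authorization code (obtained after user login/consent)
    #    for an ID token and an access token at the token endpoint.
    resp = requests.post(
        config["token_endpoint"],
        data={
            "grant_type": "authorization_code",
            "code": "AUTH_CODE_FROM_REDIRECT",
            "redirect_uri": REDIRECT_URI,
        },
        auth=(CLIENT_ID, CLIENT_SECRET),
    )
    resp.raise_for_status()
    tokens = resp.json()

    # 3. Present the access token to the DI-r/PACS service when requesting images.
    headers = {"Authorization": f"Bearer {tokens['access_token']}"}
    print("ID token received:", "id_token" in tokens)
    ```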

  3. Data management and its role in delivering science at DOE BES user facilities - Past, Present, and Future

    NASA Astrophysics Data System (ADS)

    Miller, Stephen D.; Herwig, Kenneth W.; Ren, Shelly; Vazhkudai, Sudharshan S.; Jemian, Pete R.; Luitz, Steffen; Salnikov, Andrei A.; Gaponenko, Igor; Proffen, Thomas; Lewis, Paul; Green, Mark L.

    2009-07-01

    The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research [1]. We trace back almost 30 years of history across selected user facilities illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility produced data. Trends indicate that this will continue to be the case for yet some time. Thus users face a quandary for how to manage today's data complexity and size as these may exceed the computing resources users have available to themselves. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing thereby providing users access to resources they need [2]. Portal based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next tier cross-instrument-cross facility scientific research fuelled by smart applications residing upon user computer resources. We can learn from the medical imaging community that has been working since the early 1990's to integrate data from across multiple modalities to achieve better diagnoses [3] - similarly, data fusion across BES facilities will lead to new scientific discoveries.

  4. Mira: Argonne's 10-petaflops supercomputer

    ScienceCinema

    Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul

    2018-02-13

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.

  5. Mira: Argonne's 10-petaflops supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papka, Michael; Coghlan, Susan; Isaacs, Eric

    2013-07-03

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.

  6. WINDS: A Web-Based Intelligent Interactive Course on Data-Structures

    ERIC Educational Resources Information Center

    Sirohi, Vijayalaxmi

    2007-01-01

    The Internet has opened new ways of learning and has brought several advantages to computer-aided education. Global access, self-paced learning, asynchronous teaching, interactivity, and multimedia usage are some of these. Along with the advantages comes the challenge of designing the software using the available facilities. Integrating online…

  7. DEVELOPING A CAPE-OPEN COMPLIANT METAL FINISHING FACILITY POLLUTION PREVENTION TOOL (CO-MFFP2T)

    EPA Science Inventory

    The USEPA is developing a Computer Aided Process Engineering (CAPE) software tool for the metal finishing industry that helps users design efficient metal finishing processes that are less polluting to the environment. Metal finishing process lines can be simulated and evaluated...

  8. Automating NEURON Simulation Deployment in Cloud Resources.

    PubMed

    Stockton, David B; Santamaria, Fidel

    2017-01-01

    Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Compute Cloud, based on Amazon's proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model.

  9. Automating NEURON Simulation Deployment in Cloud Resources

    PubMed Central

    Santamaria, Fidel

    2016-01-01

    Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Compute Cloud, based on Amazon’s proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model. PMID:27655341
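
    As a generic illustration of provisioning a cloud instance for such a simulation, the sketch below launches a single EC2 virtual machine with boto3 and passes a start-up script. The AMI ID, instance type, key pair, and start-up commands are hypothetical placeholders, and this is not the NeuroManager tooling itself.

    ```python
    # Generic sketch: launch one EC2 instance to run a NEURON simulation job.
    # AMI ID, instance type, key pair and the start-up script are placeholders;
    # this is not the NeuroManager implementation described in the paper.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    user_data = """#!/bin/bash
    # Install NEURON and fetch/run the model (details are site-specific).
    pip install neuron
    python run_model.py
    """

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical base image
        InstanceType="c5.xlarge",
        KeyName="my-keypair",
        MinCount=1,
        MaxCount=1,
        UserData=user_data,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Launched simulation instance:", instance_id)
    ```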

  10. Flow analysis of airborne particles in a hospital operating room

    NASA Astrophysics Data System (ADS)

    Faeghi, Shiva; Lennerts, Kunibert

    2016-06-01

    Preventing airborne infections during surgery has always been an important issue in delivering effective and high-quality medical care to the patient. One of the important sources of infection is particles that are distributed through airborne routes. Factors influencing infection rates caused by airborne particles, among others, are efficient ventilation and the arrangement of surgical facilities inside the operating room. The paper studies the ventilation airflow pattern in an operating room in a hospital located in Tehran, Iran, and seeks to find the efficient configurations with respect to the ventilation system and layout of facilities. This study uses computational fluid dynamics (CFD) and investigates the effects of different inflow velocities for inlets, two pressurization scenarios (equal and excess pressure) and two arrangements of surgical facilities in the room while the door is completely open. The results show that the system does not perform adequately when the door is open in the operating room under the current conditions, and excess pressure adjustments should be employed to achieve efficient results. The findings of this research can be discussed in the context of design and control of the ventilation facilities of operating rooms.

  11. Teacher's Corner: Structural Equation Modeling with the Sem Package in R

    ERIC Educational Resources Information Center

    Fox, John

    2006-01-01

    R is free, open-source, cooperatively developed software that implements the S statistical programming language and computing environment. The current capabilities of R are extensive, and it is in wide use, especially among statisticians. The sem package provides basic structural equation modeling facilities in R, including the ability to fit…

  12. Raise the Bar

    ERIC Educational Resources Information Center

    Williams, Dana

    2004-01-01

    Detroit's Benjamin Carson Academy (BCA) is believed to be the nation's first charter school for juvenile offenders. Opened in 1999, BCA is housed in the newly built Wayne County Juvenile Detention Facility, a state of the art, 89,300-square-foot building in downtown Detroit with half a dozen gymnasiums, two computer labs, a media center, mental…

  13. Swipe In, Tap Out: Advancing Student Entrepreneurship in the CIS Sandbox

    ERIC Educational Resources Information Center

    Charlebois, Conner; Hentschel, Nicholas; Frydenberg, Mark

    2014-01-01

    The Computer Information Systems Learning and Technology Sandbox (CIS Sandbox) opened as a collaborative learning lab during the fall 2011 semester at a New England business university. The facility employs 24 student workers, who, in addition to providing core tutoring services, are encouraged to explore new technologies and take on special…

  14. The Cloud Area Padovana: from pilot to production

    NASA Astrophysics Data System (ADS)

    Andreetto, P.; Costa, F.; Crescente, A.; Dorigo, A.; Fantinel, S.; Fanzago, F.; Sgaravatto, M.; Traldi, S.; Verlato, M.; Zangrando, L.

    2017-10-01

    The Cloud Area Padovana has been running for almost two years. This is an OpenStack-based scientific cloud, spread across two different sites: the INFN Padova Unit and the INFN Legnaro National Labs. The hardware resources have been scaled horizontally and vertically, by upgrading some hypervisors and by adding new ones: currently it provides about 1100 cores. Some in-house developments were also integrated into the OpenStack dashboard, such as a tool for user and project registrations with direct support for the INFN-AAI Identity Provider as a new option for the user authentication. In collaboration with the EU-funded Indigo DataCloud project, the integration with Docker-based containers has been experimented with and will be available in production soon. This computing facility now satisfies the computational and storage demands of more than 70 users affiliated with about 20 research projects. We present here the architecture of this Cloud infrastructure and the tools and procedures used to operate it. We also focus on the lessons learnt in these two years, describing the problems that were found and the corrective actions that had to be applied. We also discuss the chosen strategy for upgrades, which combines the need to promptly integrate new OpenStack developments, the demand to reduce the downtimes of the infrastructure, and the need to limit the effort requested for such updates. We also discuss how this Cloud infrastructure is being used. In particular we focus on two big physics experiments which are intensively exploiting this computing facility: CMS and SPES. CMS deployed on the cloud a complex computational infrastructure, composed of several user interfaces for job submission in the Grid environment/local batch queues or for interactive processes; this is fully integrated with the local Tier-2 facility. To avoid a static allocation of the resources, an elastic cluster, based on cernVM, has been configured: it allows virtual machines to be created and deleted automatically according to user needs. SPES, using a client-server system called TraceWin, exploits INFN’s virtual resources, performing a very large number of simulations on about a thousand elastically managed nodes.

  15. The role of NASA for aerospace information

    NASA Technical Reports Server (NTRS)

    Chandler, G. P., Jr.

    1980-01-01

    The NASA Scientific and Technical Information Program operations are performed by two contractor operated facilities. The NASA STI Facility, located near Baltimore, Maryland, employs about 210 people who process report literature, operate the computer complex, and provide support for software maintenance and developments. A second contractor, the Technical Information Services of the American Institute of Aeronautics and Astronautics, employs approximately 80 people in New York City and processes the open literature such as journals, magazines, and books. Features of these programs include online access via RECON, announcement services, and international document exchange.

  16. Key Technology Research on Open Architecture for The Sharing of Heterogeneous Geographic Analysis Models

    NASA Astrophysics Data System (ADS)

    Yue, S. S.; Wen, Y. N.; Lv, G. N.; Hu, D.

    2013-10-01

    In recent years, the increasing development of cloud computing technologies has laid a critical foundation for efficiently solving complicated geographic issues. However, it is still difficult to realize the cooperative operation of massive heterogeneous geographical models. Traditional cloud architecture is apt to provide a centralized solution to end users, while all the required resources are often offered by large enterprises or special agencies. Thus, it is a closed framework from the perspective of resource utilization. Solving comprehensive geographic issues requires integrating multifarious heterogeneous geographical models and data. In this case, an open computing platform is needed, with which model owners can package and deploy their models into the cloud conveniently, while model users can search, access, and utilize those models with cloud facilities. Based on this concept, open cloud service strategies for the sharing of heterogeneous geographic analysis models are studied in this article. The key technologies (a unified cloud interface strategy, a sharing platform based on cloud services, and a computing platform based on cloud services) are discussed in detail, and related experiments are conducted for further verification.

  17. CILogon-HA. Higher Assurance Federated Identities for DOE Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basney, James

    The CILogon-HA project extended the existing open source CILogon service (initially developed with funding from the National Science Foundation) to provide credentials at multiple levels of assurance to users of DOE facilities for collaborative science. CILogon translates mechanism and policy across higher education and grid trust federations, bridging from the InCommon identity federation (which federates university and DOE lab identities) to the Interoperable Global Trust Federation (which defines standards across the Worldwide LHC Computing Grid, the Open Science Grid, and other cyberinfrastructure). The CILogon-HA project expanded the CILogon service to support over 160 identity providers (including 6 DOE facilities) and 3 internationally accredited certification authorities. To provide continuity of operations upon the end of the CILogon-HA project period, project staff transitioned the CILogon service to operation by XSEDE.

  18. Planetary Radio Interferometry and Doppler Experiment (PRIDE) technique: A test case of the Mars Express Phobos Flyby. II. Doppler tracking: Formulation of observed and computed values, and noise budget

    NASA Astrophysics Data System (ADS)

    Bocanegra-Bahamón, T. M.; Molera Calvés, G.; Gurvits, L. I.; Duev, D. A.; Pogrebenko, S. V.; Cimò, G.; Dirkx, D.; Rosenblatt, P.

    2018-01-01

    Context. Closed-loop Doppler data obtained by deep space tracking networks, such as the NASA Deep Space Network (DSN) and the ESA tracking station network (Estrack), are routinely used for navigation and science applications. By shadow tracking the spacecraft signal, Earth-based radio telescopes involved in the Planetary Radio Interferometry and Doppler Experiment (PRIDE) can provide open-loop Doppler tracking data only when the dedicated deep space tracking facilities are operating in closed-loop mode. Aims: We explain the data processing pipeline in detail and discuss the capabilities of the technique and its potential applications in planetary science. Methods: We provide the formulation of the observed and computed values of the Doppler data in PRIDE tracking of spacecraft and demonstrate the quality of the results using an experiment with the ESA Mars Express spacecraft as a test case. Results: We find that the Doppler residuals and the corresponding noise budget of the open-loop Doppler detections obtained with the PRIDE stations compare to the closed-loop Doppler detections obtained with dedicated deep space tracking facilities.
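
    In schematic form, the Doppler observable compared here is the received carrier frequency against its predicted value; to first order in the line-of-sight velocity the one-way prediction is shown below, and the residuals are the observed-minus-computed differences. This is a simplified, non-relativistic form, not the full formulation used by the authors.

    ```latex
    % Simplified one-way Doppler prediction (non-relativistic form) and residual;
    % the paper's actual formulation includes relativistic and media corrections.
    f_{\mathrm{pred}}(t) \approx f_{\mathrm{t}} \left( 1 - \frac{\dot{\rho}(t)}{c} \right),
    \qquad
    \Delta f(t) = f_{\mathrm{obs}}(t) - f_{\mathrm{pred}}(t),
    ```

    where f_t is the transmitted frequency, \dot{\rho} the rate of change of the station-spacecraft distance, and c the speed of light.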

  19. Aerodynamic design guidelines and computer program for estimation of subsonic wind tunnel performance

    NASA Technical Reports Server (NTRS)

    Eckert, W. T.; Mort, K. W.; Jope, J.

    1976-01-01

    General guidelines are given for the design of diffusers, contractions, corners, and the inlets and exits of non-return tunnels. A system of equations, reflecting the current technology, has been compiled and assembled into a computer program (a user's manual for this program is included) for determining the total pressure losses. The formulation presented is applicable to compressible flow through most closed- or open-throat, single-, double-, or non-return wind tunnels. A comparison of estimated performance with that actually achieved by several existing facilities produced generally good agreement.
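
    The loss bookkeeping behind such a program can be summarized schematically: each circuit section i is assigned a total-pressure loss coefficient referenced to its local dynamic pressure, and the section losses are summed (re-referenced to the test-section dynamic pressure) to estimate overall circuit losses and, for closed-return tunnels, the energy ratio. The notation below is generic, not necessarily that of the report.

    ```latex
    % Schematic loss bookkeeping for a wind tunnel circuit (generic notation).
    K_i = \frac{\Delta p_{t,i}}{q_i},
    \qquad
    \frac{\Delta p_{t,\mathrm{circuit}}}{q_{\mathrm{ts}}} = \sum_i K_i \frac{q_i}{q_{\mathrm{ts}}},
    \qquad
    \mathrm{ER} = \left( \sum_i K_i \frac{q_i}{q_{\mathrm{ts}}} \right)^{-1},
    ```

    where q_i is the dynamic pressure in section i and q_ts that in the test section; ER is one common definition of the tunnel energy ratio.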

  20. Assessing the uptake of persistent identifiers by research infrastructure users

    PubMed Central

    Maull, Keith E.

    2017-01-01

    Significant progress has been made in the past few years in the development of recommendations, policies, and procedures for creating and promoting citations to data sets, software, and other research infrastructures like computing facilities. Open questions remain, however, about the extent to which referencing practices of authors of scholarly publications are changing in ways desired by these initiatives. This paper uses four focused case studies to evaluate whether research infrastructures are being increasingly identified and referenced in the research literature via persistent citable identifiers. The findings of the case studies show that references to such resources are increasing, but that the patterns of these increases are variable. In addition, the study suggests that citation practices for data sets may change more slowly than citation practices for software and research facilities, due to the inertia of existing practices for referencing the use of data. Similarly, existing practices for acknowledging computing support may slow the adoption of formal citations for computing resources. PMID:28394907

  1. Alternative Fuels Data Center: Ryder Opens Natural Gas Vehicle Maintenance Facility

    Science.gov Websites

  2. Recovery, Transportation and Acceptance to the Curation Facility of the Hayabusa Re-Entry Capsule

    NASA Technical Reports Server (NTRS)

    Abe, M.; Fujimura, A.; Yano, H.; Okamoto, C.; Okada, T.; Yada, T.; Ishibashi, Y.; Shirai, K.; Nakamura, T.; Noguchi, T.

    2011-01-01

    The "Hayabusa" re-entry capsule was safely carried into the clean room of the Sagamihara Planetary Sample Curation Facility at JAXA on June 18, 2010. After computed tomographic (CT) scanning, removal of the heat shield, and surface cleaning of the sample container, the sample container was enclosed in the clean chamber. After opening the sample container and sampling the residual gas in the clean chamber, optical observation, sample recovery, and sample separation for initial analysis will be performed. This curation work is continuing for several months with selected members of the Hayabusa Asteroidal Sample Preliminary Examination Team (HASPET). We report here on the "Hayabusa" capsule recovery operation and on the transportation and acceptance of the re-entry capsule at the curation facility.

  3. CFD-CAA Coupled Calculations of a Tandem Cylinder Configuration to Assess Facility Installation Effects

    NASA Technical Reports Server (NTRS)

    Redonnet, Stephane; Lockard, David P.; Khorrami, Mehdi R.; Choudhari, Meelan M.

    2011-01-01

    This paper presents a numerical assessment of acoustic installation effects in the tandem cylinder (TC) experiments conducted in the NASA Langley Quiet Flow Facility (QFF), an open-jet, anechoic wind tunnel. Calculations that couple the Computational Fluid Dynamics (CFD) and Computational Aeroacoustics (CAA) of the TC configuration within the QFF are conducted using the CFD simulation results previously obtained at NASA LaRC. The coupled simulations enable the assessment of installation effects associated with several specific features in the QFF facility that may have impacted the measured acoustic signature during the experiment. The CFD-CAA coupling is based on CFD data along a suitably chosen surface, and employs a technique that was recently improved to account for installed configurations involving acoustic backscatter into the CFD domain. First, a CFD-CAA calculation is conducted for an isolated TC configuration to assess the coupling approach, as well as to generate a reference solution for subsequent assessments of QFF installation effects. Direct comparisons between the CFD-CAA calculations associated with the various installed configurations allow the assessment of the effects of each component (nozzle, collector, etc.) or feature (confined vs. free jet flow, etc.) characterizing the NASA LaRC QFF facility.

  4. HyspIRI Low Latency Concept and Benchmarks

    NASA Technical Reports Server (NTRS)

    Mandl, Dan

    2010-01-01

    Topics include HyspIRI low latency data ops concept, HyspIRI data flow, ongoing efforts, experiment with Web Coverage Processing Service (WCPS) approach to injecting new algorithms into SensorWeb, low fidelity HyspIRI IPM testbed, compute cloud testbed, open cloud testbed environment, Global Lambda Integrated Facility (GLIF) and OCC collaboration with Starlight, delay tolerant network (DTN) protocol benchmarking, and EO-1 configuration for preliminary DTN prototype.

  5. Spectral-element Seismic Wave Propagation on CUDA/OpenCL Hardware Accelerators

    NASA Astrophysics Data System (ADS)

    Peter, D. B.; Videau, B.; Pouget, K.; Komatitsch, D.

    2015-12-01

    Seismic wave propagation codes are essential tools to investigate a variety of wave phenomena in the Earth. Furthermore, they can now be used for seismic full-waveform inversions in regional- and global-scale adjoint tomography. Although these seismic wave propagation solvers are crucial ingredients to improve the resolution of tomographic images to answer important questions about the nature of Earth's internal processes and subsurface structure, their practical application is often limited due to high computational costs. They thus need high-performance computing (HPC) facilities to improve the current state of knowledge. At present, numerous large HPC systems embed many-core architectures such as graphics processing units (GPUs) to enhance numerical performance. Such hardware accelerators can be programmed using either the CUDA programming environment or the OpenCL language standard. CUDA software development targets NVIDIA graphic cards while OpenCL was adopted by additional hardware accelerators, such as AMD graphic cards, ARM-based processors, and Intel Xeon Phi coprocessors. For seismic wave propagation simulations using the open-source spectral-element code package SPECFEM3D_GLOBE, we incorporated an automatic source-to-source code generation tool (BOAST) which allows us to use meta-programming of all computational kernels for forward and adjoint runs. Using our BOAST kernels, we generate optimized source code for both CUDA and OpenCL languages within the source code package. Thus, seismic wave simulations are now able to fully utilize CUDA and OpenCL hardware accelerators. We show benchmarks of forward seismic wave propagation simulations using SPECFEM3D_GLOBE on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.

  6. Lustre Distributed Name Space (DNE) Evaluation at the Oak Ridge Leadership Computing Facility (OLCF)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmons, James S.; Leverman, Dustin B.; Hanley, Jesse A.

    This document describes the Lustre Distributed Name Space (DNE) evaluation carried out at the Oak Ridge Leadership Computing Facility (OLCF) between 2014 and 2015. DNE is a development project funded by OpenSFS to improve Lustre metadata performance and scalability. The development effort was split into two parts: the first phase (DNE P1) provides support for remote directories over remote Lustre Metadata Server (MDS) nodes and Metadata Target (MDT) devices, while the second phase (DNE P2) addresses directories split over multiple remote MDS nodes and MDT devices. The OLCF has been actively evaluating the performance, reliability, and functionality of both DNE phases. An internal OLCF testbed was used for these tests. Results are promising, and the OLCF is planning a full DNE deployment on production systems in the mid-2016 timeframe.

  7. Safety Precautions and Operating Procedures in an (A)BSL-4 Laboratory: 4. Medical Imaging Procedures.

    PubMed

    Byrum, Russell; Keith, Lauren; Bartos, Christopher; St Claire, Marisa; Lackemeyer, Matthew G; Holbrook, Michael R; Janosko, Krisztina; Barr, Jason; Pusl, Daniela; Bollinger, Laura; Wada, Jiro; Coe, Linda; Hensley, Lisa E; Jahrling, Peter B; Kuhn, Jens H; Lentz, Margaret R

    2016-10-03

    Medical imaging using animal models for human diseases has been utilized for decades; however, until recently, medical imaging of diseases induced by high-consequence pathogens has not been possible. In 2014, the National Institutes of Health, National Institute of Allergy and Infectious Diseases, Integrated Research Facility at Fort Detrick opened an Animal Biosafety Level 4 (ABSL-4) facility to assess the clinical course and pathology of infectious diseases in experimentally infected animals. Multiple imaging modalities including computed tomography (CT), magnetic resonance imaging, positron emission tomography, and single photon emission computed tomography are available to researchers for these evaluations. The focus of this article is to describe the workflow for safely obtaining a CT image of a live guinea pig in an ABSL-4 facility. These procedures include animal handling, anesthesia, and preparing and monitoring the animal until recovery from sedation. We will also discuss preparing the imaging equipment, performing quality checks, communication methods from "hot side" (containing pathogens) to "cold side," and moving the animal from the holding room to the imaging suite.

  8. Forward and adjoint spectral-element simulations of seismic wave propagation using hardware accelerators

    NASA Astrophysics Data System (ADS)

    Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri

    2015-04-01

    Improving the resolution of tomographic images is crucial to answering important questions about the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, in which seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy, and attenuation. Recent advances in regional- and global-scale seismic inversions move toward full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media that provide access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities to further improve the current state of knowledge. In recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing, together with advances in multi-core central processing units (CPUs), can greatly accelerate scientific applications. There are mainly two choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL was adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both the CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usages.

  9. Distributed computing testbed for a remote experimental environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butner, D.N.; Casper, T.A.; Howard, B.C.

    1995-09-18

    Collaboration is increasing as physics research becomes concentrated on a few large, expensive facilities, particularly in magnetic fusion energy research, with national and international participation. These facilities are designed for steady-state operation and interactive, real-time experimentation. We are developing tools to support the establishment of geographically distant centers for interactive operations; such centers would allow scientists to participate in experiments from their home institutions. A testbed is being developed for a Remote Experimental Environment (REE), a "Collaboratory." The testbed will be used to evaluate the ability of a remotely located group of scientists to conduct research on the DIII-D Tokamak at General Atomics. The REE will serve as a testing environment for advanced control and collaboration concepts applicable to future experiments. Process-to-process communications over high-speed wide area networks provide real-time synchronization and exchange of data among multiple computer networks, while the ability to conduct research is enhanced by adding audio/video communication capabilities. The Open Software Foundation's Distributed Computing Environment is being used to test concepts in distributed control, security, naming, remote procedure calls, and distributed file access using the Distributed File Services. We are exploring the technology and sociology of remotely participating in the operation of a large-scale experimental facility.

  10. Real science at the petascale.

    PubMed

    Saksena, Radhika S; Boghosian, Bruce; Fazendeiro, Luis; Kenway, Owain A; Manos, Steven; Mazzeo, Marco D; Sadiq, S Kashif; Suter, James L; Wright, David; Coveney, Peter V

    2009-06-28

    We describe computational science research that uses petascale resources to achieve scientific results at unprecedented scales and resolution. The applications span a wide range of domains, from investigation of fundamental problems in turbulence through computational materials science research to biomedical applications at the forefront of HIV/AIDS research and cerebrovascular haemodynamics. This work was mainly performed on the US TeraGrid 'petascale' resource, Ranger, at Texas Advanced Computing Center, in the first half of 2008 when it was the largest computing system in the world available for open scientific research. We have sought to use this petascale supercomputer optimally across application domains and scales, exploiting the excellent parallel scaling performance found on up to at least 32,768 cores for certain of our codes in the so-called 'capability computing' category as well as high-throughput intermediate-scale jobs for ensemble simulations in the 32-512 core range. Furthermore, this activity provides evidence that conventional parallel programming with MPI should be successful at the petascale in the short to medium term. We also report on the parallel performance of some of our codes on up to 65,536 cores on the IBM Blue Gene/P system at the Argonne Leadership Computing Facility, which has recently been named the fastest supercomputer in the world for open science.

  11. Solving the competitive facility location problem considering the reactions of competitor with a hybrid algorithm including Tabu Search and exact method

    NASA Astrophysics Data System (ADS)

    Bagherinejad, Jafar; Niknam, Azar

    2018-03-01

    In this paper, a leader-follower competitive facility location problem that considers the reactions of the competitor is studied. A model for locating new facilities and determining quality levels for the facilities of the leader firm is proposed. Moreover, changes in the location and quality of existing facilities in a competitive market where a competitor offers the same goods or services are taken into account. The competitor can react by opening new facilities, closing existing ones, and adjusting the quality levels of its existing facilities. The market share captured by each facility depends on its distance to the customer and its quality, and is calculated based on the probabilistic Huff model. Each firm aims to maximize its profit subject to constraints on quality levels and on the budget for setting up new facilities. This problem is formulated as a bi-level mixed-integer non-linear model. The model is solved using a combination of Tabu Search and an exact method. The performance of the proposed algorithm is compared with an upper bound obtained by applying the Karush-Kuhn-Tucker conditions. Computational results show that our algorithm finds solutions near the upper bound in a reasonable time.
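    As an illustration of the market-share mechanism described above, the following sketch evaluates a Huff-type attraction model in which each facility's pull on a customer is its quality divided by a power of the distance. The quality values, distances, demand weights, and distance exponent are hypothetical, and this is not the paper's bi-level model or solution algorithm.

```python
import numpy as np

def huff_shares(quality, distance, beta=2.0):
    """Huff-type capture probabilities: attraction = quality / distance**beta,
    normalized per customer over all competing facilities."""
    attraction = quality[None, :] / distance ** beta        # (customers, facilities)
    return attraction / attraction.sum(axis=1, keepdims=True)

# Hypothetical example: 3 customers, 2 leader facilities + 1 competitor facility.
quality = np.array([8.0, 5.0, 6.0])            # facility quality levels
distance = np.array([[2.0, 4.0, 3.0],
                     [5.0, 1.5, 2.5],
                     [3.0, 3.0, 1.0]])         # customer-to-facility distances
buying_power = np.array([100.0, 80.0, 120.0])  # demand weight of each customer

shares = huff_shares(quality, distance)
captured = shares * buying_power[:, None]
print("Demand captured per facility:", captured.sum(axis=0))
```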

  12. TEAM (Technologies Enabling Agile Manufacturing) shop floor control requirements guide: Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-03-28

    TEAM will create a shop floor control system (SFC) to link pre-production planning to shop floor execution. SFC must meet the requirements of a multi-facility corporation, where control must be maintained between co-located facilities down to individual workstations within each facility. SFC must also meet the requirements of a small corporation, where there may be only one small facility. A hierarchical architecture is required to meet these diverse needs. The hierarchy contains the following levels: Enterprise, Factory, Cell, Station, and Equipment. SFC is focused on the top three levels. Each level of the hierarchy is divided into three basic functions: Scheduler, Dispatcher, and Monitor. The requirements of each function depend on the hierarchical level in which it is used. For example, the scheduler at the Enterprise level must allocate production to individual factories and assign due dates, while the scheduler at the Cell level must provide detailed start and stop times of individual operations. Finally, the system shall be distributed and have an open architecture. Open-architecture software is required so that the appropriate technology can be used at each level of the SFC hierarchy, and even at different instances within the same hierarchical level (for example, Factory A uses discrete-event simulation scheduling software, and Factory B uses an optimization-based scheduler). A distributed implementation is required to reduce the computational burden of the overall system and allow for localized control. A distributed, open-architecture implementation will also require standards for communication between hierarchical levels.

  13. LAMMPS strong scaling performance optimization on Blue Gene/Q

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coffman, Paul; Jiang, Wei; Romero, Nichols A.

    2014-11-12

    LAMMPS "Large-scale Atomic/Molecular Massively Parallel Simulator" is an open-source molecular dynamics package from Sandia National Laboratories. Significant performance improvements in strong-scaling and time-to-solution for this application on IBM's Blue Gene/Q have been achieved through computational optimizations of the OpenMP versions of the short-range Lennard-Jones term of the CHARMM force field and the long-range Coulombic interaction implemented with the PPPM (particle-particle-particle mesh) algorithm, enhanced by runtime parameter settings controlling thread utilization. Additionally, MPI communication performance improvements were made to the PPPM calculation by re-engineering the parallel 3D FFT to use MPICH collectives instead of point-to-point. Performance testing was done using anmore » 8.4-million atom simulation scaling up to 16 racks on the Mira system at Argonne Leadership Computing Facility (ALCF). Speedups resulting from this effort were in some cases over 2x.« less

  14. Design and control of rotating soil-like substrate plant-growing facility based on plant water requirement and computational fluid dynamics simulation

    NASA Astrophysics Data System (ADS)

    Hu, Dawei; Li, Leyuan; Liu, Hui; Zhang, Houkai; Fu, Yuming; Sun, Yi; Li, Liang

    It is necessary to process inedible plant biomass into soil-like substrate (SLS) by bio-composting to realize sustainable utilization of biological resources. Although similar to natural soil in structure and function, SLS often has uneven water distribution that adversely affects plant growth, owing to unsatisfactory porosity, permeability, and gravity distribution. In this article, an SLS plant-growing facility (SLS-PGF) was therefore rotated for cultivating lettuce; the Brinkman equations coupled with laminar flow equations were taken as the governing equations, and boundary conditions were specified by the actual operating characteristics of the rotating SLS-PGF. The optimal open-control law for the angular and inflow velocities was determined from the lettuce water requirement and CFD simulations. The experimental results clearly showed that water content was more uniformly distributed in SLS under the action of centrifugal and Coriolis forces, and that rotating the SLS-PGF with the optimal open-control law could meet the lettuce water requirement at every growth stage and achieve precise irrigation.

  15. High-Performance Computing User Facility | Computational Science | NREL

    Science.gov Websites

    The High-Performance Computing (HPC) User Facility at NREL provides computational resources, including the Peregrine supercomputer and the Gyrfalcon Mass Storage System. [Photo of the Peregrine supercomputer.] Information on these systems and how to access them is available from the facility.

  16. Resilient workflows for computational mechanics platforms

    NASA Astrophysics Data System (ADS)

    Nguyên, Toàn; Trifan, Laurentiu; Désidéri, Jean-Antoine

    2010-06-01

    Workflow management systems have recently been the focus of much interest and much research and deployment for scientific applications worldwide [26, 27]. Their ability to abstract applications by wrapping application codes has also demonstrated the usefulness of such systems for multidiscipline applications [23, 24]. When complex applications need to provide seamless interfaces hiding the technicalities of the computing infrastructures, their high-level modeling, monitoring, and execution functionalities help give production teams seamless and effective facilities [25, 31, 33]. Software integration infrastructures based on programming paradigms such as Python, Matlab, and Scilab have also provided evidence of the usefulness of such approaches for the tight coupling of multidiscipline application codes [22, 24]. High-performance computing based on multi-core, multi-cluster infrastructures also opens new opportunities for more accurate, more extensive, and more robust multi-discipline simulations in the decades to come [28]. This supports the goal of full flight-dynamics simulation for 3D aircraft models within the next decade, opening the way to virtual flight tests and certification of aircraft in the future [23, 24, 29].

  17. Enabling Extreme Scale Earth Science Applications at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, V. G.; Mozdzynski, G.; Hamrud, M.; Deconinck, W.; Smith, L.; Hack, J.

    2014-12-01

    The Oak Ridge Leadership Computing Facility (OLCF), established at the Oak Ridge National Laboratory (ORNL) under the auspices of the U.S. Department of Energy (DOE), welcomes investigators from universities, government agencies, national laboratories, and industry who are prepared to perform breakthrough research across a broad domain of scientific disciplines, including earth and space sciences. Titan, the OLCF flagship system, is currently listed as #2 in the Top500 list of supercomputers in the world, and is the largest available for open science. The computational resources are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, sponsored by the U.S. DOE Office of Science. In 2014, over 2.25 billion core hours on Titan were awarded via INCITE projects, including 14% of the allocation for earth sciences. The INCITE competition is also open to research scientists based outside the USA; in fact, international research projects account for 12% of the INCITE awards in 2014, and the INCITE scientific review panel includes 20% participation from international experts. Recent accomplishments in earth sciences at OLCF include the world's first continuous simulation of 21,000 years of Earth's climate history (2009) and an unprecedented simulation of a magnitude 8 earthquake over 125 sq. miles. One of the ongoing international projects involves scaling the ECMWF Integrated Forecasting System (IFS) model to over 200K cores of Titan. ECMWF is a partner in the EU-funded Collaborative Research into Exascale Systemware, Tools and Applications (CRESTA) project. The significance of the research carried out within this project is the demonstration of techniques required to scale current-generation petascale-capable simulation codes toward the performance levels required for running on future exascale systems. One of the techniques pursued by ECMWF is to use Fortran 2008 coarrays to overlap computations and communications and to reduce the total volume of data communicated. Use of Titan has enabled ECMWF to plan future scalability developments and resource requirements. We will also discuss the best practices developed over the years in navigating the logistical, legal, and regulatory hurdles involved in supporting the facility's diverse user community.

  18. 40 CFR 63.5799 - How do I calculate my facility's organic HAP emissions on a tpy basis for purposes of determining...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Emissions Factors for Open Molding and Centrifugal Casting § 63.5799 How do I calculate my facility's... new facility that does not have any of the following operations: Open molding, centrifugal casting... existing facilities, do not include any organic HAP emissions where resin or gel coat is applied to an open...

  19. 40 CFR 63.5799 - How do I calculate my facility's organic HAP emissions on a tpy basis for purposes of determining...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Emissions Factors for Open Molding and Centrifugal Casting § 63.5799 How do I calculate my facility's... new facility that does not have any of the following operations: Open molding, centrifugal casting... existing facilities, do not include any organic HAP emissions where resin or gel coat is applied to an open...

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messer, Bronson; Harris, James A; Parete-Koon, Suzanne T

    We describe recent development work on the core-collapse supernova code CHIMERA. CHIMERA has consumed more than 100 million CPU-hours on Oak Ridge Leadership Computing Facility (OLCF) platforms in the past 3 years, ranking it among the most important applications at the OLCF. Most of the work described has been focused on exploiting the multicore nature of the current platform (Jaguar) via, e.g., multithreading using OpenMP. In addition, we have begun a major effort to marshal the computational power of GPUs with CHIMERA. The impending upgrade of Jaguar to Titan, a 20+ PF machine with an NVIDIA GPU on many nodes, makes this work essential.

  1. The Virtual Geophysics Laboratory (VGL): Scientific Workflows Operating Across Organizations and Across Infrastructures

    NASA Astrophysics Data System (ADS)

    Cox, S. J.; Wyborn, L. A.; Fraser, R.; Rankine, T.; Woodcock, R.; Vote, J.; Evans, B.

    2012-12-01

    The Virtual Geophysics Laboratory (VGL) is a web portal that provides geoscientists with an integrated online environment that: seamlessly accesses geophysical and geoscience data services from the AuScope national geoscience information infrastructure; loosely couples these data to a variety of geoscience software tools; and provides large-scale processing facilities via cloud computing. VGL is a collaboration between CSIRO, Geoscience Australia, National Computational Infrastructure, Monash University, Australian National University, and the University of Queensland. VGL provides a distributed system whereby a user can enter an online virtual laboratory to seamlessly connect to OGC web services for geoscience data. The data are supplied in open-standard formats using international standards like GeoSciML. A VGL user uses a web mapping interface to discover and filter the data sources using spatial and attribute filters to define a subset. Once the data are selected, the user is not required to download them: VGL collates the service query information for later in the processing workflow, where it is staged directly to the computing facilities. The combination of deferred data download and access to cloud computing enables VGL users to access their data at higher resolutions and to undertake larger-scale inversions and more complex models and simulations than their own local computing facilities might allow. Inside the Virtual Geophysics Laboratory, the user has access to a library of existing models, complete with exemplar workflows for specific scientific problems based on those models. For example, the user can load a geological model published by Geoscience Australia, apply a basic deformation workflow provided by a CSIRO scientist, and have it run in a scientific code from Monash. Finally, the user can publish these results to share with a colleague or cite in a paper. This opens new opportunities for access and collaboration, as all the resources (models, code, data, processing) are shared in the one virtual laboratory. VGL provides end users with access to an intuitive, user-centered interface that leverages cloud storage and cloud and cluster processing from both research communities and commercial suppliers (e.g., Amazon). As the underlying data and information services are agnostic of the scientific domain, they can support many other data types. This fundamental characteristic results in a highly reusable virtual laboratory infrastructure that could also be used, for example, for natural hazards, satellite processing, soil geochemistry, climate modeling, and agricultural crop modeling.
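    A minimal sketch of the kind of OGC WFS request that VGL records and later stages to the compute resources is shown below; the endpoint URL, feature type name, and bounding box are placeholders rather than actual AuScope service details.

```python
import requests

# Hypothetical AuScope-style WFS endpoint and feature type; real VGL services differ.
WFS_URL = "https://example.org/geoserver/wfs"

params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "gsml:GeologicUnit",              # GeoSciML feature type (placeholder)
    "bbox": "115.0,-35.0,120.0,-30.0,EPSG:4326",  # spatial filter chosen in the portal
    "maxFeatures": "100",
}

# VGL defers the download and stages the recorded query to the cloud job;
# here we simply issue the request directly to show what gets staged.
response = requests.get(WFS_URL, params=params, timeout=60)
response.raise_for_status()
print(response.text[:500])  # GML/GeoSciML payload (truncated preview)
```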

  2. Improvement and speed optimization of numerical tsunami modelling program using OpenMP technology

    NASA Astrophysics Data System (ADS)

    Chernov, A.; Zaytsev, A.; Yalciner, A.; Kurkin, A.

    2009-04-01

    Currently, the basic problem of tsunami modeling is the low speed of calculations, which is unacceptable for operational warning services. Existing algorithms for numerical modeling of the hydrodynamic processes of tsunami waves were developed without taking advantage of the capabilities of modern computing facilities. Considerable acceleration of the calculations is possible with parallel algorithms. We discuss here a new approach to parallelizing a tsunami modeling code using OpenMP technology (for multiprocessor systems with shared memory). Nowadays, multiprocessor systems are easily accessible to everyone, and the cost of using such systems is much lower than the cost of clusters. This also allows programmers to apply multithreaded algorithms on researchers' desktop computers. Another important advantage of this approach is the shared-memory model: there is no need to send data over slow networks (for example, Ethernet). All memory is common to all computing processes, which yields almost linear scalability of the program. In the new version of NAMI DANCE, OpenMP technology and a multi-threaded algorithm provide an 80% gain in speed compared with the single-threaded version on a dual-processor unit, and a 320% gain was attained on a four-core processor PC. Thus, it was possible to considerably reduce the calculation time on scientific workstations (desktops) without a complete rewrite of the program and user interfaces. Further modernization of the algorithms for preparing initial data and processing results using OpenMP looks reasonable. The final version of NAMI DANCE with increased computational speed can be used not only for research purposes but also in real-time Tsunami Warning Systems.
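    As an analogy to the OpenMP threading described above (NAMI DANCE itself is not written in Python), the sketch below uses Numba's prange to split a generic stencil update across shared-memory threads; the stencil and array sizes are illustrative only.

```python
# Analogous shared-memory parallelism in Python via Numba (NAMI DANCE uses OpenMP
# in its native code); prange distributes the outer loop across threads that all
# share the same arrays, just as OpenMP threads share memory.
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def step(eta, eta_new, c):
    ny, nx = eta.shape
    for j in prange(1, ny - 1):          # iterations distributed over threads
        for i in range(1, nx - 1):
            # generic 5-point stencil update (illustrative, not the real scheme)
            eta_new[j, i] = eta[j, i] + c * (
                eta[j - 1, i] + eta[j + 1, i] +
                eta[j, i - 1] + eta[j, i + 1] - 4.0 * eta[j, i]
            )

eta = np.random.rand(2048, 2048)
eta_new = np.empty_like(eta)
step(eta, eta_new, 0.1)   # first call compiles; later calls run multi-threaded
```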

  3. GAUSSIAN 76: An ab initio Molecular Orbital Program

    DOE R&D Accomplishments Database

    Binkley, J. S.; Whiteside, R.; Hariharan, P. C.; Seeger, R.; Hehre, W. J.; Lathan, W. A.; Newton, M. D.; Ditchfield, R.; Pople, J. A.

    1978-01-01

    Gaussian 76 is a general-purpose computer program for ab initio Hartree-Fock molecular orbital calculations. It can handle basis sets involving s, p and d-type Gaussian functions. Certain standard sets (STO-3G, 4-31G, 6-31G*, etc.) are stored internally for easy use. Closed shell (RHF) or unrestricted open shell (UHF) wave functions can be obtained. Facilities are provided for geometry optimization to potential minima and for limited potential surface scans.

  4. GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.

    2016-12-01

    Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. The data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies for storing, computing, and analyzing data. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. We therefore developed GISpark, a geospatial distributed computing platform for processing large-scale vector, raster, and stream data. GISpark is built on the latest virtualized computing infrastructures and distributed computing architecture. OpenStack and Docker are used to build a multi-user cloud computing infrastructure hosting GISpark. Virtual storage systems such as HDFS, Ceph, and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides a spatiotemporal computational model and advanced geospatial visualization tools that address other domains involving spatial properties. We tested the performance of the platform based on taxi trajectory analysis. Results suggest that GISpark achieves excellent run-time performance in spatiotemporal big data applications.
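    The Spark side of such a platform can be illustrated with a short PySpark sketch that filters taxi-trajectory points to a bounding box and aggregates them in parallel; the input file, column names, and bounding box are placeholders, and this is not GISpark's own API.

```python
# Illustrative PySpark sketch of the kind of spatiotemporal filtering a platform
# like GISpark builds on (GISpark layers GIS libraries on top of Spark; the file
# and schema below are hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("taxi-bbox-filter").getOrCreate()

# expected columns: taxi_id, timestamp, lon, lat  (hypothetical schema)
df = spark.read.csv("taxi_trajectories.csv", header=True, inferSchema=True)

bbox = {"lon_min": 116.2, "lon_max": 116.5, "lat_min": 39.8, "lat_max": 40.0}
in_box = df.filter(
    (df.lon >= bbox["lon_min"]) & (df.lon <= bbox["lon_max"]) &
    (df.lat >= bbox["lat_min"]) & (df.lat <= bbox["lat_max"])
)

# count points per taxi inside the box, executed in parallel across the cluster
in_box.groupBy("taxi_id").count().orderBy("count", ascending=False).show(10)

spark.stop()
```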

  5. Integration of an open interface PC scene generator using COTS DVI converter hardware

    NASA Astrophysics Data System (ADS)

    Nordland, Todd; Lyles, Patrick; Schultz, Bret

    2006-05-01

    Commercial-Off-The-Shelf (COTS) personal computer (PC) hardware is increasingly capable of computing high dynamic range (HDR) scenes for military sensor testing at high frame rates. New electro-optical and infrared (EO/IR) scene projectors feature electrical interfaces that can accept the DVI output of these PC systems. However, military Hardware-in-the-loop (HWIL) facilities such as those at the US Army Aviation and Missile Research Development and Engineering Center (AMRDEC) utilize a sizeable inventory of existing projection systems that were designed to use the Silicon Graphics Incorporated (SGI) digital video port (DVP, also known as DVP2 or DD02) interface. To mate the new DVI-based scene generation systems to these legacy projection systems, CG2 Inc., a Quantum3D Company (CG2), has developed a DVI-to-DVP converter called Delta DVP. This device takes progressive scan DVI input, converts it to digital parallel data, and combines and routes color components to derive a 16-bit wide luminance channel replicated on a DVP output interface. The HWIL Functional Area of AMRDEC has developed a suite of modular software to perform deterministic real-time, wave band-specific rendering of sensor scenes, leveraging the features of commodity graphics hardware and open source software. Together, these technologies enable sensor simulation and test facilities to integrate scene generation and projection components with diverse pedigrees.
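    The color-to-luminance reduction performed by such a converter can be sketched in a few lines; the example below uses the standard Rec. 601 luma weights purely for illustration, since the actual weighting and routing implemented by the Delta DVP hardware are not specified here.

```python
import numpy as np

def rgb_to_luma16(rgb8):
    """Collapse an 8-bit-per-channel RGB frame into a single 16-bit luminance
    channel. Rec. 601 weights are used here for illustration only; the Delta DVP's
    actual weighting is not given in the abstract."""
    r, g, b = (rgb8[..., k].astype(np.float64) for k in range(3))
    luma = 0.299 * r + 0.587 * g + 0.114 * b              # 0..255 range
    return np.round(luma / 255.0 * 65535.0).astype(np.uint16)

frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
luma16 = rgb_to_luma16(frame)
print(luma16.dtype, luma16.min(), luma16.max())
```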

  6. INTEGRATION OF FACILITY MODELING CAPABILITIES FOR NUCLEAR NONPROLIFERATION ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorensek, M.; Hamm, L.; Garcia, H.

    2011-07-18

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  7. FermiLib v0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MCCLEAN, JARROD; HANER, THOMAS; STEIGER, DAMIAN

    FermiLib is an open-source software package designed to facilitate the development and testing of algorithms for simulations of fermionic systems on quantum computers. Fermionic simulations represent an important application of early quantum devices, with many potential high-value targets such as quantum chemistry for the development of new catalysts. This software strives to provide a link between the required domain expertise in specific fermionic applications and quantum computing, to enable more users to directly interface with, and develop for, these applications. It is an extensible Python library designed to interface with the high-performance quantum simulator ProjectQ, as well as application-specific software such as PSI4 from the domain of quantum chemistry. Such software is key to enabling effective user facilities in quantum computation research.

  8. Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    NASA Astrophysics Data System (ADS)

    Fluke, Christopher J.; Barnes, David G.; Barsdell, Benjamin R.; Hassan, Amr H.

    2011-01-01

    General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy, stand to reap great benefits.

  9. Open release of the DCA++ project

    NASA Astrophysics Data System (ADS)

    Haehner, Urs; Solca, Raffaele; Staar, Peter; Alvarez, Gonzalo; Maier, Thomas; Summers, Michael; Schulthess, Thomas

    We present the first open release of the DCA++ project, a highly scalable and efficient research code to solve quantum many-body problems with cutting edge quantum cluster algorithms. The implemented dynamical cluster approximation (DCA) and its DCA+ extension with a continuous self-energy capture nonlocal correlations in strongly correlated electron systems thereby allowing insight into high-Tc superconductivity. With the increasing heterogeneity of modern machines, DCA++ provides portable performance on conventional and emerging new architectures, such as hybrid CPU-GPU and Xeon Phi, sustaining multiple petaflops on ORNL's Titan and CSCS' Piz Daint. Moreover, we will describe how best practices in software engineering can be applied to make software development sustainable and scalable in a research group. Software testing and documentation not only prevent productivity collapse, but more importantly, they are necessary for correctness, credibility and reproducibility of scientific results. This research used resources of the Oak Ridge Leadership Computing Facility (OLCF) awarded by the INCITE program, and of the Swiss National Supercomputing Center. OLCF is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.

  10. Elastic extension of a local analysis facility on external clouds for the LHC experiments

    NASA Astrophysics Data System (ADS)

    Ciaschini, V.; Codispoti, G.; Rinaldi, L.; Aiftimiei, D. C.; Bonacorsi, D.; Calligola, P.; Dal Pra, S.; De Girolamo, D.; Di Maria, R.; Grandi, C.; Michelotto, D.; Panella, M.; Taneja, S.; Semeria, F.

    2017-10-01

    The computing infrastructures serving the LHC experiments have been designed to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may however originate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, the LHC experiments are exploring the opportunity to access Cloud resources provided by external partners or commercial providers. In this work we present the proof of concept of the elastic extension of a local analysis facility, specifically the Bologna Tier-3 Grid site, for the LHC experiments hosted at the site, on an external OpenStack infrastructure. We focus on the Cloud Bursting of the Grid site using DynFarm, a newly designed tool that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on an OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time they serve as an extension of the farm for the local usage.

  11. Dawn Usage, Scheduling, and Governance Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Louis, S

    2009-11-02

    This document describes Dawn use, scheduling, and governance concerns. Users started running full-machine science runs in early April 2009 during the initial open shakedown period. Scheduling Dawn while in the Open Computing Facility (OCF) was controlled and coordinated via phone calls, emails, and a small number of controlled banks. With Dawn moving to the Secure Computing Facility (SCF) in fall of 2009, a more detailed scheduling and governance model is required. The three major objectives are: (1) ensure Dawn resources are allocated on a program priority-driven basis; (2) utilize Dawn resources on the job mixes for which they were intended; and (3) minimize idle cycles through use of partitions, banks, and proper job mix. The SCF workload for Dawn will be inherently different from that of Purple or BG/L, and therefore needs a different approach. Dawn's primary function is to permit adequate access for tri-lab code development in preparation for Sequoia, and in particular for weapons multi-physics codes in support of UQ. A second purpose is to provide time allocations for large-scale science runs and for UQ suite calculations to advance SSP program priorities. This proposed governance model will be the basis for initial time allocation of Dawn computing resources for the science and UQ workloads that merit priority on this class of resource, either because they cannot be reasonably attempted on any other resources due to the size of the problem, or because of the unavailability of sizable allocations on other ASC capability or capacity platforms. This proposed model intends to make the most effective use of Dawn possible, without being overly constrained by more formal proposal processes such as those now used for Purple CCCs.

  12. LSST Resources for the Community

    NASA Astrophysics Data System (ADS)

    Jones, R. Lynne

    2011-01-01

    LSST will generate 100 petabytes of images and 20 petabytes of catalogs, covering 18,000-20,000 square degrees of area sampled every few days over a total of ten years -- all publicly available and exquisitely calibrated. The primary access to these data will be through Data Access Centers (DACs). DACs will provide access to catalogs of sources (single detections from individual images) and objects (associations of sources from multiple images). Simple user interfaces or direct SQL queries at the DAC can return user-specified portions of data from catalogs or images. More complex manipulations of the data, such as calculating multi-point correlation functions or creating alternative photo-z measurements on terabyte-scale data, can be completed with the DAC's own resources. Even more data-intensive computations requiring access to large numbers of image pixels at the petabyte scale could also be conducted at the DAC, using compute resources allocated in a manner similar to a TAC. DAC resources will be available to all individuals in member countries or institutes and to LSST science collaborations. DACs will also assist investigators with requests for allocations at national facilities such as the Petascale Computing Facility, TeraGrid, and Open Science Grid. Using data on this scale requires new approaches to accessibility and analysis, which are being developed through interactions with the LSST Science Collaborations. We are producing simulated images (as might be acquired by LSST) based on models of the universe and generating catalogs from these images (as well as from the base model) using the LSST data management framework in a series of data challenges. The resulting images and catalogs are being made available to the science collaborations to verify the algorithms and develop user interfaces. All LSST software is open source and available online, including preliminary catalog formats. We encourage feedback from the community.
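    A direct SQL query of the kind mentioned above might look like the following sketch; the table and column names are placeholders rather than the actual LSST catalog schema, and an in-memory SQLite table stands in for a DAC endpoint.

```python
import sqlite3

# Toy stand-in for a DAC catalog; the Object table and its columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Object (objectId INTEGER, ra REAL, decl REAL, rMag REAL)")
conn.executemany("INSERT INTO Object VALUES (?, ?, ?, ?)",
                 [(1, 150.02, 2.15, 22.5), (2, 150.30, 2.20, 21.0)])

# A user-specified spatial + magnitude cut, the kind of subset a DAC SQL
# interface would return.
QUERY = """
SELECT objectId, ra, decl, rMag
FROM   Object
WHERE  ra   BETWEEN 149.9 AND 150.1
  AND  decl BETWEEN   2.1 AND   2.3
  AND  rMag < 24.0
"""
for row in conn.execute(QUERY):
    print(row)
```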

  13. Defect Detection in Superconducting Radiofrequency Cavity Surface Using C++ and OpenCV

    NASA Astrophysics Data System (ADS)

    Oswald, Samantha; Thomas Jefferson National Accelerator Facility Collaboration

    2014-03-01

    Thomas Jefferson National Accelerator Facility (TJNAF) uses superconducting radiofrequency (SRF) cavities to accelerate an electron beam. If these cavities have a small particle or defect, it can degrade the performance of the cavity. The problem at hand is inspecting the cavity for defects, such as little bubbles of niobium on the surface of the cavity. Thousands of pictures have to be taken of a single cavity and then looked through to see how many defects were found. A C++ program with Open Source Computer Vision (OpenCV) was constructed to reduce the number of hours spent searching through the images; it finds all the defects. The SRF group is now able to use the code to identify defects in ongoing tests of SRF cavities. Real-time detection is the next step, so that instead of taking pictures when looking at the cavity, the camera will detect all the defects.
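    The TJNAF program is written in C++ with OpenCV; the sketch below shows an analogous threshold-and-contour pass using OpenCV's Python bindings, with the image file name, threshold, and size cuts as placeholder values rather than the SRF group's settings.

```python
# Analogous defect-spotting pass in Python/OpenCV (the TJNAF code is C++; the
# threshold and size limits below are placeholders, not the SRF group's values).
import cv2

image = cv2.imread("cavity_surface.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
blurred = cv2.GaussianBlur(image, (5, 5), 0)

# Bright spots against the niobium surface become foreground after thresholding.
_, mask = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
defects = [c for c in contours if 5 < cv2.contourArea(c) < 500]  # reject noise/glare

for c in defects:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(image, (x, y), (x + w, y + h), 255, 1)

print(f"candidate defects found: {len(defects)}")
cv2.imwrite("cavity_surface_marked.png", image)
```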

  14. Computer-Aided Facilities Management Systems (CAFM).

    ERIC Educational Resources Information Center

    Cyros, Kreon L.

    Computer-aided facilities management (CAFM) refers to a collection of software used with increasing frequency by facilities managers. The six major CAFM components are discussed with respect to their usefulness and popularity in facilities management applications: (1) computer-aided design; (2) computer-aided engineering; (3) decision support…

  15. Nonlinear seismic analysis of a reactor structure impact between core components

    NASA Technical Reports Server (NTRS)

    Hill, R. G.

    1975-01-01

    The seismic analysis of the FFTF-PIOTA (Fast Flux Test Facility-Postirradiation Open Test Assembly), subjected to a horizontal DBE (Design Base Earthquake), is presented. The PIOTA is the first in a set of open test assemblies to be designed for the FFTF. Employing the direct method of transient analysis, the governing differential equations describing the motion of the system are set up directly and are implicitly integrated numerically in time. A simple lumped-mass beam model of the FFTF which includes small clearances between core components is used as a "driver" for a fine-mesh model of the PIOTA. The nonlinear forces due to the impact of the core components and their effect on the PIOTA are computed.

  16. DOE/ NREL Build One of the World's Most Energy Efficient Office Spaces

    ScienceCinema

    Radocy, Rachel; Livingston, Brian; von Luhrte, Rich

    2018-05-18

    Technology — from sophisticated computer modeling to advanced windows that actually open — will help the newest building at the U.S. Department of Energy's (DOE) National Renewable Energy Laboratory (NREL) be one of the world's most energy efficient offices. Scheduled to open this summer, the 222,000 square-foot RSF will house more than 800 staff and an energy efficient information technology data center. Because 19 percent of the country's energy is used by commercial buildings, DOE plans to make this facility a showcase for energy efficiency. DOE hopes the design of the RSF will be replicated by the building industry and help reduce the nation's energy consumption by changing the way commercial buildings are designed and built.

  17. GATECloud.net: a platform for large-scale, open-source text processing on the cloud.

    PubMed

    Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina

    2013-01-28

    Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research: GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security, and fault tolerance. We also include a cost-benefit analysis and usage evaluation.

  18. Redirecting Under-Utilised Computer Laboratories into Cluster Computing Facilities

    ERIC Educational Resources Information Center

    Atkinson, John S.; Spenneman, Dirk H. R.; Cornforth, David

    2005-01-01

    Purpose: To provide administrators at an Australian university with data on the feasibility of redirecting under-utilised computer laboratories facilities into a distributed high performance computing facility. Design/methodology/approach: The individual log-in records for each computer located in the computer laboratories at the university were…

  19. 3D Surveying, Modeling and Geo-Information System of the New Campus of ITB-Indonesia

    NASA Astrophysics Data System (ADS)

    Suwardhi, D.; Trisyanti, S. W.; Ainiyah, N.; Fajri, M. N.; Hanan, H.; Virtriana, R.; Edmarani, A. A.

    2016-10-01

    The new campus of ITB-Indonesia, located at Jatinangor, requires good facilities and infrastructure to support all campus activities. These cannot be separated from procurement and maintenance activities. Computer-based technology (information systems) for the procurement and maintenance of facilities and infrastructure is known as Building Information Modeling (BIM). Nowadays, that technology is more affordable, with free software that is easy to use and tailored to user needs. BIM has some limitations and requires other technologies to complement it, namely Geographic Information Systems (GIS). BIM and GIS require surveying data to visualize the landscape and buildings of the Jatinangor ITB campus. This paper presents the on-going internal service program conducted by researchers, academic staff, and students for the university. The program includes 3D surveying to support the data requirements for 3D modeling of buildings in the CityGML and Industry Foundation Classes (IFC) data models. The 3D surveying produces point clouds that can be used to make 3D models. The 3D modeling is divided into low and high levels of detail. The low-level model is stored in a 3D CityGML database, and the high-level model, including interiors, is stored in a BIM server. The 3D models can be used to visualize the buildings and site of the Jatinangor ITB campus. For facility management of the campus, a geo-information system is being developed that can be used for planning, constructing, and maintaining Jatinangor ITB's facilities and infrastructure. The system uses openMAINT, an open-source solution for property and facility management.

  20. Nosocomial infections in geriatric long-term-care and rehabilitation facilities: exploration in the development of a risk index for epidemiological surveillance.

    PubMed

    Golliot, F; Astagneau, P; Cassou, B; Okra, N; Rothan-Tondeur, M; Brücker, G

    2001-12-01

    To compute a risk index for nosocomial infection (NI) surveillance in geriatric long-term-care facilities (LTCFs) and rehabilitation facilities. Analysis of data collected during the French national prevalence survey on NIs conducted in 1996. Risk indices were constructed based on the patient case-mix defined according to risk factors for NIs identified in the elderly. 248 geriatric units in 77 hospitals located in northern France. All hospital inpatients on the day of the survey were included. Data from 11,254 patients were recorded. The overall rate of infected patients was 9.9%. Urinary tract, respiratory tract, and skin were the most common infection sites in both rehabilitation facilities and LTCFs. Eleven risk indices, categorizing patients in 3 to 7 levels of increasing NI risk, ranging from 2.7% to 36.2%, were obtained. Indices offered risk adjustment according to NI rate stratification and clinical relevance of risk factors such as indwelling devices, open bedsores, swallowing disorders, sphincter incontinence, lack of mobility, immunodeficiency, or rehabilitation activity. The optimal index should be tailored to the strategy selected for NI surveillance in geriatric facilities in view of available financial and human resources.
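    The published indices were fitted to the survey data, but the basic idea of an additive, stratified risk score built from the factors named above can be sketched as follows; the weights and cut-points are hypothetical.

```python
# Minimal additive risk-index sketch using the risk factors named in the abstract.
# Weights and cut-points are hypothetical; the published indices were derived from
# the 1996 French prevalence-survey data.
FACTOR_WEIGHTS = {
    "indwelling_device":      2,
    "open_bedsore":           2,
    "swallowing_disorder":    1,
    "sphincter_incontinence": 1,
    "immobility":             1,
    "immunodeficiency":       1,
}

def risk_level(patient: dict) -> str:
    score = sum(w for f, w in FACTOR_WEIGHTS.items() if patient.get(f, False))
    if score >= 5:
        return "high"
    if score >= 2:
        return "intermediate"
    return "low"

patient = {"indwelling_device": True, "immobility": True, "open_bedsore": False}
print(risk_level(patient))   # -> "intermediate"
```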

  1. IYA Outreach Plans for Appalachian State University's Observatories

    NASA Astrophysics Data System (ADS)

    Caton, Daniel B.; Pollock, J. T.; Saken, J. M.

    2009-01-01

    Appalachian State University will provide a variety of observing opportunities for the public during the International Year of Astronomy. These will be focused at both the campus GoTo Telescope Facility used by Introductory Astronomy students and the research facilities at our Dark Sky Observatory. The campus facility is composed of a rooftop deck with a roll-off roof housing fifteen Celestron C11 telescopes. During astronomy lab class meetings these telescopes are used either in situ or remotely by computer control from the adjacent classroom. For the IYA we will host the public for regular observing sessions at these telescopes. The research facility features a 32-inch DFM Engineering telescope with its dome attached to the Cline Visitor Center. The Visitor Center is still under construction and we anticipate its completion for a spring opening during IYA. The CVC will provide areas for educational outreach displays and a view of the telescope control room. Visitors will view celestial objects directly at the eyepiece. We are grateful for the support of the National Science Foundation, through grant number DUE-0536287, which provided instrumentation for the GoTO facility, and to J. Donald Cline for support of the Visitor Center.

  2. College and University Facilities Survey. Part 5: Enrollment and Facilities of New Colleges and Universities Opening Between 1961 and 1965.

    ERIC Educational Resources Information Center

    Robbins, Leslie F.; Bokelman, W. Robert

    Facilities data for 181 colleges opened between 1961 and 1965 are summarized. Data from the survey suggests the institutional characteristics, type and purpose of the new colleges, and the trends in enrollment distribution. The facilities of the new colleges are tabulated according to new construction and rehabilitation costs by categories of…

  3. Brief Survey of TSC Computing Facilities

    DOT National Transportation Integrated Search

    1972-05-01

    The Transportation Systems Center (TSC) has four, essentially separate, in-house computing facilities. We shall call them Honeywell Facility, the Hybrid Facility, the Multimode Simulation Facility, and the Central Facility. In addition to these four,...

  4. Ethics in the driver, Mosaic is the vehicle, and network instruction is the precious cargo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodbury, M.; Schmitz, J.

    1994-12-31

    The College of Agriculture at the University of Illinois had an ambitious goal: train 500 incoming freshman students to take advantage of their computer network privileges without placing undue demands on their time. Because the College already had outstanding computer facilities and expertise, the Department of Academic Programs chose the AIM lab to handle the task. AIM stands for Agriculture Instructional Media, and the staff, led by John Schmitz, searched for the most helpful message, meaning, and means to deliver the project. John and I created and developed the computer tutorial, and because both of us are enthusiastic about "the Net" and being good citizens in the cyber community, we built ethics into our design from the earliest stages. You can examine our team's product on the Web at http://gopher.ag.uiuc.edu/WWW/AIM/Discovery/Net/intro.html. To give a brief overview of our message to the students, and to the University community which will also share in our work, here is the wording from the opening Mosaic screen, a clear statement of the goals of our project: We believe that access to the Internet is a privilege. You need a basic knowledge of the "rules of the road" in order to be a good citizen of the Internet community. You should be required to earn a driver's license to use the Information Superhighway.

  5. Closely Spaced Independent Parallel Runway Simulation.

    DTIC Science & Technology

    1984-10-01

    facility consists of the Central Computer Facility, the Controller Laboratory, and the Simulator Pilot Complex. CENTRAL COMPUTER FACILITY. The Central Computer Facility consists of a group of mainframes, minicomputers, and associated peripherals which host the operational and data acquisition ... in the Controller Laboratory and convert their verbal directives into a keyboard entry which is transmitted to the Central Computer Complex, where

  6. Research on OpenStack of open source cloud computing in colleges and universities’ computer room

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Zhang, Dandan

    2017-06-01

    In recent years, cloud computing technology has developed rapidly, especially open-source cloud computing. Open-source cloud computing has attracted a large number of user groups thanks to the advantages of open source and low cost, and has now been promoted and applied on a large scale. In this paper, we first briefly introduce the main functions and architecture of the open-source cloud computing tool OpenStack, and then discuss in depth the core problems of computer labs in colleges and universities. Building on this analysis, we describe the specific application and deployment of OpenStack in a university computer room. The experimental results show that OpenStack can efficiently and conveniently deploy a cloud for the university computer room, with stable performance and good functional value.
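    Once such a computer-room cloud is running, provisioning a lab virtual machine through the OpenStack SDK might look like the sketch below; the cloud entry, image, flavor, and network names are placeholders for whatever the local deployment defines.

```python
# Minimal sketch of provisioning one lab workstation VM through the OpenStack SDK;
# cloud, image, flavor, and network names are placeholders for a local deployment.
import openstack

conn = openstack.connect(cloud="campus-lab")          # entry in clouds.yaml (assumed)

image = conn.compute.find_image("ubuntu-22.04-lab")   # placeholder image name
flavor = conn.compute.find_flavor("m1.small")         # placeholder flavor name
network = conn.network.find_network("lab-net")        # placeholder network name

server = conn.compute.create_server(
    name="lab-ws-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```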

  7. 9 CFR 93.412 - Ruminant quarantine facilities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... facility. In the event of oral notification, APHIS will give written confirmation to the operator of the...) Windows and other openings. Any windows or other openings in the quarantine area must be double-screened...). All screening of windows or other openings must be easily removable for cleaning, yet otherwise remain...

  8. 9 CFR 93.412 - Ruminant quarantine facilities.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... facility. In the event of oral notification, APHIS will give written confirmation to the operator of the...) Windows and other openings. Any windows or other openings in the quarantine area must be double-screened...). All screening of windows or other openings must be easily removable for cleaning, yet otherwise remain...

  9. NASA Advanced Supercomputing Facility Expansion

    NASA Technical Reports Server (NTRS)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  10. CSciBox: An Intelligent Assistant for Dating Ice and Sediment Cores

    NASA Astrophysics Data System (ADS)

    Finlinson, K.; Bradley, E.; White, J. W. C.; Anderson, K. A.; Marchitto, T. M., Jr.; de Vesine, L. R.; Jones, T. R.; Lindsay, C. M.; Israelsen, B.

    2015-12-01

    CSciBox is an integrated software system for the construction and evaluation of age models of paleo-environmental archives. It incorporates a number of data-processing and visualization facilities, ranging from simple interpolation to reservoir-age correction and 14C calibration via the Calib algorithm, as well as a number of firn and ice-flow models. It employs modern database technology to store paleoclimate proxy data and analysis results in an easily accessible and searchable form, and offers the user access to those data and computational elements via a modern graphical user interface (GUI). In the case of truly large data or computations, CSciBox is parallelizable across modern multi-core processors, or clusters, or even the cloud. The code is open source and freely available on github, as are one-click installers for various versions of Windows and Mac OSX. The system's architecture allows users to incorporate their own software in the form of computational components that can be built smoothly into CSciBox workflows, taking advantage of CSciBox's GUI, data importing facilities, and plotting capabilities. To date, BACON and StratiCounter have been integrated into CSciBox as embedded components. The user can manipulate and compose all of these tools and facilities as she sees fit. Alternatively, she can employ CSciBox's automated reasoning engine, which uses artificial intelligence techniques to explore the gamut of age models and cross-dating scenarios automatically. The automated reasoning engine captures the knowledge of expert geoscientists, and can output a description of its reasoning.
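
    The "simple interpolation" facility mentioned above can be illustrated with a few lines of generic Python; this is not CSciBox code, and the tie-point depths and ages below are invented.

    ```python
    # Generic age-depth interpolation between dated tie points -- an
    # illustration of the simplest age-model step, not CSciBox itself.
    # Depths (m) and ages (yr BP) are invented.
    import numpy as np

    tie_depths = np.array([0.0, 1.2, 3.5, 7.8])          # depths of dated samples
    tie_ages   = np.array([0.0, 850.0, 2600.0, 6100.0])  # calibrated ages

    sample_depths = np.linspace(0.0, 7.8, 40)            # depths of proxy samples
    sample_ages = np.interp(sample_depths, tie_depths, tie_ages)

    for d, a in zip(sample_depths[:5], sample_ages[:5]):
        print(f"depth {d:5.2f} m -> age {a:7.1f} yr BP")
    ```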

  11. 40 CFR 63.5799 - How do I calculate my facility's organic HAP emissions on a tpy basis for purposes of determining...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Calculating Organic Hap Emissions Factors for Open Molding and Centrifugal Casting § 63.5799 How do I.../casting operations, or a new facility that does not have any of the following operations: Open molding... coat is applied to an open centrifugal mold using open molding application techniques. Table 1 and the...

  12. 40 CFR 63.5799 - How do I calculate my facility's organic HAP emissions on a tpy basis for purposes of determining...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Calculating Organic Hap Emissions Factors for Open Molding and Centrifugal Casting § 63.5799 How do I.../casting operations, or a new facility that does not have any of the following operations: Open molding... coat is applied to an open centrifugal mold using open molding application techniques. Table 1 and the...

  13. Comparison of Mixing Characteristics for Several Fuel Injectors on an Open Plate and in a Ducted Flowpath Configuration at Hypervelocity Flow Conditions

    NASA Technical Reports Server (NTRS)

    Drozda, Tomasz G.; Shenoy, Rajiv R.; Passe, Bradley J.; Baurle, Robert A.; Drummond, J. Philip

    2017-01-01

    In order to reduce the cost and complexity associated with fuel injection and mixing experiments for high-speed flows, and to further enable optical access to the test section for nonintrusive diagnostics, the Enhanced Injection and Mixing Project (EIMP) utilizes an open flat plate configuration to characterize inert mixing properties of various fuel injectors for hypervelocity applications. The experiments also utilize reduced total temperature conditions to alleviate the need for hardware cooling. The use of "cold" flows and non-reacting mixtures for mixing experiments is not new, and has been extensively utilized as a screening technique for scramjet fuel injectors. The impact of reduced facility-air total temperature, and the use of inert fuel simulants, such as helium, on the mixing character of the flow has been assessed in previous numerical studies by the authors. Mixing performance was characterized for three different injectors: a strut, a ramp, and a flushwall. The present study focuses on the impact of using an open plate to approximate mixing in the duct. Toward this end, Reynolds-averaged simulations (RAS) were performed for the three fuel injectors in an open plate configuration and in a duct. The mixing parameters of interest, such as mixing efficiency and total pressure recovery, are then computed and compared for the two configurations. In addition to mixing efficiency and total pressure recovery, the combustion efficiency and thrust potential are also computed for the reacting simulations.

  14. Apollo experience report: Real-time auxiliary computing facility development

    NASA Technical Reports Server (NTRS)

    Allday, C. E.

    1972-01-01

    The Apollo real time auxiliary computing function and facility were an extension of the facility used during the Gemini Program. The facility was expanded to include support of all areas of flight control, and computer programs were developed for mission and mission-simulation support. The scope of the function was expanded to include prime mission support functions in addition to engineering evaluations, and the facility became a mandatory mission support facility. The facility functioned as a full scale mission support activity until after the first manned lunar landing mission. After the Apollo 11 mission, the function and facility gradually reverted to a nonmandatory, offline, on-call operation because the real time program flexibility was increased and verified sufficiently to eliminate the need for redundant computations. The evaluation of the facility and function and recommendations for future programs are discussed in this report.

  15. Configuration and Management of a Cluster Computing Facility in Undergraduate Student Computer Laboratories

    ERIC Educational Resources Information Center

    Cornforth, David; Atkinson, John; Spennemann, Dirk H. R.

    2006-01-01

    Purpose: Many researchers require access to computer facilities beyond those offered by desktop workstations. Traditionally, these are offered either through partnerships, to share the cost of supercomputing facilities, or through purpose-built cluster facilities. However, funds are not always available to satisfy either of these options, and…

  16. Control and Information Systems for the National Ignition Facility

    DOE PAGES

    Brunton, Gordon; Casey, Allan; Christensen, Marvin; ...

    2017-03-23

    Orchestration of every National Ignition Facility (NIF) shot cycle is managed by the Integrated Computer Control System (ICCS), which uses a scalable software architecture running code on more than 1950 front-end processors, embedded controllers, and supervisory servers. The ICCS operates laser and industrial control hardware containing 66 000 control and monitor points to ensure that all of NIF’s laser beams arrive at the target within 30 ps of each other and are aligned to a pointing accuracy of less than 50 μm root-mean-square, while ensuring that a host of diagnostic instruments record data in a few billionths of a second. NIF’s automated control subsystems are built from a common object-oriented software framework that distributes the software across the computer network and achieves interoperation between different software languages and target architectures. A large suite of business and scientific software tools supports experimental planning, experimental setup, facility configuration, and post-shot analysis. Standard business services using open-source software, commercial workflow tools, and database and messaging technologies have been developed. An information technology infrastructure consisting of servers, network devices, and storage provides the foundation for these systems. Thus, this work is an overview of the control and information systems used to support a wide variety of experiments during the National Ignition Campaign.

  17. Control and Information Systems for the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunton, Gordon; Casey, Allan; Christensen, Marvin

    Orchestration of every National Ignition Facility (NIF) shot cycle is managed by the Integrated Computer Control System (ICCS), which uses a scalable software architecture running code on more than 1950 front-end processors, embedded controllers, and supervisory servers. The ICCS operates laser and industrial control hardware containing 66 000 control and monitor points to ensure that all of NIF’s laser beams arrive at the target within 30 ps of each other and are aligned to a pointing accuracy of less than 50 μm root-mean-square, while ensuring that a host of diagnostic instruments record data in a few billionths of a second. NIF’s automated control subsystems are built from a common object-oriented software framework that distributes the software across the computer network and achieves interoperation between different software languages and target architectures. A large suite of business and scientific software tools supports experimental planning, experimental setup, facility configuration, and post-shot analysis. Standard business services using open-source software, commercial workflow tools, and database and messaging technologies have been developed. An information technology infrastructure consisting of servers, network devices, and storage provides the foundation for these systems. Thus, this work is an overview of the control and information systems used to support a wide variety of experiments during the National Ignition Campaign.

  18. Intricacies of modern supercomputing illustrated with recent advances in simulations of strongly correlated electron systems

    NASA Astrophysics Data System (ADS)

    Schulthess, Thomas C.

    2013-03-01

    The continued thousand-fold improvement in sustained application performance per decade on modern supercomputers keeps opening new opportunities for scientific simulations. But supercomputers have become very complex machines, built with thousands or tens of thousands of complex nodes consisting of multiple CPU cores or, most recently, a combination of CPU and GPU processors. Efficient simulations on such high-end computing systems require tailored algorithms that optimally map numerical methods to particular architectures. These intricacies will be illustrated with simulations of strongly correlated electron systems, where the development of quantum cluster methods, Monte Carlo techniques, as well as their optimal implementation by means of algorithms with improved data locality and high arithmetic density have gone hand in hand with evolving computer architectures. The present work would not have been possible without continued access to computing resources at the National Center for Computational Science of Oak Ridge National Laboratory, which is funded by the Facilities Division of the Office of Advanced Scientific Computing Research, and the Swiss National Supercomputing Center (CSCS) that is funded by ETH Zurich.

  19. Quantum-assisted biomolecular modelling.

    PubMed

    Harris, Sarah A; Kendon, Vivien M

    2010-08-13

    Our understanding of the physics of biological molecules, such as proteins and DNA, is limited because the approximations we usually apply to model inert materials are not, in general, applicable to soft, chemically inhomogeneous systems. The configurational complexity of biomolecules means the entropic contribution to the free energy is a significant factor in their behaviour, requiring detailed dynamical calculations to fully evaluate. Computer simulations capable of taking all interatomic interactions into account are therefore vital. However, even with the best current supercomputing facilities, we are unable to capture enough of the most interesting aspects of their behaviour to properly understand how they work. This limits our ability to design new molecules, to treat diseases, for example. Progress in biomolecular simulation depends crucially on increasing the computing power available. Faster classical computers are in the pipeline, but these provide only incremental improvements. Quantum computing offers the possibility of performing huge numbers of calculations in parallel, when it becomes available. We discuss the current open questions in biomolecular simulation, how these might be addressed using quantum computation and speculate on the future importance of quantum-assisted biomolecular modelling.

  20. Development and applications of nondestructive evaluation at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Whitaker, Ann F.

    1990-01-01

    A brief description of facility design and equipment, facility usage, and typical investigations are presented for the following: Surface Inspection Facility; Advanced Computer Tomography Inspection Station (ACTIS); NDE Data Evaluation Facility; Thermographic Test Development Facility; Radiographic Test Facility; Realtime Radiographic Test Facility; Eddy Current Research Facility; Acoustic Emission Monitoring System; Advanced Ultrasonic Test Station (AUTS); Ultrasonic Test Facility; and Computer Controlled Scanning (CONSCAN) System.

  1. Process control charts in infection prevention: Make it simple to make it happen.

    PubMed

    Wiemken, Timothy L; Furmanek, Stephen P; Carrico, Ruth M; Mattingly, William A; Persaud, Annuradha K; Guinn, Brian E; Kelley, Robert R; Ramirez, Julio A

    2017-03-01

    Quality improvement is central to Infection Prevention and Control (IPC) programs. Challenges may occur when applying quality improvement methodologies like process control charts, often due to the limited exposure of typical IPs. Because of this, our team created an open-source database with a process control chart generator for IPC programs. The objectives of this report are to outline the development of the application and demonstrate application using simulated data. We used Research Electronic Data Capture (REDCap Consortium, Vanderbilt University, Nashville, TN), R (R Foundation for Statistical Computing, Vienna, Austria), and R Studio Shiny (R Foundation for Statistical Computing) to create an open source data collection system with automated process control chart generation. We used simulated data to test and visualize both in-control and out-of-control processes for commonly used metrics in IPC programs. The R code for implementing the control charts and Shiny application can be found on our Web site (https://github.com/ul-research-support/spcapp). Screen captures of the workflow and simulated data indicating both common cause and special cause variation are provided. Process control charts can be easily developed based on individual facility needs using freely available software. Through providing our work free to all interested parties, we hope that others will be able to harness the power and ease of use of the application for improving the quality of care and patient safety in their facilities. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
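
    As a rough illustration of the arithmetic such charts automate, the sketch below computes three-sigma u-chart limits for simulated infection rates in plain Python; it is not the authors' REDCap/R/Shiny implementation, and all counts and denominators are made up.

    ```python
    # Sketch of the arithmetic behind a u-chart (infections per 1,000
    # device-days). Counts and denominators are simulated values.
    import numpy as np

    infections  = np.array([3, 1, 4, 2, 0, 5, 2, 9, 1, 2])      # events per month
    device_days = np.array([410, 395, 430, 400, 385, 420,
                            415, 405, 390, 425], dtype=float)

    u = infections / device_days                    # observed monthly rates
    u_bar = infections.sum() / device_days.sum()    # centre line
    ucl = u_bar + 3 * np.sqrt(u_bar / device_days)  # upper control limit
    lcl = np.clip(u_bar - 3 * np.sqrt(u_bar / device_days), 0, None)

    for month, (rate, hi, lo) in enumerate(zip(u, ucl, lcl), start=1):
        flag = "special cause" if (rate > hi or rate < lo) else "in control"
        print(f"month {month:2d}: {1000 * rate:6.2f} per 1000 device-days  {flag}")
    ```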

  2. Open Space, Open Education, and Pupil Performance.

    ERIC Educational Resources Information Center

    Lukasevich, Ann; Gray, Roland F.

    1978-01-01

    Explores the relationship between instructional style (open and non-open programs), architectural style (open and non-open facilities) and selected cognitive and affective outcomes of third grade pupils. (CM)

  3. The Legnaro-Padova distributed Tier-2: challenges and results

    NASA Astrophysics Data System (ADS)

    Badoer, Simone; Biasotto, Massimo; Costa, Fulvia; Crescente, Alberto; Fantinel, Sergio; Ferrari, Roberto; Gulmini, Michele; Maron, Gaetano; Michelotto, Michele; Sgaravatto, Massimo; Toniolo, Nicola

    2014-06-01

    The Legnaro-Padova Tier-2 is a computing facility serving the ALICE and CMS LHC experiments. It also supports other High Energy Physics experiments and other virtual organizations of different disciplines, which can opportunistically harness idle resources if available. The unique characteristic of this Tier-2 is its topology: the computational resources are spread across two different sites, about 15 km apart: the INFN Legnaro National Laboratories and the INFN Padova unit, connected through a 10 Gbps network link (it will soon be updated to 20 Gbps). Nevertheless these resources are seamlessly integrated and are exposed as a single computing facility. Despite this intrinsic complexity, the Legnaro-Padova Tier-2 ranks among the best Grid sites in terms of reliability and availability. The Tier-2 comprises about 190 worker nodes, providing about 26000 HS06 in total. Such computing nodes are managed by the LSF local resource management system, and are accessible using a Grid-based interface implemented through multiple CREAM CE front-ends. dCache, xrootd and Lustre are the storage systems in use at the Tier-2: about 1.5 PB of disk space is available to users in total, through multiple access protocols. A 10 Gbps network link, planned to be doubled in the next months, connects the Tier-2 to the WAN. This link is used for the LHC Open Network Environment (LHCONE) and for other general purpose traffic. In this paper we discuss the experiences at the Legnaro-Padova Tier-2: the problems that had to be addressed, the lessons learned, the implementation choices. We also present the tools used for the daily management operations. These include DOCET, a Java-based webtool designed, implemented and maintained at the Legnaro-Padova Tier-2, and also deployed at other sites, such as the Italian LHC T1. DOCET provides a uniform interface to manage all the information about the physical resources of a computing center. It is also used as a documentation repository available to the Tier-2 operations team. Finally, we discuss the foreseen developments of the existing infrastructure. This includes in particular the evolution from a Grid-based resource towards a Cloud-based computing facility.

  4. West end view, "Boat Shop" open doorway, Facility 7 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    West end view, "Boat Shop" - open doorway, Facility 7 connected on the left, view facing east-southeast - U.S. Naval Base, Pearl Harbor, Boat Shop, Seventh Street near Avenue E, Pearl City, Honolulu County, HI

  5. FACILITY 89. INTERIOR OF LIVING ROOM, TAKEN THROUGH BEVELED OPENING. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    FACILITY 89. INTERIOR OF LIVING ROOM, TAKEN THROUGH BEVELED OPENING. VIEW FACING NORTH. - U.S. Naval Base, Pearl Harbor, Naval Housing Area Makalapa, Junior Officers' Quarters Type K, Makin Place, & Halawa, Makalapa, & Midway Drives, Pearl City, Honolulu County, HI

  6. Heterogeneous compute in computer vision: OpenCL in OpenCV

    NASA Astrophysics Data System (ADS)

    Gasparakis, Harris

    2014-02-01

    We explore the relevance of Heterogeneous System Architecture (HSA) in Computer Vision, both as a long term vision, and as a near term emerging reality via the recently ratified OpenCL 2.0 Khronos standard. After a brief review of OpenCL 1.2 and 2.0, including HSA features such as Shared Virtual Memory (SVM) and platform atomics, we identify what genres of Computer Vision workloads stand to benefit by leveraging those features, and we suggest a new mental framework that replaces GPU compute with hybrid HSA APU compute. As a case in point, we discuss, in some detail, popular object recognition algorithms (part-based models), emphasizing the interplay and concurrent collaboration between the GPU and CPU. We conclude by describing how OpenCL has been incorporated in OpenCV, a popular open source computer vision library, emphasizing recent work on the Transparent API, to appear in OpenCV 3.0, which unifies the native CPU and OpenCL execution paths under a single API, allowing the same code to execute either on the CPU or on an OpenCL-enabled device, without even recompiling.
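
    As a brief illustration of the Transparent API described above, the sketch below wraps an image in cv2.UMat so the same OpenCV calls can run on an OpenCL device when one is present; it assumes OpenCV 3.0+ Python bindings, and the file name is a placeholder.

    ```python
    # Transparent API sketch: the same cv2 calls run on an OpenCL device
    # (if available) when the input is a cv2.UMat, and on the CPU otherwise.
    import cv2

    print("OpenCL available:", cv2.ocl.haveOpenCL())
    cv2.ocl.setUseOpenCL(True)

    img = cv2.imread("frame.png")      # placeholder file; a NumPy array on the CPU
    u_img = cv2.UMat(img)              # may be backed by OpenCL device memory

    gray  = cv2.cvtColor(u_img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)   # identical API on either execution path

    result = edges.get()               # download back to a NumPy array
    print(result.shape)
    ```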

  7. Central Computational Facility CCF communications subsystem options

    NASA Technical Reports Server (NTRS)

    Hennigan, K. B.

    1979-01-01

    A MITRE study which investigated the communication options available to support both the remaining Central Computational Facility (CCF) computer systems and the proposed U1108 replacements is presented. The facilities utilized to link the remote user terminals with the CCF were analyzed and guidelines to provide more efficient communications were established.

  8. Academic Computing Facilities and Services in Higher Education--A Survey.

    ERIC Educational Resources Information Center

    Warlick, Charles H.

    1986-01-01

    Presents statistics about academic computing facilities based on data collected over the past six years from 1,753 institutions in the United States, Canada, Mexico, and Puerto Rico for the "Directory of Computing Facilities in Higher Education." Organizational, functional, and financial characteristics are examined as well as types of…

  9. The grand challenge of managing the petascale facility.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aiken, R. J.; Mathematics and Computer Science

    2007-02-28

    This report is the result of a study of networks and how they may need to evolve to support petascale leadership computing and science. As Dr. Ray Orbach, director of the Department of Energy's Office of Science, says in the spring 2006 issue of SciDAC Review, 'One remarkable example of growth in unexpected directions has been in high-end computation'. In the same article Dr. Michael Strayer states, 'Moore's law suggests that before the end of the next cycle of SciDAC, we shall see petaflop computers'. Given the Office of Science's strong leadership and support for petascale computing and facilities, we should expect to see petaflop computers in operation in support of science before the end of the decade, and DOE/SC Advanced Scientific Computing Research programs are focused on making this a reality. This study took its lead from this strong focus on petascale computing and the networks required to support such facilities, but it grew to include almost all aspects of the DOE/SC petascale computational and experimental science facilities, all of which will face daunting challenges in managing and analyzing the voluminous amounts of data expected. In addition, trends indicate the increased coupling of unique experimental facilities with computational facilities, along with the integration of multidisciplinary datasets and high-end computing with data-intensive computing; and we can expect these trends to continue at the petascale level and beyond. Coupled with recent technology trends, they clearly indicate the need for including capability petascale storage, networks, and experiments, as well as collaboration tools and programming environments, as integral components of the Office of Science's petascale capability metafacility. The objective of this report is to recommend a new cross-cutting program to support the management of petascale science and infrastructure. The appendices of the report document current and projected DOE computation facilities, science trends, and technology trends, whose combined impact can affect the manageability and stewardship of DOE's petascale facilities. This report is not meant to be all-inclusive. Rather, the facilities, science projects, and research topics presented are to be considered examples to clarify a point.

  10. Open Access: "à consommer avec modération"

    NASA Astrophysics Data System (ADS)

    Mahoney, Terence J.

    There is increasing pressure on academics and researchers to publish the results of their investigations in open access journals. Indeed, some funding agencies make open access publishing a basic requirement for funding projects, and the EU is considering taking firm steps in this direction. I argue that astronomy is already one of the most open of disciplines, and that access - both to the general public (in terms of a significantly growing outreach effort) and to developing countries (through efforts to provide computing facilities and Internet access, as well as schemes to provide research centres of limited resources with journals) - is becoming more and more open in a genuine and lasting way. I further argue that sudden switches to more formal kinds of open access schemes could cause irreparable harm to astronomical publishing. Several of the most prestigious astronomical research journals (e.g. MN, ApJ, AJ) have for more than a century met the publishing needs of the research community and continue to adapt successfully to changing demands on the part of that community. The after-effects of abrupt changes in publishing practices - implemented through primarily political concerns - are hard to predict and could be severely damaging. I conclude that open access, in its current acceptation, should be studied with great care and with sufficient time before any consideration is given to its implementation. If forced on the publishing and research communities, open access could well result in much more limited access to properly vetted research results.

  11. 4 CFR 81.8 - Public reading facility.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ....8 Public reading facility. GAO maintains a public reading facility in the Law Library at the Government Accountability Office Building, 441 G Street, NW., Washington, DC. The facility shall be open to...

  12. 4 CFR 81.8 - Public reading facility.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ....8 Public reading facility. GAO maintains a public reading facility in the Law Library at the Government Accountability Office Building, 441 G Street, NW., Washington, DC. The facility shall be open to...

  13. Open control/display system for a telerobotics work station

    NASA Technical Reports Server (NTRS)

    Keslowitz, Saul

    1987-01-01

    A working Advanced Space Cockpit was developed that integrated advanced control and display devices into a state-of-the-art multimicroprocessor hardware configuration, using window graphics and running under an object-oriented, multitasking real-time operating system environment. This Open Control/Display System supports the idea that the operator should be able to interactively monitor, select, control, and display information about many payloads aboard the Space Station using sets of I/O devices with a single, software-reconfigurable workstation. This is done while maintaining system consistency, yet the system is completely open to accept new additions and advances in hardware and software. The Advanced Space Cockpit, linked to Grumman's Hybrid Computing Facility and Large Amplitude Space Simulator (LASS), was used to test the Open Control/Display System via full-scale simulation of the following tasks: telerobotic truss assembly, RCS and thermal bus servicing, CMG changeout, RMS constrained motion and space constructible radiator assembly, HPA coordinated control, and OMV docking and tumbling satellite retrieval. The proposed man-machine interface standard discussed has evolved through many iterations of the tasks, and is based on feedback from NASA and Air Force personnel who performed those tasks in the LASS.

  14. Shell stability analysis in a computer aided engineering (CAE) environment

    NASA Technical Reports Server (NTRS)

    Arbocz, J.; Hol, J. M. A. M.

    1993-01-01

    The development of 'DISDECO', the Delft Interactive Shell DEsign COde, is described. The purpose of this project is to make the accumulated theoretical, numerical and practical knowledge of the last 25 years or so readily accessible to users interested in the analysis of buckling sensitive structures. With this open-ended, hierarchical, interactive computer code the user can successively access, from his workstation, programs of increasing complexity. The computational modules currently operational in DISDECO provide the prospective user with facilities to calculate the critical buckling loads of stiffened anisotropic shells under combined loading, to investigate the effects the various types of boundary conditions will have on the critical load, and to get a complete picture of the degrading effects the different shapes of possible initial imperfections might cause, all in one interactive session. Once a design is finalized, its collapse load can be verified by running a large refined model remotely from behind the workstation with one of the current generation 2-dimensional codes, with advanced capabilities to handle both geometric and material nonlinearities.

  15. The HEP Software and Computing Knowledge Base

    NASA Astrophysics Data System (ADS)

    Wenaus, T.

    2017-10-01

    HEP software today is a rich and diverse domain in itself and exists within the mushrooming world of open source software. As HEP software developers and users we can be more productive and effective if our work and our choices are informed by a good knowledge of what others in our community have created or found useful. The HEP Software and Computing Knowledge Base, hepsoftware.org, was created to facilitate this by serving as a collection point and information exchange on software projects and products, services, training, computing facilities, and relating them to the projects, experiments, organizations and science domains that offer them or use them. It was created as a contribution to the HEP Software Foundation, for which a HEP S&C knowledge base was a much requested early deliverable. This contribution will motivate and describe the system, what it offers, its content and contributions both existing and needed, and its implementation (node.js based web service and javascript client app) which has emphasized ease of use for both users and contributors.

  16. Open-Path Hydrocarbon Laser Sensor for Oil and Gas Facility Monitoring

    EPA Science Inventory

    This poster reports on an experimental prototype open-path laser absorption sensor for measurement of unspeciated hydrocarbons for oil and gas production facility fence-line monitoring. Such measurements may be useful to meet certain state regulations, and enable advanced leak d...

  17. Specialized computer architectures for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relative high cost of performing these computations on commercially available general purpose computers, a cost high with respect to dollar expenditure and/or elapsed time. Today's computing technology will support a program designed to create specialized computing facilities to be dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.

  18. Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning

    ERIC Educational Resources Information Center

    Siu, Kin Wai Michael; Lam, Mei Seung

    2012-01-01

    Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…

  19. Data Recording Room in the 10-by 10-Foot Supersonic Wind Tunnel

    NASA Image and Video Library

    1973-04-21

    The test data recording equipment located in the office building of the 10-by 10-Foot Supersonic Wind Tunnel at the NASA Lewis Research Center. The data system was the state of the art when the facility began operating in 1955 and was upgraded over time. NASA engineers used solenoid valves to measure pressures from different locations within the test section. Up to 48 measurements could be fed into a single transducer. The 10-by 10 data recorders could handle up to 200 data channels at once. The Central Automatic Digital Data Encoder (CADDE) converted this direct current raw data from the test section into digital format on magnetic tape. The digital information was sent to the Lewis Central Computer Facility for additional processing. It could also be displayed in the control room via strip charts or oscillographs. The 16-by 56-foot long ERA 1103 UNIVAC mainframe computer processed most of the digital data. The paper tape with the raw data was fed into the ERA 1103 which performed the needed calculations. The information was then sent back to the control room. There was a lag of several minutes before the computed information was available, but it was exponentially faster than the hand calculations performed by the female computers. The 10- by 10-foot tunnel, which had its official opening in May 1956, was built under the Congressional Unitary Plan Act which coordinated wind tunnel construction at the NACA, Air Force, industry, and universities. The 10- by 10 was the largest of the three NACA tunnels built under the act.

  20. Open-Loop HIRF Experiments Performed on a Fault Tolerant Flight Control Computer

    NASA Technical Reports Server (NTRS)

    Koppen, Daniel M.

    1997-01-01

    During the third quarter of 1996, the Closed-Loop Systems Laboratory was established at the NASA Langley Research Center (LaRC) to study the effects of High Intensity Radiated Fields on complex avionic systems and control system components. This new facility provided a link and expanded upon the existing capabilities of the High Intensity Radiated Fields Laboratory at LaRC that were constructed and certified during 1995-96. The scope of the Closed-Loop Systems Laboratory is to place highly integrated avionics instrumentation into a high intensity radiated field environment, interface the avionics to a real-time flight simulation that incorporates aircraft dynamics, engines, sensors, actuators and atmospheric turbulence, and collect, analyze, and model aircraft performance. This paper describes the layout and functionality of the Closed-Loop Systems Laboratory, and the open-loop calibration experiments that led up to the commencement of closed-loop real-time flight experiments.

  1. The hills are alive: Earth surface dynamics in the University of Arizona Landscape Evolution Observatory

    NASA Astrophysics Data System (ADS)

    DeLong, S.; Troch, P. A.; Barron-Gafford, G. A.; Huxman, T. E.; Pelletier, J. D.; Dontsova, K.; Niu, G.; Chorover, J.; Zeng, X.

    2012-12-01

    To meet the challenge of predicting landscape-scale changes in Earth system behavior, the University of Arizona has designed and constructed a new large-scale and community-oriented scientific facility - the Landscape Evolution Observatory (LEO). The primary scientific objectives are to quantify interactions among hydrologic partitioning, geochemical weathering, ecology, microbiology, atmospheric processes, and geomorphic change associated with incipient hillslope development. LEO consists of three identical, sloping, 333 m2 convergent landscapes inside a 5,000 m2 environmentally controlled facility. These engineered landscapes contain 1 meter of basaltic tephra ground to a homogeneous loamy sand and contain a spatially dense sensor and sampler network capable of resolving meter-scale lateral heterogeneity and sub-meter scale vertical heterogeneity in moisture, energy and carbon states and fluxes. Each ~1000 metric ton landscape has load cells embedded into the structure to measure changes in total system mass with 0.05% full-scale repeatability (equivalent to less than 1 cm of precipitation), to facilitate better quantification of evapotranspiration. Each landscape has an engineered rain system that allows application of precipitation at rates between 3 and 45 mm/hr. These landscapes are being studied in replicate as "bare soil" for an initial period of several years. After this initial phase, heat- and drought-tolerant vascular plant communities will be introduced. Introduction of vascular plants is expected to change how water, carbon, and energy cycle through the landscapes, with potentially dramatic effects on co-evolution of the physical and biological systems. LEO also provides a physical comparison to computer models that are designed to predict interactions among hydrological, geochemical, atmospheric, ecological and geomorphic processes in changing climates. These computer models will be improved by comparing their predictions to physical measurements made in LEO. The main focus of our iterative modeling and measurement discovery cycle is to use rapid data assimilation to facilitate validation of newly coupled open-source Earth systems models. LEO will be a community resource for Earth system science research, education, and outreach. The LEO project operational philosophy includes 1) open and real-time availability of sensor network data, 2) a framework for community collaboration and facility access that includes integration of new or comparative measurement capabilities into existing facility cyberinfrastructure, 3) community-guided science planning and 4) development of novel education and outreach programs. [Figure: artistic rendering of the University of Arizona Landscape Evolution Observatory.]

  2. Elastic Extension of a CMS Computing Centre Resources on External Clouds

    NASA Astrophysics Data System (ADS)

    Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.

    2016-10-01

    After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may, however, generate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS - along the lines followed by other LHC experiments - is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the “Cloud Bursting” of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time they serve as an extension of the farm for the local usage. The amount of allocated resources can thus be elastically adjusted to cope with the needs of the CMS experiment and local users. Moreover, direct access to and integration of OpenStack resources with the CMS workload management system is explored. In this paper we present this approach, we report on the performance of the on-demand allocated resources, and we discuss the lessons learned and the next steps.
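
    The sketch below is not the CMS LSF/Grid integration itself; it only illustrates, with the openstacksdk Python client, how spare vCPU capacity on an external OpenStack infrastructure might be queried before deciding how many extra worker nodes to instantiate. The cloud name, the per-worker size, and the assumption that hypervisor details are visible to the tenant are all hypothetical.

    ```python
    # Hypothetical capacity check before "cloud bursting": count spare vCPUs
    # on an OpenStack infrastructure and estimate how many worker nodes fit.
    import openstack

    VCPUS_PER_WORKER = 8                      # assumed worker-node flavor size
    conn = openstack.connect(cloud="external-partner")

    free_vcpus = 0
    for hv in conn.compute.hypervisors(details=True):
        free_vcpus += max(hv.vcpus - hv.vcpus_used, 0)

    extra_workers = free_vcpus // VCPUS_PER_WORKER
    print(f"{free_vcpus} spare vCPUs -> room for {extra_workers} extra worker nodes")
    ```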

  3. MULTI-POLLUTANT CONCENTRATION MEASUREMENTS AROUND A CONCENTRATED SWINE PRODUCTION FACILITY USING OPEN-PATH FTIR SPECTROMETRY

    EPA Science Inventory

    Open-path Fourier transform infrared (OP/FTIR) spectrometry was used to measure the concentrations of ammonia, methane, and other atmospheric gasses around an integrated industrial swine production facility in eastern North Carolina. Several single-path measurements were made ove...

  4. Comparison of DOE and NIRMA approaches to configuration management programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, E.Y.; Kulzick, K.C.

    One of the major management programs used for commercial, laboratory, and defense nuclear facilities is configuration management. The safe and efficient operation of a nuclear facility requires constant vigilance in maintaining the facility's design basis with its as-built condition. Numerous events have occurred that can be attributed (either directly or indirectly) to the extent to which configuration management principles have been applied. The nuclear industry, as a whole, has been addressing this management philosophy with efforts taken on by its constituent professional organizations. The purpose of this paper is to compare and contrast the implementation plans for enhancing a configuration management program as outlined in the U.S. Department of Energy's (DOE's) DOE-STD-1073-93, "Guide for Operational Configuration Management Program," with the following guidelines developed by the Nuclear Information and Records Management Association (NIRMA): 1. PP02-1994, "Position Paper on Configuration Management"; 2. PP03-1992, "Position Paper for Implementing a Configuration Management Enhancement Program for a Nuclear Facility"; 3. PP04-1994, "Position Paper for Configuration Management Information Systems."

  5. High-Performance Computing Data Center | Energy Systems Integration Facility | NREL

    Science.gov Websites

    The Energy Systems Integration Facility's High-Performance Computing Data Center is home to Peregrine, the largest high-performance computing system in the world exclusively dedicated to advancing

  6. Flying a College on the Computer. The Use of the Computer in Planning Buildings.

    ERIC Educational Resources Information Center

    Saint Louis Community Coll., MO.

    Upon establishment of the St. Louis Junior College District, it was decided to make use of computer simulation facilities of a nearby aero-space contractor to develop a master schedule for facility planning purposes. Projected enrollments and course offerings were programmed with idealized student-teacher ratios to project facility needs. In…

  7. Translations on USSR Science and Technology, Biomedical and Behavioral Sciences, Number 46

    DTIC Science & Technology

    1978-09-25

    AND BEHAVIORAL SCIENCES No. 46 CONTENTS PAGE AGROTECHNOLOGY Open Lot Facility for Cattle Fattening (M.G. Karpov, et al.; ZHIVOTNOVODSTVO, No 6...636.22/.28.OQk.522 OPEN LOT FACILITY FOR CATTLE FATTENING Moscow ZHIVOTNOVODSTVO in Russian No 6, 1978 pp 55-59 [Article by Moskalevskiy Sovkhoz...Institute of Livestock Raising; and Moskalevskiy Sovkhoz Chief Zootechnician Z. A. Zhanburshinov: "Experience of Fattening Cattle on Open Lot on

  8. Opportunities for Open Automated Demand Response in Wastewater Treatment Facilities in California - Phase II Report. San Luis Rey Wastewater Treatment Plant Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Lisa; Lekov, Alex; McKane, Aimee

    2010-08-20

    This case study enhances the understanding of open automated demand response opportunities in municipal wastewater treatment facilities. The report summarizes the findings of a 100 day submetering project at the San Luis Rey Wastewater Treatment Plant, a municipal wastewater treatment facility in Oceanside, California. The report reveals that key energy-intensive equipment such as pumps and centrifuges can be targeted for large load reductions. Demand response tests on the effluent pumps resulted in a 300 kW load reduction and tests on centrifuges resulted in a 40 kW load reduction. Although tests on the facility's blowers resulted in peak period load reductions of 78 kW, sharp, short-lived increases in the turbidity of the wastewater effluent were experienced within 24 hours of the test. The results of these tests, which were conducted on blowers without variable speed drive capability, would not be acceptable and warrant further study. This study finds that wastewater treatment facilities have significant open automated demand response potential. However, limiting factors to implementing demand response are the reaction of effluent turbidity to reduced aeration load, along with the cogeneration capabilities of municipal facilities, including existing power purchase agreements and utility receptiveness to purchasing electricity from cogeneration facilities.

  9. Method and computer program product for maintenance and modernization backlogging

    DOEpatents

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
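
    The relationship stated in the abstract can be transcribed directly as a one-line function; the sketch below does only that, with invented example figures.

    ```python
    # Direct transcription of the stated relationship: future facility
    # conditions = maintenance cost + modernization factor + backlog factor
    # (all for the same time period). Example figures are invented.
    def future_facility_conditions(maintenance_cost: float,
                                   modernization_factor: float,
                                   backlog_factor: float) -> float:
        return maintenance_cost + modernization_factor + backlog_factor

    print(future_facility_conditions(1.2e6, 3.5e5, 8.0e5))
    ```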

  10. Advancing Capabilities for Understanding the Earth System Through Intelligent Systems, the NSF Perspective

    NASA Astrophysics Data System (ADS)

    Gil, Y.; Zanzerkia, E. E.; Munoz-Avila, H.

    2015-12-01

    The National Science Foundation (NSF) Directorate for Geosciences (GEO) and Directorate for Computer and Information Science and Engineering (CISE) acknowledge the significant scientific challenges involved in understanding the fundamental processes of the Earth system, within the atmospheric and geospace, Earth, ocean and polar sciences, and across those boundaries. A broad view of the opportunities and directions for GEO is described in the report "Dynamic Earth: GEO imperative and Frontiers 2015-2020." Many of the aspects of geosciences research, highlighted both in this document and other community grand challenges, pose novel problems for researchers in intelligent systems. Geosciences research will require solutions for data-intensive science, advanced computational capabilities, and transformative concepts for visualizing, using, analyzing and understanding geo phenomena and data. Opportunities for the scientific community to engage in addressing these challenges are available and being developed through NSF's portfolio of investments and activities. The NSF-wide initiative, Cyberinfrastructure Framework for 21st Century Science and Engineering (CIF21), looks to accelerate research and education through new capabilities in data, computation, software and other aspects of cyberinfrastructure. EarthCube, a joint program between GEO and the Advanced Cyberinfrastructure Division, aims to create a well-connected and facile environment to share data and knowledge in an open, transparent, and inclusive manner, thus accelerating our ability to understand and predict the Earth system. EarthCube's mission opens an opportunity for collaborative research on novel information systems enhancing and supporting geosciences research efforts. NSF encourages true, collaborative partnerships between scientists in computer sciences and the geosciences to meet these challenges.

  11. Influence of computational fluid dynamics on experimental aerospace facilities: A fifteen year projection

    NASA Technical Reports Server (NTRS)

    1983-01-01

    An assessment was made of the impact of developments in computational fluid dynamics (CFD) on the traditional role of aerospace ground test facilities over the next fifteen years. With the improvements in CFD and the more powerful scientific computers projected over this period, it is expected that the flow over a complete aircraft will be computable at a unit cost three orders of magnitude lower than is presently possible. Over the same period, improvements in ground test facilities will progress through the application of computational techniques, including CFD, to data acquisition, facility operational efficiency, and simulation of the flight envelope; however, no dramatic change in unit cost is expected as greater efficiency will be countered by higher energy and labor costs.

  12. Software for Real-Time Analysis of Subsonic Test Shot Accuracy

    DTIC Science & Technology

    2014-03-01

    used the C++ programming language, the Open Source Computer Vision (OpenCV®) software library, and Microsoft Windows® Application Programming...video for comparison through OpenCV image analysis tools. Based on the comparison, the software then computed the coordinates of each shot relative to...DWB researchers wanted to use the Open Source Computer Vision (OpenCV) software library for capturing and analyzing frames of video. OpenCV contains

  13. Stereo Vision Inside Tire

    DTIC Science & Technology

    2015-08-21

    using the Open Computer Vision (OpenCV) libraries [6] for computer vision and the Qt library [7] for the user interface. The software has the...depth. The software application calibrates the cameras using the plane based calibration model from the OpenCV calib3D module and allows the...6] OpenCV. 2015. OpenCV Open Source Computer Vision. [Online]. Available at: opencv.org [Accessed]: 09/01/2015. [7] Qt. 2015. Qt Project home

  14. 77 FR 24646 - Open Access and Priority Rights on Interconnection Facilities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-25

    ... multiple generation facilities to transmit power from the generation facility to the integrated... power flows toward the network grid, with no electrical loads between the generation facilities and the... generator expansion plans with milestones for construction of generation facilities and can demonstrate that...

  15. Monitoring of IaaS and scientific applications on the Cloud using the Elasticsearch ecosystem

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.

    2015-05-01

    The private Cloud at the Torino INFN computing centre offers IaaS services to different scientific computing applications. The infrastructure is managed with the OpenNebula cloud controller. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BES-III collaboration, plus an increasing number of other small tenants. Besides keeping track of the usage, the automation of dynamic allocation of resources to tenants requires detailed monitoring and accounting of the resource usage. As a first investigation towards this, we set up a monitoring system to inspect the site activities both in terms of IaaS and applications running on the hosted virtual instances. For this purpose we used the Elasticsearch, Logstash and Kibana stack. In the current implementation, the heterogeneous accounting information is fed to different MySQL databases and sent to Elasticsearch via a custom Logstash plugin. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS level information gathered through the API is sent to the MySQL database through an ad-hoc developed RESTful web service, which is also used for other accounting purposes. Concerning the application level, we used the Root plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BES-III virtual instances used to be monitored with Zabbix; as a proof of concept, we also retrieve the information contained in the Zabbix database. Each of these three cases is indexed separately in Elasticsearch. We are now starting to consider dismissing the intermediate level provided by the SQL database and evaluating a NoSQL option as a unique central database for all the monitoring information. We set up a set of Kibana dashboards with pre-defined queries in order to monitor the relevant information in each case. In this way we have achieved a uniform monitoring interface for both the IaaS and the scientific applications, mostly leveraging off-the-shelf tools.
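
    As a minimal sketch of the ingestion step (standing in for the custom Logstash plugin described above), the snippet below indexes one IaaS accounting record with the elasticsearch Python client so it could be charted in Kibana; the host, index name, and field names are invented, and the v8-style document= argument is assumed.

    ```python
    # Hypothetical example: push one IaaS accounting record into Elasticsearch.
    # Host, index, and field names are invented; the real site fed data via a
    # custom Logstash plugin rather than a script like this.
    from datetime import datetime, timezone
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    doc = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tenant": "alice-tier2",
        "running_vms": 42,
        "vcpus_used": 336,
        "wall_hours": 1250.5,
    }
    resp = es.index(index="iaas-accounting", document=doc)
    print(resp["result"])
    ```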

  16. Putting tools in the toolbox: Development of a free, open-source toolbox for quantitative image analysis of porous media.

    NASA Astrophysics Data System (ADS)

    Iltis, G.; Caswell, T. A.; Dill, E.; Wilkins, S.; Lee, W. K.

    2014-12-01

    X-ray tomographic imaging of porous media has proven to be a valuable tool for investigating and characterizing the physical structure and state of both natural and synthetic porous materials, including glass bead packs, ceramics, soil and rock. Given that most synchrotron facilities have user programs which grant academic researchers access to facilities and x-ray imaging equipment free of charge, a key limitation or hindrance for small research groups interested in conducting x-ray imaging experiments is the financial cost associated with post-experiment data analysis. While the cost of high performance computing hardware continues to decrease, expenses associated with licensing commercial software packages for quantitative image analysis continue to increase, with current prices being as high as $24,000 USD, for a single user license. As construction of the Nation's newest synchrotron accelerator nears completion, a significant effort is being made here at the National Synchrotron Light Source II (NSLS-II), Brookhaven National Laboratory (BNL), to provide an open-source, experiment-to-publication toolbox that reduces the financial and technical 'activation energy' required for performing sophisticated quantitative analysis of multidimensional porous media data sets, collected using cutting-edge x-ray imaging techniques. Implementation focuses on leveraging existing open-source projects and developing additional tools for quantitative analysis. We will present an overview of the software suite that is in development here at BNL including major design decisions, a demonstration of several test cases illustrating currently available quantitative tools for analysis and characterization of multidimensional porous media image data sets and plans for their future development.
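
    As a small example of the kind of quantitative measure such a toolbox provides, the sketch below computes the porosity of a segmented (binary) tomography volume using only NumPy; the volume is synthetic and the snippet is illustrative, not part of the BNL toolbox.

    ```python
    # Porosity of a segmented tomography volume: fraction of void voxels.
    # The random volume below stands in for real segmented image data.
    import numpy as np

    rng = np.random.default_rng(0)
    solid = rng.random((64, 64, 64)) > 0.63   # True = solid voxel (synthetic)

    porosity = np.count_nonzero(~solid) / solid.size
    print(f"porosity = {porosity:.3f}")
    ```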

  17. Integration of XRootD into the cloud infrastructure for ALICE data analysis

    NASA Astrophysics Data System (ADS)

    Kompaniets, Mikhail; Shadura, Oksana; Svirin, Pavlo; Yurchenko, Volodymyr; Zarochentsev, Andrey

    2015-12-01

    Cloud technologies allow easy load balancing between different tasks and projects. From the viewpoint of data analysis in the ALICE experiment, the cloud makes it possible to deploy software using the CERN Virtual Machine (CernVM) and CernVM File System (CVMFS), to run different (including outdated) versions of software for long-term data preservation, and to dynamically allocate resources for different computing activities, e.g. grid site, ALICE Analysis Facility (AAF) and possible usage for local projects or other LHC experiments. We present a cloud solution for Tier-3 sites based on OpenStack and Ceph distributed storage with an integrated XRootD based storage element (SE). One of the key features of the solution is that Ceph is used both as a backend for the OpenStack Cinder Block Storage service and, at the same time, as a storage backend for XRootD, with redundancy and availability of the data ensured by the Ceph settings. For faster and easier OpenStack deployment, the Packstack solution, which is based on the Puppet configuration management system, was applied. Ceph installation and configuration operations are structured and converted to Puppet manifests describing node configurations and integrated into Packstack. This solution can be easily deployed, maintained and used even by small groups with limited computing resources and by small organizations, which usually lack IT support. The proposed infrastructure has been tested on two different clouds (SPbSU & BITP) and integrates successfully with the ALICE data analysis model.
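
    As an illustration of reading data back through an XRootD-based storage element from Python, the sketch below uses the XRootD Python bindings (XRootD.client); the endpoint and file path are placeholders, and the availability and exact version of these bindings at a given site is an assumption.

    ```python
    # Hypothetical read-back through an XRootD storage element using the
    # XRootD Python bindings. Endpoint and path are placeholders.
    from XRootD import client
    from XRootD.client.flags import OpenFlags

    fs = client.FileSystem("root://se.example.org:1094")
    status, info = fs.stat("/alice/sim/run123/AO2D.root")
    if status.ok:
        print("file size:", info.size)

    f = client.File()
    f.open("root://se.example.org:1094//alice/sim/run123/AO2D.root", OpenFlags.READ)
    status, data = f.read(offset=0, size=1024)   # first kilobyte
    f.close()
    ```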

  18. Construction of multi-functional open modulized Matlab simulation toolbox for imaging ladar system

    NASA Astrophysics Data System (ADS)

    Wu, Long; Zhao, Yuan; Tang, Meng; He, Jiang; Zhang, Yong

    2011-06-01

    Ladar system simulation uses computer simulation of ladar models to predict the performance of a ladar system. This paper reviews developments in laser imaging radar simulation, both domestic and overseas, and studies of computer simulation of ladar systems for different application requirements. The LadarSim and FOI-LadarSIM simulation facilities of Utah State University and the Swedish Defence Research Agency are introduced in detail. Domestic research on imaging ladar simulation is characterized by small simulation scale and non-unified designs and applications, and is mostly limited to simple functional simulation based on ladar ranging equations. A laser imaging radar simulation with an open and modularized structure is therefore proposed, with unified modules for the ladar system, laser emitter, atmosphere models, target models, signal receiver, parameter settings and system controller. A unified Matlab toolbox and standard control modules have been built with regulated function inputs and outputs and defined communication protocols between hardware modules. A simulation of an ICCD gain-modulated imaging ladar system observing a space shuttle was performed with the toolbox. The simulation results show that the models and parameter settings of the Matlab toolbox reproduce the actual detection process precisely. The unified control module and pre-defined parameter settings simplify the simulation of imaging ladar detection, the open structure allows the toolbox to be modified for specialized requirements, and the modularization gives the simulations flexibility.
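    To make the idea of unified modules with regulated inputs and outputs concrete, the sketch below chains illustrative emitter, atmosphere, target and receiver stages through a common dictionary interface. It is a schematic Python stand-in for the design concept, not the Matlab toolbox described in the record; all module models and parameter names are assumptions.

```python
# Schematic sketch of the modular simulation idea: each stage is a function with a
# regulated input/output (a dict of named signals), so modules can be swapped
# independently. Models and parameter names are illustrative assumptions only.
import numpy as np

def laser_emitter(params):
    t = np.linspace(0.0, 200e-9, 2000)                    # time axis, 200 ns
    pulse = np.exp(-((t - 20e-9) / 5e-9) ** 2)            # Gaussian pulse shape
    return {"t": t, "signal": params["peak_power"] * pulse}

def atmosphere(beam, params):
    attenuation = np.exp(-params["extinction"] * params["range_m"] / 1000.0)
    return {**beam, "signal": beam["signal"] * attenuation}

def target(beam, params):
    delay = 2.0 * params["range_m"] / 3.0e8               # round-trip delay
    shifted = np.interp(beam["t"] - delay, beam["t"], beam["signal"], left=0.0)
    return {**beam, "signal": shifted * params["reflectivity"]}

def receiver(echo, params):
    rng = np.random.default_rng(0)
    noise = params["noise_rms"] * rng.standard_normal(echo["signal"].size)
    return {**echo, "signal": echo["signal"] + noise}

# System controller: chain the modules with a single parameter set.
p = {"peak_power": 1.0e3, "extinction": 0.3, "range_m": 1500.0,
     "reflectivity": 0.1, "noise_rms": 1e-4}
out = receiver(target(atmosphere(laser_emitter(p), p), p), p)
print("peak echo sample index:", int(np.argmax(out["signal"])))
```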

  19. 75 FR 55297 - Further Inquiry Into Two Under-Developed Issues in the Open Internet Proceeding

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-10

    ... facilities as broadband Internet access service (commonly called ``managed'' or ``specialized'' services). The second is the application of open Internet rules to mobile wireless Internet access services... Framework for Broadband Access to the Internet Over Wireline Facilities et al., CC Docket Nos. 02-33, 01-337...

  20. Cost Implications of an Interim Storage Facility in the Waste Management System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarrell, Joshua J.; Joseph, III, Robert Anthony; Howard, Rob L

    2016-09-01

    This report provides an evaluation of the cost implications of incorporating a consolidated interim storage facility (ISF) into the waste management system (WMS). Specifically, the impacts of the timing of opening an ISF relative to opening a repository were analyzed to understand the potential effects on total system costs.

  1. The Generation Challenge Programme Platform: Semantic Standards and Workbench for Crop Science

    PubMed Central

    Bruskiewich, Richard; Senger, Martin; Davenport, Guy; Ruiz, Manuel; Rouard, Mathieu; Hazekamp, Tom; Takeya, Masaru; Doi, Koji; Satoh, Kouji; Costa, Marcos; Simon, Reinhard; Balaji, Jayashree; Akintunde, Akinnola; Mauleon, Ramil; Wanchana, Samart; Shah, Trushar; Anacleto, Mylah; Portugal, Arllet; Ulat, Victor Jun; Thongjuea, Supat; Braak, Kyle; Ritter, Sebastian; Dereeper, Alexis; Skofic, Milko; Rojas, Edwin; Martins, Natalia; Pappas, Georgios; Alamban, Ryan; Almodiel, Roque; Barboza, Lord Hendrix; Detras, Jeffrey; Manansala, Kevin; Mendoza, Michael Jonathan; Morales, Jeffrey; Peralta, Barry; Valerio, Rowena; Zhang, Yi; Gregorio, Sergio; Hermocilla, Joseph; Echavez, Michael; Yap, Jan Michael; Farmer, Andrew; Schiltz, Gary; Lee, Jennifer; Casstevens, Terry; Jaiswal, Pankaj; Meintjes, Ayton; Wilkinson, Mark; Good, Benjamin; Wagner, James; Morris, Jane; Marshall, David; Collins, Anthony; Kikuchi, Shoshi; Metz, Thomas; McLaren, Graham; van Hintum, Theo

    2008-01-01

    The Generation Challenge programme (GCP) is a global crop research consortium directed toward crop improvement through the application of comparative biology and genetic resources characterization to plant breeding. A key consortium research activity is the development of a GCP crop bioinformatics platform to support GCP research. This platform includes the following: (i) shared, public platform-independent domain models, ontology, and data formats to enable interoperability of data and analysis flows within the platform; (ii) web service and registry technologies to identify, share, and integrate information across diverse, globally dispersed data sources, as well as to access high-performance computational (HPC) facilities for computationally intensive, high-throughput analyses of project data; (iii) platform-specific middleware reference implementations of the domain model integrating a suite of public (largely open-access/-source) databases and software tools into a workbench to facilitate biodiversity analysis, comparative analysis of crop genomic data, and plant breeding decision making. PMID:18483570

  2. A convolutional neural network approach to calibrating the rotation axis for X-ray computed tomography.

    PubMed

    Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta; Gürsoy, Doğa

    2017-03-01

    This paper presents an algorithm to calibrate the center-of-rotation for X-ray tomography by using a machine learning approach, the Convolutional Neural Network (CNN). The algorithm shows excellent accuracy from the evaluation of synthetic data with various noise ratios. It is further validated with experimental data of four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. The CNN approach also has great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.
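    For context, the conventional estimate that the CNN is compared against can be approximated by registering a 0° projection with the horizontally flipped 180° projection. The numpy sketch below shows that registration baseline only, under the assumption of a parallel-beam 0°/180° pair; it is not the CNN method or the xlearn toolbox itself.

```python
# Minimal sketch of a conventional registration-based center-of-rotation estimate.
# This is the kind of baseline the CNN method improves on; it is not the CNN itself.
import numpy as np

def find_center_by_registration(proj_0, proj_180, search=20):
    """Estimate the rotation axis position (in pixels) from a 0/180 degree pair.

    proj_0, proj_180: 2D projections of identical shape.
    search: maximum offset (pixels) to test either side of the detector center.
    """
    flipped = proj_180[:, ::-1]              # mirror the 180-degree projection
    best_offset, best_err = 0, np.inf
    for offset in range(-search, search + 1):
        shifted = np.roll(flipped, offset, axis=1)
        err = np.mean((proj_0 - shifted) ** 2)   # mean squared mismatch
        if err < best_err:
            best_offset, best_err = offset, err
    # The rotation axis sits half the best shift away from the detector center.
    return proj_0.shape[1] / 2.0 + best_offset / 2.0

# Example with a synthetic pair (mirrored copy => axis at the middle column).
proj = np.random.default_rng(1).random((64, 100))
print(find_center_by_registration(proj, proj[:, ::-1]))
```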

  3. ASCR Cybersecurity for Scientific Computing Integrity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piesert, Sean

    The Department of Energy (DOE) has the responsibility to address the energy, environmental, and nuclear security challenges that face our nation. Much of DOE's enterprise involves distributed, collaborative teams; a significant fraction involves "open science," which depends on multi-institutional, often international collaborations that must access or share significant amounts of information between institutions and over networks around the world. The mission of the Office of Science is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security of the United States. The ability of DOE to execute its responsibilities depends critically on its ability to assure the integrity and availability of scientific facilities and computer systems, and of the scientific, engineering, and operational software and data that support its mission.

  4. Wedge Shock and Nozzle Exhaust Plume Interaction in a Supersonic Jet Flow

    NASA Technical Reports Server (NTRS)

    Castner, Raymond; Zaman, Khairul; Fagan, Amy; Heath, Christopher

    2014-01-01

    Fundamental research for sonic boom reduction is needed to quantify the interaction of shock waves generated from the aircraft wing or tail surfaces with the nozzle exhaust plume. Aft body shock waves that interact with the exhaust plume contribute to the near-field pressure signature of a vehicle. The plume and shock interaction was studied using computational fluid dynamics and compared with experimental data from a coaxial convergent-divergent nozzle flow in an open jet facility. A simple diamond-shaped wedge was used to generate the shock in the outer flow to study its impact on the inner jet flow. Results show that the compression from the wedge deflects the nozzle plume and shocks form on the opposite plume boundary. The sonic boom pressure signature of the nozzle exhaust plume was modified by the presence of the wedge. Both the experimental results and computational predictions show changes in plume deflection.

  5. Urban Watershed Research Facility at Edison Environmental Center

    EPA Science Inventory

    The Urban Watershed Research Facility (UWRF) is an isolated, 20-acre open space within EPA’s 200 acre Edison facility established to develop and evaluate the performance of stormwater management practices under controlled conditions. The facility includes greenhouses that allow ...

  6. KSC-2013-2973

    NASA Image and Video Library

    2013-06-28

    CAPE CANAVERAL, Fla. -- At the Kennedy Space Center Visitor Complex in Florida, Mike Konzen of PGAV Destinations speaks to news media representatives during the opening of the 90,000-square-foot "Space Shuttle Atlantis" facility. PGAV was responsible for the "Space Shuttle Atlantis" facility design and architecture. The new $100 million facility includes interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit formally opened to the public on June 29, 2013. Photo credit: NASA/Jim Grossmann

  7. Precision Agriculture Design Method Using a Distributed Computing Architecture on Internet of Things Context.

    PubMed

    Ferrández-Pastor, Francisco Javier; García-Chamizo, Juan Manuel; Nieto-Hidalgo, Mario; Mora-Martínez, José

    2018-05-28

    The Internet of Things (IoT) has opened productive ways to cultivate soil with the use of low-cost hardware (sensors/actuators) and communication (Internet) technologies. Remote equipment and crop monitoring, predictive analytics, weather forecasting for crops, and smart logistics and warehousing are some examples of these new opportunities. Nevertheless, farmers are agriculture experts but usually do not have experience with IoT applications. Users of IoT applications must participate in their design to improve integration and usability. In this work, different industrial agricultural facilities are analysed with farmers and growers to design new functionalities based on the deployment of IoT paradigms. A user-centred design model is used to obtain knowledge and experience in the process of introducing technology into agricultural applications. Internet of Things paradigms are used as resources to facilitate decision making. The IoT architecture, operating rules and smart processes are implemented using a distributed model based on edge and fog computing paradigms, and a communication architecture is proposed using these technologies. The aim is to help farmers develop smart systems in both current and new facilities. Different decision trees to automate the installation, designed by the farmer, can be easily deployed using the method proposed in this document.
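    The sketch below gives a concrete, self-contained example of a farmer-defined decision rule of the kind such an edge or fog node could evaluate locally. The field names, thresholds and returned actions are illustrative assumptions, and the transport layer (for example MQTT) is deliberately omitted.

```python
# Minimal sketch of a farmer-defined decision rule evaluated on an edge/fog node.
# Thresholds, field names and actions are illustrative assumptions; the transport
# between sensors, broker and actuators is not shown.
from dataclasses import dataclass

@dataclass
class Reading:
    soil_moisture: float     # percent
    air_temp: float          # degrees C
    forecast_rain_mm: float  # expected rainfall in the next 24 h

def irrigation_action(r: Reading) -> str:
    """A simple decision tree an edge node could evaluate locally."""
    if r.forecast_rain_mm > 5.0:
        return "skip"                      # rain expected, do not irrigate
    if r.soil_moisture < 30.0:
        return "irrigate_long" if r.air_temp > 30.0 else "irrigate_short"
    return "skip"

# Example readings as they might arrive from field sensors.
samples = [Reading(22.0, 33.5, 0.0), Reading(45.0, 28.0, 0.0), Reading(18.0, 26.0, 12.0)]
for s in samples:
    print(s, "->", irrigation_action(s))
```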

  8. Public Outreach at RAL: Engaging the Next Generation of Scientists and Engineers

    NASA Astrophysics Data System (ADS)

    Corbett, G.; Ryall, G.; Palmer, S.; Collier, I. P.; Adams, J.; Appleyard, R.

    2015-12-01

    The Rutherford Appleton Laboratory (RAL) is part of the UK's Science and Technology Facilities Council (STFC). As part of the Royal Charter that established the STFC, the organisation is required to generate public awareness and encourage public engagement and dialogue in relation to the science undertaken. The staff at RAL firmly support this activity as it is important to encourage the next generation of students to consider studying Science, Technology, Engineering, and Mathematics (STEM) subjects, providing the UK with a highly skilled work-force in the future. To this end, the STFC undertakes a variety of outreach activities. This paper will describe the outreach activities undertaken by RAL, particularly focussing on those of the Scientific Computing Department (SCD). These activities include: an Arduino based activity day for 12-14 year-olds to celebrate Ada Lovelace day; running a centre as part of the Young Rewired State - encouraging 11-18 year-olds to create web applications with open data; sponsoring a team in the Engineering Education Scheme - supporting a small team of 16-17 year-olds to solve a real world engineering problem; as well as the more traditional tours of facilities. These activities could serve as an example for other sites involved in scientific computing around the globe.

  9. Instrument Systems Analysis and Verification Facility (ISAVF) users guide

    NASA Technical Reports Server (NTRS)

    Davis, J. F.; Thomason, J. O.; Wolfgang, J. L.

    1985-01-01

    The ISAVF facility is primarily an interconnected system of computers, special purpose real time hardware, and associated generalized software systems, which will permit the Instrument System Analysts, Design Engineers and Instrument Scientists to perform trade-off studies, specification development, instrument modeling, and verification of instrument hardware performance. It is not the intent of the ISAVF to duplicate or replace existing special purpose facilities such as the Code 710 Optical Laboratories or the Code 750 Test and Evaluation facilities. The ISAVF will provide data acquisition and control services for these facilities, as needed, using remote computer stations attached to the main ISAVF computers via dedicated communication lines.

  10. Low-level radwaste storage facility at Hope Creek and Salem Generating Stations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oyen, L.C.; Lee, K.; Bravo, R.

    Following the January 1, 1993, closure of the radwaste disposal facilities at Beatty, Nevada, and Richland, Washington (to waste generators outside the compact), only Barnwell, South Carolina, is open to waste generators in most states. Barnwell is scheduled to stay open to waste generators outside the Southeast Compact until June 30, 1994. Continued delays in opening regional radwaste disposal facilities have forced most nuclear utilities to consider on-site storage of low-level radwaste. Public Service Electric and Gas Company (PSE&G) considered several different radwaste storage options before selecting the design based on the steel-frame and metal-siding building design described in the Electric Power Research Institute's (EPRI's) TR-100298 Vol. 2, Project 3800 report. The storage facility will accommodate waste generated by Salem units 1 and 2 and Hope Creek unit 1 for a 5-yr period and will be located within their common protected area.

  11. UTILIZATION OF COMPUTER FACILITIES IN THE MATHEMATICS AND BUSINESS CURRICULUM IN A LARGE SUBURBAN HIGH SCHOOL.

    ERIC Educational Resources Information Center

    RENO, MARTIN; AND OTHERS

    A STUDY WAS UNDERTAKEN TO EXPLORE IN A QUALITATIVE WAY THE POSSIBLE UTILIZATION OF COMPUTER AND DATA PROCESSING METHODS IN HIGH SCHOOL EDUCATION. OBJECTIVES WERE--(1) TO ESTABLISH A WORKING RELATIONSHIP WITH A COMPUTER FACILITY SO THAT ABLE STUDENTS AND THEIR TEACHERS WOULD HAVE ACCESS TO THE FACILITIES, (2) TO DEVELOP A UNIT FOR THE UTILIZATION…

  12. Facilities | Integrated Energy Solutions | NREL

    Science.gov Websites

    ... strategies needed to optimize our entire energy system. High-Performance Computing Data Center: high-performance computing facilities at NREL provide high-speed...

  13. Experience with a UNIX based batch computing facility for H1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhards, R.; Kruener-Marquis, U.; Szkutnik, Z.

    1994-12-31

    A UNIX based batch computing facility for the H1 experiment at DESY is described. The ultimate goal is to replace the DESY IBM mainframe by a multiprocessor SGI Challenge series computer, using the UNIX operating system, for most of the computing tasks in H1.

  14. 78 FR 18353 - Guidance for Industry: Blood Establishment Computer System Validation in the User's Facility...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ...; (Formerly FDA-2007D-0393)] Guidance for Industry: Blood Establishment Computer System Validation in the User... Industry: Blood Establishment Computer System Validation in the User's Facility'' dated April 2013. The... document entitled ``Guidance for Industry: Blood Establishment Computer System Validation in the User's...

  15. Opportunities for Energy Efficiency and Open Automated Demand Response in Wastewater Treatment Facilities in California -- Phase I Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lekov, Alex; Thompson, Lisa; McKane, Aimee

    This report summarizes the Lawrence Berkeley National Laboratory's research to date in characterizing energy efficiency and automated demand response opportunities for wastewater treatment facilities in California. The report describes the characteristics of wastewater treatment facilities, the nature of the wastewater stream, energy use and demand, as well as details of the wastewater treatment process. It also discusses control systems and energy efficiency and automated demand response opportunities. In addition, several energy efficiency and load management case studies are provided for wastewater treatment facilities. This study shows that wastewater treatment facilities can be excellent candidates for open automated demand response and that facilities which have implemented energy efficiency measures and have centralized control systems are well-suited to shift or shed electrical loads in response to financial incentives, utility bill savings, and/or opportunities to enhance reliability of service. Control technologies installed for energy efficiency and load management purposes can often be adapted for automated demand response at little additional cost. These improved controls may prepare facilities to be more receptive to open automated demand response due to both increased confidence in the opportunities for controlling energy cost/use and access to the real-time data.

  16. Optimising the Parallelisation of OpenFOAM Simulations

    DTIC Science & Technology

    2014-06-01

    Optimising the Parallelisation of OpenFOAM Simulations. Shannon Keough, Maritime Division, Defence Science and Technology Organisation, DSTO-TR-2987. ABSTRACT: The OpenFOAM computational fluid dynamics toolbox allows parallel computation of... performance of a given high performance computing cluster with several OpenFOAM cases, running using a combination of MPI libraries and corresponding MPI...

  17. First-principles characterization of formate and carboxyl adsorption on the stoichiometric CeO2(111) and CeO2(110) surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Donghai

    2013-05-20

    Molecular adsorption of formate and carboxyl on the stoichiometric CeO2(111) and CeO2(110) surfaces was studied using periodic density functional theory (DFT+U) calculations. Two distinguishable adsorption modes (strong and weak) of formate are identified. The bidentate configuration is more stable than the monodentate adsorption configuration. Both formate and carboxyl bind more strongly at the more open CeO2(110) surface. The calculated vibrational frequencies of the two adsorbed species are consistent with experimental measurements. Finally, the effects of the U parameter on the adsorption of formate and carboxyl over both CeO2 surfaces were investigated. We found that the geometrical configurations of the two adsorbed species are not affected by using different U parameters (U = 0, 5, and 7). However, the calculated adsorption energy of carboxyl increases markedly with the U value while the adsorption energy of formate changes only slightly (<0.2 eV). Bader charge analysis shows that opposite charge transfer occurs for formate and carboxyl adsorption: the adsorbed formate is negatively charged while the adsorbed carboxyl is positively charged. Interestingly, the amount of transferred charge also increases with the U parameter. This work was supported by the Laboratory Directed Research and Development (LDRD) project of the Pacific Northwest National Laboratory (PNNL) and by a Cooperative Research and Development Agreement (CRADA) with General Motors. The computations were performed using the Molecular Science Computing Facility in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL), which is a U.S. Department of Energy national scientific user facility located at PNNL in Richland, Washington. Part of the computing time was also granted by the National Energy Research Scientific Computing Center (NERSC).

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayes, Birchard P; Michel, Kelly D; Few, Douglas A

    From stereophonic, positional sound to high-definition imagery that is crisp and clean, high fidelity computer graphics enhance our view, insight, and intuition regarding our environments and conditions. Contemporary 3-D modeling tools offer an open architecture framework that enables integration with other technologically innovative arenas. One innovation of great interest is Augmented Reality, the merging of virtual, digital environments with physical, real-world environments creating a mixed reality where relevant data and information augments the real or actual experience in real-time by spatial or semantic context. Pairing 3-D virtual immersive models with a dynamic platform such as semi-autonomous robotics or personnel odometry systems to create a mixed reality offers a new and innovative design information verification inspection capability, evaluation accuracy, and information gathering capability for nuclear facilities. Our paper discusses the integration of two innovative technologies, 3-D visualizations with inertial positioning systems, and the resulting augmented reality offered to the human inspector. The discussion in the paper includes an exploration of human and non-human (surrogate) inspections of a nuclear facility, integrated safeguards knowledge within a synchronized virtual model operated, or worn, by a human inspector, and the anticipated benefits to safeguards evaluations of facility operations.

  19. COMPARISON OF AN INNOVATIVE NONLINEAR ALGORITHM TO CLASSICAL LEAST SQUARES FOR ANALYZING OPEN-PATH FOURIER TRANSFORM INFRARED SPECTRA COLLECTED AT A CONCENTRATED SWINE PRODUCTION FACILITY

    EPA Science Inventory

    Open-path Fourier transform infrared (OP/FTIR) spectrometry was used to measure the concentrations of ammonia, methane, and other atmospheric gases at an integrated swine production facility. The concentration-pathlength products of the target gases at this site often exceeded th...

  20. APPLICATION OF STANDARDIZED QUALITY CONTROL PROCEDURES TO OPEN-PATH FOURIER TRANSFORM INFRARED DATA COLLECTED AT A CONCENTRATED SWINE PRODUCTION FACILITY

    EPA Science Inventory

    Open-path Fourier transform infrared (OP/FT-IR) spectrometry was used to measure the concentrations of ammonia, methane, and other atmospheric gases at a concentrated swine production facility. A total of 2200 OP/FT-IR spectra were acquired along nine different monitoring paths d...

  1. Vision 2010: The Future of Higher Education Business and Learning Applications

    ERIC Educational Resources Information Center

    Carey, Patrick; Gleason, Bernard

    2006-01-01

    The global software industry is in the midst of a major evolutionary shift--one based on open computing--and this trend, like many transformative trends in technology, is being led by the IT staffs and academic computing faculty of the higher education industry. The elements of this open computing approach are open source, open standards, open…

  2. Towards Monitoring-as-a-service for Scientific Computing Cloud applications using the ElasticSearch ecosystem

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    The INFN computing centre in Torino hosts a private Cloud, which is managed with the OpenNebula cloud controller. The infrastructure offers Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) services to different scientific computing applications. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BESIII collaboration, plus an increasing number of other small tenants. The dynamic allocation of resources to tenants is partially automated. This feature requires detailed monitoring and accounting of the resource usage. We set up a monitoring framework to inspect the site activities both in terms of IaaS and of applications running on the hosted virtual instances. For this purpose we used the ElasticSearch, Logstash and Kibana (ELK) stack. The infrastructure relies on a MySQL database back-end for data preservation and to ensure the flexibility to choose a different monitoring solution if needed. The heterogeneous accounting information is transferred from the database to the ElasticSearch engine via a custom Logstash plugin. Each use-case is indexed separately in ElasticSearch and we set up Kibana dashboards with pre-defined queries in order to monitor the relevant information in each case. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS level information gathered through the API is sent to the MySQL database through a purpose-built RESTful web service. Moreover, we have developed a billing system for our private Cloud, which relies on the RabbitMQ message queue for asynchronous communication to the database and on the ELK stack for its graphical interface. The Italian Grid accounting framework is also migrating to a similar set-up. Concerning the application level, we used the Root plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BESIII virtual instances used to be monitored with Zabbix; as a proof of concept, we also retrieve the information contained in the Zabbix database. In this way we have achieved a uniform monitoring interface for both the IaaS and the scientific applications, mostly leveraging off-the-shelf tools. At present, we are working to define a model for monitoring-as-a-service, based on the tools described above, which the Cloud tenants can easily configure to suit their specific needs.
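    As a minimal illustration of the accounting flow described above, the sketch below indexes one accounting record into ElasticSearch with the official Python client (recent versions, where the document keyword is available). The index name, field names and host are assumptions; in the setup described, this role is played by the custom Logstash plugin.

```python
# Minimal sketch: index one accounting record into ElasticSearch with the official
# Python client (recent versions). Index name, fields and host are assumptions; in
# the setup described above this role is played by a custom Logstash plugin.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")      # assumed local test instance

record = {
    "tenant": "alice-tier2",                     # hypothetical tenant name
    "vcpus": 8,
    "wall_hours": 3.5,
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# One document per accounting sample; Kibana dashboards then query the index.
es.index(index="iaas-accounting", document=record)
```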

  3. Operational Experience of an Open-Access, Subscription-Based Mass Spectrometry and Proteomics Facility.

    PubMed

    Williamson, Nicholas A

    2018-03-01

    This paper discusses the successful adoption of a subscription-based, open-access model of service delivery for a mass spectrometry and proteomics facility. In 2009, the Mass Spectrometry and Proteomics Facility at the University of Melbourne (Australia) moved away from the standard fee for service model of service provision. Instead, the facility adopted a subscription- or membership-based, open-access model of service delivery. For a low fixed yearly cost, users could directly operate the instrumentation but, more importantly, there were no limits on usage other than the necessity to share available instrument time with all other users. All necessary training from platform staff and many of the base reagents were also provided as part of the membership cost. These changes proved to be very successful in terms of financial outcomes for the facility, instrument access and usage, and overall research output. This article describes the systems put in place as well as the overall successes and challenges associated with the operation of a mass spectrometry/proteomics core in this manner.

  4. Operational Experience of an Open-Access, Subscription-Based Mass Spectrometry and Proteomics Facility

    NASA Astrophysics Data System (ADS)

    Williamson, Nicholas A.

    2018-03-01

    This paper discusses the successful adoption of a subscription-based, open-access model of service delivery for a mass spectrometry and proteomics facility. In 2009, the Mass Spectrometry and Proteomics Facility at the University of Melbourne (Australia) moved away from the standard fee for service model of service provision. Instead, the facility adopted a subscription- or membership-based, open-access model of service delivery. For a low fixed yearly cost, users could directly operate the instrumentation but, more importantly, there were no limits on usage other than the necessity to share available instrument time with all other users. All necessary training from platform staff and many of the base reagents were also provided as part of the membership cost. These changes proved to be very successful in terms of financial outcomes for the facility, instrument access and usage, and overall research output. This article describes the systems put in place as well as the overall successes and challenges associated with the operation of a mass spectrometry/proteomics core in this manner.

  5. Future Computer Requirements for Computational Aerodynamics

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Recent advances in computational aerodynamics are discussed as well as motivations for and potential benefits of a National Aerodynamic Simulation Facility having the capability to solve fluid dynamic equations at speeds two to three orders of magnitude faster than presently possible with general computers. Two contracted efforts to define processor architectures for such a facility are summarized.

  6. 40 CFR 265.382 - Open burning; waste explosives.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 27 2013-07-01 2013-07-01 false Open burning; waste explosives. 265... DISPOSAL FACILITIES Thermal Treatment § 265.382 Open burning; waste explosives. Open burning of hazardous waste is prohibited except for the open burning and detonation of waste explosives. Waste explosives...

  7. 40 CFR 265.382 - Open burning; waste explosives.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 26 2014-07-01 2014-07-01 false Open burning; waste explosives. 265... DISPOSAL FACILITIES Thermal Treatment § 265.382 Open burning; waste explosives. Open burning of hazardous waste is prohibited except for the open burning and detonation of waste explosives. Waste explosives...

  8. 40 CFR 265.382 - Open burning; waste explosives.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 25 2010-07-01 2010-07-01 false Open burning; waste explosives. 265... DISPOSAL FACILITIES Thermal Treatment § 265.382 Open burning; waste explosives. Open burning of hazardous waste is prohibited except for the open burning and detonation of waste explosives. Waste explosives...

  9. 40 CFR 265.382 - Open burning; waste explosives.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 27 2012-07-01 2012-07-01 false Open burning; waste explosives. 265... DISPOSAL FACILITIES Thermal Treatment § 265.382 Open burning; waste explosives. Open burning of hazardous waste is prohibited except for the open burning and detonation of waste explosives. Waste explosives...

  10. 40 CFR 265.382 - Open burning; waste explosives.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 26 2011-07-01 2011-07-01 false Open burning; waste explosives. 265... DISPOSAL FACILITIES Thermal Treatment § 265.382 Open burning; waste explosives. Open burning of hazardous waste is prohibited except for the open burning and detonation of waste explosives. Waste explosives...

  11. mGrid: A load-balanced distributed computing environment for the remote execution of the user-defined Matlab code

    PubMed Central

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-01-01

    Background Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows it to be easily extended over the Internet. PMID:16539707

  12. mGrid: a load-balanced distributed computing environment for the remote execution of the user-defined Matlab code.

    PubMed

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-03-15

    Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows it to be easily extended over the Internet.

  13. [Feather--data acquisition in gynaecology and obstetrics].

    PubMed

    Oppelt, P; Plathow, D; Oppelt, A; Stähler, J; Petrich, S; Scharl, A; Costa, S; Jesgarz, J; Kaufmann, M; Bergh, B

    2002-07-01

    Nowadays many types of medical documentation are based on computer facilities. Unfortunately, this involves the considerable disadvantage that almost every single department and specialty has its own software programs, with the physician having to learn a whole range of different programs. In addition, data sometimes have to be entered twice - since although open interfaces are often available, the elaborate programming required to transfer data from outside programs makes the financial costs too high. Since 1995 the Department of Gynecology and Obstetrics of the University of Frankfurt am Main has therefore developed a consistent program of its own under Windows NT for in-patient facilities, as well as for some outpatient services. The program does not aim to achieve everything that is technically possible, but focuses primarily on user requirements. In addition to the general requirements for medical documentation in gynecology and obstetrics, the program can also handle perinatal inquiries and gynecological quality control (QSmed [Qualitätssicherung in der Medizin] of the BQS [Bundesgeschäftsstelle Qualitätssicherung]).

  14. Connecting Restricted, High-Availability, or Low-Latency Resources to a Seamless Global Pool for CMS

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B.; Hufnagel, D.; Hurtado Anampa, K.; Jayatilaka, B.; Khan, F.; Larson, K.; Letts, J.; Mascheroni, M.; Mohapatra, A.; Marra Da Silva, J.; Mason, D.; Perez-Calero Yzquierdo, A.; Piperov, S.; Tiradani, A.; Verguilov, V.; CMS Collaboration

    2017-10-01

    The connection of diverse and sometimes non-Grid enabled resource types to the CMS Global Pool, which is based on HTCondor and glideinWMS, has been a major goal of CMS. These resources range in type from a high-availability, low latency facility at CERN for urgent calibration studies, called the CAF, to a local user facility at the Fermilab LPC, allocation-based computing resources at NERSC and SDSC, opportunistic resources provided through the Open Science Grid, commercial clouds, and others, as well as access to opportunistic cycles on the CMS High Level Trigger farm. In addition, we have provided the capability to give priority to local users of beyond WLCG pledged resources at CMS sites. Many of the solutions employed to bring these diverse resource types into the Global Pool have common elements, while some are very specific to a particular project. This paper details some of the strategies and solutions used to access these resources through the Global Pool in a seamless manner.
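    As background on the mechanism, slots that join an HTCondor-based pool advertise themselves to a collector and can be surveyed with the HTCondor Python bindings, as in the generic sketch below. Grouping by a GLIDEIN_Site attribute is an assumption about how a glideinWMS pool labels its resources, not a statement about the CMS Global Pool configuration.

```python
# Minimal sketch: survey execute slots in an HTCondor pool with the Python bindings.
# Grouping by GLIDEIN_Site is an assumption about how a glideinWMS pool labels its
# slots; it is not a statement about the CMS Global Pool configuration.
import collections
import htcondor

collector = htcondor.Collector()                 # the locally configured pool
slots = collector.query(
    htcondor.AdTypes.Startd,
    projection=["Name", "State", "Cpus", "GLIDEIN_Site"],
)

per_site = collections.Counter(ad.get("GLIDEIN_Site", "unknown") for ad in slots)
unclaimed = sum(1 for ad in slots if ad.get("State") == "Unclaimed")

print("slots per site:", dict(per_site))
print("unclaimed slots:", unclaimed)
```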

  15. A Virtual Astronomical Research Machine in No Time (VARMiNT)

    NASA Astrophysics Data System (ADS)

    Beaver, John

    2012-05-01

    We present early results of using virtual machine software to help make astronomical research computing accessible to a wider range of individuals. Our Virtual Astronomical Research Machine in No Time (VARMiNT) is an Ubuntu Linux virtual machine with free, open-source software already installed and configured (and in many cases documented). The purpose of VARMiNT is to provide a ready-to-go astronomical research computing environment that can be freely shared between researchers, or between amateur and professional, teacher and student, etc., and to circumvent the often-difficult task of configuring a suitable computing environment from scratch. Thus we hope that VARMiNT will make it easier for individuals to engage in research computing even if they have no ready access to the facilities of a research institution. We describe our current version of VARMiNT and some of the ways it is being used at the University of Wisconsin - Fox Valley, a two-year teaching campus of the University of Wisconsin System, as a means to enhance student independent study research projects and to facilitate collaborations with researchers at other locations. We also outline some future plans and prospects.

  16. Guide to making time-lapse graphics using the facilities of the National Magnetic Fusion Energy Computing Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munro, J.K. Jr.

    1980-05-01

    The advent of large, fast computers has opened the way to modeling more complex physical processes and to handling very large quantities of experimental data. The amount of information that can be processed in a short period of time is so great that use of graphical displays assumes greater importance as a means of displaying this information. Information from dynamical processes can be displayed conveniently by use of animated graphics. This guide presents the basic techniques for generating black and white animated graphics, with consideration of aesthetic, mechanical, and computational problems. The guide is intended for use by someone who wants to make movies on the National Magnetic Fusion Energy Computing Center (NMFECC) CDC-7600. Problems encountered by a geographically remote user are given particular attention. Detailed information is given that will allow a remote user to do some file checking and diagnosis before giving graphics files to the system for processing into film in order to spot problems without having to wait for film to be delivered. Source listings of some useful software are given in appendices along with descriptions of how to use it. 3 figures, 5 tables.

  17. Facilities Management via Computer: Information at Your Fingertips.

    ERIC Educational Resources Information Center

    Hensey, Susan

    1996-01-01

    Computer-aided facilities management is a software program consisting of a relational database of facility information--such as occupancy, usage, student counts, etc.--attached to or merged with computerized floor plans. This program can integrate data with drawings, thereby allowing the development of "what if" scenarios. (MLF)

  18. Pedestrian simulation and distribution in urban space based on visibility analysis and agent simulation

    NASA Astrophysics Data System (ADS)

    Ying, Shen; Li, Lin; Gao, Yurong

    2009-10-01

    Spatial visibility analysis is an important approach to pedestrian behaviour because visual perception of space is the most direct way for pedestrians to obtain environmental information and navigate their actions. Based on agent modelling and a top-down method, this paper develops a framework for analysing pedestrian flow as a function of visibility. We use viewsheds for the visibility analysis and impose the resulting parameters on the agent simulation to direct agent motion in urban space. Pedestrian behaviour is analysed at both the micro scale and the macro scale of urban open space. At the micro scale of a street or district, each agent uses visual affordance to determine its direction of motion. At the macro scale, we compare the distribution of pedestrian flow with the spatial configuration of the urban environment and mine the relationship between pedestrian flow and the distribution of urban facilities and functions. The paper first computes the visibility conditions at vantage points in urban open space, such as a street network, and quantifies the visibility parameters. The agents then use these visibility parameters to decide their directions of motion, and the pedestrian flow finally reaches a stable state in the urban environment through multi-agent simulation. We compare the morphology of the visibility parameters and the pedestrian distribution with the urban functions and facility layout to confirm the consistency between them, which can be used for decision support in urban design.
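    A minimal sketch of grid-based visibility computed by ray casting from a vantage point is given below; an agent could compare visible-cell counts per direction to weight its motion. It is a schematic stand-in for the viewshed analysis described in the record, not the authors' implementation.

```python
# Minimal sketch: grid-based visibility from a vantage point by ray casting.
# A schematic stand-in for the viewshed/isovist analysis described above,
# not the authors' implementation.
import numpy as np

def visible_cells(blocked, origin, n_rays=360, max_range=50):
    """Return a boolean grid of cells visible from `origin`.

    blocked: 2D boolean array, True where buildings block sight.
    origin: (row, col) of the observer.
    """
    rows, cols = blocked.shape
    visible = np.zeros_like(blocked, dtype=bool)
    for angle in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        dr, dc = np.sin(angle), np.cos(angle)
        r, c = float(origin[0]), float(origin[1])
        for _ in range(max_range):
            r += dr
            c += dc
            i, j = int(round(r)), int(round(c))
            if not (0 <= i < rows and 0 <= j < cols) or blocked[i, j]:
                break  # ray leaves the grid or hits a building
            visible[i, j] = True
    return visible

# Toy street grid: a single building block in an open square.
grid = np.zeros((40, 40), dtype=bool)
grid[15:25, 18:22] = True
view = visible_cells(grid, origin=(5, 5))
print("visible cells:", int(view.sum()))  # agents could compare counts per direction
```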

  19. Computational Tools and Facilities for the Next-Generation Analysis and Design Environment

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)

    1997-01-01

    This document contains presentations from the joint UVA/NASA Workshop on Computational Tools and Facilities for the Next-Generation Analysis and Design Environment held at the Virginia Consortium of Engineering and Science Universities in Hampton, Virginia on September 17-18, 1996. The presentations focused on the computational tools and facilities for analysis and design of engineering systems, including, real-time simulations, immersive systems, collaborative engineering environment, Web-based tools and interactive media for technical training. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the level of maturity of a number of computational tools and facilities and their potential for application to the next-generation integrated design environment.

  20. Sandia National Laboratories: Livermore Valley Open Campus (LVOC)

    Science.gov Websites

    Livermore Valley Open Campus (LVOC). Open engagement: expanding opportunities for open engagement of the broader scientific community. Building on success: Sandia's Combustion Research Facility pioneered open collaboration over 30 years ago. Access to DOE-funded capabilities: expanding access...

  1. The scaling issue: scientific opportunities

    NASA Astrophysics Data System (ADS)

    Orbach, Raymond L.

    2009-07-01

    A brief history of the Leadership Computing Facility (LCF) initiative is presented, along with the importance of SciDAC to the initiative. The initiative led to the initiation of the Innovative and Novel Computational Impact on Theory and Experiment program (INCITE), open to all researchers in the US and abroad, and based solely on scientific merit through peer review, awarding sizeable allocations (typically millions of processor-hours per project). The development of the nation's LCFs has enabled available INCITE processor-hours to double roughly every eight months since its inception in 2004. The 'top ten' LCF accomplishments in 2009 illustrate the breadth of the scientific program, while the 75 million processor hours allocated to American business since 2006 highlight INCITE contributions to US competitiveness. The extrapolation of INCITE processor hours into the future brings new possibilities for many 'classic' scaling problems. Complex systems and atomic displacements to cracks are but two examples. However, even with increasing computational speeds, the development of theory, numerical representations, algorithms, and efficient implementation are required for substantial success, exhibiting the crucial role that SciDAC will play.

  2. AstrodyToolsWeb an e-Science project in Astrodynamics and Celestial Mechanics fields

    NASA Astrophysics Data System (ADS)

    López, R.; San-Juan, J. F.

    2013-05-01

    Astrodynamics Web Tools, AstrodyToolsWeb (http://tastrody.unirioja.es), is an ongoing collaborative Web Tools computing infrastructure project which has been specially designed to support scientific computation. AstrodyToolsWeb provides project collaborators with all the technical and human facilities in order to wrap, manage, and use specialized noncommercial software tools in Astrodynamics and Celestial Mechanics fields, with the aim of optimizing the use of resources, both human and material. However, this project is open to collaboration from the whole scientific community in order to create a library of useful tools and their corresponding theoretical backgrounds. AstrodyToolsWeb offers a user-friendly web interface in order to choose applications, introduce data, and select appropriate constraints in an intuitive and easy way for the user. After that, the application is executed in real time, whenever possible; then the critical information about program behavior (errors and logs) and output, including the postprocessing and interpretation of its results (graphical representation of data, statistical analysis or whatever manipulation therein), are shown via the same web interface or can be downloaded to the user's computer.

  3. SYRMEP Tomo Project: a graphical user interface for customizing CT reconstruction workflows.

    PubMed

    Brun, Francesco; Massimi, Lorenzo; Fratini, Michela; Dreossi, Diego; Billé, Fulvio; Accardo, Agostino; Pugliese, Roberto; Cedola, Alessia

    2017-01-01

    When considering the acquisition of experimental synchrotron radiation (SR) X-ray CT data, the reconstruction workflow cannot be limited to the essential computational steps of flat fielding and filtered back projection (FBP). More refined image processing is often required, usually to compensate for artifacts and enhance the quality of the reconstructed images. In principle, it would be desirable to optimize the reconstruction workflow at the facility during the experiment (beamtime). However, several practical factors affect the image reconstruction part of the experiment and users are likely to conclude the beamtime with sub-optimal reconstructed images. Through an example application, this article presents SYRMEP Tomo Project (STP), an open-source software tool conceived to let users design custom CT reconstruction workflows. STP has been designed for post-beamtime (off-line) use and for new reconstructions of past archived data at the user's home institution, where simple computing resources are available. Releases of the software can be downloaded at the Elettra Scientific Computing group GitHub repository https://github.com/ElettraSciComp/STP-Gui.
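    The two essential steps named above, flat fielding and filtered back projection, can be written compactly with numpy and recent versions of scikit-image, as in the synthetic sketch below. This is a generic illustration, not STP code, and the beam profile and dark level are invented for the example.

```python
# Minimal sketch of flat fielding followed by filtered back projection on synthetic
# data (not STP code). Assumes a recent scikit-image with iradon(filter_name=...).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# Simulate raw intensities I = I0 * exp(-p) + dark from line integrals p of a phantom.
phantom = shepp_logan_phantom()                       # 400 x 400 test object
angles = np.linspace(0.0, 180.0, 361, endpoint=False)
p = radon(phantom, theta=angles) / 100.0              # scaled line integrals

beam = 1.0e4 * np.linspace(0.8, 1.2, p.shape[0])[:, None]  # uneven beam profile I0
dark = 50.0                                                 # detector dark counts
raw = beam * np.exp(-p) + dark                              # "measured" projections
flat = beam + dark                                          # flat field (no sample)

# Flat fielding: recover the sinogram as -ln((raw - dark) / (flat - dark)).
sinogram = -np.log((raw - dark) / (flat - dark))

# Filtered back projection (undo the earlier scaling before reconstruction).
reconstruction = iradon(sinogram * 100.0, theta=angles, filter_name="ramp")
print(reconstruction.shape, float(abs(reconstruction - phantom).mean()))
```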

  4. 2014 Annual Report - Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, James R.; Papka, Michael E.; Cerny, Beth A.

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  5. 2015 Annual Report - Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, James R.; Papka, Michael E.; Cerny, Beth A.

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  6. The University of Tokyo Atacama Observatory 6.5m Telescope: enclosure design and wind analysis

    NASA Astrophysics Data System (ADS)

    Konishi, Masahiro; Sako, Shigeyuki; Uchida, Takanori; Araya, Ryou; Kim, Koui; Yoshii, Yuzuru; Doi, Mamoru; Kohno, Kotaro; Miyata, Takashi; Motohara, Kentaro; Tanaka, Masuo; Minezaki, Takeo; Morokuma, Tomoki; Tamura, Yoichi; Tanabé, Toshihiko; Kato, Natsuko; Kamizuka, Takafumi; Takahashi, Hidenori; Aoki, Tsutomu; Soyano, Takao; Tarusawa, Ken'ichi

    2016-07-01

    We present results of computational fluid dynamics (CFD) numerical simulations and wind tunnel experiments for the observation facilities of the University of Tokyo Atacama Observatory 6.5m Telescope being constructed at the summit of Co. Chajnantor in northern Chile. The main purpose of this study, which starts from the baseline design reported in 2014, is to analyze the topographic effect on wind behavior and to evaluate the wind pressure, the air turbulence, and the air change (ventilation) efficiency in the enclosure. The wind velocity is found to be accelerated by a factor of 1.2 on reaching the summit (a maximum of 78 m sec-1 is expected), and the resulting wind pressure (3,750 N m-2) is used for the framework design of the facilities. The CFD data reveal that the open space below the floor of the facilities works efficiently to carry away the air turbulence near ground level, which could otherwise significantly affect the dome seeing. From comparisons of the wind velocity field obtained from the CFD simulation for three configurations of the ventilation windows, we find that the windows at the level of the telescope secondary mirror provide less efficient air exchange than those at lower levels. Considering the construction and maintenance costs and operation procedures, we finally decided to allocate 13 windows at the level of the observing floor, 12 at the level of the primary mirror, and 2 at the level of the secondary mirror. The opening area of those windows accounts for about 14% of the total interior surface of the enclosure. A typical air change rate of 20-30 per hour is expected at a wind velocity of 1 m sec-1.
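    As a rough cross-check, the quoted design pressure is close to the dynamic pressure q = ½ρv² for the 78 m sec-1 summit wind speed when sea-level air density is used (a conservative choice for a high-altitude site). This interpretation is an assumption, not something stated in the record:

```python
# Rough check (assumption: the quoted 3,750 N m^-2 is the dynamic pressure
# 1/2 * rho * v^2 evaluated with sea-level air density, a conservative choice
# for a high-altitude site; the record does not state how it was derived).
rho = 1.225      # kg m^-3, sea-level air density
v_max = 78.0     # m s^-1, accelerated summit wind speed quoted above
q = 0.5 * rho * v_max ** 2
print(f"dynamic pressure: {q:.0f} N m^-2")   # ~3,700 N m^-2
```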

  7. 40 CFR 63.5796 - What are the organic HAP emissions factor equations in Table 1 to this subpart, and how are they...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Emissions Factors for Open Molding and Centrifugal Casting § 63.5796 What are the organic HAP emissions... factors. Equations are available for each open molding operation and centrifugal casting operation and... incorporated in the facility's air emissions permit and are based on actual facility HAP emissions test data...

  8. 40 CFR 63.5796 - What are the organic HAP emissions factor equations in Table 1 to this subpart, and how are they...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Emissions Factors for Open Molding and Centrifugal Casting § 63.5796 What are the organic HAP emissions... factors. Equations are available for each open molding operation and centrifugal casting operation and... incorporated in the facility's air emissions permit and are based on actual facility HAP emissions test data...

  9. 40 CFR 63.5796 - What are the organic HAP emissions factor equations in Table 1 to this subpart, and how are they...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Organic Hap Emissions Factors for Open Molding and Centrifugal Casting § 63.5796 What are the organic HAP... emissions factors. Equations are available for each open molding operation and centrifugal casting operation... incorporated in the facility's air emissions permit and are based on actual facility HAP emissions test data...

  10. 40 CFR 63.5796 - What are the organic HAP emissions factor equations in Table 1 to this subpart, and how are they...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Organic Hap Emissions Factors for Open Molding and Centrifugal Casting § 63.5796 What are the organic HAP... emissions factors. Equations are available for each open molding operation and centrifugal casting operation... incorporated in the facility's air emissions permit and are based on actual facility HAP emissions test data...

  11. 40 CFR 63.5796 - What are the organic HAP emissions factor equations in Table 1 to this subpart, and how are they...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Organic Hap Emissions Factors for Open Molding and Centrifugal Casting § 63.5796 What are the organic HAP... emissions factors. Equations are available for each open molding operation and centrifugal casting operation... incorporated in the facility's air emissions permit and are based on actual facility HAP emissions test data...

  12. The Open University System of Brazil: A Study of Learner Support Facilities in the Northern, North-Eastern and Southern Regions

    ERIC Educational Resources Information Center

    Da Cruz Duran, Maria Renata; Da Costa, Celso José; Amiel, Tel

    2014-01-01

    Since June 2011, research on the Open University System of Brazil's (UAB's) official evaluation processes relating to learner support facilities has been carried out by the Teachers' Training, New Information, Communication and Technologies research group, which is linked to the Laboratory of New Technologies for Teaching at Fluminense Federal…

  13. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  14. OpenTopography

    NASA Astrophysics Data System (ADS)

    Baru, C.; Arrowsmith, R.; Crosby, C.; Nandigam, V.; Phan, M.; Cowart, C.

    2012-04-01

    OpenTopography is a cyberinfrastructure-based facility for online access to high-resolution topography and tools. The project is an outcome of the Geosciences Network (GEON) project, which was a research project funded several years ago in the US to investigate the use of cyberinfrastructure to support research and education in the geosciences. OpenTopography provides online access to large LiDAR point cloud datasets along with services for processing these data. Users are able to generate custom DEMs by invoking DEM services provided by OpenTopography with custom parameter values. Users can track the progress of their jobs, and a private myOpenTopo area retains job information and job outputs. Data available at OpenTopography are provided by a variety of data acquisition groups under joint agreements and memoranda of understanding (MoU). These include national facilities such as the National Center for Airborne Lidar Mapping, as well as local, state, and federal agencies. OpenTopography is also being designed as a hub for high-resolution topography resources. Datasets and services available at other locations can also be registered here, providing a "one-stop shop" for such information. We will describe the OpenTopography system architecture and its current set of features, including the service-oriented architecture, a job-tracking database, and social networking features. We will also describe several design and development activities underway to archive and publish datasets using digital object identifiers (DOIs); create a more flexible and scalable high-performance environment for processing of large datasets; extend support for satellite-based and terrestrial lidar as well as synthetic aperture radar (SAR) data; and create a "pluggable" infrastructure for third-party services. OpenTopography has successfully created a facility for sharing lidar data. In the next phase, we are developing a facility that will also enable equally easy and successful sharing of services related to these data.

  15. Computer Operating System Maintenance.

    DTIC Science & Technology

    1982-06-01

    The Computer Management Information Facility (CMIF) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on...computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 data base (CRFMGMT) which stores and permits access

  16. Characterizing Crowd Participation and Productivity of Foldit Through Web Scraping

    DTIC Science & Technology

    2016-03-01

    ...computers at once can create a similar capacity. According to Anderson [6], principal investigator for the Berkeley Open Infrastructure for Network...extraterrestrial life. From this project, a software-based distributed computing platform called the Berkeley Open Infrastructure for Network Computing

  17. TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Nelson, J.; Jones, N.; Ames, D. P.

    2015-12-01

    Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive, they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leveraging these resources. This barrier inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, which have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch the jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage the open-source resource- and job-management software HTCondor to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.
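
    The record does not reproduce the CondorPy interface itself; as a hedged illustration of the kind of HTCondor job description that such tools wrap, the following Python sketch uses HTCondor's own Python bindings (assuming a recent htcondor module); the executable name, arguments, and resource request are hypothetical placeholders.

        import htcondor

        # Describe one model run; the script name and arguments are hypothetical.
        submit = htcondor.Submit({
            "executable": "run_model.sh",
            "arguments": "--scenario baseline",
            "output": "model.$(ClusterId).out",
            "error": "model.$(ClusterId).err",
            "log": "model.log",
            "request_cpus": "4",
        })

        schedd = htcondor.Schedd()      # local HTCondor scheduler
        result = schedd.submit(submit)  # queue the job
        print("submitted cluster", result.cluster())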

  18. batman: BAsic Transit Model cAlculatioN in Python

    NASA Astrophysics Data System (ADS)

    Kreidberg, Laura

    2015-11-01

    I introduce batman, a Python package for modeling exoplanet transit light curves. The batman package supports calculation of light curves for any radially symmetric stellar limb darkening law, using a new integration algorithm for models that cannot be quickly calculated analytically. The code uses C extension modules to speed up model calculation and is parallelized with OpenMP. For a typical light curve with 100 data points in transit, batman can calculate one million quadratic limb-darkened models in 30 seconds with a single 1.7 GHz Intel Core i5 processor. The same calculation takes seven minutes using the four-parameter nonlinear limb darkening model (computed to 1 ppm accuracy). Maximum truncation error for integrated models is an input parameter that can be set as low as 0.001 ppm, ensuring that the community is prepared for the precise transit light curves we anticipate measuring with upcoming facilities. The batman package is open source and publicly available at https://github.com/lkreidberg/batman .
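
    As a minimal usage sketch following the package's documented interface (the parameter values below are purely illustrative), a quadratic limb-darkened light curve can be computed as follows:

        import numpy as np
        import batman

        params = batman.TransitParams()
        params.t0 = 0.0            # time of inferior conjunction
        params.per = 1.0           # orbital period [days]
        params.rp = 0.1            # planet-to-star radius ratio
        params.a = 15.0            # semi-major axis in stellar radii
        params.inc = 87.0          # orbital inclination [deg]
        params.ecc = 0.0           # eccentricity
        params.w = 90.0            # longitude of periastron [deg]
        params.limb_dark = "quadratic"
        params.u = [0.1, 0.3]      # limb-darkening coefficients

        t = np.linspace(-0.025, 0.025, 100)   # ~100 points in transit
        m = batman.TransitModel(params, t)    # set up the integration grid once
        flux = m.light_curve(params)          # fast re-evaluation for new parameters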

  19. Collective Framework and Performance Optimizations to Open MPI for Cray XT Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ladd, Joshua S; Gorentla Venkata, Manjunath; Shamis, Pavel

    2011-01-01

    The performance and scalability of collective operations play a key role in the performance and scalability of many scientific applications. Within the Open MPI code base we have developed a general-purpose hierarchical collective operations framework called Cheetah and applied it at large scale on the Oak Ridge Leadership Computing Facility's (OLCF) Jaguar platform, obtaining better performance and scalability than the native MPI implementation. This paper discusses Cheetah's design and implementation, and optimizations to the framework for Cray XT5 platforms. Our results show that Cheetah's Broadcast and Barrier perform better than the native MPI implementation. For medium data, Cheetah's Broadcast outperforms the native MPI implementation by 93% at a 49,152-process problem size. For small and large data, it outperforms the native MPI implementation by 10% and 9%, respectively, at a 24,576-process problem size. Cheetah's Barrier performs 10% better than the native MPI implementation at a 12,288-process problem size.
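
    Cheetah's source is not shown in the record; purely to illustrate the hierarchical idea (an inter-node stage among node leaders followed by an intra-node stage), here is a hedged mpi4py sketch, which is our own assumption and not the Cheetah implementation.

        from mpi4py import MPI
        import numpy as np

        world = MPI.COMM_WORLD
        # One communicator per shared-memory node, ranks ordered by world rank.
        node = world.Split_type(MPI.COMM_TYPE_SHARED, key=world.rank)
        is_leader = (node.rank == 0)
        # Communicator containing only the node leaders.
        leaders = world.Split(0 if is_leader else MPI.UNDEFINED, key=world.rank)

        buf = np.zeros(4, dtype="d")
        if world.rank == 0:
            buf[:] = [1.0, 2.0, 3.0, 4.0]

        if is_leader:
            leaders.Bcast(buf, root=0)   # inter-node stage: leaders only
        node.Bcast(buf, root=0)          # intra-node stage: leader to local ranks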

  20. Diagonal dominance for the multivariable Nyquist array using function minimization

    NASA Technical Reports Server (NTRS)

    Leininger, G. G.

    1977-01-01

    A new technique for the design of multivariable control systems using the multivariable Nyquist array method was developed. A conjugate direction function minimization algorithm is utilized to achieve a diagonal dominant condition over the extended frequency range of the control system. The minimization is performed on the ratio of the moduli of the off-diagonal terms to the moduli of the diagonal terms of either the inverse or direct open loop transfer function matrix. Several new feedback design concepts were also developed, including: (1) dominance control parameters for each control loop; (2) compensator normalization to evaluate open loop conditions for alternative design configurations; and (3) an interaction index to determine the degree and type of system interaction when all feedback loops are closed simultaneously. This new design capability was implemented on an IBM 360/75 in a batch mode but can be easily adapted to an interactive computer facility. The method was applied to the Pratt and Whitney F100 turbofan engine.

  1. Evaluation of renewable energy alternatives for highway maintenance facilities.

    DOT National Transportation Integrated Search

    2013-12-01

    A considerable annual energy budget is used for heating, lighting, cooling and operating ODOT maintenance facilities. Such facilities contain vehicle repair and garage bays, which are large open spaces with high heating demand in winter. The main...

  2. DOE standard 3009 - a reasoned, practical approach to integrating criticality safety into SARs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vessard, S.G.

    1995-12-31

    In the past there have been efforts by the U.S. Department of Energy (DOE) to provide guidance on those elements that should be included in a facility's safety analysis report (SAR). In particular, there are two DOE Orders (5480.23, "Nuclear Safety Analysis Reports," and 5480.24, "Nuclear Criticality Safety"), an interpretive guidance document (NE-70, Interpretive Guidance for DOE Order 5480.24, "Nuclear Criticality Safety"), and DOE Standard DOE-STD-3009-94, "Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports." Of these, the most practical and useful (pertaining to the application of criticality safety) is DOE-STD-3009-94. This paper is a review of Chapters 3, 4, and 6 of this standard and how they provide very clear, helpful, and reasoned criticality safety guidance.

  3. On Laminar to Turbulent Transition of Arc-Jet Flow in the NASA Ames Panel Test Facility

    NASA Technical Reports Server (NTRS)

    Gokcen, Tahir; Alunni, Antonella I.

    2012-01-01

    This paper provides experimental evidence and supporting computational analysis to characterize the laminar to turbulent flow transition in a high enthalpy arc-jet facility at NASA Ames Research Center. The arc-jet test data obtained in the 20 MW Panel Test Facility include measurements of surface pressure and heat flux on a water-cooled calibration plate, and measurements of surface temperature on a reaction-cured glass coated tile plate. Computational fluid dynamics simulations are performed to characterize the arc-jet test environment and estimate its parameters consistent with the facility and calibration measurements. The present analysis comprises simulations of the nonequilibrium flowfield in the facility nozzle, test box, and flowfield over test articles. Both laminar and turbulent simulations are performed, and the computed results are compared with the experimental measurements, including Stanton number dependence on Reynolds number. Comparisons of computed and measured surface heat fluxes (and temperatures), along with the accompanying analysis, confirm that the boundary layer in the Panel Test Facility flow is transitional at certain arc-heater conditions.
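
    For reference, the transition assessment above compares a nondimensional heating rate against a flow Reynolds number; a minimal sketch of the definitions typically used in such comparisons follows (the symbols and sample values are illustrative assumptions, not facility data).

        def stanton_number(q_wall, rho_e, u_e, h_aw, h_wall):
            """Nondimensional wall heat flux: St = q_w / (rho_e * u_e * (h_aw - h_w))."""
            return q_wall / (rho_e * u_e * (h_aw - h_wall))

        def reynolds_number(rho_e, u_e, length, mu_e):
            """Re = rho_e * u_e * L / mu_e, based on a chosen running length L."""
            return rho_e * u_e * length / mu_e

        # Illustrative boundary-layer-edge conditions only (not measured values).
        print(stanton_number(q_wall=4.0e5, rho_e=0.01, u_e=4000.0,
                             h_aw=1.2e7, h_wall=1.0e6))
        print(reynolds_number(rho_e=0.01, u_e=4000.0, length=0.3, mu_e=1.0e-4))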

  4. Opening Our Doors: Taking Public Library Service to Preschool and Day-Care Facilities.

    ERIC Educational Resources Information Center

    Harris, Sally

    The Opening Our Doors Project of the Pioneer Library System of Norman, Oklahoma takes public library service to preschool and day care facilities by means of learning kits housed in tote bags. The sturdy, zippered tote bags are full of books, games, toys, learning folders, and so forth. There is a tote bag for each of 75 different topics. Topics…

  5. Use of Libraries in Open and Distance Learning System: Barriers to the Use of AIOU Libraries by Tutors and Students

    ERIC Educational Resources Information Center

    Bhatti, Abdul Jabbar; Jumani, Nabi Bux

    2012-01-01

    This study explores the library needs of students and tutors of Allama Iqbal Open University (AIOU), utilization level of the library facilities and resources, the problems in the use of library, and suggestions for improvement of library facilities for students and tutors. Data collected from 4080 students and 526 tutors belonging to 15 different…

  6. User interface concerns

    NASA Technical Reports Server (NTRS)

    Redhed, D. D.

    1978-01-01

    Three possible goals for the Numerical Aerodynamic Simulation Facility (NASF) are: (1) a computational fluid dynamics (as opposed to aerodynamics) algorithm development tool; (2) a specialized research laboratory facility for nearly intractable aerodynamics problems that industry encounters; and (3) a facility for industry to use in its normal aerodynamics design work that requires high computing rates. The central system issue for industry use of such a computer is the quality of the user interface as implemented in some kind of a front end to the vector processor.

  7. 2016 Annual Report - Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Jim; Papka, Michael E.; Cerny, Beth A.

    The Argonne Leadership Computing Facility (ALCF) helps researchers solve some of the world’s largest and most complex problems, while also advancing the nation’s efforts to develop future exascale computing systems. This report presents some of the ALCF’s notable achievements in key strategic areas over the past year.

  8. Creating a Clinical Video-Conferencing Facility in a Security-Constrained Environment Using Open-Source AccessGrid Software and Consumer Hardware

    PubMed Central

    Terrazas, Enrique; Hamill, Timothy R.; Wang, Ye; Channing Rodgers, R. P.

    2007-01-01

    The Department of Laboratory Medicine at the University of California, San Francisco (UCSF) has been split into widely separated facilities, leading to much time being spent traveling between facilities for meetings. We installed an open-source AccessGrid multi-media-conferencing system using (largely) consumer-grade equipment, connecting 6 sites at 5 separate facilities. The system was accepted rapidly and enthusiastically, and was inexpensive compared to alternative approaches. Security was addressed by aspects of the AG software and by local network administrative practices. The chief obstacles to deployment arose from security restrictions imposed by multiple independent network administration regimes, requiring a drastically reduced list of network ports employed by AG components. PMID:18693930

  9. Creating a clinical video-conferencing facility in a security-constrained environment using open-source AccessGrid software and consumer hardware.

    PubMed

    Terrazas, Enrique; Hamill, Timothy R; Wang, Ye; Channing Rodgers, R P

    2007-10-11

    The Department of Laboratory Medicine at the University of California, San Francisco (UCSF) has been split into widely separated facilities, leading to much time being spent traveling between facilities for meetings. We installed an open-source AccessGrid multi-media-conferencing system using (largely) consumer-grade equipment, connecting 6 sites at 5 separate facilities. The system was accepted rapidly and enthusiastically, and was inexpensive compared to alternative approaches. Security was addressed by aspects of the AG software and by local network administrative practices. The chief obstacles to deployment arose from security restrictions imposed by multiple independent network administration regimes, requiring a drastically reduced list of network ports employed by AG components.

  10. Exploring the Earth Using Deep Learning Techniques

    NASA Astrophysics Data System (ADS)

    Larraondo, P. R.; Evans, B. J. K.; Antony, J.

    2016-12-01

    Research using deep neural networks has matured significantly in recent years, and there is now a surge of interest in applying such methods to Earth systems science and the geosciences. When combined with Big Data, we believe there are opportunities for significantly transforming a number of areas relevant to researchers and policy makers. In particular, by using a combination of data from a range of satellite Earth observations as well as computer simulations from climate models and reanalysis, we can gain new insights into the information that is locked within the data. Global geospatial datasets describe a wide range of physical and chemical parameters, which are mostly available using regular grids covering large spatial and temporal extents. This makes them perfect candidates to apply deep learning methods. So far, these techniques have been successfully applied to image analysis through the use of convolutional neural networks. However, this is only one field of interest, and there is potential for many more use cases to be explored. The deep learning algorithms require fast access to large amounts of data in the form of tensors and make intensive use of CPU resources to train their models. The Australian National Computational Infrastructure (NCI) has recently augmented its Raijin 1.2 PFlop supercomputer with hardware accelerators. Together with NCI's 3000-core high-performance OpenStack cloud, these computational systems have direct access to NCI's 10+ PBytes of datasets and associated Big Data software technologies (see http://geonetwork.nci.org.au/ and http://nci.org.au/systems-services/national-facility/nerdip/). To effectively use these computing infrastructures requires that both the data and software are organised in a way that readily supports the deep learning software ecosystem. Deep learning software, such as the open source TensorFlow library, has allowed us to demonstrate the possibility of generating geospatial models by combining information from our different data sources. This opens the door to an exciting new way of generating products and extracting features that have previously been labour intensive. In this paper, we will explore some of these geospatial use cases and share some of the lessons learned from this experience.
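
    The record names TensorFlow but shows no model; as a hedged sketch of the kind of convolutional network one might train on regularly gridded geospatial fields (the grid size, variable count, and gridded regression target are illustrative assumptions), consider:

        import tensorflow as tf

        # Stacked gridded inputs, e.g. three reanalysis variables on a 64x64 patch,
        # regressed onto a single gridded target field.
        inputs = tf.keras.Input(shape=(64, 64, 3))
        x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
        x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
        outputs = tf.keras.layers.Conv2D(1, 1, padding="same")(x)

        model = tf.keras.Model(inputs, outputs)
        model.compile(optimizer="adam", loss="mse")
        # model.fit(train_patches, train_targets, epochs=10)  # with real data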

  11. A feasibility study on porting the community land model onto accelerators using OpenACC

    DOE PAGES

    Wang, Dali; Wu, Wei; Winkler, Frank; ...

    2014-01-01

    As environmental models (such as the Accelerated Climate Model for Energy (ACME), the Parallel Reactive Flow and Transport Model (PFLOTRAN), the Arctic Terrestrial Simulator (ATS), etc.) become more and more complicated, we face enormous challenges in porting those applications onto hybrid computing architectures. OpenACC appears to be a very promising technology; therefore, we have conducted a feasibility analysis of porting the Community Land Model (CLM), a terrestrial ecosystem model within the Community Earth System Model (CESM). Specifically, we used an automatic function testing platform to extract a small computing kernel out of CLM, applied this kernel within the actual CLM dataflow procedure, and investigated the strategy of data parallelization and the benefit of the data movement provided by the current implementation of OpenACC. Even though it is a non-intensive kernel, on a single 16-core computing node the performance (based on the actual computation time using one GPU) of the OpenACC implementation is 2.3 times faster than that of the OpenMP implementation using a single OpenMP thread, but it is 2.8 times slower than the performance of the OpenMP implementation using 16 threads. On multiple nodes, the MPI_OpenACC implementation demonstrated very good scalability on up to 128 GPUs on 128 computing nodes. This study also provides useful information for us to look into the potential benefits of the "deep copy" capability and "routine" feature of the OpenACC standard. In conclusion, we believe that our experience with the environmental model CLM can be beneficial to many other scientific research programs that are interested in porting their large-scale scientific codes onto high-end computers, empowered by hybrid computing architectures, using OpenACC.

  12. Health physics challenges involved with opening a "seventeen-inch" concrete waste vault.

    PubMed

    Sullivan, Patrick T; Pizzulli, Michelle

    2005-05-01

    This paper describes the various activities involved with opening a sealed legacy "Seventeen-inch" concrete vault and the health physics challenges and solutions employed. As part of a legacy waste stream that was removed from the former Hazardous Waste Management Facility at Brookhaven National Laboratory, the "Seventeen-inch" concrete vault labeled 1-95 was moved to the new Waste Management Facility for ultimate disposal. Because the vault contained 239Pu foils with a total activity in excess of the transuranic waste limits, the foils needed to be removed and repackaged for disposal. Conventional diamond wire saws could not be used because of facility constraints, so this project relied mainly on manual techniques. The planning and engineering controls put in place enabled personnel to open the vault and remove the waste while keeping dose as low as reasonably achievable.

  13. NASA Langley Low Speed Aeroacoustic Wind Tunnel: Background Noise and Flow Survey Results Prior to FY05 Construction of Facilities Modifications

    NASA Technical Reports Server (NTRS)

    Booth, Earl R., Jr.; Henderson, Brenda S.

    2005-01-01

    The NASA Langley Research Center Low Speed Aeroacoustic Wind Tunnel is a premier facility for model-scale testing of jet noise reduction concepts at realistic flow conditions. However, flow inside the open jet test section is less than optimum. A Construction of Facilities project, scheduled for FY 05, will replace the flow collector with a new design intended to reduce recirculation in the open jet test section. The reduction of recirculation will reduce background noise levels measured by a microphone array impinged by the recirculating flow and will improve flow characteristics in the open jet test section. To assess the degree to which this modification is successful, this report documents background noise levels and tunnel flow to establish a baseline.

  14. Have computers, will travel: providing on-site library instruction in rural health facilities using a portable computer lab.

    PubMed

    Neilson, Christine J

    2010-01-01

    The Saskatchewan Health Information Resources Partnership (SHIRP) provides library instruction to Saskatchewan's health care practitioners and students on placement in health care facilities as part of its mission to provide province-wide access to evidence-based health library resources. A portable computer lab was assembled in 2007 to provide hands-on training in rural health facilities that do not have computer labs of their own. Aside from some minor inconveniences, the introduction and operation of the portable lab has gone smoothly. The lab has been well received by SHIRP patrons and continues to be an essential part of SHIRP outreach.

  15. Building a Community Infrastructure for Scalable On-Line Performance Analysis Tools around Open|Speedshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Barton

    2014-06-30

    Peta-scale computing environments pose significant challenges for both system and application developers, and addressing them required more than simply scaling up existing tera-scale solutions. Performance analysis tools play an important role in gaining such understanding, but previous monolithic tools with fixed feature sets have not sufficed. Instead, this project worked on the design, implementation, and evaluation of a general, flexible tool infrastructure supporting the construction of performance tools as "pipelines" of high-quality tool building blocks. These tool building blocks provide common performance tool functionality, and are designed for scalability, lightweight data acquisition and analysis, and interoperability. For this project, we built on Open|SpeedShop, a modular and extensible open source performance analysis tool set. The design and implementation of such a general and reusable infrastructure targeted for petascale systems required us to address several challenging research issues. All components needed to be designed for scale, a task made more difficult by the need to provide general modules. The infrastructure needed to support online data aggregation to cope with the large amounts of performance and debugging data. We needed to be able to map any combination of tool components to each target architecture. And we needed to design interoperable tool APIs and workflows that were concrete enough to support the required functionality, yet provide the necessary flexibility to address a wide range of tools. A major result of this project is the ability to use this scalable infrastructure to quickly create tools that match a machine architecture and a performance problem that needs to be understood. Another benefit is the ability for application engineers to use the highly scalable, interoperable version of Open|SpeedShop, which is reassembled from the tool building blocks into a flexible, multi-user set of tools. This set of tools is targeted at Office of Science Leadership Class computer systems and selected Office of Science application codes. We describe the contributions made by the team at the University of Wisconsin. The project built on the efforts in Open|SpeedShop funded by DOE/NNSA and the DOE/NNSA Tri-Lab community, extended Open|SpeedShop to the Office of Science Leadership Class Computing Facilities, and addressed new challenges found on these cutting-edge systems. Work done under this project at Wisconsin can be divided into two categories: new algorithms and techniques for debugging, and foundation infrastructure work on our Dyninst binary analysis and instrumentation toolkits and MRNet scalability infrastructure.

  16. A Comprehensive, Open-source Platform for Mass Spectrometry-based Glycoproteomics Data Analysis.

    PubMed

    Liu, Gang; Cheng, Kai; Lo, Chi Y; Li, Jun; Qu, Jun; Neelamegham, Sriram

    2017-11-01

    Glycosylation is among the most abundant and diverse protein post-translational modifications (PTMs) identified to date. The structural analysis of this PTM is challenging because of the diverse monosaccharides which are not conserved among organisms, the branched nature of glycans, their isomeric structures, and heterogeneity in the glycan distribution at a given site. Glycoproteomics experiments have adopted the traditional high-throughput LC-MSn proteomics workflow to analyze site-specific glycosylation. However, comprehensive computational platforms for data analyses are scarce. To address this limitation, we present a comprehensive, open-source, modular software for glycoproteomics data analysis called GlycoPAT (GlycoProteomics Analysis Toolbox; freely available from www.VirtualGlycome.org/glycopat). The program includes three major advances: (1) "SmallGlyPep," a minimal linear representation of glycopeptides for MSn data analysis. This format allows facile serial fragmentation of both the peptide backbone and PTM at one or more locations. (2) A novel scoring scheme based on calculation of the "Ensemble Score (ES)," a measure that scores and rank-orders MS/MS spectra for N- and O-linked glycopeptides using cross-correlation and probability based analyses. (3) A false discovery rate (FDR) calculation scheme where decoy glycopeptides are created by simultaneously scrambling the amino acid sequence and by introducing artificial monosaccharides by perturbing the original sugar mass. Parallel computing facilities and user-friendly GUIs (Graphical User Interfaces) are also provided. GlycoPAT is used to catalogue site-specific glycosylation on simple glycoproteins, standard protein mixtures and human plasma cryoprecipitate samples in three common MS/MS fragmentation modes: CID, HCD and ETD. It is also used to identify 960 unique glycopeptides in cell lysates from prostate cancer cells. The results show that the simultaneous consideration of peptide and glycan fragmentation is necessary for high quality MSn spectrum annotation in CID and HCD fragmentation modes. Additionally, they confirm the suitability of GlycoPAT to analyze shotgun glycoproteomics data. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
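
    GlycoPAT's own scoring code is not reproduced in the record; as a hedged illustration of the general target-decoy FDR idea described above (the simple ratio estimator below is a common convention, not necessarily GlycoPAT's exact formula), consider:

        def estimated_fdr(target_scores, decoy_scores, threshold):
            """Generic target-decoy FDR estimate at a score threshold:
            number of decoys passing / number of targets passing."""
            decoys_passing = sum(s >= threshold for s in decoy_scores)
            targets_passing = sum(s >= threshold for s in target_scores)
            return decoys_passing / max(targets_passing, 1)

        # Illustrative scores only.
        print(estimated_fdr(target_scores=[12.1, 9.8, 7.5, 3.2],
                            decoy_scores=[4.0, 2.9, 1.1, 0.7],
                            threshold=3.0))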

  17. A large-scale computer facility for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.; Ballhaus, W. F., Jr.

    1985-01-01

    As a result of advances related to the combination of computer system technology and numerical modeling, computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. NASA has, therefore, initiated the Numerical Aerodynamic Simulation (NAS) Program with the objective to provide a basis for further advances in the modeling of aerodynamic flowfields. The Program is concerned with the development of a leading-edge, large-scale computer facility. This facility is to be made available to Government agencies, industry, and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. Attention is given to the requirements for computational aerodynamics, the principal specific goals of the NAS Program, the high-speed processor subsystem, the workstation subsystem, the support processing subsystem, the graphics subsystem, the mass storage subsystem, the long-haul communication subsystem, the high-speed data-network subsystem, and software.

  18. NIF ICCS network design and loading analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tietbohl, G; Bryant, R

    The National Ignition Facility (NIF) is housed within a large facility about the size of two football fields. The Integrated Computer Control System (ICCS) is distributed throughout this facility and requires the integration of about 40,000 control points and over 500 video sources. This integration is provided by approximately 700 control computers distributed throughout the NIF facility and a network that provides the communication infrastructure. A main control room houses a set of seven computer consoles providing operator access and control of the various distributed front-end processors (FEPs). There are also remote workstations distributed within the facility that provide operator console functions while personnel are testing and troubleshooting throughout the facility. The operator workstations communicate with the FEPs, which implement the localized control and monitoring functions. There are different types of FEPs for the various subsystems being controlled. This report describes the design of the NIF ICCS network and how it meets the traffic loads that are expected and the requirements of the Sub-System Design Requirements (SSDRs). This document supersedes the earlier reports entitled Analysis of the National Ignition Facility Network, dated November 6, 1996, and The National Ignition Facility Digital Video and Control Network, dated July 9, 1996. For an overview of the ICCS, refer to the document NIF Integrated Computer Controls System Description (NIF-3738).

  19. Computational open-channel hydraulics for movable-bed problems

    USGS Publications Warehouse

    Lai, Chintu; ,

    1990-01-01

    Notable advances have been made in numerical modeling of unsteady open-channel flow, a major branch of computational hydraulics, since the beginning of the computer age. According to the broader definition and scope of 'computational hydraulics,' the basic concepts and technology of modeling unsteady open-channel flow have been systematically studied previously. As a natural extension, computational open-channel hydraulics for movable-bed problems is addressed in this paper. The introduction of the multimode method of characteristics (MMOC) has made the modeling of this class of unsteady flows both practical and effective. New modeling techniques are developed, thereby shedding light on several aspects of computational hydraulics. Some special features of movable-bed channel-flow simulation are discussed here in the same order as given by the author in the fixed-bed case.

  20. EOS MLS Science Data Processing System: A Description of Architecture and Capabilities

    NASA Technical Reports Server (NTRS)

    Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.

    2006-01-01

    This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.

  1. Toll facilities in the United States : Bridges, Roads, Tunnels, Ferries

    DOT National Transportation Integrated Search

    1993-02-01

    Selected information on toll facilities in the United States open to the public are contained in this report. The information is based on a survey of facilities in operation, financed, or under construction as of January 1, 1993. The information is p...

  2. A Computational Investigation of Gear Windage

    NASA Technical Reports Server (NTRS)

    Hill, Matthew J.; Kunz, Robert F.

    2012-01-01

    A CFD method has been developed for application to gear windage aerodynamics. The goals of this research are to develop and validate numerical and modeling approaches for these systems, to develop physical understanding of the aerodynamics of gear windage loss, including the physics of loss mitigation strategies, and to propose and evaluate new approaches for minimizing loss. Absolute and relative frame CFD simulation, overset gridding, multiphase flow analysis, and sub-layer resolved turbulence modeling were brought to bear in achieving these goals. Several spur gear geometries were studied for which experimental data are available. Various shrouding configurations and free-spinning (no shroud) cases were studied. Comparisons are made with experimental data from the open literature, and data recently obtained in the NASA Glenn Research Center Gear Windage Test Facility. The results show good agreement with experiment. Interrogation of the validative and exploratory CFD results has led, for the first time, to a detailed understanding of the physical mechanisms of gear windage loss, and has led to newly proposed mitigation strategies whose effectiveness is computationally explored.

  3. A convolutional neural network approach to calibrating the rotation axis for X-ray computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta

    This paper presents an algorithm to calibrate the center-of-rotation for X-ray tomography using a machine learning approach, a convolutional neural network (CNN). The algorithm shows excellent accuracy in evaluations on synthetic data with various noise ratios. It is further validated with experimental data from four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. CNNs also have great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.

  4. Computer applications making rapid advances in high throughput microbial proteomics (HTMP).

    PubMed

    Anandkumar, Balakrishna; Haga, Steve W; Wu, Hui-Fen

    2014-02-01

    The last few decades have seen the rise of widely available proteomics tools. From new data acquisition devices, such as MALDI-MS and 2DE, to new database-searching software, these new products have paved the way for high-throughput microbial proteomics (HTMP). These tools are enabling researchers to gain new insights into microbial metabolism, and are opening up new areas of study, such as protein-protein interaction (interactomics) discovery. Computer software is a key part of these emerging fields. The current review considers: 1) software tools for identifying the proteome, such as MASCOT or PDQuest, 2) online databases of proteomes, such as SWISS-PROT, Proteome Web, or the Proteomics Facility of the Pathogen Functional Genomics Resource Center, and 3) software tools for applying proteomic data, such as PSI-BLAST or VESPA. These tools allow for research in network biology, protein identification, functional annotation, target identification/validation, protein expression, protein structural analysis, metabolic pathway engineering and drug discovery.

  5. Wake Flow Simulation of a Vertical Axis Wind Turbine Under the Influence of Wind Shear

    NASA Astrophysics Data System (ADS)

    Mendoza, Victor; Goude, Anders

    2017-05-01

    The current trend of the wind energy industry aims for large scale turbines installed in wind farms. This brings a renewed interest in vertical axis wind turbines (VAWTs) since they have several advantages over the traditional Horizontal Axis Wind Turbines (HAWTs) for mitigating the new challenges. However, operating VAWTs are characterized by complex aerodynamic phenomena, presenting considerable challenges for modeling tools. An accurate and reliable simulation tool for predicting the interaction between the obtained wake of an operating VAWT and the flow in atmospheric open sites is fundamental for optimizing the design and location of wind energy facility projects. The present work studies the wake produced by a VAWT and how it is affected by the surface roughness of the terrain, without considering the effects of the ambient turbulence intensity. This study was carried out using an actuator line model (ALM), and it was implemented using the open-source CFD library OpenFOAM to solve the governing equations and to compute the resulting flow fields. An operational H-shaped VAWT model was tested, for which experimental activity has been performed at an open site north of Uppsala, Sweden. Different terrains with similar inflow velocities have been evaluated. Simulated velocity and vorticity of representative sections have been analyzed. Numerical results were validated using normal force measurements, showing reasonable agreement.
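
    The OpenFOAM implementation itself is not shown in the record; as a hedged sketch of the core actuator line step, namely smearing a blade-element force onto the flow grid with a Gaussian regularization kernel, consider the following (the kernel width eps, positions, and force values are illustrative assumptions):

        import numpy as np

        def gaussian_kernel(distance, eps):
            """Regularization kernel commonly used in actuator line methods."""
            return np.exp(-(distance / eps) ** 2) / (eps ** 3 * np.pi ** 1.5)

        def project_force(point_force, point_pos, cell_centres, eps):
            """Distribute one actuator-point force over nearby grid cells."""
            d = np.linalg.norm(cell_centres - point_pos, axis=1)
            return point_force[None, :] * gaussian_kernel(d, eps)[:, None]

        # Illustrative numbers: one 3-component force, four cell centres.
        cells = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                          [0.0, 0.1, 0.0], [0.1, 0.1, 0.0]])
        f = project_force(np.array([0.0, 50.0, 0.0]),
                          np.array([0.05, 0.05, 0.0]), cells, eps=0.08)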

  6. US/USSR cooperative program in open-cycle MHD electrical power generation: joint test report No. 4. Tests in the U-25B facility: MHD generator tests No. 6 and 7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Picologlou, B F; Batenin, V M

    1981-01-01

    A description of the main results obtained during Tests No. 6 and 7 at the U-25B Facility using the new channel No. 2 is presented. The purpose of these tests was to operate the MHD generator at its design parameters. Described here are new plasma diagnostic devices: a traversing dual electrical probe for determining distribution of electron concentrations, and a traversing probe that includes a pitot tube for measuring total and static pressure, and a light detector for measuring plasma luminescence. Data are presented on heat flux distribution along the channel, the first data of this type obtained for an MHD facility of such size. Results are given of experimental studies of plasma characteristics, gasdynamic, thermal, and electrical MHD channel performance, and temporal and spatial nonuniformities. Typical modes of operation are analyzed by means of local electrical analyses. Computer models are used to obtain predictions for both localized and overall generator characteristics. These theoretical predictions agree closely with the results of the local analyses, as well as with measurements of the overall gasdynamic and electrical characteristics of the generator.

  7. Kauai Test Facility hazards assessment document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swihart, A

    1995-05-01

    The Department of Energy Order 55003A requires facility-specific hazards assessment be prepared, maintained, and used for emergency planning purposes. This hazards assessment document describes the chemical and radiological hazards associated with the Kauai Test Facility, Barking Sands, Kauai, Hawaii. The Kauai Test Facility's chemical and radiological inventories were screened according to potential airborne impact to onsite and offsite individuals. The air dispersion model, ALOHA, estimated pollutant concentrations downwind from the source of a release, taking into consideration the toxicological and physical characteristics of the release site, the atmospheric conditions, and the circumstances of the release. The greatest distance to the Early Severe Health Effects threshold is 4.2 kilometers. The highest emergency classification is a General Emergency at the "Main Complex" and a Site Area Emergency at the Kokole Point Launch Site. The Emergency Planning Zone for the "Main Complex" is 5 kilometers. The Emergency Planning Zone for the Kokole Point Launch Site is the Pacific Missile Range Facility's site boundary.

  8. A Hybrid Tabu Search Heuristic for a Bilevel Competitive Facility Location Model

    NASA Astrophysics Data System (ADS)

    Küçükaydın, Hande; Aras, Necati; Altınel, I. Kuban

    We consider a problem in which a firm or franchise enters a market by locating new facilities where there are existing facilities belonging to a competitor. The firm aims at finding the location and attractiveness of each facility to be opened so as to maximize its profit. The competitor, on the other hand, can react by adjusting the attractiveness of its existing facilities, opening new facilities and/or closing existing ones with the objective of maximizing its own profit. The demand is assumed to be aggregated at certain points in the plane and the facilities of the firm can be located at prespecified candidate sites. We employ Huff's gravity-based rule in modeling the behavior of the customers where the fraction of customers at a demand point that visit a certain facility is proportional to the facility attractiveness and inversely proportional to the distance between the facility site and demand point. We formulate a bilevel mixed-integer nonlinear programming model where the firm entering the market is the leader and the competitor is the follower. In order to find a feasible solution of this model, we develop a hybrid tabu search heuristic which makes use of two exact methods as subroutines: a gradient ascent method and a branch-and-bound algorithm with nonlinear programming relaxation.
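
    To make the patronage rule concrete, here is a minimal sketch of Huff's gravity-based split, where a demand point's custom is divided in proportion to facility attractiveness and inversely to a power of distance (the distance exponent and the numbers below are illustrative, not taken from the paper):

        import numpy as np

        def huff_shares(attractiveness, distances, beta=2.0):
            """Fraction of demand at one point captured by each facility:
            share_j = (A_j / d_j**beta) / sum_k (A_k / d_k**beta)."""
            utility = np.asarray(attractiveness, float) / np.asarray(distances, float) ** beta
            return utility / utility.sum()

        # One demand point, two competing facilities (illustrative numbers).
        print(huff_shares(attractiveness=[10.0, 6.0], distances=[2.0, 1.5]))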

  9. Ab initio molecular orbital and density functional studies on the ring-opening reaction of oxetene.

    PubMed

    Jayaprakash, S; Jeevanandam, Jebakumar; Subramani, K

    2014-11-01

    The electrocyclic ring-opening (ERO) reaction of 2H-oxete (oxetene) has been studied computationally in the gas phase and the ring-opening barrier has been computed. When comparing the ERO reaction of oxetene with that of the parent hydrocarbon (cyclobutene), the ring opening of cyclobutene is found to exhibit pericyclic behavior while oxetene shows a mildly pseudopericyclic nature. Computation of the nucleus-independent chemical shift (NICS) of oxetene adds evidence for the pseudopericyclic behavior of oxetene. By locking the lone pair of electrons through hydrogen bonding, the pseudopericyclic nature of the ring opening of oxetene is converted into a pericyclic one. A CASSCF(5,6)/6-311+G** computation was carried out to understand the extent of involvement of the lone pair of electrons during the course of the reaction. A CR-CCSD(T)/6-311+G** computation was performed to assess the energies of the reactant, transition state and product more accurately.

  10. OpenACC performance for simulating 2D radial dambreak using FVM HLLE flux

    NASA Astrophysics Data System (ADS)

    Gunawan, P. H.; Pahlevi, M. R.

    2018-03-01

    The aim of this paper is to investigate the performance of the OpenACC platform for computing a 2D radial dambreak. Here, the shallow water equations are used to describe and simulate the 2D radial dambreak with a finite volume method (FVM) using the HLLE flux. OpenACC is a parallel computing platform based on GPU cores. In this research, the platform is used to minimize the computational time of the numerical scheme. The results show that using OpenACC the computational time is reduced. For the dry and wet radial dambreak simulations using 2048 grids, the parallel computational times are 575.984 s and 584.830 s, respectively. These results show the success of OpenACC when compared with the serial times of the dry and wet radial dambreak simulations, which are 28047.500 s and 29269.40 s, respectively.
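
    The paper's GPU code is not reproduced in the record; as a hedged, serial illustration of the HLLE flux itself for the shallow water equations in 1D with wet states (the 2D radial geometry and the OpenACC directives are omitted here), consider:

        import numpy as np

        G = 9.81  # gravitational acceleration [m/s^2]

        def physical_flux(h, hu):
            """Flux of the 1D shallow water equations for state (h, hu)."""
            u = hu / h
            return np.array([hu, hu * u + 0.5 * G * h ** 2])

        def hlle_flux(hL, huL, hR, huR):
            """HLLE numerical flux across one cell interface (wet states assumed)."""
            uL, uR = huL / hL, huR / hR
            cL, cR = np.sqrt(G * hL), np.sqrt(G * hR)
            sL = min(0.0, uL - cL, uR - cR)   # left wave-speed estimate
            sR = max(0.0, uL + cL, uR + cR)   # right wave-speed estimate
            FL, FR = physical_flux(hL, huL), physical_flux(hR, huR)
            dU = np.array([hR - hL, huR - huL])
            return (sR * FL - sL * FR + sL * sR * dU) / (sR - sL)

        # Dam-break-like interface: deep water on the left, shallow on the right.
        print(hlle_flux(hL=2.0, huL=0.0, hR=0.5, huR=0.0))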

  11. Variability of hydrologic regimes and morphology in constructed open-ditch channels

    USGS Publications Warehouse

    Strock, J.S.; Magner, J.A.; Richardson, W.B.; Sadowsky, M.J.; Sands, G.R.; Venterea, R.T.; ,

    2004-01-01

    Open-ditch ecosystems are potential transporters of considerable loads of nutrients, sediment, pathogens and pesticides from direct inflow from agricultural land to small streams and larger rivers. Our objective was to compare hydrology and channel morphology between two experimental open-ditch channels. An open-ditch research facility incorporating a paired design was constructed during 2002 near Lamberton, MN. A 200-m reach of existing drainage channel was converted into a system of four parallel channels. The facility was equipped with water level control devices and instrumentation for flow monitoring and water sample collection on the upstream and downstream ends of the system. Hydrographs from simulated flow during year one indicated that the paired open-ditch channels responded similarly to changes in inflow. Variability in hydrologic response between open ditches was attributed to differences in open-ditch channel bottom elevation and vegetation density. No chemical, biological, or atmospheric measurements were made during 2003. Potential future benefits of this research include improved biological diversity and integrity of open-ditch ecosystems, reduced flood peaks and increased flow during critical low-flow periods, improved and more efficient nitrogen retention within the open-ditch ecosystem, and decreased maintenance costs associated with a reduced frequency of open-ditch maintenance.

  12. How Collecting and Freely Sharing Geophysical Data Broadly Benefits Society

    NASA Astrophysics Data System (ADS)

    Frassetto, A.; Woodward, R.; Detrick, R. S.

    2017-12-01

    Valuable but often unintended observations of environmental and human-related processes have resulted from open sharing of multidisciplinary geophysical observations collected over the past 33 years. These data, intended to fuel fundamental academic research, are part of the Incorporated Research Institutions for Seismology (IRIS), which is sponsored by the National Science Foundation and has provided a community science facility supporting earthquake science and related disciplines since 1984. These community facilities have included arrays of geophysical instruments operated for EarthScope, an NSF-sponsored science initiative designed to understand the architecture and evolution of the North American continent, as well as the Global Seismographic Network, Greenland Ice Sheet Monitoring Network, a repository of data collected around the world, and other community assets. All data resulting from this facility have been made openly available to support researchers across any field of study and this has expanded the impact of these data beyond disciplinary boundaries. This presentation highlights vivid examples of how basic research activities using open data, collected as part of a community facility, can inform our understanding of manmade earthquakes, geomagnetic hazards, climate change, and illicit testing of nuclear weapons.

  13. 36 CFR 1280.92 - When are the Presidential library museums open to the public?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... library museums open to the public? 1280.92 Section 1280.92 Parks, Forests, and Public Property NATIONAL... Use of Facilities in Presidential Libraries? § 1280.92 When are the Presidential library museums open to the public? (a) The Presidential library museums are open every day except Thanksgiving, December...

  14. 36 CFR 1280.92 - When are the Presidential library museums open to the public?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... library museums open to the public? 1280.92 Section 1280.92 Parks, Forests, and Public Property NATIONAL... Use of Facilities in Presidential Libraries? § 1280.92 When are the Presidential library museums open to the public? (a) The Presidential library museums are open every day except Thanksgiving, December...

  15. 36 CFR 1280.92 - When are the Presidential library museums open to the public?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... library museums open to the public? 1280.92 Section 1280.92 Parks, Forests, and Public Property NATIONAL... Use of Facilities in Presidential Libraries? § 1280.92 When are the Presidential library museums open to the public? (a) The Presidential library museums are open every day except Thanksgiving, December...

  16. 40 CFR 265.1056 - Standards: Open-ended valves or lines.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., STORAGE, AND DISPOSAL FACILITIES Air Emission Standards for Equipment Leaks § 265.1056 Standards: Open-ended valves or lines. (a)(1) Each open-ended valve or line shall be equipped with a cap, blind flange... 40 Protection of Environment 25 2010-07-01 2010-07-01 false Standards: Open-ended valves or lines...

  17. NREL Open House Features Energy Activities, Tours

    Science.gov Websites

    The National Renewable Energy Laboratory (NREL) will open its doors 10 a.m. to 3 p.m., Saturday, July 24, for tours of its research facilities and interactive exhibits at the Visitors Center. The Open House

  18. The UK Human Genome Mapping Project online computing service.

    PubMed

    Rysavy, F R; Bishop, M J; Gibbs, G P; Williams, G W

    1992-04-01

    This paper presents an overview of computing and networking facilities developed by the Medical Research Council to provide online computing support to the Human Genome Mapping Project (HGMP) in the UK. The facility is connected to a number of other computing facilities in various centres of genetics and molecular biology research excellence, either directly via high-speed links or through national and international wide-area networks. The paper describes the design and implementation of the current system, a 'client/server' network of Sun, IBM, DEC and Apple servers, gateways and workstations. A short outline of online computing services currently delivered by this system to the UK human genetics research community is also provided. More information about the services and their availability can be obtained by contacting the UK HGMP-RC directly.

  19. KSC-2013-2975

    NASA Image and Video Library

    2013-06-28

    CAPE CANAVERAL, Fla. -- At the Kennedy Space Center Visitor Complex in Florida, center director Bob Cabana speaks to news media representatives during the opening of the 90,000-square-foot "Space Shuttle Atlantis" facility. The new $100 million facility includes interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit formally opened to the public on June 29, 2013.Photo credit: NASA/Jim Grossmann

  20. Cloud CPFP: a shotgun proteomics data analysis pipeline using cloud and high performance computing.

    PubMed

    Trudgian, David C; Mirzaei, Hamid

    2012-12-07

    We have extended the functionality of the Central Proteomics Facilities Pipeline (CPFP) to allow use of remote cloud and high performance computing (HPC) resources for shotgun proteomics data processing. CPFP has been modified to include modular local and remote scheduling for data processing jobs. The pipeline can now be run on a single PC or server, a local cluster, a remote HPC cluster, and/or the Amazon Web Services (AWS) cloud. We provide public images that allow easy deployment of CPFP in its entirety in the AWS cloud. This significantly reduces the effort necessary to use the software, and allows proteomics laboratories to pay for compute time ad hoc, rather than obtaining and maintaining expensive local server clusters. Alternatively the Amazon cloud can be used to increase the throughput of a local installation of CPFP as necessary. We demonstrate that cloud CPFP allows users to process data at higher speed than local installations but with similar cost and lower staff requirements. In addition to the computational improvements, the web interface to CPFP is simplified, and other functionalities are enhanced. The software is under active development at two leading institutions and continues to be released under an open-source license at http://cpfp.sourceforge.net.

  1. Aeroacoustic Simulation of Nose Landing Gear on Adaptive Unstructured Grids With FUN3D

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Park, Michael A.; Lockard, David P.

    2013-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley's closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D, developed at NASA Langley Research Center, is used to compute the unsteady flow field for this configuration. Starting with a coarse grid, a series of successively finer grids were generated using the adaptive gridding methodology available in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. In general, the correlation with the experimental data improves with grid refinement. A similar trend is observed for sound pressure levels obtained by using these CFD solutions as input to a Ffowcs Williams-Hawkings noise propagation code to compute the farfield noise levels. In general, the numerical solutions obtained on adapted grids compare well with the hand-tuned enriched fine grid solutions and experimental data. In addition, the grid adaptation strategy discussed here simplifies the grid generation process, and results in improved computational efficiency of CFD simulations.

  2. KSC-2013-2996

    NASA Image and Video Library

    2013-06-29

    CAPE CANAVERAL, Fla. -- At the Kennedy Space Center Visitor Complex in Florida, CNN correspondent John Zarrella counted down for the ceremonial opening of the new "Space Shuttle Atlantis" facility. Smoke billows near a full-scale set of space shuttle twin solid rocket boosters and external fuel tank at the entrance to the exhibit building. Guests may walk beneath the 184-foot-tall boosters and tank as they enter the facility. The new $100 million facility includes interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit formally opened to the public on June 29, 2013.Photo credit: NASA/Jim Grossmann

  3. NFFA-Europe: enhancing European competitiveness in nanoscience research and innovation (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Carsughi, Flavio; Fonseca, Luis

    2017-06-01

    NFFA-EUROPE is a European open-access resource for experimental and theoretical nanoscience and sets out a platform to carry out comprehensive projects for multidisciplinary research at the nanoscale, extending from synthesis to nanocharacterization to theory and numerical simulation. Advanced infrastructures specialized in growth, nano-lithography, nano-characterization, theory and simulation, and fine analysis with synchrotron, FEL, and neutron radiation sources are integrated in a multi-site combination to develop frontier research on methods for reproducible nanoscience research and to enable European and international researchers from diverse disciplines to carry out advanced proposals impacting science and innovation. NFFA-EUROPE will enable coordinated access to infrastructures covering different aspects of nanoscience research that is not currently available at single specialized sites, without duplicating their specific scopes. Approved user projects will have access to the best-suited instruments and support competences for performing the research, including access to analytical large-scale facilities, theory and simulation, and high-performance computing facilities. Access is offered free of charge to European users, who will receive a financial contribution toward their travel, accommodation, and subsistence costs. User access will include several "installations" and will be coordinated through a single entry-point portal that will activate an advanced user-infrastructure dialogue to build up a personalized access programme with an increasing return on science and innovation production. NFFA-EUROPE's own research activity will address key bottlenecks of nanoscience research: nanostructure traceability, protocol reproducibility, in-operando nano-manipulation and analysis, and open data.

  4. Simulation of partially coherent light propagation using parallel computing devices

    NASA Astrophysics Data System (ADS)

    Magalhães, Tiago C.; Rebordão, José M.

    2017-08-01

    Light acquires or loses coherence, and coherence is one of the few optical observables. Spectra can be derived from coherence functions, and understanding any interferometric experiment also relies upon coherence functions. Beyond the two limiting cases (full coherence or incoherence) the coherence of light is always partial and it changes with propagation. We have implemented a code to compute the propagation of partially coherent light from the source plane to the observation plane using parallel computing devices (PCDs). In this paper, we restrict the propagation to free space only. To this end, we used the Open Computing Language (OpenCL) and the open-source toolkit PyOpenCL, which gives access to OpenCL parallel computation through Python. To test our code, we chose two coherence source models: an incoherent source and a Gaussian Schell-model source. In the former case, we considered two different source shapes: circular and rectangular. The results were compared to the theoretical values. Our implemented code allows one to choose between the PyOpenCL implementation and a standard one, i.e., using the CPU only. To test the computation time for each implementation (PyOpenCL and standard), we used several computer systems with different CPUs and GPUs. We used powers of two for the dimensions of the cross-spectral density matrix (e.g. 32^4, 64^4) and a significant speed increase is observed in the PyOpenCL implementation when compared to the standard one. This can be an important tool for studying new source models.
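    As a rough illustration of the host/device offload pattern the abstract describes (copy arrays to the device, run an OpenCL kernel, copy results back), the following minimal PyOpenCL sketch computes per-sample intensities |E|^2; it is far simpler than the paper's cross-spectral density propagation and all names are illustrative.

```python
# Minimal PyOpenCL sketch of the host/device pattern described above; the kernel
# here only computes |E|^2 for a sampled field and is far simpler than the
# paper's cross-spectral density propagation.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()          # pick an available PCD (GPU or CPU)
queue = cl.CommandQueue(ctx)

kernel_src = """
__kernel void intensity(__global const float2 *field, __global float *out) {
    int i = get_global_id(0);
    float2 e = field[i];
    out[i] = e.x * e.x + e.y * e.y;     // |E|^2 from real/imag parts
}
"""
prg = cl.Program(ctx, kernel_src).build()

n = 64 * 64
field = np.random.randn(n, 2).astype(np.float32)   # interleaved re/im samples
out = np.empty(n, dtype=np.float32)

mf = cl.mem_flags
field_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=field)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

prg.intensity(queue, (n,), None, field_buf, out_buf)
cl.enqueue_copy(queue, out, out_buf)
print(out[:4])
```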

  5. Multi-source Geospatial Data Analysis with Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Erickson, T.

    2014-12-01

    The Google Earth Engine platform is a cloud computing environment for data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog is a multi-petabyte archive of georeferenced datasets that include images from Earth observing satellite and airborne sensors (examples: USGS Landsat, NASA MODIS, USDA NAIP), weather and climate datasets, and digital elevation models. Earth Engine supports both a just-in-time computation model that enables real-time preview and debugging during algorithm development for open-ended data exploration, and a batch computation mode for applying algorithms over large spatial and temporal extents. The platform automatically handles many traditionally-onerous data management tasks, such as data format conversion, reprojection, and resampling, which facilitates writing algorithms that combine data from multiple sensors and/or models. Although the primary use of Earth Engine, to date, has been the analysis of large Earth observing satellite datasets, the computational platform is generally applicable to a wide variety of use cases that require large-scale geospatial data analyses. This presentation will focus on how Earth Engine facilitates the analysis of geospatial data streams that originate from multiple separate sources (and often communities) and how it enables collaboration during algorithm development and data exploration. The talk will highlight current projects/analyses that are enabled by this functionality. https://earthengine.google.org
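    A minimal sketch of the multi-source analysis style described above, using the Earth Engine Python API to combine a Landsat 8 composite with an SRTM elevation model; the dataset IDs and test region are illustrative and assume access to the standard public catalog with an authenticated account.

```python
# Minimal Earth Engine Python API sketch of multi-source analysis: combine a
# Landsat 8 composite with an SRTM elevation model. Dataset IDs and the test
# point are illustrative; an authenticated Earth Engine account is assumed.
import ee

ee.Initialize()

region = ee.Geometry.Point([-122.26, 37.87]).buffer(5000)

# Median Landsat 8 TOA composite for 2020 over the region.
l8 = (ee.ImageCollection("LANDSAT/LC08/C02/T1_TOA")
      .filterBounds(region)
      .filterDate("2020-01-01", "2020-12-31")
      .median())

ndvi = l8.normalizedDifference(["B5", "B4"]).rename("NDVI")
elevation = ee.Image("USGS/SRTMGL1_003")

# Mean NDVI and elevation over the region, computed server-side.
stats = ndvi.addBands(elevation).reduceRegion(
    reducer=ee.Reducer.mean(), geometry=region, scale=30)
print(stats.getInfo())
```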

  6. Office of Science User Facilities Summary Report, Fiscal Year 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2015-01-01

    The U.S. Department of Energy Office of Science provides the Nation’s researchers with worldclass scientific user facilities to propel the U.S. to the forefront of science and innovation. A user facility is a federally sponsored research facility available for external use to advance scientific or technical knowledge under the following conditions: open, accessible, free, collaborative, competitive, and unique.

  7. Computational Science at the Argonne Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Romero, Nichols

    2014-03-01

    The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.

  8. OMPC: an Open-Source MATLAB®-to-Python Compiler

    PubMed Central

    Jurica, Peter; van Leeuwen, Cees

    2008-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com. PMID:19225577
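    The snippet below is only a hand-written illustration of the kind of MATLAB-to-numpy correspondence that such syntax adaptation targets; it is not OMPC's generated code or API.

```python
# Illustration of the kind of MATLAB-to-Python correspondence OMPC automates;
# this is a hand translation for clarity, not OMPC's actual generated code or API.
import numpy as np

# MATLAB:
#   function y = moving_mean(x, w)
#       k = ones(1, w) / w;
#       y = conv(x, k, 'same');
#   end
def moving_mean(x, w):
    """Centered moving average, mirroring the MATLAB conv(..., 'same') call."""
    k = np.ones(w) / w
    return np.convolve(x, k, mode="same")

# MATLAB:  y = moving_mean(1:10, 3);
y = moving_mean(np.arange(1, 11), 3)
print(y)
```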

  9. The FOSS GIS Workbench on the GFZ Load Sharing Facility compute cluster

    NASA Astrophysics Data System (ADS)

    Löwe, P.; Klump, J.; Thaler, J.

    2012-04-01

    Compute clusters can be used as GIS workbenches; their wealth of resources allows us to take on geocomputation tasks which exceed the limitations of smaller systems. Harnessing these capabilities requires a Geographic Information System (GIS) able to utilize the available cluster configuration/architecture and a sufficient degree of user friendliness to allow for wide application. In this paper we report on the first successful porting of GRASS GIS, the oldest and largest Free Open Source (FOSS) GIS project, onto a compute cluster using Platform Computing's Load Sharing Facility (LSF). In 2008, GRASS 6.3 was installed on the GFZ compute cluster, which at that time comprised 32 nodes. The interaction with the GIS was limited to the command line interface, which required further development to encapsulate the GRASS GIS business layer to facilitate its use by users not familiar with GRASS GIS. During the summer of 2011, multiple versions of GRASS GIS (v 6.4, 6.5 and 7.0) were installed on the upgraded GFZ compute cluster, now consisting of 234 nodes with 480 CPUs providing 3084 cores. The GFZ compute cluster currently offers 19 different processing queues with varying hardware capabilities and priorities, allowing for fine-grained scheduling and load balancing. After successful testing of core GIS functionalities, including the graphical user interface, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues. The mechanisms are based on earlier work by NETELER et al. (2008). A first application of the new GIS functionality was the generation of maps of simulated tsunamis in the Mediterranean Sea for the Tsunami Atlas of the FP-7 TRIDEC Project (www.tridec-online.eu). For this, up to 500 processing nodes were used in parallel. Further trials included the processing of geometrically complex problems, requiring significant amounts of processing time. The GIS cluster successfully completed all these tasks, with processing times lasting up to a full 20 CPU days. The deployment of GRASS GIS on a compute cluster allows our users to tackle GIS tasks previously out of reach of single workstations. In addition, this GRASS GIS cluster implementation will be made available to other users at GFZ in the course of 2012. It will thus become a research utility in the sense of "Software as a Service" (SaaS) and can be seen as our first step towards building a GFZ corporate cloud service.

  10. Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center

    NASA Astrophysics Data System (ADS)

    Adakin, A.; Anisenkov, A.; Belov, S.; Chubarov, D.; Kalyuzhny, V.; Kaplin, V.; Korol, A.; Kuchin, N.; Lomakin, S.; Nikultsev, V.; Skovpen, K.; Sukharev, A.; Zaytsev, A.

    2012-12-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of Russian Academy of Sciences including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of computing farms involved in its research field, currently we've got several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, Siberian Supercomputer Center (ICM&MG), and a Grid Computing Facility of BINP. A dedicated optical network with the initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.

  11. A heterogeneous computing accelerated SCE-UA global optimization method using OpenMP, OpenCL, CUDA, and OpenACC.

    PubMed

    Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Liang, Ke; Hong, Yang

    2017-10-01

    The shuffled complex evolution optimization developed at the University of Arizona (SCE-UA) has been successfully applied in various kinds of scientific and engineering optimization applications, such as hydrological model parameter calibration, for many years. The algorithm possesses good global optimality, convergence stability and robustness. However, benchmark and real-world applications reveal the poor computational efficiency of the SCE-UA. This research aims at the parallelization and acceleration of the SCE-UA method based on powerful heterogeneous computing technology. The parallel SCE-UA is implemented on Intel Xeon multi-core CPU (by using OpenMP and OpenCL) and NVIDIA Tesla many-core GPU (by using OpenCL, CUDA, and OpenACC). The serial and parallel SCE-UA were tested based on the Griewank benchmark function. Comparison results indicate the parallel SCE-UA significantly improves computational efficiency compared to the original serial version. The OpenCL implementation obtains the best overall acceleration results; however, it also has the most complex source code. The parallel SCE-UA has bright prospects to be applied in real-world applications.
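    The paper's implementations use OpenMP, OpenCL, CUDA, and OpenACC; purely as a hedged illustration of the underlying idea (evaluating many objective-function calls concurrently), the Python sketch below parallelizes Griewank evaluations over a population with the standard-library multiprocessing pool.

```python
# Hedged sketch of the core idea -- evaluating many objective-function calls in
# parallel -- using Python's multiprocessing on the Griewank benchmark. The
# paper's own implementations use OpenMP/OpenCL/CUDA/OpenACC, not this code.
import numpy as np
from multiprocessing import Pool

def griewank(x):
    """Griewank function: f(x) = 1 + sum(x_i^2)/4000 - prod(cos(x_i/sqrt(i)))."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    population = rng.uniform(-600, 600, size=(1024, 30))   # SCE-UA-style population
    with Pool() as pool:
        fitness = pool.map(griewank, list(population))
    print("best objective:", min(fitness))
```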

  12. A History of Educational Facilities Laboratories (EFL)

    ERIC Educational Resources Information Center

    Marks, Judy

    2009-01-01

    The Educational Facilities Laboratories (EFL), an independent research organization established by the Ford Foundation, opened its doors in 1958 under the direction of Harold B. Gores, a distinguished educator. Its purpose was to help schools and colleges maximize the quality and utility of their facilities, stimulate research, and disseminate…

  13. 25 CFR 247.13 - Are the facilities available year around?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... tribe can contact the Area Director and ask that the sites be opened. The Area Director will work... RIVER TREATY FISHING ACCESS SITES § 247.13 Are the facilities available year around? (a) The Area... necessary. Before closing the facilities, the Area Director will consult with delegated tribal...

  14. The Role of Standards in Cloud-Computing Interoperability

    DTIC Science & Technology

    2012-10-01

    services are not shared outside the organization. CloudStack, Eucalyptus, HP, Microsoft, OpenStack, Ubuntu, and VMWare provide tools for building... center requirements • Developing usage models for cloud vendors • Independent IT consortium OpenStack http://www.openstack.org • Open-source... software for running private clouds • Currently consists of three core software projects: OpenStack Compute (Nova), OpenStack Object Storage (Swift

  15. KSC-2013-2994

    NASA Image and Video Library

    2013-06-29

    CAPE CANAVERAL, Fla. -- At the Kennedy Space Center Visitor Complex in Florida, CNN correspondent John Zarrella counts down for the ceremonial opening of the new "Space Shuttle Atlantis" facility. Ready to press buttons to mark the opening of the new exhibit, from the left, are Charlie Bolden, NASA administrator, Bob Cabana, Kennedy director, Rick Abramson, Delaware North Parks and Resorts president, and Bill Moore, Delaware North Parks and Resorts chief operating officer. The new $100 million facility includes interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit formally opened to the public on June 29, 2013. Photo credit: NASA/Jim Grossmann

  16. Dissipative quantum computing with open quantum walks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinayskiy, Ilya; Petruccione, Francesco

    An open quantum walk approach to the implementation of a dissipative quantum computing scheme is presented. The formalism is demonstrated for the example of an open quantum walk implementation of a 3 qubit quantum circuit consisting of 10 gates.

  17. Supply chain network design problem for a new market opportunity in an agile manufacturing system

    NASA Astrophysics Data System (ADS)

    Babazadeh, Reza; Razmi, Jafar; Ghodsi, Reza

    2012-08-01

    The characteristics of today's competitive environment, such as the speed with which products are designed, manufactured, and distributed, and the need for higher responsiveness and lower operational cost, are forcing companies to search for innovative ways to do business. The concept of agile manufacturing has been proposed in response to these challenges for companies. This paper copes with the strategic and tactical level decisions in agile supply chain network design. An efficient mixed-integer linear programming model that is able to consider the key characteristics of agile supply chain such as direct shipments, outsourcing, different transportation modes, discount, alliance (process and information integration) between opened facilities, and maximum waiting time of customers for deliveries is developed. In addition, in the proposed model, the capacity of facilities is determined as decision variables, which are often assumed to be fixed. Computational results illustrate that the proposed model can be applied as a power tool in agile supply chain network design as well as in the integration of strategic decisions with tactical decisions.
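    As a toy sketch of the modelling style described (binary open/close decisions plus continuous shipment variables), the following PuLP model solves a tiny facility-location instance; all data are invented, and the paper's agile-specific features (transportation modes, discounts, alliances, waiting-time limits) are omitted.

```python
# Toy facility-location MILP in PuLP, sketching the style of model described
# above (binary open/close decisions plus shipment variables). All data are
# invented and the agile-specific features of the paper's model are omitted.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

plants = ["P1", "P2"]
customers = ["C1", "C2", "C3"]
fixed_cost = {"P1": 500, "P2": 700}
ship_cost = {("P1", "C1"): 4, ("P1", "C2"): 6, ("P1", "C3"): 9,
             ("P2", "C1"): 7, ("P2", "C2"): 3, ("P2", "C3"): 4}
demand = {"C1": 80, "C2": 120, "C3": 60}
capacity = {"P1": 200, "P2": 200}

prob = LpProblem("facility_location", LpMinimize)
open_ = LpVariable.dicts("open", plants, cat=LpBinary)
ship = LpVariable.dicts("ship", list(ship_cost.keys()), lowBound=0)

# Objective: fixed opening costs plus shipment costs.
prob += (lpSum(fixed_cost[p] * open_[p] for p in plants)
         + lpSum(ship_cost[p, c] * ship[p, c] for p, c in ship_cost))

for c in customers:                       # meet each customer's demand
    prob += lpSum(ship[p, c] for p in plants) == demand[c]
for p in plants:                          # ship only from opened plants
    prob += lpSum(ship[p, c] for c in customers) <= capacity[p] * open_[p]

prob.solve()
print({p: value(open_[p]) for p in plants})
```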

  18. Optimization of groundwater artificial recharge systems using a genetic algorithm: a case study in Beijing, China

    NASA Astrophysics Data System (ADS)

    Hao, Qichen; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Huang, Linxian

    2018-05-01

    An optimization approach is used for the operation of groundwater artificial recharge systems in an alluvial fan in Beijing, China. The optimization model incorporates a transient groundwater flow model, which allows for simulation of the groundwater response to artificial recharge. The facilities' operation with regard to recharge rates is formulated as a nonlinear programming problem to maximize the volume of surface water recharged into the aquifers under specific constraints. This optimization problem is solved by the parallel genetic algorithm (PGA) based on OpenMP, which could substantially reduce the computation time. To solve the PGA with constraints, the multiplicative penalty method is applied. In addition, the facilities' locations are implicitly determined on the basis of the results of the recharge-rate optimizations. Two scenarios are optimized and the optimal results indicate that the amount of water recharged into the aquifers will increase without exceeding the upper limits of the groundwater levels. Optimal operation of this artificial recharge system can also contribute to the more effective recovery of the groundwater storage capacity.
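    The sketch below illustrates a real-coded genetic algorithm with a multiplicative penalty on constraint violation, standing in for the paper's parallel GA coupled to a transient groundwater model (which is not reproduced); the objective, constraint, and all parameters are invented for illustration.

```python
# Hedged sketch of a genetic algorithm with a multiplicative penalty, standing in
# for the paper's parallel GA coupled to a groundwater model (not reproduced here).
# The objective, constraint, and all parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n_facilities, pop_size, generations = 5, 60, 200
max_rate = 1.0e4            # per-facility recharge limit (m^3/day), assumption
total_cap = 3.0e4           # stand-in for the groundwater-level constraint

def fitness(rates):
    """Total recharge, multiplied by a penalty < 1 when the cap is exceeded."""
    total = rates.sum()
    violation = max(0.0, total - total_cap)
    penalty = 1.0 / (1.0 + violation / total_cap)   # multiplicative penalty
    return total * penalty

pop = rng.uniform(0, max_rate, size=(pop_size, n_facilities))
for _ in range(generations):
    scores = np.array([fitness(ind) for ind in pop])
    # Tournament selection: each row picks the fitter of two random individuals.
    idx = rng.integers(0, pop_size, size=(pop_size, 2))
    parents = pop[np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # Uniform crossover with a shuffled set of mates, then Gaussian mutation.
    mates = parents[rng.permutation(pop_size)]
    mask = rng.random(pop.shape) < 0.5
    children = np.where(mask, parents, mates)
    children += rng.normal(0, 0.02 * max_rate, size=pop.shape)
    pop = np.clip(children, 0, max_rate)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best rates:", best.round(1), "objective:", round(fitness(best), 1))
```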

  19. Range Process Simulation Tool

    NASA Technical Reports Server (NTRS)

    Phillips, Dave; Haas, William; Barth, Tim; Benjamin, Perakath; Graul, Michael; Bagatourova, Olga

    2005-01-01

    Range Process Simulation Tool (RPST) is a computer program that assists managers in rapidly predicting and quantitatively assessing the operational effects of proposed technological additions to, and/or upgrades of, complex facilities and engineering systems such as the Eastern Test Range. Originally designed for application to space transportation systems, RPST is also suitable for assessing effects of proposed changes in industrial facilities and large organizations. RPST follows a model-based approach that includes finite-capacity schedule analysis and discrete-event process simulation. A component-based, scalable, open architecture makes RPST easily and rapidly tailorable for diverse applications. Specific RPST functions include: (1) definition of analysis objectives and performance metrics; (2) selection of process templates from a process-template library; (3) configuration of process models for detailed simulation and schedule analysis; (4) design of operations-analysis experiments; (5) schedule and simulation-based process analysis; and (6) optimization of performance by use of genetic algorithms and simulated annealing. The main benefits afforded by RPST are provision of information that can be used to reduce costs of operation and maintenance, and the capability for affordable, accurate, and reliable prediction and exploration of the consequences of many alternative proposed decisions.
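    RPST itself is not shown here; as a hedged illustration of finite-capacity, discrete-event process simulation of a facility workflow, the following SimPy sketch queues campaigns for a single shared resource, with invented names and durations.

```python
# Minimal discrete-event simulation sketch in SimPy, illustrating the kind of
# finite-capacity process analysis described above. This is not RPST itself;
# the process names, durations, and capacities are invented.
import random
import simpy

random.seed(1)

def vehicle_flow(env, name, pad):
    """One launch campaign: wait for a pad, then occupy it for processing."""
    arrive = env.now
    with pad.request() as req:
        yield req                                   # queue for the shared pad
        wait = env.now - arrive
        processing = random.uniform(20, 40)         # days on the pad
        yield env.timeout(processing)
        print(f"{name}: waited {wait:.1f} d, processed {processing:.1f} d")

def campaign_generator(env, pad):
    for i in range(6):
        env.process(vehicle_flow(env, f"vehicle-{i}", pad))
        yield env.timeout(random.expovariate(1 / 15))   # new campaign ~every 15 days

env = simpy.Environment()
pad = simpy.Resource(env, capacity=1)               # single finite-capacity pad
env.process(campaign_generator(env, pad))
env.run(until=365)
```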

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Ann E; Barker, Ashley D; Bland, Arthur S Buddy

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these we report the 300 in this review that are consistent with guidance provided. Scientific achievements by OLCF users cut across all range scales from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to do billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation billions of gallons of fuel.

  1. Dynamic partitioning as a way to exploit new computing paradigms: the cloud use case.

    NASA Astrophysics Data System (ADS)

    Ciaschini, Vincenzo; Dal Pra, Stefano; dell'Agnello, Luca

    2015-12-01

    The WLCG community and many groups in the HEP community have based their computing strategy on the Grid paradigm, which proved successful and still ensures its goals. However, Grid technology has not spread much over other communities; in the commercial world, the cloud paradigm is the emerging way to provide computing services. WLCG experiments aim to integrate their existing computing model with cloud deployments and to take advantage of so-called opportunistic resources (including HPC facilities), which are usually not Grid compliant. One missing feature in the most common cloud frameworks is the concept of a job scheduler, which plays a key role in a traditional computing centre by enabling fairshare-based access to the resources for the experiments in a scenario where demand greatly outstrips availability. At CNAF we are investigating the possibility of accessing the Tier-1 computing resources as an OpenStack-based cloud service. The system, exploiting the dynamic partitioning mechanism already being used to enable multicore computing, allowed us to avoid a static splitting of the computing resources in the Tier-1 farm while permitting a share-friendly approach. The hosts in a dynamically partitioned farm may be moved to or from the partition, according to suitable policies for request and release of computing resources. Nodes being requested in the partition switch their role and become available to play a different one. In the cloud use case, hosts may switch from acting as a worker node in the batch-system farm to being a cloud compute node made available to tenants. In this paper we describe the dynamic partitioning concept, its implementation, and its integration with our current batch system, LSF.

  2. Using IKAROS as a data transfer and management utility within the KM3NeT computing model

    NASA Astrophysics Data System (ADS)

    Filippidis, Christos; Cotronis, Yiannis; Markou, Christos

    2016-04-01

    KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that - located at the bottom of the Mediterranean Sea - will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. IKAROS is a framework that enables creating scalable storage formations on demand and helps address several limitations that current file systems face when dealing with very large scale infrastructures. It enables creating ad hoc nearby storage formations and can use a huge number of I/O nodes in order to increase the available bandwidth (I/O and network). IKAROS unifies remote and local access in the overall data flow by permitting direct access to each I/O node. In this way we can handle the overall data flow at the network layer, limiting the interaction with the operating system. This approach allows virtually connecting, at the user level, the several different computing facilities used (grids, clouds, HPC systems, data centers, local computing clusters, and personal storage devices), on demand, based on need, by using well-known standards and protocols such as HTTP.
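    The following is a hedged sketch, not IKAROS's actual API, of the direct HTTP access to individual I/O nodes that the abstract describes; the node URLs and paths are placeholders.

```python
# Hedged sketch of direct HTTP access to individual I/O nodes, as described
# above. This is not IKAROS's actual API; node URLs and paths are placeholders.
import requests

io_nodes = [
    "http://ionode-01.example.org:8080",   # placeholder I/O node endpoints
    "http://ionode-02.example.org:8080",
]

def put_chunk(node, path, data):
    """Write one data chunk directly to a chosen I/O node over HTTP."""
    r = requests.put(f"{node}/{path}", data=data, timeout=30)
    r.raise_for_status()

def get_chunk(node, path):
    """Read a chunk back from the same node, streaming to limit memory use."""
    r = requests.get(f"{node}/{path}", stream=True, timeout=30)
    r.raise_for_status()
    return b"".join(r.iter_content(chunk_size=1 << 20))

# Spread chunks of one (made-up) detector file across the available nodes.
chunks = [b"raw-hits-0", b"raw-hits-1"]
for i, chunk in enumerate(chunks):
    put_chunk(io_nodes[i % len(io_nodes)], f"km3net/run001/chunk{i}", chunk)
```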

  3. Aeroacoustic Simulations of a Nose Landing Gear Using FUN3D on Pointwise Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Rhoads, John; Lockard, David P.

    2015-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed (PDCC) nose landing gear configuration that was tested in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D is used to compute the unsteady flow field for this configuration. Mixed-element grids generated using the Pointwise™ grid generation software are used for these simulations. Particular care is taken to ensure quality cells and proper resolution in critical areas of interest in an effort to minimize errors introduced by numerical artifacts. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these simulations. Solutions are also presented for a wall function model coupled to the standard turbulence model. Time-averaged and instantaneous solutions obtained on these Pointwise grids are compared with the measured data and previous numerical solutions. The resulting CFD solutions are used as input to a Ffowcs Williams-Hawkings noise propagation code to compute the farfield noise levels in the flyover and sideline directions. The computed noise levels compare well with previous CFD solutions and experimental data.

  4. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE PAGES

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...

    2017-09-29

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  5. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  6. LBNL Computational Research and Theory Facility Groundbreaking - Full Press Conference. Feb 1st, 2012

    ScienceCinema

    Yelick, Kathy

    2018-01-24

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  7. LBNL Computational Research and Theory Facility Groundbreaking. February 1st, 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yelick, Kathy

    2012-02-02

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  8. LBNL Computational Research and Theory Facility Groundbreaking. February 1st, 2012

    ScienceCinema

    Yelick, Kathy

    2017-12-09

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  9. 44 CFR 206.226 - Restoration of damaged facilities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... original site will be approved, except those facilities which facilitate an open space use in accordance... replacement items. (i) Library books and publications. Replacement of library books and publications is based...

  10. 44 CFR 206.226 - Restoration of damaged facilities.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... original site will be approved, except those facilities which facilitate an open space use in accordance... replacement items. (i) Library books and publications. Replacement of library books and publications is based...

  11. Sales Training Center

    ERIC Educational Resources Information Center

    Training in Business and Industry, 1971

    1971-01-01

    After a year of planning, the American Republic Insurance Company has opened a new training facility which occupies a complete floor of the National Headquarters building in Des Moines. Pictures of the facilities are shown. (EB)

  12. KSC-2013-2938

    NASA Image and Video Library

    2013-06-27

    CAPE CANAVERAL, Fla. -- At the Kennedy Space Center Visitor Complex in Florida, a display inside the "Space Shuttle Atlantis" facility features a 43-foot-tall full-scale replica of the Hubble telescope hung through an opening in the second floor. The new $100 million facility will include interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit is scheduled to open June 29, 2013. Photo credit: NASA/Jim Grossmann

  13. KSC-2013-2992

    NASA Image and Video Library

    2013-06-29

    CAPE CANAVERAL, Fla. -- At the Kennedy Space Center Visitor Complex in Florida, CNN correspondent John Zarrella speaks to guests at the opening of the new "Space Shuttle Atlantis" facility. Zarrella served as master of ceremonies for the event. The new $100 million facility includes interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit formally opened to the public on June 29, 2013. Photo credit: NASA/Jim Grossmann

  14. KSC-2013-2988

    NASA Image and Video Library

    2013-06-29

    CAPE CANAVERAL, Fla. -- At the Kennedy Space Center Visitor Complex in Florida, CNN correspondent John Zarrella speaks to guests at the opening of the new "Space Shuttle Atlantis" facility. Zarrella served as master of ceremonies for the event. The new $100 million facility includes interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit formally opened to the public on June 29, 2013. Photo credit: NASA/Jim Grossmann

  15. KSC-2013-2984

    NASA Image and Video Library

    2013-06-29

    CAPE CANAVERAL, Fla. -- At the Kennedy Space Center Visitor Complex in Florida, CNN correspondent John Zarrella speaks to guests at the opening of the new "Space Shuttle Atlantis" facility. Zarrella served as master of ceremonies for the event. The new $100 million facility includes interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit formally opened to the public on June 29, 2013. Photo credit: NASA/Jim Grossmann

  16. KSC-2013-2990

    NASA Image and Video Library

    2013-06-29

    CAPE CANAVERAL, Fla. -- During opening ceremonies for the new 90,000-square-foot "Space Shuttle Atlantis" facility at the Kennedy Space Center Visitor Complex in Florida, NASA Administrator Charlie Bolden speaks to guests gathered for the ceremony. The new $100 million facility includes interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit formally opened to the public on June 29, 2013. Photo credit: NASA/Jim Grossmann

  17. KSC-2013-2989

    NASA Image and Video Library

    2013-06-29

    CAPE CANAVERAL, Fla. -- During opening ceremonies for the new 90,000-square-foot "Space Shuttle Atlantis" facility at the Kennedy Space Center Visitor Complex in Florida, center director Bob Cabana speaks to guests gathered for the ceremony. The new $100 million facility includes interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit formally opened to the public on June 29, 2013. Photo credit: NASA/Jim Grossmann

  18. KSC-2013-2986

    NASA Image and Video Library

    2013-06-29

    CAPE CANAVERAL, Fla. -- At the Kennedy Space Center Visitor Complex in Florida, Rick Abramson, Delaware North Parks and Resorts president, speaks to guests during the opening of the 90,000-square-foot "Space Shuttle Atlantis" facility. The new $100 million facility includes interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit formally opened to the public on June 29, 2013. Photo credit: NASA/Jim Grossmann

  19. KSC-2013-2985

    NASA Image and Video Library

    2013-06-29

    CAPE CANAVERAL, Fla. -- At the Kennedy Space Center Visitor Complex in Florida, Rick Abramson, Delaware North Parks and Resorts president, speaks to guests during the opening of the 90,000-square-foot "Space Shuttle Atlantis" facility. The new $100 million facility includes interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit formally opened to the public on June 29, 2013. Photo credit: NASA/Jim Grossmann

  20. KSC-2013-2974

    NASA Image and Video Library

    2013-06-28

    CAPE CANAVERAL, Fla. -- At the Kennedy Space Center Visitor Complex in Florida, Rick Abramson, Delaware North Parks and Resorts president, speaks to news media representatives during the opening of the 90,000-square-foot "Space Shuttle Atlantis" facility. The new $100 million facility includes interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit formally opened to the public on June 29, 2013. Photo credit: NASA/Jim Grossmann

  1. KSC-2013-2998

    NASA Image and Video Library

    2013-06-29

    CAPE CANAVERAL, Fla. -- Inside the new "Space Shuttle Atlantis" facility at the Kennedy Space Center Visitor Complex in Florida, guests gather around the spacecraft on display with payload bay doors open and remote manipulator system robot arm extended. The new $100 million facility includes interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit formally opened to the public on June 29, 2013. Photo credit: NASA/Jim Grossmann

  2. KSC-2013-2997

    NASA Image and Video Library

    2013-06-29

    CAPE CANAVERAL, Fla. -- Inside the new "Space Shuttle Atlantis" facility at the Kennedy Space Center Visitor Complex in Florida, 40 astronauts posed with the spacecraft on display with payload bay doors open and remote manipulator system robot arm extended. The new $100 million facility includes interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit formally opened to the public on June 29, 2013. Photo credit: NASA/Jim Grossmann

  3. Ethics and the 7 P's of computer use policies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, T.J.; Voss, R.B.

    1994-12-31

    A Computer Use Policy (CUP) defines who can use the computer facilities for what. The CUP is the institution's official position on the ethical use of computer facilities. The authors believe that writing a CUP provides an ideal platform to develop a group ethic for computer users. In prior research, the authors have developed a seven phase model for writing CUPs, entitled the 7 P's of Computer Use Policies. The purpose of this paper is to present the model and discuss how the 7 P's can be used to identify and communicate a group ethic for the institution's computer users.

  4. Do disk drives dream of buffer cache hits?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holt, A.

    1994-12-31

    G.E. Moore, in his book Principia Ethica, examines the popular view of ethics that deals with "what we ought to do" as well as using ethics to cover the general inquiry "what is good?" This paper utilises Moore's view of ethics to examine computer systems performance. Moore asserts that "good" in itself is indefinable. It is argued in this report that, although we describe computer systems as good (or bad), a computer system cannot be good in itself, rather a means to good! In terms of "what we ought to do" this paper looks at what actions would bring about good computer system performance according to computer science and engineering literature. In particular we look at duties, responsibilities and "to do what is right" in terms of system administration, design and usage. We further argue that making ethical observations with respect to computer system performance, and then applying them, requires technical knowledge which is typically limited to industry specialists and experts.

  5. Cloud-based Web Services for Near-Real-Time Web access to NPP Satellite Imagery and other Data

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Valente, E. G.

    2010-12-01

    We are building a scalable, cloud computing-based infrastructure for Web access to near-real-time data products synthesized from the U.S. National Polar-Orbiting Environmental Satellite System (NPOESS) Preparatory Project (NPP) and other geospatial and meteorological data. Given recent and ongoing changes in the NPP and NPOESS programs (now Joint Polar Satellite System), the need for timely delivery of NPP data is urgent. We propose an alternative to a traditional, centralized ground segment, using distributed Direct Broadcast facilities linked to industry-standard Web services by a streamlined processing chain running in a scalable cloud computing environment. Our processing chain, currently implemented on Amazon.com's Elastic Compute Cloud (EC2), retrieves raw data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) and synthesizes data products such as Sea-Surface Temperature, Vegetation Indices, etc. The cloud computing approach lets us grow and shrink computing resources to meet large and rapid fluctuations (twice daily) in both end-user demand and data availability from polar-orbiting sensors. Early prototypes have delivered various data products to end-users with latencies between 6 and 32 minutes. We have begun to replicate machine instances in the cloud, so as to reduce latency and maintain near-real time data access regardless of increased data input rates or user demand -- all at quite moderate monthly costs. Our service-based approach (in which users invoke software processes on a Web-accessible server) facilitates access into datasets of arbitrary size and resolution, and allows users to request and receive tailored and composite (e.g., false-color multiband) products on demand. To facilitate broad impact and adoption of our technology, we have emphasized open, industry-standard software interfaces and open source software. Through our work, we envision the widespread establishment of similar, derived, or interoperable systems for processing and serving near-real-time data from NPP and other sensors. A scalable architecture based on cloud computing ensures cost-effective, real-time processing and delivery of NPP and other data. Access via standard Web services maximizes its interoperability and usefulness.
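    A hedged sketch of the elastic scaling pattern described (more worker instances around each satellite overpass, fewer in between) using boto3; the AMI, instance type, and tag values are placeholders, and this is not the authors' system.

```python
# Hedged sketch of the elastic scaling pattern described above (more workers
# around each satellite pass, fewer in between) using boto3. The AMI, instance
# type, and tag values are placeholders; this is not the authors' system.
import boto3

ec2 = boto3.resource("ec2", region_name="us-west-2")
WORKER_FILTER = [{"Name": "tag:role", "Values": ["npp-worker"]},
                 {"Name": "instance-state-name", "Values": ["running"]}]

def running_workers():
    return list(ec2.instances.filter(Filters=WORKER_FILTER))

def scale_to(target):
    """Launch or terminate tagged worker instances to reach the target count."""
    workers = running_workers()
    if len(workers) < target:
        ec2.create_instances(
            ImageId="ami-00000000000000000",          # placeholder processing image
            InstanceType="c5.xlarge",
            MinCount=target - len(workers),
            MaxCount=target - len(workers),
            TagSpecifications=[{"ResourceType": "instance",
                                "Tags": [{"Key": "role", "Value": "npp-worker"}]}],
        )
    elif len(workers) > target:
        for inst in workers[: len(workers) - target]:
            inst.terminate()

scale_to(8)   # e.g. ramp up shortly before an overpass; scale_to(1) afterwards
```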

  6. Requirements and Regulations for Open Burning and Fire Training

    EPA Pesticide Factsheets

    Intentional burning of facilities is considered demolition under federal asbestos regulations, even if no asbestos is present. Learn about regulations and requirements for open burning and fire training.

  7. First Facility Utilization Manual. A Teachers Guide to the Use of the FLNT Elementary School. Fort Lincoln New Town Education System.

    ERIC Educational Resources Information Center

    General Learning Corp., Washington, DC.

    This guide endeavors to teach the faculty how to manipulate the structure of the new facility in the most creative way. The first chapters discuss the interior design, graphic considerations within the facility, materials and equipment suited for open space schools, and recommended audio-systems. Later chapters cover the exterior facilities, such…

  8. NREL's Energy Systems Integration Supporting Facilities - Continuum

    Science.gov Websites

    NREL's Energy Systems Integration Facility opened in December 2012 (photos by Dennis Schroeder, NREL). Its research electrical distribution bus (REDB) works as a power…

  9. Getting started with Open-Hardware: Development and Control of Microfluidic Devices

    PubMed Central

    da Costa, Eric Tavares; Mora, Maria F.; Willis, Peter A.; do Lago, Claudimir L.; Jiao, Hong; Garcia, Carlos D.

    2014-01-01

    Understanding basic concepts of electronics and computer programming allows researchers to get the most out of the equipment found in their laboratories. Although a number of platforms have been specifically designed for the general public and are supported by a vast array of on-line tutorials, this subject is not normally included in university chemistry curricula. Aiming to provide the basic concepts of hardware and software, this article is focused on the design and use of a simple module to control a series of PDMS-based valves. The module is based on a low-cost microprocessor (Teensy) and open-source software (Arduino). The microvalves were fabricated using thin sheets of PDMS and patterned using CO2 laser engraving, providing a simple and efficient way to fabricate devices without the traditional photolithographic process or facilities. Synchronization of valve control enabled the development of two simple devices to perform injection (1.6 ± 0.4 μL/stroke) and mixing of different solutions. Furthermore, a practical demonstration of the utility of this system for microscale chemical sample handling and analysis was achieved performing an on-chip acid-base titration, followed by conductivity detection with an open-source low-cost detection system. Overall, the system provided a very reproducible (98%) platform to perform fluid delivery at the microfluidic scale. PMID:24823494

  10. The Simple Concurrent Online Processing System (SCOPS) - An open-source interface for remotely sensed data processing

    NASA Astrophysics Data System (ADS)

    Warren, M. A.; Goult, S.; Clewley, D.

    2018-06-01

    Advances in technology allow remotely sensed data to be acquired with increasingly higher spatial and spectral resolutions. These data may then be used to influence government decision making and solve a number of research and application driven questions. However, such large volumes of data can be difficult to handle on a single personal computer or on older machines with slower components. Often the software required to process data is varied and can be highly technical and too advanced for the novice user to fully understand. This paper describes an open-source tool, the Simple Concurrent Online Processing System (SCOPS), which forms part of an airborne hyperspectral data processing chain that allows users accessing the tool over a web interface to submit jobs and process data remotely. It is demonstrated using Natural Environment Research Council Airborne Research Facility (NERC-ARF) instruments together with other free- and open-source tools to take radiometrically corrected data from sensor geometry into geocorrected form and to generate simple or complex band ratio products. The final processed data products are acquired via an HTTP download. SCOPS can cut data processing times and introduce complex processing software to novice users by distributing jobs across a network using a simple to use web interface.

  11. 2008 ALCF annual report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drugan, C.

    2009-12-07

    The word 'breakthrough' aptly describes the transformational science and milestones achieved at the Argonne Leadership Computing Facility (ALCF) throughout 2008. The number of research endeavors undertaken at the ALCF through the U.S. Department of Energy's (DOE) Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program grew from 9 in 2007 to 20 in 2008. The allocation of computer time awarded to researchers on the Blue Gene/P also spiked significantly - from nearly 10 million processor hours in 2007 to 111 million in 2008. To support this research, we expanded the capabilities of Intrepid, an IBM Blue Gene/P system at the ALCF, to 557 teraflops (TF) for production use. Furthermore, we enabled breakthrough levels of productivity and capability in visualization and data analysis with Eureka, a powerful installation of NVIDIA Quadro Plex S4 external graphics processing units. Eureka delivered a quantum leap in visual compute density, providing more than 111 TF and more than 3.2 terabytes of RAM. On April 21, 2008, the dedication of the ALCF realized DOE's vision to bring the power of the Department's high performance computing to open scientific research. In June, the IBM Blue Gene/P supercomputer at the ALCF debuted as the world's fastest for open science and third fastest overall. No question that the science benefited from this growth and system improvement. Four research projects spearheaded by Argonne National Laboratory computer scientists and ALCF users were named to the list of top ten scientific accomplishments supported by DOE's Advanced Scientific Computing Research (ASCR) program. Three of the top ten projects used extensive grants of computing time on the ALCF's Blue Gene/P to model the molecular basis of Parkinson's disease, design proteins at atomic scale, and create enzymes. As the year came to a close, the ALCF was recognized with several prestigious awards at SC08 in November. We provided resources for Linear Scaling Divide-and-Conquer Electronic Structure Calculations for Thousand Atom Nanostructures, a collaborative effort between Argonne, Lawrence Berkeley National Laboratory, and Oak Ridge National Laboratory that received the ACM Gordon Bell Prize Special Award for Algorithmic Innovation. The ALCF also was named a winner in two of the four categories in the HPC Challenge best performance benchmark competition.

  12. Library Facility Siting and Location Handbook. The Greenwood Library Management Collection.

    ERIC Educational Resources Information Center

    Koontz, Christine M.

    This handbook is a guide to the complex process of library facility siting and location. It includes relevant research and professionals' siting experiences, as well as actual case studies of closures, openings, mergers, and relocations of library facilities. While the bulk of the volume provides practical information, the work also presents an…

  13. Facility Accessibility: Opening the Doors to All

    ERIC Educational Resources Information Center

    Petersen, Jeffrey C.; Piletic, Cindy K.

    2006-01-01

    A facility developed for fitness, physical activity, recreation, or sport is a vital community resource that contributes to the overall health and wellness of that community's citizens. In order to maximize the benefits derived from these facilities, it is imperative that they be accessible to as wide a range of people as possible. The Americans…

  14. 2009 ALCF annual report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckman, P.; Martin, D.; Drugan, C.

    2010-11-23

    This year the Argonne Leadership Computing Facility (ALCF) delivered nearly 900 million core hours of science. The research conducted at their leadership class facility touched our lives in both minute and massive ways - whether it was studying the catalytic properties of gold nanoparticles, predicting protein structures, or unearthing the secrets of exploding stars. The authors remained true to their vision to act as the forefront computational center in extending science frontiers by solving pressing problems for our nation. Our success in this endeavor was due mainly to the Department of Energy's (DOE) INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program. The program awards significant amounts of computing time to computationally intensive, unclassified research projects that can make high-impact scientific advances. This year, DOE allocated 400 million hours of time to 28 research projects at the ALCF. Scientists from around the world conducted the research, representing such esteemed institutions as the Princeton Plasma Physics Laboratory, National Institute of Standards and Technology, and European Center for Research and Advanced Training in Scientific Computation. Argonne also provided Director's Discretionary allocations for research challenges, addressing such issues as reducing aerodynamic noise, critical for next-generation 'green' energy systems. Intrepid - the ALCF's 557-teraflops IBM Blue Gene/P supercomputer - enabled astounding scientific solutions and discoveries. Intrepid went into full production five months ahead of schedule. As a result, the ALCF nearly doubled the days of production computing available to the DOE Office of Science, INCITE awardees, and Argonne projects. One of the fastest supercomputers in the world for open science, the energy-efficient system uses about one-third as much electricity as a machine of comparable size built with more conventional parts. In October 2009, President Barack Obama recognized the excellence of the entire Blue Gene series by awarding it the National Medal of Technology and Innovation. Other noteworthy achievements included the ALCF's collaboration with the National Energy Research Scientific Computing Center (NERSC) to examine cloud computing as a potential new computing paradigm for scientists. Named Magellan, the DOE-funded initiative will explore which science application programming models work well within the cloud, as well as evaluate the challenges that come with this new paradigm. The ALCF obtained approval for its next-generation machine, a 10-petaflops system to be delivered in 2012. This system will allow us to resolve ever more pressing problems, even more expeditiously, through breakthrough science in the years to come.

  15. Expanding the Scope of High-Performance Computing Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uram, Thomas D.; Papka, Michael E.

    The high-performance computing centers of the future will expand their roles as service providers, and as the machines scale up, so should the sizes of the communities they serve. National facilities must cultivate their users as much as they focus on operating machines reliably. The authors present five interrelated topic areas that are essential to expanding the value provided to those performing computational science.

  16. Computer usage among nurses in rural health-care facilities in South Africa: obstacles and challenges.

    PubMed

    Asah, Flora

    2013-04-01

    This study discusses factors inhibiting computer usage for work-related tasks among computer-literate professional nurses within rural healthcare facilities in South Africa. In the past two decades computer literacy courses have not been part of the nursing curricula. Computer courses are offered by the State Information Technology Agency. Despite this, there seems to be limited use of computers by professional nurses in the rural context. Focus group interviews were held with 40 professional nurses from three government hospitals in northern KwaZulu-Natal. Contributing factors were found to be a lack of information technology infrastructure, restricted access to computers and deficits in technical and nursing management support. The physical location of computers within the health-care facilities and lack of relevant software emerged as specific obstacles to usage. Provision of continuous and active support from nursing management could positively influence computer usage among professional nurses. A closer integration of information technology and computer literacy skills into existing nursing curricula would foster a positive attitude towards computer usage through early exposure. Responses indicated that a change of mindset may be needed on the part of nursing management so that they begin to actively promote ready access to computers as a means of creating greater professionalism and collegiality. © 2011 Blackwell Publishing Ltd.

  17. OMPC: an Open-Source MATLAB-to-Python Compiler.

    PubMed

    Jurica, Peter; van Leeuwen, Cees

    2009-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com.
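
    For readers unfamiliar with the kind of syntax adaptation the abstract describes, the following hand-written sketch contrasts a small MATLAB-style function with a NumPy equivalent; it is only an illustration of the general idea, not actual OMPC output, and the function and data are made up.

        # Hand-written sketch of a MATLAB-to-NumPy translation of the kind a
        # compiler such as OMPC automates; the example function is hypothetical.
        #
        # MATLAB original (for reference):
        #   function y = rms_rows(X)
        #       y = sqrt(mean(X.^2, 2));
        #   end
        import numpy as np

        def rms_rows(X):
            """Root-mean-square of each row, mirroring the MATLAB semantics."""
            X = np.asarray(X, dtype=float)
            # MATLAB's mean(X.^2, 2) averages along dimension 2 (columns),
            # which corresponds to axis=1 in NumPy's 0-based convention.
            return np.sqrt(np.mean(X ** 2, axis=1))

        if __name__ == "__main__":
            print(rms_rows([[3.0, 4.0], [6.0, 8.0]]))  # [3.5355... 7.0710...]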

  18. OpenChrom: a cross-platform open source software for the mass spectrometric analysis of chromatographic data.

    PubMed

    Wenig, Philip; Odermatt, Juergen

    2010-07-30

    Today, data evaluation has become a bottleneck in chromatographic science. Analytical instruments equipped with automated samplers yield large amounts of measurement data, which need to be verified and analyzed. Since nearly every GC/MS instrument vendor offers its own data format and software tools, the consequences are problems with data exchange and a lack of comparability between the analytical results. To address this situation a number of either commercial or non-profit software applications have been developed. These applications provide functionalities to import and analyze several data formats but have shortcomings in terms of the transparency of the implemented analytical algorithms and/or are restricted to a specific computer platform. This work describes a native approach to handle chromatographic data files. The approach can be extended with functionality such as facilities to detect baselines, to detect, integrate and identify peaks and to compare mass spectra, as well as the ability to internationalize the application. Additionally, filters can be applied on the chromatographic data to enhance its quality, for example to remove background and noise. Extended operations like do, undo and redo are supported. OpenChrom is a software application to edit and analyze mass spectrometric chromatographic data. It is extensible in many different ways, depending on the demands of the users or the analytical procedures and algorithms. It offers a customizable graphical user interface. The software is independent of the operating system because the Rich Client Platform on which it is built is written in Java. OpenChrom is released under the Eclipse Public License 1.0 (EPL). There are no license constraints regarding extensions. They can be published using open source as well as proprietary licenses. OpenChrom is available free of charge at http://www.openchrom.net.

  19. The large building to the left formerly served as open ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    The large building to the left formerly served as open hearth no. 3 steel making facility; it was erected in 1903; looking east - Bethlehem Steel Corporation, South Bethlehem Works, Open Hearth No. 3, Along Lehigh River, North of Fourth Street, West of Minsi Trail Bridge, Bethlehem, Northampton County, PA

  20. 36 CFR § 1280.92 - When are the Presidential library museums open to the public?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... library museums open to the public? § 1280.92 Parks, Forests, and Public Property... Apply for Use of Facilities in Presidential Libraries? § 1280.92 When are the Presidential library museums open to the public? (a) The Presidential library museums are open every day except Thanksgiving...

  1. D.C. Public School 1997 Repair Program and Facilities Master Plan. Hearing before the Subcommittee on the District of Columbia of the Committee on Government Reform and Oversight. House of Representatives. One Hundred Fifth Congress, Second Session.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Government Reform and Oversight.

    A Congressional hearing dealt with issues related to the repair program and facilities master plan of the District of Columbia Public Schools (DCPS). Opening remarks by Representative Thomas M. Davis outlined his concern over the delayed opening of the DCPS in the fall of 1997 because of uncompleted roof repairs, and the results from a performance…

  2. KSC-2013-2972

    NASA Image and Video Library

    2013-06-28

    CAPE CANAVERAL, Fla. -- At the Kennedy Space Center Visitor Complex in Florida, Bill Moore, Delaware North Parks and Resorts chief operating officer, speaks to news media representatives during the opening of the 90,000-square-foot "Space Shuttle Atlantis" facility. The new $100 million facility includes interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit formally opened to the public on June 29, 2013. Photo credit: NASA/Jim Grossmann

  3. KSC-2013-2976

    NASA Image and Video Library

    2013-06-28

    CAPE CANAVERAL, Fla. -- At the Kennedy Space Center Visitor Complex in Florida, Andrea Farmer, Delaware North Parks and Resorts manager of Public Relations, speaks to news media representatives during the opening of the 90,000-square-foot "Space Shuttle Atlantis" facility. The new $100 million facility includes interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit formally opened to the public on June 29, 2013. Photo credit: NASA/Jim Grossmann

  4. KSC-2013-2987

    NASA Image and Video Library

    2013-06-29

    CAPE CANAVERAL, Fla. -- During opening ceremonies for the new 90,000-square-foot "Space Shuttle Atlantis" facility at the Kennedy Space Center Visitor Complex in Florida, Expedition 36 flight engineers Karen Nyberg, left, and Chris Cassidy speak to guests via television from the International Space Station. The new $100 million facility includes interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit formally opened to the public on June 29, 2013. Photo credit: NASA/Jim Grossmann

  5. KSC-2013-2977

    NASA Image and Video Library

    2013-06-28

    CAPE CANAVERAL, Fla. -- At the Kennedy Space Center Visitor Complex in Florida, Bill Moore, Delaware North Parks and Resorts chief operating officer, speaks to news media representatives during the opening of the 90,000-square-foot "Space Shuttle Atlantis" facility. The new $100 million facility includes interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit formally opened to the public on June 29, 2013. Photo credit: NASA/Jim Grossmann

  6. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019

  7. Final closure plan for the high-explosives open burn treatment facility at Lawrence Livermore National Laboratory Experimental Test Site 300

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathews, S.

    This document addresses the interim status closure of the HE Open Burn Treatment Facility, as detailed by Title 22, Division 4.5, Chapter 15, Article 7 of the California Code of Regulations (CCR) and by Title 40, Code of Federal Regulations (CFR) Part 265, Subpart G, "Closure and Post-Closure." The Closure Plan (Chapter 1) and the Post-Closure Plan (Chapter 2) address the concept of long-term hazard elimination. The Closure Plan provides for capping and grading the HE Open Burn Treatment Facility and revegetating the immediate area in accordance with applicable requirements. The Closure Plan also reflects careful consideration of site location and topography, geologic and hydrologic factors, climate, cover characteristics, type and amount of wastes, and the potential for contaminant migration. The Post-Closure Plan is designed to allow LLNL to monitor the movement, if any, of pollutants from the treatment area. In addition, quarterly inspections will ensure that all surfaces of the closed facility, including the cover and diversion ditches, remain in good repair, thus precluding the potential for contaminant migration.

  8. The Research of the Parallel Computing Development from the Angle of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Peng, Zhensheng; Gong, Qingge; Duan, Yanyu; Wang, Yun

    2017-10-01

    Cloud computing is the development of parallel computing, distributed computing and grid computing. The development of cloud computing makes parallel computing come into people’s lives. Firstly, this paper expounds the concept of cloud computing and introduces several traditional parallel programming models. Secondly, it analyzes and studies the principles, advantages and disadvantages of OpenMP, MPI and MapReduce respectively. Finally, it compares the MPI and OpenMP models with MapReduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing.
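
    As a concrete point of comparison for the MapReduce model discussed in this abstract, the short Python sketch below implements a word count as an explicit map phase followed by a reduce phase using only the standard library; it illustrates the programming model only and does not involve Hadoop, MPI or OpenMP.

        # Minimal word-count sketch of the MapReduce programming model:
        # a "map" over input chunks followed by a "reduce" that merges results.
        from multiprocessing import Pool
        from collections import Counter
        from functools import reduce

        def map_phase(chunk):
            # Emit per-chunk partial word counts (the "map" step).
            return Counter(chunk.split())

        def reduce_phase(acc, partial):
            # Merge partial counts into the accumulator (the "reduce" step).
            acc.update(partial)
            return acc

        if __name__ == "__main__":
            chunks = ["cloud computing", "parallel computing", "grid computing"]
            with Pool(processes=2) as pool:
                partials = pool.map(map_phase, chunks)
            totals = reduce(reduce_phase, partials, Counter())
            print(totals["computing"], totals.most_common(1))  # 3 [('computing', 3)]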

  9. 63. Refrigerator, microwave oven, storage cabinet open, north side ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    63. Refrigerator, microwave oven, storage cabinet open, north side - Ellsworth Air Force Base, Delta Flight, Launch Control Facility, County Road CS23A, North of Exit 127, Interior, Jackson County, SD

  10. Enabling Access to High-Resolution Lidar Topography for Earth Science Research

    NASA Astrophysics Data System (ADS)

    Crosby, Christopher; Nandigam, Viswanath; Arrowsmith, Ramon; Baru, Chaitan

    2010-05-01

    High-resolution topography data acquired with lidar (light detection and ranging, a.k.a. laser scanning) technology are revolutionizing the way we study the geomorphic processes acting along the Earth's surface. These data, acquired from either an airborne platform or from a tripod-mounted scanner, are emerging as a fundamental tool for research on a variety of topics ranging from earthquake hazards to ice sheet dynamics. Lidar topography data allow earth scientists to study the processes that contribute to landscape evolution at resolutions not previously possible, yet essential for their appropriate representation. These datasets also have significant implications for earth science education and outreach because they provide an accurate digital representation of landforms and geologic hazards. However, along with the potential of lidar topography comes an increase in the volume and complexity of data that must be efficiently managed, archived, distributed, processed and integrated in order for them to be of use to the community. A single lidar data acquisition may generate terabytes of data in the form of point clouds, digital elevation models (DEMs), and derivative imagery. This massive volume of data is often difficult to manage and poses significant distribution challenges when trying to allow access to the data for a large scientific user community. Furthermore, the datasets can be technically challenging to work with and may require specific software and computing resources that are not readily available to many users. The U.S. National Science Foundation (NSF)-funded OpenTopography Facility (http://www.opentopography.org) is an online data access and processing system designed to address the challenges posed by lidar data, and to democratize access to these data for the scientific user community. OpenTopography provides free, online access to lidar data in a number of forms, including raw lidar point cloud data, standard DEMs, and easily accessible Google Earth visualizations. OpenTopography uses cyberinfrastructure resources to allow users, regardless of their level of expertise, to access lidar data products that can be applied to their research. In addition to data access, the system uses customized algorithms and high-performance computing resources to allow users to perform on-the-fly data processing tasks such as the generation of custom DEMs. OpenTopography's primary focus is on large, community-oriented, scientific data sets, such as those acquired by the NSF-funded EarthScope project. We are actively expanding our holdings through collaborations with researchers and data providers to include data from a wide variety of landscapes and geologic domains. Ultimately, the goal is for OpenTopography to be the primary clearing house for Earth science-oriented high-resolution topography. This presentation will provide an overview of the OpenTopography Facility, including available data, processing capabilities and resources, examples from scientific use cases, and a snapshot of system and data usage thus far. We will also discuss current development activities related to deploying high-performance algorithms for hydrologic processing of DEMs, geomorphic change detection analysis, and the incorporation of full waveform lidar data into the system.
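
    As a toy illustration of one basic step behind lidar-derived DEM products (gridding a point cloud onto a regular raster by averaging the elevations that fall in each cell), the following NumPy sketch uses synthetic points; OpenTopography's production algorithms are considerably more sophisticated.

        # Toy point-cloud-to-DEM gridding: bin lidar returns onto a regular grid
        # and average the elevations per cell (synthetic data, for illustration).
        import numpy as np

        def grid_points_to_dem(x, y, z, cell=1.0):
            xi = ((x - x.min()) / cell).astype(int)
            yi = ((y - y.min()) / cell).astype(int)
            nx, ny = xi.max() + 1, yi.max() + 1
            dem_sum = np.zeros((ny, nx))
            dem_cnt = np.zeros((ny, nx))
            np.add.at(dem_sum, (yi, xi), z)   # accumulate elevations per cell
            np.add.at(dem_cnt, (yi, xi), 1)   # count returns per cell
            return dem_sum / np.where(dem_cnt == 0, np.nan, dem_cnt)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            x, y = rng.uniform(0, 10, 1000), rng.uniform(0, 10, 1000)
            z = np.sin(x) + 0.01 * rng.standard_normal(1000)
            print(grid_points_to_dem(x, y, z, cell=2.0).shape)  # (5, 5)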

  11. Evaluating open-source cloud computing solutions for geosciences

    NASA Astrophysics Data System (ADS)

    Huang, Qunying; Yang, Chaowei; Liu, Kai; Xia, Jizhe; Xu, Chen; Li, Jing; Gui, Zhipeng; Sun, Min; Li, Zhenglong

    2013-09-01

    Many organizations start to adopt cloud computing for better utilizing computing resources by taking advantage of its scalability, cost reduction, and easy-to-access characteristics. Many private or community cloud computing platforms are being built using open-source cloud solutions. However, little has been done to systematically compare and evaluate the features and performance of open-source solutions in supporting Geosciences. This paper provides a comprehensive study of three open-source cloud solutions, including OpenNebula, Eucalyptus, and CloudStack. We compared a variety of features, capabilities, technologies and performances including: (1) general features and supported services for cloud resource creation and management, (2) advanced capabilities for networking and security, and (3) the performance of the cloud solutions in provisioning and operating the cloud resources as well as the performance of virtual machines initiated and managed by the cloud solutions in supporting selected geoscience applications. Our study found that: (1) there are no significant performance differences in central processing unit (CPU), memory and I/O of virtual machines created and managed by different solutions, (2) OpenNebula has the fastest internal network while both Eucalyptus and CloudStack have better virtual machine isolation and security strategies, (3) CloudStack has the fastest operations in handling virtual machines, images, snapshots, volumes and networking, followed by OpenNebula, and (4) the selected cloud computing solutions are capable of supporting concurrent intensive web applications, computing intensive applications, and small-scale model simulations without intensive data communication.
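
    One practical detail behind evaluations like this is that Eucalyptus exposes an EC2-compatible API, so test instances can be scripted with a standard EC2 client pointed at the private endpoint. The sketch below uses boto3 with a placeholder endpoint, credentials and image ID; none of these values come from the paper.

        # Sketch: launch a test VM on an EC2-compatible private cloud endpoint
        # (e.g., Eucalyptus). Endpoint URL, credentials and image ID are placeholders.
        import boto3

        ec2 = boto3.client(
            "ec2",
            endpoint_url="https://cloud.example.edu:8773/services/compute",  # hypothetical
            region_name="eucalyptus",
            aws_access_key_id="YOUR_ACCESS_KEY",
            aws_secret_access_key="YOUR_SECRET_KEY",
        )

        resp = ec2.run_instances(ImageId="emi-12345678", InstanceType="m1.small",
                                 MinCount=1, MaxCount=1)
        print(resp["Instances"][0]["InstanceId"])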

  12. Issuance of a final RCRA Part B Subpart X permit for open burning/open detonation (OB/OD) of explosives at Eglin AFB, Florida

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, G.E.; Culp, J.C.; Jenness, S.R.

    1997-12-31

    Treatment and disposal of explosives and munitions items have represented a significant management challenge for Department of Defense (DOD) facilities, particularly in light of increased regulatory scrutiny under the Federal Facilities Compliance Act provisions of the Resource Conservation and Recovery Act (RCRA). Subpart X of the RCRA regulations for storage, treatment, and disposal of hazardous wastes was drafted specifically to address explosive wastes. Until just recently, any DOD facility that was performing open burning/open detonation (OB/OD) of explosives was doing so under interim status for RCRA Part B Subpart X. In August 1996, Eglin Air Force Base (AFB), Florida became the first Air Force facility to be issued a final Part B Subpart X permit to perform OB/OD operations at two Eglin AFB active test ranges. This presentation will examine how Eglin AFB worked proactively with the State of Florida Department of Environmental Protection (FDEP) and EPA Region IV to develop permit conditions based upon risk assessment considerations for both air and ground-water exposure pathways. It will review the role of air emissions and air dispersion modeling in assessing potential exposure and impacts to both onsite and offsite receptors, and will discuss how air monitoring will be used to assure that the facility remains in compliance during OB/OD activities. The presentation will also discuss the soil and ground-water characterization program and associated risk assessment provisions for quarterly ground-water monitoring to assure permit compliance. The project is an excellent example of how a collaborative working relationship among the permittee, their consultant, the state, and EPA can result in an environmentally protective permit that assures operational flexibility and mission sensitivity.

  13. Crosscut report: Exascale Requirements Reviews, March 9–10, 2017 – Tysons Corner, Virginia. An Office of Science review sponsored by: Advanced Scientific Computing Research, Basic Energy Sciences, Biological and Environmental Research, Fusion Energy Sciences, High Energy Physics, Nuclear Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Hack, James; Riley, Katherine

    The mission of the U.S. Department of Energy Office of Science (DOE SC) is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security missions of the United States. To achieve these goals in today’s world requires investments in not only the traditional scientific endeavors of theory and experiment, but also in computational science and the facilities that support large-scale simulation and data analysis. The Advanced Scientific Computing Research (ASCR) program addresses these challenges in the Office of Science. ASCR’s mission is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE. ASCR supports research in computational science, three high-performance computing (HPC) facilities — the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and Leadership Computing Facilities at Argonne (ALCF) and Oak Ridge (OLCF) National Laboratories — and the Energy Sciences Network (ESnet) at Berkeley Lab. ASCR is guided by science needs as it develops research programs, computers, and networks at the leading edge of technologies. As we approach the era of exascale computing, technology changes are creating challenges for science programs in SC for those who need to use high performance computing and data systems effectively. Numerous significant modifications to today’s tools and techniques will be needed to realize the full potential of emerging computing systems and other novel computing architectures. To assess these needs and challenges, ASCR held a series of Exascale Requirements Reviews in 2015–2017, one with each of the six SC program offices, and a subsequent Crosscut Review that sought to integrate the findings from each. Participants at the reviews were drawn from the communities of leading domain scientists, experts in computer science and applied mathematics, ASCR facility staff, and DOE program managers in ASCR and the respective program offices. The purpose of these reviews was to identify mission-critical scientific problems within the DOE Office of Science (including experimental facilities) and determine the requirements for the exascale ecosystem that would be needed to address those challenges. The exascale ecosystem includes exascale computing systems, high-end data capabilities, efficient software at scale, libraries, tools, and other capabilities. This effort will contribute to the development of a strategic roadmap for ASCR compute and data facility investments and will help the ASCR Facility Division establish partnerships with Office of Science stakeholders. It will also inform the Office of Science research needs and agenda. The results of the six reviews have been published in reports available on the web at http://exascaleage.org/. This report presents a summary of the individual reports and of common and crosscutting findings, and it identifies opportunities for productive collaborations among the DOE SC program offices.

  14. Molecular detection of canine parvovirus in flies (Diptera) at open and closed canine facilities in the eastern United States.

    PubMed

    Bagshaw, Clarence; Isdell, Allen E; Thiruvaiyaru, Dharma S; Brisbin, I Lehr; Sanchez, Susan

    2014-06-01

    More than thirty years have passed since canine parvovirus (CPV) emerged as a significant pathogen and it continues to pose a severe threat to world canine populations. Published information suggests that flies (Diptera) may play a role in spreading this virus; however, they have not been studied extensively and the degree of their involvement is not known. This investigation was directed toward evaluating the vector capacity of such flies and determining their potential role in the transmission and ecology of CPV. Molecular diagnostic methods were used in this cross-sectional study to detect the presence of CPV in flies trapped at thirty-eight canine facilities. The flies involved were identified as belonging to the house fly (Muscidae), flesh fly (Sarcophagidae) and blow/bottle fly (Calliphoridae) families. A primary surveillance location (PSL) was established at a canine facility in south-central South Carolina, USA, to identify fly-virus interaction within the canine facility environment. Flies trapped at this location were pooled monthly and assayed for CPV using polymerase chain reaction (PCR) methods. These insects were found to be positive for CPV every month from February through the end of November 2011. Fly vector behavior and seasonality were documented and potential environmental risk factors were evaluated. Statistical analyses were conducted to compare the mean numbers of each of the three fly families captured, and after determining fly CPV status (positive or negative), it was determined whether there were significant relationships between numbers of flies captured, seasonal numbers of CPV cases, temperature and rainfall. Flies were also sampled at thirty-seven additional canine facility surveillance locations (ASL) and at four non-canine animal industry locations serving as negative field controls. Canine facility risk factors were identified and evaluated. Statistical analyses were conducted on the number of CPV cases reported within the past year to determine the correlation of fly CPV status (positive or negative) for each facility, facility design (open or closed), mean number of dogs present monthly and number of flies captured. Significant differences occurred between fly CPV positive vs. negative sites with regard to their CPV case numbers, fly numbers captured, and number of dogs present. At the ASL, a statistically significant relationship was found between PCR-determined fly CPV status (positive or negative) and facility design (open vs. closed). Facilities with open designs were more likely to have CPV outbreaks and more likely to have flies testing positive for CPV DNA. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Refurbishment and Automation of the Thermal/Vacuum Facilities at the Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Donohue, John T.; Johnson, Chris; Ogden, Rick; Sushon, Janet

    1998-01-01

    The thermal/vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the 11 facilities, 10 are currently scheduled for refurbishment and/or replacement as part of a 5-year implementation plan. Expected return on investment includes the reduction in test schedules, improvements in the safety of facility operations, reduction in the complexity of a test and the reduction in personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering and for the automation of thermal/vacuum facilities and thermal/vacuum tests. Automation of the thermal/vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs) and the use of Supervisory Control and Data Acquisition (SCADA) systems. These components allow the computer control and automation of mechanical components such as valves and pumps. In some cases, the chamber and chamber shroud require complete replacement while others require only mechanical component retrofit or replacement. The project of refurbishment and automation began in 1996 and has resulted in the computer control of one facility (Facility #225) and the integration of electronically controlled devices and PLCs within several other facilities. Facility 225 has been successfully controlled by PLC and SCADA for over one year. Only insignificant anomalies have occurred, and they were resolved with minimal impact to testing and operations. The remaining work will be performed over the next four to five years. Fiscal year 1998 includes the complete refurbishment of one facility, computer control of the thermal systems in two facilities, implementation of SCADA and PLC systems to support multiple facilities and the implementation of a database server to allow efficient test management and data analysis.

  16. Open Source Drug Discovery in Practice: A Case Study

    PubMed Central

    Årdal, Christine; Røttingen, John-Arne

    2012-01-01

    Background Open source drug discovery offers potential for developing new and inexpensive drugs to combat diseases that disproportionally affect the poor. The concept borrows two principal aspects from open source computing (i.e., collaboration and open access) and applies them to pharmaceutical innovation. By opening a project to external contributors, its research capacity may increase significantly. To date there are only a handful of open source R&D projects focusing on neglected diseases. We wanted to learn from these first movers, their successes and failures, in order to generate a better understanding of how a much-discussed theoretical concept works in practice and may be implemented. Methodology/Principal Findings A descriptive case study was performed, evaluating two specific R&D projects focused on neglected diseases: CSIR Team India Consortium's Open Source Drug Discovery project (CSIR OSDD) and The Synaptic Leap's Schistosomiasis project (TSLS). Data were gathered from four sources: interviews of participating members (n = 14), a survey of potential members (n = 61), an analysis of the websites and a literature review. Both cases have made significant achievements; however, they have done so in very different ways. CSIR OSDD encourages international collaboration, but its process facilitates contributions from mostly Indian researchers and students. Its processes are formal with each task being reviewed by a mentor (almost always offline) before a result is made public. TSLS, on the other hand, has attracted contributors internationally, albeit significantly fewer than CSIR OSDD. Both have obtained funding used to pay for access to facilities, physical resources and, at times, labor costs. TSLS releases its results into the public domain, whereas CSIR OSDD asserts ownership over its results. Conclusions/Significance Technically TSLS is an open source project, whereas CSIR OSDD is a crowdsourced project. However, both have enabled high quality research at low cost. The critical success factors appear to be clearly defined entry points, transparency and funding to cover core material costs. PMID:23029588

  17. A Bioinformatics Facility for NASA

    NASA Technical Reports Server (NTRS)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney for mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.
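
    To make the kind of analysis described here concrete, the sketch below runs a per-gene two-sample t-test on synthetic flight-versus-ground expression values; it is a generic illustration, not the facility's actual pipeline or the STS-108 data.

        # Toy per-gene differential-expression screen (synthetic data): compare
        # "flight" and "ground control" groups with a two-sample t-test.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n_genes = 5
        flight = rng.normal(loc=10.0, scale=1.0, size=(n_genes, 4))   # 4 flight mice
        ground = rng.normal(loc=10.0, scale=1.0, size=(n_genes, 4))   # 4 ground controls
        flight[0] += 2.0  # make one gene clearly up-regulated in flight

        for g in range(n_genes):
            t, p = stats.ttest_ind(flight[g], ground[g])
            flag = "candidate" if p < 0.05 else ""
            print(f"gene_{g}: t={t:+.2f}  p={p:.3f}  {flag}")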

  18. Engineering Review Information System

    NASA Technical Reports Server (NTRS)

    Grems, III, Edward G. (Inventor); Henze, James E. (Inventor); Bixby, Jonathan A. (Inventor); Roberts, Mark (Inventor); Mann, Thomas (Inventor)

    2015-01-01

    A disciplinal engineering review computer information system and method by defining a database of disciplinal engineering review process entities for an enterprise engineering program, opening a computer supported engineering item based upon the defined disciplinal engineering review process entities, managing a review of the opened engineering item according to the defined disciplinal engineering review process entities, and closing the opened engineering item according to the opened engineering item review.

  19. Designing Facilities for Collaborative Operations

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Powell, Mark; Backes, Paul; Steinke, Robert; Tso, Kam; Wales, Roxana

    2003-01-01

    A methodology for designing operational facilities for collaboration by multiple experts has begun to take shape as an outgrowth of a project to design such facilities for scientific operations of the planned 2003 Mars Exploration Rover (MER) mission. The methodology could also be applicable to the design of military "situation rooms" and other facilities for terrestrial missions. It was recognized in this project that modern mission operations depend heavily upon the collaborative use of computers. It was further recognized that tests have shown that the layout of a facility exerts a dramatic effect on the efficiency and endurance of the operations staff. The facility designs and the methodology developed during the project reflect this recognition. One element of the methodology is a metric, called effective capacity, that was created for use in evaluating proposed MER operational facilities and may also be useful for evaluating other collaboration spaces, including meeting rooms and military situation rooms. The effective capacity of a facility is defined as the number of people in the facility who can be meaningfully engaged in its operations. A person is considered to be meaningfully engaged if the person can (1) see, hear, and communicate with everyone else present; (2) see the material under discussion (typically data on a piece of paper, computer monitor, or projection screen); and (3) provide input to the product under development by the group. The effective capacity of a facility is less than the number of people that can physically fit in the facility. For example, a typical office that contains a desktop computer has an effective capacity of 4, while a small conference room that contains a projection screen has an effective capacity of around 10. Little or no benefit would be derived from allowing the number of persons in an operational facility to exceed its effective capacity: at best, the operations staff would be underutilized; at worst, operational performance would deteriorate. Elements of this methodology were applied to the design of three operations facilities for a series of rover field tests. These tests were observed by human-factors researchers, and their conclusions are being used to refine and extend the methodology to be used in the final design of the MER operations facility. Further work is underway to evaluate the use of personal digital assistant (PDA) units as portable input interfaces and communication devices in future mission operations facilities. A PDA equipped for wireless communication via Ethernet, Bluetooth, or another networking technology would cost less than a complete computer system, and would enable a collaborator to communicate electronically with computers and with other collaborators while moving freely within the virtual environment created by a shared immersive graphical display.
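
    A toy formalization of the effective-capacity metric described above (my own simplification, not the authors' formula) is to take the minimum of the three head-counts implied by the definition, as in the sketch below.

        # Toy reading of the "effective capacity" metric: a person counts only if
        # they can (1) see and hear everyone, (2) see the shared material, and
        # (3) provide input. Taking the minimum of the three head-counts is a
        # simplification for illustration, not the authors' formula.
        def effective_capacity(can_see_and_hear, can_view_material, can_provide_input):
            return min(can_see_and_hear, can_view_material, can_provide_input)

        # Example: a small conference room with a projection screen.
        print(effective_capacity(can_see_and_hear=12,
                                 can_view_material=10,
                                 can_provide_input=10))  # -> 10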

  20. Research briefing on contemporary problems in plasma science

    NASA Technical Reports Server (NTRS)

    1991-01-01

    An overview is presented of the broad perspective of all plasma science. Detailed discussions are given of scientific opportunities in various subdisciplines of plasma science. The first subdiscipline to be discussed is the area where the contemporary applications of plasma science are the most widespread, low temperature plasma science. Opportunities for new research and technology development that have emerged as byproducts of research in magnetic and inertial fusion are then highlighted. Then follows a discussion of new opportunities in ultrafast plasma science opened up by recent developments in laser and particle beam technology. Next, research that uses smaller scale facilities is discussed, first discussing non-neutral plasmas, and then the area of basic plasma experiments. Discussions of analytic theory and computational plasma physics and of space and astrophysical plasma physics are then presented.

  1. Planning and Designing School Computer Facilities. Interim Report.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton. Finance and Administration Div.

    This publication provides suggestions and considerations that may be useful for school jurisdictions developing facilities for computers in schools. An interim report for both use and review, it is intended to assist school system planners in clarifying the specifications needed by the architects, other design consultants, and purchasers involved.…

  2. Molecular Modeling and Computational Chemistry at Humboldt State University.

    ERIC Educational Resources Information Center

    Paselk, Richard A.; Zoellner, Robert W.

    2002-01-01

    Describes a molecular modeling and computational chemistry (MM&CC) facility for undergraduate instruction and research at Humboldt State University. This facility complex allows the introduction of MM&CC throughout the chemistry curriculum with tailored experiments in general, organic, and inorganic courses as well as a new molecular modeling…

  3. Strategy and methodology for rank-ordering Virginia state agencies regarding solar attractiveness and identification of specific project possibilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hewett, R.

    1997-12-31

    This paper describes the strategy and computer processing system that NREL and the Virginia Department of Mines, Minerals and Energy (DMME), the state energy office, are developing for computing solar attractiveness scores for state agencies and the individual facilities or buildings within each agency. In the case of an agency, solar attractiveness is a measure of that agency's having a significant number of facilities for which solar has the potential to be promising. In the case of a facility, solar attractiveness is a measure of its potential for being a good, economically viable candidate for a solar water heating system. Virginia State agencies are charged with reducing fossil energy and electricity use and expense. DMME is responsible for working with them to achieve the goals and for managing the state's energy consumption and cost monitoring program. This is done using the Fast Accounting System for Energy Reporting (FASER) computerized energy accounting and tracking system and database. Agencies report energy use and expenses (by individual facility and energy type) to DMME quarterly. DMME is also responsible for providing technical and other assistance services to agencies and facilities interested in investigating use of solar. Since Virginia has approximately 80 agencies operating over 8,000 energy-consuming facilities and since DMME's resources are limited, it is interested in being able to determine: (1) on which agencies to focus; (2) specific facilities on which to focus within each high-priority agency; and (3) irrespective of agency, which facilities are the most promising potential candidates for solar. The computer processing system described in this paper computes numerical solar attractiveness scores for the state's agencies and the individual facilities using the energy use and cost data in the FASER system database and the state's and NREL's experience in implementing, testing and evaluating solar water heating systems in commercial and government facilities.

  4. Elastic Cloud Computing Infrastructures in the Open Cirrus Testbed Implemented via Eucalyptus

    NASA Astrophysics Data System (ADS)

    Baun, Christian; Kunze, Marcel

    Cloud computing realizes the advantages and overcomes some restrictions of the grid computing paradigm. Elastic infrastructures can easily be created and managed by cloud users. In order to accelerate the research on data center management and cloud services, the OpenCirrus™ research testbed has been started by HP, Intel and Yahoo!. Although commercial cloud offerings are proprietary, Open Source solutions exist in the field of IaaS with Eucalyptus, PaaS with AppScale and at the applications layer with Hadoop MapReduce. This paper examines the I/O performance of cloud computing infrastructures implemented with Eucalyptus in contrast to Amazon S3.

  5. Computational toxicology using the OpenTox application programming interface and Bioclipse

    PubMed Central

    2011-01-01

    Background Toxicity is a complex phenomenon involving the potential adverse effect on a range of biological functions. Predicting toxicity involves using a combination of experimental data (endpoints) and computational methods to generate a set of predictive models. Such models rely strongly on being able to integrate information from many sources. The required integration of biological and chemical information sources requires, however, a common language to express our knowledge ontologically, and interoperating services to build reliable predictive toxicology applications. Findings This article describes progress in extending the integrative bio- and cheminformatics platform Bioclipse to interoperate with OpenTox, a semantic web framework which supports open data exchange and toxicology model building. The Bioclipse workbench environment enables functionality from OpenTox web services and easy access to OpenTox resources for evaluating toxicity properties of query molecules. Relevant cases and interfaces based on ten neurotoxins are described to demonstrate the capabilities provided to the user. The integration takes advantage of semantic web technologies, thereby providing an open and simplifying communication standard. Additionally, the use of ontologies ensures proper interoperation and reliable integration of toxicity information from both experimental and computational sources. Conclusions A novel computational toxicity assessment platform was generated from integration of two open science platforms related to toxicology: Bioclipse, that combines a rich scriptable and graphical workbench environment for integration of diverse sets of information sources, and OpenTox, a platform for interoperable toxicology data and computational services. The combination provides improved reliability and operability for handling large data sets by the use of the Open Standards from the OpenTox Application Programming Interface. This enables simultaneous access to a variety of distributed predictive toxicology databases, and algorithm and model resources, taking advantage of the Bioclipse workbench handling the technical layers. PMID:22075173
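
    Because OpenTox exposes data and models as REST web services, client-side access reduces to ordinary HTTP calls. The sketch below issues a GET against a placeholder OpenTox-style compound resource; the host and resource path are hypothetical, and the authoritative contract is the OpenTox Application Programming Interface itself.

        # Sketch of querying an OpenTox-style REST resource; the service URL and
        # compound path below are placeholders, not endpoints from the paper.
        import requests

        SERVICE = "https://opentox.example.org"        # hypothetical host
        compound_uri = f"{SERVICE}/compound/cid/2244"  # hypothetical compound resource

        # Ask for a plain list of related resource URIs (a representation commonly
        # used by OpenTox services); RDF representations are also typical.
        resp = requests.get(compound_uri, headers={"Accept": "text/uri-list"}, timeout=30)
        resp.raise_for_status()
        print(resp.text.splitlines()[:5])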

  6. A Dedicated Computational Platform for Cellular Monte Carlo T-CAD Software Tools

    DTIC Science & Technology

    2015-07-14

    computer that establishes an encrypted Virtual Private Network (OpenVPN [44]) based on the Secure Socket Layer (SSL) paradigm. Each user is given a...security certificate for each device used to connect to the computing nodes. Stable OpenVPN clients are available for Linux, Microsoft Windows, Apple OSX...platform is granted by an encrypted connection based on the Secure Socket Layer (SSL) protocol, and implemented in the OpenVPN Virtual Private Network

  7. OpenCluster: A Flexible Distributed Computing Framework for Astronomical Data Processing

    NASA Astrophysics Data System (ADS)

    Wei, Shoulin; Wang, Feng; Deng, Hui; Liu, Cuiyin; Dai, Wei; Liang, Bo; Mei, Ying; Shi, Congming; Liu, Yingbo; Wu, Jingping

    2017-02-01

    The volume of data generated by modern astronomical telescopes is extremely large and rapidly growing. However, current high-performance data processing architectures/frameworks are not well suited for astronomers because of their limitations and programming difficulties. In this paper, we therefore present OpenCluster, an open-source distributed computing framework to support rapidly developing high-performance processing pipelines of astronomical big data. We first detail the OpenCluster design principles and implementations and present the APIs facilitated by the framework. We then demonstrate a case in which OpenCluster is used to resolve complex data processing problems for developing a pipeline for the Mingantu Ultrawide Spectral Radioheliograph. Finally, we present our OpenCluster performance evaluation. Overall, OpenCluster provides not only high fault tolerance and simple programming interfaces, but also a flexible means of scaling up the number of interacting entities. OpenCluster thereby provides an easily integrated distributed computing framework for quickly developing a high-performance data processing system of astronomical telescopes and for significantly reducing software development expenses.
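
    The manager/worker pattern that such a framework coordinates across a cluster can be illustrated on a single machine with the Python standard library; the sketch below is a generic illustration only and does not use the OpenCluster API.

        # Generic manager/worker sketch of the task-distribution pattern that a
        # distributed framework like OpenCluster coordinates across machines;
        # this is a single-machine stand-in, not the OpenCluster API.
        from concurrent.futures import ProcessPoolExecutor, as_completed

        def process_scan(scan_id):
            # Placeholder for a per-observation processing step (calibration,
            # imaging, etc. in a real radioheliograph pipeline).
            return scan_id, scan_id ** 2

        if __name__ == "__main__":
            with ProcessPoolExecutor(max_workers=4) as pool:
                futures = [pool.submit(process_scan, s) for s in range(8)]
                for fut in as_completed(futures):
                    scan_id, result = fut.result()
                    print(f"scan {scan_id} -> {result}")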

  8. 36 CFR 1280.92 - When are the Presidential library museums open to the public?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... library museums open to the public? § 1280.92 Parks, Forests, and Public Property NATIONAL... Use of Facilities in Presidential Libraries? § 1280.92 When are the Presidential library museums open to the public? (a) The hours of operation at Presidential Library museums vary. Please contact the...

  9. Collaborative Calibrated Peer Assessment in Massive Open Online Courses

    ERIC Educational Resources Information Center

    Boudria, Asma; Lafifi, Yacine; Bordjiba, Yamina

    2018-01-01

    The free and open-access nature of courses in Massive Open Online Courses (MOOCs) facilitates disseminating information to a large number of participants. However, the "massive" property can generate many pedagogical problems, such as the assessment of learners, which is considered the major difficulty faced in the…

  10. 50th Anniversary Open House

    NASA Image and Video Library

    2011-06-02

    Astronaut Scott Altman talks with guests during a 50th Anniversary Open House activity at Stennis Space Center on June 2. Stennis' yearlong anniversary celebration culminates Oct. 25, the anniversary of the day in 1961 that NASA publicly announced plans to build the south Mississippi facility. The June 2 open house attracted more than 1,000 visitors.

  11. A survey of the computer literacy of undergraduate dental students at a University Dental School in Ireland during the academic year 1997-98.

    PubMed

    Ray, N J; Hannigan, A

    1999-05-01

    As dental practice management becomes more computer-based, the efficient functioning of the dentist will become dependent on adequate computer literacy. A survey has been carried out into the computer literacy of a cohort of 140 undergraduate dental students at a University Dental School in Ireland (years 1-5) in the academic year 1997-98. Aspects investigated by anonymous questionnaire were: (1) keyboard skills; (2) computer skills; (3) access to computer facilities; (4) software competencies and (5) use of medical library computer facilities. The students are relatively unfamiliar with basic computer hardware and software: 51.1% considered their expertise with computers as "poor"; 34.3% had taken a formal typewriting or computer keyboarding course; 7.9% had taken a formal computer course at university level and 67.2% were without access to computer facilities at their term-time residences. A majority of students had never used either word-processing, spreadsheet, or graphics programs. Programs relating to "informatics" were more popular, such as literature searching, accessing the Internet and the use of e-mail, which represent the major use of the computers in the medical library. The lack of experience with computers may be addressed by including suitable computing courses at the secondary level (age 13-18 years) and/or tertiary level (FE/HE) education programmes. Such training may promote greater use of generic software, particularly in the library, with a more electronic-based approach to data handling.

  12. Adolescents' physical activity: competition between perceived neighborhood sport facilities and home media resources.

    PubMed

    Wong, Bonny Yee-Man; Cerin, Ester; Ho, Sai-Yin; Mak, Kwok-Kei; Lo, Wing-Sze; Lam, Tai-Hing

    2010-04-01

    To examine the independent, competing, and interactive effects of perceived availability of specific types of media in the home and neighborhood sport facilities on adolescents' leisure-time physical activity (PA). Survey data from 34 369 students in 42 Hong Kong secondary schools were collected (2006-07). Respondents reported moderate-to-vigorous leisure-time PA, presence of sport facilities in the neighborhood and of media equipment in the home. Being sufficiently physically active was defined as engaging in at least 30 minutes of non-school leisure-time PA on a daily basis. Logistic regression and post-estimation linear combinations of regression coefficients were used to examine the independent and competing effects of sport facilities and media equipment on leisure-time PA. Perceived availability of sport facilities was positively (OR(boys) = 1.17; OR(girls) = 1.26), and that of computer/Internet negatively (OR(boys) = 0.48; OR(girls) = 0.41), associated with being sufficiently active. A significant positive association between video game console and being sufficiently active was found in girls (OR(girls) = 1.19) but not in boys. Compared with adolescents without sport facilities and media equipment, those who reported sport facilities only were more likely to be physically active (OR(boys) = 1.26; OR(girls) = 1.34), while those who additionally reported computer/Internet were less likely to be physically active (OR(boys) = 0.60; OR(girls) = 0.54). Perceived availability of sport facilities in the neighborhood may positively impact on adolescents' level of physical activity. However, having computer/Internet may cancel out the effects of active opportunities in the neighborhood. This suggests that physical activity programs for adolescents need to consider limiting the access to computer-mediated communication as an important intervention component.
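
    For readers who want to see the form of analysis reported here, the sketch below fits a logistic regression on synthetic data and converts coefficients to odds ratios; the data and variable names are invented, and the numbers bear no relation to the study's estimates.

        # Sketch of a logistic-regression / odds-ratio analysis on synthetic data
        # (not the study's data); odds ratios are exp(coefficient).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 2000
        sport_facility = rng.integers(0, 2, n)      # perceived sport facility nearby
        computer_internet = rng.integers(0, 2, n)   # computer/Internet at home
        logit_p = -0.5 + 0.2 * sport_facility - 0.7 * computer_internet
        active = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

        X = sm.add_constant(np.column_stack([sport_facility, computer_internet]))
        fit = sm.Logit(active, X).fit(disp=False)
        odds_ratios = np.exp(fit.params)
        print(dict(zip(["const", "sport_facility", "computer_internet"],
                       odds_ratios.round(2))))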

  13. Opening Up New Possibilities.

    ERIC Educational Resources Information Center

    Kennedy, Mike

    2001-01-01

    Discusses technology's impact on educational facilities and operations. Technology's influence on a school's ability to streamline its business operations and manage its facilities more efficiently is examined, and how Baylor University (Waco, TX) used technology to cut energy costs is highlighted. (GR)

  14. 75 FR 13259 - Public Telecommunications Facilities Program: New Closing Date

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-19

    ... DEPARTMENT OF COMMERCE National Telecommunications and Information Administration Docket No. 100305127-0127-01 Public Telecommunications Facilities Program: New Closing Date AGENCY: National Telecommunications and Information Administration, U.S. Department of Commerce. ACTION: Notice; to re-open...

  15. Description and operational status of the National Transonic Facility computer complex

    NASA Technical Reports Server (NTRS)

    Boyles, G. B., Jr.

    1986-01-01

    This paper describes the National Transonic Facility (NTF) computer complex and its support of tunnel operations. The capabilities for research data acquisition and reduction are discussed along with the types of data that can be acquired and presented. Pretest, test, and posttest capabilities are also outlined, along with a discussion of how the computer complex monitors the tunnel control processes and provides the tunnel operators with information needed to control the tunnel. Planned enhancements to the computer complex for support of future testing are presented.

  16. Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators

    PubMed Central

    Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew

    2014-01-01

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case-study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved more than speedup above the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs of at least faster than the sequential implementation and faster than a parallelized OpenMP implementation. An implementation of OpenMP on Intel MIC coprocessor provided speedups of with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950
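
    The serial starting point for accelerator ports of this kind is typically an explicit finite-difference stencil update. The NumPy sketch below advances the plain 2D wave equation as a generic stand-in; it is not the cardiac action-potential model used in the paper.

        # Explicit finite-difference update for the 2D wave equation: the kind of
        # serial stencil loop that OpenACC/OpenCL/OpenMP ports accelerate.
        # Generic sketch only; not the cardiac action-potential model itself.
        import numpy as np

        n, steps = 128, 200
        c, dx, dt = 1.0, 1.0, 0.5      # c*dt/dx = 0.5 satisfies the 2D CFL limit 1/sqrt(2)
        u_prev = np.zeros((n, n))
        u = np.zeros((n, n))
        u[n // 2, n // 2] = 1.0        # point disturbance in the centre

        coef = (c * dt / dx) ** 2
        for _ in range(steps):
            lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
                   - 4.0 * u[1:-1, 1:-1])
            u_next = u.copy()
            u_next[1:-1, 1:-1] = 2.0 * u[1:-1, 1:-1] - u_prev[1:-1, 1:-1] + coef * lap
            u_prev, u = u, u_next

        print(float(np.abs(u).max()))  # amplitude of the expanding wavefront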

  17. Trends in Facility Management Technology: The Emergence of the Internet, GIS, and Facility Assessment Decision Support.

    ERIC Educational Resources Information Center

    Teicholz, Eric

    1997-01-01

    Reports research on trends in computer-aided facilities management using the Internet and geographic information system (GIS) technology for space utilization research. Proposes that facility assessment software holds promise for supporting facility management decision making, and outlines four areas for its use: inventory; evaluation; reporting;…

  18. OpenID Connect as a security service in cloud-based medical imaging systems.

    PubMed

    Ma, Weina; Sartipi, Kamran; Sharghigoorabi, Hassan; Koff, David; Bak, Peter

    2016-04-01

    The evolution of cloud computing is driving the next generation of medical imaging systems. However, privacy and security concerns have consistently been regarded as the major obstacles to the adoption of cloud computing by healthcare domains. OpenID Connect, which combines OpenID and OAuth, is an emerging representational state transfer-based federated identity solution. It is one of the most widely adopted open standards and is positioned to become the de facto standard for securing cloud computing and mobile applications; it has even been called the "Kerberos of the cloud." We introduce OpenID Connect as an authentication and authorization service in cloud-based diagnostic imaging (DI) systems, and propose enhancements that allow this technology to be incorporated within distributed enterprise environments. The objective of this study is to offer solutions for the secure sharing of medical images among a diagnostic imaging repository (DI-r) and heterogeneous picture archiving and communication systems (PACS), as well as Web-based and mobile clients, in the cloud ecosystem. The main objective is to use an open-source OpenID Connect single sign-on and authorization service in a user-centric manner, so that deploying the DI-r and PACS to private or community clouds provides security levels equivalent to the traditional computing model.
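
    As a concrete illustration of the single sign-on flow mentioned above, the sketch below exchanges an OpenID Connect authorization code for tokens at a provider's token endpoint. It is a minimal, assumption-laden example rather than the authors' implementation: the issuer URL, endpoint path, client credentials, and redirect URI are hypothetical placeholders.

      import requests

      ISSUER = "https://openid.example.org"          # hypothetical OpenID Provider
      TOKEN_ENDPOINT = ISSUER + "/oauth2/token"      # placeholder endpoint path

      def exchange_code_for_tokens(auth_code: str) -> dict:
          """Trade the authorization code returned to a PACS/DI-r client for tokens."""
          resp = requests.post(
              TOKEN_ENDPOINT,
              data={
                  "grant_type": "authorization_code",
                  "code": auth_code,
                  "redirect_uri": "https://pacs.example.org/callback",
                  "client_id": "di-r-client",        # hypothetical client registration
                  "client_secret": "REPLACE_ME",
              },
              timeout=10,
          )
          resp.raise_for_status()
          return resp.json()   # typically contains access_token, id_token, refresh_token

    The returned ID token would then be validated (signature, issuer, audience, expiry) before the imaging client trusts the asserted identity.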

  19. Demagnetization Analysis in Excel (DAIE) - An open source workbook in Excel for viewing and analyzing demagnetization data from paleomagnetic discrete samples and u-channels

    NASA Astrophysics Data System (ADS)

    Sagnotti, Leonardo

    2013-04-01

    Modern rock magnetometers and stepwise demagnetization procedures result in the production of large datasets, which need versatile and fast software for their display and analysis. Various software packages for paleomagnetic analysis have recently been developed to overcome the limited capability and loss of operability of early codes written in obsolete computer languages and/or for platforms not compatible with modern 64-bit processors. The Demagnetization Analysis in Excel (DAIE) workbook is a new tool designed to make the analysis of demagnetization data easy and accessible within an application (Microsoft Excel) that is widely used and available on both the Microsoft Windows and Mac OS X operating systems. The widespread use of Excel should guarantee a long working life, since the compatibility and functionality of current Excel files will most likely be maintained as new processors and operating systems are developed. DAIE is designed for viewing and analyzing stepwise demagnetization data of both discrete and u-channel samples. DAIE consists of a single file and has an open modular structure organized in 10 distinct worksheets. The standard demagnetization diagrams and various commonly used parameters are shown on the same worksheet, together with selectable parameters and user choices. The characteristic remanence components may be computed by principal component analysis (PCA) on a selected interval of demagnetization steps. PCA results can be saved either sample by sample or automatically, by applying the selected choices to all the samples in the file. The DAIE open structure allows easy personalization, development and improvement. The workbook has the following features, which may be valuable for various users: operability on nearly all computers and platforms; easy input of demagnetization data by "copy and paste" from ASCII files; easy export of computed parameters and demagnetization plots; complete control of the whole workflow and the possibility for any user to extend the workbook; a modular structure with distinct worksheets for each type of analysis and plot, making implementation and personalization easier; suitability for educational purposes, since all computations and analyses are easily traceable and accessible; and automatic, fast analysis of large batches of demagnetization data, such as those measured on u-channel samples. The DAIE workbook and the "User manual" are available for download on a dedicated web site (http://roma2.rm.ingv.it/en/facilities/software/49/daie).
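
    The PCA step described above can be reproduced outside Excel in a few lines; the sketch below fits a characteristic remanence direction to a selected interval of demagnetization steps using a free (non-anchored) line fit and reports the maximum angular deviation (MAD). The Cartesian remanence vectors are synthetic illustrative values, and x = north, y = east, z = down is assumed.

      import numpy as np

      # One row per demagnetization step: (x, y, z) remanence components (synthetic).
      steps = np.array([
          [10.0, 4.9, 8.1],
          [ 8.1, 4.0, 6.4],
          [ 6.2, 3.1, 4.9],
          [ 4.0, 2.0, 3.2],
          [ 2.1, 1.0, 1.6],
      ])

      centred = steps - steps.mean(axis=0)      # free line fit, not anchored to the origin
      _, s, vt = np.linalg.svd(centred, full_matrices=False)
      direction = vt[0]                         # best-fit (characteristic component) direction
      # Note: the sign of the direction is ambiguous; flip it if it opposes the demagnetization path.

      # Maximum angular deviation (MAD), a standard measure of fit quality.
      mad = np.degrees(np.arctan2(np.hypot(s[1], s[2]), s[0]))

      dec = np.degrees(np.arctan2(direction[1], direction[0])) % 360.0
      inc = np.degrees(np.arcsin(direction[2]))  # vt rows are unit vectors
      print(f"declination={dec:.1f} deg, inclination={inc:.1f} deg, MAD={mad:.1f} deg")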

  20. Detail of bricked up storage vault opening Central of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Detail of bricked up storage vault opening - Central of Georgia Railway, Savannah Repair Shops & Terminal Facilities, Brick Storage Vaults under Jones Street, Bounded by West Broad, Jones, West Boundary & Hull Streets, Savannah, Chatham County, GA

  1. The UARS and open data concept and analysis study. [upper atmosphere

    NASA Technical Reports Server (NTRS)

    Mittal, M.; Nebb, J.; Woodward, H.

    1983-01-01

    Alternative concepts for a common design for the UARS and OPEN Central Data Handling Facility (CDHF) are offered. Costs for alternative implementations of the UARS designs are presented, showing that the system design does not restrict the implementation to a single manufacturer. Processing demands on the alternative UARS CDHF implementations are then discussed. With this information at hand, together with estimates for OPEN processing demands, it is shown that any shortfall in system capability for OPEN support can be remedied by either component upgrades or array processing attachments rather than a system redesign. In addition to a common system design, it is shown that there is significant potential for common software design, especially in the areas of data management software and non-user-unique production software. Archiving of the CDHF data is discussed. Following that, cost examples for several modes of communication between the CDHF and Remote User Facilities are presented. Technology application is discussed.

  2. Open building and flexibility in healthcare: strategies for shaping spaces for social aspects.

    PubMed

    Capolongo, Stefano; Buffoli, Maddalena; Nachiero, Dario; Tognolo, Chiara; Zanchi, Eleonora; Gola, Marco

    2016-01-01

    The fast development of technology and medicine influences the functioning of healthcare facilities as health promoters for society, making flexibility a fundamental requirement. Among the many ways to ensure adaptability, one that allows change without increasing the building's overall size is the Open Building approach. Starting from an analysis of the state of the art and many case studies, eight evaluation parameters were defined, and their relative importance was appraised through a weighting system developed with several experts. The resulting evaluation tool establishes to what extent healthcare facilities follow Open Building principles. The tool was tested on ten case studies, chosen for their flexible features, in order to determine its effectiveness and to identify the projects' weaknesses and strengths. The results suggest that many Open Building principles are already in use, but only through good design thinking will it be possible to guarantee healthcare architectures that can adapt to future social challenges.

  3. 17 CFR 190.07 - Calculation of allowed net equity.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... computing, with respect to such account, the sum of: (i) The ledger balance; (ii) The open trade balance... purposes of this paragraph (b)(1), the open trade balance of a customer's account shall be computed by... ledger balance or open trade balance of any customer, exclude any security futures products, any gains or...

  4. 17 CFR 190.07 - Calculation of allowed net equity.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... computing, with respect to such account, the sum of: (i) The ledger balance; (ii) The open trade balance... purposes of this paragraph (b)(1), the open trade balance of a customer's account shall be computed by... ledger balance or open trade balance of any customer, exclude any security futures products, any gains or...

  5. Range Image Flow using High-Order Polynomial Expansion

    DTIC Science & Technology

    2013-09-01

    included as a default algorithm in the OpenCV library [2]. The research of estimating the motion between range images, or range flow, is much more...Journal of Computer Vision, vol. 92, no. 1, pp. 1‒31. 2. G. Bradski and A. Kaehler. 2008. Learning OpenCV : Computer Vision with the OpenCV Library
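
    Since the snippet cites OpenCV, the following is a minimal sketch of dense optical flow via Gunnar Farneback's polynomial expansion as shipped in OpenCV; it is offered only as background and is not the report's range-flow method. The synthetic frames and parameter values are illustrative assumptions.

      import numpy as np
      import cv2

      # Two synthetic grayscale frames: a bright square shifted by (dx, dy) = (2, 3) pixels.
      prev_frame = np.zeros((128, 128), dtype=np.uint8)
      next_frame = np.zeros((128, 128), dtype=np.uint8)
      prev_frame[40:60, 40:60] = 255
      next_frame[43:63, 42:62] = 255

      # Arguments: pyramid scale, levels, window size, iterations,
      # polynomial neighbourhood size, Gaussian sigma, flags.
      flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                          0.5, 3, 15, 3, 5, 1.2, 0)

      # The mean flow over the moving region approximates the imposed shift.
      dx = flow[40:60, 40:60, 0].mean()
      dy = flow[40:60, 40:60, 1].mean()
      print(f"estimated shift: dx={dx:.1f}, dy={dy:.1f}")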

  6. Sandia QIS Capabilities.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, Richard P.

    2017-07-01

    Sandia National Laboratories has developed a broad set of capabilities in quantum information science (QIS), including elements of quantum computing, quantum communications, and quantum sensing. The Sandia QIS program is built atop unique DOE investments at the laboratories, including the MESA microelectronics fabrication facility, the Center for Integrated Nanotechnologies (CINT) facilities (joint with LANL), the Ion Beam Laboratory, and ASC High Performance Computing (HPC) facilities. Sandia has invested $75 M of LDRD funding over 12 years to develop unique, differentiating capabilities that leverage these DOE infrastructure investments.

  7. Screensaver: an open source lab information management system (LIMS) for high throughput screening facilities

    PubMed Central

    2010-01-01

    Background Shared-usage high throughput screening (HTS) facilities are becoming more common in academe as large-scale small molecule and genome-scale RNAi screening strategies are adopted for basic research purposes. These shared facilities require a unique informatics infrastructure that must not only provide access to and analysis of screening data, but must also manage the administrative and technical challenges associated with conducting numerous, interleaved screening efforts run by multiple independent research groups. Results We have developed Screensaver, a free, open source, web-based lab information management system (LIMS), to address the informatics needs of our small molecule and RNAi screening facility. Screensaver supports the storage and comparison of screening data sets, as well as the management of information about screens, screeners, libraries, and laboratory work requests. To our knowledge, Screensaver is one of the first applications to support the storage and analysis of data from both genome-scale RNAi screening projects and small molecule screening projects. Conclusions The informatics and administrative needs of an HTS facility may be best managed by a single, integrated, web-accessible application such as Screensaver. Screensaver has proven useful in meeting the requirements of the ICCB-Longwood/NSRB Screening Facility at Harvard Medical School, and has provided similar benefits to other HTS facilities. PMID:20482787

  8. Screensaver: an open source lab information management system (LIMS) for high throughput screening facilities.

    PubMed

    Tolopko, Andrew N; Sullivan, John P; Erickson, Sean D; Wrobel, David; Chiang, Su L; Rudnicki, Katrina; Rudnicki, Stewart; Nale, Jennifer; Selfors, Laura M; Greenhouse, Dara; Muhlich, Jeremy L; Shamu, Caroline E

    2010-05-18

    Shared-usage high throughput screening (HTS) facilities are becoming more common in academe as large-scale small molecule and genome-scale RNAi screening strategies are adopted for basic research purposes. These shared facilities require a unique informatics infrastructure that must not only provide access to and analysis of screening data, but must also manage the administrative and technical challenges associated with conducting numerous, interleaved screening efforts run by multiple independent research groups. We have developed Screensaver, a free, open source, web-based lab information management system (LIMS), to address the informatics needs of our small molecule and RNAi screening facility. Screensaver supports the storage and comparison of screening data sets, as well as the management of information about screens, screeners, libraries, and laboratory work requests. To our knowledge, Screensaver is one of the first applications to support the storage and analysis of data from both genome-scale RNAi screening projects and small molecule screening projects. The informatics and administrative needs of an HTS facility may be best managed by a single, integrated, web-accessible application such as Screensaver. Screensaver has proven useful in meeting the requirements of the ICCB-Longwood/NSRB Screening Facility at Harvard Medical School, and has provided similar benefits to other HTS facilities.

  9. RESIF Seismology Datacentre: Recently Released Data and New Services. Computing with Dense Seismic Networks Data.

    NASA Astrophysics Data System (ADS)

    Volcke, P.; Pequegnat, C.; Grunberg, M.; Lecointre, A.; Bzeznik, B.; Wolyniec, D.; Engels, F.; Maron, C.; Cheze, J.; Pardo, C.; Saurel, J. M.; André, F.

    2015-12-01

    RESIF is a nationwide French project aimed at building a high-quality observation system to observe and understand the Earth's interior. RESIF deals with permanent seismic network data as well as mobile network data, including dense/semi-dense arrays. The RESIF project is distributed among different nodes providing qualified data to the main datacentre at Université Grenoble Alpes, France. Data control and qualification are performed by each individual node: the poster will provide some insights into quality control of RESIF broadband seismic data. We will then present data that has recently been made publicly available. Data are distributed through the worldwide FDSN and European EIDA standard protocols. A new web portal is now open to explore and download seismic data and metadata. The RESIF datacentre is also now connected to the Grenoble University High Performance Computing (HPC) facility: a typical use case will be presented using iRODS technologies. The use of dense observation networks is increasing, bringing challenges in data growth and handling: we will present an example where the HDF5 data format was used as an alternative to the usual seismology data formats.
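
    Because the datacentre exposes standard FDSN web services, waveforms can be retrieved with a few lines of ObsPy; the sketch below is a generic illustration, and the network, station, and channel codes are assumptions rather than values taken from the abstract.

      from obspy import UTCDateTime
      from obspy.clients.fdsn import Client

      client = Client("RESIF")                 # FDSN web services hosted by the RESIF datacentre
      t0 = UTCDateTime("2015-01-01T00:00:00")

      # One hour of broadband vertical-component data (illustrative station code).
      st = client.get_waveforms(network="FR", station="OGDI", location="00",
                                channel="HHZ", starttime=t0, endtime=t0 + 3600)
      print(st)                                # summary of the retrieved traces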

  10. IDEAL: Images Across Domains, Experiments, Algorithms and Learning

    NASA Astrophysics Data System (ADS)

    Ushizima, Daniela M.; Bale, Hrishikesh A.; Bethel, E. Wes; Ercius, Peter; Helms, Brett A.; Krishnan, Harinarayan; Grinberg, Lea T.; Haranczyk, Maciej; Macdowell, Alastair A.; Odziomek, Katarzyna; Parkinson, Dilworth Y.; Perciano, Talita; Ritchie, Robert O.; Yang, Chao

    2016-11-01

    Research across science domains is increasingly reliant on image-centric data. Software tools are in high demand to uncover relevant, but hidden, information in digital images, such as those coming from faster next generation high-throughput imaging platforms. The challenge is to analyze the data torrent generated by the advanced instruments efficiently, and provide insights such as measurements for decision-making. In this paper, we overview work performed by an interdisciplinary team of computational and materials scientists, aimed at designing software applications and coordinating research efforts connecting (1) emerging algorithms for dealing with large and complex datasets; (2) data analysis methods with emphasis in pattern recognition and machine learning; and (3) advances in evolving computer architectures. Engineering tools around these efforts accelerate the analyses of image-based recordings, improve reusability and reproducibility, scale scientific procedures by reducing time between experiments, increase efficiency, and open opportunities for more users of the imaging facilities. This paper describes our algorithms and software tools, showing results across image scales, demonstrating how our framework plays a role in improving image understanding for quality control of existent materials and discovery of new compounds.

  11. Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture

    DOEpatents

    Muller, George; Perkins, Casey J.; Lancaster, Mary J.; MacDonald, Douglas G.; Clements, Samuel L.; Hutton, William J.; Patrick, Scott W.; Key, Bradley Robert

    2015-07-28

    Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture are described. According to one aspect, a computer-implemented security evaluation method includes accessing information regarding a physical architecture and a cyber architecture of a facility, building a model of the facility comprising a plurality of physical areas of the physical architecture, a plurality of cyber areas of the cyber architecture, and a plurality of pathways between the physical areas and the cyber areas, identifying a target within the facility, executing the model a plurality of times to simulate a plurality of attacks against the target by an adversary traversing at least one of the areas in the physical domain and at least one of the areas in the cyber domain, and using results of the executing, providing information regarding a security risk of the facility with respect to the target.
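
    A minimal sketch of the general idea, not the patented method itself: represent the facility as physical and cyber areas joined by pathways with assumed traversal probabilities, then run Monte Carlo walks by an adversary toward a target to estimate a reaching probability. All area names and probabilities below are illustrative.

      import random

      # Facility model: area -> list of (neighbouring area, probability of successful traversal).
      pathways = {
          "perimeter":     [("lobby", 0.9)],
          "lobby":         [("server_room", 0.3), ("office_lan", 0.6)],
          "office_lan":    [("scada_network", 0.4)],
          "server_room":   [("scada_network", 0.7)],
          "scada_network": [],
      }
      TARGET = "scada_network"

      def simulate_attack(start="perimeter", max_hops=10):
          area = start
          for _ in range(max_hops):
              if area == TARGET:
                  return True
              options = pathways.get(area, [])
              if not options:
                  return False
              nxt, p = random.choice(options)   # adversary picks a pathway at random
              if random.random() >= p:          # traversal blocked or detected
                  return False
              area = nxt
          return area == TARGET

      runs = 10_000
      hits = sum(simulate_attack() for _ in range(runs))
      print(f"estimated probability of reaching the target: {hits / runs:.3f}")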

  12. OpenTopography: Addressing Big Data Challenges Using Cloud Computing, HPC, and Data Analytics

    NASA Astrophysics Data System (ADS)

    Crosby, C. J.; Nandigam, V.; Phan, M.; Youn, C.; Baru, C.; Arrowsmith, R.

    2014-12-01

    OpenTopography (OT) is a geoinformatics-based data facility initiated in 2009 for democratizing access to high-resolution topographic data, derived products, and tools. Hosted at the San Diego Supercomputer Center (SDSC), OT utilizes cyberinfrastructure, including large-scale data management, high-performance computing, and service-oriented architectures to provide efficient Web-based access to large, high-resolution topographic datasets. OT collocates data with processing tools to enable users to quickly access custom data and derived products for their application. OT's ongoing R&D efforts aim to solve emerging technical challenges associated with exponential growth in data, higher-order data products, and the user base. Optimization of data management strategies can be informed by a comprehensive set of OT user access metrics that allows us to better understand usage patterns with respect to the data. By analyzing the spatiotemporal access patterns within the datasets, we can map areas of the data archive that are highly active (hot) versus the ones that are rarely accessed (cold). This enables us to architect a tiered storage environment consisting of high-performance disk storage (SSD) for the hot areas and less expensive, slower disk for the cold ones, thereby optimizing price-to-performance. From a compute perspective, OT is looking at cloud-based solutions such as the Microsoft Azure platform to handle sudden increases in load. An OT virtual machine image in Microsoft's VM Depot can be invoked and deployed quickly in response to increased system demand. OT has also integrated SDSC HPC systems like the Gordon supercomputer into our infrastructure tier to enable compute-intensive workloads like parallel computation of hydrologic routing on high-resolution topography. This capability also allows OT to scale to HPC resources during high loads to meet user demand and provide more efficient processing. With a growing user base and maturing scientific user community come new requests for algorithms and processing capabilities. To address this demand, OT is developing an extensible service-based architecture for integrating community-developed software. This "pluggable" approach to Web service deployment will enable new processing and analysis tools to run collocated with OT-hosted data.
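
    A toy sketch of the hot/cold classification described above, under an assumed log format: count accesses per dataset tile and split tiles across storage tiers by a threshold. The records, field layout, and cutoff are hypothetical.

      from collections import Counter

      # Each access record: (dataset_id, tile_x, tile_y) extracted from server logs (synthetic).
      access_log = [
          ("socal_lidar", 10, 4), ("socal_lidar", 10, 4), ("socal_lidar", 11, 4),
          ("socal_lidar", 10, 4), ("baja_lidar", 2, 7), ("socal_lidar", 11, 4),
      ]

      counts = Counter(access_log)
      threshold = 2                              # illustrative hot/cold cutoff

      hot = {tile for tile, n in counts.items() if n >= threshold}   # candidates for the SSD tier
      cold = set(counts) - hot                                       # candidates for slower disk
      print("hot tiles: ", sorted(hot))
      print("cold tiles:", sorted(cold))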

  13. Cost-Minimization Analysis of Open and Endoscopic Carpal Tunnel Release.

    PubMed

    Zhang, Steven; Vora, Molly; Harris, Alex H S; Baker, Laurence; Curtin, Catherine; Kamal, Robin N

    2016-12-07

    Carpal tunnel release is the most common upper-limb surgical procedure performed annually in the U.S. There are 2 surgical methods of carpal tunnel release: open or endoscopic. Currently, there is no clear clinical or economic evidence supporting the use of one procedure over the other. We completed a cost-minimization analysis of open and endoscopic carpal tunnel release, testing the null hypothesis that there is no difference between the procedures in terms of cost. We conducted a retrospective review using a private-payer and Medicare Advantage database composed of 16 million patient records from 2007 to 2014. The cohort consisted of records with an ICD-9 (International Classification of Diseases, Ninth Revision) diagnosis of carpal tunnel syndrome and a CPT (Current Procedural Terminology) code for carpal tunnel release. Payer fees were used to define cost. We also assessed other associated costs of care, including those of electrodiagnostic studies and occupational therapy. Bivariate comparisons were performed using the chi-square test and the Student t test. Data showed that 86% of the patients underwent open carpal tunnel release. Reimbursement fees for endoscopic release were significantly higher than for open release. Facility fees were responsible for most of the difference between the procedures in reimbursement: facility fees averaged $1,884 for endoscopic release compared with $1,080 for open release (p < 0.0001). Endoscopic release also demonstrated significantly higher physician fees than open release (an average of $555 compared with $428; p < 0.0001). Occupational therapy fees associated with endoscopic release were less than those associated with open release (an average of $237 per session compared with $272; p = 0.07). The total average annual reimbursement per patient for endoscopic release (facility, surgeon, and occupational therapy fees) was significantly higher than for open release ($2,602 compared with $1,751; p < 0.0001). Our data showed that the total average fees per patient for endoscopic release were significantly higher than those for open release, although there currently is no strong evidence supporting better clinical outcomes of either technique. Value-based health-care models that favor delivering high-quality care and improving patient health, while also minimizing costs, may favor open carpal tunnel release.
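
    For readers who wish to reproduce this style of analysis, the sketch below runs the two bivariate tests named in the abstract (a t test for fee differences and a chi-square test for categorical proportions) in SciPy on synthetic numbers; the values are illustrative and are not the study's claims data.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # Synthetic facility fees (USD) for endoscopic vs. open release.
      endoscopic_fees = rng.normal(1884, 300, size=200)
      open_fees = rng.normal(1080, 250, size=1200)
      t_stat, p_val = stats.ttest_ind(endoscopic_fees, open_fees, equal_var=False)
      print(f"t = {t_stat:.2f}, p = {p_val:.2e}")

      # Chi-square test on a 2x2 table, e.g. procedure type vs. use of occupational therapy.
      table = np.array([[120,  80],    # endoscopic: OT yes / no (synthetic counts)
                        [600, 600]])   # open:       OT yes / no
      chi2, p, dof, _ = stats.chi2_contingency(table)
      print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")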

  14. Use of mobile technology-based participatory mapping approaches to geolocate health facility attendees for disease surveillance in low resource settings.

    PubMed

    Fornace, Kimberly M; Surendra, Henry; Abidin, Tommy Rowel; Reyes, Ralph; Macalinao, Maria L M; Stresman, Gillian; Luchavez, Jennifer; Ahmad, Riris A; Supargiyono, Supargiyono; Espino, Fe; Drakeley, Chris J; Cook, Jackie

    2018-06-18

    Identifying fine-scale spatial patterns of disease is essential for effective disease control and elimination programmes. In low resource areas without formal addresses, novel strategies are needed to locate residences of individuals attending health facilities in order to efficiently map disease patterns. We aimed to assess the use of Android tablet-based applications containing high resolution maps to geolocate individual residences, whilst comparing the functionality, usability and cost of three software packages designed to collect spatial information. Using Open Data Kit GeoODK, we designed and piloted an electronic questionnaire for rolling cross-sectional surveys of health facility attendees as part of a malaria elimination campaign in two predominantly rural sites in Rizal, Palawan, the Philippines, and in Kulon Progo Regency, Yogyakarta, Indonesia. The majority of health workers were able to use the tablets effectively, including locating participant households on electronic maps. For all households sampled (n = 603), health facility workers were able to retrospectively find the participant household using the Global Positioning System (GPS) coordinates and data collected by tablet computers. Median distance between actual house locations and points collected on the tablet was 116 m (IQR 42-368) in Rizal and 493 m (IQR 258-886) in Kulon Progo Regency. Accuracy varied between health facilities and decreased in less populated areas with fewer prominent landmarks. Results demonstrate the utility of this approach to develop real-time high-resolution maps of disease in resource-poor environments. This method provides an attractive approach for quickly obtaining spatial information on individuals presenting at health facilities in resource poor areas where formal addresses are unavailable and internet connectivity is limited. Further research is needed on how to integrate these with other health data management systems and how to implement them in a wider operational context.
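
    The positional-error summary reported above reduces to a great-circle distance between each tablet-collected point and the true household location, followed by a median; a minimal sketch with synthetic coordinates is given below.

      import math
      from statistics import median

      def haversine_m(lat1, lon1, lat2, lon2):
          """Great-circle distance in metres between two WGS84 points."""
          r = 6_371_000.0
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dphi = math.radians(lat2 - lat1)
          dlmb = math.radians(lon2 - lon1)
          a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
          return 2 * r * math.asin(math.sqrt(a))

      # (tablet point, actual household GPS point) pairs -- illustrative values only.
      pairs = [
          ((9.7460, 118.7350), (9.7470, 118.7360)),
          ((9.7500, 118.7400), (9.7520, 118.7380)),
          ((-7.8010, 110.1600), (-7.7980, 110.1650)),
      ]
      errors = [haversine_m(t[0], t[1], a[0], a[1]) for t, a in pairs]
      print("median positional error (m):", round(median(errors), 1))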

  15. The PIRATE Facility: at the crossroads of research and teaching

    NASA Astrophysics Data System (ADS)

    Kolb, U.

    2014-12-01

    I describe the Open University-owned 0.43m robotic observatory PIRATE, based in Mallorca. PIRATE is a cost-effective facility contributing to topical astrophysical research and an inspiring platform for distance education students to learn practical science.

  16. Capacity planning for electronic waste management facilities under uncertainty: multi-objective multi-time-step model development.

    PubMed

    Ahluwalia, Poonam Khanijo; Nema, Arvind K

    2011-07-01

    Selection of optimum locations for new facilities and decisions regarding capacities at the proposed facilities are major concerns for municipal authorities/managers. The decision as to whether a single facility is preferred over multiple facilities of smaller capacities would vary with the priorities assigned to cost and to associated risks, such as environmental risk, health risk, or risk perceived by society. Currently, waste streams such as computer waste are managed using rudimentary practices in a flourishing unorganized sector, mainly backyard workshops, in many cities of developing nations such as India. Uncertainty in the quantification of computer waste generation is another major concern due to the informal setup of the present computer waste management scenario. Hence, there is a need to address uncertainty in waste generation quantities while simultaneously analyzing the trade-offs between cost and associated risks. The present study aimed to address the above-mentioned issues in a multi-time-step, multi-objective decision-support model, which can address the multiple objectives of cost, environmental risk, socially perceived risk and health risk while selecting the optimum configuration of existing and proposed facilities (locations and capacities).
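
    One simple way to expose the cost-versus-risk trade-off described above is a weighted-sum scoring of candidate facility configurations; the sketch below illustrates that idea only, and the configurations, normalized objective values, and weights are invented, not outputs of the authors' model.

      # name: (cost, environmental risk, social risk, health risk), all normalized to [0, 1].
      candidates = {
          "one_large_facility":    (0.40, 0.80, 0.90, 0.70),
          "two_medium_facilities": (0.55, 0.55, 0.60, 0.50),
          "four_small_facilities": (0.75, 0.30, 0.35, 0.30),
      }
      weights = (0.40, 0.20, 0.20, 0.20)   # relative priority of cost vs. the three risks

      def weighted_score(values, weights):
          return sum(w * v for w, v in zip(weights, values))

      for name, values in candidates.items():
          print(f"{name:24s} weighted score = {weighted_score(values, weights):.3f}")

      best = min(candidates, key=lambda name: weighted_score(candidates[name], weights))
      print("preferred configuration under these weights:", best)

    Shifting weight from cost toward the risk objectives changes which configuration wins, which is precisely the trade-off such a model is designed to make explicit.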

  17. Guidance on the Stand Down, Mothball, and Reactivation of Ground Test Facilities

    NASA Technical Reports Server (NTRS)

    Volkman, Gregrey T.; Dunn, Steven C.

    2013-01-01

    The development of aerospace and aeronautics products typically requires three distinct types of testing resources across research, development, test, and evaluation: experimental ground testing, computational "testing" and development, and flight testing. Over the last twenty-plus years, computational methods have replaced some physical experiments, and this trend is continuing. The result is decreased utilization of ground test capabilities, which, along with market forces, industry consolidation, and other factors, has led to the stand down and often the closure of many ground test facilities. Ground test capabilities are (and very likely will continue to be for many years) required to verify computational results and to provide information for regimes where computational methods remain immature. Ground test capabilities are very costly to build and to maintain, so once constructed and operational it may be desirable to retain access to those capabilities even if they are not currently needed. One means of doing this while reducing ongoing sustainment costs is to stand down the facility into a "mothball" status - keeping it alive to bring it back when needed. Both NASA and the US Department of Defense have policies to accomplish the mothballing of a facility, but with little detail. This paper offers a generic process to follow that can be tailored based on the needs of the owner and the applicable facility.

  18. Community pharmacists as educators in Danish residential facilities: a qualitative study.

    PubMed

    Mygind, Anna; El-Souri, Mira; Pultz, Kirsten; Rossing, Charlotte; Thomsen, Linda A

    2017-08-01

    To explore experiences with engaging community pharmacists in educational programmes on quality and safety in medication handling in residential facilities for the disabled. A secondary analysis of data from two Danish intervention studies in which community pharmacists were engaged in educational programmes. Data included 10 semi-structured interviews with staff, five semi-structured interviews and three open-ended questionnaires with residential facility managers, and five open-ended questionnaires to community pharmacists. Data were thematically coded to identify key points pertaining to the themes 'pharmacists as educators' and 'perceived effects of engaging pharmacists in competence development'. As educators, pharmacists were successful as medicines experts. Some pharmacists experienced pedagogical challenges. Previous teaching experience and knowledge of the local residential facility obtained before teaching often provided sufficient pedagogical skills and allowed teaching to be tailored to local needs. In most instances, the effects of engaging community pharmacists included improved cooperation between residential facilities and community pharmacies through a trustful relationship and improved dialogue about the residents' medication. Other effects included a perception of improved patient safety, teaching skills and branding of the pharmacy. Community pharmacists provide a resource to engage in educational programmes on medication handling in residential facilities, which may facilitate improved cooperation between community pharmacies and residential facilities. However, development of pedagogical competences and understanding of local settings are prerequisites for facilities and pharmacists to experience the programmes as successful. © 2016 Royal Pharmaceutical Society.

  19. OPENING REMARKS: Scientific Discovery through Advanced Computing

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2006-01-01

    Good morning. Welcome to SciDAC 2006 and Denver. I share greetings from the new Undersecretary for Energy, Ray Orbach. Five years ago SciDAC was launched as an experiment in computational science. The goal was to form partnerships among science applications, computer scientists, and applied mathematicians to take advantage of the potential of emerging terascale computers. This experiment has been a resounding success. SciDAC has emerged as a powerful concept for addressing some of the biggest challenges facing our world. As significant as these successes were, I believe there is also significance in the teams that achieved them. In addition to their scientific aims these teams have advanced the overall field of computational science and set the stage for even larger accomplishments as we look ahead to SciDAC-2. I am sure that many of you are expecting to hear about the results of our current solicitation for SciDAC-2. I’m afraid we are not quite ready to make that announcement. Decisions are still being made and we will announce the results later this summer. Nearly 250 unique proposals were received and evaluated, involving literally thousands of researchers, postdocs, and students. These collectively requested more than five times our expected budget. This response is a testament to the success of SciDAC in the community. In SciDAC-2 our budget has been increased to about 70 million for FY 2007 and our partnerships have expanded to include the Environment and National Security missions of the Department. The National Science Foundation has also joined as a partner. These new partnerships are expected to expand the application space of SciDAC, and broaden the impact and visibility of the program. We have, with our recent solicitation, expanded to turbulence, computational biology, and groundwater reactive modeling and simulation. We are currently talking with the Department’s applied energy programs about risk assessment, optimization of complex systems - such as the national and regional electricity grid, carbon sequestration, virtual engineering, and the nuclear fuel cycle. The successes of the first five years of SciDAC have demonstrated the power of using advanced computing to enable scientific discovery. One measure of this success could be found in the President’s State of the Union address in which President Bush identified ‘supercomputing’ as a major focus area of the American Competitiveness Initiative. Funds were provided in the FY 2007 President’s Budget request to increase the size of the NERSC-5 procurement to between 100-150 teraflops, to upgrade the LCF Cray XT3 at Oak Ridge to 250 teraflops and acquire a 100 teraflop IBM BlueGene/P to establish the Leadership computing facility at Argonne. We believe that we are on a path to establish a petascale computing resource for open science by 2009. We must develop software tools, packages, and libraries as well as the scientific application software that will scale to hundreds of thousands of processors. Computer scientists from universities and the DOE’s national laboratories will be asked to collaborate on the development of the critical system software components such as compilers, light-weight operating systems and file systems. Standing up these large machines will not be business as usual for ASCR. 
We intend to develop a series of interconnected projects that identify cost, schedule, risks, and scope for the upgrades at the LCF at Oak Ridge, the establishment of the LCF at Argonne, and the development of the software to support these high-end computers. The critical first step in defining the scope of the project is to identify a set of early application codes for each leadership class computing facility. These codes will have access to the resources during the commissioning phase of the facility projects and will be part of the acceptance tests for the machines. Applications will be selected, in part, by breakthrough science, scalability, and ability to exercise key hardware and software components. Possible early applications might include climate models; studies of the magnetic properties of nanoparticles as they relate to ultra-high density storage media; the rational design of chemical catalysts, the modeling of combustion processes that will lead to cleaner burning coal, and fusion and astrophysics research. I have presented just a few of the challenges that we look forward to on the road to petascale computing. Our road to petascale science might be paraphrased by the quote from e e cummings, ‘somewhere I have never traveled, gladly beyond any experience . . .’

  20. Patterns and determinants of communal latrine usage in urban poverty pockets in Bhopal, India.

    PubMed

    Biran, A; Jenkins, M W; Dabrase, P; Bhagwat, I

    2011-07-01

    To explore and explain patterns of use of communal latrine facilities in urban poverty pockets. Six poverty pockets with communal latrine facilities representing two management models (Sulabh and municipal) were selected. Sampling was random and stratified by poverty pocket population size. A seventh, community-managed facility was also included. Data were collected by exit interviews with facility users and by interviews with residents from a randomly selected representative sample of poverty pocket households, on social, economic and demographic characteristics of households, latrine ownership, defecation practices, costs of using the facility and distance from the house to the facility. A tally of facility users was kept for 1 day at each facility. Data were analysed using logistic regression modelling to identify determinants of communal latrine usage. Communal latrines differed in their facilities, conditions, management and operating characteristics, and rates of usage. Reported usage rates among non-latrine-owning households ranged from 15% to 100%. There was significant variation in wealth, occupation and household structure across the poverty pockets as well as in household latrine ownership. Households in pockets with municipal communal latrine facilities appeared poorer. Households in pockets with Sulabh-managed communal facilities were significantly more likely to own a household latrine. Determinants of communal facility usage among households without a latrine were access and convenience (distance and opening hours), facility age, cleanliness/upkeep and cost. The ratio of male to female users was 2:1 across all facilities for both adults and children. Provision of communal facilities reduces but does not end the problem of open defecation in poverty pockets. Women appear to be relatively poorly served by communal facilities, and cost is a barrier to use by poorer households. Results suggest that improving facility convenience and access and modifying fee structures could lead to increased rates of usage. Attention to possible barriers to usage at household level associated particularly with having school-age children and with pre-school childcare needs may also be warranted. © 2011 Blackwell Publishing Ltd.
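
    A compact illustration of the kind of logistic regression used above: regress reported latrine use on candidate determinants and inspect odds ratios. The tiny data set below is synthetic and stands in for the household survey; the variable names are assumptions.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Columns: distance to facility (m), fee per use, facility age (years) -- synthetic.
      X = np.array([
          [ 30, 1, 2], [ 50, 1, 2], [200, 2, 8], [350, 2, 8],
          [ 80, 1, 3], [400, 3, 10], [ 60, 1, 2], [300, 3, 9],
      ], dtype=float)
      # Outcome: 1 = household reports using the communal latrine, 0 = open defecation.
      y = np.array([1, 1, 0, 0, 1, 0, 1, 0])

      model = LogisticRegression(max_iter=1000).fit(X, y)
      odds_ratios = np.exp(model.coef_[0])
      for name, orat in zip(["distance", "fee", "facility_age"], odds_ratios):
          print(f"{name:12s} odds ratio per unit increase: {orat:.3f}")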

  1. Computer program modifications of Open-file report 82-1065; a comprehensive system for interpreting seismic-refraction and arrival-time data using interactive computer methods

    USGS Publications Warehouse

    Ackermann, Hans D.; Pankratz, Leroy W.; Dansereau, Danny A.

    1983-01-01

    The computer programs published in Open-File Report 82-1065, A comprehensive system for interpreting seismic-refraction arrival-time data using interactive computer methods (Ackermann, Pankratz, and Dansereau, 1982), have been modified to run on a mini-computer. The new version uses approximately 1/10 of the memory of the initial version, is more efficient and gives the same results.

  2. Green Supercomputing at Argonne

    ScienceCinema

    Beckman, Pete

    2018-02-07

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF) talks about Argonne National Laboratory's green supercomputing—everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently. Argonne was recognized for green computing in the 2009 HPCwire Readers Choice Awards. More at http://www.anl.gov/Media_Center/News/2009/news091117.html Read more about the Argonne Leadership Computing Facility at http://www.alcf.anl.gov/

  3. "Airborne Research Australia (ARA)" a new research aircraft facility on the southern hemisphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hacker, J.M.

    1996-11-01

    "Airborne Research Australia" (ARA) is a new research aircraft facility in Australia. It will serve the scientific community of Australia and will also make its aircraft and expertise available for commercial users. To cover the widest possible range of applications, the facility will operate up to five research aircraft, from a small, low-cost platform to medium-sized multi-purpose aircraft, as well as a unique high-altitude aircraft capable of carrying scientific loads to altitudes of up to 15 km. The aircraft will be equipped with basic instrumentation and data systems, as well as facilities to mount user-supplied instrumentation and systems internally and externally on the aircraft. The ARA operations base, consisting of a hangar, workshops, offices, laboratories, etc., is currently being constructed at Parafield Airport near Adelaide, South Australia. The following text reports on the current state of development of the facility. An update will be given in a presentation at the Conference. 6 figs.

  4. Open-source meteor detection software for low-cost single-board computers

    NASA Astrophysics Data System (ADS)

    Vida, D.; Zubović, D.; Šegon, D.; Gural, P.; Cupec, R.

    2016-01-01

    This work aims to overcome the current price threshold of meteor stations, which can sometimes deter meteor enthusiasts from owning one. In recent years, small card-sized computers have become widely available and are used for numerous applications. To utilize such computers for meteor work, software which can run on them is needed. In this paper we present a detailed description of newly developed open-source software for fireball and meteor detection optimized for running on low-cost single-board computers. Furthermore, an update is given on the development of automated open-source software which will handle video capture, fireball and meteor detection, astrometry and photometry.
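
    As a rough illustration of the detection idea (not the software's actual algorithm), the sketch below flags a bright streak by differencing consecutive synthetic frames; the frame sizes, noise levels, and threshold are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(42)
      h, w = 120, 160
      prev_frame = rng.integers(0, 20, size=(h, w)).astype(np.int16)   # dark, noisy sky
      curr_frame = prev_frame + rng.integers(-3, 4, size=(h, w))

      # Inject a synthetic meteor streak into the current frame.
      for i in range(40):
          curr_frame[30 + i, 50 + i] = 250

      diff = np.clip(curr_frame - prev_frame, 0, None)
      detection_mask = diff > 100                  # illustrative brightness threshold
      print("streak pixels detected:", int(detection_mask.sum()))
      print("meteor candidate present:", bool(detection_mask.sum() > 20))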

  5. TIRES, OPEN BURNING

    EPA Science Inventory

    The chapter describes available information on the health effects from open burning of rubber tires. It concentrates on the three known sources of detailed measurements: (1) a small-scale emissions characterization study performed by the U.S. EPA in a facility designed to simulat...

  6. Atmospheric ammonia mixing ratios at an open-air cattle feeding facility.

    PubMed

    Hiranuma, Naruki; Brooks, Sarah D; Thornton, Daniel C O; Auvermann, Brent W

    2010-02-01

    Mixing ratios of total and gaseous ammonia were measured at an open-air cattle feeding facility in the Texas Panhandle in the summers of 2007 and 2008. Samples were collected at the nominally upwind and downwind edges of the facility. In 2008, a series of far-field samples was also collected 3.5 km north of the facility. Ammonium concentrations were determined by two complementary laboratory methods, a novel application of visible spectrophotometry and standard ion chromatography (IC). Results of the two techniques agreed very well, and spectrophotometry is faster, easier, and cheaper than chromatography. Ammonia mixing ratios measured at the immediate downwind site were drastically higher (approximately 2900 parts per billion by volume [ppbv]) than those measured at the upwind site (< or = 200 ppbv). In contrast, at 3.5 km away from the facility, ammonia mixing ratios were reduced to levels similar to the upwind site (< or = 200 ppbv). In addition, PM10 (particulate matter < 10 µm in optical diameter) concentrations obtained at each sampling location using Grimm portable aerosol spectrometers are reported. Time-averaged (1-hr) volume concentrations of PM10 approached 5 x 10^12 nm^3 cm^-3. Emitted ammonia remained largely in the gas phase at the downwind and far-field locations. No clear correlation between concentrations of ammonia and particles was observed. Overall, this study provides a better understanding of ammonia emissions from open-air animal feeding operations, especially under the hot and dry conditions present during these measurements.

  7. CSNS computing environment Based on OpenStack

    NASA Astrophysics Data System (ADS)

    Li, Yakang; Qi, Fazhi; Chen, Gang; Wang, Yanming; Hong, Jianshu

    2017-10-01

    Cloud computing allows for more flexible configuration of IT resources and optimized hardware utilization; it can also provide computing services according to actual need. We are applying this computing mode to the China Spallation Neutron Source (CSNS) computing environment. Firstly, the CSNS experiment and its computing scenarios and requirements are introduced in this paper. Secondly, the design and implementation of the OpenStack-based cloud computing platform are demonstrated in terms of the system framework, network, storage, and so on. Thirdly, some improvements we made to OpenStack are discussed further. Finally, the current status of the CSNS cloud computing environment is summarized at the end of this paper.
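
    A minimal openstacksdk sketch of the kind of operation such a platform automates, namely launching a worker VM; the cloud name, image, flavor, and network below are hypothetical placeholders for locally defined resources, not details from the paper.

      import openstack

      # Credentials are read from a local clouds.yaml entry (the cloud name is an assumption).
      conn = openstack.connect(cloud="csns-cloud")

      image = conn.compute.find_image("CentOS-7-worker")
      flavor = conn.compute.find_flavor("m1.large")
      network = conn.network.find_network("physics-net")

      server = conn.compute.create_server(
          name="csns-batch-worker-01",
          image_id=image.id,
          flavor_id=flavor.id,
          networks=[{"uuid": network.id}],
      )
      server = conn.compute.wait_for_server(server)   # block until the VM is ACTIVE
      print(server.name, server.status)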

  8. THE EFFECTS OF COMPUTER-BASED FIRE SAFETY TRAINING ON THE KNOWLEDGE, ATTITUDES, AND PRACTICES OF CAREGIVERS

    PubMed Central

    Harrington, Susan S.; Walker, Bonnie L.

    2010-01-01

    Background Older adults in small residential board and care facilities are at a particularly high risk of fire death and injury because of their characteristics and environment. Methods The authors investigated computer-based instruction as a way to teach fire emergency planning to owners, operators, and staff of small residential board and care facilities. Participants (N = 59) were randomly assigned to a treatment or control group. Results Study participants who completed the training significantly improved their scores from pre- to posttest when compared to a control group. Participants indicated on the course evaluation that the computers were easy to use for training (97%) and that they would like to use computers for future training courses (97%). Conclusions This study demonstrates the potential for using interactive computer-based training as a viable alternative to instructor-led training to meet the fire safety training needs of owners, operators, and staff of small board and care facilities for the elderly. PMID:19263929

  9. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean; Potok, Thomas E.; Jones, Todd

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long term (10 to +20 year) cybersecurity fundamental basic research and development challenges, strategies and roadmap facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the three topics and a representative of each of the four major DOE Office of Science Advanced Scientific Computing Research Facilities: the Argonne Leadership Computing Facility (ALCF), the Energy Sciences Network (ESnet), the National Energy Research Scientific Computing Center (NERSC), and the Oak Ridge Leadership Computing Facility (OLCF). The rest of the workshop consisted of topical breakout discussions and focused writing periods that produced much of this report.

  10. 75 FR 13258 - Announcing a Meeting of the Information Security and Privacy Advisory Board

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-19

    .../index.html/ . Agenda: --Cloud Computing Implementations --Health IT --OpenID --Pending Cyber Security... will be available for the public and media. --OpenID --Cloud Computing Implementations --Security...

  11. FOSS GIS on the GFZ HPC cluster: Towards a service-oriented Scientific Geocomputation Environment

    NASA Astrophysics Data System (ADS)

    Loewe, P.; Klump, J.; Thaler, J.

    2012-12-01

    High-performance compute clusters can be used as geocomputation workbenches. Their wealth of resources enables us to take on geocomputation tasks which exceed the limitations of smaller systems. These general capabilities can be harnessed via tools such as Geographic Information Systems (GIS), provided they are able to utilize the available cluster configuration/architecture and provide a sufficient degree of user friendliness to allow for wide application. While server-level computing is clearly not sufficient for the growing number of data- or computation-intensive tasks undertaken, these tasks do not come close to the requirements needed for access to "top shelf" national cluster facilities. So, until recently, this kind of geocomputation research was effectively barred by a lack of access to adequate resources. In this paper we report on the experience gained by providing GRASS GIS as a software service on an HPC compute cluster at the German Research Centre for Geosciences using Platform Computing's Load Sharing Facility (LSF). GRASS GIS is the oldest and largest Free and Open Source (FOSS) GIS project. During ramp-up in 2011, multiple versions of GRASS GIS (v 6.4.2, 6.5 and 7.0) were installed on the HPC compute cluster, which currently consists of 234 nodes with 480 CPUs providing 3084 cores. Nineteen different processing queues with varying hardware capabilities and priorities are provided, allowing for fine-grained scheduling and load balancing. After successful initial testing, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues. The mechanisms are based on earlier work by NETELER et al. (2008) and allow all 3084 cores to be used for GRASS-based geocomputation work. However, in practice applications are limited to the fewer resources assigned to their respective queue. Applications of the new GIS functionality so far comprise hydrological analysis, remote sensing, and the generation of maps of simulated tsunamis in the Mediterranean Sea for the Tsunami Atlas of the FP-7 TRIDEC Project (www.tridec-online.eu). This has included the processing of complex problems requiring significant amounts of processing time, up to a full 20 CPU days. This GRASS GIS-based service is provided as a research utility in the sense of "Software as a Service" (SaaS) and is a first step towards a GFZ corporate cloud service.
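
    A minimal sketch of a scripted GRASS GIS task of the kind deployed onto the processing queues described above; it assumes it is run inside an existing GRASS session (for example via grass --exec python script.py) and that an elevation raster named "dem" exists in the current mapset, both of which are assumptions for illustration.

      import grass.script as gs

      # Derive slope and aspect from the DEM; on the cluster, many such jobs can be
      # submitted to an LSF queue (e.g. with bsub) over different regions or rasters.
      gs.run_command("g.region", raster="dem")
      gs.run_command(
          "r.slope.aspect",
          elevation="dem",
          slope="dem_slope",
          aspect="dem_aspect",
          overwrite=True,
      )

      # r.univar -g prints key=value statistics that parse_command turns into a dict.
      stats = gs.parse_command("r.univar", map="dem_slope", flags="g")
      print("mean slope (degrees):", stats["mean"])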

  12. Aeroacoustic Simulations of a Nose Landing Gear with FUN3D: A Grid Refinement Study

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Lockard, David P.

    2017-01-01

    A systematic grid refinement study is presented for numerical simulations of a partially-dressed, cavity-closed (PDCC) nose landing gear configuration that was tested in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D is used to compute the unsteady flow field for this configuration. Mixed-element grids generated using the Pointwise (Registered Trademark) grid generation software are used for numerical simulations. Particular care is taken to ensure quality cells and proper resolution in critical areas of interest in an effort to minimize errors introduced by numerical artifacts. A set of grids was generated in this manner to create a family of uniformly refined grids. The finest grid was then modified to coarsen the wall-normal spacing to create a grid suitable for the wall-function implementation in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence modeling approach is used for these simulations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. These CFD solutions are used as input to a Ffowcs Williams-Hawkings (FW-H) noise propagation code to compute the far-field noise levels. The agreement of the computed results with the experimental data improves as the grid is refined.

  13. Leaderboard Now Open: CPTAC’s DREAM Proteogenomics Computational Challenge | Office of Cancer Clinical Proteomics Research

    Cancer.gov

    The National Cancer Institute's Clinical Proteomic Tumor Analysis Consortium (CPTAC) is pleased to announce the opening of the leaderboard for its Proteogenomics Computational DREAM Challenge. The leaderboard remains open for submissions from September 25, 2017, through October 8, 2017, with the Challenge expected to run until November 17, 2017.

  14. 12 CFR Optional Annual Percentage... - End Plans Subject to the Requirements of § 226.5b

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ....5b Annual Optional Annual Percentage Rate Computations for Creditors Offering Open Banks and Banking... LENDING (REGULATION Z) Special Rules Applicable to Credit Card Accounts and Open-End Credit Offered to... Computations for Creditors Offering Open-End Plans Subject to the Requirements of § 226.5b In determining the...

  15. EVALUATION OF STYRENE EMISSIONS FROM A SHOWER STALL/BATHTUB MANUFACTURING FACILITY

    EPA Science Inventory

    The report gives results of emissions measurements carried out at a representative facility (Eljer Plumbingware in Wilson, NC) that manufactures polyester-resin-reinforced shower stalls and bathtubs by spraying styrene-based resins onto molds in vented, open, spray booths. Styren...

  16. The ICCB Computer Based Facilities Inventory & Utilization Management Information Subsystem.

    ERIC Educational Resources Information Center

    Lach, Ivan J.

    The Illinois Community College Board (ICCB) Facilities Inventory and Utilization subsystem, a part of the ICCB management information system, was designed to provide decision makers with needed information to better manage the facility resources of Illinois community colleges. This subsystem, dependent upon facilities inventory data and course…

  17. PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.

    PubMed

    Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A

    2016-06-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphics processing units (GPUs). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA, with the aim of significantly shortening the code's execution time. Selected routines were parallelised using the OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. Computer validation in toxicology: historical review for FDA and EPA good laboratory practice.

    PubMed

    Brodish, D L

    1998-01-01

    The application of computer validation principles to Good Laboratory Practice is a fairly recent phenomenon. As automated data collection systems have become more common in toxicology facilities, the U.S. Food and Drug Administration and the U.S. Environmental Protection Agency have begun to focus inspections in this area. This historical review documents the development of regulatory guidance on computer validation in toxicology over the past several decades. An overview of the components of a computer life cycle is presented, including the development of systems descriptions, validation plans, validation testing, system maintenance, SOPs, change control, security considerations, and system retirement. Examples are provided for implementation of computer validation principles on laboratory computer systems in a toxicology facility.

  19. A large high vacuum, high pumping speed space simulation chamber for electric propulsion

    NASA Technical Reports Server (NTRS)

    Grisnik, Stanley P.; Parkes, James E.

    1994-01-01

    Testing high power electric propulsion devices poses unique requirements on space simulation facilities. Very high pumping speeds are required to maintain high vacuum levels while handling large volumes of exhaust products. These pumping speeds are significantly higher than those available in most existing vacuum facilities. There is also a requirement for relatively large vacuum chamber dimensions to minimize facility wall/thruster plume interactions and to accommodate far field plume diagnostic measurements. A 4.57 m (15 ft) diameter by 19.2 m (63 ft) long vacuum chamber at NASA Lewis Research Center is described. The chamber utilizes oil diffusion pumps in combination with cryopanels to achieve high vacuum pumping speeds at high vacuum levels. The facility is computer controlled for all phases of operation from start-up, through testing, to shutdown. The computer control system increases the utilization of the facility and reduces the manpower needed for facility operations.

  20. Remote sensing and field test capabilities at U.S. Army Dugway Proving Ground

    NASA Astrophysics Data System (ADS)

    Pearson, James T.; Herron, Joshua P.; Marshall, Martin S.

    2011-11-01

    U.S. Army Dugway Proving Ground (DPG) is a Major Range and Test Facility Base (MRTFB) with the mission of testing chemical and biological defense systems and materials. DPG facilities include state-of-the-art laboratories, extensive test grids, controlled environment calibration facilities, and a variety of referee instruments for required test measurements. Among these referee instruments, DPG has built up a significant remote sensing capability for both chemical and biological detection. Technologies employed for remote sensing include FTIR spectroscopy, UV spectroscopy, Raman-shifted eye-safe lidar, and other elastic backscatter lidar systems. These systems provide referee data for bio-simulants, chemical simulants, toxic industrial chemicals (TICs), and toxic industrial materials (TIMs). In order to realize a successful large scale open-air test, each type of system requires calibration and characterization. DPG has developed specific calibration facilities to meet this need. These facilities are the Joint Ambient Breeze Tunnel (JABT), and the Active Standoff Chamber (ASC). The JABT and ASC are open ended controlled environment tunnels. Each includes validation instrumentation to characterize simulants that are disseminated. Standoff systems are positioned at typical field test distances to measure characterized simulants within the tunnel. Data from different types of systems can be easily correlated using this method, making later open air test results more meaningful. DPG has a variety of large scale test grids available for field tests. After and during testing, data from the various referee instruments is provided in a visual format to more easily draw conclusions on the results. This presentation provides an overview of DPG's standoff testing facilities and capabilities, as well as example data from different test scenarios.

  1. Remote sensing and field test capabilities at U.S. Army Dugway Proving Ground

    NASA Astrophysics Data System (ADS)

    Pearson, James T.; Herron, Joshua P.; Marshall, Martin S.

    2012-05-01

    U.S. Army Dugway Proving Ground (DPG) is a Major Range and Test Facility Base (MRTFB) with the mission of testing chemical and biological defense systems and materials. DPG facilities include state-of-the-art laboratories, extensive test grids, controlled environment calibration facilities, and a variety of referee instruments for required test measurements. Among these referee instruments, DPG has built up a significant remote sensing capability for both chemical and biological detection. Technologies employed for remote sensing include FTIR spectroscopy, UV spectroscopy, Raman-shifted eye-safe lidar, and other elastic backscatter lidar systems. These systems provide referee data for bio-simulants, chemical simulants, toxic industrial chemicals (TICs), and toxic industrial materials (TIMs). In order to realize a successful large scale open-air test, each type of system requires calibration and characterization. DPG has developed specific calibration facilities to meet this need. These facilities are the Joint Ambient Breeze Tunnel (JABT), and the Active Standoff Chamber (ASC). The JABT and ASC are open ended controlled environment tunnels. Each includes validation instrumentation to characterize simulants that are disseminated. Standoff systems are positioned at typical field test distances to measure characterized simulants within the tunnel. Data from different types of systems can be easily correlated using this method, making later open air test results more meaningful. DPG has a variety of large scale test grids available for field tests. After and during testing, data from the various referee instruments is provided in a visual format to more easily draw conclusions on the results. This presentation provides an overview of DPG's standoff testing facilities and capabilities, as well as example data from different test scenarios.

  2. Getting started with open-hardware: development and control of microfluidic devices.

    PubMed

    da Costa, Eric Tavares; Mora, Maria F; Willis, Peter A; do Lago, Claudimir L; Jiao, Hong; Garcia, Carlos D

    2014-08-01

    Understanding basic concepts of electronics and computer programming allows researchers to get the most out of the equipment found in their laboratories. Although a number of platforms have been specifically designed for the general public and are supported by a vast array of on-line tutorials, this subject is not normally included in university chemistry curricula. Aiming to provide the basic concepts of hardware and software, this article is focused on the design and use of a simple module to control a series of PDMS-based valves. The module is based on a low-cost microprocessor (Teensy) and open-source software (Arduino). The microvalves were fabricated using thin sheets of PDMS and patterned using CO2 laser engraving, providing a simple and efficient way to fabricate devices without the traditional photolithographic process or facilities. Synchronization of valve control enabled the development of two simple devices to perform injection (1.6 ± 0.4 μL/stroke) and mixing of different solutions. Furthermore, a practical demonstration of the utility of this system for microscale chemical sample handling and analysis was achieved performing an on-chip acid-base titration, followed by conductivity detection with an open-source low-cost detection system. Overall, the system provided a very reproducible (98%) platform to perform fluid delivery at the microfluidic scale. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
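
    To make the valve-control idea concrete, here is a minimal Arduino-style sketch of the kind of synchronised actuation a Teensy could perform. The pin numbers, dwell times, and the simple three-valve sequence are hypothetical placeholders rather than the authors' firmware, and a real peristaltic actuation pattern would likely differ.

        // Arduino-style sketch (hypothetical pin map and timings) showing the kind of
        // synchronised valve sequencing used for on-chip fluid delivery: three PDMS
        // valves actuated in order, one cycle per fixed-volume stroke.
        const int VALVE_PIN[3] = {2, 3, 4};   // digital outputs driving valve actuators
        const unsigned long STEP_MS = 100;    // dwell time per actuation step (illustrative)

        void setValve(int idx, bool closed) {
          digitalWrite(VALVE_PIN[idx], closed ? HIGH : LOW);
        }

        void setup() {
          for (int i = 0; i < 3; ++i) {
            pinMode(VALVE_PIN[i], OUTPUT);
            setValve(i, true);                // start with all valves closed
          }
        }

        void loop() {
          // One actuation cycle: open and re-close the three valves in sequence.
          for (int i = 0; i < 3; ++i) {
            setValve(i, false);               // open valve i
            delay(STEP_MS);
            setValve(i, true);                // close it again before the next step
            delay(STEP_MS);
          }
        }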

  3. 47 CFR 76.1500 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1500 Definitions. (a) Open video system. A facility... that is designed to provide cable service which includes video programming and which is provided to...

  4. 47 CFR 76.1500 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1500 Definitions. (a) Open video system. A facility... that is designed to provide cable service which includes video programming and which is provided to...

  5. Computer-Assisted School Facility Planning with ONPASS.

    ERIC Educational Resources Information Center

    Urban Decision Systems, Inc., Los Angeles, CA.

    The analytical capabilities of ONPASS, an on-line computer-aided school facility planning system, are described by its developers. This report describes how, using the Canoga Park-Winnetka-Woodland Hills Planning Area as a test case, the Department of City Planning of the city of Los Angeles employed ONPASS to demonstrate how an on-line system can…

  6. Technology in the Service of Creativity: Computer Assisted Writing Project--Stetson Middle School, Philadelphia, Pennsylvania. Final Report.

    ERIC Educational Resources Information Center

    Bender, Evelyn

    The American Library Association's Carroll Preston Baber Research Award supported this project on the use, impact and feasibility of a computer assisted writing facility located in the library of Stetson Middle School in Philadelphia, an inner city school with a population of minority, "at risk" students. The writing facility consisted…

  7. Sigma 2 Graphic Display Software Program Description

    NASA Technical Reports Server (NTRS)

    Johnson, B. T.

    1973-01-01

    A general purpose, user oriented graphic support package was implemented. A comprehensive description of the two software components comprising this package is given: Display Librarian and Display Controller. These programs have been implemented in FORTRAN on the XDS Sigma 2 Computer Facility. This facility consists of an XDS Sigma 2 general purpose computer coupled to a Computek Display Terminal.

  8. OpenID Connect as a security service in cloud-based medical imaging systems

    PubMed Central

    Ma, Weina; Sartipi, Kamran; Sharghigoorabi, Hassan; Koff, David; Bak, Peter

    2016-01-01

    Abstract. The evolution of cloud computing is driving the next generation of medical imaging systems. However, privacy and security concerns have consistently been regarded as the major obstacles to the adoption of cloud computing in healthcare domains. OpenID Connect, which combines OpenID and OAuth, is an emerging representational state transfer-based federated identity solution. It is one of the most widely adopted open standards and a candidate de facto standard for securing cloud computing and mobile applications, sometimes described as the "Kerberos of the cloud." We introduce OpenID Connect as an authentication and authorization service in cloud-based diagnostic imaging (DI) systems and propose enhancements that allow this technology to be incorporated within distributed enterprise environments. The objective of this study is to offer solutions for secure sharing of medical images among diagnostic imaging repositories (DI-r) and heterogeneous picture archiving and communication systems (PACS), as well as Web-based and mobile clients, in the cloud ecosystem. The main objective is to use OpenID Connect's open-source single sign-on and authorization services in a user-centric manner, so that deploying DI-r and PACS to private or community clouds provides security levels equivalent to the traditional computing model. PMID:27340682
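
    The heart of OpenID Connect is the OAuth 2.0 authorization-code exchange: the client redeems a short-lived code at the provider's token endpoint for an access token and an ID token. The sketch below, using libcurl, is a minimal illustration of that exchange only; the endpoint URL, client identifier, and secret are placeholders, and a production system would also validate the returned ID token's signature and claims.

        // Sketch of the OAuth 2.0 authorization-code token exchange that underlies
        // OpenID Connect (RFC 6749, section 4.1.3). All URLs and credentials below
        // are placeholders, not the paper's deployment.
        #include <curl/curl.h>
        #include <string>
        #include <iostream>

        static size_t collect(char* data, size_t size, size_t nmemb, void* out) {
            static_cast<std::string*>(out)->append(data, size * nmemb);
            return size * nmemb;
        }

        int main() {
            curl_global_init(CURL_GLOBAL_DEFAULT);
            CURL* curl = curl_easy_init();
            if (!curl) return 1;

            std::string response;
            const std::string body =
                "grant_type=authorization_code"
                "&code=AUTH_CODE_FROM_REDIRECT"
                "&redirect_uri=https%3A%2F%2Fclient.example.org%2Fcb"
                "&client_id=pacs-client&client_secret=CLIENT_SECRET";

            curl_easy_setopt(curl, CURLOPT_URL, "https://idp.example.org/token");
            curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
            curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

            CURLcode rc = curl_easy_perform(curl);  // JSON reply carries access_token and id_token
            if (rc == CURLE_OK) std::cout << response << "\n";

            curl_easy_cleanup(curl);
            curl_global_cleanup();
            return rc == CURLE_OK ? 0 : 1;
        }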

  9. Progressive fracture of fiber composites

    NASA Technical Reports Server (NTRS)

    Irvin, T. B.; Ginty, C. A.

    1983-01-01

    Refined models and procedures are described for determining progressive composite fracture in graphite/epoxy angleplied laminates. Lewis Research Center capabilities are utilized including the Real Time Ultrasonic C Scan (RUSCAN) experimental facility and the Composite Durability Structural Analysis (CODSTRAN) computer code. The CODSTRAN computer code is used to predict the fracture progression based on composite mechanics, finite element stress analysis, and fracture criteria modules. The RUSCAN facility, CODSTRAN computer code, and scanning electron microscope are used to determine durability and identify failure mechanisms in graphite/epoxy composites.

  10. In-plant management of hazardous waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, M.W.; Howell, W.L. Jr.

    1995-12-31

    One of the earliest sustainable technologies for the management of hazardous industrial wastes, and one of the most successful, is "In-Plant Control." Waste elimination, reuse, and/or minimization can encourage improved utilization of resources, decreased environmental degradation, and increased profits at individual industrial production sites, or within an industry. For new facilities and industries, putting such programs in place is relatively easy. Experience has shown, however, that this may be more difficult to initiate in existing facilities, especially in older and heavier industries. This task can be made easier by promoting a mutually respectful partnership between production and environmental interests within the facility or industry. This permits "common sense" thinking and a cooperative, proactive strategy for securing an appropriate balance between economic growth, environmental protection, and social responsibility. Case studies are presented wherein a phased, incremental in-plant system for waste management was developed and employed to good effect, using a model that entailed "Consciousness, Commitment, Training, Recognition, Re-engineering and Continuous Improvement" to promote waste minimization or elimination.

  11. Interim Status Closure Plan Open Burning Treatment Unit Technical Area 16-399 Burn Tray

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vigil-Holterman, Luciana R.

    2012-05-07

    This closure plan describes the activities necessary to close one of the interim status hazardous waste open burning treatment units at Technical Area (TA) 16 at the Los Alamos National Laboratory (LANL or the Facility), hereinafter referred to as the 'TA-16-399 Burn Tray' or 'the unit'. The information provided in this closure plan addresses the closure requirements specified in the Code of Federal Regulations (CFR), Title 40, Part 265, Subparts G and P for the thermal treatment units operated at the Facility under the Resource Conservation and Recovery Act (RCRA) and the New Mexico Hazardous Waste Act. Closure of the open burning treatment unit will be completed in accordance with Section 4.1 of this closure plan.

  12. 77 FR 40891 - Towing Safety Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-11

    ... ``Recommendations for Safety Standards of Portable Facility Vapor Control Systems.'' (4) Period for public comment... teleconference to review and discuss a new Task Statement titled ``Recommendations for Safety Standards of Portable Facility Vapor Control Systems'' and to discuss the progress of open Task Statements. This meeting...

  13. Some propulsion system noise data handling conventions and computer programs used at the Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Montegani, F. J.

    1974-01-01

    Methods of handling one-third-octave band noise data originating from the outdoor full-scale fan noise facility and the engine acoustic facility at the Lewis Research Center are presented. Procedures for standardizing, retrieving, extrapolating, and reporting these data are explained. Computer programs are given which are used to accomplish these and other noise data analysis tasks. This information is useful as background for interpretation of data from these facilities appearing in NASA reports and can aid data exchange by promoting standardization.

  14. Fairbanks International Airport, Transportation & Public Facilities, State

    Science.gov Websites

    ..., Alaska 99709. Phone: (907) 474-2500; Fax: (907) 474-2513. Hours of operation: FAI is open 24 hours a day; passenger screening checkpoints are open 4:00 a.m. to 2:00 a.m. daily, including holidays. Related notices: FAI to Host Open House at La Quinta Inn & Suites; FAI Helps Combat Opioid...

  15. Fast laboratory-based micro-computed tomography for pore-scale research: Illustrative experiments and perspectives on the future

    NASA Astrophysics Data System (ADS)

    Bultreys, Tom; Boone, Marijn A.; Boone, Matthieu N.; De Schryver, Thomas; Masschaele, Bert; Van Hoorebeke, Luc; Cnudde, Veerle

    2016-09-01

    Over the past decade, the widespread implementation of laboratory-based X-ray micro-computed tomography (micro-CT) scanners has revolutionized both the experimental and numerical research on pore-scale transport in geological materials. The availability of these scanners has opened up the possibility to image a rock's pore space in 3D almost routinely to many researchers. While challenges do persist in this field, we treat the next frontier in laboratory-based micro-CT scanning: in-situ, time-resolved imaging of dynamic processes. Extremely fast (even sub-second) micro-CT imaging has become possible at synchrotron facilities over the last few years; however, the restricted accessibility of synchrotrons limits the number of experiments that can be performed. The much smaller X-ray flux in laboratory-based systems bounds the time resolution which can be attained at these facilities. Nevertheless, progress is being made to improve the quality of measurements performed on the sub-minute time scale. We illustrate this by presenting cutting-edge pore scale experiments visualizing two-phase flow and solute transport in real-time with a lab-based environmental micro-CT set-up. To outline the current state of this young field and its relevance to pore-scale transport research, we critically examine its current bottlenecks and their possible solutions, both on the hardware and the software level. Further developments in laboratory-based, time-resolved imaging could prove greatly beneficial to our understanding of transport behavior in geological materials and to the improvement of pore-scale modeling by providing valuable validation.

  16. Opportunities for Energy Efficiency and Automated Demand Response in Industrial Refrigerated Warehouses in California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lekov, Alex; Thompson, Lisa; McKane, Aimee

    2009-05-11

    This report summarizes the Lawrence Berkeley National Laboratory's research to date in characterizing energy efficiency and open automated demand response opportunities for industrial refrigerated warehouses in California. The report describes refrigerated warehouse characteristics, energy use and demand, and control systems. It also discusses energy efficiency and open automated demand response opportunities and provides analysis results from three demand response studies. In addition, several energy efficiency, load management, and demand response case studies are provided for refrigerated warehouses. This study shows that refrigerated warehouses can be excellent candidates for open automated demand response and that facilities which have implemented energy efficiency measures and have centralized control systems are well-suited to shift or shed electrical loads in response to financial incentives, utility bill savings, and/or opportunities to enhance reliability of service. Control technologies installed for energy efficiency and load management purposes can often be adapted for open automated demand response (OpenADR) at little additional cost. These improved controls may prepare facilities to be more receptive to OpenADR due to both increased confidence in the opportunities for controlling energy cost/use and access to the real-time data.

  17. What Attracts People to Visit Community Open Spaces? A Case Study of the Overseas Chinese Town Community in Shenzhen, China

    PubMed Central

    Chen, Yiyong; Liu, Tao; Xie, Xiaohuan; Marušić, Barbara Goličnik

    2016-01-01

    A well-designed open space that encourages outdoor activity and social communication is a community asset that could potentially contribute to the health of local residents and social harmony of the community. Numerous factors may influence the use of each single space and may result in a variety of visitors. Compared with previous studies that focused on accessibility, this study highlights the relationship between the utilization and characteristics of community open spaces in China. The Overseas Chinese Town community in Shenzhen is regarded as an example. The association between the number of visitors and space characteristics is examined with multivariate regression models. Results show that large areas with accessible lawns, well-maintained footpaths, seats, commercial facilities, and water landscapes are important characteristics that could increase the use of community open spaces. However, adding green vegetation, sculptures, and landscape accessories in open spaces has limited effects on increasing the outdoor activities of residents. Thus, to increase the use of community open spaces, landscape designers should focus more on creating user-oriented spaces with facilities that encourage active use than on improving ornamental vegetation and accessories. PMID:27367713

  18. Open Source Molecular Modeling

    PubMed Central

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-01-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. PMID:27631126

  19. Opening Up "Open Systems": Moving toward True Interoperability among Library Software. DataResearch Automation Guide Series, Number One.

    ERIC Educational Resources Information Center

    Data Research Associates, Inc., St. Louis, MO.

    The topic of open systems as it relates to the needs of libraries to establish interoperability between dissimilar computer systems can be clarified by an understanding of the background and evolution of the issue. The International Standards Organization developed a model to link dissimilar computers, and this model has evolved into consensus…

  20. OpenFOAM: Open source CFD in research and industry

    NASA Astrophysics Data System (ADS)

    Jasak, Hrvoje

    2009-12-01

    The current focus of development in industrial Computational Fluid Dynamics (CFD) is the integration of CFD into Computer-Aided product development, geometrical optimisation, robust design, and similar tasks. CFD research, on the other hand, aims to extend the boundaries of practical engineering use into "non-traditional" areas. The requirements of computational flexibility and code integration are contradictory: a change of coding paradigm, with object orientation, library components, and equation mimicking, is proposed as a way forward. This paper describes OpenFOAM, a C++ object-oriented library for Computational Continuum Mechanics (CCM) developed by the author. Efficient and flexible implementation of complex physical models is achieved by mimicking the form of the partial differential equation in software, with code functionality provided in library form. The Open Source deployment and development model allows the user to achieve the desired versatility in physical modelling without sacrificing complex geometry support and execution efficiency.
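
    The "equation mimicking" idea can be illustrated without OpenFOAM itself: overload a few operators on a field type so that a discretised equation reads close to its mathematical form. The toy 1-D explicit heat-equation sketch below is only a sketch of the concept and makes no use of the actual OpenFOAM API.

        // Toy illustration (not OpenFOAM code) of "equation mimicking": small
        // operator-overloaded types let a 1-D explicit heat-equation update read
        // close to its mathematical form, T_new = T + dt * (D * laplacian(T)).
        #include <vector>
        #include <cstdio>

        struct Field {
            std::vector<double> v;
            explicit Field(size_t n, double x = 0.0) : v(n, x) {}
        };

        Field operator*(double a, const Field& f) {
            Field r(f.v.size());
            for (size_t i = 0; i < f.v.size(); ++i) r.v[i] = a * f.v[i];
            return r;
        }
        Field operator+(const Field& a, const Field& b) {
            Field r(a.v.size());
            for (size_t i = 0; i < a.v.size(); ++i) r.v[i] = a.v[i] + b.v[i];
            return r;
        }

        // Second-difference approximation of the Laplacian on a unit-spaced grid.
        Field laplacian(const Field& f) {
            Field r(f.v.size());
            for (size_t i = 1; i + 1 < f.v.size(); ++i)
                r.v[i] = f.v[i - 1] - 2.0 * f.v[i] + f.v[i + 1];
            return r;
        }

        int main() {
            Field T(50);
            T.v[25] = 1.0;                        // initial hot spot
            const double D = 1.0, dt = 0.1;
            for (int step = 0; step < 100; ++step)
                T = T + dt * (D * laplacian(T));  // reads like the governing equation
            std::printf("T[25] after 100 steps: %f\n", T.v[25]);
            return 0;
        }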

  1. The Impact and Promise of Open-Source Computational Material for Physics Teaching

    NASA Astrophysics Data System (ADS)

    Christian, Wolfgang

    2017-01-01

    A computer-based modeling approach to teaching must be flexible because students and teachers have different skills and varying levels of preparation. Learning how to run the "software du jour" is not the objective for integrating computational physics material into the curriculum. Learning computational thinking, how to use computation and computer-based visualization to communicate ideas, how to design and build models, and how to use ready-to-run models to foster critical thinking is the objective. Our computational modeling approach to teaching is a research-proven pedagogy that predates computers. It attempts to enhance student achievement through the Modeling Cycle. This approach was pioneered by Robert Karplus and the SCIS Project in the 1960s and 70s and later extended by the Modeling Instruction Program led by Jane Jackson and David Hestenes at Arizona State University. This talk describes a no-cost open-source computational approach aligned with a Modeling Cycle pedagogy. Our tools, curricular material, and ready-to-run examples are freely available from the Open Source Physics Collection hosted on the AAPT-ComPADRE digital library. Examples will be presented.

  2. PREFACE: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013)

    NASA Astrophysics Data System (ADS)

    Wang, Jianxiong

    2014-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013), which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields, to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. 18 invited speakers presented key topics on the universe in computer, computing in Earth sciences, multivariate data analysis, automated computation in Quantum Field Theory, as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round-table discussions on open source, knowledge sharing, and scientific collaboration prompted reflection on these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), the National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), the Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS), and Sugon. We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all of the workshop's activities. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang, Institute of High Energy Physics, Chinese Academy of Sciences. Details of committees and sponsors are available in the PDF.

  3. Swirl, Expansion Ratio and Blockage Effects on Confined Turbulent Flow. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Scharrer, G. L.

    1982-01-01

    A confined jet test facility, a swirler, flow visualization equipment, and five-hole pitot probe instrumentation are described. Flow visualization and the effects of swirl, gradual expansion, and blockage on confined, open-ended flows are addressed.

  4. 7 CFR 1948.53 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    .... (j) Fair market value. The price at which a property will sell in the open market allowing a... or instrumentality thereof. (q) Public facilities. Installations open to the public and used for the... areas, sewer plants, water plants, community centers, libraries, city or town halls, jailhouses...

  5. 7 CFR 1948.53 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    .... (j) Fair market value. The price at which a property will sell in the open market allowing a... or instrumentality thereof. (q) Public facilities. Installations open to the public and used for the... areas, sewer plants, water plants, community centers, libraries, city or town halls, jailhouses...

  6. Refurbishment and Automation of Thermal Vacuum Facilities at NASA/GSFC

    NASA Technical Reports Server (NTRS)

    Dunn, Jamie; Gomez, Carlos; Donohue, John; Johnson, Chris; Palmer, John; Sushon, Janet

    1999-01-01

    The thermal vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the eleven facilities, currently ten of the systems are scheduled for refurbishment or replacement as part of a five-year implementation. Expected return on investment includes the reduction in test schedules, improvements in safety of facility operations, and reduction in the personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering, and for the automation of thermal vacuum facilities and tests. Automation of the thermal vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs), the use of Supervisory Control and Data Acquisition (SCADA) systems, and the development of a centralized Test Data Management System. These components allow the computer control and automation of mechanical components such as valves and pumps. The project of refurbishment and automation began in 1996 and has resulted in complete computer control of one facility (Facility 281), and the integration of electronically controlled devices and PLCs in multiple others.
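
    The PLC role mentioned above is essentially the cyclic evaluation of interlocked rules against sensor inputs. The sketch below is a schematic, hypothetical example of one such rule (open the high-vacuum valve only after rough pump-down, and close it if pressure rises); the setpoints and I/O structure are invented and are not GSFC's actual control logic.

        // Schematic sketch (hypothetical setpoints and I/O, not GSFC's PLC logic) of
        // the kind of interlocked rule a PLC evaluates on each scan cycle for a
        // thermal vacuum chamber: permit the high-vacuum valve to open only once the
        // chamber is rough-pumped, and close it again above a safety limit.
        #include <cstdio>

        struct ChamberIO {
            double pressure_torr;    // analog input from a vacuum gauge
            bool   roughing_pump_on; // digital input
            bool   hi_vac_valve_open;// digital output
        };

        void scanCycle(ChamberIO& io) {
            const double crossover = 1.0e-1;  // switch from roughing to high vacuum
            const double trip      = 5.0e-1;  // safety limit: close valve above this

            if (!io.hi_vac_valve_open && io.roughing_pump_on && io.pressure_torr < crossover)
                io.hi_vac_valve_open = true;
            else if (io.hi_vac_valve_open && io.pressure_torr > trip)
                io.hi_vac_valve_open = false; // interlock trips, valve closes
        }

        int main() {
            ChamberIO io{760.0, true, false};
            // Simulated pump-down: pressure falls each scan until crossover is reached.
            for (int scan = 0; scan < 40; ++scan) {
                io.pressure_torr *= 0.7;
                scanCycle(io);
            }
            std::printf("pressure=%.3e torr, hi-vac valve %s\n",
                        io.pressure_torr, io.hi_vac_valve_open ? "open" : "closed");
            return 0;
        }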

  7. Refurbishment and Automation of Thermal Vacuum Facilities at NASA/GSFC

    NASA Technical Reports Server (NTRS)

    Dunn, Jamie; Gomez, Carlos; Donohue, John; Johnson, Chris; Palmer, John; Sushon, Janet

    1998-01-01

    The thermal vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the eleven facilities, currently ten of the systems are scheduled for refurbishment or replacement as part of a five-year implementation. Expected return on investment includes the reduction in test schedules, improvements in safety of facility operations, and reduction in the personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering, and for the automation of thermal vacuum facilities and tests. Automation of the thermal vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs), the use of Supervisory Control and Data Acquisition (SCADA) systems, and the development of a centralized Test Data Management System. These components allow the computer control and automation of mechanical components such as valves and pumps. The project of refurbishment and automation began in 1996 and has resulted in complete computer control of one facility (Facility 281), and the integration of electronically controlled devices and PLCs in multiple others.

  8. Global Dynamic Exposure and the OpenBuildingMap

    NASA Astrophysics Data System (ADS)

    Schorlemmer, D.; Beutin, T.; Hirata, N.; Hao, K. X.; Wyss, M.; Cotton, F.; Prehn, K.

    2015-12-01

    Detailed understanding of local risk factors regarding natural catastrophes requires in-depth characterization of the local exposure. Current exposure capture techniques have to find the balance between resolution and coverage. We aim at bridging this gap by employing a crowd-sourced approach to exposure capturing focusing on risk related to earthquake hazard. OpenStreetMap (OSM), the rich and constantly growing geographical database, is an ideal foundation for us. More than 2.5 billion geographical nodes, more than 150 million building footprints (growing by ~100'000 per day), and a plethora of information about school, hospital, and other critical facility locations allow us to exploit this dataset for risk-related computations. We will harvest this dataset by collecting exposure and vulnerability indicators from explicitly provided data (e.g. hospital locations), implicitly provided data (e.g. building shapes and positions), and semantically derived data, i.e. interpretation applying expert knowledge. With this approach, we can increase the resolution of existing exposure models from fragility classes distribution via block-by-block specifications to building-by-building vulnerability. To increase coverage, we will provide a framework for collecting building data by any person or community. We will implement a double crowd-sourced approach to bring together the interest and enthusiasm of communities with the knowledge of earthquake and engineering experts. The first crowd-sourced approach aims at collecting building properties in a community by local people and activists. This will be supported by tailored building capture tools for mobile devices for simple and fast building property capturing. The second crowd-sourced approach involves local experts in estimating building vulnerability that will provide building classification rules that translate building properties into vulnerability and exposure indicators as defined in the Building Taxonomy 2.0 developed by the Global Earthquake Model (GEM). These indicators will then be combined with a hazard model using the GEM OpenQuake engine to compute a risk model. The free/open framework we will provide can be used on commodity hardware for local to regional exposure capturing and for communities to understand their earthquake risk.
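
    A classification rule of the kind described, translating crowd-sourced building properties into an exposure label, can be sketched as a simple tag lookup. In the example below the tag keys follow OpenStreetMap conventions, but the rules and labels are invented for illustration and are not the GEM Building Taxonomy mapping.

        // Illustrative sketch of a crowd-sourced classification rule: map a few
        // OpenStreetMap building tags to a coarse exposure label. The tag keys are
        // real OSM conventions; the rules and labels are invented for illustration.
        #include <map>
        #include <string>
        #include <iostream>

        using Tags = std::map<std::string, std::string>;

        std::string classify(const Tags& t) {
            const std::string material =
                t.count("building:material") ? t.at("building:material") : "";
            int levels = 1;
            if (t.count("building:levels")) {
                try { levels = std::stoi(t.at("building:levels")); } catch (...) {}
            }

            if (material == "wood")                           return "low-rise timber";
            if (material == "concrete" && levels >= 4)        return "mid-rise reinforced concrete";
            if (material == "brick" || material == "masonry") return "unreinforced masonry (assumed)";
            return "unknown - flag for expert review";
        }

        int main() {
            Tags school{{"building", "school"},
                        {"building:material", "brick"},
                        {"building:levels", "2"}};
            std::cout << classify(school) << "\n";   // prints the masonry label
            return 0;
        }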

  9. [Improving experimental teaching facilities and opening up of laboratories in order to raise experimental teaching quality of genetics].

    PubMed

    Xiao, Jian-Fu; Wu, Jian-Guo; Shi, Chun-Hai

    2011-12-01

    Advanced teaching facilities and the policy of opening laboratories to students play an important role in raising the quality of the experimental teaching of Genetics. This article introduces the advantages of some advanced instruments and equipment (such as a digital microscope interactive laboratory system, flow cytometry, and NIRSystems) in the experimental teaching of genetics, and illustrates with examples the significance of exposing students to experiments in developing their creative consciousness and ability. The article also offers some new ideas for further improving laboratory teaching.

  10. Nonequilibrium Supersonic Freestream Studied Using Coherent Anti-Stokes Raman Spectroscopy

    NASA Technical Reports Server (NTRS)

    Cutler, Andrew D.; Cantu, Luca M.; Gallo, Emanuela C. A.; Baurle, Rob; Danehy, Paul M.; Rockwell, Robert; Goyne, Christopher; McDaniel, Jim

    2015-01-01

    Measurements were conducted at the University of Virginia Supersonic Combustion Facility of the flow in a constant-area duct downstream of a Mach 2 nozzle. The airflow was heated to approximately 1200 K in the facility heater upstream of the nozzle. Dual-pump coherent anti-Stokes Raman spectroscopy was used to measure the rotational and vibrational temperatures of N2 and O2 at two planes in the duct. The expectation was that the vibrational temperature would be in equilibrium, because most scramjet facilities are vitiated air facilities and are in vibrational equilibrium. However, with a flow of clean air, the vibrational temperature of N2 along a streamline remains approximately constant between the measurement plane and the facility heater, the vibrational temperature of O2 in the duct is about 1000 K, and the rotational temperature is consistent with the isentropic flow. The measurements of N2 vibrational temperature enabled cross-stream nonuniformities in the temperature exiting the facility heater to be documented. The measurements are in agreement with computational fluid dynamics models employing separate lumped vibrational and translational/rotational temperatures. Measurements and computations are also reported for a few percent steam addition to the air. The effect of the steam is to bring the flow to thermal equilibrium, also in agreement with the computational fluid dynamics.

  11. Innovative Capital Planning

    ERIC Educational Resources Information Center

    McIntyre, Chuck

    2003-01-01

    Community college strategic planning is becoming more learning-centered, grounded in the student experience, and open to change. As a result, facility planners are challenged to embody these notions in a college's strategic delivery plan: the systems and facilities needed to accomplish its mission and vision. This article proposes a new process…

  12. 40 CFR 256.23 - Requirements for closing or upgrading open dumps.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) SOLID WASTES GUIDELINES FOR DEVELOPMENT AND IMPLEMENTATION OF STATE SOLID WASTE MANAGEMENT PLANS Solid... classification of existing solid waste disposal facilities according to the criteria. This classification shall... solid waste disposal facility; (2) The availability of State regulatory and enforcement powers; and (3...

  13. 40 CFR 256.23 - Requirements for closing or upgrading open dumps.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) SOLID WASTES GUIDELINES FOR DEVELOPMENT AND IMPLEMENTATION OF STATE SOLID WASTE MANAGEMENT PLANS Solid... classification of existing solid waste disposal facilities according to the criteria. This classification shall... solid waste disposal facility; (2) The availability of State regulatory and enforcement powers; and (3...

  14. A performance goal-based seismic design philosophy for waste repository facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hossain, Q.A.

    1994-12-31

    A performance goal-based seismic design philosophy, compatible with DOE's present natural phenomena hazards mitigation and "graded approach" philosophy, has been proposed for high level nuclear waste repository facilities. The rationale, evolution, and the desirable features of this method have been described. Why and how the method should and can be applied to the design of a repository facility are also discussed.

  15. An engine awaits processing in the new engine shop at KSC

    NASA Technical Reports Server (NTRS)

    1998-01-01

    A new Block 2A engine awaits processing in the low bay of the Space Shuttle Main Engine Processing Facility (SSMEPF). Officially opened on July 6, the new facility replaces the Shuttle Main Engine Shop. The SSMEPF is an addition to the existing Orbiter Processing Facility Bay 3. The engine is scheduled to fly on the Space Shuttle Endeavour during the STS-88 mission in December 1998.

  16. (NTF) National Transonic Facility Test 213-SFW Flow Control II,

    NASA Image and Video Library

    2012-11-19

    (NTF) National Transonic Facility Test 213-SFW Flow Control II, Fast-MAC Model: The Fundamental Aerodynamics Subsonic Transonic-Modular Active Control (Fast-MAC) model was tested for the second time in the NTF. The objectives were to document the effects of Reynolds number on circulation control aerodynamics and to develop an open data set for CFD code validation. Image taken in building 1236, National Transonic Facility.

  17. JESS facility modification and environmental/power plans

    NASA Technical Reports Server (NTRS)

    Bordeaux, T. A.

    1984-01-01

    Preliminary plans for facility modifications and environmental/power systems for the JESS (Joint Exercise Support System) computer laboratory and Freedom Hall are presented. Blueprints are provided for each of the facilities and an estimate of the air conditioning requirements is given.

  18. Ergonomic and Anthropometric Considerations of the Use of Computers in Schools by Adolescents

    ERIC Educational Resources Information Center

    Jermolajew, Anna M.; Newhouse, C. Paul

    2003-01-01

    Over the past decade there has been an explosion in the provision of computing facilities in schools for student use. However, there is concern that the development of these facilities has often given little regard to the ergonomics of the design for use by children, particularly adolescents. This paper reports on a study that investigated the…

  19. 47 CFR 73.208 - Reference points and distance computations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES RADIO BROADCAST SERVICES FM Broadcast Stations § 73.208 Reference points and distance computations... filed no later than: (i) The last day of a filing window if the application is for a new FM facility or...(d) and 73.3573(e) if the application is for a new FM facility or a major change in the reserved band...

  20. 47 CFR 73.208 - Reference points and distance computations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES RADIO BROADCAST SERVICES FM Broadcast Stations § 73.208 Reference points and distance computations... filed no later than: (i) The last day of a filing window if the application is for a new FM facility or...(d) and 73.3573(e) if the application is for a new FM facility or a major change in the reserved band...
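
    The distance computations referenced in this section are based on the stations' reference coordinates. As a rough illustration only, the sketch below uses the standard haversine great-circle formula; the rule itself prescribes its own computation method, which is not reproduced here.

        // Great-circle (haversine) distance between two station reference points.
        // Note: 47 CFR 73.208 prescribes its own computation method; this standard
        // spherical approximation is shown only to illustrate the kind of
        // coordinate-based separation check involved.
        #include <cmath>
        #include <cstdio>

        double haversineKm(double lat1, double lon1, double lat2, double lon2) {
            const double R  = 6371.0;                 // mean Earth radius, km
            const double PI = 3.14159265358979323846;
            const double rad  = PI / 180.0;
            const double dlat = (lat2 - lat1) * rad;
            const double dlon = (lon2 - lon1) * rad;
            const double a = std::sin(dlat / 2) * std::sin(dlat / 2) +
                             std::cos(lat1 * rad) * std::cos(lat2 * rad) *
                             std::sin(dlon / 2) * std::sin(dlon / 2);
            return 2.0 * R * std::asin(std::sqrt(a));
        }

        int main() {
            // Example: separation between two hypothetical FM reference coordinates.
            double d = haversineKm(38.8895, -77.0353, 39.2904, -76.6122);
            std::printf("separation: %.1f km\n", d);
            return 0;
        }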

  1. 117. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    117. Back side technical facilities S.R. radar transmitter & computer building no. 102, "building sections - sheet I" - architectural, AS-BLT AW 35-46-04, sheet 12, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  2. 122. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    122. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "elevations & details" - structural, AS-BLT AW 35-46-04, sheet 73, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  3. 118. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    118. Back side technical facilities S.R. radar transmitter & computer building no. 102, "building sections - sheet I" - architectural, AS-BLT AW 35-46-04, sheet 13, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  4. 121. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    121. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "sections & elevations" - structural, AS-BLT AW 35-46-04, sheet 72, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  5. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Ann E; Bland, Arthur S Buddy; Hack, James J

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaborating with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials, to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) delineates the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and, where appropriate, changes in Center metrics were introduced. This report covers CY 2010 and CY 2011 Year to Date (YTD), which, unless otherwise specified, denotes January 1, 2011 through June 30, 2011. User Support remains an important element of the OLCF operations, with the philosophy 'whatever it takes' to enable successful research. The impact of this center-wide activity is reflected by the user survey results, which show users are 'very satisfied.' The OLCF continues to aggressively pursue outreach and training activities to promote awareness - and effective use - of U.S. leadership-class resources (Reference Section 2). The OLCF continues to meet and in many cases exceed DOE metrics for capability usage (35% target in CY 2010, delivered 39%; 40% target in CY 2011, 54% delivered January 1, 2011 through June 30, 2011). The Schedule Availability (SA) and Overall Availability (OA) for Jaguar were exceeded in CY 2010. Given the solution to the VRM problem, the SA and OA for Jaguar in CY 2011 are expected to exceed the target metrics of 95% and 90%, respectively (Reference Section 3). Numerous and wide-ranging research accomplishments, scientific support, and technological innovations are more fully described in Sections 4 and 6 and reflect OLCF leadership in enabling high-impact science solutions and vision in creating an exascale-ready center. Financial Management (Section 5) and Risk Management (Section 7) are carried out using best practices approved by DOE. The OLCF has a valid cyber security plan and Authority to Operate (Section 8). The proposed metrics for 2012 are reflected in Section 9.

  6. Making Cloud Computing Available For Researchers and Innovators (Invited)

    NASA Astrophysics Data System (ADS)

    Winsor, R.

    2010-12-01

    High Performance Computing (HPC) facilities exist in most academic institutions but are almost invariably over-subscribed. Access is allocated based on academic merit, the only practical method of assigning valuable finite compute resources. Cloud computing on the other hand, and particularly commercial clouds, draw flexibly on an almost limitless resource as long as the user has sufficient funds to pay the bill. How can the commercial cloud model be applied to scientific computing? Is there a case to be made for a publicly available research cloud and how would it be structured? This talk will explore these themes and describe how Cybera, a not-for-profit non-governmental organization in Alberta Canada, aims to leverage its high speed research and education network to provide cloud computing facilities for a much wider user base.

  7. Open source acceleration of wave optics simulations on energy efficient high-performance computing platforms

    NASA Astrophysics Data System (ADS)

    Beck, Jeffrey; Bos, Jeremy P.

    2017-05-01

    We compare several modifications to the open-source wave optics package, WavePy, intended to improve execution time. Specifically, we compare the relative performance of the Intel MKL, a CPU-based OpenCV distribution, and a GPU-based version. Performance is compared between distributions both on the same compute platform and between a fully featured computing workstation and the NVIDIA Jetson TX1 platform. Comparisons are drawn in terms of both execution time and power consumption. We have found that substituting in the Fast Fourier Transform operation from OpenCV provides a marked improvement on all platforms. In addition, we show that embedded platforms offer some possibility for extensive improvement in terms of efficiency compared to a fully featured workstation.
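
    Although WavePy is a Python package, the substitution being measured, routing the Fourier transform through OpenCV, can be illustrated with OpenCV's C++ interface. The sketch below simply times cv::dft on an invented 4096 x 4096 complex grid; it shows the measurement pattern, not the benchmark used in the paper.

        // Timing OpenCV's FFT (cv::dft), the kind of drop-in Fourier-transform
        // substitution described above. Grid size and contents are illustrative.
        #include <opencv2/core.hpp>
        #include <cstdint>
        #include <iostream>

        int main() {
            const int n = 4096;
            cv::Mat field(n, n, CV_32FC2);            // complex field: 2-channel float
            cv::randu(field, cv::Scalar::all(-1.0), cv::Scalar::all(1.0));

            cv::Mat spectrum;
            int64_t t0 = cv::getTickCount();
            cv::dft(field, spectrum);                 // forward transform of the complex field
            int64_t t1 = cv::getTickCount();

            double seconds = (t1 - t0) / cv::getTickFrequency();
            std::cout << n << "x" << n << " complex FFT took " << seconds << " s\n";
            return 0;
        }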

  8. A Scalable Infrastructure for Lidar Topography Data Distribution, Processing, and Discovery

    NASA Astrophysics Data System (ADS)

    Crosby, C. J.; Nandigam, V.; Krishnan, S.; Phan, M.; Cowart, C. A.; Arrowsmith, R.; Baru, C.

    2010-12-01

    High-resolution topography data acquired with lidar (light detection and ranging) technology have emerged as a fundamental tool in the Earth sciences, and are also being widely utilized for ecological, planning, engineering, and environmental applications. Collected from airborne, terrestrial, and space-based platforms, these data are revolutionary because they permit analysis of geologic and biologic processes at resolutions essential for their appropriate representation. Public domain lidar data collection by federal, state, and local agencies is a valuable resource to the scientific community; however, the data pose significant distribution challenges because of the volume and complexity of data that must be stored, managed, and processed. Lidar data acquisition may generate terabytes of data in the form of point clouds, digital elevation models (DEMs), and derivative products. This massive volume of data is often challenging to host for resource-limited agencies. Furthermore, these data can be technically challenging for users who lack appropriate software, computing resources, and expertise. The National Science Foundation-funded OpenTopography Facility (www.opentopography.org) has developed a cyberinfrastructure-based solution to enable online access to Earth science-oriented high-resolution lidar topography data, online processing tools, and derivative products. OpenTopography provides access to terabytes of point cloud data, standard DEMs, and Google Earth image data, all co-located with computational resources for on-demand data processing. The OpenTopography portal is built upon a cyberinfrastructure platform that utilizes a Services Oriented Architecture (SOA) to provide a modular system that is highly scalable and flexible enough to support the growing needs of the Earth science lidar community. OpenTopography strives to host and provide access to datasets as soon as they become available, and also to expose greater application level functionalities to our end-users (such as generation of custom DEMs via various gridding algorithms, and hydrological modeling algorithms). In the future, the SOA will enable direct authenticated access to back-end functionality through simple Web service Application Programming Interfaces (APIs), so that users may access our data and compute resources via clients other than Web browsers. In addition to an overview of the OpenTopography SOA, this presentation will discuss our recently developed lidar data ingestion and management system for point cloud data delivered in the binary LAS standard. This system complements our existing partitioned database approach for data delivered in ASCII format, and permits rapid ingestion of data. The system has significantly reduced data ingestion times and has implications for data distribution in emergency response situations. We will also address ongoing work to develop a community lidar metadata catalog based on the OGC Catalogue Service for Web (CSW) standard, which will help to centralize discovery of public domain lidar data.

  9. The development of the Canadian Mobile Servicing System Kinematic Simulation Facility

    NASA Technical Reports Server (NTRS)

    Beyer, G.; Diebold, B.; Brimley, W.; Kleinberg, H.

    1989-01-01

    Canada will develop a Mobile Servicing System (MSS) as its contribution to the U.S./International Space Station Freedom. Components of the MSS will include a remote manipulator (SSRMS), a Special Purpose Dexterous Manipulator (SPDM), and a mobile base (MRS). In order to support requirements analysis and the evaluation of operational concepts related to the use of the MSS, a graphics based kinematic simulation/human-computer interface facility has been created. The facility consists of the following elements: (1) A two-dimensional graphics editor allowing the rapid development of virtual control stations; (2) Kinematic simulations of the space station remote manipulators (SSRMS and SPDM), and mobile base; and (3) A three-dimensional graphics model of the space station, MSS, orbiter, and payloads. These software elements combined with state of the art computer graphics hardware provide the capability to prototype MSS workstations, evaluate MSS operational capabilities, and investigate the human-computer interface in an interactive simulation environment. The graphics technology involved in the development and use of this facility is described.
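
    At the core of such a kinematic (non-dynamic) simulation is repeated evaluation of forward kinematics from joint angles to end-effector pose. The planar two-link sketch below illustrates that calculation only; the link lengths and angles are invented, and the real SSRMS involves seven joints in three dimensions.

        // Minimal planar two-link forward kinematics, the core calculation behind a
        // kinematic (non-dynamic) manipulator simulation. Values are illustrative.
        #include <cmath>
        #include <cstdio>

        struct Pose { double x, y; };

        // End-effector position for joint angles q1, q2 (radians) and link lengths l1, l2.
        Pose forwardKinematics(double q1, double q2, double l1, double l2) {
            Pose p;
            p.x = l1 * std::cos(q1) + l2 * std::cos(q1 + q2);
            p.y = l1 * std::sin(q1) + l2 * std::sin(q1 + q2);
            return p;
        }

        int main() {
            const double l1 = 7.0, l2 = 7.0;          // metres, illustrative
            for (double q2 = 0.0; q2 <= 1.5; q2 += 0.5) {
                Pose p = forwardKinematics(0.3, q2, l1, l2);
                std::printf("q2=%.1f rad -> end effector at (%.2f, %.2f) m\n", q2, p.x, p.y);
            }
            return 0;
        }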

  10. Open source system OpenVPN in a function of Virtual Private Network

    NASA Astrophysics Data System (ADS)

    Skendzic, A.; Kovacic, B.

    2017-05-01

    The use of Virtual Private Networks (VPNs) can establish a high level of security in network communication. VPN technology enables secure networking over distributed or public network infrastructure, applying its own security and management rules within the network. It can be set up over different communication channels, such as the Internet or a separate ISP's communication infrastructure. A VPN creates a secure communication channel over a public network between two endpoints (computers). OpenVPN is an open-source software product released under the GNU General Public License (GPL) that can be used to establish VPN communication between two computers inside a business local network over public communication infrastructure. It uses dedicated security protocols with 256-bit encryption and is capable of traversing network address translators (NATs) and firewalls. It allows computers to authenticate each other using a pre-shared secret key, certificates, or a username and password. This work reviews VPN technology with a special emphasis on OpenVPN, and gives a comparison and the financial benefits of using open-source VPN software in a business environment.

  11. High-Performance Computing and Visualization | Energy Systems Integration

    Science.gov Websites

    High-performance computing (HPC) and visualization at NREL propel technology innovation. Capabilities: High-Performance Computing - NREL is home to Peregrine, the largest high-performance computing system...

  12. 12 CFR Appendix G to Part 1026 - Open-End Model Forms and Clauses

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 8 2013-01-01 2013-01-01 false Open-End Model Forms and Clauses G Appendix G...) Pt. 1026, App. G Appendix G to Part 1026—Open-End Model Forms and Clauses G-1Balance Computation Methods Model Clauses (Home-equity Plans) (§§ 1026.6 and 1026.7) G-1(A)Balance Computation Methods Model...

  13. 12 CFR Appendix G to Part 1026 - Open-End Model Forms and Clauses

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 8 2012-01-01 2012-01-01 false Open-End Model Forms and Clauses G Appendix G...) Pt. 1026, App. G Appendix G to Part 1026—Open-End Model Forms and Clauses G-1Balance Computation Methods Model Clauses (Home-equity Plans) (§§ 1026.6 and 1026.7) G-1(A)Balance Computation Methods Model...

  14. 12 CFR Appendix G to Part 1026 - Open-End Model Forms and Clauses

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 9 2014-01-01 2014-01-01 false Open-End Model Forms and Clauses G Appendix G...) Pt. 1026, App. G Appendix G to Part 1026—Open-End Model Forms and Clauses G-1Balance Computation Methods Model Clauses (Home-equity Plans) (§§ 1026.6 and 1026.7) G-1(A)Balance Computation Methods Model...

  15. Innovative Technology in Automotive Technology

    ERIC Educational Resources Information Center

    Gardner, John

    2007-01-01

    Automotive Technology combines hands-on training with a fully integrated, interactive, computerized, multistation facility. Our program is a competency-based, true open-entry/open-exit program that utilizes flexible, self-paced course outlines. It is designed around an industry partnership that promotes community and economic development,…

  16. Public computing options for individuals with cognitive impairments: survey outcomes.

    PubMed

    Fox, Lynn Elizabeth; Sohlberg, McKay Moore; Fickas, Stephen; Lemoncello, Rik; Prideaux, Jason

    2009-09-01

    To examine availability and accessibility of public computing for individuals with cognitive impairment (CI) who reside in the USA. A telephone survey was administered as a semi-structured interview to 145 informants representing seven types of public facilities across three geographically distinct regions using a snowball sampling technique. An Internet search of wireless (Wi-Fi) hotspots supplemented the survey. Survey results showed the availability of public computer terminals and Internet hotspots was greatest in the urban sample, followed by the mid-sized and rural cities. Across seven facility types surveyed, libraries had the highest percentage of access barriers, including complex queue procedures, login and password requirements, and limited technical support. University assistive technology centres and facilities with a restricted user policy, such as brain injury centres, had the lowest incidence of access barriers. Findings suggest optimal outcomes for people with CI will result from a careful match of technology and the user that takes into account potential barriers and opportunities to computing in an individual's preferred public environments. Trends in public computing, including the emergence of widespread Wi-Fi and limited access to terminals that permit auto-launch applications, should guide development of technology designed for use in public computing environments.

  17. Linked Data: Forming Partnerships at the Data Layer

    NASA Astrophysics Data System (ADS)

    Shepherd, A.; Chandler, C. L.; Arko, R. A.; Jones, M. B.; Hitzler, P.; Janowicz, K.; Krisnadhi, A.; Schildhauer, M.; Fils, D.; Narock, T.; Groman, R. C.; O'Brien, M.; Patton, E. W.; Kinkade, D.; Rauch, S.

    2015-12-01

    The challenges presented by big data are straining the data management software architectures of the past. For smaller existing data facilities, the technical refactoring of software layers becomes costly to scale across the big data landscape. In response to these challenges, data facilities will need partnerships with external entities for improved solutions to perform tasks such as data cataloging, discovery and reuse, and data integration and processing with provenance. On the surface, the concept of linked open data suggests an uncalculated altruism. Yet, in his concept of five-star open data, Tim Berners-Lee explains the strategic costs and benefits of deploying linked open data from the perspective of its consumer and producer - a data partnership. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) addresses some of the emerging needs of its research community by partnering with groups doing complementary work and linking their respective data layers using linked open data principles. Examples will show how these links, explicit manifestations of partnerships, reduce technical debt and provide swift flexibility for future considerations.
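
    As a minimal sketch of what such a cross-facility link can look like in practice, the snippet below publishes one dataset record that points at a partner facility's record, using the rdflib Python library (assumed available). Every URI is a hypothetical placeholder, not a real BCO-DMO or partner identifier.

        # Minimal linked-open-data sketch with rdflib; all URIs are hypothetical.
        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import DCAT, DCTERMS, RDF

        g = Graph()
        EX = Namespace("http://example.org/dataset/")          # placeholder namespace

        dataset = EX["cruise-chlorophyll-2015"]
        partner_dataset = URIRef("http://partner.example.org/id/dataset/42")

        g.add((dataset, RDF.type, DCAT.Dataset))
        g.add((dataset, DCTERMS.title, Literal("Chlorophyll-a concentrations, 2015 cruise")))
        # The explicit cross-facility link: this record points at the partner's record.
        g.add((dataset, DCTERMS.source, partner_dataset))

        print(g.serialize(format="turtle"))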

  18. Grid Computing and Collaboration Technology in Support of Fusion Energy Sciences

    NASA Astrophysics Data System (ADS)

    Schissel, D. P.

    2004-11-01

    The SciDAC Initiative is creating a computational grid designed to advance scientific understanding in fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling, and allowing more efficient use of experimental facilities. The philosophy is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as easy-to-use, network-available services. Access to services is stressed rather than portability. Services share the same basic security infrastructure so that stakeholders can control their own resources, which also helps ensure fair use of those resources. The collaborative control room is being developed using the open-source Access Grid software that enables secure group-to-group collaboration with capabilities beyond teleconferencing, including application sharing and control. The ability to effectively integrate off-site scientists into a dynamic control room will be critical to the success of future international projects like ITER. Grid computing, the secure integration of computer systems over high-speed networks to provide on-demand access to data analysis capabilities and related functions, is being deployed as an alternative to traditional resource sharing among institutions. The first grid computational service deployed was the transport code TRANSP and included tools for run preparation, submission, monitoring and management. This approach saves user sites from the laborious effort of maintaining a complex code while at the same time reducing the burden on developers by avoiding the support of a large number of heterogeneous installations. This tutorial will present the philosophy behind an advanced collaborative environment, give specific examples, and discuss its usage beyond FES.

  19. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreepathi, Sarat; Kumar, Jitendra; Mills, Richard T.

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne etc.), observational facilities (meteorological, eddy covariance etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and, in our case, for large-scale cluster analysis specifically. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they have scaling limitations and are mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
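
    For orientation, the clustering core of MSTC is k-means; a serial NumPy sketch of that step is given below. It is not the authors' hybrid MPI/CUDA/OpenACC implementation, and the toy data are made up.

        # Serial k-means sketch; illustrative only, not the parallel MSTC code.
        import numpy as np

        def kmeans(points, k, iters=50, seed=0):
            rng = np.random.default_rng(seed)
            # points: (n_samples, n_features) multivariate observations
            centroids = points[rng.choice(len(points), size=k, replace=False)]
            for _ in range(iters):
                # assign each observation to its nearest centroid
                dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
                labels = dists.argmin(axis=1)
                # recompute centroids, keeping the old one if a cluster empties
                for j in range(k):
                    members = points[labels == j]
                    if len(members):
                        centroids[j] = members.mean(axis=0)
            return labels, centroids

        # toy usage: 1000 "grid cells" described by 5 environmental variables each
        labels, centroids = kmeans(np.random.default_rng(1).random((1000, 5)), k=8)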

  20. OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid.

    PubMed

    Poehlman, William L; Rynge, Mats; Branton, Chris; Balamurugan, D; Feltus, Frank A

    2016-01-01

    High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments.
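
    The end product, a gene expression matrix, is simply a genes-by-samples table. A toy pandas illustration of that structure (not OSG-GEM's actual output format; names and values are made up) is:

        # Toy gene expression matrix: rows are genes, columns are samples.
        import pandas as pd

        gem = pd.DataFrame(
            {
                "sample_A": [12.4, 0.0, 3.1],
                "sample_B": [11.9, 0.2, 2.8],
            },
            index=["GeneX", "GeneY", "GeneZ"],
        )
        print(gem)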

  1. OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid

    PubMed Central

    Poehlman, William L.; Rynge, Mats; Branton, Chris; Balamurugan, D.; Feltus, Frank A.

    2016-01-01

    High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments. PMID:27499617

  2. Sustainable data policy for a data production facility: a work in (continual) progress

    NASA Astrophysics Data System (ADS)

    Ketcham, R. A.

    2017-12-01

    The University of Texas High-Resolution X-Ray Computed Tomography Facility (UTCT) has been producing volumetric data and data products of geological and other scientific specimens and engineering materials for over 20 years. Data volumes, both in terms of the size of individual data sets and overall facility production, have progressively grown and fluctuated near the upper boundary of what can be managed by contemporary workstations and lab-scale servers and network infrastructure, making data policy a preoccupation for our entire history. Although all projects have been archived since our first day of operation, policies on which data to keep (raw, reconstructed after corrections, processed) have varied, and been periodically revisited in consideration of the cost of curation and the likelihood of revisiting and reprocessing data when better techniques become available, such as improved artifact corrections or iterative tomographic reconstruction. Advances in instrumentation regularly make old data obsolete and more advantageous to reacquire, but the simple act of getting a sample to a scanning facility is a practical barrier that cannot be overlooked. In our experience, the main times that raw data have been revisited using improved processing to improve image quality were predictable, high-impact charismatic projects (e.g., Archaeopteryx, A. Afarensis "Lucy"). These cases actually provided the impetus for development of the new techniques (ring and beam hardening artifact reduction), which were subsequently incorporated into our data processing pipeline going forward but were rarely if ever retroactively applied to earlier data sets. The only other times raw data have been reprocessed were when reconstruction parameters were inappropriate, due to unnoticed sample features or human error, which are usually recognized fairly quickly. The optimal data retention policy thus remains an open question, although erring on the side of caution remains the default position.

  3. Use of medical technologies in rehabilitation medicine settings in Israel: results of the TECHNO-R 2005 survey.

    PubMed

    Ring, Haim; Keren, Ofer; Zwecker, Manuel; Dynia, Aida

    2007-10-01

    With the development of computer technology and the high-tech electronic industry over the past 30 years, the technological age is flourishing. New technologies are continually being introduced, and questions regarding the economic viability of these technologies need to be addressed. To identify the medical technologies currently in use in different rehabilitation medicine settings in Israel. The TECHNO-R 2005 survey was conducted in two phases. Beginning in 2004, the first survey used a questionnaire with open questions relating to the different technologies in clinical use, including questions on their purpose, who operates the device (technician, physiotherapist, occupational therapist, physician, etc.), and a description of the treated patients. This questionnaire was sent to 31 rehabilitation medicine facilities in Israel. Due to difficulties in comprehension of the term "technology," a second revised standardized questionnaire with closed-ended questions specifying diverse technologies was introduced in 2005. The responder had to mark from a list of 15 different medical technologies which were in use in his or her facility, as well as their purpose, who operates the device, and a description of the treated patients. Transcutaneous electrical nerve stimulation, the TILT bed, continuous passive movement, and therapeutic ultrasound were the most widely used technologies in rehabilitation medicine facilities. Monitoring of the sitting position in the wheelchair, at the bottom of the list, was found to be the least used technology (with 15.4% occurrence). Most of the technologies are used primarily for treatment purposes and to a lesser degree for diagnosis and research. Our study poses a fundamental semantic and conceptual question regarding what kind of technologies are or should be part of the standard equipment of any accredited rehabilitation medicine facility for assessment, treatment and/or research. For this purpose, additional data are needed.

  4. Integration of the White Sands Complex into a Wide Area Network

    NASA Technical Reports Server (NTRS)

    Boucher, Phillip Larry; Horan, Sheila B.

    1996-01-01

    The NASA White Sands Complex (WSC) satellite communications facility consists of two main ground stations, an auxiliary ground station, a technical support facility, and a power plant building located on White Sands Missile Range. When constructed, terrestrial communication access to these facilities was limited to copper telephone circuits. There was no local or wide area communications network capability. This project incorporated a baseband local area network (LAN) topology at WSC and connected it to NASA's wide area network using the Program Support Communications Network-Internet (PSCN-I). A campus-style LAN is configured in conformance with the International Standards Organization (ISO) Open Systems Interconnection (OSI) model. Ethernet provides the physical and data link layers. Transmission Control Protocol and Internet Protocol (TCP/IP) are used for the network and transport layers. The session, presentation, and application layers employ commercial software packages. Copper-based Ethernet collision domains are constructed in each of the primary facilities, and these are interconnected by routers over optical fiber links. The network and each of its collision domains are shown to meet IEEE technical configuration guidelines. The optical fiber links are analyzed for the optical power budget and bandwidth allocation and are found to provide sufficient margin for this application. Personal computers and workstations attached to the LAN communicate with and apply a wide variety of local and remote administrative software tools. The Internet connection provides wide area network (WAN) electronic access to other NASA centers and the World Wide Web (WWW). The WSC network reduces and simplifies the administrative workload while providing enhanced and advanced inter-communications capabilities among White Sands Complex departments and with other NASA centers.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal

    Open Computing Language (OpenCL) is a high-level language that enables software programmers to explore Field Programmable Gate Arrays (FPGAs) for application acceleration. The Intel FPGA software development kit (SDK) for OpenCL allows a user to specify applications at a high level and explore the performance of low-level hardware acceleration. In this report, we present the FPGA performance and power consumption results of the single-precision floating-point vector add OpenCL kernel using the Intel FPGA SDK for OpenCL on the Nallatech 385A FPGA board. The board features an Arria 10 FPGA. We evaluate the FPGA implementations using the compute unit duplication and kernel vectorization optimization techniques. On the Nallatech 385A FPGA board, the maximum compute kernel bandwidth we achieve is 25.8 GB/s, approximately 76% of the peak memory bandwidth. The power consumption of the FPGA device when running the kernels ranges from 29W to 42W.
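
    For readers unfamiliar with the kernel being benchmarked, a minimal host-side sketch of a single-precision vector add is shown below using PyOpenCL on whatever OpenCL device is available; it is not the Intel FPGA SDK flow (compute unit duplication, kernel vectorization) evaluated in the report.

        # Minimal single-precision vector add via PyOpenCL; runs on any OpenCL device.
        import numpy as np
        import pyopencl as cl

        n = 1 << 20
        a = np.random.rand(n).astype(np.float32)
        b = np.random.rand(n).astype(np.float32)

        ctx = cl.create_some_context()
        queue = cl.CommandQueue(ctx)
        mf = cl.mem_flags
        a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
        b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
        c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

        program = cl.Program(ctx, """
        __kernel void vadd(__global const float *a,
                           __global const float *b,
                           __global float *c) {
            int gid = get_global_id(0);
            c[gid] = a[gid] + b[gid];
        }
        """).build()

        program.vadd(queue, a.shape, None, a_buf, b_buf, c_buf)
        c = np.empty_like(a)
        cl.enqueue_copy(queue, c, c_buf)
        assert np.allclose(c, a + b)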

  6. Development and application of computational aerothermodynamics flowfield computer codes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj

    1994-01-01

    Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermo-chemical nonequilibrium flow (Aerothermodynamics) problems. A number of computational fluid dynamic (CFD) codes were developed and applied to simulate high-altitude rocket plumes, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp model (SERN) connected with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock induced combustion phenomena, high enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows of interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling. In addition, computational tools were developed and applied to predict the radiative heat flux and spectra that reach the model surface.

  7. BNL ATLAS Grid Computing

    ScienceCinema

    Michael Ernst

    2017-12-09

    As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide, Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

  8. Key Issues in Instructional Computer Graphics.

    ERIC Educational Resources Information Center

    Wozny, Michael J.

    1981-01-01

    Addresses key issues facing universities which plan to establish instructional computer graphics facilities, including computer-aided design/computer aided manufacturing systems, role in curriculum, hardware, software, writing instructional software, faculty involvement, operations, and research. Thirty-seven references and two appendices are…

  9. EPA'S METAL FINISHING FACILITY POLLUTION PREVENTION TOOL - 2002

    EPA Science Inventory

    To help metal finishing facilities meet the goal of profitable pollution prevention, the USEPA is developing the Metal Finishing Facility Pollution Prevention Tool (MFFP2T), a computer program that estimates the rate of solid, liquid waste generation and air emissions. This progr...

  10. Telecommunications and Data Communication in Korea.

    ERIC Educational Resources Information Center

    Ahn, Moon-Suk

    All facilities of the Ministry of Communications of Korea, which monopolizes telecommunications services in the country, are listed and described. Both domestic facilities, including long-distance telephone and telegraph circuits, and international connections are included. Computer facilities are also listed. The nation's regulatory policies are…

  11. Parallelization of interpolation, solar radiation and water flow simulation modules in GRASS GIS using OpenMP

    NASA Astrophysics Data System (ADS)

    Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav

    2017-10-01

    In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers. These include the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and parallelization approaches used in the modules. Our approach includes the analysis of the module's functionality, identification of source code segments suitable for parallelization and proper application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using the airborne laser scanning data representing land surface in the test area and derived high-resolution digital terrain model grids. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speeds on a standard multi-core computer while maintaining the accuracy of results in comparison to the output from original modules. The presented parallelization approach showed the simplicity and efficiency of the parallelization of open-source GRASS GIS modules using OpenMP, leading to an increased performance of this geospatial software on standard multi-core computers.
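
    The modules themselves use OpenMP threads in C; as a language-neutral illustration of the same idea (splitting a raster into independent row blocks handled by parallel workers), a Python multiprocessing sketch might look like the following, with the per-cell work standing in for interpolation or radiation terms.

        # Illustration of the parallelization idea only; the GRASS GIS modules use
        # OpenMP threads in C, not Python multiprocessing.
        from multiprocessing import Pool

        import numpy as np

        def process_block(block):
            # stand-in for per-cell work such as interpolation or radiation terms
            return np.sqrt(block) + 1.0

        if __name__ == "__main__":
            raster = np.random.rand(4000, 4000)
            blocks = np.array_split(raster, 8, axis=0)   # one row block per worker
            with Pool(processes=8) as pool:
                result = np.vstack(pool.map(process_block, blocks))
            assert result.shape == raster.shape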

  12. Self port scanning tool : providing a more secure computing Environment through the use of proactive port scanning

    NASA Technical Reports Server (NTRS)

    Kocher, Joshua E; Gilliam, David P.

    2005-01-01

    Secure computing is a necessity in the hostile environment that the Internet has become. Protection from nefarious individuals and organizations requires a solution that is more a methodology than a one-time fix. One aspect of this methodology is knowing which network ports a computer has open to the world; these network ports are essentially the doorways from the Internet into the computer. An assessment method that uses the nmap software to scan ports has been developed to aid System Administrators (SAs) with analysis of open ports on their system(s). Additionally, baselines for several operating systems have been developed so that SAs can compare their open ports to a baseline for a given operating system. Further, the tool is deployed on a website where SAs and users can request a port scan of their computer. The results are then emailed to the requestor. This tool aids users, SAs, and security professionals by providing an overall picture of what services are running, what ports are open, potential Trojan programs or backdoors, and what ports can be closed.
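
    The tool described wraps nmap and does far more (baselines per operating system, a web front end, reporting); the underlying question it answers for each port can be illustrated with a much simpler standard-library check, sketched below.

        # Simplified TCP port check using only the Python standard library.
        import socket

        def port_is_open(host, port, timeout=0.5):
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)
                return sock.connect_ex((host, port)) == 0   # 0 means the connection succeeded

        for port in (22, 80, 443, 8080):
            state = "open" if port_is_open("127.0.0.1", port) else "closed/filtered"
            print(f"port {port}: {state}")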

  13. The Research and Implementation of MUSER CLEAN Algorithm Based on OpenCL

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Chen, K.; Deng, H.; Wang, F.; Mei, Y.; Wei, S. L.; Dai, W.; Yang, Q. P.; Liu, Y. B.; Wu, J. P.

    2017-03-01

    High-performance data processing on a single machine is an urgent need in the development of astronomical software. However, because machine configurations differ, traditional programming techniques such as multi-threading and CUDA (Compute Unified Device Architecture)+GPU (Graphic Processing Unit) have obvious limitations in portability and seamlessness across operating systems. The OpenCL (Open Computing Language) approach used in the development of the MUSER (MingantU SpEctral Radioheliograph) data processing system is introduced, and the Högbom CLEAN algorithm is re-implemented as a parallel CLEAN algorithm using the Python language and the PyOpenCL extension package. The experimental results show that the CLEAN algorithm based on OpenCL has approximately equal operating efficiency compared with the former CLEAN algorithm based on CUDA. More importantly, data processing in a CPU-only (Central Processing Unit) environment can also achieve high performance, which solves the problem of the environmental dependence of CUDA+GPU. Overall, the research improves the adaptability of the system, with emphasis on the performance of MUSER image CLEAN computing. At the same time, the realization of OpenCL in MUSER proves its suitability for scientific data processing. In view of the high-performance computing features of OpenCL in heterogeneous environments, it will probably become a preferred technology in future high-performance astronomical software development.
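
    For reference, a serial NumPy sketch of the Högbom CLEAN loop that the MUSER system parallelizes with PyOpenCL is shown below; edge handling via np.roll is a simplification a production deconvolver would not make.

        # Serial sketch of the Hogbom CLEAN loop (not the MUSER PyOpenCL code).
        import numpy as np

        def hogbom_clean(dirty, psf, gain=0.1, threshold=1e-3, max_iter=500):
            residual = dirty.astype(float)
            model = np.zeros_like(residual)
            cy, cx = psf.shape[0] // 2, psf.shape[1] // 2   # PSF peak assumed centred
            for _ in range(max_iter):
                y, x = np.unravel_index(np.abs(residual).argmax(), residual.shape)
                peak = residual[y, x]
                if abs(peak) < threshold:
                    break
                model[y, x] += gain * peak
                # subtract the scaled PSF centred on the peak (np.roll wraps at edges)
                shifted_psf = np.roll(np.roll(psf, y - cy, axis=0), x - cx, axis=1)
                residual -= gain * peak * shifted_psf
            return model, residual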

  14. 75 FR 51500 - Advisory Committee on Reactor Safeguards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-20

    ..., October 14, 2009 (74 FR 52829-52830). Thursday, September 9, 2010, Conference Room T2-B1, Two White Flint... Fabrication Facility and the Associated Safety Evaluation Report (Open/ Closed)--The Committee will hold... the MOX Fuel Fabrication Facility and the associated Safety Evaluation Report. [Note: A portion of...

  15. Hubble Space Telescope (HST) at Lockheed Facility during preflight assembly

    NASA Image and Video Library

    1988-03-31

    A mechanical arm positions the axial scientific instrument (SI) module (orbital replacement unit (ORU)) just outside the open doors of the Hubble Space Telescope (HST) Support System Module (SSM) as clean-suited technicians oversee the process. HST assembly is being completed at the Lockheed Facility in Sunnyvale, California.

  16. 76 FR 59121 - Notice of Availability of the Record of Decision for the Final Environmental Impact Statement...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-23

    ... lined open channels; grade control structures; bridges and drainage crossings; building pads; and water quality control facilities (sedimentation control, flood control, debris, and water quality basins). The... facilities (sedimentation control, flood debris, and water quality basins); regular and ongoing maintenance...

  17. The Practical Considerations of Opening School Facilities to Lifelong Learning.

    ERIC Educational Resources Information Center

    Odell, John H.

    1997-01-01

    There are many reasons why a school should plan for extending its facilities' hours of use and making them available to the community. Benefits include improving cost effectiveness in using limited resources, improving security, promoting the school, enhancing staff's potential to offer industry-related training and expertise, and enriching…

  18. 76 FR 50289 - Notice of Funding Availability for the Department of Transportation's National Infrastructure...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-12

    ..., 2011, at 5 p.m. EDT (the ``Application Deadline''). The DOT pre-application system will open on or... systems, and projects that connect transportation facilities to other modes of transportation; and (2... existing transportation facilities and systems, with particular emphasis on projects that minimize life...

  19. PROGRAMMABLE EXPOSURE CONTROL SYSTEM FOR DETERMINATION OF THE EFFECTS OF POLLUTANT EXPOSURE REGIMES ON PLANT GROWTH

    EPA Science Inventory

    A field-exposure research facility was constructed to provide a controlled environment to determine the influence of the various components of ozone exposure on plant response. The facility uses modified open-top chambers and an automated control system for continuous delivery an...

  20. Helping Principals Make Better Use of Existing Facilities.

    ERIC Educational Resources Information Center

    Fredrickson, John H.

    The findings of two studies establish some deleterious effects of unsatisfactory physical environment on school children. However, modern technology enables the exercise of total environmental control in new and existing facilities. Between the realities of today and the expectations of tomorrow we have a transitional model--the open plan concept.…

  1. Overview of the NASA Dryden Flight Research Facility aeronautical flight projects

    NASA Technical Reports Server (NTRS)

    Meyer, Robert R., Jr.

    1992-01-01

    Several principal aerodynamics flight projects of the NASA Dryden Flight Research Facility are discussed. Key vehicle technology areas from a wide range of flight vehicles are highlighted. These areas include flight research data obtained for ground facility and computation correlation, applied research in areas not well suited to ground facilities (wind tunnels), and concept demonstration.

  2. Sea/Lake Water Air Conditioning at Naval Facilities.

    DTIC Science & Technology

    1980-05-01

    Contents headings recovered from the report: Economics at Two Facilities; Facilities; Computer Models. ...of an operational test at Naval Security Group Activity (NSGA) Winter Harbor, Me., and the economics of Navywide application. In FY76 an assessment of... economics of Navywide application of sea/lake water AC indicated that cost and energy savings at the sites of some Naval facilities are possible, depending

  3. Automated smear counting and data processing using a notebook computer in a biomedical research facility.

    PubMed

    Ogata, Y; Nishizawa, K

    1995-10-01

    An automated smear counting and data processing system for a life science laboratory was developed to facilitate routine surveys and eliminate human errors by using a notebook computer. This system was composed of a personal computer, a liquid scintillation counter and a well-type NaI(Tl) scintillation counter. The radioactivity of smear samples was automatically measured by these counters. The personal computer received raw signals from the counters through an interface of RS-232C. The software for the computer evaluated the surface density of each radioisotope and printed out that value along with other items as a report. The software was programmed in Pascal language. This system was successfully applied to routine surveys for contamination in our facility.
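
    The conversion the software performs, from counter output to a surface activity density, can be sketched as follows; the efficiency, wipe area, and removal fraction are illustrative assumptions, not parameters of the system described here.

        # Illustrative smear-survey arithmetic; all parameter values are assumptions.
        def surface_density_bq_per_cm2(gross_cpm, background_cpm,
                                       efficiency=0.30,      # counts per decay
                                       wipe_area_cm2=100.0,  # area wiped
                                       removal_fraction=0.1):
            net_cps = max(gross_cpm - background_cpm, 0.0) / 60.0
            activity_bq = net_cps / efficiency               # decays per second
            return activity_bq / (wipe_area_cm2 * removal_fraction)

        print(surface_density_bq_per_cm2(gross_cpm=250.0, background_cpm=40.0))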

  4. Icing simulation: A survey of computer models and experimental facilities

    NASA Technical Reports Server (NTRS)

    Potapczuk, M. G.; Reinmann, J. J.

    1991-01-01

    A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include a computer code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focused on upgrading simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced airfoil aerodynamics is examined.

  5. Icing simulation: A survey of computer models and experimental facilities

    NASA Technical Reports Server (NTRS)

    Potapczuk, M. G.; Reinmann, J. J.

    1991-01-01

    A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include a computer code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focused on upgrading simulation fidelity of both experimental and computational methods is discussed. The need for the increased understanding of the physical processes governing ice accretion, ice shedding, and iced aerodynamics is examined.

  6. Energy consumption and load profiling at major airports. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, J.

    1998-12-01

    This report describes the results of energy audits at three major US airports. These studies developed load profiles and quantified energy usage at these airports while identifying procedures and electrotechnologies that could reduce their power consumption. The major power consumers at the airports studied included central plants, runway and taxiway lighting, fuel farms, terminals, people mover systems, and hangar facilities. Several major findings emerged during the study. The amount of energy efficient equipment installed at an airport is directly related to the age of the facility. Newer facilities had more energy efficient equipment while older facilities had much of the original electric and natural gas equipment still in operation. As redesign, remodeling, and/or replacement projects proceed, responsible design engineers are selecting more energy efficient equipment to replace original devices. The use of computer-controlled energy management systems varies. At airports, the primary purpose of these systems is to monitor and control the lighting and environmental air conditioning and heating of the facility. Of the facilities studied, one used computer management extensively, one used it only marginally, and one had no computer controlled management devices. At all of the facilities studied, natural gas is used to provide heat and hot water. Natural gas consumption is at its highest in the months of November, December, January, and February. The Central Plant contains most of the inductive load at an airport and is also a major contributor to power consumption inefficiency. Power factor correction equipment was used at one facility but was not installed at the other two facilities due to high power factor and/or lack of need.

  7. Application of Adjoint Method and Spectral-Element Method to Tomographic Inversion of Regional Seismological Structure Beneath Japanese Islands

    NASA Astrophysics Data System (ADS)

    Tsuboi, S.; Miyoshi, T.; Obayashi, M.; Tono, Y.; Ando, K.

    2014-12-01

    Recent progress in large-scale computing using waveform modeling techniques and high-performance computing facilities has demonstrated the possibility of performing full-waveform inversion of three-dimensional (3D) seismological structure inside the Earth. We apply the adjoint method (Liu and Tromp, 2006) to obtain the 3D structure beneath the Japanese Islands. First, we implemented the Spectral-Element Method on the K computer in Kobe, Japan. We optimized SPECFEM3D_GLOBE (Komatitsch and Tromp, 2002) using OpenMP so that the code fits the hybrid architecture of the K computer. We can now use 82,134 nodes of the K computer (657,072 cores) to compute synthetic waveforms with about 1 sec accuracy for a realistic 3D Earth model, at a performance of 1.2 PFLOPS. Using this optimized SPECFEM3D_GLOBE code, we take one chunk of the global mesh around the Japanese Islands and compute synthetic seismograms with an accuracy of about 10 seconds. We use the GAP-P2 mantle tomography model (Obayashi et al., 2009) as an initial 3D model and use as many broadband seismic stations available in this region as possible to perform the inversion. We then use time windows for body waves and surface waves to compute adjoint sources and calculate adjoint kernels for the seismic structure. We have performed several iterations and obtained an improved 3D structure beneath the Japanese Islands. The result demonstrates that waveform misfits between observed and theoretical seismograms decrease as the iterations proceed. We are now preparing to use much shorter periods in our synthetic waveform computation and to obtain seismic structure for basin-scale models, such as the Kanto basin, where there is a dense seismic network and high seismic activity. Acknowledgements: This research was partly supported by the MEXT Strategic Program for Innovative Research. We used F-net seismograms of the National Research Institute for Earth Science and Disaster Prevention.

  8. KSC-2013-2995

    NASA Image and Video Library

    2013-06-29

    CAPE CANAVERAL, Fla. -- At the Kennedy Space Center Visitor Complex in Florida, CNN correspondent John Zarrella counted down for the ceremonial opening of the new "Space Shuttle Atlantis" facility. Smoke billows near a full-scale set of space shuttle twin solid rocket boosters and external fuel tank at the entrance to the exhibit building. Looking on after pressing buttons to mark the opening of the new exhibit are, from the left, Charlie Bolden, NASA administrator, Bob Cabana, Kennedy director, Rick Abramson, Delaware North Parks and Resorts president, and Bill Moore, Delaware North Parks and Resorts chief operating officer. The new $100 million facility includes interactive exhibits that tell the story of the 30-year Space Shuttle Program and highlight the future of space exploration. The "Space Shuttle Atlantis" exhibit formally opened to the public on June 29, 2013. Photo credit: NASA/Jim Grossmann

  9. Quality improvement nursing facilities: a nursing leadership perspective.

    PubMed

    Adams-Wendling, Linda; Lee, Robert

    2005-11-01

    The purposes of this study were to characterize the state of quality improvement (QI) in nursing facilities and to identify barriers to improvement from nursing leaders' perspectives. The study employed a non-experimental descriptive design, using closed- and open-ended survey questions in a sample of 51 nursing facilities in a midwestern state. Only two of these facilities had active QI programs. Furthermore, turnover and limited training among these nursing leaders represented major barriers to rapid implementation of such programs. This study is consistent with earlier findings that QI programs are limited in nursing homes.

  10. Open Mess Management Career Ladder AFS 742X0 and CEM Code 74200.

    DTIC Science & Technology

    1980-12-01

    I. OPEN MESS MANAGERS (SPC049, N=187). II. FOOD/BEVERAGE OPERATIONS ASSISTANT MANAGERS CLUSTER (GRP076, N=92); a. Bar and Operations Managers (GRP085...said they will or probably will reenlist. II. FOOD/BEVERAGE OPERATIONS ASSISTANT MANAGERS CLUSTER (GRP076). This cluster of 92 respondents (23...operation of open mess food and beverage functions. The majority of these airmen identify themselves as Assistant Managers of open mess facilities and are

  11. Increasing the impact of medical image computing using community-based open-access hackathons: The NA-MIC and 3D Slicer experience.

    PubMed

    Kapur, Tina; Pieper, Steve; Fedorov, Andriy; Fillion-Robin, J-C; Halle, Michael; O'Donnell, Lauren; Lasso, Andras; Ungi, Tamas; Pinter, Csaba; Finet, Julien; Pujol, Sonia; Jagadeesan, Jayender; Tokuda, Junichi; Norton, Isaiah; Estepar, Raul San Jose; Gering, David; Aerts, Hugo J W L; Jakab, Marianna; Hata, Nobuhiko; Ibanez, Luiz; Blezek, Daniel; Miller, Jim; Aylward, Stephen; Grimson, W Eric L; Fichtinger, Gabor; Wells, William M; Lorensen, William E; Schroeder, Will; Kikinis, Ron

    2016-10-01

    The National Alliance for Medical Image Computing (NA-MIC) was launched in 2004 with the goal of investigating and developing an open source software infrastructure for the extraction of information and knowledge from medical images using computational methods. Several leading research and engineering groups participated in this effort that was funded by the US National Institutes of Health through a variety of infrastructure grants. This effort transformed 3D Slicer from an internal, Boston-based, academic research software application into a professionally maintained, robust, open source platform with an international leadership and developer and user communities. Critical improvements to the widely used underlying open source libraries and tools (VTK, ITK, CMake, CDash, DCMTK) were an additional consequence of this effort. This project has contributed to close to a thousand peer-reviewed publications and a growing portfolio of US and international funded efforts expanding the use of these tools in new medical computing applications every year. In this editorial, we discuss what we believe are gaps in the way medical image computing is pursued today; how a well-executed research platform can enable discovery, innovation and reproducible science ("Open Science"); and how our quest to build such a software platform has evolved into a productive and rewarding social engineering exercise in building an open-access community with a shared vision. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Microsoft Repository Version 2 and the Open Information Model.

    ERIC Educational Resources Information Center

    Bernstein, Philip A.; Bergstraesser, Thomas; Carlson, Jason; Pal, Shankar; Sanders, Paul; Shutt, David

    1999-01-01

    Describes the programming interface and implementation of the repository engine and the Open Information Model for Microsoft Repository, an object-oriented meta-data management facility that ships in Microsoft Visual Studio and Microsoft SQL Server. Discusses Microsoft's component object model, object manipulation, queries, and information…

  13. 40 CFR 256.22 - Recommendations for State regulatory powers.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... WASTES GUIDELINES FOR DEVELOPMENT AND IMPLEMENTATION OF STATE SOLID WASTE MANAGEMENT PLANS Solid Waste... prohibit new open dumps and close or upgrade all existing open dumps. (a) Solid waste disposal standards... solid waste disposal facility. These procedures should include identification of future land use or the...

  14. 40 CFR 256.22 - Recommendations for State regulatory powers.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... WASTES GUIDELINES FOR DEVELOPMENT AND IMPLEMENTATION OF STATE SOLID WASTE MANAGEMENT PLANS Solid Waste... prohibit new open dumps and close or upgrade all existing open dumps. (a) Solid waste disposal standards... solid waste disposal facility. These procedures should include identification of future land use or the...

  15. THE COMPUTER AS A MANAGEMENT TOOL--PHYSICAL FACILITIES INVENTORIES, UTILIZATION, AND PROJECTIONS. 11TH ANNUAL MACHINE RECORDS CONFERENCE PROCEEDINGS (UNIVERSITY OF TENNESSEE, KNOXVILLE, APRIL 25-27, 1966).

    ERIC Educational Resources Information Center

    WITMER, DAVID R.

    WISCONSIN STATE UNIVERSITIES HAVE BEEN USING THE COMPUTER AS A MANAGEMENT TOOL TO STUDY PHYSICAL FACILITIES INVENTORIES, SPACE UTILIZATION, AND ENROLLMENT AND PLANT PROJECTIONS. EXAMPLES ARE SHOWN GRAPHICALLY AND DESCRIBED FOR DIFFERENT TYPES OF ANALYSIS, SHOWING THE CARD FORMAT, CODING SYSTEMS, AND PRINTOUT. EQUATIONS ARE PROVIDED FOR DETERMINING…

  16. Artificial intelligence issues related to automated computing operations

    NASA Technical Reports Server (NTRS)

    Hornfeck, William A.

    1989-01-01

    Large data processing installations represent target systems for effective applications of artificial intelligence (AI) constructs. The system organization of a large data processing facility at the NASA Marshall Space Flight Center is presented. The methodology and the issues which are related to AI application to automated operations within a large-scale computing facility are described. Problems to be addressed and initial goals are outlined.

  17. 120. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    120. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "foundation & first floor plan" - structural, AS-BLT AW 35-46-04, sheet 65, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  18. 119. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    119. Back side technical facilities S.R. radar transmitter & computer building no. 102, section I "tower plan, sections & details" - structural, AS-BLT AW 35-46-04, sheet 62, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  19. Conceptual design of the MHD Engineering Test Facility

    NASA Technical Reports Server (NTRS)

    Bents, D. J.; Bercaw, R. W.; Burkhart, J. A.; Mroz, T. S.; Rigo, H. S.; Pearson, C. V.; Warinner, D. K.; Hatch, A. M.; Borden, M.; Giza, D. A.

    1981-01-01

    The reference conceptual design of the MHD engineering test facility, a prototype 200 MWe coal-fired electric generating plant designed to demonstrate the commercial feasibility of open cycle MHD, is summarized. Main elements of the design are identified and explained, and the rationale behind them is reviewed. Major systems and plant facilities are listed and discussed. Construction cost and schedule estimates are included, and the engineering issues that should be reexamined are identified.

  20. Computer-Based Learning in Open and Distance Learning Institutions in Nigeria: Cautions on Use of Internet for Counseling

    ERIC Educational Resources Information Center

    Okopi, Fidel Onjefu; Odeyemi, Olajumoke Janet; Adesina, Adewale

    2015-01-01

    The study has identified the areas of strengths and weaknesses in the current use of Computer Based Learning (CBL) tools in Open and Distance Learning (ODL) institutions in Nigeria. To achieve these objectives, the following research questions were proposed: (i) What are the computer-based learning tools (software and hardware) that are actually in…

  1. USE OF COMPUTER-AIDED PROCESS ENGINEERING TOOL IN POLLUTION PREVENTION

    EPA Science Inventory

    Computer-Aided Process Engineering has become established in industry as a design tool with the establishment of the CAPE-OPEN software specifications for process simulation environments. CAPE-OPEN provides a set of "middleware" standards that enable software developers to acces...

  2. 33 CFR 106.305 - Facility Security Assessment (FSA) requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., including computer systems and networks; (vi) Existing agreements with private security companies; (vii) Any... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Facility Security Assessment (FSA... SECURITY MARITIME SECURITY MARINE SECURITY: OUTER CONTINENTAL SHELF (OCS) FACILITIES Outer Continental...

  3. 33 CFR 106.305 - Facility Security Assessment (FSA) requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., including computer systems and networks; (vi) Existing agreements with private security companies; (vii) Any... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Facility Security Assessment (FSA... SECURITY MARITIME SECURITY MARINE SECURITY: OUTER CONTINENTAL SHELF (OCS) FACILITIES Outer Continental...

  4. 33 CFR 106.305 - Facility Security Assessment (FSA) requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., including computer systems and networks; (vi) Existing agreements with private security companies; (vii) Any... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Facility Security Assessment (FSA... SECURITY MARITIME SECURITY MARINE SECURITY: OUTER CONTINENTAL SHELF (OCS) FACILITIES Outer Continental...

  5. 33 CFR 106.305 - Facility Security Assessment (FSA) requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., including computer systems and networks; (vi) Existing agreements with private security companies; (vii) Any... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Facility Security Assessment (FSA... SECURITY MARITIME SECURITY MARINE SECURITY: OUTER CONTINENTAL SHELF (OCS) FACILITIES Outer Continental...

  6. Automatic Estimation of the Radiological Inventory for the Dismantling of Nuclear Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Bermejo, R.; Felipe, A.; Gutierrez, S.

    The estimation of the radiological inventory of nuclear facilities to be dismantled is a process that includes information related to the physical inventory of the whole plant and to the radiological survey. The radiological inventory of all the components and civil structures of the plant can be estimated with mathematical models using a statistical approach. A computer application has been developed in order to obtain the radiological inventory in an automatic way. Results: A computer application that is able to estimate the radiological inventory from the radiological measurements or the characterization program has been developed. This computer application includes the statistical functions needed for the estimation of central tendency and variability, e.g. mean, median, variance, confidence intervals, coefficients of variation, etc. It is a necessary tool for estimating the radiological inventory of a nuclear facility and a powerful tool for decision making in future sampling surveys.
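
    A small sketch of the kind of summary statistics the application computes (central tendency, variability, and a confidence interval) is given below; the measurement values are made up and the 95% interval uses a normal approximation.

        # Summary statistics for a set of radiological measurements (made-up values).
        import numpy as np

        measurements = np.array([0.82, 1.10, 0.95, 1.30, 0.76, 1.05])  # e.g. Bq/g, illustrative

        mean = measurements.mean()
        std = measurements.std(ddof=1)
        cv = std / mean                                   # coefficient of variation
        half_width = 1.96 * std / np.sqrt(len(measurements))

        print(f"mean = {mean:.3f}, median = {np.median(measurements):.3f}")
        print(f"variance = {std**2:.4f}, CV = {cv:.2%}")
        print(f"95% CI (normal approx.): [{mean - half_width:.3f}, {mean + half_width:.3f}]")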

  7. OpenMx: An Open Source Extended Structural Equation Modeling Framework

    ERIC Educational Resources Information Center

    Boker, Steven; Neale, Michael; Maes, Hermine; Wilde, Michael; Spiegel, Michael; Brick, Timothy; Spies, Jeffrey; Estabrook, Ryne; Kenny, Sarah; Bates, Timothy; Mehta, Paras; Fox, John

    2011-01-01

    OpenMx is free, full-featured, open source, structural equation modeling (SEM) software. OpenMx runs within the "R" statistical programming environment on Windows, Mac OS-X, and Linux computers. The rationale for developing OpenMx is discussed along with the philosophy behind the user interface. The OpenMx data structures are…

  8. Numerical Predictions of Mode Reflections in an Open Circular Duct: Comparison with Theory

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Hixon, Ray

    2015-01-01

    The NASA Broadband Aeroacoustic Stator Simulation code was used to compute the acoustic field for higher-order modes in a circular duct geometry. To test the accuracy of the results computed by the code, the duct was terminated by an open end with an infinite flange or no flange. Both open end conditions have a theoretical solution that was used to compare with the computed results. Excellent agreement in reflection matrix values was achieved after suitable refinement of the grid at the open end. The study also revealed issues with the level of the mode amplitude introduced into the acoustic field from the source boundary and the amount of reflection that occurred at the source boundary when a general nonreflecting boundary condition was applied.

  9. Progress on the Fabric for Frontier Experiments Project at Fermilab

    NASA Astrophysics Data System (ADS)

    Box, Dennis; Boyd, Joseph; Dykstra, Dave; Garzoglio, Gabriele; Herner, Kenneth; Kirby, Michael; Kreymer, Arthur; Levshina, Tanya; Mhashilkar, Parag; Sharma, Neha

    2015-12-01

    The FabrIc for Frontier Experiments (FIFE) project is an ambitious, major-impact initiative within the Fermilab Scientific Computing Division designed to lead the computing model for Fermilab experiments. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying needs and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of Open Science Grid solutions for high throughput computing, data management, database access, and collaboration within experiments. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating several services into experiment computing operations, including new job submission services, software and reference data distribution through CVMFS repositories, a flexible data transfer client, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken a leading role in the definition of the computing model for Fermilab experiments, aided in the design of computing for experiments beyond Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.

  10. NASA Center for Computational Sciences: History and Resources

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  11. The Education Value of Cloud Computing

    ERIC Educational Resources Information Center

    Katzan, Harry, Jr.

    2010-01-01

    Cloud computing is a technique for supplying computer facilities and providing access to software via the Internet. Cloud computing represents a contextual shift in how computers are provisioned and accessed. One of the defining characteristics of cloud software service is the transfer of control from the client domain to the service provider.…

  12. Expressing clinical data sets with openEHR archetypes: a solid basis for ubiquitous computing.

    PubMed

    Garde, Sebastian; Hovenga, Evelyn; Buck, Jasmin; Knaup, Petra

    2007-12-01

    The purpose of this paper is to analyse the feasibility and usefulness of expressing clinical data sets (CDSs) as openEHR archetypes. For this, we present an approach to transform CDSs into archetypes, and outline typical problems with CDSs and analyse whether some of these problems can be overcome by the use of archetypes. Literature review and analysis of a selection of existing Australian, German, other European and international CDSs; transfer of a CDS for Paediatric Oncology into openEHR archetypes; implementation of CDSs in application systems. To explore the feasibility of expressing CDSs as archetypes, an approach to transform existing CDSs into archetypes is presented in this paper. In the case of the Paediatric Oncology CDS (which consists of 260 data items), this led to the definition of 48 openEHR archetypes. To analyse the usefulness of expressing CDSs as archetypes, we identified nine problems with CDSs that currently remain unsolved without a common model underpinning the CDS. Typical problems include incompatible basic data types and overlapping and incompatible definitions of clinical content. A solution to most of these problems based on openEHR archetypes is motivated. With regard to integrity constraints, further research is required. While openEHR cannot overcome all barriers to Ubiquitous Computing, it can provide the common basis for ubiquitous presence of meaningful and computer-processable knowledge and information, which we believe is a basic requirement for Ubiquitous Computing. Expressing CDSs as openEHR archetypes is feasible and advantageous as it fosters semantic interoperability, supports ubiquitous computing, and helps to develop archetypes that are arguably of better quality than the original CDSs.

  13. OpenCL-based vicinity computation for 3D multiresolution mesh compression

    NASA Astrophysics Data System (ADS)

    Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri

    2017-03-01

    3D multiresolution mesh compression systems are still widely studied in many domains. These systems increasingly require volumetric data to be processed in real time, so performance is constrained by hardware resource usage and the need to reduce overall computation time. In this paper, our contribution lies in computing, in real time, the triangle neighborhoods of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform (WT) technique. The originality of the latter algorithm is that it computes the WT with minimal memory usage by processing data as they are acquired. With large data sets, however, this technique suffers from high computational complexity. This work therefore exploits the GPU to accelerate the computation, using OpenCL as a heterogeneous programming language. Experiments demonstrate that, besides the portability across platforms and the flexibility guaranteed by the OpenCL-based implementation, this method achieves a speedup factor of 5 compared to the sequential CPU implementation.
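
    To make the notion of triangle "vicinity" concrete, the Python sketch below computes triangle neighborhoods on the CPU by matching shared edges. It is only a reference for the data-parallel step that the paper offloads to the GPU with OpenCL; the tiny mesh is a made-up example.

    ```python
    # CPU reference sketch of triangle "vicinity" computation: two triangles are
    # neighbours if they share an edge. This is the data-parallel step the paper
    # accelerates with OpenCL on the GPU; the mesh here is a tiny made-up example.
    from collections import defaultdict

    def triangle_neighbours(triangles):
        """Map each triangle index to the indices of edge-adjacent triangles."""
        edge_to_tris = defaultdict(list)
        for t_idx, (a, b, c) in enumerate(triangles):
            for edge in ((a, b), (b, c), (c, a)):
                edge_to_tris[tuple(sorted(edge))].append(t_idx)

        neighbours = defaultdict(set)
        for tris in edge_to_tris.values():
            for t in tris:
                neighbours[t].update(other for other in tris if other != t)
        return {t: sorted(ns) for t, ns in neighbours.items()}

    # Two triangles sharing the edge (1, 2).
    mesh = [(0, 1, 2), (1, 3, 2)]
    print(triangle_neighbours(mesh))   # {0: [1], 1: [0]}
    ```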

  14. Open Systems Interconnection.

    ERIC Educational Resources Information Center

    Denenberg, Ray

    1985-01-01

    Discusses the need for standards allowing computer-to-computer communication and gives examples of technical issues. The seven-layer framework of the Open Systems Interconnection (OSI) Reference Model is explained and illustrated. Sidebars feature public data networks and Recommendation X.25, OSI standards, OSI layer functions, and a glossary.…
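
    For readers who want the seven-layer framework at a glance, the short Python listing below enumerates the OSI layers from bottom to top with a one-line role for each; the example protocols are illustrative rather than drawn from the article.

    ```python
    # The seven OSI layers, bottom (1) to top (7), each with a one-line summary.
    # Example protocols in parentheses are illustrative only.
    OSI_LAYERS = {
        1: ("Physical",     "bit transmission over the medium (e.g. electrical signalling)"),
        2: ("Data Link",    "framing and node-to-node delivery (e.g. Ethernet)"),
        3: ("Network",      "addressing and routing between networks (e.g. IP, X.25 packet level)"),
        4: ("Transport",    "end-to-end delivery and reliability (e.g. TCP)"),
        5: ("Session",      "dialogue control between communicating applications"),
        6: ("Presentation", "data representation, encoding, and encryption"),
        7: ("Application",  "network services exposed to user programs"),
    }

    for number, (name, role) in OSI_LAYERS.items():
        print(f"Layer {number}: {name} -- {role}")
    ```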

  15. Schweickart and guest at ASVC prior to grand opening

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Apollo 9 Lunar Module Pilot Russell L. Schweickart poses in front of an Apollo Command and Service Module in the new Apollo/Saturn V Center (ASVC) at KSC prior to the gala grand opening ceremony for the facility, held Jan. 8, 1997. Several Apollo astronauts were invited to participate in the event, which also featured NASA Administrator Dan Goldin and KSC Director Jay Honeycutt. The ASVC also features several other Apollo program spacecraft components, multimedia presentations, and a simulated Apollo/Saturn V liftoff. The facility will be a part of the KSC bus tour that embarks from the KSC Visitor Center.

  16. First International Symposium on Strain Gauge Balances. Pt. 1

    NASA Technical Reports Server (NTRS)

    Tripp, John S. (Editor); Tcheng, Ping (Editor)

    1999-01-01

    The first International Symposium on Strain Gauge Balances was sponsored and held at NASA Langley Research Center during October 22-25, 1996. The symposium provided an open international forum for presentation, discussion, and exchange of technical information among wind tunnel test technique specialists and strain gauge balance designers. The Symposium also served to initiate organized professional activities among the participating and relevant international technical communities. Over 130 delegates from 15 countries were in attendance. The program opened with a panel discussion, followed by technical paper sessions, and guided tours of the National Transonic Facility (NTF) wind tunnel, a local commercial balance fabrication facility, and the LaRC balance calibration laboratory. The opening panel discussion addressed "Future Trends in Balance Development and Applications." Forty-six technical papers were presented in 11 technical sessions covering the following areas: calibration, automatic calibration, data reduction, facility reports, design, accuracy and uncertainty analysis, strain gauges, instrumentation, balance design, thermal effects, finite element analysis, applications, and special balances. At the conclusion of the Symposium, a steering committee representing most of the nations and several U.S. organizations attending the Symposium was established to initiate planning for a second international balance symposium, to be held in 1999 in the UK.

  17. First International Symposium on Strain Gauge Balances. Part 2

    NASA Technical Reports Server (NTRS)

    Tripp, John S. (Editor); Tcheng, Ping (Editor)

    1999-01-01

    The first International Symposium on Strain Gauge Balances was sponsored and held at NASA Langley Research Center during October 22-25, 1996. The symposium provided an open international forum for presentation, discussion, and exchange of technical information among wind tunnel test technique specialists and strain gauge balance designers. The Symposium also served to initiate organized professional activities among the participating and relevant international technical communities. Over 130 delegates from 15 countries were in attendance. The program opened with a panel discussion, followed by technical paper sessions, and guided tours of the National Transonic Facility (NTF) wind tunnel, a local commercial balance fabrication facility, and the LaRC balance calibration laboratory. The opening panel discussion addressed "Future Trends in Balance Development and Applications." Forty-six technical papers were presented in 11 technical sessions covering the following areas: calibration, automatic calibration, data reduction, facility reports, design, accuracy and uncertainty analysis, strain gauges, instrumentation, balance design, thermal effects, finite element analysis, applications, and special balances. At the conclusion of the Symposium, a steering committee representing most of the nations and several U.S. organizations attending the Symposium was established to initiate planning for a second international balance symposium, to be held in 1999 in the UK.

  18. 10 CFR 20.1906 - Procedures for receiving and opening packages.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    Title 10 (Energy), § 20.1906, Nuclear Regulatory Commission, Standards for Protection Against Radiation: Procedures for receiving and opening packages. Excerpt: "... received at the licensee's facility if it is received during the licensee's normal working hours, or not..."

  19. 10 CFR 20.1906 - Procedures for receiving and opening packages.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    Title 10 (Energy), § 20.1906, Nuclear Regulatory Commission, Standards for Protection Against Radiation: Procedures for receiving and opening packages. Excerpt: "... received at the licensee's facility if it is received during the licensee's normal working hours, or not..."

  20. Valve For Extracting Samples From A Process Stream

    NASA Technical Reports Server (NTRS)

    Callahan, Dave

    1995-01-01

    A valve for extracting samples from a process stream includes a cylindrical body bolted to the pipe that contains the stream. An opening in the valve body is matched and sealed against an opening in the pipe. The valve is used to sample process streams in a variety of facilities, including cement plants, plants that manufacture and reprocess plastics, oil refineries, and pipelines.
