Sample records for computer-integrated facilities management

  1. Facilities Management via Computer: Information at Your Fingertips.

    ERIC Educational Resources Information Center

    Hensey, Susan

    1996-01-01

    Computer-aided facilities management is a software program consisting of a relational database of facility information--such as occupancy, usage, student counts, etc.--attached to or merged with computerized floor plans. This program can integrate data with drawings, thereby allowing the development of "what if" scenarios. (MLF)

  2. The grand challenge of managing the petascale facility.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aiken, R. J.; Mathematics and Computer Science

    2007-02-28

    This report is the result of a study of networks and how they may need to evolve to support petascale leadership computing and science. As Dr. Ray Orbach, director of the Department of Energy's Office of Science, says in the spring 2006 issue of SciDAC Review, 'One remarkable example of growth in unexpected directions has been in high-end computation'. In the same article Dr. Michael Strayer states, 'Moore's law suggests that before the end of the next cycle of SciDAC, we shall see petaflop computers'. Given the Office of Science's strong leadership and support for petascale computing and facilities, we should expect to see petaflop computers in operation in support of science before the end of the decade, and DOE/SC Advanced Scientific Computing Research programs are focused on making this a reality. This study took its lead from this strong focus on petascale computing and the networks required to support such facilities, but it grew to include almost all aspects of the DOE/SC petascale computational and experimental science facilities, all of which will face daunting challenges in managing and analyzing the voluminous amounts of data expected. In addition, trends indicate the increased coupling of unique experimental facilities with computational facilities, along with the integration of multidisciplinary datasets and high-end computing with data-intensive computing; and we can expect these trends to continue at the petascale level and beyond. Coupled with recent technology trends, they clearly indicate the need for including capability petascale storage, networks, and experiments, as well as collaboration tools and programming environments, as integral components of the Office of Science's petascale capability metafacility. The objective of this report is to recommend a new cross-cutting program to support the management of petascale science and infrastructure. The appendices of the report document current and projected DOE computation facilities, science trends, and technology trends, whose combined impact can affect the manageability and stewardship of DOE's petascale facilities. This report is not meant to be all-inclusive. Rather, the facilities, science projects, and research topics presented are to be considered examples to clarify a point.

  3. Data management and its role in delivering science at DOE BES user facilities - Past, Present, and Future

    NASA Astrophysics Data System (ADS)

    Miller, Stephen D.; Herwig, Kenneth W.; Ren, Shelly; Vazhkudai, Sudharshan S.; Jemian, Pete R.; Luitz, Steffen; Salnikov, Andrei A.; Gaponenko, Igor; Proffen, Thomas; Lewis, Paul; Green, Mark L.

    2009-07-01

    The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research [1]. We trace back almost 30 years of history across selected user facilities illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data, challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility-produced data. Trends indicate that this will continue to be the case for some time yet. Thus users face a quandary of how to manage today's data complexity and size, as these may exceed the computing resources users have available to them. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing, thereby providing users access to the resources they need [2]. Portal-based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next-tier cross-instrument, cross-facility scientific research fuelled by smart applications residing upon user computer resources. We can learn from the medical imaging community, which has been working since the early 1990s to integrate data from across multiple modalities to achieve better diagnoses [3] - similarly, data fusion across BES facilities will lead to new scientific discoveries.

  4. Multi-objective reverse logistics model for integrated computer waste management.

    PubMed

    Ahluwalia, Poonam Khanijo; Nema, Arvind K

    2006-12-01

    This study aimed to address the issues involved in the planning and design of a computer waste management system in an integrated manner. A decision-support tool is presented for selecting an optimum configuration of computer waste management facilities (segregation, storage, treatment/processing, reuse/recycle and disposal) and allocation of waste to these facilities. The model is based on an integer linear programming method with the objectives of minimizing environmental risk as well as cost. The issue of uncertainty in the estimated waste quantities from multiple sources is addressed using the Monte Carlo simulation technique. An illustrated example of computer waste management in Delhi, India is presented to demonstrate the usefulness of the proposed model and to study tradeoffs between cost and risk. The results of the example problem show that it is possible to reduce the environmental risk significantly by a marginal increase in the available cost. The proposed model can serve as a powerful tool to address the environmental problems associated with exponentially growing quantities of computer waste which are presently being managed using rudimentary methods of reuse, recovery and disposal by various small-scale vendors.
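
    A minimal sketch of the kind of model the abstract describes may help, assuming a weighted-sum scalarization of the two objectives and a brute-force search over facility configurations; the facility data, routing rule, and all names below are hypothetical illustrations, not the paper's actual formulation. The selection problem is re-solved under Monte Carlo draws of the uncertain waste quantity, and averaging the outcomes traces the cost-risk tradeoff.

        import itertools
        import random

        # Hypothetical data for three candidate waste facilities: annualized
        # cost of opening, environmental-risk score per tonne routed, capacity.
        COST = [120.0, 95.0, 150.0]
        RISK = [0.8, 1.4, 0.5]
        CAP = [60.0, 40.0, 80.0]

        def evaluate(open_mask, demand):
            """Route demand to open facilities in order of rising risk;
            return (total_cost, total_risk), or None if capacity is short."""
            total_cost = sum(c for c, o in zip(COST, open_mask) if o)
            total_risk, remaining = 0.0, demand
            for i in sorted(range(len(COST)), key=RISK.__getitem__):
                if open_mask[i] and remaining > 0:
                    routed = min(CAP[i], remaining)
                    total_risk += routed * RISK[i]
                    remaining -= routed
            return None if remaining > 1e-9 else (total_cost, total_risk)

        def solve(weight, demand):
            """Weighted-sum scalarization of the cost and risk objectives."""
            best = None
            for mask in itertools.product([0, 1], repeat=len(COST)):
                outcome = evaluate(mask, demand)
                if outcome is not None:
                    score = weight * outcome[0] + (1 - weight) * outcome[1]
                    if best is None or score < best[0]:
                        best = (score, mask, outcome)
            return best

        # Monte Carlo over the uncertain waste quantity, then average the
        # cost and risk of the optimal configuration for each tradeoff weight.
        random.seed(1)
        for w in (0.2, 0.5, 0.8):
            draws = [min(random.gauss(100, 15), sum(CAP)) for _ in range(200)]
            picks = [solve(w, d) for d in draws]
            mean_cost = sum(p[2][0] for p in picks) / len(picks)
            mean_risk = sum(p[2][1] for p in picks) / len(picks)
            print(f"weight={w}: mean cost={mean_cost:.1f}, mean risk={mean_risk:.1f}")

    Sweeping the weight between the two objectives exposes the tradeoff the authors study: a marginal increase in allowable cost can buy a substantial reduction in environmental risk.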

  5. The Vendors' Corner: Biblio-Techniques' Library and Information System (BLIS).

    ERIC Educational Resources Information Center

    Library Software Review, 1984

    1984-01-01

    Describes online catalog and integrated library computer system designed to enhance Washington Library Network's software. Highlights include system components; implementation options; system features (integrated library functions, database design, system management facilities); support services (installation and training, software maintenance and…

  6. Computer usage among nurses in rural health-care facilities in South Africa: obstacles and challenges.

    PubMed

    Asah, Flora

    2013-04-01

    This study discusses factors inhibiting computer usage for work-related tasks among computer-literate professional nurses within rural healthcare facilities in South Africa. In the past two decades computer literacy courses have not been part of the nursing curricula. Computer courses are offered by the State Information Technology Agency. Despite this, there seems to be limited use of computers by professional nurses in the rural context. Focus group interviews were held with 40 professional nurses from three government hospitals in northern KwaZulu-Natal. Contributing factors were found to be lack of information technology infrastructure, restricted access to computers, and deficits in technical and nursing management support. The physical location of computers within the health-care facilities and lack of relevant software emerged as specific obstacles to usage. Provision of continuous and active support from nursing management could positively influence computer usage among professional nurses. A closer integration of information technology and computer literacy skills into existing nursing curricula would foster a positive attitude towards computer usage through early exposure. Responses indicated that a change of mindset may be needed on the part of nursing management so that they begin to actively promote ready access to computers as a means of creating greater professionalism and collegiality. © 2011 Blackwell Publishing Ltd.

  7. KSC-06pd1204

    NASA Image and Video Library

    2006-06-23

    KENNEDY SPACE CENTER, FLA. - An overview of the new Firing Room 4 shows the expanse of computer stations and the various operations the facility will be able to manage. FR4 is now designated the primary firing room for all remaining shuttle launches, and will also be used daily to manage operations in the Orbiter Processing Facilities and for integrated processing for the shuttle. The firing room now includes sound-suppressing walls and floors, new humidity control, fire-suppression systems and consoles, support tables with computer stations, communication systems and laptop computer ports. FR 4 also has power and computer network connections and a newly improved Checkout, Control and Monitor Subsystem. The renovation is part of the Launch Processing System Extended Survivability Project that began in 2003. United Space Alliance's Launch Processing System directorate managed the FR 4 project for NASA. Photo credit: NASA/Dimitri Gerondidakis

  8. Quality assurance planning for lunar Mars exploration

    NASA Technical Reports Server (NTRS)

    Myers, Kay

    1991-01-01

    A review is presented of the tools and techniques required to meet the challenge of total quality in the goal of traveling to Mars and returning to the moon. One program used by NASA to ensure the integrity of baselined requirements documents is configuration management (CM). CM is defined as an integrated management process that documents and identifies the functional and physical characteristics of a facility's systems, structures, computer software, and components. It also ensures that changes to these characteristics are properly assessed, developed, approved, implemented, verified, recorded, and incorporated into the facility's documentation. Three principal areas that will realize significant efficiencies and enhanced effectiveness are discussed: change assessment, change avoidance, and requirements management.

  9. Managing geometric information with a data base management system

    NASA Technical Reports Server (NTRS)

    Dube, R. P.

    1984-01-01

    The strategies for managing computer-based geometry are described. The computer model of geometry is the basis for communication, manipulation, and analysis of shape information. The research on integrated programs for aerospace-vehicle design (IPAD) focuses on the use of data base management system (DBMS) technology to manage engineering/manufacturing data. The objective of IPAD is to develop a computer-based engineering complex which automates the storage, management, protection, and retrieval of engineering data. In particular, this facility must manage geometry information as well as associated data. The approach taken on the IPAD project to achieve this objective is discussed. Geometry management in current systems and the approach taken in the early IPAD prototypes are examined.
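
    The core idea, keeping shape data and its associated engineering data under one DBMS so both can be stored, protected, and retrieved together, can be sketched relationally. The schema and values below are a hypothetical illustration only, not the IPAD design.

        import sqlite3

        # Hypothetical schema: geometric entities stored alongside the
        # engineering metadata that must stay associated with them.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE part (
                part_id   INTEGER PRIMARY KEY,
                name      TEXT NOT NULL,
                material  TEXT
            );
            CREATE TABLE geometry (
                geom_id   INTEGER PRIMARY KEY,
                part_id   INTEGER REFERENCES part(part_id),
                kind      TEXT NOT NULL,      -- e.g. 'point', 'line', 'surface'
                data      TEXT NOT NULL       -- coordinates serialized as text
            );
        """)
        conn.execute("INSERT INTO part VALUES (1, 'wing-rib', 'Al 2024')")
        conn.execute("INSERT INTO geometry VALUES (1, 1, 'point', '0.0,1.5,2.0')")

        # Retrieval joins shape data with its engineering context in one query.
        row = conn.execute("""
            SELECT part.name, part.material, geometry.kind, geometry.data
            FROM geometry JOIN part USING (part_id)
        """).fetchone()
        print(row)   # ('wing-rib', 'Al 2024', 'point', '0.0,1.5,2.0')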

  10. Computer-Aided Facilities Management Systems (CAFM).

    ERIC Educational Resources Information Center

    Cyros, Kreon L.

    Computer-aided facilities management (CAFM) refers to a collection of software used with increasing frequency by facilities managers. The six major CAFM components are discussed with respect to their usefulness and popularity in facilities management applications: (1) computer-aided design; (2) computer-aided engineering; (3) decision support…

  11. Microgrids | Energy Systems Integration Facility | NREL

    Science.gov Websites

    Marine Corps Air Station (MCAS) Miramar; Network Simulator-in-the-Loop Testing (OMNeT++ simulates a network and links with real computers and virtual hosts); Power Hardware-in-the-Loop Simulation.

  12. Workflow Management Systems for Molecular Dynamics on Leadership Computers

    NASA Astrophysics Data System (ADS)

    Wells, Jack; Panitkin, Sergey; Oleynik, Danila; Jha, Shantenu

    Molecular Dynamics (MD) simulations play an important role in a range of disciplines from materials science to biophysical systems and account for a large fraction of cycles consumed on computing resources. Increasingly, science problems require the successful execution of "many" MD simulations as opposed to a single MD simulation. There is a need to provide scalable and flexible approaches to the execution of the workload. We present preliminary results on the Titan computer at the Oak Ridge Leadership Computing Facility that demonstrate a general capability to manage workload execution agnostic of a specific MD simulation kernel or execution pattern, and in a manner that integrates disparate grid-based and supercomputing resources. Our results build upon our extensive experience of distributed workload management in the high-energy physics ATLAS project using PanDA (Production and Distributed Analysis System), coupled with recent conceptual advances in our understanding of workload management on heterogeneous resources. We will discuss how we will generalize these initial capabilities towards a more production-level service on DOE leadership resources. This research is sponsored by US DOE/ASCR and used resources of the OLCF computing facility.
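
    The capability described, executing "many" independent MD tasks under one manager that is agnostic of the simulation kernel, can be sketched with a local task pool. PanDA coordinates this across distributed grid and supercomputing resources; the hypothetical stand-in below merely fans tasks out to worker processes.

        from concurrent.futures import ProcessPoolExecutor, as_completed
        import random
        import time

        def run_md_task(task_id, seed):
            """Stand-in for one MD simulation; a real manager would launch
            an external kernel (e.g. via subprocess) instead."""
            random.seed(seed)
            time.sleep(0.01)                 # pretend to integrate trajectories
            return task_id, random.random()  # hypothetical final energy

        def manage_workload(n_tasks, max_workers=4):
            """Dispatch independent tasks and harvest results as they finish;
            the manager does not care what each task actually computes."""
            results = {}
            with ProcessPoolExecutor(max_workers=max_workers) as pool:
                futures = {pool.submit(run_md_task, i, 42 + i): i
                           for i in range(n_tasks)}
                for fut in as_completed(futures):
                    task_id, energy = fut.result()
                    results[task_id] = energy
            return results

        if __name__ == "__main__":
            print(manage_workload(16))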

  13. The Integrated Waste Tracking System - A Flexible Waste Management Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert Stephen

    2001-02-01

    The US Department of Energy (DOE) Idaho National Engineering and Environmental Laboratory (INEEL) has fully embraced a flexible, computer-based tool to help increase waste management efficiency and integrate multiple operational functions from waste generation through waste disposition while reducing cost. The Integrated Waste Tracking System (IWTS) provides comprehensive information management for containerized waste during generation, storage, treatment, transport, and disposal. The IWTS provides all information necessary for facilities to properly manage and demonstrate regulatory compliance. As a platform-independent, client-server and Web-based inventory and compliance system, the IWTS has proven to be a successful tracking, characterization, compliance, and reporting tool that meets the needs of both operations and management while providing a high level of management flexibility.

  14. A Bioinformatics Facility for NASA

    NASA Technical Reports Server (NTRS)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney for mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.

  15. A Medical Decision Support System for the Space Station Health Maintenance Facility

    PubMed Central

    Ostler, David V.; Gardner, Reed M.; Logan, James S.

    1988-01-01

    NASA is developing a Health Maintenance Facility (HMF) to provide the equipment and supplies necessary to deliver medical care in the Space Station. An essential part of the Health Maintenance Facility is a computerized Medical Decision Support System (MDSS) that will enhance the ability of the medical officer (“paramedic” or “physician”) to maintain the crew's health, and to provide emergency medical care. The computer system has four major functions: 1) collect and integrate medical information into an electronic medical record from Space Station medical officers, HMF instrumentation, and exercise equipment; 2) provide an integrated medical record and medical reference information management system; 3) manage inventory for logistical support of supplies and secure pharmaceuticals; 4) supply audio and electronic mail communications between the medical officer and ground-based flight surgeons.

  16. Integration of design and inspection

    NASA Astrophysics Data System (ADS)

    Simmonds, William H.

    1990-08-01

    Developments in advanced computer integrated manufacturing technology, coupled with the emphasis on Total Quality Management, are exposing needs for new techniques to integrate all functions from design through to support of the delivered product. One critical functional area that must be integrated into design is that embracing the measurement, inspection and test activities necessary for validation of the delivered product. This area is being tackled by a collaborative project supported by the UK Government Department of Trade and Industry. The project is aimed at developing techniques for analysing validation needs and for planning validation methods. Within the project an experimental Computer Aided Validation Expert system (CAVE) is being constructed. This operates with a generalised model of the validation process and helps with all design stages: specification of product requirements; analysis of the assurance provided by a proposed design and method of manufacture; development of the inspection and test strategy; and analysis of feedback data. The kernel of the system is a knowledge base containing knowledge of the manufacturing process capabilities and of the available inspection and test facilities. The CAVE system is being integrated into a real life advanced computer integrated manufacturing facility for demonstration and evaluation.

  17. Environmental Management System

    Science.gov Websites

    Los Alamos National Laboratory: Los Alamos Collaboration for Explosives Detection (LACED), SensorNexus, Exascale Computing Project (ECP), User Facilities, Center for Integrated Nanotechnologies (CINT), Los Alamos Neutron…

  18. A facility for training Space Station astronauts

    NASA Technical Reports Server (NTRS)

    Hajare, Ankur R.; Schmidt, James R.

    1992-01-01

    The Space Station Training Facility (SSTF) will be the primary facility for training the Space Station Freedom astronauts and the Space Station Control Center ground support personnel. Conceptually, the SSTF will consist of two parts: a Student Environment and an Author Environment. The Student Environment will contain trainers, instructor stations, computers and other equipment necessary for training. The Author Environment will contain the systems that will be used to manage, develop, integrate, test and verify, operate and maintain the equipment and software in the Student Environment.

  19. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.

    2015-05-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data’ (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. We will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.

  20. Development of a change management system

    NASA Technical Reports Server (NTRS)

    Parks, Cathy Bonifas

    1993-01-01

    The complexity and interdependence of software on a computer system can create a situation where a solution to one problem causes failures in dependent software. In the computer industry, software problems arise and are often solved with 'quick and dirty' solutions. But in implementing these solutions, documentation about the solution or user notification of changes is often overlooked, and new problems are frequently introduced because of insufficient review or testing. These problems increase when numerous heterogeneous systems are involved. Because of this situation, a change management system plays an integral part in the maintenance of any multisystem computing environment. At the NASA Ames Advanced Computational Facility (ACF), the Online Change Management System (OCMS) was designed and developed to manage the changes being applied to its multivendor computing environment. This paper documents the research, design, and modifications that went into the development of this change management system (CMS).
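
    As a rough illustration of the bookkeeping such a change management system enforces, the sketch below models a change request that cannot be applied until it has been reviewed and affected users notified. The fields and names are hypothetical, not the actual OCMS schema.

        from dataclasses import dataclass, field
        from datetime import date
        from typing import List, Optional

        @dataclass
        class ChangeRequest:
            """Hypothetical change record carrying the documentation, review,
            and notification steps the paper identifies as often overlooked."""
            change_id: int
            system: str
            description: str
            affected_systems: List[str] = field(default_factory=list)
            reviewed: bool = False
            users_notified: bool = False
            applied_on: Optional[date] = None

            def ready_to_apply(self) -> bool:
                # Block the 'quick and dirty' path: a change may be applied
                # only after review and after dependent-system users are told.
                return self.reviewed and self.users_notified

        req = ChangeRequest(1, "mail-gateway", "apply vendor security patch",
                            affected_systems=["batch-cluster", "file-server"])
        req.reviewed = True
        print(req.ready_to_apply())   # False: users not yet notified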

  1. A comparison of queueing, cluster and distributed computing systems

    NASA Technical Reports Server (NTRS)

    Kaplan, Joseph A.; Nelson, Michael L.

    1993-01-01

    Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost-effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster. Cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing Systems (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.

  2. Providing security for automated process control systems at hydropower engineering facilities

    NASA Astrophysics Data System (ADS)

    Vasiliev, Y. S.; Zegzhda, P. D.; Zegzhda, D. P.

    2016-12-01

    This article suggests the concept of a cyberphysical system to manage computer security of automated process control systems at hydropower engineering facilities. According to the authors, this system consists of a set of information processing tools and computer-controlled physical devices. Examples of cyber attacks on power engineering facilities are provided, and a strategy for improving the cybersecurity of hydropower engineering systems is suggested. The architecture of the multilevel protection of the automated process control system (APCS) of power engineering facilities is given, including security systems, control systems, access control, encryption, and a secure virtual private network of subsystems for monitoring and analysis of security events. The distinctive aspect of the approach is its consideration of the interrelations and cyber threats arising when SCADA is integrated with the unified enterprise information system.

  3. High-Performance Computing Data Center | Energy Systems Integration Facility | NREL

    Science.gov Websites

    The Energy Systems Integration Facility's High-Performance Computing Data Center is home to Peregrine, the largest high-performance computing system in the world exclusively dedicated to advancing…

  4. Workflow management in large distributed systems

    NASA Astrophysics Data System (ADS)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.
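
    The decision-support pattern described, monitoring data feeding higher-level services that automate decisions, can be illustrated with a toy job-placement service. All names, metrics, and thresholds below are hypothetical stand-ins, not MonALISA's actual API.

        import random
        import time

        def collect_metrics(sites):
            """Pretend each site's monitoring agent reports CPU load and
            free storage; real agents would publish these over the network."""
            return {s: {"cpu_load": random.uniform(0, 1),
                        "free_tb": random.uniform(10, 500),
                        "ts": time.time()} for s in sites}

        def place_job(metrics, min_free_tb=50):
            """Higher-level service: filter sites with enough storage, then
            pick the one with the lowest CPU load."""
            ok = {s: m for s, m in metrics.items() if m["free_tb"] >= min_free_tb}
            return min(ok, key=lambda s: ok[s]["cpu_load"]) if ok else None

        random.seed(7)
        snapshot = collect_metrics(["CERN", "FNAL", "RAL", "KIT"])
        print("dispatch to:", place_job(snapshot))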

  5. Integration of Panda Workload Management System with supercomputers

    NASA Astrophysics Data System (ADS)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
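
    The light-weight MPI wrapper idea can be sketched as follows, assuming mpi4py is available; the payload script name is a hypothetical placeholder, and the real PanDA pilot is considerably more involved. Each MPI rank launches one single-threaded payload, so a single batch allocation drives many independent jobs across a node's cores.

        # Minimal sketch: MPI is used only to fan independent serial jobs
        # out across an allocation, not for inter-process communication.
        from mpi4py import MPI
        import subprocess

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Hypothetical list of pre-staged single-threaded jobs, one per rank.
        jobs = [f"./run_event_gen.sh --seed {1000 + r}"
                for r in range(comm.Get_size())]

        # Each rank runs its own serial payload on its own core.
        result = subprocess.run(jobs[rank], shell=True, capture_output=True)
        exit_codes = comm.gather(result.returncode, root=0)

        if rank == 0:
            print("per-rank exit codes:", exit_codes)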

  6. INFN-Pisa scientific computation environment (GRID, HPC and Interactive Analysis)

    NASA Astrophysics Data System (ADS)

    Arezzini, S.; Carboni, A.; Caruso, G.; Ciampa, A.; Coscetti, S.; Mazzoni, E.; Piras, S.

    2014-06-01

    The INFN-Pisa Tier2 infrastructure is described, optimized not only for GRID CPU and storage access, but also for a more interactive use of the resources, in order to provide good solutions for the final data analysis step. The Data Center, equipped with about 6700 production cores, permits the use of modern analysis techniques realized via advanced statistical tools (like RooFit and RooStat) implemented in multicore systems. In particular, a POSIX file storage access integrated with standard SRM access is provided. The unified storage infrastructure is therefore described, based on GPFS and Xrootd, used both for the SRM data repository and for interactive POSIX access. Such a common infrastructure allows transparent access to the Tier2 data for users performing interactive analysis. The organization of a specialized many-core CPU facility devoted to interactive analysis is also described, along with the login mechanism integrated with the INFN-AAI (National INFN Infrastructure) to extend site access and use to a geographically distributed community. This infrastructure also serves a national computing facility used by the INFN theoretical community, enabling a synergic use of computing and storage resources. Our Center, initially developed for the HEP community, is now growing and also includes fully integrated HPC resources. In recent years a cluster facility (1000 cores, parallel use via an InfiniBand connection) has been installed and managed, and we are now updating this facility to provide resources for all the intermediate-level HPC computing needs of the national INFN theoretical community.

  7. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, K; Jha, S; Klimentov, A

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for the ATLAS experiment since September 2015. We will present our current accomplishments with running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.

  8. Improving NAVFAC's total quality management of construction drawings with CLIPS

    NASA Technical Reports Server (NTRS)

    Antelman, Albert

    1991-01-01

    A diagnostic expert system to improve the quality of Naval Facilities Engineering Command (NAVFAC) construction drawings and specifications is described. C Language Integrated Production System (CLIPS) and computer-aided design layering standards are used in an expert system to check and coordinate construction drawings and specifications to eliminate errors and omissions.

  9. Highly integrated digital engine control system on an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Burcham, F. W., Jr.; Haering, E. A., Jr.

    1984-01-01

    The highly integrated digital electronic control (HIDEC) program will demonstrate and evaluate the improvements in performance and mission effectiveness that result from integrated engine-airframe control systems. This system is being used on the F-15 airplane at the Dryden Flight Research Facility of NASA Ames Research Center. An integrated flightpath management mode and an integrated adaptive engine stall margin mode are being implemented into the system. The adaptive stall margin mode is a highly integrated mode in which the airplane flight conditions, the resulting inlet distortion, and the engine stall margin are continuously computed; the excess stall margin is used to uptrim the engine for more thrust. The integrated flightpath management mode optimizes the flightpath and throttle setting to reach a desired flight condition. The increase in thrust and the improvement in airplane performance are discussed in this paper.

  10. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for the ATLAS experiment since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.

  11. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE PAGES

    Klimentov, A.; Buncic, P.; De, K.; ...

    2015-05-22

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. Finally, we will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.

  12. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klimentov, A.; Buncic, P.; De, K.

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to setup and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. Finally, we will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.

  13. Refurbishment and Automation of Thermal Vacuum Facilities at NASA/GSFC

    NASA Technical Reports Server (NTRS)

    Dunn, Jamie; Gomez, Carlos; Donohue, John; Johnson, Chris; Palmer, John; Sushon, Janet

    1999-01-01

    The thermal vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the eleven facilities, currently ten of the systems are scheduled for refurbishment or replacement as part of a five-year implementation. Expected return on investment includes the reduction in test schedules, improvements in safety of facility operations, and reduction in the personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering, and for the automation of thermal vacuum facilities and tests. Automation of the thermal vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs), the use of Supervisory Control and Data Acquisition (SCADA) systems, and the development of a centralized Test Data Management System. These components allow the computer control and automation of mechanical components such as valves and pumps. The project of refurbishment and automation began in 1996 and has resulted in complete computer control of one facility (Facility 281), and the integration of electronically controlled devices and PLCs in multiple others.

  14. Refurbishment and Automation of Thermal Vacuum Facilities at NASA/GSFC

    NASA Technical Reports Server (NTRS)

    Dunn, Jamie; Gomez, Carlos; Donohue, John; Johnson, Chris; Palmer, John; Sushon, Janet

    1998-01-01

    The thermal vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the eleven facilities, currently ten of the systems are scheduled for refurbishment or replacement as part of a five-year implementation. Expected return on investment includes the reduction in test schedules, improvements in safety of facility operations, and reduction in the personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering, and for the automation of thermal vacuum facilities and tests. Automation of the thermal vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs), the use of Supervisory Control and Data Acquisition (SCADA) systems, and the development of a centralized Test Data Management System. These components allow the computer control and automation of mechanical components such as valves and pumps. The project of refurbishment and automation began in 1996 and has resulted in complete computer control of one facility (Facility 281), and the integration of electronically controlled devices and PLCs in multiple others.

  15. Description of a dual fail operational redundant strapdown inertial measurement unit for integrated avionics systems research

    NASA Technical Reports Server (NTRS)

    Bryant, W. H.; Morrell, F. R.

    1981-01-01

    An experimental redundant strapdown inertial measurement unit (RSDIMU) is developed as a link to satisfy safety and reliability considerations in the integrated avionics concept. The unit includes four two-degree-of-freedom tuned-rotor gyros and four accelerometers in a skewed and separable semioctahedral array. These sensors are coupled to four microprocessors which compensate for sensor errors. These microprocessors are interfaced with two flight computers which process failure detection, isolation, redundancy management, and general flight control/navigation algorithms. Since the RSDIMU is a developmental unit, it is imperative that the flight computers provide special visibility and facility in algorithm modification.

  16. Hanford Site Composite Analysis Technical Approach Description: Integrated Computational Framework.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, K. J.

    2017-09-14

    The U.S. Department of Energy (DOE) in DOE O 435.1 Chg. 1, Radioactive Waste Management, requires the preparation and maintenance of a composite analysis (CA). The primary purpose of the CA is to provide a reasonable expectation that the primary public dose limit is not likely to be exceeded by multiple source terms that may significantly interact with plumes originating at a low-level waste disposal facility. The CA is used to facilitate planning and land use decisions that help assure disposal facility authorization will not result in long-term compliance problems, or to determine management alternatives, corrective actions, or assessment needs if potential problems are identified.

  17. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.

    2014-06-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines according to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and with minimal downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, as well as several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.

  18. A new approach to the design of information systems for foodservice management in health care facilities.

    PubMed

    Matthews, M E; Norback, J P

    1984-06-01

    An organizational framework for integrating foodservice data into an information system for management decision making is presented. The framework involves the application to foodservice of principles developed by the disciplines of managerial economics and accounting, mathematics, computer science, and information systems. The first step is to conceptualize a foodservice system from an input-output perspective, in which inputs are units of resources available to managers and outputs are servings of menu items. Next, methods of full cost accounting, from the management accounting literature, are suggested as a mechanism for developing and assigning costs of using resources within a foodservice operation. Then matrix multiplication is used to illustrate types of information that matrix data structures could make available for management planning and control when combined with a conversational mode of computer programming.
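
    The matrix illustration the authors describe can be made concrete: with a recipe matrix relating menu items to ingredient quantities, one multiplication yields the food cost per serving and another yields total ingredient demand for a production forecast. The numbers below are hypothetical.

        import numpy as np

        # Hypothetical recipe matrix R: rows are menu items, columns are
        # ingredients; R[i, j] = units of ingredient j per serving of item i.
        R = np.array([[0.20, 0.05, 0.00],    # soup
                      [0.10, 0.00, 0.15],    # casserole
                      [0.00, 0.10, 0.05]])   # salad

        cost_per_unit = np.array([1.50, 4.00, 2.25])   # $ per ingredient unit
        servings_planned = np.array([120, 80, 60])     # production forecast

        cost_per_serving = R @ cost_per_unit        # $ per serving of each item
        ingredient_demand = servings_planned @ R    # total units of each ingredient

        print("cost per serving:", cost_per_serving)
        print("ingredient demand:", ingredient_demand)
        print("total food cost: $%.2f" % (servings_planned @ R @ cost_per_unit))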

  19. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019

  20. Achieving production-level use of HEP software at the Argonne Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Uram, T. D.; Childers, J. T.; LeCompte, T. J.; Papka, M. E.; Benjamin, D.

    2015-12-01

    HEP's demand for computing resources has grown beyond the capacity of the Grid, and these demands will accelerate with the higher energy and luminosity planned for Run II. Mira, the ten-petaFLOPS supercomputer at the Argonne Leadership Computing Facility, is a potentially significant compute resource for HEP research. Through an award of fifty million hours on Mira, we have delivered millions of events to LHC experiments by establishing the means of marshaling jobs through serial stages on local clusters and parallel stages on Mira. We are running several HEP applications, including Alpgen, Pythia, Sherpa, and Geant4. Event generators, such as Sherpa, typically have a split workload: a small-scale integration phase, and a second, more scalable, event-generation phase. To accommodate this workload on Mira we have developed two Python-based Django applications, Balsam and ARGO. Balsam is a generalized scheduler interface which uses a plugin system for interacting with scheduler software such as HTCondor, Cobalt, and TORQUE. ARGO is a workflow manager that submits jobs to instances of Balsam. Through these mechanisms, the serial and parallel tasks within jobs are executed on the appropriate resources. This approach and its integration with the PanDA production system will be discussed.
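
    The plugin-based scheduler interface described can be sketched generically; the class, method, and backend names below are hypothetical stand-ins rather than Balsam's actual API, with the backends stubbed instead of shelling out to qsub or condor_submit.

        from abc import ABC, abstractmethod

        class SchedulerPlugin(ABC):
            """One plugin per batch system; the workflow layer stays generic."""
            @abstractmethod
            def submit(self, script_path: str) -> str: ...
            @abstractmethod
            def status(self, job_id: str) -> str: ...

        class CobaltPlugin(SchedulerPlugin):
            def submit(self, script_path):
                # A real plugin would shell out to the batch system; stubbed here.
                return "cobalt-001"
            def status(self, job_id):
                return "RUNNING"

        class CondorPlugin(SchedulerPlugin):
            def submit(self, script_path):
                return "condor-001"
            def status(self, job_id):
                return "IDLE"

        PLUGINS = {"cobalt": CobaltPlugin, "condor": CondorPlugin}

        def dispatch(backend: str, script: str) -> str:
            """Pick a plugin by name, so serial stages can go to a local
            cluster and parallel stages to the leadership machine."""
            plugin = PLUGINS[backend]()
            job_id = plugin.submit(script)
            return f"{job_id}: {plugin.status(job_id)}"

        print(dispatch("cobalt", "sherpa_integrate.sh"))
        print(dispatch("condor", "sherpa_evtgen.sh"))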

  1. A user view of office automation or the integrated workstation

    NASA Technical Reports Server (NTRS)

    Schmerling, E. R.

    1984-01-01

    Central data bases are useful only if they are kept up to date and easily accessible in an interactive (query) mode rather than in monthly reports that may be out of date and must be searched by hand. The concepts of automatic data capture, data base management and query languages require good communications and readily available work stations to be useful. The minimal necessary work station is a personal computer, which can be an important office tool if connected to other office machines and properly integrated into an office system. It has a great deal of flexibility and can often be tailored to suit the tastes, work habits and requirements of the user. Unlike dumb terminals, there is less tendency to saturate a central computer, since its free-standing capabilities are available after downloading a selection of data. The PC also permits the sharing of many other facilities, like larger computing power, sophisticated graphics programs, laser printers and communications. It can provide rapid access to common data bases able to provide more up-to-date information than printed reports. Portable computers can access the same familiar office facilities from anywhere in the world where a telephone connection can be made.

  2. NIF ICCS network design and loading analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tietbohl, G; Bryant, R

    The National Ignition Facility (NIF) is housed within a large facility about the size of two football fields. The Integrated Computer Control System (ICCS) is distributed throughout this facility and requires the integration of about 40,000 control points and over 500 video sources. This integration is provided by approximately 700 control computers distributed throughout the NIF facility and a network that provides the communication infrastructure. A main control room houses a set of seven computer consoles providing operator access and control of the various distributed front-end processors (FEPs). There are also remote workstations distributed within the facility that provide operator console functions while personnel are testing and troubleshooting throughout the facility. The operator workstations communicate with the FEPs, which implement the localized control and monitoring functions. There are different types of FEPs for the various subsystems being controlled. This report describes the design of the NIF ICCS network and how it meets the traffic loads that are expected and the requirements of the Sub-System Design Requirements (SSDRs). This document supersedes the earlier reports entitled Analysis of the National Ignition Facility Network, dated November 6, 1996, and The National Ignition Facility Digital Video and Control Network, dated July 9, 1996. For an overview of the ICCS, refer to the document NIF Integrated Computer Controls System Description (NIF-3738).
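
    As a rough illustration of the kind of loading analysis such a report performs, aggregate traffic can be estimated by summing per-source rates. The point and source counts below come from the abstract; the per-point and per-source rates are assumed values for the sketch, not actual NIF ICCS requirements.

        # Back-of-envelope network load estimate; the data rates are assumed
        # illustrative values, not actual NIF ICCS figures.
        N_CONTROL_POINTS = 40_000      # from the abstract
        N_VIDEO_SOURCES = 500          # from the abstract
        CONTROL_RATE_BPS = 1_000       # assumed per-point status traffic
        VIDEO_RATE_BPS = 5_000_000     # assumed per-source compressed video

        total_bps = (N_CONTROL_POINTS * CONTROL_RATE_BPS
                     + N_VIDEO_SOURCES * VIDEO_RATE_BPS)
        print(f"Estimated aggregate load: {total_bps / 1e9:.2f} Gbps")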

  3. Performance evaluation of the NASA/KSC CAD/CAE and office automation LAN's

    NASA Technical Reports Server (NTRS)

    Zobrist, George W.

    1994-01-01

    This study's objective is the performance evaluation of the existing CAD/CAE (Computer Aided Design/Computer Aided Engineering) network at NASA/KSC. This evaluation also includes a similar study of the Office Automation network, since it is being planned to integrate this network into the CAD/CAE network. The Microsoft mail facility which is presently on the CAD/CAE network was monitored to determine its present usage. This performance evaluation of the various networks will aid the NASA/KSC network managers in planning for the integration of future workload requirements into the CAD/CAE network and determining the effectiveness of the planned FDDI (Fiber Distributed Data Interface) migration.

  4. Systems Engineering and Integration (SE and I)

    NASA Technical Reports Server (NTRS)

    Chevers, ED; Haley, Sam

    1990-01-01

    The issue of technology advancement and future space transportation vehicles is addressed. The challenge is to develop systems which can be evolved and improved in small incremental steps, where each increment reduces present cost, improves reliability, or does neither but sets the stage for a second incremental upgrade that does. Future requirements are interface standards for commercial off-the-shelf products to aid in the development of integrated facilities; an enhanced automated code generation system tightly coupled to specification and design documentation; modeling tools that support data flow analysis; and shared project data bases consisting of technical characteristics, cost information, measurement parameters, and reusable software programs. Topics addressed include: advanced avionics development strategy; risk analysis and management; tool quality management; low-cost avionics; cost estimation and benefits; computer-aided software engineering; computer systems and software safety; system testability; advanced avionics laboratories; and rapid prototyping. This presentation is represented by viewgraphs only.

  5. Managing a tier-2 computer centre with a private cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-06-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs such as EC2 and OCCI.
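
    Exposing an EC2-compatible API means a worker node can be provisioned with standard EC2 tooling. A minimal sketch using boto3 against such an endpoint follows; the endpoint URL, credentials, and image id are hypothetical placeholders, not the INFN-Torino configuration.

        # Sketch: launching a virtual worker node through an EC2-compatible
        # API such as the one OpenNebula can expose. Endpoint, credentials,
        # and image id are hypothetical.
        import boto3

        ec2 = boto3.client(
            "ec2",
            endpoint_url="https://cloud.example.infn.it:4567",  # hypothetical
            aws_access_key_id="USER",
            aws_secret_access_key="SECRET",
            region_name="site-local",
        )
        resp = ec2.run_instances(ImageId="ami-worker-node",  # hypothetical image
                                 InstanceType="m1.large",
                                 MinCount=1, MaxCount=1)
        print(resp["Instances"][0]["InstanceId"])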

  6. INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, K; Jha, S; Maeno, T

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
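
    The light-weight MPI wrapper idea can be sketched as follows: each MPI rank runs one instance of a single-threaded executable, so many serial event-generation jobs fill a multi-core node in parallel. This is an illustration of the pattern, not the actual PanDA pilot code; the executable name and its arguments are hypothetical.

        # mpi4py sketch of a light-weight wrapper that fans a single-threaded
        # workload out across MPI ranks; the executable is hypothetical.
        import subprocess
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Each rank generates its own slice of events with a distinct seed
        # and output file so runs do not collide.
        cmd = ["./generate_events", f"--seed={1000 + rank}",
               f"--output=events_{rank:05d}.dat"]
        ret = subprocess.run(cmd).returncode

        # Gather return codes on rank 0 to report overall success.
        codes = comm.gather(ret, root=0)
        if rank == 0:
            print("all ranks ok" if all(c == 0 for c in codes)
                  else f"failures: {codes}")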

  7. Developing Mobile BIM/2D Barcode-Based Automated Facility Management System

    PubMed Central

    Chen, Yen-Pei

    2014-01-01

    Facility management (FM) has become an important topic in research on the operation and maintenance phase. Managing the work of FM effectively is extremely difficult owing to the variety of environments. One of the difficulties is the performance of two-dimensional (2D) graphics when depicting facilities. Building information modeling (BIM) uses precise geometry and relevant data to support the facilities depicted in three-dimensional (3D) object-oriented computer-aided design (CAD). This paper proposes a new and practical methodology with application to FM that uses an integrated 2D barcode and the BIM approach. Using 2D barcode and BIM technologies, this study proposes a mobile automated BIM-based facility management (BIMFM) system for FM staff in the operation and maintenance phase. The mobile automated BIMFM system is then applied in a selected case study of a commercial building project in Taiwan to verify the proposed methodology and demonstrate its effectiveness in FM practice. The combined results demonstrate that a BIMFM-like system can be an effective mobile automated FM tool. The advantage of the mobile automated BIMFM system lies not only in improving FM work efficiency for the FM staff but also in facilitating FM updates and transfers in the BIM environment. PMID:25250373

  8. Developing mobile BIM/2D barcode-based automated facility management system.

    PubMed

    Lin, Yu-Cheng; Su, Yu-Chih; Chen, Yen-Pei

    2014-01-01

    Facility management (FM) has become an important topic in research on the operation and maintenance phase. Managing the work of FM effectively is extremely difficult owing to the variety of environments. One of the difficulties is the performance of two-dimensional (2D) graphics when depicting facilities. Building information modeling (BIM) uses precise geometry and relevant data to support the facilities depicted in three-dimensional (3D) object-oriented computer-aided design (CAD). This paper proposes a new and practical methodology with application to FM that uses an integrated 2D barcode and the BIM approach. Using 2D barcode and BIM technologies, this study proposes a mobile automated BIM-based facility management (BIMFM) system for FM staff in the operation and maintenance phase. The mobile automated BIMFM system is then applied in a selected case study of a commercial building project in Taiwan to verify the proposed methodology and demonstrate its effectiveness in FM practice. The combined results demonstrate that a BIMFM-like system can be an effective mobile automated FM tool. The advantage of the mobile automated BIMFM system lies not only in improving FM work efficiency for the FM staff but also in facilitating FM updates and transfers in the BIM environment.
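
    The core linking idea, encoding a BIM element identifier in a 2D barcode so that FM staff can scan a facility tag and pull up the matching BIM record, can be sketched with the qrcode Python library. The GUID and lookup URL below are hypothetical examples, not artifacts of the BIMFM system itself.

        # Sketch: encode a BIM element identifier in a 2D barcode for a
        # facility tag. The GUID and URL are hypothetical.
        import qrcode

        element_guid = "2N1qiWXrr4hQe6CJkIEoXn"   # hypothetical IFC-style GUID
        payload = f"https://bimfm.example.tw/element/{element_guid}"

        img = qrcode.make(payload)                 # returns a PIL image
        img.save("facility_tag.png")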

  9. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean; Potok, Thomas E.; Jones, Todd

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long-term (10- to 20+-year) cybersecurity fundamental basic research and development challenges, strategies and roadmaps facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher-level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the three topics and a representative of each of the four major DOE Office of Science Advanced Scientific Computing Research Facilities: the Argonne Leadership Computing Facility (ALCF), the Energy Sciences Network (ESnet), the National Energy Research Scientific Computing Center (NERSC), and the Oak Ridge Leadership Computing Facility (OLCF). The rest of the workshop consisted of topical breakout discussions and focused writing periods that produced much of this report.

  10. Internet Protocol Display Sharing Solution for Mission Control Center Video System

    NASA Technical Reports Server (NTRS)

    Brown, Michael A.

    2009-01-01

    With the advent of broadcast television as a constant source of information throughout the NASA manned space flight Mission Control Center (MCC) at the Johnson Space Center (JSC), the current Video Transport System (VTS) visually enhances real-time applications as a broadcast channel that decision-making flight controllers have come to rely on, but it can be difficult to maintain and costly. The Operations Technology Facility (OTF) of the Mission Operations Facility Division (MOFD) has been tasked to provide insight into new, innovative technological solutions for the MCC environment, focusing on alternative architectures for a VTS. New technology will be provided to enable sharing of all imagery from one specific computer display, better known as Display Sharing (DS), to other computer displays and display systems such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and other offsite centers using IP networks. It has been stated that Internet Protocol (IP) applications are easily readied to substitute for the current visual architecture, but quality and speed may need to be forfeited to reduce cost and ease maintainability. Although the IP infrastructure can support many technologies, the simple task of sharing one's computer display can be rather clumsy and difficult to configure and manage across the many operators and products. The DS process shall invest in collectively automating the sharing of images while focusing on such characteristics as managing bandwidth, encrypting security measures, synchronizing disconnections from loss of signal / loss of acquisition, and performance latency, and shall provide functions such as scalability, multi-sharing, ease of initial integration / sustained configuration, integration with video adjustment packages, collaborative tools, and host / recipient controllability, with the paramount priority being an enterprise solution that provides ownership of the whole process while maintaining the integrity of the latest display devices. This study will provide insights into the many possibilities that can be filtered down to a harmoniously responsive product for use in today's MCC environment.

  11. Facilities | Integrated Energy Solutions | NREL

    Science.gov Websites

    High-Performance Computing Data Center: high-performance computing facilities at NREL provide the high-speed computing and strategies needed to optimize our entire energy system.

  12. Trends in Facility Management Technology: The Emergence of the Internet, GIS, and Facility Assessment Decision Support.

    ERIC Educational Resources Information Center

    Teicholz, Eric

    1997-01-01

    Reports research on trends in computer-aided facilities management using the Internet and geographic information system (GIS) technology for space utilization research. Proposes that facility assessment software holds promise for supporting facility management decision making, and outlines four areas for its use: inventory; evaluation; reporting;…

  13. PIMS sequencing extension: a laboratory information management system for DNA sequencing facilities.

    PubMed

    Troshin, Peter V; Postis, Vincent Lg; Ashworth, Denise; Baldwin, Stephen A; McPherson, Michael J; Barton, Geoffrey J

    2011-03-07

    Facilities that provide a service for DNA sequencing typically support large numbers of users and experiment types. The cost of services is often reduced by the use of liquid handling robots but the efficiency of such facilities is hampered because the software for such robots does not usually integrate well with the systems that run the sequencing machines. Accordingly, there is a need for software systems capable of integrating different robotic systems and managing sample information for DNA sequencing services. In this paper, we describe an extension to the Protein Information Management System (PIMS) that is designed for DNA sequencing facilities. The new version of PIMS has a user-friendly web interface and integrates all aspects of the sequencing process, including sample submission, handling and tracking, together with capture and management of the data. The PIMS sequencing extension has been in production since July 2009 at the University of Leeds DNA Sequencing Facility. It has completely replaced manual data handling and simplified the tasks of data management and user communication. Samples from 45 groups have been processed with an average throughput of 10000 samples per month. The current version of the PIMS sequencing extension works with Applied Biosystems 3130XL 96-well plate sequencer and MWG 4204 or Aviso Theonyx liquid handling robots, but is readily adaptable for use with other combinations of robots. PIMS has been extended to provide a user-friendly and integrated data management solution for DNA sequencing facilities that is accessed through a normal web browser and allows simultaneous access by multiple users as well as facility managers. The system integrates sequencing and liquid handling robots, manages the data flow, and provides remote access to the sequencing results. The software is freely available, for academic users, from http://www.pims-lims.org/.
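
    The sample life cycle that such a LIMS tracks can be sketched as a simple state machine. This is an illustration of the idea only, not the actual PIMS data model; the state names and identifiers are hypothetical.

        # Minimal sketch of sample tracking in a sequencing LIMS;
        # not the actual PIMS schema.
        from dataclasses import dataclass, field
        from datetime import datetime

        STATES = ["submitted", "on_robot", "sequencing", "results_available"]

        @dataclass
        class Sample:
            sample_id: str
            group: str
            status: str = "submitted"
            history: list = field(default_factory=list)

            def advance(self):
                # Record each transition with a timestamp for auditing.
                i = STATES.index(self.status)
                if i + 1 < len(STATES):
                    nxt = STATES[i + 1]
                    self.history.append((self.status, nxt, datetime.now()))
                    self.status = nxt

        s = Sample("S-000123", "leeds-structural-biology")
        s.advance()   # submitted -> on_robot
        print(s.status, len(s.history))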

  14. Integrity-Based Budgeting

    ERIC Educational Resources Information Center

    Kaleba, Frank

    2008-01-01

    The central problem for the facility manager of large portfolios is not the accuracy of data, but rather data integrity. Data integrity means that it's (1) acceptable to the users; (2) based upon an objective source; (3) reproducible; and (4) internally consistent. Manns and Katsinas, in their January/February 2006 Facilities Manager article…

  15. Configuration and Management of a Cluster Computing Facility in Undergraduate Student Computer Laboratories

    ERIC Educational Resources Information Center

    Cornforth, David; Atkinson, John; Spennemann, Dirk H. R.

    2006-01-01

    Purpose: Many researchers require access to computer facilities beyond those offered by desktop workstations. Traditionally, these are offered either through partnerships, to share the cost of supercomputing facilities, or through purpose-built cluster facilities. However, funds are not always available to satisfy either of these options, and…

  16. The ICCB Computer Based Facilities Inventory & Utilization Management Information Subsystem.

    ERIC Educational Resources Information Center

    Lach, Ivan J.

    The Illinois Community College Board (ICCB) Facilities Inventory and Utilization subsystem, a part of the ICCB management information system, was designed to provide decision makers with needed information to better manage the facility resources of Illinois community colleges. This subsystem, dependent upon facilities inventory data and course…

  17. KSC-06pd1203

    NASA Image and Video Library

    2006-06-23

    KENNEDY SPACE CENTER, FLA. - NASA Test Director Ted Mosteller (center) briefs the media about Firing Room 4 (FR4), which has been undergoing renovations for two years. FR4 is now designated the primary firing room for all remaining shuttle launches, and will also be used daily to manage operations in the Orbiter Processing Facilities and for integrated processing for the shuttle. The firing room now includes sound-suppressing walls and floors, new humidity control, fire-suppression systems and consoles, support tables with computer stations, communication systems and laptop computer ports. FR 4 also has power and computer network connections and a newly improved Checkout, Control and Monitor Subsystem. The renovation is part of the Launch Processing System Extended Survivability Project that began in 2003. United Space Alliance's Launch Processing System directorate managed the FR 4 project for NASA. Photo credit: NASA/Dimitri Gerondidakis

  18. KSC-06pd1202

    NASA Image and Video Library

    2006-06-23

    KENNEDY SPACE CENTER, FLA. - NASA Test Director Ted Mosteller (right) briefs the media about Firing Room 4 (FR4), which has been undergoing renovations for two years. FR4 is now designated the primary firing room for all remaining shuttle launches, and will also be used daily to manage operations in the Orbiter Processing Facilities and for integrated processing for the shuttle. The firing room now includes sound-suppressing walls and floors, new humidity control, fire-suppression systems and consoles, support tables with computer stations, communication systems and laptop computer ports. FR 4 also has power and computer network connections and a newly improved Checkout, Control and Monitor Subsystem. The renovation is part of the Launch Processing System Extended Survivability Project that began in 2003. United Space Alliance's Launch Processing System directorate managed the FR 4 project for NASA. Photo credit: NASA/Dimitri Gerondidakis

  19. KSC-06pd1201

    NASA Image and Video Library

    2006-06-23

    KENNEDY SPACE CENTER, FLA. - Ted Mosteller (right), NASA test director, briefs the media about Firing Room 4 (FR4), which has been undergoing renovations for two years. FR4 is now designated the primary firing room for all remaining shuttle launches, and will also be used daily to manage operations in the Orbiter Processing Facilities and for integrated processing for the shuttle. The firing room now includes sound-suppressing walls and floors, new humidity control, fire-suppression systems and consoles, support tables with computer stations, communication systems and laptop computer ports. FR 4 also has power and computer network connections and a newly improved Checkout, Control and Monitor Subsystem. The renovation is part of the Launch Processing System Extended Survivability Project that began in 2003. United Space Alliance's Launch Processing System directorate managed the FR 4 project for NASA. Photo credit: NASA/Dimitri Gerondidakis

  20. EPA Facility Registry Service (FRS): TRI

    EPA Pesticide Factsheets

    This web feature service contains location and facility identification information from EPA's Facility Registry Service (FRS) for the subset of facilities that link to the Toxics Release Inventory (TRI) System. TRI is a publicly available EPA database reported annually by certain covered industry groups, as well as federal facilities. It contains information about more than 650 toxic chemicals that are being used, manufactured, treated, transported, or released into the environment, and includes information about waste management and pollution prevention activities. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using rigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. This data set contains the subset of FRS integrated facilities that link to TRI facilities once the TRI data has been integrated into the FRS database. Additional information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs.
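
    The integration step implied here is essentially record linkage: matching a program system's facility record to a master FRS record. A minimal sketch of one common heuristic, normalized name plus geographic proximity, follows; the fields, threshold, and sample records are assumptions for illustration, not FRS's actual matching procedure.

        # Sketch of facility record linkage by normalized name plus
        # geographic proximity; not the actual FRS algorithm.
        import math

        def normalize(name: str) -> str:
            return " ".join(name.upper().replace(",", " ").replace(".", " ").split())

        def distance_km(a, b):
            # Equirectangular approximation, adequate for short distances.
            dx = math.radians(b[1] - a[1]) * math.cos(math.radians((a[0] + b[0]) / 2))
            dy = math.radians(b[0] - a[0])
            return 6371 * math.hypot(dx, dy)

        def is_same_facility(rec, master, max_km=0.5):
            return (normalize(rec["name"]) == normalize(master["name"])
                    and distance_km(rec["latlon"], master["latlon"]) <= max_km)

        tri = {"name": "Acme Chemical Co.", "latlon": (29.76, -95.36)}
        frs = {"name": "ACME CHEMICAL CO",  "latlon": (29.761, -95.361)}
        print(is_same_facility(tri, frs))   # True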

  1. Refurbishment and Automation of the Thermal/Vacuum Facilities at the Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Donohue, John T.; Johnson, Chris; Ogden, Rick; Sushon, Janet

    1998-01-01

    The thermal/vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the 11 facilities, currently 10 of the systems are scheduled for refurbishment and/or replacement as part of a 5-year implementation. Expected return on investment includes the reduction in test schedules, improvements in the safety of facility operations, reduction in the complexity of a test and the reduction in personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering and for the automation of thermal/vacuum facilities and thermal/vacuum tests. Automation of the thermal/vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs) and the use of Supervisory Control and Data Acquisition (SCADA) systems. These components allow the computer control and automation of mechanical components such as valves and pumps. In some cases, the chamber and chamber shroud require complete replacement while others require only mechanical component retrofit or replacement. The project of refurbishment and automation began in 1996 and has resulted in the computer control of one Facility (Facility #225) and the integration of electronically controlled devices and PLCs within several other facilities. Facility 225 has been successfully controlled by PLC and SCADA for over one year. Insignificant anomalies have occurred and were resolved with minimal impact to testing and operations. The amount of work remaining to be performed will occur over the next four to five years. Fiscal year 1998 includes the complete refurbishment of one facility, computer control of the thermal systems in two facilities, implementation of SCADA and PLC systems to support multiple facilities and the implementation of a Database server to allow efficient test management and data analysis.
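
    The PLC/SCADA pattern described above boils down to software polling controller registers over an industrial protocol. A minimal sketch using the pymodbus library (3.x import path) follows; the host name, register address, and scaling are hypothetical, not the actual Facility 225 point map.

        # Sketch: a SCADA-style poll of a PLC temperature register over
        # Modbus/TCP with pymodbus. Host, address, and scaling are assumed.
        from pymodbus.client import ModbusTcpClient

        client = ModbusTcpClient("plc-chamber-225.example.gov")  # hypothetical
        client.connect()

        rr = client.read_holding_registers(100, count=1)  # hypothetical address
        if not rr.isError():
            temp_c = rr.registers[0] / 10.0               # assumed 0.1 C scaling
            print(f"shroud temperature: {temp_c:.1f} C")
        client.close()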

  2. Optimal control of greenhouse gas emissions and system cost for integrated municipal solid waste management with considering a hierarchical structure.

    PubMed

    Li, Jing; He, Li; Fan, Xing; Chen, Yizhong; Lu, Hongwei

    2017-08-01

    This study presents a synergic optimization of control for greenhouse gas (GHG) emissions and system cost in integrated municipal solid waste (MSW) management on the basis of bi-level programming. The bi-level programming is formulated by integrating minimization of GHG emissions at the leader level and of system cost at the follower level into a general MSW framework. Different from traditional single- or multi-objective approaches, the proposed bi-level programming is capable not only of addressing the tradeoffs but also of dealing with the leader-follower relationship between different decision makers, who have dissimilar perspectives and interests. Placing GHG emission control at the leader level emphasizes the significant environmental concern in MSW management. A bi-level decision-making process based on satisfactory degree is then suitable for solving highly nonlinear problems with computational effectiveness. The capabilities and effectiveness of the proposed bi-level programming are illustrated by an application to a MSW management problem in Canada. Results show that the obtained optimal management strategy can bring considerable revenues, approximately from 76 to 97 million dollars. When control of GHG emissions is considered, priority would be given to the development of the recycling facility throughout the whole period, especially in the latter periods. In terms of capacity, the existing landfill is sufficient for the next 30 years without development of new landfills, while expansion of the composting and recycling facilities should receive more attention.
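
    In schematic form, the leader-follower structure described above follows the generic bi-level template below (a standard textbook form consistent with the abstract, not the paper's exact notation):

        \begin{align*}
        \min_{x}\quad & E\bigl(x,\, y^{*}(x)\bigr)
            && \text{(leader: GHG emissions)}\\
        \text{s.t.}\quad & y^{*}(x) \in \arg\min_{y}\; C(x, y)
            && \text{(follower: system cost)}\\
        & g(x, y) \le 0
            && \text{(waste-flow and capacity constraints)}
        \end{align*}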

  3. Facility Management as Part of an Integrated Design of Civil Engineering Structures

    NASA Astrophysics Data System (ADS)

    Hyben, Ivan; Podmanický, Peter

    2014-11-01

    The present article deals with facility management as a still relatively young component of integrated planning and design of buildings. Attention is focused on the proposal stage, which can greatly affect the amount of future operating costs. Operational efficiency was divided into individual components, and satisfaction with the solutions of buildings already constructed was assessed by the workers dedicated to facility management in these organizations. The results were then assessed and evaluated through regression analysis. The aim of this paper is to determine to what extent updating of the project documentation of new buildings is desired from the perspective of facility management.

  4. EPA Facility Registry Service (FRS): OIL

    EPA Pesticide Factsheets

    This dataset contains location and facility identification information from EPA's Facility Registry Service (FRS) for the subset of facilities that link to the Oil database. The Oil database contains information on Spill Prevention, Control, and Countermeasure (SPCC) and Facility Response Plan (FRP) subject facilities to prevent and respond to oil spills. FRP facilities are referred to as substantial harm facilities due to the quantities of oil stored and facility characteristics. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using rigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. This data set contains the subset of FRS integrated facilities that link to Oil facilities once the Oil data has been integrated into the FRS database. Additional information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs.

  5. The Legnaro-Padova distributed Tier-2: challenges and results

    NASA Astrophysics Data System (ADS)

    Badoer, Simone; Biasotto, Massimo; Costa, Fulvia; Crescente, Alberto; Fantinel, Sergio; Ferrari, Roberto; Gulmini, Michele; Maron, Gaetano; Michelotto, Michele; Sgaravatto, Massimo; Toniolo, Nicola

    2014-06-01

    The Legnaro-Padova Tier-2 is a computing facility serving the ALICE and CMS LHC experiments. It also supports other High Energy Physics experiments and other virtual organizations of different disciplines, which can opportunistically harness idle resources if available. The unique characteristic of this Tier-2 is its topology: the computational resources are spread over two different sites, about 15 km apart: the INFN Legnaro National Laboratories and the INFN Padova unit, connected through a 10 Gbps network link (soon to be upgraded to 20 Gbps). Nevertheless these resources are seamlessly integrated and are exposed as a single computing facility. Despite this intrinsic complexity, the Legnaro-Padova Tier-2 ranks among the best Grid sites for reliability and availability. The Tier-2 comprises about 190 worker nodes, providing about 26000 HS06 in total. These computing nodes are managed by the LSF local resource management system and are accessible using a Grid-based interface implemented through multiple CREAM CE front-ends. dCache, xrootd and Lustre are the storage systems in use at the Tier-2: about 1.5 PB of disk space is available to users in total, through multiple access protocols. A 10 Gbps network link, planned to be doubled in the coming months, connects the Tier-2 to the WAN. This link is used for the LHC Open Network Environment (LHCONE) and for other general-purpose traffic. In this paper we discuss the experiences at the Legnaro-Padova Tier-2: the problems that had to be addressed, the lessons learned, and the implementation choices. We also present the tools used for daily management operations. These include DOCET, a Java-based web tool designed, implemented and maintained at the Legnaro-Padova Tier-2 and deployed also at other sites, such as the Italian LHC T1. DOCET provides a uniform interface to manage all the information about the physical resources of a computing center. It is also used as a documentation repository available to the Tier-2 operations team. Finally, we discuss the foreseen developments of the existing infrastructure, in particular the evolution from a Grid-based resource towards a Cloud-based computing facility.

  6. Review of integrated digital systems: evolution and adoption

    NASA Astrophysics Data System (ADS)

    Fritz, Lawrence W.

    The factors that are influencing the evolution of photogrammetric and remote sensing technology to transition into fully integrated digital systems are reviewed. These factors include societal pressures for new, more timely digital products from the Spatial Information Sciences and the adoption of rapid technological advancements in digital processing hardware and software. Current major developments in leading government mapping agencies of the USA, such as the Digital Production System (DPS) modernization programme at the Defense Mapping Agency, and the Automated Nautical Charting System II (ANCS-II) programme and Integrated Digital Photogrammetric Facility (IDPF) at NOAA/National Ocean Service, illustrate the significant benefits to be realized. These programmes are examples of different levels of integrated systems that have been designed to produce digital products. They provide insights into the management complexities to be considered for very large integrated digital systems. In recognition of computer industry trends, a knowledge-based architecture for managing the complexity of the very large spatial information systems of the future is proposed.

  7. PIMS sequencing extension: a laboratory information management system for DNA sequencing facilities

    PubMed Central

    2011-01-01

    Background Facilities that provide a service for DNA sequencing typically support large numbers of users and experiment types. The cost of services is often reduced by the use of liquid handling robots but the efficiency of such facilities is hampered because the software for such robots does not usually integrate well with the systems that run the sequencing machines. Accordingly, there is a need for software systems capable of integrating different robotic systems and managing sample information for DNA sequencing services. In this paper, we describe an extension to the Protein Information Management System (PIMS) that is designed for DNA sequencing facilities. The new version of PIMS has a user-friendly web interface and integrates all aspects of the sequencing process, including sample submission, handling and tracking, together with capture and management of the data. Results The PIMS sequencing extension has been in production since July 2009 at the University of Leeds DNA Sequencing Facility. It has completely replaced manual data handling and simplified the tasks of data management and user communication. Samples from 45 groups have been processed with an average throughput of 10000 samples per month. The current version of the PIMS sequencing extension works with Applied Biosystems 3130XL 96-well plate sequencer and MWG 4204 or Aviso Theonyx liquid handling robots, but is readily adaptable for use with other combinations of robots. Conclusions PIMS has been extended to provide a user-friendly and integrated data management solution for DNA sequencing facilities that is accessed through a normal web browser and allows simultaneous access by multiple users as well as facility managers. The system integrates sequencing and liquid handling robots, manages the data flow, and provides remote access to the sequencing results. The software is freely available, for academic users, from http://www.pims-lims.org/. PMID:21385349

  8. Integrated computer-aided design using minicomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.

    1980-01-01

    Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM), a highly interactive software system, has been implemented on minicomputers at the NASA Langley Research Center. CAD/CAM software integrates many formerly fragmented programs and procedures into one cohesive system; it also includes finite element modeling and analysis, and has been interfaced via a computer network to a relational data base management system and offline plotting devices on mainframe computers. The CAD/CAM software system requires interactive graphics terminals operating at a minimum of 4800 bits/sec transfer rate to a computer. The system is portable and introduces 'interactive graphics', which permits the creation and modification of models interactively. The CAD/CAM system has already produced designs for a large area space platform, a national transonic facility fan blade, and a laminar flow control wind tunnel model. Besides the design/drafting and finite element analysis capability, CAD/CAM provides options to produce automatic program tooling code to drive a numerically controlled (N/C) machine. Reductions in time for design, engineering, drawing, finite element modeling, and N/C machining will benefit productivity through reduced costs, fewer errors, and a wider range of configurations.

  9. Development and validation of the crew-station system-integration research facility

    NASA Technical Reports Server (NTRS)

    Nedell, B.; Hardy, G.; Lichtenstein, T.; Leong, G.; Thompson, D.

    1986-01-01

    The various issues associated with the use of integrated flight management systems in aircraft were discussed. To address these issues a fixed base integrated flight research (IFR) simulation of a helicopter was developed to support experiments that contribute to the understanding of design criteria for rotorcraft cockpits incorporating advanced integrated flight management systems. A validation experiment was conducted that demonstrates the main features of the facility and the capability to conduct crew/system integration research.

  10. About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture

    NASA Astrophysics Data System (ADS)

    Grauer, Manfred; Barth, Thomas

    2004-06-01

    Permanently increasing complexity of products and their manufacturing processes, combined with a shorter "time-to-market", leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information & Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can comprise software systems, hardware systems, or communication networks. An appropriate IT infrastructure must provide the means to integrate all these resources and enable their use even across a network, to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating experts' knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is put on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation the CAD system CATIA is used, which is coupled with the FEM simulation system INDEED for simulation of sheet-metal forming processes and the problem solving environment OpTiX for distributed optimization.
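
    The master-worker pattern underlying such distributed PSEs can be sketched in a few lines: candidate designs are evaluated in parallel by an expensive simulation and the best design is kept. The stand-in objective below is purely illustrative; this is not the OpTiX implementation.

        # Sketch of master-worker distributed evaluation of design candidates.
        # simulate() is a cheap stand-in for a CATIA/INDEED forming run.
        from concurrent.futures import ProcessPoolExecutor

        def simulate(design):
            # Hypothetical quality measure over (sheet thickness, die radius).
            thickness, radius = design
            return (thickness - 1.2) ** 2 + (radius - 30.0) ** 2

        if __name__ == "__main__":
            candidates = [(t / 10, r) for t in range(5, 25)
                                      for r in range(20, 41, 5)]
            with ProcessPoolExecutor() as pool:
                scores = list(pool.map(simulate, candidates))
            best_score, best_design = min(zip(scores, candidates))
            print("best design:", best_design, "objective:", round(best_score, 3))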

  11. Integrated Computer System of Management in Logistics

    NASA Astrophysics Data System (ADS)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper presents a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. The paper includes the basic idea of the concept of computer-based management in logistics and the components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates these computer-aided management systems is electronic data interchange.

  12. Los Alamos Plutonium Facility Waste Management System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, K.; Montoya, A.; Wieneke, R.

    1997-02-01

    This paper describes the new computer-based transuranic (TRU) Waste Management System (WMS) being implemented at the Plutonium Facility at Los Alamos National Laboratory (LANL). The Waste Management System is a distributed computer processing system stored in a Sybase database and accessed by a graphical user interface (GUI) written in Omnis7. It resides on the local area network at the Plutonium Facility and is accessible by authorized TRU waste originators, count room personnel, radiation protection technicians (RPTs), quality assurance personnel, and waste management personnel for data input and verification. Future goals include bringing outside groups like the LANL Waste Management Facility on-line to participate in this streamlined system. The WMS is changing the TRU paper trail into a computer trail, saving time and eliminating errors and inconsistencies in the process.

  13. A distributed data base management facility for the CAD/CAM environment

    NASA Technical Reports Server (NTRS)

    Balza, R. M.; Beaudet, R. W.; Johnson, H. R.

    1984-01-01

    Current IPAD research in the area of distributed data base management considers facilities for supporting CAD/CAM data management in a heterogeneous network of computers encompassing multiple data base managers supporting a variety of data models. These facilities include coordinated execution of multiple DBMSs to provide for administration of and access to data distributed across them.

  14. Optimization of knowledge-based systems and expert system building tools

    NASA Technical Reports Server (NTRS)

    Yasuda, Phyllis; Mckellar, Donald

    1993-01-01

    The objectives of the NASA-AMES Cooperative Agreement were to investigate, develop, and evaluate, via test cases, the system parameters and processing algorithms that constrain the overall performance of the Information Sciences Division's Artificial Intelligence Research Facility. Written reports covering various aspects of the grant were submitted to the co-investigators for the grant. Research studies concentrated on the field of artificial intelligence knowledge-based systems technology. Activities included the following areas: (1) AI training classes; (2) merging optical and digital processing; (3) science experiment remote coaching; (4) SSF data management system tests; (5) computer integrated documentation project; (6) conservation of design knowledge project; (7) project management calendar and reporting system; (8) automation and robotics technology assessment; (9) advanced computer architectures and operating systems; and (10) honors program.

  15. Annual ADP planning document

    NASA Technical Reports Server (NTRS)

    Mogilevsky, M.

    1973-01-01

    The Category A computer systems at KSC (A1 and A2) which perform scientific and business/administrative operations are described. This data division is responsible for scientific requirements supporting Saturn, Atlas/Centaur, Titan/Centaur, Titan III, and Delta vehicles, and includes real-time functions, the Apollo-Soyuz Test Project (ASTP), and the Space Shuttle. The work is performed chiefly on the GE-635 (A1) system located in the Central Instrumentation Facility (CIF). The A1 system can perform computations and process data in three modes: (1) real-time critical mode; (2) real-time batch mode; and (3) batch mode. The Division's IBM-360/50 (A2) system, also at the CIF, performs business/administrative data processing such as personnel, procurement, reliability, financial management and payroll, real-time inventory management, GSE accounting, preventive maintenance, and integrated launch vehicle modification status.

  16. Information for Child Care Providers about Pesticides/Integrated Pest Management

    EPA Pesticide Factsheets

    Learn about pesticides/integrated pest management, the health effects associated with exposure to pests and pesticides, and the steps that can be taken to use integrated pest management strategies in childcare facilities.

  17. Optimal Operation System of the Integrated District Heating System with Multiple Regional Branches

    NASA Astrophysics Data System (ADS)

    Kim, Ui Sik; Park, Tae Chang; Kim, Lae-Hyun; Yeo, Yeong Koo

    This paper presents an optimal production and distribution management approach for structural and operational optimization of an integrated district heating system (DHS) with multiple regional branches. A DHS consists of energy suppliers and consumers, a district heating pipeline network and heat storage facilities in the covered region. In the optimal management system, production of heat and electric power, regional heat demand, electric power bidding and sales, and transport and storage of heat at each regional DHS are taken into account. The optimal management system is formulated as a mixed integer linear programming (MILP) problem where the objective is to minimize the overall cost of the integrated DHS while satisfying the operation constraints of heat units and networks as well as fulfilling heating demands from consumers. Piecewise linear formulation of the production cost function and stairwise formulation of the start-up cost function are used to approximate the nonlinear cost functions. Evaluation of the total overall cost is based on weekly operations at each district heating branch. Numerical simulations show an increase in energy efficiency due to the introduction of the present optimal management system.
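
    The piecewise linear approximation mentioned above can be written, for a single heat unit with production level q and breakpoints q_k, in the standard convex-combination form (a generic textbook formulation, not the paper's exact notation):

        \begin{align*}
        q = \sum_{k} \lambda_{k}\, q_{k}, \qquad
        C(q) \approx \sum_{k} \lambda_{k}\, C(q_{k}), \qquad
        \sum_{k} \lambda_{k} = 1, \quad \lambda_{k} \ge 0,
        \end{align*}

    with binary variables added in the MILP to enforce that at most two adjacent \lambda_{k} are nonzero.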

  18. Comparison of immersed liquid and air cooling of NASA's Airborne Information Management System

    NASA Technical Reports Server (NTRS)

    Hoadley, A. W.; Porter, A. J.

    1992-01-01

    The Airborne Information Management System (AIMS) is currently under development at NASA Dryden Flight Research Facility. The AIMS is designed as a modular system utilizing surface-mounted integrated circuits in a high-density configuration. To maintain the temperature of the integrated circuits within the manufacturer's specifications, the modules are to be filled with Fluorinert FC-72. Unlike ground-based liquid-cooled computers, the extreme range of ambient pressures experienced by the AIMS requires the FC-72 to be contained in a closed system. This forces the latent heat absorbed during boiling to be released during the condensation that must take place within the closed module system. Natural convection and/or pumping carries the heat to the outer surface of the AIMS module, where the heat transfers to the ambient air. This paper presents an evaluation of the relative effectiveness of immersed liquid cooling and air cooling of the Airborne Information Management System.

  19. Comparison of immersed liquid and air cooling of NASA's Airborne Information Management System

    NASA Astrophysics Data System (ADS)

    Hoadley, A. W.; Porter, A. J.

    1992-07-01

    The Airborne Information Management System (AIMS) is currently under development at NASA Dryden Flight Research Facility. The AIMS is designed as a modular system utilizing surface-mounted integrated circuits in a high-density configuration. To maintain the temperature of the integrated circuits within the manufacturer's specifications, the modules are to be filled with Fluorinert FC-72. Unlike ground-based liquid-cooled computers, the extreme range of ambient pressures experienced by the AIMS requires the FC-72 to be contained in a closed system. This forces the latent heat absorbed during boiling to be released during the condensation that must take place within the closed module system. Natural convection and/or pumping carries the heat to the outer surface of the AIMS module, where the heat transfers to the ambient air. This paper presents an evaluation of the relative effectiveness of immersed liquid cooling and air cooling of the Airborne Information Management System.

  20. Professional Development through Organizational Assessment: Using APPA's Facilities Management Evaluation Program

    ERIC Educational Resources Information Center

    Medlin, E. Lander; Judd, R. Holly

    2013-01-01

    APPA's Facilities Management Evaluation Program (FMEP) provides an integrated system to optimize organizational performance. The criteria for evaluation not only provide a tool for organizational continuous improvement, they serve as a compelling leadership development tool essential for today's facilities management professional. The senior…

  1. Data management integration for biomedical core facilities

    NASA Astrophysics Data System (ADS)

    Zhang, Guo-Qiang; Szymanski, Jacek; Wilson, David

    2007-03-01

    We present the design, development, and pilot-deployment experiences of MIMI, a web-based, Multi-modality Multi-Resource Information Integration environment for biomedical core facilities. This is an easily customizable, web-based software tool that integrates scientific and administrative support for a biomedical core facility involving a common set of entities: researchers; projects; equipment and devices; support staff; services; samples and materials; experimental workflow; large and complex data. With this software, one can: register users; manage projects; schedule resources; bill services; perform site-wide search; and archive, back up, and share data. With its customizable, expandable, and scalable characteristics, MIMI not only provides a cost-effective solution, unavailable in the marketplace, to the overarching data management problem of biomedical core facilities, but also lays a foundation for data federation to facilitate and support discovery-driven research.

  2. Enhancements to the Network Repair Level Analysis (NRLA) Model Using Marginal Analysis Techniques and Centralized Intermediate Repair Facility (CIRF) Maintenance Concepts.

    DTIC Science & Technology

    1983-12-01

    while at the same time improving its operational efficiency. Through their integration and use, System Program Managers have a comprehensive analytical... systems. The NRLA program is hosted on the CREATE Operating System and contains approximately 5500 lines of computer code. It consists of a main...associated with alternative maintenance plans. As the technological complexity of weapons systems has increased, new and innovative logistical support

  3. EPA Facility Registry Service (FRS): CAMDBS

    EPA Pesticide Factsheets

    This web feature service contains location and facility identification information from EPA's Facility Registry Service (FRS) for the subset of facilities that link to the Clean Air Markets Division Business System (CAMDBS). Administered by the EPA Clean Air Markets Division, within the Office of Air and Radiation, CAMDBS supports the implementation of market-based air pollution control programs, including the Acid Rain Program and regional programs designed to reduce the transport of ozone. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using rigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. This data set contains the subset of FRS integrated facilities that link to CAMDBS facilities once the CAMDBS data has been integrated into the FRS database. Additional information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs.

  4. Integrated Facilities Management and Fixed Asset Accounting.

    ERIC Educational Resources Information Center

    Golz, W. C., Jr.

    1984-01-01

    A record of a school district's assets--land, buildings, machinery, and equipment--can be a useful management tool that meets accounting requirements and provides appropriate information for budgeting, forecasting, and facilities management. (MLF)

  5. Capacity planning for electronic waste management facilities under uncertainty: multi-objective multi-time-step model development.

    PubMed

    Ahluwalia, Poonam Khanijo; Nema, Arvind K.

    2011-07-01

    Selection of optimum locations for new facilities and decisions regarding capacities at the proposed facilities are major concerns for municipal authorities/managers. The decision as to whether a single facility is preferred over multiple facilities of smaller capacities would vary with the priorities assigned to cost and to associated risks such as environmental risk, health risk, or risk perceived by society. Currently, management of waste streams such as computer waste is carried out using rudimentary practices and is flourishing as an unorganized sector, mainly as backyard workshops, in many cities of developing nations such as India. Uncertainty in the quantification of computer waste generation is another major concern due to the informal setup of the present computer waste management scenario. Hence, there is a need to simultaneously address uncertainty in waste generation quantities while analyzing the tradeoffs between cost and associated risks. The present study aimed to address the above-mentioned issues in a multi-time-step, multi-objective decision-support model, which can address the multiple objectives of cost, environmental risk, socially perceived risk and health risk, while selecting the optimum configuration of existing and proposed facilities (location and capacities).

  6. Integrated waste management system costs in a MPC system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Supko, E.M.

    1995-12-01

    The impact on system costs of including a centralized interim storage facility as part of an integrated waste management system based on multi-purpose canister (MPC) technology was assessed in analyses by Energy Resources International, Inc. A system cost savings of $1 to $2 billion occurs if the Department of Energy begins spent fuel acceptance in 1998 at a centralized interim storage facility. That is, the savings associated with decreased utility spent fuel management costs will be greater than the cost of constructing and operating a centralized interim storage facility.

  7. EPA Facility Registry System (FRS): NCES

    EPA Pesticide Factsheets

    This web feature service contains location and facility identification information from EPA's Facility Registry System (FRS) for the subset of facilities that link to the National Center for Education Statistics (NCES). The primary federal database for collecting and analyzing data related to education in the United States and other nations, NCES is located in the U.S. Department of Education, within the Institute of Education Sciences. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using rigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. This data set contains the subset of FRS integrated facilities that link to NCES school facilities once the NCES data has been integrated into the FRS database. Additional information on FRS is available at the EPA website http://www.epa.gov/enviro/html/fii/index.html.

  8. Integrating Computational Chemistry into the Physical Chemistry Curriculum

    ERIC Educational Resources Information Center

    Johnson, Lewis E.; Engel, Thomas

    2011-01-01

    Relatively few undergraduate physical chemistry programs integrate molecular modeling into their quantum mechanics curriculum, owing to concerns about limited access to computational facilities, the cost of software, and increasing the course material. However, modeling exercises can be integrated into an undergraduate course at a…

  9. Integrated water management system - Description and test results. [for Space Station waste water processing

    NASA Technical Reports Server (NTRS)

    Elden, N. C.; Winkler, H. E.; Price, D. F.; Reysa, R. P.

    1983-01-01

    Water recovery subsystems are being tested at the NASA Lyndon B. Johnson Space Center for Space Station use to process waste water generated from urine and wash water collection facilities. These subsystems are being integrated into a water management system that will incorporate wash water and urine processing through the use of hyperfiltration and vapor compression distillation subsystems. Other hardware in the water management system includes a whole body shower, a clothes washing facility, a urine collection and pretreatment unit, a recovered water post-treatment system, and a water quality monitor. This paper describes the integrated test configuration, pertinent performance data, and feasibility and design compatibility conclusions of the integrated water management system.

  10. IEDA: Making Small Data BIG Through Interdisciplinary Partnerships Among Long-tail Domains

    NASA Astrophysics Data System (ADS)

    Lehnert, K. A.; Carbotte, S. M.; Arko, R. A.; Ferrini, V. L.; Hsu, L.; Song, L.; Ghiorso, M. S.; Walker, D. J.

    2014-12-01

    The Big Data world in the Earth Sciences so far exists primarily for disciplines that generate massive volumes of observational or computed data using large-scale, shared instrumentation such as global sensor networks, satellites, or high-performance computing facilities. These data are typically managed and curated by well-supported community data facilities that also provide the tools for exploring the data through visualization or statistical analysis. In many other domains, especially those where data are primarily acquired by individual investigators or small teams (known as 'Long-tail data'), data are poorly shared and integrated, lacking a community-based data infrastructure that ensures persistent access, quality control, standardization, and integration of data, as well as appropriate tools to fully explore and mine the data within the context of broader Earth Science datasets. IEDA (Integrated Earth Data Applications, www.iedadata.org) is a data facility funded by the US NSF to develop and operate data services that support data stewardship throughout the full life cycle of observational data in the solid earth sciences, with a focus on the data management needs of individual researchers. IEDA builds on a strong foundation of mature disciplinary data systems for marine geology and geophysics, geochemistry, and geochronology. These systems have dramatically advanced data resources in those long-tail Earth science domains. IEDA has strengthened these resources by establishing a consolidated, enterprise-grade infrastructure that is shared by the domain-specific data systems, and implementing joint data curation and data publication services that follow community standards. In recent years, other domain-specific data efforts have partnered with IEDA to take advantage of this infrastructure and improve data services to their respective communities with formal data publication, long-term preservation of data holdings, and better sustainability. IEDA hopes to foster such partnerships with streamlined data services, including user-friendly, single-point interfaces for data submission, discovery, and access across the partner systems to support interdisciplinary science.

  11. The Cloud Area Padovana: from pilot to production

    NASA Astrophysics Data System (ADS)

    Andreetto, P.; Costa, F.; Crescente, A.; Dorigo, A.; Fantinel, S.; Fanzago, F.; Sgaravatto, M.; Traldi, S.; Verlato, M.; Zangrando, L.

    2017-10-01

    The Cloud Area Padovana has been running for almost two years. It is an OpenStack-based scientific cloud spread across two different sites: the INFN Padova Unit and the INFN Legnaro National Labs. The hardware resources have been scaled horizontally and vertically, by upgrading some hypervisors and by adding new ones; it currently provides about 1100 cores. Some in-house developments were also integrated into the OpenStack dashboard, such as a tool for user and project registrations with direct support for the INFN-AAI Identity Provider as a new option for user authentication. In collaboration with the EU-funded Indigo DataCloud project, integration with Docker-based containers has been tested and will soon be available in production. This computing facility now satisfies the computational and storage demands of more than 70 users affiliated with about 20 research projects. We present here the architecture of this cloud infrastructure and the tools and procedures used to operate it. We also focus on the lessons learnt in these two years, describing the problems that were found and the corrective actions that had to be applied, and we discuss the strategy chosen for upgrades, which balances the need to promptly integrate new OpenStack developments, the demand to reduce infrastructure downtime, and the need to limit the effort required for such updates. We also discuss how this cloud infrastructure is being used, focusing on two big physics experiments that intensively exploit this computing facility: CMS and SPES. CMS deployed on the cloud a complex computational infrastructure, composed of several user interfaces for job submission to the Grid environment or local batch queues and for interactive processes; this is fully integrated with the local Tier-2 facility. To avoid a static allocation of resources, an elastic cluster based on cernVM has been configured, which automatically creates and deletes virtual machines according to user needs. SPES, using a client-server system called TraceWin, exploits INFN's virtual resources, performing very large numbers of simulations on about a thousand elastically managed nodes.
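
    The elastic behaviour described for the CMS cluster reduces to scaling the virtual machine count with queue depth. A minimal sketch, assuming a hypothetical CloudClient stand-in rather than the cernVM/OpenStack machinery actually used:

        # Scale VMs to the job queue. `CloudClient` is a hypothetical stand-in
        # for a cloud API, not the Cloud Area Padovana implementation.
        class CloudClient:
            def __init__(self):
                self.vms = []
            def boot_vm(self):
                self.vms.append("vm-%d" % len(self.vms))
            def delete_vm(self):
                if self.vms:
                    self.vms.pop()

        def rebalance(cloud, queued_jobs, jobs_per_vm=4, min_vms=1, max_vms=50):
            """Create or delete VMs so that capacity tracks demand."""
            wanted = max(min_vms, min(max_vms, -(-queued_jobs // jobs_per_vm)))  # ceiling division
            while len(cloud.vms) < wanted:
                cloud.boot_vm()
            while len(cloud.vms) > wanted:
                cloud.delete_vm()

        cloud = CloudClient()
        rebalance(cloud, queued_jobs=30)  # a burst grows the cluster to 8 VMs
        rebalance(cloud, queued_jobs=3)   # an idle period shrinks it back to 1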

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lingerfelt, Eric J; Endeve, Eirik; Hui, Yawei

    Improvements in scientific instrumentation allow imaging at mesoscopic to atomic length scales and in many spectroscopic modes, and now, with the rise of multimodal acquisition systems and the associated processing capability, the era of multidimensional, informationally dense data sets has arrived. Technical issues in these combinatorial scientific fields are exacerbated by computational challenges best summarized as a necessity for drastic improvement in the capability to transfer, store, and analyze large volumes of data. The Bellerophon Environment for Analysis of Materials (BEAM) platform provides material scientists the capability to directly leverage the integrated computational and analytical power of High Performance Computing (HPC) to perform scalable data analysis and simulation and to manage uploaded data files via an intuitive, cross-platform client user interface. This framework delivers authenticated, "push-button" execution of complex user workflows that deploy data analysis algorithms and computational simulations utilizing compute-and-data cloud infrastructures and HPC environments like Titan at the Oak Ridge Leadership Computing Facility (OLCF).

  13. GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.

    2016-12-01

    Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications, but the data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies for storing, computing on, and analyzing such data. These challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. We therefore developed GISpark, a geospatial distributed computing platform for processing large-scale vector, raster, and stream data. GISpark is built on the latest virtualized computing infrastructures and distributed computing architecture: OpenStack and Docker are used to build its multi-user cloud computing infrastructure, and virtual storage systems such as HDFS, Ceph, and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing; within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter Notebook), and machine-learning tools (e.g., TensorFlow, Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big-data processing capacity in the geospatial field, but also provides a spatiotemporal computational model and advanced geospatial visualization tools that can serve other domains with a spatial dimension. We tested the performance of the platform on taxi trajectory analysis; the results suggest that GISpark achieves excellent run-time performance in spatiotemporal big-data applications.
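
    To give a flavour of the Spark-based analyses the platform targets, the sketch below bins taxi GPS points into grid cells and counts visits per cell with PySpark. This is our illustration, not GISpark code; the input file and its lon/lat columns are assumptions.

        # Hot-spot counting over a degree grid; requires pyspark and a CSV with
        # numeric `lon` and `lat` columns (hypothetical file and schema).
        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("trajectory-grid").getOrCreate()
        points = spark.read.csv("taxi_points.csv", header=True, inferSchema=True)

        cell = 0.01  # grid cell size in degrees
        counts = (points
                  .withColumn("gx", F.floor(F.col("lon") / cell))
                  .withColumn("gy", F.floor(F.col("lat") / cell))
                  .groupBy("gx", "gy")
                  .count()
                  .orderBy(F.desc("count")))

        counts.show(10)  # the ten busiest cells
        spark.stop()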

  14. Progress in aeronautical research and technology applicable to civil air transports

    NASA Technical Reports Server (NTRS)

    Bower, R. E.

    1981-01-01

    Recent progress in the aeronautical research and technology program being conducted by the United States National Aeronautics and Space Administration is discussed. Emphasis is on computational capability, new testing facilities, drag reduction, turbofan and turboprop propulsion, noise, composite materials, active controls, integrated avionics, cockpit displays, flight management, and operating problems. It is shown that this technology is significantly impacting the efficiency of the new civil air transports. The excitement of emerging research promises even greater benefits to future aircraft developments.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Edward J., Jr.; Henry, Karen Lynne

    Sandia National Laboratories develops technologies to (1) sustain, modernize, and protect our nuclear arsenal; (2) prevent the spread of weapons of mass destruction; (3) provide new capabilities to our armed forces; (4) protect our national infrastructure; (5) ensure the stability of our nation's energy and water supplies; and (6) defend our nation against terrorist threats. We identified the need for a single overarching Integrated Workplace Management System (IWMS) that would enable us to focus on customer missions and improve FMOC processes. Our team selected highly configurable commercial-off-the-shelf (COTS) software with out-of-the-box workflow processes that integrates strategic planning, project management, facility assessments, and space management, and that can interface with existing systems such as Oracle, PeopleSoft, Maximo, Bentley, and FileNet: the IWMS from Tririga, Inc. Facility Management System (FMS) benefits are to (1) create a single reliable source for facility data; (2) improve transparency with oversight organizations; (3) streamline FMOC business processes with a single, integrated facility-management tool; (4) give customers simple tools and real-time information; (5) reduce indirect costs; (6) replace approximately 30 FMOC systems and 60 homegrown tools (such as Microsoft Access databases); and (7) integrate with FIMS.

  16. Computer Operating System Maintenance.

    DTIC Science & Technology

    1982-06-01

    The Computer Management Information Facility (CMIF) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on...computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 data base (CRFMGMT) which stores and permits access

  17. Infrastructures for Distributed Computing: the case of BESIII

    NASA Astrophysics Data System (ADS)

    Pellegrino, J.

    2018-05-01

    BESIII is an electron-positron collision experiment hosted at BEPCII in Beijing and aimed at investigating tau-charm physics. BESIII has now been running for several years and has gathered more than 1 PB of raw data. In order to analyze these data and perform massive Monte Carlo simulations, a large amount of computing and storage resources is needed. The distributed computing system is based upon DIRAC and has been in production since 2012. It integrates computing and storage resources from different institutes and a variety of resource types, such as cluster, grid, cloud, and volunteer computing. About 15 sites from the BESIII Collaboration all over the world have joined this distributed computing infrastructure, giving a significant contribution to the IHEP computing facility. Nowadays cloud computing is playing a key role in the HEP computing field, due to its scalability and elasticity. Cloud infrastructures take advantage of several tools, such as VMDirac, to manage virtual machines through cloud managers according to the job requirements. With the virtually unlimited resources of commercial clouds, computing capacity can scale accordingly to deal with burst demands. General computing models discussed in the talk are addressed herewith, with particular focus on the BESIII infrastructure; new computing tools and upcoming infrastructures are also addressed.
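
    For context, work enters a DIRAC-based system through its Python API along roughly the following lines. The payload script and parameters below are placeholders, not BESIII production settings, and a configured DIRAC client with a valid grid proxy is assumed.

        # Minimal DIRAC job submission sketch (placeholders throughout).
        from DIRAC.Core.Base import Script
        Script.parseCommandLine(ignoreErrors=True)  # initialise the DIRAC runtime

        from DIRAC.Interfaces.API.Dirac import Dirac
        from DIRAC.Interfaces.API.Job import Job

        job = Job()
        job.setName("besiii-mc-sample")                       # hypothetical name
        job.setExecutable("simulate.sh", arguments="run001")  # hypothetical payload
        job.setCPUTime(3600)

        print(Dirac().submitJob(job))  # returns an S_OK/S_ERROR dict with the job id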

  18. The Design of PSB-VVER Experiments Relevant to Accident Management

    NASA Astrophysics Data System (ADS)

    Nevo, Alessandro Del; D'Auria, Francesco; Mazzini, Marino; Bykov, Michael; Elkin, Ilya V.; Suslov, Alexander

    Experimental programs carried out in integral test facilities are relevant for validating best-estimate thermal-hydraulic codes (1), which are used for accident analyses, the design of accident management procedures, the licensing of nuclear power plants, etc. The validation process is based on well-designed experiments; it consists of comparing measured and calculated parameters and determining whether a computer code has an adequate capability in predicting the major phenomena expected to occur in the course of transients and/or accidents. The University of Pisa was responsible for the numerical design of the 12 experiments executed in the PSB-VVER facility (2), operated at the Electrogorsk Research and Engineering Center (Russia), in the framework of the TACIS 2.03/97 Contract 3.03.03 Part A, financed by the EC (3). The paper describes the methodology adopted at the University of Pisa, starting from the scenarios foreseen in the final test matrix through the execution of the experiments. This process considers three key topics: a) the scaling issue and the simulation, with unavoidable distortions, of the expected performance of the reference nuclear power plants; b) the code assessment process, involving the identification of phenomena challenging the code models; c) the features of the concerned integral test facility (scaling limitations, control logics, data acquisition system, instrumentation, etc.). The activities performed in this respect are discussed, and emphasis is also given to the relevance of thermal losses to the environment. This issue particularly affects small-scale facilities and bears on the scaling approach related to the power and volume of the facility.
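
    Why thermal losses weigh so heavily in small-scale facilities can be seen with a short back-of-the-envelope calculation, ours rather than the paper's: under power-to-volume scaling, power shrinks linearly with volume while, assuming geometric similarity, surface area shrinks only as volume to the two-thirds power, so the loss-to-power ratio grows as the facility shrinks.

        # Relative growth of (surface heat loss / core power) in a scaled facility.
        def heat_loss_amplification(volume_scale):
            area_scale = volume_scale ** (2.0 / 3.0)  # geometric-similarity assumption
            return area_scale / volume_scale          # power assumed to scale with volume

        for kv in (1 / 100, 1 / 300):
            print("1:%d facility -> losses ~%.1fx more significant"
                  % (round(1 / kv), heat_loss_amplification(kv)))

    For a 1:300 volume scale this factor is about 6.7, which is why heat losses that are negligible in the reference plant must be measured and modelled in the facility.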

  19. Simplifying Facility and Event Scheduling: Saving Time and Money.

    ERIC Educational Resources Information Center

    Raasch, Kevin

    2003-01-01

    Describes a product called the Event Management System (EMS), a computer software program to manage facility and event scheduling. Provides examples of school district and university uses of EMS. Describes steps in selecting a scheduling-management system. (PKP)

  20. EPA Facility Registry Service (FRS): RCRA

    EPA Pesticide Factsheets

    This web feature service contains location and facility identification information from EPA's Facility Registry Service (FRS) for the subset of hazardous waste facilities that link to the Resource Conservation and Recovery Act Information System (RCRAInfo). EPA's comprehensive information system in support of the Resource Conservation and Recovery Act (RCRA) of 1976 and the Hazardous and Solid Waste Amendments (HSWA) of 1984, RCRAInfo tracks many types of information about generators, transporters, treaters, storers, and disposers of hazardous waste. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using vigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. This data set contains the subset of FRS integrated facilities that link to RCRAInfo hazardous waste facilities once the RCRAInfo data has been integrated into the FRS database. Additional information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs

  1. DKIST facility management system integration

    NASA Astrophysics Data System (ADS)

    White, Charles R.; Phelps, LeEllen

    2016-07-01

    The Daniel K. Inouye Solar Telescope (DKIST) Observatory is under construction at Haleakalā, Maui, Hawai'i. When complete, the DKIST will be the largest solar telescope in the world. The Facility Management System (FMS) is a subsystem of the high-level Facility Control System (FCS) and directly controls the Facility Thermal System (FTS). The FMS receives operational mode information from the FCS while making process data available to the FCS and includes hardware and software to integrate and control all aspects of the FTS including the Carousel Cooling System, the Telescope Chamber Environmental Control Systems, and the Temperature Monitoring System. In addition, it will integrate the Power Energy Management System and several service systems such as heating, ventilation, and air conditioning (HVAC), the Domestic Water Distribution System, and the Vacuum System. All of these subsystems must operate in coordination to provide the best possible observing conditions and overall building management. Further, the FMS must actively react to varying weather conditions and observational requirements. The physical impact of the facility must not interfere with neighboring installations while operating in a very environmentally and culturally sensitive area. The FMS system will be comprised of five Programmable Automation Controllers (PACs). We present a pre-build overview of the functional plan to integrate all of the FMS subsystems.
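
    The mode-driven coordination described above can be pictured as a lookup from the commanded operational mode to subsystem setpoints, applied by the FMS and reported back as process data. The sketch below is a hypothetical Python illustration of that pattern, not the PAC logic DKIST will deploy.

        # Hypothetical mode-to-setpoint dispatch; names and values are invented.
        MODE_SETPOINTS = {
            "observing":   {"carousel_cooling": "on",  "chamber_temp_C": 2.0},
            "maintenance": {"carousel_cooling": "off", "chamber_temp_C": 15.0},
        }

        class FacilityManagementSystem:
            def __init__(self):
                self.state = {}

            def set_mode(self, mode):
                # Operational mode is commanded by the higher-level FCS.
                self.state.update(MODE_SETPOINTS[mode])
                self.state["mode"] = mode

            def process_data(self):
                # Process data made available back to the FCS.
                return dict(self.state)

        fms = FacilityManagementSystem()
        fms.set_mode("observing")
        print(fms.process_data())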

  2. Data and Communications in Basic Energy Sciences: Creating a Pathway for Scientific Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nugent, Peter E.; Simonson, J. Michael

    2011-10-24

    This report is based on the Department of Energy (DOE) Workshop on “Data and Communications in Basic Energy Sciences: Creating a Pathway for Scientific Discovery,” held at the Bethesda Marriott in Maryland on October 24-25, 2011. The workshop brought together leading researchers from the Basic Energy Sciences (BES) facilities and Advanced Scientific Computing Research (ASCR) and was co-sponsored by these two Offices to identify opportunities and needs for data analysis, ownership, storage, mining, provenance, and data transfer at light sources, neutron sources, microscopy centers, and other facilities. The charge was to identify current and anticipated issues in the acquisition, analysis, communication, and storage of experimental data that could impact the progress of scientific discovery; to ascertain what knowledge, methods, and tools are needed to mitigate present and projected shortcomings; and to create the foundation for information exchanges and collaboration between ASCR and BES supported researchers and facilities. The workshop was organized in the context of the impending data tsunami that will be produced by DOE's BES facilities. Current facilities, like SLAC National Accelerator Laboratory's Linac Coherent Light Source, can produce up to 18 terabytes (TB) per day, while upgraded detectors at Lawrence Berkeley National Laboratory's Advanced Light Source will generate ~10 TB per hour, and these rates are expected to increase by over an order of magnitude in the coming decade. The urgency of developing new strategies and methods to stay ahead of this deluge and extract the most science from these facilities was recognized by all. The four focus areas addressed in this workshop were: Workflow Management - Experiment to Science: identifying and managing the data path from experiment to publication. Theory and Algorithms: recognizing the need for new tools for computation at scale, supporting large data sets and realistic theoretical models. Visualization and Analysis: supporting near-real-time feedback for experiment optimization and new ways to extract and communicate critical information from large data sets. Data Processing and Management: outlining needs in the computational and communication approaches and infrastructure required to handle unprecedented data volume and information content. Almost all participants recognized that turn-key solutions are unlikely, given the unique, diverse nature of the BES community, where research at adjacent beamlines at a given light source facility often spans everything from biology to materials science to chemistry using scattering, imaging, and/or spectroscopy. However, it was also noted that advances supported by other programs in data research, methodologies, and tool development could be implemented on reasonable time scales with modest effort; adapting available standard file formats, robust workflows, and in-situ analysis tools for user facility needs could pay long-term dividends. Workshop participants assessed current requirements as well as future challenges and made the following recommendations toward the ultimate goal of enabling transformative science at current and future BES facilities: (1) integrate theory and analysis components seamlessly within the experimental workflow; (2) develop new algorithms for data analysis based on common data formats and toolsets; (3) move the analysis closer to the experiment, enabling real-time (in-situ) streaming capabilities, live visualization of the experiment, and an increase in overall experimental efficiency; (4) match data management access and capabilities with advancements in detectors and sources; and (5) remove bottlenecks, provide interoperability across different facilities and beamlines, and apply forefront mathematical techniques to extract science from the experiments more efficiently. This workshop report examines and reviews the status of several BES facilities and highlights the successes and shortcomings of the current data and communication pathways for scientific discovery. It then ascertains what methods and tools are needed to mitigate present and projected data bottlenecks to science over the next 10 years. The goal of this report is to create the foundation for information exchanges and collaborations among ASCR and BES supported researchers, the BES scientific user facilities, and ASCR computing and networking facilities. To jumpstart these activities, there was a strong desire to see a joint effort between ASCR and BES along the lines of the highly successful Scientific Discovery through Advanced Computing (SciDAC) program, in which integrated teams of engineers, scientists, and computer scientists are engaged to tackle a complete end-to-end workflow solution at one or more beamlines, to ascertain what challenges will need to be addressed in order to handle future increases in data.

  3. EPA Facility Registry Service (FRS): PCS_NPDES

    EPA Pesticide Factsheets

    This web feature service contains location and facility identification information from EPA's Facility Registry Service (FRS) for the subset of facilities that link to the Permit Compliance System (PCS) or the National Pollutant Discharge Elimination System (NPDES) module of the Integrated Compliance Information System (ICIS). PCS tracks NPDES surface water permits issued under the Clean Water Act. This system is being incrementally replaced by the NPDES module of ICIS. Under NPDES, all facilities that discharge pollutants from any point source into waters of the United States are required to obtain a permit. The permit will likely contain limits on what can be discharged, impose monitoring and reporting requirements, and include other provisions to ensure that the discharge does not adversely affect water quality. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using vigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. This data set contains the subset of FRS integrated facilities that link to NPDES facilities once the PCS or ICIS-NPDES data has been integrated into the FRS database. Additional information on FRS is available

  4. Bringing the Pieces Together – Placing Core Facilities at the Core of Universities and Institutions: Lessons from Mergers, Acquisitions and Consolidations

    PubMed Central

    Mundoma, Claudius

    2013-01-01

    As organizations expand and grow, core facilities have become more dispersed and disconnected. This is happening at a time when collaboration within the organization is a driver of increased productivity, and stakeholders are looking for the best way to bring the pieces together. It is inevitable that core facilities at universities and research institutes have to be integrated in order to streamline services and facilitate collaboration. The path to integration often goes through consolidation, merging, and the shedding of redundant services. Managing this process requires delicate coordination of two critical factors: the human (lab managers) factor and the physical assets factor. Traditionally more emphasis has been placed on reorganizing the physical assets without paying enough attention to the professionals who have been managing those assets for years, if not decades. The presentation focuses on how a systems approach can be used to effect a smooth core facility integration process. Managing the human element requires strengthening existing channels of communication and, if necessary, creating new ones throughout the organization to break cultural and structural barriers. Managing the physical assets requires a complete asset audit, with direct input from the administration as well as the facility managers. Organizations can harness the power of IT to create asset visibility. Successfully managing the physical assets and the human assets increases productivity and efficiency within the organization.

  5. EPA FRS Facilities Combined File CSV Download for the Marshall Islands

    EPA Pesticide Factsheets

    The Facility Registry System (FRS) identifies facilities, sites, or places subject to environmental regulation or of environmental interest to EPA programs or delegated states. Using vigorous verification and data management procedures, FRS integrates facility data from program national systems, state master facility records, tribal partners, and other federal agencies and provides the Agency with a centrally managed, single source of comprehensive and authoritative information on facilities.

  6. EPA FRS Facilities Single File CSV Download for the Marshall Islands

    EPA Pesticide Factsheets

    The Facility Registry System (FRS) identifies facilities, sites, or places subject to environmental regulation or of environmental interest to EPA programs or delegated states. Using vigorous verification and data management procedures, FRS integrates facility data from program national systems, state master facility records, tribal partners, and other federal agencies and provides the Agency with a centrally managed, single source of comprehensive and authoritative information on facilities.

  7. THE COMPUTER AS A MANAGEMENT TOOL--PHYSICAL FACILITIES INVENTORIES, UTILIZATION, AND PROJECTIONS. 11TH ANNUAL MACHINE RECORDS CONFERENCE PROCEEDINGS (UNIVERSITY OF TENNESSEE, KNOXVILLE, APRIL 25-27, 1966).

    ERIC Educational Resources Information Center

    WITMER, DAVID R.

    WISCONSIN STATE UNIVERSITIES HAVE BEEN USING THE COMPUTER AS A MANAGEMENT TOOL TO STUDY PHYSICAL FACILITIES INVENTORIES, SPACE UTILIZATION, AND ENROLLMENT AND PLANT PROJECTIONS. EXAMPLES ARE SHOWN GRAPHICALLY AND DESCRIBED FOR DIFFERENT TYPES OF ANALYSIS, SHOWING THE CARD FORMAT, CODING SYSTEMS, AND PRINTOUT. EQUATIONS ARE PROVIDED FOR DETERMINING…

  8. Microcosm to Cosmos: The Growth of a Divisional Computer Network

    PubMed Central

    Johannes, R.S.; Kahane, Stephen N.

    1987-01-01

    In 1982, we reported the deployment of a network of microcomputers in the Division of Gastroenterology[1]. This network was based upon Corvus Systems Omninet®; Corvus was one of the very first firms to offer networking products for PCs. This PC development coincided with the planning phase of the Johns Hopkins Hospital's multisegment ethernet project, and a rich communications infrastructure is now in place at the Johns Hopkins Medical Institutions[2,3]. Shortly after hospital development began under the direction of the Operational and Clinical Systems Division (OCS), the Johns Hopkins School of Medicine began an Integrated Academic Information Management Systems (IAIMS) planning effort. We now present a model that uses aspects of all three planning efforts (PC networks, hospital information systems, and IAIMS) to build a divisional computing facility. This facility is viewed as a terminal leaf on the institutional network diagram. Nevertheless, it is noteworthy that this leaf, the divisional resource in the Division of Gastroenterology (GASNET), has a rich substructure and functionality of its own, perhaps revealing the recursive nature of network architecture. The current status, design, and function of the GASNET computational facility are discussed. Among the major positive aspects of this design are the sharing and centralization of MS-DOS software and the high-speed DOS/Unix link that makes available most of our institution's computing resources.

  9. Medical informatics--an Australian perspective.

    PubMed

    Hannan, T

    1991-06-01

    Computers, like the X-ray and the stethoscope, can be seen as clinical tools that provide physicians with improved expertise in solving patient management problems. As tools they enable us to extend our clinical information base, and they provide facilities that improve the delivery of health care. Automation (computerisation) in the health domain will cause the computer to become a more integral part of health care management and delivery before the start of the next century. To understand how the computer assists those who deliver and manage health care, it is important to be aware of its functional capabilities and how we can use them in medical practice. The rapid technological advances in computers over the last two decades have had both beneficial and counterproductive effects on the implementation of effective computer applications in the delivery of health care. For example, in the 1990s a computer hobbyist can invest less than $10,000 in computer hardware that matches or exceeds the technological capacities of machines of the 1960s. These rapid technological advances, which have produced a quantum leap in our ability to store and process information, have tended to make us overlook the need for effective computer programmes that meet the needs of patient care. As the 1990s begin, those delivering health care (e.g., physicians, nurses, pharmacists, administrators, ...) need to become more involved in directing the effective implementation of computer applications that will provide the tools for improved information management, knowledge processing, and, ultimately, better patient care.

  10. [Development of fixed-base full task space flight training simulator].

    PubMed

    Xue, Liang; Chen, Shan-quang; Chang, Tian-chun; Yang, Hong; Chao, Jian-gang; Li, Zhi-peng

    2003-01-01

    The fixed-base full-task flight training simulator is a critical and important integrated training facility. It is mostly used for training integrated skills and tasks, such as running the flight program of manned space flight, dealing with faults, operating and controlling spacecraft flight, and communicating information between spacecraft and ground. The simulator is made up of several subsystems, including spacecraft simulation, the simulated cabin, the visual scene, acoustics, the main control computer, and instructor and assistant support. It implements many simulation functions, such as the spacecraft environment, spacecraft movement, communication between spacecraft and ground, typical faults, manual control and operation training, training control, training monitoring, training database management, training data recording, system detection, and so on.

  11. Estimating the cost of referral and willingness to pay for referral to higher-level health facilities: a case series study from an integrated community case management programme in Uganda.

    PubMed

    Nanyonjo, Agnes; Bagorogoza, Benson; Kasteng, Frida; Ayebale, Godfrey; Makumbi, Fredrick; Tomson, Göran; Källander, Karin

    2015-08-28

    Integrated community case management (iCCM) relies on community health workers (CHWs) managing children with malaria, pneumonia, and diarrhoea, and on referring children when management is not possible. This study sought to establish the cost per sick child referred to seek care from a higher-level health facility by a CHW and to estimate caregivers' willingness to pay (WTP) for referral. Caregivers of 203 randomly selected children referred to higher-level health facilities by CHWs were interviewed in four mid-western Uganda districts. Questionnaires and document reviews were used to capture direct, indirect, and opportunity costs incurred by caregivers, CHWs, and health facilities managing referred children. WTP for referral was assessed through the 'bidding game' approach followed by an open-ended question on maximum WTP. Descriptive analysis was conducted for factors associated with referral completion and WTP using logistic and linear regression methods, respectively. The cost per case referred to higher-level health facilities was computed from a societal perspective. Reasons for referral included having fever with a negative malaria test (46.8%), danger signs (29.6%), and drug shortage (37.4%). Among the referred, less than half completed referral (45.8%). Referral completion was 2.8 times higher among children with danger signs (p = 0.004) relative to those without danger signs, and 0.27 times lower among children who received pre-referral treatment (p < 0.001). The average cost per case referred was US$4.89, and US$7.35 per case completing referral. For each unit cost per case referred, caregiver out-of-pocket expenditure contributed 33.7%, caregivers' and CHWs' opportunity costs contributed 29.2% and 5.1% respectively, and health facility costs contributed 39.6%. The mean (SD) out-of-pocket expenditure was US$1.65 (3.25). The mean WTP for referral was US$8.25 (14.70) and was positively associated with having received pre-referral treatment, completing referral, and increasing caregiver education level. The mean WTP for referral was higher than the average out-of-pocket expenditure. This, along with suboptimal referral completion, points to barriers in access to higher-level facilities as the primary cause of low referral completion. Community mobilisation for uptake of referral is necessary if the policy of referring children to the nearest health facility is to be effective.

  12. CINT - Center for Integrated Nanotechnologies

    Science.gov Websites

    Website overview of the Center for Integrated Nanotechnologies (CINT): user facilities (Discovery Platform, Integration Lab, LUMOS), research science thrusts, integration challenges, accepted user proposals, data management, becoming a user, call for proposals, and proposal guidelines.

  13. Why the Petascale era will drive improvements in the management of the full lifecycle of earth science data.

    NASA Astrophysics Data System (ADS)

    Wyborn, L.

    2012-04-01

    The advent of the petascale era, in both storage and compute facilities, will offer new opportunities for earth scientists to transform the way they do their science and to undertake cross-disciplinary science at a global scale. No longer will data have to be averaged and subsampled: they can be analysed at full resolution at national or even global scales. Much larger data volumes can be analysed in single passes and at higher resolution: large-scale cross-domain science is now feasible. In general, however, the earth sciences have been slow to capitalise on the potential of these new petascale compute facilities: many struggle to use even terascale facilities. Our chances of using these new facilities depend on a vast improvement in the management of the full life cycle of data: in reality, it will need to be transformed. Many of our current issues with earth science data are historic and stem from the limitations of early data storage systems. Because storage was so expensive, metadata was usually stored separately from the data and attached as a readme file. Likewise, attributes that define uncertainty, reliability, and traceability were recorded in lab notebooks and rarely stored with the data. Data were routinely transferred as files. The new opportunities mean that the traditional discover, display, download, and locally process paradigm is too limited. For data access and assimilation to be improved, data will need to be self-describing. For heterogeneous data to be rapidly integrated, attributes such as reliability, uncertainty, and traceability will need to be systematically recorded with each observation. The petascale era also requires that individual data files be transformed and aggregated into calibrated data arrays or data cubes. Standards become critical and are the enablers of integration. These changes are common to almost every science discipline. What makes the earth sciences unique is that many domains record time-series data, particularly in the environmental geosciences (weathering, soil changes, climate change). The data life cycle will be measured in decades and centuries, not years. Preservation over such time spans is quite a challenge, as data will have to be managed over many evolutions of software and hardware. The focus has to be on managing the data, not the media. Currently storage is not an issue, but it is predicted that data volumes will soon exceed the effective storage media that can be physically manufactured. This means that organisations will have to think about disposal and destruction of data; for the earth sciences, this will be a particularly sensitive issue. Petascale computing offers many new opportunities to the earth sciences, and by 2020 exascale computers will be a reality. To fully realise these opportunities, the earth sciences need to actively and systematically rethink the ramifications these new systems will have for current practices of data storage, discovery, access, and assimilation.
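
    The "self-describing" requirement can be illustrated with a format such as HDF5, where units, uncertainty, and provenance ride along as attributes of the data themselves rather than in a separate readme. The sketch below is our example (hypothetical values; requires h5py and numpy), not drawn from the abstract.

        # Store measurements together with their uncertainty and traceability.
        import h5py
        import numpy as np

        values = np.array([2.31, 2.35, 2.28])  # hypothetical observations
        uncert = np.array([0.05, 0.04, 0.06])

        with h5py.File("survey.h5", "w") as f:
            dset = f.create_dataset("soil/moisture", data=values)
            f.create_dataset("soil/moisture_uncertainty", data=uncert)
            dset.attrs["units"] = "volumetric fraction"
            dset.attrs["instrument"] = "probe-A17"                  # traceability
            dset.attrs["processing"] = "calibrated 2012-03, lab standard S-9"

        with h5py.File("survey.h5", "r") as f:
            d = f["soil/moisture"]
            print(dict(d.attrs), d[...])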

  14. Microgrid Controls | Grid Modernization | NREL

    Science.gov Websites

    Based at NREL's Energy Systems Integration Facility, the Microgrid Controller Interaction with Distribution Management Systems project investigates the interaction of distribution management systems with local controllers, including microgrid controllers, and is developing integrated control and management systems for distribution

  15. Investigating Uncertainty and Sensitivity in Integrated, Multimedia Environmental Models: Tools for FRAMES-3MRA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babendreier, Justin E.; Castleton, Karl J.

    2005-08-01

    Elucidating uncertainty and sensitivity structures in environmental models can be a difficult task, even for low-order, single-medium constructs driven by a unique set of site-specific data. Quantitative assessment of integrated, multimedia models that simulate hundreds of sites, spanning multiple geographical and ecological regions, will ultimately require a comparative approach using several techniques, coupled with sufficient computational power. The Framework for Risk Analysis in Multimedia Environmental Systems - Multimedia, Multipathway, and Multireceptor Risk Assessment (FRAMES-3MRA) is an important software model being developed by the United States Environmental Protection Agency for use in risk assessment of hazardous waste management facilities. The 3MRA modeling system includes a set of 17 science modules that collectively simulate release, fate and transport, exposure, and risk associated with hazardous contaminants disposed of in land-based waste management units (WMU).
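
    As a minimal illustration of the kind of uncertainty and sensitivity analysis such tooling automates, the sketch below Monte Carlo samples two inputs of a toy stand-in for a science-module chain and ranks them by correlation with the output. The "risk model" here is ours, not a 3MRA module.

        import random, statistics

        def risk_model(leach_rate, dilution):
            return leach_rate / dilution  # toy stand-in for release/transport/exposure

        random.seed(1)
        samples = [(random.uniform(0.1, 1.0), random.uniform(5.0, 50.0))
                   for _ in range(10_000)]
        risks = [risk_model(l, d) for l, d in samples]

        print("mean risk:", round(statistics.mean(risks), 4))
        print("95th percentile:", round(sorted(risks)[int(0.95 * len(risks))], 4))

        def corr(xs, ys):
            # Pearson correlation as a crude sensitivity measure.
            mx, my = statistics.mean(xs), statistics.mean(ys)
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
            return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

        print("leach rate sensitivity:", round(corr([s[0] for s in samples], risks), 2))
        print("dilution sensitivity:  ", round(corr([s[1] for s in samples], risks), 2))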

  16. Development of Integrated Programs for Aerospace-vehicle design (IPAD): Integrated information processing requirements

    NASA Technical Reports Server (NTRS)

    Southall, J. W.

    1979-01-01

    The engineering-specified requirements for integrated information processing by means of the Integrated Programs for Aerospace-Vehicle Design (IPAD) system are presented. A data model is described and is based on the design process of a typical aerospace vehicle. General data management requirements are specified for data storage, retrieval, generation, communication, and maintenance. Information management requirements are specified for a two-component data model. In the general portion, data sets are managed as entities, and in the specific portion, data elements and the relationships between elements are managed by the system, allowing user access to individual elements for the purpose of query. Computer program management requirements are specified for support of a computer program library, control of computer programs, and installation of computer programs into IPAD.

  17. Display Sharing: An Alternative Paradigm

    NASA Technical Reports Server (NTRS)

    Brown, Michael A.

    2010-01-01

    The current Johnson Space Center (JSC) Mission Control Center (MCC) Video Transport System (VTS) provides flight controllers and management the ability to meld raw video from various sources with telemetry to improve situational awareness. However, maintaining a separate infrastructure for video delivery and integration of video content with data adds significant complexity and cost to the system. When considering alternative architectures for a VTS, the current system's ability to share specific computer displays in their entirety with other locations, such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and centers, must be incorporated into any new architecture. Internet Protocol (IP)-based systems also support video delivery and integration, and they generally have an advantage in terms of cost and maintainability. Although IP-based systems are versatile, the task of sharing a computer display from one workstation to another can be time-consuming for an end user and inconvenient to administer at a system level. The objective of this paper is to present a prototype display-sharing enterprise solution. Display sharing is a system that delivers image sharing across the LAN while simultaneously managing bandwidth, supporting encryption, enabling recovery and resynchronization following a loss of signal, and minimizing latency. Additional critical elements include image scaling support, multi-sharing, ease of initial integration and configuration, integration with desktop window managers, collaboration tools, and host and recipient controls. The goal of this paper is to summarize the various elements of an IP-based display sharing system that can be used in today's control center environment.
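
    At its core, such a system moves length-prefixed frames over a TCP stream so the receiver can resynchronize on frame boundaries after a loss of signal. The toy sketch below shows only that transport idea, with stubbed frame bytes and no capture, scaling, or encryption; it is not the JSC prototype.

        import socket, struct, threading

        def send_frames(conn, frames):
            for f in frames:
                conn.sendall(struct.pack("!I", len(f)) + f)  # 4-byte length prefix
            conn.close()

        srv = socket.socket()
        srv.bind(("127.0.0.1", 0))  # ephemeral port
        srv.listen(1)
        cli = socket.socket()
        cli.connect(srv.getsockname())
        conn, _ = srv.accept()
        threading.Thread(target=send_frames,
                         args=(conn, (b"frame-0", b"frame-1"))).start()

        while True:
            header = cli.recv(4)
            if not header:
                break  # sender closed the stream
            (n,) = struct.unpack("!I", header)
            data = b""
            while len(data) < n:
                data += cli.recv(n - len(data))
            print("received", data)
        cli.close(); srv.close()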

  18. Clinician perceptions and patient experiences of antiretroviral treatment integration in primary health care clinics, Tshwane, South Africa.

    PubMed

    Mathibe, Maphuthego D; Hendricks, Stephen J H; Bergh, Anne-Marie

    2015-10-02

    Primary Health Care (PHC) clinicians and patients are major role players in the South African antiretroviral treatment programme. Understanding their perceptions and experiences of integrated care and of the management of people living with HIV and AIDS in PHC facilities is necessary for the successful implementation and sustainability of integration. This study explored clinician perceptions and patient experiences of the integration of antiretroviral treatment in PHC clinics. An exploratory, qualitative study was conducted in four City of Tshwane PHC facilities. Two urban and two rural facilities following different models of integration were included. A self-administered questionnaire with open-ended items was completed by 35 clinicians, and four focus group interviews were conducted with HIV-positive patients. The data were coded and categories were grouped into sub-themes and themes. Workload, staff development, and support for integration affected clinicians' performance and viewpoints. They perceived promotion of privacy, reduced discrimination, and increased access to comprehensive care as benefits of service integration. Delays, poor patient care, and patient dissatisfaction were viewed as negative aspects of integration. In three facilities patients were satisfied with integration or semi-integration and felt common queues prevented stigma and discrimination, whilst the reverse was true in the facility with separate services. Single-month issuance of antiretroviral drugs and clinic schedule organisation were viewed negatively, as were poor staff attitudes, poor communication, and long waiting times. Although a fully integrated service model is preferable, aspects that need further attention are management support from health authorities for health facilities, improved working conditions, and appropriate staff development opportunities.

  19. Computational Tools and Facilities for the Next-Generation Analysis and Design Environment

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)

    1997-01-01

    This document contains presentations from the joint UVA/NASA Workshop on Computational Tools and Facilities for the Next-Generation Analysis and Design Environment held at the Virginia Consortium of Engineering and Science Universities in Hampton, Virginia on September 17-18, 1996. The presentations focused on computational tools and facilities for the analysis and design of engineering systems, including real-time simulations, immersive systems, collaborative engineering environments, Web-based tools, and interactive media for technical training. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the level of maturity of a number of computational tools and facilities and their potential for application to the next-generation integrated design environment.

  20. TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Nelson, J.; Jones, N.; Ames, D. P.

    2015-12-01

    Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive, they require more computational power to run, and purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there remains a significant technical barrier to leveraging them, one that inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, which have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage HTCondor, the open-source computing-resource and job management software, to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.
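
    The batch-scheduling layer described above rests on HTCondor. Below is a minimal sketch of what queueing ten model runs looks like at that layer, written against plain HTCondor rather than the CondorPy API; the executable name is hypothetical, and a working pool with condor_submit on the PATH is assumed.

        # Generate an HTCondor submit description and queue ten runs.
        import subprocess, textwrap
        from pathlib import Path

        submit = textwrap.dedent("""\
            universe   = vanilla
            executable = run_model.sh
            arguments  = scenario_$(Process).json
            output     = logs/out.$(Process)
            error      = logs/err.$(Process)
            log        = logs/model.log
            queue 10
        """)

        Path("logs").mkdir(exist_ok=True)
        Path("model.submit").write_text(submit)
        subprocess.run(["condor_submit", "model.submit"], check=True)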

  1. Post-basic nursing students' access to and attitudes toward the use of information technology in practice: a descriptive analysis.

    PubMed

    Nkosi, Z Z; Asah, F; Pillay, P

    2011-10-01

    Nurses are exposed to changing demands in technology as they execute their patient-related duties in the workplace. Integration of Information Technology (IT) in healthcare systems improves the quality of care provided, and nursing students with prior exposure to computers tend to hold more positive attitudes toward IT. A descriptive study design using a quantitative approach and a structured questionnaire was used to measure the nurses' attitudes towards computer usage. A census of 45 post-basic first-year nursing management students participated in this study. The students demonstrated a positive attitude towards the use of a computer, but access to and use of computers and IT were limited, and nurses in clinics had no access to IT. A lack of computer skills was identified as a factor that hinders access to IT. Nursing students agreed that computer literacy should be included in the curriculum to allow them to become independent computer users. The Department of Health should provide IT in all health-care facilities and train all health-care workers to use it. Given the positive attitudes expressed by the students, nurse managers need to create a conducive environment to ensure such attitudes continue. © 2011 Blackwell Publishing Ltd.

  2. Space shuttle program: Shuttle Avionics Integration Laboratory. Volume 7: Logistics management plan

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The logistics management plan for the shuttle avionics integration laboratory defines the organization, disciplines, and methodology for managing and controlling logistics support. Those elements requiring management include maintainability and reliability, maintenance planning, support and test equipment, supply support, transportation and handling, technical data, facilities, personnel and training, funding, and management data.

  3. High-Performance Computing and Visualization | Energy Systems Integration

    Science.gov Websites

    High-performance computing (HPC) and visualization at NREL propel technology innovation. NREL is home to Peregrine, the largest high-performance computing system

  4. MIMI: multimodality, multiresource, information integration environment for biomedical core facilities.

    PubMed

    Szymanski, Jacek; Wilson, David L; Zhang, Guo-Qiang

    2009-10-01

    The rapid expansion of biomedical research has brought substantial scientific and administrative data management challenges to modern core facilities. Scientifically, a core facility must be able to manage experimental workflow and the corresponding set of large and complex scientific data. It must also disseminate experimental data to relevant researchers in a secure and expedient manner that facilitates collaboration and provides support for data interpretation and analysis. Administratively, a core facility must be able to manage the scheduling of its equipment and to maintain a flexible and effective billing system to track material, resource, and personnel costs and charge for services to sustain its operation. It must also have the ability to regularly monitor the usage and performance of its equipment and to provide summary statistics on resources spent on different categories of research. To address these informatics challenges, we introduce a comprehensive system called MIMI (multimodality, multiresource, information integration environment) that integrates the administrative and scientific support of a core facility into a single web-based environment. We report the design, development, and deployment experience of a baseline MIMI system at an imaging core facility and discuss the general applicability of such a system in other types of core facilities. These initial results suggest that MIMI will be a unique, cost-effective approach to addressing the informatics infrastructure needs of core facilities and similar research laboratories.

  5. EPA Facility Registry Service (FRS): ICIS

    EPA Pesticide Factsheets

    This web feature service contains location and facility identification information from EPA's Facility Registry Service (FRS) for the subset of facilities that link to the Integrated Compliance Information System (ICIS). When complete, ICIS will provide a database that will contain integrated enforcement and compliance information across most of EPA's programs. The vision for ICIS is to replace EPA's independent databases that contain enforcement data with a single repository for that information. Currently, ICIS contains all Federal Administrative and Judicial enforcement actions and a subset of the Permit Compliance System (PCS), which supports the National Pollutant Discharge Elimination System (NPDES). ICIS exchanges non-sensitive enforcement/compliance activities, non-sensitive formal enforcement actions and NPDES information with FRS. This web feature service contains the enforcement/compliance activities and formal enforcement action related facilities; the NPDES facilities are contained in the PCS_NPDES web feature service. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using vigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on f

  6. The 10 MWe solar thermal central receiver pilot plant solar facilities design integration, RADL item 1-10

    NASA Astrophysics Data System (ADS)

    1980-07-01

    Accomplishments are reported in the areas of: program management, system integration, the beam characterization system, receiver unit, thermal storage subsystems, master control system, plant support subsystem and engineering services. A solar facilities design integration program action items update is included. Work plan changes and cost underruns are discussed briefly. (LEW)

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The Computing and Communications (C) Division is responsible for the Laboratory's Integrated Computing Network (ICN) as well as Laboratory-wide communications. Our computing network, used by 8,000 people distributed throughout the nation, constitutes one of the most powerful scientific computing facilities in the world. In addition to the stable production environment of the ICN, we have taken a leadership role in high-performance computing and have established the Advanced Computing Laboratory (ACL), the site of research on experimental, massively parallel computers; high-speed communication networks; distributed computing; and a broad variety of advanced applications. The computational resources available in the ACL are of the type needed to solve problems critical to national needs, the so-called "Grand Challenge" problems. The purpose of this publication is to inform our clients of our strategic and operating plans in these important areas. We review major accomplishments since late 1990 and describe our strategic planning goals and specific projects that will guide our operations over the next few years. Our mission statement, planning considerations, and management policies and practices are also included.

  9. ESIF 2016: Modernizing Our Grid and Energy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Becelaere, Kimberly

    This 2016 annual report highlights work conducted at the Energy Systems Integration Facility (ESIF) in FY 2016, including grid modernization, high-performance computing and visualization, and INTEGRATE projects.

  10. Comprehensive integrated planning: A process for the Oak Ridge Reservation, Oak Ridge, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-05-01

    The Oak Ridge Comprehensive Integrated Plan is intended to assist the US Department of Energy (DOE) and contractor personnel in implementing a comprehensive integrated planning process consistent with DOE Order 430.1, Life Cycle Asset Management and Oak Ridge Operations Order 430. DOE contractors are charged with developing and producing the Comprehensive Integrated Plan, which serves as a summary document, providing information from other planning efforts regarding vision statements, missions, contextual conditions, resources and facilities, decision processes, and stakeholder involvement. The Comprehensive Integrated Plan is a planning reference that identifies primary issues regarding major changes in land and facility use and serves all programs and functions on-site as well as the Oak Ridge Operations Office and DOE Headquarters. The Oak Ridge Reservation is a valuable national resource and is managed on the basis of the principles of ecosystem management and sustainable development and how mission, economic, ecological, social, and cultural factors are used to guide land- and facility-use decisions. The long-term goals of the comprehensive integrated planning process, in priority order, are to support DOE critical missions and to stimulate the economy while maintaining a quality environment.

  11. Optically-based Sensor System for Critical Nuclear Facilities Post-Event Seismic Structural Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCallen, David; Petrone, Floriana; Buckle, Ian

    The U.S. Department of Energy (DOE) has ownership and operational responsibility for a large enterprise of nuclear facilities that provide essential functions to DOE missions ranging from national security to discovery science and energy research. These facilities support a number of DOE programs and offices including the National Nuclear Security Administration, Office of Science, and Office of Environmental Management. With many unique and “one of a kind” functions, these facilities represent a tremendous national investment, and assuring their safety and integrity is fundamental to the success of a breadth of DOE programs. Many DOE critical facilities are located in regions with significant natural phenomenon hazards including major earthquakes and DOE has been a leader in developing standards for the seismic analysis of nuclear facilities. Attaining and sustaining excellence in nuclear facility design and management must be a core competency of the DOE. An important part of nuclear facility management is the ability to monitor facilities and rapidly assess the response and integrity of the facilities after any major upset event. Experience in the western U.S. has shown that understanding facility integrity after a major earthquake is a significant challenge which, lacking key data, can require extensive effort and significant time. In the work described in the attached report, a transformational approach to earthquake monitoring of facilities is described and demonstrated. An entirely new type of optically-based sensor that can directly and accurately measure the earthquake-induced deformations of a critical facility has been developed and tested. This report summarizes large-scale shake table testing of the sensor concept on a representative steel frame building structure, and provides quantitative data on the accuracy of the sensor measurements.

  12. Integrated instrumentation & computation environment for GRACE

    NASA Astrophysics Data System (ADS)

    Dhekne, P. S.

    2002-03-01

    The project GRACE (Gamma Ray Astrophysics with Coordinated Experiments) aims at setting up a state-of-the-art Gamma Ray Observatory at Mt. Abu, Rajasthan for undertaking comprehensive scientific exploration over a wide spectral window (10's keV - 100's TeV) from a single location through 4 coordinated experiments. The cumulative data collection rate of all the telescopes is expected to be about 1 GB/hr, necessitating innovations in the data management environment. The real-time data acquisition and control environment, as well as the off-line data processing, analysis, and visualization environment of these systems, is based on the use of cutting-edge and affordable technologies in the fields of computers, communications, and the Internet. We propose to provide a single, unified environment by seamless integration of instrumentation and computations, taking advantage of recent advancements in Web-based technologies. This new environment will allow researchers better access to facilities, improve resource utilization and enhance collaborations by having identical environments for online as well as offline usage of this facility from any location. We present here a proposed implementation strategy for a platform-independent web-based system that supplements automated functions with video-guided interactive and collaborative remote viewing, remote control through a virtual instrumentation console, remote acquisition of telescope data, data analysis, data visualization and active imaging system. This end-to-end web-based solution will enhance collaboration among researchers at the national and international level for undertaking scientific studies, using the telescope systems of the GRACE project.

  13. Design for perception management system on offshore reef based on integrated management

    NASA Astrophysics Data System (ADS)

    Peng, Li; Qiankun, Wang

    2017-06-01

    According to an analysis of actual monitoring demands using integrated management and information technology, a quad monitoring system is proposed to provide intelligent perception of offshore reefs, covering indoor building environments, architectural structures, facilities, and perimeter integrity. This will strengthen the ability to analyse and evaluate offshore reef operation and health, promoting efficiency in decision making.

  14. Sandia QIS Capabilities.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, Richard P.

    2017-07-01

    Sandia National Laboratories has developed a broad set of capabilities in quantum information science (QIS), including elements of quantum computing, quantum communications, and quantum sensing. The Sandia QIS program is built atop unique DOE investments at the laboratories, including the MESA microelectronics fabrication facility, the Center for Integrated Nanotechnologies (CINT) facilities (joint with LANL), the Ion Beam Laboratory, and ASC High Performance Computing (HPC) facilities. Sandia has invested $75 M of LDRD funding over 12 years to develop unique, differentiating capabilities that leverage these DOE infrastructure investments.

  15. GHG emission control and solid waste management for megacities with inexact inputs: a case study in Beijing, China.

    PubMed

    Lu, Hongwei; Sun, Shichao; Ren, Lixia; He, Li

    2015-03-02

    This study advances an integrated MSW management model under inexact input information for the city of Beijing, China. The model is capable of simultaneously generating MSW management policies, performing GHG emission control, and addressing system uncertainty. Results suggest that: (1) a management strategy with minimal system cost can be obtained even when suspension of certain facilities becomes unavoidable through specific increments of the remaining ones; (2) expansion of facilities depends only on actual needs, rather than enabling the full usage of existing facilities, although it may prove to be a costly proposition; (3) adjustment of the waste-stream diversion ratio directly leads to a change in GHG emissions from different disposal facilities. Results are also obtained from the comparison of the model with a conventional one without GHG emissions consideration. It is indicated that (1) the model would reduce the net system cost by [45, 61]% (i.e., [3173, 3520] million dollars) and mitigate GHG emissions by [141, 179]% (i.e., [76, 81] million tons); (2) increased waste would be diverted to integrated waste management facilities to prevent excessive CH4 emissions from the landfills. Copyright © 2014 Elsevier B.V. All rights reserved.
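
    The bracketed figures above are interval numbers, the usual device by which such "inexact" models carry input uncertainty through to results. A minimal sketch of that interval arithmetic, using illustrative waste streams and unit costs rather than the study's data:

```python
# Minimal sketch of interval ("inexact") arithmetic as used in inexact MSW
# management models: parameters are [lower, upper] bounds rather than point
# values, and costs propagate as intervals. All figures are illustrative.

def i_add(a, b):   # [a-, a+] + [b-, b+]
    return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):   # product of two intervals (all sign combinations)
    p = [x * y for x in a for y in b]
    return (min(p), max(p))

# Illustrative waste streams (tonnes/day) and unit costs ($/tonne),
# each known only to within bounds:
landfill_flow, landfill_cost = (900.0, 1100.0), (30.0, 40.0)
compost_flow,  compost_cost  = (300.0, 450.0), (50.0, 65.0)

total = i_add(i_mul(landfill_flow, landfill_cost),
              i_mul(compost_flow, compost_cost))
print(f"daily system cost lies in [{total[0]:.0f}, {total[1]:.0f}] $")
# -> daily system cost lies in [42000, 73250] $
```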

  16. FAA computer security : concerns remain due to personnel and other continuing weaknesses

    DOT National Transportation Integrated Search

    2000-08-01

    FAA has a history of computer security weaknesses in a number of areas, including its physical security management at facilities that house air traffic control (ATC) systems, systems security for both operational and future systems, management struct...

  17. GPU-Accelerated Large-Scale Electronic Structure Theory on Titan with a First-Principles All-Electron Code

    NASA Astrophysics Data System (ADS)

    Huhn, William Paul; Lange, Björn; Yu, Victor; Blum, Volker; Lee, Seyong; Yoon, Mina

    Density-functional theory has been well established as the dominant quantum-mechanical computational method in the materials community. Large accurate simulations become very challenging on small to mid-scale computers and require high-performance compute platforms to succeed. GPU acceleration is one promising approach. In this talk, we present a first implementation of all-electron density-functional theory in the FHI-aims code for massively parallel GPU-based platforms. Special attention is paid to the update of the density and to the integration of the Hamiltonian and overlap matrices, realized in a domain decomposition scheme on non-uniform grids. The initial implementation scales well across nodes on ORNL's Titan Cray XK7 supercomputer (8 to 64 nodes, 16 MPI ranks/node) and shows an overall runtime speedup of 1.4x from utilization of the K20X Tesla GPUs on each Titan node, with the charge density update showing a speedup of 2x. Further acceleration opportunities will be discussed. Work supported by the LDRD Program of ORNL managed by UT-Battelle, LLC, for the U.S. DOE and by the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.
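
    The two reported figures are consistent with Amdahl's law, which composes a kernel speedup s applied to a runtime fraction f into an overall speedup of 1/((1-f) + f/s). A small check, under the simplifying (and here hypothetical) assumption that the density update were the only accelerated kernel:

```python
# Amdahl's-law sanity check on the reported speedups (illustrative only):
# overall = 1 / ((1 - f) + f / s), where f is the runtime fraction of the
# accelerated kernel and s its speedup. If, hypothetically, the density
# update (s = 2) were the only GPU-accelerated part, what fraction f would
# yield the observed overall 1.4x?

def overall_speedup(f: float, s: float) -> float:
    return 1.0 / ((1.0 - f) + f / s)

def fraction_for_overall(target: float, s: float) -> float:
    # Solve 1 / ((1 - f) + f / s) = target for f.
    return (1.0 - 1.0 / target) / (1.0 - 1.0 / s)

f = fraction_for_overall(1.4, 2.0)
print(f"density update would need ~{f:.0%} of runtime")    # ~57%
print(f"check: overall = {overall_speedup(f, 2.0):.2f}x")  # 1.40x
```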

  18. Dual-Use Aspects of System Health Management

    NASA Technical Reports Server (NTRS)

    Owens, P. R.; Jambor, B. J.; Eger, G. W.; Clark, W. A.

    1994-01-01

    System Health Management functionality is an essential part of any space launch system. Health management functionality is an integral part of mission reliability, since it is needed to verify the reliability before the mission starts. Health Management is also a key factor in life cycle cost reduction and in increasing system availability. The degree of coverage needed by the system and the degree of coverage made available at a reasonable cost are critical parameters of a successful design. These problems are not unique to the launch vehicle world. In particular, the Intelligent Vehicle Highway System, commercial aircraft systems, train systems, and many types of industrial production facilities require various degrees of system health management. In all of these applications, too, the designers must balance the benefits and costs of health management in order to optimize costs. The importance of an integrated system is emphasized. That is, we present the case for considering health management as an integral part of system design, rather than functionality to be added on at the end of the design process. The importance of maintaining the system viewpoint is discussed in making hardware and software tradeoffs and in arriving at design decisions. We describe an approach to determine the parameters to be monitored in any system health management application. This approach is based on Design of Experiments (DOE), prototyping, failure modes and effects analyses, cost modeling and discrete event simulation. The various computer-based tools that facilitate the approach are discussed. The approach described originally was used to develop a fault tolerant avionics architecture for launch vehicles that incorporated health management as an integral part of the system. Finally, we discuss generalizing the technique to apply it to other domains. Several illustrations are presented.

  19. Integration of PanDA workload management system with Titan supercomputer at OLCF

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
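
    The backfill capability described here (sizing jobs to the unused nodes and time actually available) can be illustrated with a short sketch. This is not the PanDA pilot's real interface; the class, function, margin, and rank count below are hypothetical.

```python
# Hypothetical sketch of the backfill idea described above: shape a batch job
# to fit the worker nodes currently idle and the time until the next large
# reservation. Names and safety margins are invented, not PanDA's API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class BackfillWindow:
    free_nodes: int       # worker nodes currently idle
    minutes_free: int     # time until the next large reservation starts

def shape_job(window: BackfillWindow,
              max_nodes: int = 300,
              margin_minutes: int = 10) -> Optional[dict]:
    """Return batch-job parameters that fit inside the free window."""
    walltime = window.minutes_free - margin_minutes
    if window.free_nodes < 1 or walltime <= 0:
        return None                      # nothing useful fits right now
    return {
        "nodes": min(window.free_nodes, max_nodes),
        "walltime_min": walltime,
        # One single-threaded payload per core, fanned out by an MPI wrapper,
        # mirrors the light-weight wrapper approach the abstract describes.
        "ranks_per_node": 16,
    }

print(shape_job(BackfillWindow(free_nodes=412, minutes_free=95)))
# -> {'nodes': 300, 'walltime_min': 85, 'ranks_per_node': 16}
```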

  20. Integration of Titan supercomputer at OLCF with ATLAS Production System

    NASA Astrophysics Data System (ADS)

    Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single node workloads in parallel on Titan's multi-core worker nodes. It provides for running standard ATLAS production jobs on unused resources (backfill) on Titan. The system has already allowed ATLAS to collect millions of core-hours per month on Titan and execute hundreds of thousands of jobs, while simultaneously improving Titan's utilization efficiency. We will discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.

  1. EPA Facility Registry System (FRS): NEPT

    EPA Pesticide Factsheets

    This web feature service contains location and facility identification information from EPA's Facility Registry System (FRS) for the subset of facilities that link to the National Environmental Performance Track (NEPT) Program dataset. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using vigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. Additional information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs

  2. EPA Facility Registry Service (FRS): NEI

    EPA Pesticide Factsheets

    This web feature service contains location and facility identification information from EPA's Facility Registry Service (FRS) for the subset of facilities that link to the National Emissions Inventory (NEI) Program dataset. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using vigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. Additional information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs

  3. NASA technology program for future civil air transports

    NASA Technical Reports Server (NTRS)

    Wright, H. T.

    1983-01-01

    An assessment is undertaken of the development status of technology, applicable to future civil air transport design, which is currently undergoing conceptual study or testing at NASA facilities. The NASA civil air transport effort emphasizes advanced aerodynamic computational capabilities, fuel-efficient engines, advanced turboprops, composite primary structure materials, advanced aerodynamic concepts in boundary layer laminarization and aircraft configuration, refined control, guidance and flight management systems, and the integration of all these design elements into optimal systems. Attention is given to such novel transport aircraft design concepts as forward swept wings, twin fuselages, sandwich composite structures, and swept blade propfans.

  4. Executive control systems in the engineering design environment

    NASA Technical Reports Server (NTRS)

    Hurst, P. W.; Pratt, T. W.

    1985-01-01

    Executive Control Systems (ECSs) are software structures for the unification of various engineering design application programs into comprehensive systems with a central user interface (uniform access) method and a data management facility. Attention is presently given to the most significant findings of a research program that surveyed 24 ECSs used in government and industry engineering design environments to integrate CAD/CAE application programs. Characterizations are given for the systems' major architectural components and the alternative design approaches considered in their development. Attention is given to ECS development prospects in the areas of interdisciplinary usage, standardization, knowledge utilization, and computer science technology transfer.

  5. Medical mall founders' satisfaction and integrated management requirements.

    PubMed

    Ito, Atsushi

    2017-10-01

    Medical malls help provide integrated medical services and the effective and efficient independent management of multiple clinics, pharmacies and other medical facilities. Primary care in an aging society is a key issue worldwide and the establishment of a new model for primary care in Japanese medical malls is needed. Understanding the requirements of integrated management that contribute to the improvement of medical mall founders' satisfaction levels will help provide better services. We conducted a questionnaire survey targeting 1840 medical facilities nationwide; 351 facilities responded (19.1%). We performed comparative analyses on founders' satisfaction levels according to years in business, department/area, founder's relationship, decision-making system and presence/absence of liaison role. A total of 70% of medical malls in Japan have adjacent relationships with no liaison role in most cases; however, 60% of founders are satisfied. Integrated management requirements involve establishing the mall with peers from the same medical office unit or hospital, and establishing a system in which all founders can participate in decision-making (council system) or one where each general practitioner (GP) independently runs a clinic without communicating with others. The council system can ensure the capability of general practitioners to treat many primary care patients in the future. © 2016 The Authors. The International Journal of Health Planning and Management Published by John Wiley & Sons Ltd.

  6. SISYPHUS: A high performance seismic inversion factory

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In recent years, massively parallel high-performance computers have become the standard instruments for solving the forward and inverse problems in seismology. The software packages dedicated to forward and inverse waveform modelling that are specially designed for such computers (SPECFEM3D, SES3D) have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset performance benefits provided by even the most powerful modern supercomputers. Furthermore, a typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high-performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with branches for the static process setup, inversion iterations, and solver runs, each branch specifying information at the event, station and channel levels. The workflow management framework is based on an embedded scripting engine that allows definition of various workflow scenarios using a high-level scripting language and provides access to all available inversion components represented as standard library functions. At present the SES3D wave propagation solver is integrated in the solution; work is in progress on interfacing with SPECFEM3D. A separate framework is designed for interoperability with an optimization module; the workflow manager and optimization process run in parallel and cooperate by exchanging messages according to a specially designed protocol. A library of high-performance modules implementing signal pre-processing, misfit and adjoint computations according to established good practices is included. Monitoring is based on information stored in the inversion state database and at present implements a command line interface; design of a graphical user interface is in progress. The software design fits well into the common massively parallel system architecture featuring a large number of computational nodes running distributed applications under control of batch-oriented resource managers. The solution prototype has been implemented on the "Piz Daint" supercomputer provided by the Swiss National Supercomputing Centre (CSCS).
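
    The hierarchical inversion-state database described in point (1) can be pictured as nested records: static setup, then iterations, then solver runs, each resolved down to event, station, and channel level. A minimal illustrative sketch; field names are invented here, not SISYPHUS's schema:

```python
# Hypothetical sketch of a hierarchical inversion-state structure of the kind
# the abstract describes (iterations -> solver runs -> event/station/channel).

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ChannelState:
    misfit: float = 0.0
    adjoint_source_ready: bool = False

@dataclass
class SolverRun:
    event_id: str
    # station -> channel -> processing state
    stations: Dict[str, Dict[str, ChannelState]] = field(default_factory=dict)

@dataclass
class Iteration:
    number: int
    runs: List[SolverRun] = field(default_factory=list)

    def total_misfit(self) -> float:
        # Aggregate misfit over every run, station, and channel in this
        # iteration, the quantity a workflow manager would hand to the
        # optimization module.
        return sum(ch.misfit
                   for run in self.runs
                   for chans in run.stations.values()
                   for ch in chans.values())

it1 = Iteration(number=1)
run = SolverRun(event_id="EV_2015_001")
run.stations["STA01"] = {"BHZ": ChannelState(misfit=0.42,
                                             adjoint_source_ready=True)}
it1.runs.append(run)
print(it1.total_misfit())   # 0.42
```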

  7. Facility Registry Service (FRS)

    EPA Pesticide Factsheets

    This is a centrally managed database that identifies facilities either subject to environmental regulations or of environmental interest, providing an integrated source of air, water, and waste environmental data.

  8. Partners | Energy Systems Integration Facility | NREL

    Science.gov Websites

    Website listing of partner projects at the Energy Systems Integration Facility, including renewable electricity-to-grid integration, evaluation of new IGBT technology, high-performance computing and visualization, real-time data collection, and end-to-end communication and control; partners include Asetek and Schneider Electric.

  9. Integrated scheduling and resource management. [for Space Station Information System

    NASA Technical Reports Server (NTRS)

    Ward, M. T.

    1987-01-01

    This paper examines the problem of integrated scheduling during the Space Station era. Scheduling for Space Station entails coordinating the support of many distributed users who are sharing common resources and pursuing individual and sometimes conflicting objectives. This paper compares the scheduling integration problems of current missions with those anticipated for the Space Station era. It examines the facilities and the proposed operations environment for Space Station. It concludes that the pattern of interdependencies among the users and facilities, which is the source of the integration problem, is well structured, allowing the larger problem to be divided into smaller ones. It proposes an architecture to support integrated scheduling by scheduling efficiently at local facilities as a function of dependencies with other facilities of the program. A prototype is described that is being developed to demonstrate this integration concept.
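
    The proposed decomposition can be illustrated with a toy example: if the interdependencies between facilities form a well-structured (acyclic) graph, each facility can schedule locally once its upstream facilities have committed. The facility names and durations below are invented for illustration, not drawn from the paper:

```python
# Illustrative sketch of the decomposition idea: schedule each facility
# locally, but visit facilities in dependency order so each local schedule
# only needs the commitments already made by its upstream peers.

from graphlib import TopologicalSorter

# facility -> facilities whose schedules it depends on (hypothetical)
deps = {
    "payload_ops": {"power", "comms"},
    "comms": {"power"},
    "power": set(),
}
activity_hours = {"power": 2, "comms": 1, "payload_ops": 3}

committed_end = {}           # facility -> hour its activity finishes
for fac in TopologicalSorter(deps).static_order():
    # Local rule: start only after every upstream facility's commitment.
    start = max((committed_end[d] for d in deps[fac]), default=0)
    committed_end[fac] = start + activity_hours[fac]
    print(f"{fac}: start {start}h, end {committed_end[fac]}h")
# -> power: 0-2h, comms: 2-3h, payload_ops: 3-6h
```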

  10. Use of imagery and GIS for humanitarian demining management

    NASA Astrophysics Data System (ADS)

    Gentile, Jack; Gustafson, Glen C.; Kimsey, Mary; Kraenzle, Helmut; Wilson, James; Wright, Stephen

    1997-11-01

    In the Fall of 1996, the Center for Geographic Information Science at James Madison University became involved in a project for the Department of Defense evaluating the data needs and data management systems for humanitarian demining in the Third World. In particular, the effort focused on the information needs of demining in Cambodia and in Bosnia. In the first phase of the project one team attempted to identify all sources of unclassified country data, image data and map data. Parallel with this, another group collected information and evaluations on most of the commercial off-the-shelf computer software packages for the management of such geographic information. The result was a design for the kinds of data and the kinds of systems necessary to establish and maintain such a database as a humanitarian demining management tool. The second phase of the work involved acquiring the recommended data and systems, integrating the two, and producing a demonstration of the system. In general, the configuration involves ruggedized portable computers for field use with a greatly simplified graphical user interface, supported by a more capable central facility based on Pentium workstations and appropriate technical expertise.

  11. An SSH key management system: easing the pain of managing key/user/account associations

    NASA Astrophysics Data System (ADS)

    Arkhipkin, D.; Betts, W.; Lauret, J.; Shiryaev, A.

    2008-07-01

    Cyber security requirements for secure access to computing facilities often call for access controls via gatekeepers and the use of two-factor authentication. Using SSH keys to satisfy the two factor authentication requirement has introduced a potentially challenging task of managing the keys and their associations with individual users and user accounts. Approaches for a facility with the simple model of one remote user corresponding to one local user would not work at facilities that require a many-to-many mapping between users and accounts on multiple systems. We will present an SSH key management system we developed, tested and deployed to address the many-to-many dilemma in the environment of the STAR experiment. We will explain its use in an online computing context and explain how it makes possible the management and tracing of group account access spread over many sub-system components (data acquisition, slow controls, trigger, detector instrumentation, etc.) without the use of shared passwords for remote logins.
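
    The many-to-many association at the heart of the problem can be sketched as a single registry from which per-account authorized_keys files are regenerated and in which shared-account logins stay attributable. The sketch below is a hypothetical illustration of the idea, not the STAR tool itself; note that the authorized_keys environment option also requires PermitUserEnvironment to be enabled in sshd_config.

```python
# Hypothetical sketch: one registry mapping (user, key) pairs to account@host
# grants, from which per-account authorized_keys files are regenerated.

from collections import defaultdict

class KeyRegistry:
    def __init__(self):
        # (account, host) -> list of (username, public_key)
        self.grants = defaultdict(list)

    def grant(self, user: str, pubkey: str, account: str, host: str):
        self.grants[(account, host)].append((user, pubkey))

    def revoke_user(self, user: str):
        # One revocation removes the user's keys from every account at once,
        # the main pain point of hand-edited authorized_keys files.
        for k in self.grants:
            self.grants[k] = [(u, p) for u, p in self.grants[k] if u != user]

    def authorized_keys(self, account: str, host: str) -> str:
        # Tag each key with the real user so logins to a shared group
        # account remain attributable in the audit trail.
        return "\n".join(
            f'environment="REAL_USER={u}" {p}'
            for u, p in self.grants[(account, host)]
        )

reg = KeyRegistry()
reg.grant("alice", "ssh-rsa AAAA...alice", "daq", "online01")
reg.grant("bob",   "ssh-rsa AAAA...bob",   "daq", "online01")
print(reg.authorized_keys("daq", "online01"))
```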

  12. INTEGRATION OF FACILITY MODELING CAPABILITIES FOR NUCLEAR NONPROLIFERATION ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorensek, M.; Hamm, L.; Garcia, H.

    2011-07-18

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  13. 78 FR 32865 - Procedures To Establish Appropriate Minimum Block Sizes for Large Notional Off-Facility Swaps and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-31

    (Excerpt from the rule's table of contents: Financial Integrity of Markets; Price Discovery; Sound Risk Management Practices; Other Public Interest Considerations; Costs and Benefits.)

  14. Sustainability of integrated leprosy services in rural India: perceptions of community leaders in Uttar Pradesh.

    PubMed

    Raju, M S; Rao, P S S

    2011-01-01

    As part of a community-based action research project to reduce leprosy stigma, village committees were formed in three hyperendemic states of India. From a total of 10 village committees with nearly 200 members from Uttar Pradesh, a systematic random sample of 69 men and 23 women were interviewed in depth regarding their views on the sustainability of integrated leprosy services, as currently adopted. Their recommendations were also sought for further enhancement. Percentages were computed and compared for statistical significance using the z-normal test. The findings show that less than 50% of the respondents were confident that the present trend in voluntary early reporting for MDT and management of complications was adequate to sustain the integrated leprosy services. There were no differences between men and women members, and they felt that lack of proper facilities and of training and orientation of staff are the most influential factors. Many suggestions were given for improving sustainability.
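
    The z-normal test mentioned here is the standard two-proportion z-test. A worked example on illustrative counts (not the study's published data):

```python
# Two-proportion z-test: z statistic and two-sided p-value for H0: p1 == p2.
from math import sqrt, erf

def two_proportion_z(x1: int, n1: int, x2: int, n2: int):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                   # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    # Normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# e.g. 30 of 69 men vs 12 of 23 women agreeing with a statement:
z, p = two_proportion_z(30, 69, 12, 23)
print(f"z = {z:.2f}, p = {p:.2f}")  # z = -0.72, p = 0.47: not significant
```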

  15. Korea's transition to the IPCC : Introduction of BAT-based Integrated ACT

    NASA Astrophysics Data System (ADS)

    Lee, Daegyun; Yoo, Heungmin; Kim, Younglan

    2017-04-01

    Environmental pollution concerns have recently increased in Korea more than ever before. The Ministry of Environment and the National Institute of Environmental Research (NIER) in Korea have therefore forged a policy that can effectively reduce the environmental pollutants emitted from each business sector. This policy, the "Integrated Environmental Management Act", will be implemented from January 2017. It consolidates the management of each environmental medium (such as water and atmosphere) and discharge facility into a single authorization and/or permission system for the entire installation. In particular, it is the environmental management system under the "Act on Integrated Management of Environmental Pollution Facilities" that encourages active participation of companies, grants customized emission permits by considering the ambient environmental condition as well as best available techniques, and reviews the permitted items periodically. Through this optimal management policy, we expect to minimize environmental effects by reducing the production and emission of pollutants. The integrated environmental management system is a scientific and advanced new management system; it is also a policy that considers environmental and human health effects holistically and minimizes the emission of pollutants by applying the best available techniques. In this presentation, we discuss Korea's transition to IPCC (integrated pollution prevention and control) and introduce the new Integrated Environmental Management system of Korea.

  16. INTEGRITY - Integrated Human Exploration Mission Simulation Facility

    NASA Technical Reports Server (NTRS)

    Henninger, Donald L.

    2002-01-01

    It is proposed to develop a high-fidelity ground facility to carry out long-duration human exploration mission simulations. These would not be merely computer simulations - they would in fact comprise a series of actual missions that just happen to stay on earth. These missions would include all elements of an actual mission, using actual technologies that would be used for the real mission. These missions would also include such elements as extravehicular activities, robotic systems, telepresence and teleoperation, surface drilling technology, all using a simulated planetary landscape. A sequence of missions would be defined that get progressively longer and more robust, perhaps a series of five or six missions over a span of 10 to 15 years ranging in duration from 180 days up to 1000 days. This high-fidelity ground facility would operate hand-in-hand with a host of other terrestrial analog sites such as the Antarctic, Haughton Crater, and the Arizona desert. Of course, all of these analog mission simulations will be conducted here on earth in 1-g, and NASA will still need the Shuttle and ISS to carry out all the microgravity and hypogravity science experiments and technology validations. The proposed missions would have sufficient definition such that definitive requirements could be derived from them to serve as direction for all the program elements of the mission. Additionally, specific milestones would be established for the "launch" date of each mission so that R&D programs would have both good requirements and solid milestones from which to build their implementation plans. Mission aspects that could not be directly incorporated into the ground facility would be simulated via software. New management techniques would be developed for evaluation in this ground test facility program. These new techniques would have embedded metrics which would allow them to be continuously evaluated and adjusted so that by the time the sequence of missions is completed, the best management techniques will have been developed, implemented, and validated. A trained cadre of managers experienced with a large, complex program would then be available.

  17. Energy consumption and load profiling at major airports. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, J.

    1998-12-01

    This report describes the results of energy audits at three major US airports. These studies developed load profiles and quantified energy usage at these airports while identifying procedures and electrotechnologies that could reduce their power consumption. The major power consumers at the airports studied included central plants, runway and taxiway lighting, fuel farms, terminals, people mover systems, and hangar facilities. Several major findings emerged during the study. The amount of energy efficient equipment installed at an airport is directly related to the age of the facility. Newer facilities had more energy efficient equipment while older facilities had much of the original electric and natural gas equipment still in operation. As redesign, remodeling, and/or replacement projects proceed, responsible design engineers are selecting more energy efficient equipment to replace original devices. The use of computer-controlled energy management systems varies. At airports, the primary purpose of these systems is to monitor and control the lighting and environmental air conditioning and heating of the facility. Of the facilities studied, one used computer management extensively, one used it only marginally, and one had no computer controlled management devices. At all of the facilities studied, natural gas is used to provide heat and hot water. Natural gas consumption is at its highest in the months of November, December, January, and February. The Central Plant contains most of the inductive load at an airport and is also a major contributor to power consumption inefficiency. Power factor correction equipment was used at one facility but was not installed at the other two facilities due to high power factor and/or lack of need.
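
    The power factor correction noted at one facility follows standard sizing arithmetic: the capacitor bank must supply reactive power Q = P(tan(arccos(pf_old)) - tan(arccos(pf_new))). A worked example on illustrative numbers, not figures from the audits:

```python
# Standard capacitor-bank sizing for power factor correction. The load and
# power factors below are illustrative, not taken from the airport audits.
from math import acos, tan

def correction_kvar(p_kw: float, pf_old: float, pf_new: float) -> float:
    """Reactive power (kVAR) needed to raise power factor from pf_old to pf_new."""
    return p_kw * (tan(acos(pf_old)) - tan(acos(pf_new)))

# e.g. a 2,000 kW central plant load at 0.80 PF corrected to 0.95:
print(f"{correction_kvar(2000, 0.80, 0.95):.0f} kVAR")   # ~843 kVAR
```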

  18. Energy Systems Integration News | Energy Systems Integration Facility |

    Science.gov Websites

    Newsletter items include DOE-funded research projects integrating cybersecurity controls with power systems principles, and a hardware and software system that mimics communications, power systems, and cybersecurity.

  19. ICAT: Integrating data infrastructure for facilities based science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flannery, Damian; Matthews, Brian; Griffin, Tom

    2009-12-21

    Scientific facilities, in particular large-scale photon and neutron sources, have demanding requirements to manage the increasing quantities of experimental data they generate in a systematic and secure way. In this paper, we describe the ICAT infrastructure for cataloguing facility-generated experimental data, which has been in development within STFC and DLS for several years. We consider the factors which have influenced its design and describe its architecture and metadata model, a key tool in the management of data. We go on to give an outline of its current implementation and use, with plans for its future development.

  20. Numbers, systems, people: how interactions influence integration. Insights from case studies of HIV and reproductive health services delivery in Kenya

    PubMed Central

    Mayhew, Susannah H; Warren, Charlotte E; Collumbien, Martine; Ndwiga, Charity; Mutemwa, Richard; Lut, Irina; Colombini, Manuela; Vassall, Anna

    2017-01-01

    Drawing on rich data from the Integra evaluation of integrated HIV and reproductive-health services, we explored the interaction of systems hardware and software factors to explain why some facilities were able to implement and sustain integrated service delivery while others were not. This article draws on detailed mixed-methods data for four case-study facilities offering reproductive-health and HIV services between 2009 and 2013 in Kenya: (i) time-series client flow, tracking service uptake for 8841 clients; (ii) structured questionnaires with 24 providers; (iii) in-depth interviews with 17 providers; (iv) workload and facility data using a periodic activity review and cost-instruments; and (v) contextual data on external activities related to integration in study sites. Overall, our findings suggested that although structural factors like stock-outs, distribution of staffing and workload, rotation of staff can affect how integrated care is provided, all these factors can be influenced by staff themselves: both frontline and management. Facilities where staff displayed agency of decision making, worked as a team to share workload and had management that supported this, showed better integration delivery and staff were able to overcome some structural deficiencies to enable integrated care. Poor-performing facilities had good structural integration, but staff were unable to utilize this because they were poorly organized, unsupported or teams were dysfunctional. Conscientious objection and moralistic attitudes were also barriers. Integra has demonstrated that structural integration is not sufficient for integrated service delivery. Rather, our case studies show that in some cases excellent leadership and peer-teamwork enabled facilities to perform well despite resource shortages. The ability to provide support for staff to work flexibly to deliver integrated services and build resilient health systems to meet changing needs is particularly relevant as health systems face challenges of changing burdens of disease, climate change, epidemic outbreaks and more. PMID:29194544

  1. Advanced Simulation and Computing Fiscal Year 2016 Implementation Plan, Version 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCoy, M.; Archer, B.; Hendrickson, B.

    2015-08-27

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The purpose of this IP is to outline key work requirements to be performed and to control individual work activities within the scope of work. Contractors may not deviate from this plan without a revised WA or subsequent IP.

  2. Robust telerobotics - an integrated system for waste handling, characterization and sorting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Couture, S.A.; Hurd, R.L.; Wilhelmsen, K.C.

    The Mixed Waste Management Facility (MWMF) at the Lawrence Livermore National Laboratory was designed to serve as a national testbed to demonstrate integrated technologies for the treatment of low-level organic mixed waste at a pilot-plant scale. Pilot-scale demonstration serves to bridge the gap between mature, bench-scale proven technologies and full-scale treatment facilities by providing the infrastructure needed to evaluate technologies in an integrated, front-end to back-end facility. Consistent with the intent to focus on technologies that are ready for pilot-scale deployment, the front-end handling and feed preparation of incoming waste material has been designed to demonstrate the application of emerging robotic and remotely operated handling systems. The selection of telerobotics for remote handling in MWMF was made based on a number of factors - personnel protection, waste generation, maturity, cost, flexibility and extendibility. Telerobotics, or shared control of a manipulator by an operator and a computer, provides the flexibility needed to vary the amount of automation or operator intervention according to task complexity. As part of the telerobotics design effort, the technical risk of deploying the technology was reduced through focused developments and demonstrations. The work involved integrating key tools (1) to make a robust telerobotic system that operates at speeds and reliability levels acceptable to waste handling operators, and (2) to demonstrate an efficient operator interface that minimizes the amount of special training and skills needed by the operator. This paper describes the design and operation of the prototype telerobotic waste handling and sorting system that was developed for MWMF.
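
    Shared control, as defined above, blends operator and computer commands with a weighting that can follow task complexity. The sketch below shows one common blending scheme; the gains and commands are illustrative and are not the MWMF controller:

```python
# Minimal sketch of shared control: weighted mixing of operator and computer
# command streams for a manipulator. Gains and commands are illustrative.

def shared_control(op_cmd, auto_cmd, autonomy: float):
    """Blend operator and computer velocity commands.

    autonomy = 0.0 -> pure teleoperation; 1.0 -> fully automated.
    Task complexity can move this dial at run time, which is the
    flexibility the MWMF design cites for choosing telerobotics.
    """
    return [(1.0 - autonomy) * o + autonomy * a
            for o, a in zip(op_cmd, auto_cmd)]

operator = [0.10, 0.00, -0.05]    # m/s joystick command (x, y, z)
computer = [0.08, 0.02, -0.04]    # m/s planner command toward a sorting bin
print(shared_control(operator, computer, autonomy=0.6))
# -> [0.088, 0.012, -0.044]
```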

  3. Toward the Factory of the Future.

    ERIC Educational Resources Information Center

    Hazony, Yehonathan

    1983-01-01

    Computer-integrated manufacturing (CIM) involves use of data processing technology as the vehicle for full integration of the total manufacturing process. A prototype research and educational facility for CIM developed with industrial sponsorship at Princeton University is described. (JN)

  4. User Facilities

    Science.gov Websites

    Website listing of Los Alamos National Laboratory user facilities and programs: the Los Alamos Collaboration for Explosives Detection (LACED), SensorNexus, the Exascale Computing Project (ECP), the Center for Integrated Nanotechnologies (CINT), and the Los Alamos Neutron Science Center.

  5. Medical mall founders' satisfaction and integrated management requirements

    PubMed Central

    2016-01-01

    Medical malls help provide integrated medical services and the effective and efficient independent management of multiple clinics, pharmacies and other medical facilities. Primary care in an aging society is a key issue worldwide and the establishment of a new model for primary care in Japanese medical malls is needed. Understanding the requirements of integrated management that contribute to the improvement of medical mall founders' satisfaction levels will help provide better services. We conducted a questionnaire survey targeting 1840 medical facilities nationwide; 351 facilities responded (19.1%). We performed comparative analyses on founders' satisfaction levels according to years in business, department/area, founder's relationship, decision-making system and presence/absence of liaison role. A total of 70% of medical malls in Japan have adjacent relationships with no liaison role in most cases; however, 60% of founders are satisfied. Integrated management requirements involve establishing the mall with peers from the same medical office unit or hospital, and establishing a system in which all founders can participate in decision-making (council system) or one where each general practitioner (GP) independently runs a clinic without communicating with others. The council system can ensure the capability of general practitioners to treat many primary care patients in the future. © 2016 The Authors. The International Journal of Health Planning and Management Published by John Wiley & Sons Ltd PMID:27218206

  6. "It's very complicated": a qualitative study of medicines management in intermediate care facilities in Northern Ireland.

    PubMed

    Millar, Anna N; Hughes, Carmel M; Ryan, Cristín

    2015-06-02

    Intermediate care (IC) describes a range of services targeted at older people, aimed at preventing unnecessary hospitalisation, promoting faster recovery from illness and maximising independence. Older people are at increased risk of medication-related adverse events, but little is known about the provision of medicines management services in IC facilities. This study aimed to describe the current provision of medicines management services in IC facilities in Northern Ireland (NI) and to explore healthcare workers' (HCWs) and patients' views of, and attitudes towards these services and the IC concept. Semi-structured interviews were conducted, recorded, transcribed verbatim and analysed using a constant comparative approach with HCWs and patients from IC facilities in NI. Interviews were conducted with 25 HCWs and 18 patients from 12 IC facilities in NI. Three themes were identified: 'concept and reality', 'setting and supply' and 'responsibility and review'. A mismatch between the concept of IC and the reality was evident. The IC facility setting dictated prescribing responsibilities and the supply of medicines, presenting challenges for HCWs. A lack of a standardised approach to responsibility for the provision of medicines management services including clinical review was identified. Whilst pharmacists were not considered part of the multidisciplinary team, most HCWs recognised a need for their input. Medicines management was not a concern for the majority of IC patients. Medicines management services are not integral to IC and medicine-related challenges are frequently encountered. Integration of pharmacists into the multidisciplinary team could potentially improve medicines management in IC.

  7. Development of an integrated medical supply information system

    NASA Astrophysics Data System (ADS)

    Xu, Eric; Wermus, Marek; Blythe Bauman, Deborah

    2011-08-01

    The integrated medical supply inventory control system introduced in this study is a hybrid system that is shaped by the nature of medical supply, usage and storage capacity limitations of health care facilities. The system links demand, service provided at the clinic, health care service provider's information, inventory storage data and decision support tools into an integrated information system. The ABC analysis method, economic order quantity model, two-bin method and safety stock concept are applied as decision support models to tackle inventory management issues at health care facilities. In the decision support module, each medical item and storage location has been scrutinised to determine the best-fit inventory control policy. The pilot case study demonstrates that the integrated medical supply information system holds several advantages for inventory managers, since it brings the benefits of enterprise information systems to medical supply management and supports better patient services.
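
    Of the decision-support models named, the economic order quantity and safety stock are simple closed forms. A worked example on illustrative, hypothetical supply figures:

```python
# Worked examples of two of the decision-support models named above:
# economic order quantity (EOQ) and safety stock. Figures are illustrative.
from math import sqrt

def eoq(annual_demand: float, order_cost: float, holding_cost: float) -> float:
    """Economic order quantity: sqrt(2 * D * S / H)."""
    return sqrt(2 * annual_demand * order_cost / holding_cost)

def safety_stock(z: float, sd_daily_demand: float, lead_time_days: float) -> float:
    """Safety stock = z * sigma_d * sqrt(lead time)."""
    return z * sd_daily_demand * sqrt(lead_time_days)

# e.g. exam gloves: 12,000 boxes/yr, $25 per order, $0.60/box/yr to hold,
# 95% service level (z = 1.65), demand sd of 8 boxes/day, 7-day lead time:
print(f"order {eoq(12000, 25, 0.60):.0f} boxes at a time")      # ~1000
print(f"keep {safety_stock(1.65, 8, 7):.0f} boxes in reserve")  # ~35
```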

  8. NIST: Information Management in the AMRF

    NASA Technical Reports Server (NTRS)

    Callaghan, George (Editor)

    1991-01-01

    The information management strategies developed for the NIST Automated Manufacturing Research Facility (AMRF), a prototype small-batch manufacturing facility used for integration and measurement-related standards research, are outlined in this video. The five major manufacturing functions (design, process planning, off-line programming, shop floor control, and materials processing) are explained and their applications demonstrated.

  9. Scope of Work for Integration Management and Installation Services of the National Ignition Facility Beampath Infrastructure System

    NASA Astrophysics Data System (ADS)

    Coyle, P. D.

    2000-03-01

    The goal of the National Ignition Facility (NIF) project is to provide an above ground experimental capability for maintaining nuclear competence and weapons effects simulation and to provide a facility capable of achieving fusion ignition using solid-state lasers as the energy driver. The facility will incorporate 192 laser beams focused onto a small target located at the center of a spherical target chamber; the energy from the laser beams will be deposited in a few billionths of a second. The target will then implode, driving atomic nuclei to the high temperatures and densities necessary to achieve a miniature fusion reaction. The NIF is under construction at Livermore, California, approximately 50 miles southeast of San Francisco. The University of California, Lawrence Livermore National Laboratory (LLNL), operating under Prime Contract W-7405-ENG-48 with the U.S. Department of Energy (DOE), shall subcontract for Integration Management and Installation (IMI) Services for the Beampath Infrastructure System (BIS). Conventional Facilities work for the NIF Laser and Target Area Building (LTAB) and Optics Assembly Building (OAB) is over 86 percent constructed. This Scope of Work covers IMI Services comprising Management Services, Design Integration Services, Construction Services, and Commissioning Services for the NIF BIS. The BIS includes Beampath Hardware and Beampath Utilities: beampath vessels, enclosures, and beam tubes; auxiliary and utility systems; and support structures. A substantial amount of government-furnished equipment (GFE) will be provided by the University for installation as part of the infrastructure packages.

  10. Crosscut report: Exascale Requirements Reviews, March 9–10, 2017 – Tysons Corner, Virginia. An Office of Science review sponsored by: Advanced Scientific Computing Research, Basic Energy Sciences, Biological and Environmental Research, Fusion Energy Sciences, High Energy Physics, Nuclear Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Hack, James; Riley, Katherine

    The mission of the U.S. Department of Energy Office of Science (DOE SC) is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security missions of the United States. To achieve these goals in today’s world requires investments in not only the traditional scientific endeavors of theory and experiment, but also in computational science and the facilities that support large-scale simulation and data analysis. The Advanced Scientific Computing Research (ASCR) program addresses these challenges in the Office of Science. ASCR’s mission is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE. ASCR supports research in computational science, three high-performance computing (HPC) facilities — the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and Leadership Computing Facilities at Argonne (ALCF) and Oak Ridge (OLCF) National Laboratories — and the Energy Sciences Network (ESnet) at Berkeley Lab. ASCR is guided by science needs as it develops research programs, computers, and networks at the leading edge of technologies. As we approach the era of exascale computing, technology changes are creating challenges for SC science programs that need to use high-performance computing and data systems effectively. Numerous significant modifications to today’s tools and techniques will be needed to realize the full potential of emerging computing systems and other novel computing architectures. To assess these needs and challenges, ASCR held a series of Exascale Requirements Reviews in 2015–2017, one with each of the six SC program offices, and a subsequent Crosscut Review that sought to integrate the findings from each. Participants at the reviews were drawn from the communities of leading domain scientists, experts in computer science and applied mathematics, ASCR facility staff, and DOE program managers in ASCR and the respective program offices. The purpose of these reviews was to identify mission-critical scientific problems within the DOE Office of Science (including experimental facilities) and determine the requirements for the exascale ecosystem that would be needed to address those challenges. The exascale ecosystem includes exascale computing systems, high-end data capabilities, efficient software at scale, libraries, tools, and other capabilities. This effort will contribute to the development of a strategic roadmap for ASCR compute and data facility investments and will help the ASCR Facility Division establish partnerships with Office of Science stakeholders. It will also inform the Office of Science research needs and agenda. The results of the six reviews have been published in reports available on the web at http://exascaleage.org/. This report presents a summary of the individual reports and of common and crosscutting findings, and it identifies opportunities for productive collaborations among the DOE SC program offices.

  11. Energy Systems Integration Facility (ESIF): Golden, CO - Energy Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheppy, Michael; VanGeet, Otto; Pless, Shanti

    2015-03-01

    At NREL's Energy Systems Integration Facility (ESIF) in Golden, Colo., scientists and engineers work to overcome challenges related to how the nation generates, delivers and uses energy by modernizing the interplay between energy sources, infrastructure, and data. Test facilities include a megawatt-scale ac electric grid, photovoltaic simulators and a load bank. Additionally, a high performance computing data center (HPCDC) is dedicated to advancing renewable energy and energy efficient technologies. A key design strategy is to use waste heat from the HPCDC to heat parts of the building. The ESIF boasts an annual EUI of 168.3 kBtu/ft2. This article describes the building's procurement, design and first year of performance.

  12. Integrated Work Management: FOD/RLM, Course 31882

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, Lewis Edward

    The facility operations director (FOD) and responsible line manager (RLM) play leadership and functional roles in the integrated work management (IWM) process at Los Alamos National Laboratory (LANL). This course, Integrated Work Management: FOD/RLM (COURSE 31882), describes the IWM roles and responsibilities of the FOD and the RLM; it also discusses IWM requirements that must be met by the FOD and the RLM. Before taking this course, you may want to take COURSE 31881, Integrated Work Management: Overview. This self-study course would be particularly helpful if you are unfamiliar with the IWM process. You should also read Procedure (P) 300, Integrated Work Management. This course briefly covers the roles of the preparer and person in charge (PIC). For more in-depth instruction on the preparer’s role, see COURSE 31883, Integrated Work Management: Preparer. For instruction on the PIC’s role, see COURSE 31884, Integrated Work Management: PIC.

  13. Los Alamos Science Facilities

    Science.gov Websites

    Los Alamos National Laboratory user facilities and programs include the Los Alamos Collaboration for Explosives Detection (LACED), SensorNexus, the Exascale Computing Project (ECP), the Center for Integrated Nanotechnologies (CINT), and the Los Alamos Neutron…

  14. Improving heart failure disease management in skilled nursing facilities: lessons learned.

    PubMed

    Dolansky, Mary A; Hitch, Jeanne A; Piña, Ileana L; Boxer, Rebecca S

    2013-11-01

    The purpose of the study was to design and evaluate an improvement project that implemented HF management in four skilled nursing facilities (SNFs). Kotter's Change Management principles were used to guide the implementation. In addition, half of the facilities had an implementation coach who met with facility staff weekly for 4 months and monthly for 5 months. Weekly and monthly audits were performed that documented compliance with eight key aspects of the protocol. Contextual factors were captured using field notes. Adherence to the HF management protocols was variable ranging from 17% to 82%. Facilitators of implementation included staff who championed the project, an implementation coach, and physician involvement. Barriers were high staff turnover and a hierarchal culture. Opportunities exist to integrate HF management protocols to improve SNF care.

  15. G189A analytical simulation of the RITE Integrated Waste Management-Water System

    NASA Technical Reports Server (NTRS)

    Coggi, J. V.; Clonts, S. E.

    1974-01-01

    This paper discusses the computer simulation of the Integrated Waste Management-Water System Using Radioisotopes for Thermal Energy (RITE) and applications of the simulation. Variations in the system temperature and flows due to particular operating conditions and variations in equipment heating loads imposed on the system were investigated with the computer program. The results were assessed from the standpoint of the computed dynamic characteristics of the system and the potential applications of the simulation to system development and vehicle integration.

  16. DOE Network 2025: Network Research Problems and Challenges for DOE Scientists. Workshop Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    2016-02-01

    The growing investments in large science instruments and supercomputers by the US Department of Energy (DOE) hold enormous promise for accelerating the scientific discovery process. They facilitate unprecedented collaborations of geographically dispersed teams of scientists that use these resources. These collaborations critically depend on the production, sharing, moving, and management of, as well as interactive access to, large, complex data sets at sites dispersed across the country and around the globe. In particular, they call for significant enhancements in network capacities to sustain large data volumes and, equally important, the capabilities to collaboratively access the data across computing, storage, and instrument facilities by science users and automated scripts and systems. Improvements in network backbone capacities of several orders of magnitude are essential to meet these challenges, in particular, to support exascale initiatives. Yet, raw network speed represents only a part of the solution. Indeed, the speed must be matched by network and transport layer protocols and higher layer tools that scale in ways that aggregate, compose, and integrate the disparate subsystems into a complete science ecosystem. Just as important, agile monitoring and management services need to be developed to operate the network at peak performance levels. Finally, these solutions must be made an integral part of the production facilities by using sound approaches to develop, deploy, diagnose, operate, and maintain them over the science infrastructure.

  17. Supporting NASA Facilities Through GIS

    NASA Technical Reports Server (NTRS)

    Ingham, Mary E.

    2000-01-01

    The NASA GIS Team supports NASA facilities and partners in the analysis of spatial data. Geographic Information System (GIS) is an integration of computer hardware, software, and personnel linking topographic, demographic, utility, facility, image, and other geo-referenced data. The system provides a graphic interface to relational databases and supports decision making processes such as planning, design, maintenance and repair, and emergency response.

  18. Grid Computing and Collaboration Technology in Support of Fusion Energy Sciences

    NASA Astrophysics Data System (ADS)

    Schissel, D. P.

    2004-11-01

    The SciDAC Initiative is creating a computational grid designed to advance scientific understanding in fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling, and allowing more efficient use of experimental facilities. The philosophy is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as easy-to-use, network-available services. Access to services is stressed rather than portability. Services share the same basic security infrastructure, so that stakeholders can control their own resources, which helps ensure fair use of resources. The collaborative control room is being developed using the open-source Access Grid software that enables secure group-to-group collaboration with capabilities beyond teleconferencing, including application sharing and control. The ability to effectively integrate off-site scientists into a dynamic control room will be critical to the success of future international projects like ITER. Grid computing, the secure integration of computer systems over high-speed networks to provide on-demand access to data analysis capabilities and related functions, is being deployed as an alternative to traditional resource sharing among institutions. The first grid computational service deployed was the transport code TRANSP and included tools for run preparation, submission, monitoring and management. This approach saves user sites from the laborious effort of maintaining a complex code while at the same time reducing the burden on developers by avoiding the support of a large number of heterogeneous installations. This tutorial will present the philosophy behind an advanced collaborative environment, give specific examples, and discuss its usage beyond FES.
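
    The "codes as network services" model described here can be pictured as a thin client that submits a prepared run and then polls its status over the network. The sketch below is only a schematic of that interaction: the base URL, endpoint paths, and JSON field names are invented for illustration and are not the actual FusionGrid TRANSP interface.

    ```python
    import requests

    BASE = "https://transp.example.org/api"  # placeholder service URL

    def submit_run(case_file: str, token: str) -> str:
        """Upload a prepared run; return the service's job identifier."""
        with open(case_file, "rb") as f:
            r = requests.post(f"{BASE}/runs", files={"case": f},
                              headers={"Authorization": f"Bearer {token}"})
        r.raise_for_status()
        return r.json()["job_id"]

    def poll_status(job_id: str, token: str) -> str:
        """Monitor a submitted run (e.g., queued, running, complete)."""
        r = requests.get(f"{BASE}/runs/{job_id}",
                         headers={"Authorization": f"Bearer {token}"})
        r.raise_for_status()
        return r.json()["status"]
    ```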

  19. NREL, American Vanadium Demonstrate First-of-Its-Kind Battery Management

    Science.gov Websites

    NREL researchers are collaborating with American Vanadium, an integrated energy storage company, to evaluate and demonstrate the first North American…

  20. A Practical Guide to Management of Common Pests in Schools. Integrated Pest Management.

    ERIC Educational Resources Information Center

    Illinois State Dept. of Public Health, Springfield.

    This 3-part manual is designed to assist school officials understand the principles of Integrated Pest Management and aid them in implementing those principles into a comprehensive pest control program in their facilities. Developed for Illinois, this guide can be applied in part or in total to other areas of the country. Part 1 explains what an…

  1. Development and Evaluation of an Integrated Pest Management Toolkit for Child Care Providers

    ERIC Educational Resources Information Center

    Alkon, Abbey; Kalmar, Evie; Leonard, Victoria; Flint, Mary Louise; Kuo, Devina; Davidson, Nita; Bradman, Asa

    2012-01-01

    Young children and early care and education (ECE) staff are exposed to pesticides used to manage pests in ECE facilities in the United States and elsewhere. The objective of this pilot study was to encourage child care programs to reduce pesticide use and child exposures by developing and evaluating an Integrated Pest Management (IPM) Toolkit for…

  2. Development of the advanced life support Systems Integration Research Facility at NASA's Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Tri, Terry O.; Thompson, Clifford D.

    1992-01-01

    Future NASA manned missions to the moon and Mars will require development of robust regenerative life support system technologies which offer high reliability and minimal resupply. To support the development of such systems, early ground-based test facilities will be required to demonstrate integrated, long-duration performance of candidate regenerative air revitalization, water recovery, and thermal management systems. The advanced life support Systems Integration Research Facility (SIRF) is one such test facility currently being developed at NASA's Johnson Space Center. The SIRF, when completed, will accommodate unmanned and subsequently manned integrated testing of advanced regenerative life support technologies at ambient and reduced atmospheric pressures. This paper provides an overview of the SIRF project, a top-level description of test facilities to support the project, conceptual illustrations of integrated test article configurations for each of the three SIRF systems, and a phased project schedule denoting projected activities and milestones through the next several years.

  3. Integrated System Test of the Advanced Instructional System (AIS). Final Report.

    ERIC Educational Resources Information Center

    Lintz, Larry M.; And Others

    The integrated system test for the Advanced Instructional System (AIS) was designed to provide quantitative information regarding training time reductions resulting from certain computer managed instruction features. The reliabilities of these features and of support systems were also investigated. Basic computer managed instruction reduced…

  4. Teaching ergonomics to nursing facility managers using computer-based instruction.

    PubMed

    Harrington, Susan S; Walker, Bonnie L

    2006-01-01

    This study offers evidence that computer-based training is an effective tool for teaching nursing facility managers about ergonomics and increasing their awareness of potential problems. Study participants (N = 45) were randomly assigned to a treatment or control group. The treatment group completed the ergonomics training and a pre- and posttest. The control group completed the pre- and posttests without training. Treatment group participants improved significantly from 67% on the pretest to 91% on the posttest, a gain of 24 percentage points. Differences between mean scores for the control group were not significant for the total score or for any of the subtests.

  5. Cloud computing can simplify HIT infrastructure management.

    PubMed

    Glaser, John

    2011-08-01

    Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.

  6. A context-adaptable approach to clinical guidelines.

    PubMed

    Terenziani, Paolo; Montani, Stefania; Bottrighi, Alessio; Torchio, Mauro; Molino, Gianpaolo; Correndo, Gianluca

    2004-01-01

    One of the most relevant obstacles to the use and dissemination of clinical guidelines is the gap between the generality of guidelines (as defined, e.g., by physicians' committees) and the peculiarities of the specific context of application. In particular, general guidelines do not take into account the fact that the tools needed for laboratory and instrumental investigations might be unavailable at a given hospital. Moreover, computer-based guideline managers must also be integrated with the Hospital Information System (HIS), and usually different DBMS are adopted by different hospitals. The GLARE (Guideline Acquisition, Representation and Execution) system addresses these issues by providing a facility for automatic resource-based adaptation of guidelines to the specific context of application, and by providing a modular architecture in which only limited and well-localised changes are needed to integrate the system with the HIS at hand.
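
    The resource-based adaptation facility can be pictured as pruning guideline actions whose required investigations are unavailable at the local hospital. The sketch below is a minimal illustration of that idea with invented data structures; it does not reproduce GLARE's actual representation.

    ```python
    # Hypothetical guideline actions and the local resources each requires.
    guideline = [
        {"action": "chest CT", "requires": {"ct_scanner"}},
        {"action": "chest X-ray", "requires": {"xray_unit"}},
        {"action": "troponin assay", "requires": {"laboratory"}},
    ]

    def adapt(actions, available):
        """Keep only actions whose required resources exist at this site."""
        return [a for a in actions if a["requires"] <= available]

    local_resources = {"xray_unit", "laboratory"}  # no CT scanner here
    for a in adapt(guideline, local_resources):
        print("applicable:", a["action"])
    ```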

  7. The development of an automated flight test management system for flight test planning and monitoring

    NASA Technical Reports Server (NTRS)

    Hewett, Marle D.; Tartt, David M.; Duke, Eugene L.; Antoniewicz, Robert F.; Brumbaugh, Randal W.

    1988-01-01

    The development of an automated flight test management system (ATMS) as a component of a rapid-prototyping flight research facility for AI-based flight systems concepts is described. The rapid-prototyping facility includes real-time high-fidelity simulators, numeric and symbolic processors, and high-performance research aircraft modified to accept commands for a ground-based remotely augmented vehicle facility. The flight system configuration of the ATMS includes three computers: the TI explorer LX and two GOULD SEL 32/27s.

  8. A Statewide Management Information System for the Control of Sexually Transmitted Diseases

    PubMed Central

    Fichtner, Ronald R.; Blount, Joseph H.; Spencer, Jack N.

    1983-01-01

    The persistent endemicity in the U.S. of infectious syphilis and gonorrhea, together with increasing diagnoses of gonococcal-related pelvic inflammatory disease in women and genital herpes infections, has intensified pressures on state and local VD control programs to measure, analyze, and interpret the distribution and transmission of these and other sexually transmitted diseases. In response, the Division of Venereal Disease Control (DVDC) of the Centers for Disease Control (CDC) is participating in the development of three state-wide, prototype sexually transmitted disease (STD) management information systems. A systems analysis of a typical state-wide STD control program indicated that timely, comprehensive, informational support to public health managers and policy makers should be combined with rapid, direct support of program activities using an on-line, integrated database computer system with telecommunications capability. This methodology uses a database management system, a query facility for ad hoc inquiries, and custom design philosophies, but utilizes distinct hardware and software implementations.

  9. Client Functional Assessment Data as Management Information: Woodrow Wilson Rehabilitation Center's Management Information System

    PubMed Central

    Steidle, Ernest F.

    1983-01-01

    This paper describes the design of a functional assessment system, a component of a management information system (MIS) that supports a comprehensive rehabilitation facility. Products of the subsystem document the functional status of rehabilitation clients through process evaluation reporting and outcomes reporting. The purpose of this paper is to describe the design of this MIS component. The environment supported, the integration requirements, and the needed development approach are unique, requiring significant input from health care professionals, medical informatics specialists, statisticians and program evaluators. Strategies for the implementation of the functional assessment system are the major results reported in this paper. They are most useful to the systems designer or management engineer in a human service delivery setting. MIS plan development, computer file structure and access methods, and approaches to scheduling applications are described. Finally, the development of functional status measures is discussed. Application of the methodologies described will facilitate similar efforts towards systems development in other human service delivery settings.

  10. Sharing Responsibility for Data Stewardship Between Scientists and Curators

    NASA Astrophysics Data System (ADS)

    Hedstrom, M. L.

    2012-12-01

    Data stewardship is becoming increasingly important to support accurate conclusions from new forms of data, integration of and computation across heterogeneous data types, interactions between models and data, replication of results, data governance and long-term archiving. In addition to increasing recognition of the importance of data management, data science, and data curation by US and international scientific agencies, the National Academies of Science Board on Research Data and Information is sponsoring a study on Data Curation Education and Workforce Issues. Effective data stewardship requires a distributed effort among scientists who produce data, IT staff and/or vendors who provide data storage and computational facilities and services, and curators who enhance data quality, manage data governance, provide access to third parties, and assume responsibility for long-term archiving of data. The expertise necessary for scientific data management includes a mix of knowledge of the scientific domain; an understanding of domain data requirements, standards, ontologies and analytical methods; facility with leading edge information technology; and knowledge of data governance, standards, and best practices for long-term preservation and access that rarely are found in a single individual. Rather than developing data science and data curation as new and distinct occupations, this paper examines the set of tasks required for data stewardship. The paper proposes an alternative model that embeds data stewardship in scientific workflows and coordinates hand-offs between instruments, repositories, analytical processing, publishers, distributors, and archives. This model forms the basis for defining knowledge and skill requirements for specific actors in the processes required for data stewardship and the corresponding educational and training needs.

  11. Information Technology: Making It All Fit. Track II: Managing Technologies Integration.

    ERIC Educational Resources Information Center

    CAUSE, Boulder, CO.

    Nine papers from the 1988 CAUSE conference's Track II, Managing Technologies Integration, are presented. They include: "Computing in the '90s--Will We Be Ready for the Applications Needed?" (Stephen Patrick); "Glasnost, The Era of 'Openness'" (Bernard W. Gleason); "Academic and Administrative Computing: Are They Really…

  12. A Performance Measurement and Implementation Methodology in a Department of Defense CIM (Computer Integrated Manufacturing) Environment

    DTIC Science & Technology

    1988-01-24

    vanes. ...The new facility is currently being called the Engine Blade/Vane Facility (EB/VF). There are three primary goals in automating this proc... ...earlier, the search led primarily into the areas of CIM Justification, Automation Strategies, Performance Measurement, and Integration issues. ...of living, has been steadily eroding. One dangerous trend that has developed in keenly competitive world markets, says Rohan [33], has been for U.S...

  13. EMAAS: An extensible grid-based Rich Internet Application for microarray data analysis and management

    PubMed Central

    Barton, G; Abbott, J; Chiba, N; Huang, DW; Huang, Y; Krznaric, M; Mack-Smith, J; Saleem, A; Sherman, BT; Tiwari, B; Tomlinson, C; Aitman, T; Darlington, J; Game, L; Sternberg, MJE; Butcher, SA

    2008-01-01

    Background Microarray experimentation requires the application of complex analysis methods as well as the use of non-trivial computer technologies to manage the resultant large data sets. This, together with the proliferation of tools and techniques for microarray data analysis, makes it very challenging for a laboratory scientist to keep up-to-date with the latest developments in this field. Our aim was to develop a distributed e-support system for microarray data analysis and management. Results EMAAS (Extensible MicroArray Analysis System) is a multi-user rich internet application (RIA) providing simple, robust access to up-to-date resources for microarray data storage and analysis, combined with integrated tools to optimise real time user support and training. The system leverages the power of distributed computing to perform microarray analyses, and provides seamless access to resources located at various remote facilities. The EMAAS framework allows users to import microarray data from several sources to an underlying database, to pre-process, quality assess and analyse the data, to perform functional analyses, and to track data analysis steps, all through a single easy to use web portal. This interface offers distance support to users both in the form of video tutorials and via live screen feeds using the web conferencing tool EVO. A number of analysis packages, including R-Bioconductor and Affymetrix Power Tools have been integrated on the server side and are available programmatically through the Postgres-PLR library or on grid compute clusters. Integrated distributed resources include the functional annotation tool DAVID, GeneCards and the microarray data repositories GEO, CELSIUS and MiMiR. EMAAS currently supports analysis of Affymetrix 3' and Exon expression arrays, and the system is extensible to cater for other microarray and transcriptomic platforms. Conclusion EMAAS enables users to track and perform microarray data management and analysis tasks through a single easy-to-use web application. The system architecture is flexible and scalable to allow new array types, analysis algorithms and tools to be added with relative ease and to cope with large increases in data volume. PMID:19032776

  14. Leveraging Safety Programs to Improve and Support Security Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leach, Janice; Snell, Mark K.; Pratt, R.

    2015-10-01

    There has been a long history of considering Safety, Security, and Safeguards (3S) as three functions of nuclear security design and operations that need to be properly and collectively integrated with operations. This paper specifically considers how safety programmes can be extended directly to benefit security as part of an integrated facility management programme. The discussion will draw on experiences implementing such a programme at Sandia National Laboratories’ Annular Research Reactor Facility. While the paper focuses on nuclear facilities, similar ideas could be used to support security programmes at other types of high-consequence facilities and transportation activities.

  15. Waste receiving and processing facility module 1 data management system software project management plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, R.E.

    1994-11-02

    This document provides the software development plan for the Waste Receiving and Processing (WRAP) Module 1 Data Management System (DMS). The DMS is one of the plant computer systems for the new WRAP 1 facility (Project W-026). The DMS will collect, store, and report data required to certify the low level waste (LLW) and transuranic (TRU) waste items processed at WRAP 1 as acceptable for shipment, storage, or disposal.

  16. Study on Integrated Pest Management for Libraries and Archives.

    ERIC Educational Resources Information Center

    Parker, Thomas A.

    This study addresses the problems caused by the major insect and rodent pests and molds and mildews in libraries and archives; the damage they do to collections; and techniques for their prevention and control. Guidelines are also provided for the development and initiation of an Integrated Pest Management program for facilities housing library…

  17. Strategies for healthcare facilities, construction, and real estate management.

    PubMed

    Lee, James G

    2012-05-01

    Adventist HealthCare offers the following lessons learned in improving the value of healthcare facilities, construction, and real estate management: Use an integrated approach. Ensure that the objectives of the approach align with the hospital or health system's mission and values. Embrace innovation. Develop a plan that applies to the whole organization, rather than specific business units. Ensure commitment of senior leaders.

  18. Guidelines for Management Information Systems in Canadian Health Care Facilities

    PubMed Central

    Thompson, Larry E.

    1987-01-01

    The MIS Guidelines are a comprehensive set of standards for health care facilities for the recording of staffing, financial, workload, patient care and other management information. The Guidelines enable health care facilities to develop management information systems which identify resources, costs and products to more effectively forecast and control costs and utilize resources to their maximum potential, as well as provide improved comparability of operations. The MIS Guidelines were produced by the Management Information Systems (MIS) Project, a cooperative effort of the federal and provincial governments and provincial hospital/health associations, under the authority of the Canadian Federal/Provincial Advisory Committee on Institutional and Medical Services. The Guidelines are currently being implemented on a “test” basis in ten health care facilities across Canada, and portions are being integrated into government reporting as they are finalized.

  19. Final Environmental Assessment Addressing Implementation of the Integrated Natural Resources Management Plan for Kirtland Air Force Base

    DTIC Science & Technology

    2014-09-01

    square-foot facility to house the newly formed 498th Nuclear Systems Wing. This facility would be a two-story, steel-framed structure with... ...proposes to construct a 15,946-square-foot sustainment center for the Nuclear Weapons Center. This facility would be a two-story, steel-framed structure...

  20. Environmental Assessment for the NASA First Response Facility

    NASA Technical Reports Server (NTRS)

    Kennedy, Carolyn

    2003-01-01

    NASA intends to construct a First Response Facility for integrated emergency response and health management. This facility will consolidate the Stennis Space Center fire department, medical clinic, security operations, emergency operations and the energy management and control center. The alternative considered is the "No Action Alternative". The proposed action will correct existing operational weaknesses and enhance capabilities to respond to medical emergencies and mitigate any other possible threats. Environmental impacts include air emissions, wetlands disturbance, solid waste generation, and storm water control.

  1. Application of Framework for Integrating Safety, Security and Safeguards (3Ss) into the Design Of Used Nuclear Fuel Storage Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badwan, Faris M.; Demuth, Scott F

    Department of Energy’s Office of Nuclear Energy, Fuel Cycle Research and Development develops options to the current commercial fuel cycle management strategy to enable the safe, secure, economic, and sustainable expansion of nuclear energy while minimizing proliferation risks by conducting research and development focused on used nuclear fuel recycling and waste management to meet U.S. needs. Used nuclear fuel is currently stored onsite in either wet pools or in dry storage systems, with disposal envisioned in an interim storage facility and, ultimately, in a deep-mined geologic repository. The safe management and disposition of used nuclear fuel and/or nuclear waste is a fundamental aspect of any nuclear fuel cycle. Integrating safety, security, and safeguards (3Ss) fully in the early stages of the design process for a new nuclear facility has the potential to effectively minimize safety, proliferation, and security risks. The 3Ss integration framework could become the new national and international norm and the standard process for designing future nuclear facilities. The purpose of this report is to develop a framework for integrating the safety, security and safeguards concept into the design of a Used Nuclear Fuel Storage Facility (UNFSF). The primary focus is on integration of safeguards and security into the UNFSF based on the existing Nuclear Regulatory Commission (NRC) approach to addressing the safety/security interface (10 CFR 73.58 and Regulatory Guide 5.73) for nuclear power plants. The methodology used for adaptation of the NRC safety/security interface will be used as the basis for development of the safeguards/security interface and later as the basis for development of the safety/safeguards interface. This will complete the integration cycle of safety, security, and safeguards. The overall methodology for integration of 3Ss will be proposed, but only the integration of safeguards and security will be applied to the design of the UNFSF. The framework for integration of safeguards and security into the UNFSF will include 1) identification of applicable regulatory requirements, 2) selection of a common system that shares dual safeguards and security functions, 3) development of functional design criteria and design requirements for the selected system, 4) identification and integration of the dual safeguards and security design requirements, and 5) assessment of the integration and potential benefit.

  2. Numbers, systems, people: how interactions influence integration. Insights from case studies of HIV and reproductive health services delivery in Kenya.

    PubMed

    Mayhew, Susannah H; Sweeney, Sedona; Warren, Charlotte E; Collumbien, Martine; Ndwiga, Charity; Mutemwa, Richard; Lut, Irina; Colombini, Manuela; Vassall, Anna

    2017-11-01

    Drawing on rich data from the Integra evaluation of integrated HIV and reproductive-health services, we explored the interaction of systems hardware and software factors to explain why some facilities were able to implement and sustain integrated service delivery while others were not. This article draws on detailed mixed-methods data for four case-study facilities offering reproductive-health and HIV services between 2009 and 2013 in Kenya: (i) time-series client flow, tracking service uptake for 8841 clients; (ii) structured questionnaires with 24 providers; (iii) in-depth interviews with 17 providers; (iv) workload and facility data using a periodic activity review and cost-instruments; and (v) contextual data on external activities related to integration in study sites. Overall, our findings suggested that although structural factors like stock-outs, distribution of staffing and workload, rotation of staff can affect how integrated care is provided, all these factors can be influenced by staff themselves: both frontline and management. Facilities where staff displayed agency of decision making, worked as a team to share workload and had management that supported this, showed better integration delivery and staff were able to overcome some structural deficiencies to enable integrated care. Poor-performing facilities had good structural integration, but staff were unable to utilize this because they were poorly organized, unsupported or teams were dysfunctional. Conscientious objection and moralistic attitudes were also barriers. Integra has demonstrated that structural integration is not sufficient for integrated service delivery. Rather, our case studies show that in some cases excellent leadership and peer-teamwork enabled facilities to perform well despite resource shortages. The ability to provide support for staff to work flexibly to deliver integrated services and build resilient health systems to meet changing needs is particularly relevant as health systems face challenges of changing burdens of disease, climate change, epidemic outbreaks and more.

  3. XML Based Scientific Data Management Facility

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Zubair, M.; Ziebartt, John (Technical Monitor)

    2001-01-01

    The World Wide Web consortium has developed an Extensible Markup Language (XML) to support the building of better information management infrastructures. The scientific computing community, realizing the benefits of XML, has designed markup languages for scientific data. In this paper, we propose an XML-based scientific data management facility, XDMF. The project is motivated by the fact that even though a lot of scientific data is being generated, it is not being shared because of lack of standards and infrastructure support for discovering and transforming the data. The proposed data management facility can be used to discover the scientific data itself, the transformation functions, and also for applying the required transformations. We have built a prototype system of the proposed data management facility that can work on different platforms. We have implemented the system using Java and the Apache XSLT engine Xalan. To support remote data and transformation functions, we had to extend the XSLT specification and the Xalan package.

  4. XML Based Scientific Data Management Facility

    NASA Technical Reports Server (NTRS)

    Mehrotra, P.; Zubair, M.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The World Wide Web consortium has developed an Extensible Markup Language (XML) to support the building of better information management infrastructures. The scientific computing community, realizing the benefits of XML, has designed markup languages for scientific data. In this paper, we propose an XML-based scientific data management facility, XDMF. The project is motivated by the fact that even though a lot of scientific data is being generated, it is not being shared because of lack of standards and infrastructure support for discovering and transforming the data. The proposed data management facility can be used to discover the scientific data itself, the transformation functions, and also for applying the required transformations. We have built a prototype system of the proposed data management facility that can work on different platforms. We have implemented the system using Java and the Apache XSLT engine Xalan. To support remote data and transformation functions, we had to extend the XSLT specification and the Xalan package.
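
    The transformation step in XDMF rests on standard XSLT machinery. The snippet below shows that machinery using the lxml bindings; the file names are placeholders, and XDMF's own extensions to the XSLT specification are not reproduced here.

    ```python
    from lxml import etree

    # Placeholder file names; XDMF would discover these via its registry.
    doc = etree.parse("dataset.xml")                     # XML-tagged data
    transform = etree.XSLT(etree.parse("to_table.xsl"))  # transformation

    result = transform(doc)   # apply the stylesheet to the document
    print(str(result))        # serialized transformed output
    ```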

  5. ASC FY17 Implementation Plan, Rev. 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, P. G.

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions.

  6. Doing Your Science While You're in Orbit

    NASA Astrophysics Data System (ADS)

    Green, Mark L.; Miller, Stephen D.; Vazhkudai, Sudharshan S.; Trater, James R.

    2010-11-01

    Large-scale neutron facilities such as the Spallation Neutron Source (SNS) located at Oak Ridge National Laboratory need easy-to-use access to Department of Energy Leadership Computing Facilities and experiment repository data. The Orbiter thick- and thin-client and its supporting Service Oriented Architecture (SOA) based services (available at https://orbiter.sns.gov) consist of standards-based components that are reusable and extensible for accessing high performance computing, data and computational grid infrastructure, and cluster-based resources easily from a user-configurable interface. The primary Orbiter system goals consist of (1) developing infrastructure for the creation and automation of virtual instrumentation experiment optimization, (2) developing user interfaces for thin- and thick-client access, (3) providing a prototype incorporating major instrument simulation packages, and (4) facilitating neutron science community access and collaboration. The secure Orbiter SOA authentication and authorization is achieved through the developed Virtual File System (VFS) services, which use Role-Based Access Control (RBAC) for data repository file access, thin- and thick-client functionality and application access, and computational job workflow management. The VFS Relational Database Management System (RDMS) consists of approximately 45 database tables describing 498 user accounts with 495 groups over 432,000 directories with 904,077 repository files. Over 59 million NeXus file metadata records are associated with the 12,800 unique NeXus file field/class names generated from the 52,824 repository NeXus files. Services that enable (a) summary dashboards of data repository status with Quality of Service (QoS) metrics, (b) data repository NeXus file field/class name full text search capabilities within a Google-like interface, (c) a fully functional RBAC browser for the read-only data repository and shared areas, (d) user/group defined and shared metadata for data repository files, and (e) user, group, repository, and web 2.0 based global positioning with additional service capabilities are currently available. The SNS based Orbiter SOA integration progress with the Distributed Data Analysis for Neutron Scattering Experiments (DANSE) software development project is summarized with an emphasis on DANSE Central Services and the Virtual Neutron Facility (VNF). Additionally, the DANSE utilization of the Orbiter SOA authentication, authorization, and data transfer services best practice implementations are presented.
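
    Because NeXus files are HDF5 containers, the field/class metadata harvesting described here can be sketched with h5py; the file name is a placeholder, and this is not Orbiter's actual indexing code.

    ```python
    import h5py

    def harvest_metadata(path):
        """Collect object paths and NX_class attributes from one NeXus file."""
        records = []
        def visit(name, obj):
            nx_class = obj.attrs.get("NX_class", "")
            if isinstance(nx_class, bytes):
                nx_class = nx_class.decode()
            records.append((name, nx_class))
        with h5py.File(path, "r") as f:
            f.visititems(visit)
        return records

    # Placeholder file name; a repository would be walked file by file.
    for name, nx_class in harvest_metadata("experiment_0001.nxs"):
        print(name, nx_class or "-")
    ```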

  7. Automated Management Of Documents

    NASA Technical Reports Server (NTRS)

    Boy, Guy

    1995-01-01

    Report presents main technical issues involved in computer-integrated documentation. Problems associated with automation of management and maintenance of documents analyzed from perspectives of artificial intelligence and human factors. Technologies that may prove useful in computer-integrated documentation reviewed: these include conventional approaches to indexing and retrieval of information, use of hypertext, and knowledge-based artificial-intelligence systems.

  8. Waste Management Project fiscal year 1998 multi-year work plan, WBS 1.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobsen, P.H.

    The Waste Management Project manages and integrates (non-TWRS) waste management activities at the site. Activities include management of Hanford wastes as well as waste transferred to Hanford from other DOE, Department of Defense, or other facilities. This work includes handling, treatment, storage, and disposal of radioactive, nonradioactive, hazardous, and mixed solid and liquid wastes. Major Waste Management Projects are the Solid Waste Project, Liquid Effluents Project, and Analytical Services. Existing facilities (e.g., grout vaults and canyons) shall be evaluated for reuse for these purposes to the maximum extent possible.

  9. Laboratories | Energy Systems Integration Facility | NREL

    Science.gov Websites

    ESIF laboratories are designed to be safely divided into multiple test stand locations (or "capability hubs"). Laboratories include the Energy Systems Fabrication Laboratory, the Energy Systems High-Pressure Test Laboratory, the Energy Systems Integration Laboratory, the Energy Systems Sensor Laboratory, the Fuel Cell Development and Test Laboratory, and the High-Performance Computing…

  10. Rapid Prototyping of Computer-Based Presentations Using NEAT, Version 1.1.

    ERIC Educational Resources Information Center

    Muldner, Tomasz

    NEAT (iNtegrated Environment for Authoring in ToolBook) provides templates and various facilities for the rapid prototyping of computer-based presentations, a capability that is lacking in current authoring systems. NEAT is a specialized authoring system that can be used by authors who have a limited knowledge of computer systems and no…

  11. Lean coding machine. Facilities target productivity and job satisfaction with coding automation.

    PubMed

    Rollins, Genna

    2010-07-01

    Facilities are turning to coding automation to help manage the volume of electronic documentation, streamlining workflow, boosting productivity, and increasing job satisfaction. As EHR adoption increases, computer-assisted coding may become a necessity, not an option.

  12. Exploratory study on potential safeguards applications for shared ledger technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frazar, Sarah L.; Jarman, Kenneth D.; Joslyn, Cliff A.

    The International Atomic Energy Agency (IAEA) is responsible for providing credible assurance that countries are meeting their obligations not to divert or misuse nuclear materials and facilities for non-peaceful purposes. To this end, the IAEA integrates information about States’ nuclear material inventories and transactions with other types of data to draw its safeguards conclusions. As the amount and variety of data and information has increased, the IAEA’s data acquisition, management, and analysis processes have greatly benefited from advancements in computer science, data management, and cybersecurity during the last 20 years. Despite these advancements, inconsistent use of advanced computer technologies as well as political concerns among certain IAEA Member States centered on trust, transparency, and IAEA authorities limit the overall effectiveness and efficiency of IAEA safeguards. As a result, there is an ongoing need to strengthen the effectiveness and efficiency of IAEA safeguards while improving Member State cooperation and trust in the safeguards system. These chronic safeguards needs could be met with some emerging technologies, specifically those associated with the digital currency bitcoin.
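
    The shared-ledger idea mentioned above centers on tamper-evident records linked by cryptographic hashes. The sketch below is a bare-bones hash chain over hypothetical material-transfer records; real distributed-ledger systems add consensus, replication, and access control on top of this.

    ```python
    import hashlib, json

    def add_block(chain, record):
        """Append a record linked to its predecessor by a SHA-256 hash."""
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        body = {"record": record, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        chain.append({**body, "hash": digest})

    chain = []
    add_block(chain, {"material": "UO2", "kg": 12.5, "from": "A", "to": "B"})
    add_block(chain, {"material": "UO2", "kg": 12.5, "from": "B", "to": "C"})

    # Any edit to an earlier record breaks the prev-hash linkage.
    ok = all(b["prev"] == ("0" * 64 if i == 0 else chain[i - 1]["hash"])
             for i, b in enumerate(chain))
    print("chain intact:", ok)
    ```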

  13. Control and Information Systems for the National Ignition Facility

    DOE PAGES

    Brunton, Gordon; Casey, Allan; Christensen, Marvin; ...

    2017-03-23

    Orchestration of every National Ignition Facility (NIF) shot cycle is managed by the Integrated Computer Control System (ICCS), which uses a scalable software architecture running code on more than 1950 front-end processors, embedded controllers, and supervisory servers. The ICCS operates laser and industrial control hardware containing 66 000 control and monitor points to ensure that all of NIF’s laser beams arrive at the target within 30 ps of each other and are aligned to a pointing accuracy of less than 50 μm root-mean-square, while ensuring that a host of diagnostic instruments record data in a few billionths of a second. NIF’s automated control subsystems are built from a common object-oriented software framework that distributes the software across the computer network and achieves interoperation between different software languages and target architectures. A large suite of business and scientific software tools supports experimental planning, experimental setup, facility configuration, and post-shot analysis. Standard business services using open-source software, commercial workflow tools, and database and messaging technologies have been developed. An information technology infrastructure consisting of servers, network devices, and storage provides the foundation for these systems. This work provides an overview of the control and information systems used to support a wide variety of experiments during the National Ignition Campaign.
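
    The 30 ps simultaneity requirement quoted above reduces to a simple check over measured beam arrival times; the snippet below is illustrative arithmetic with made-up offsets, not ICCS code.

    ```python
    # Arrival-time offsets (ps) for a handful of the 192 beams, made up.
    arrival_ps = [0.0, 4.2, -3.1, 7.9, -6.5]

    spread = max(arrival_ps) - min(arrival_ps)   # worst pairwise difference
    print(f"spread = {spread:.1f} ps ->",
          "within spec" if spread <= 30.0 else "out of spec")
    ```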

  14. Control and Information Systems for the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunton, Gordon; Casey, Allan; Christensen, Marvin

    Orchestration of every National Ignition Facility (NIF) shot cycle is managed by the Integrated Computer Control System (ICCS), which uses a scalable software architecture running code on more than 1950 front-end processors, embedded controllers, and supervisory servers. The ICCS operates laser and industrial control hardware containing 66 000 control and monitor points to ensure that all of NIF’s laser beams arrive at the target within 30 ps of each other and are aligned to a pointing accuracy of less than 50 μm root-mean-square, while ensuring that a host of diagnostic instruments record data in a few billionths of a second. NIF’s automated control subsystems are built from a common object-oriented software framework that distributes the software across the computer network and achieves interoperation between different software languages and target architectures. A large suite of business and scientific software tools supports experimental planning, experimental setup, facility configuration, and post-shot analysis. Standard business services using open-source software, commercial workflow tools, and database and messaging technologies have been developed. An information technology infrastructure consisting of servers, network devices, and storage provides the foundation for these systems. This work provides an overview of the control and information systems used to support a wide variety of experiments during the National Ignition Campaign.

  15. 41 CFR 105-64.110 - When may GSA establish computer matching programs?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...

  16. 41 CFR 105-64.110 - When may GSA establish computer matching programs?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...

  17. 41 CFR 105-64.110 - When may GSA establish computer matching programs?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...

  18. 41 CFR 105-64.110 - When may GSA establish computer matching programs?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...

  19. 41 CFR 105-64.110 - When may GSA establish computer matching programs?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...

  20. Heavy-Duty Vehicle Thermal Management | Transportation Research | NREL

    Science.gov Websites

    NREL's heavy-duty vehicle (HDV) thermal management program, CoolCab, focuses on thermal management technologies that help vehicles meet more stringent idling regulations; these technologies undergo assessment at NREL's Vehicle Testing and Integration Facility.

  1. Control Strategies for Corridor Management

    DOT National Transportation Integrated Search

    2016-06-28

    Integrated management of travel corridors comprising freeways and adjacent arterial streets can potentially improve the performance of the highway facilities. However, several research gaps exist in data collection and performance measurement, ana...

  2. Faster response time, effective use of resources : integrating transportation systems and emergency management systems.

    DOT National Transportation Integrated Search

    1999-01-01

    When emergency services agencies share facilities and traffic monitoring resources with transportation management agencies, the efficiency and speed of incident response are measurably improved.

  3. An Integrated Ensemble-Based Operational Framework to Predict Urban Flooding: A Case Study of Hurricane Sandy in the Passaic and Hackensack River Basins

    NASA Astrophysics Data System (ADS)

    Saleh, F.; Ramaswamy, V.; Georgas, N.; Blumberg, A. F.; Wang, Y.

    2016-12-01

    Advances in computational resources and modeling techniques are opening the path to effectively integrate existing complex models. In the context of flood prediction, recent extreme events have demonstrated the importance of integrating components of the hydrosystem to better represent the interactions amongst different physical processes and phenomena. As such, there is a pressing need to develop holistic and cross-disciplinary modeling frameworks that effectively integrate existing models and better represent the operative dynamics. This work presents a novel Hydrologic-Hydraulic-Hydrodynamic Ensemble (H3E) flood prediction framework that operationally integrates existing predictive models representing coastal (New York Harbor Observing and Prediction System, NYHOPS), hydrologic (US Army Corps of Engineers Hydrologic Modeling System, HEC-HMS) and hydraulic (2-dimensional River Analysis System, HEC-RAS) components. The state-of-the-art framework is forced with 125 ensemble meteorological inputs from numerical weather prediction models including the Global Ensemble Forecast System, the European Centre for Medium-Range Weather Forecasts (ECMWF), the Canadian Meteorological Centre (CMC), the Short Range Ensemble Forecast (SREF) and the North American Mesoscale Forecast System (NAM). The framework produces, within a 96-hour forecast horizon, on-the-fly Google Earth flood maps that provide critical information for decision makers and emergency preparedness managers. The utility of the framework was demonstrated by retrospectively forecasting an extreme flood event, Hurricane Sandy, in the Passaic and Hackensack watersheds (New Jersey, USA). Hurricane Sandy caused significant damage to a number of critical facilities in this area including the New Jersey Transit's main storage and maintenance facility. The results of this work demonstrate that ensemble-based frameworks provide improved flood predictions and useful information about associated uncertainties, thus improving the assessment of risks when compared to a deterministic forecast. The work offers perspectives for short-term flood forecasts, flood mitigation strategies and best management practices for climate change scenarios.
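
    As a rough illustration of how an ensemble of forecasts becomes probabilistic flood guidance, the hypothetical Python sketch below computes, per location, the fraction of ensemble members whose simulated peak water level exceeds a flood threshold. The member count matches the 125 meteorological inputs mentioned above; the levels and thresholds are synthetic assumptions, not H3E output.

    ```python
    # Minimal sketch (not the authors' code) of turning an ensemble of forecasts
    # into probabilistic flood guidance: the fraction of members whose peak water
    # level exceeds a flood threshold at each location.
    import numpy as np

    rng = np.random.default_rng(0)
    n_members, n_locations = 125, 4            # the framework uses 125 meteorological members
    peak_levels = rng.normal(loc=2.0, scale=0.6,
                             size=(n_members, n_locations))   # meters, synthetic

    flood_stage = np.array([2.2, 1.8, 2.5, 2.0])           # per-location thresholds (assumed)
    exceedance = (peak_levels > flood_stage).mean(axis=0)  # P(flooding) per location
    ensemble_mean = peak_levels.mean(axis=0)

    for i, (p, m) in enumerate(zip(exceedance, ensemble_mean)):
        print(f"location {i}: mean peak {m:.2f} m, P(exceed flood stage) = {p:.0%}")
    ```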

  4. Telescience Resource Kit Software Capabilities and Future Enhancements

    NASA Technical Reports Server (NTRS)

    Schneider, Michelle

    2004-01-01

    The Telescience Resource Kit (TReK) is a suite of PC-based software applications that can be used to monitor and control a payload on board the International Space Station (ISS). This software provides a way for payload users to operate their payloads from their home sites. It can be used by an individual or a team of people. TReK provides both local ground support system services and an interface to utilize remote services provided by the Payload Operations Integration Center (POIC). For example, TReK can be used to receive payload data distributed by the POIC and to perform local data functions such as processing the data, storing it in local files, and forwarding it to other computer systems. TReK can also be used to build, send, and track payload commands. In addition to these features, work is in progress to add a new command management capability. This capability will provide a way to manage a multi-platform command environment that can include geographically distributed computers. This is intended to help those teams that need to manage a shared on-board resource such as a facility class payload. The environment can be configured such that one individual can manage all the command activities associated with that payload. This paper will provide a summary of existing TReK capabilities and a description of the new command management capability.
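
    The local data functions described above — receiving payload data, storing it in local files, and forwarding it to other computer systems — can be pictured with a minimal relay loop. The Python sketch below is an illustrative stand-in only: the ports, addresses, and raw-packet handling are assumptions and bear no relation to TReK's actual interfaces.

    ```python
    # Illustrative sketch of the local data functions described in the abstract:
    # receive payload telemetry packets, store them in a local file, and forward
    # them to another computer. Ports, addresses, and packet format are assumed.
    import socket

    LISTEN = ("0.0.0.0", 5001)       # where distributed payload data would arrive (hypothetical)
    FORWARD = ("192.0.2.10", 5002)   # downstream analysis machine (hypothetical)

    def relay(max_packets: int = 100) -> None:
        rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        rx.bind(LISTEN)
        with open("payload_telemetry.bin", "ab") as archive:
            for _ in range(max_packets):
                packet, _addr = rx.recvfrom(65535)
                archive.write(packet)          # local storage
                tx.sendto(packet, FORWARD)     # forwarding to another computer system

    if __name__ == "__main__":
        relay()
    ```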

  5. Scrapping Patched Computer Systems: Integrated Data Processing for Information Management.

    ERIC Educational Resources Information Center

    Martinson, Linda

    1991-01-01

    Colleges and universities must find a way to streamline and integrate information management processes across the organization. The Georgia Institute of Technology responded to an acute problem of dissimilar operating systems with a campus-wide integrated administrative system using a machine independent relational database management system. (MSE)

  6. BIBLIO: A Computer System Designed to Support the Near-Library User Model of Information Retrieval.

    ERIC Educational Resources Information Center

    Belew, Richard K.; Holland, Maurita Peterson

    1988-01-01

    Description of the development of the Information Exchange Facility, a prototype microcomputer-based personal bibliographic facility, covers software selection, user selection, overview of the system, and evaluation. The plan for an integrated system, BIBLIO, and the future role of libraries are discussed. (eight references) (MES)

  7. Analysis of accident sequences and source terms at waste treatment and storage facilities for waste generated by U.S. Department of Energy Waste Management Operations, Volume 3: Appendixes C-H

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, C.; Nabelssi, B.; Roglans-Ribas, J.

    1995-04-01

    This report contains the Appendices for the Analysis of Accident Sequences and Source Terms at Waste Treatment and Storage Facilities for Waste Generated by the U.S. Department of Energy Waste Management Operations. The main report documents the methodology, computational framework, and results of facility accident analyses performed as a part of the U.S. Department of Energy (DOE) Waste Management Programmatic Environmental Impact Statement (WM PEIS). The accident sequences potentially important to human health risk are specified, their frequencies are assessed, and the resultant radiological and chemical source terms are evaluated. A personal computer-based computational framework and database have been developed that provide these results as input to the WM PEIS for calculation of human health risk impacts. This report summarizes the accident analyses and aggregates the key results for each of the waste streams. Source terms are estimated and results are presented for each of the major DOE sites and facilities by WM PEIS alternative for each waste stream. The appendices identify the potential atmospheric release of each toxic chemical or radionuclide for each accident scenario studied. They also provide discussion of specific accident analysis data and guidance used or consulted in this report.

  8. Challenges in integrating multidisciplinary data into a single e-infrastructure

    NASA Astrophysics Data System (ADS)

    Atakan, Kuvvet; Jeffery, Keith G.; Bailo, Daniele; Harrison, Matthew

    2015-04-01

    The European Plate Observing System (EPOS) aims to create a pan-European infrastructure for solid Earth science to support a safe and sustainable society. The mission of EPOS is to monitor and understand the dynamic and complex Earth system by relying on new e-science opportunities and integrating diverse and advanced Research Infrastructures in Europe for solid Earth Science. EPOS will enable innovative multidisciplinary research for a better understanding of the Earth's physical and chemical processes that control earthquakes, volcanic eruptions, ground instability and tsunami as well as the processes driving tectonics and Earth's surface dynamics. EPOS will improve our ability to better manage the use of the subsurface of the Earth. Through integration of data, models and facilities EPOS will allow the Earth Science community to make a step change in developing new concepts and tools for key answers to scientific and socio-economic questions concerning geo-hazards and geo-resources as well as Earth sciences applications to the environment and to human welfare. EPOS is now getting into its Implementation Phase (EPOS-IP). One of the main challenges during the implementation phase is the integration of multidisciplinary data into a single e-infrastructure. Multidisciplinary data are organized and governed by the Thematic Core Services (TCS) and are driven by various scientific communities encompassing a wide spectrum of Earth science disciplines. TCS data, data products and services will be integrated into a platform "the ICS system" that will ensure their interoperability and access to these services by the scientific community as well as other users within the society. This requires dedicated tasks for interactions with the various TCS-WPs, as well as the various distributed ICS (ICS-Ds), such as High Performance Computing (HPC) facilities, large scale data storage facilities, complex processing and visualization tools etc. Computational Earth Science (CES) services are identified as a transversal activity and as such need to be harmonized and provided within the ICS. In order to develop a metadata catalogue and the ICS system, the content from the entire spectrum of services included in TCS, ICS-Ds as well as CES activities, need to be organized in a systematic manner taking into account global and European IT-standards, while complying with the user needs and data provider requirements.

  9. Deep learning for cardiac computer-aided diagnosis: benefits, issues & solutions.

    PubMed

    Loh, Brian C S; Then, Patrick H H

    2017-01-01

    Cardiovascular diseases are one of the top causes of deaths worldwide. In developing nations and rural areas, difficulties with diagnosis and treatment are made worse due to the deficiency of healthcare facilities. A viable solution to this issue is telemedicine, which involves delivering health care and sharing medical knowledge at a distance. Additionally, mHealth, the utilization of mobile devices for medical care, has also proven to be a feasible choice. The integration of telemedicine, mHealth and computer-aided diagnosis systems with the fields of machine and deep learning has enabled the creation of effective services that are adaptable to a multitude of scenarios. The objective of this review is to provide an overview of heart disease diagnosis and management, especially within the context of rural healthcare, as well as discuss the benefits, issues and solutions of implementing deep learning algorithms to improve the efficacy of relevant medical applications.

  10. Deep learning for cardiac computer-aided diagnosis: benefits, issues & solutions

    PubMed Central

    Then, Patrick H. H.

    2017-01-01

    Cardiovascular diseases are one of the top causes of deaths worldwide. In developing nations and rural areas, difficulties with diagnosis and treatment are made worse due to the deficiency of healthcare facilities. A viable solution to this issue is telemedicine, which involves delivering health care and sharing medical knowledge at a distance. Additionally, mHealth, the utilization of mobile devices for medical care, has also proven to be a feasible choice. The integration of telemedicine, mHealth and computer-aided diagnosis systems with the fields of machine and deep learning has enabled the creation of effective services that are adaptable to a multitude of scenarios. The objective of this review is to provide an overview of heart disease diagnosis and management, especially within the context of rural healthcare, as well as discuss the benefits, issues and solutions of implementing deep learning algorithms to improve the efficacy of relevant medical applications. PMID:29184897

  11. Integrating Green Purchasing Into Your Environmental Management System (EMS)

    EPA Pesticide Factsheets

    The goal of this report is to help Federal facilities integrate green purchasing into their EMS. The intended audience includes those tasked with implementing an EMS, reducing environmental impacts, and meeting green purchasing requirements.

  12. Management support and perceived consumer satisfaction in skilled nursing facilities.

    PubMed

    Metlen, Scott; Eveleth, Daniel; Bailey, Jeffrey J

    2005-08-01

    How managers 'manage' employees influences important firm outcomes. Heskett, Sasser, and Schlesinger contend that the level of internal support for service workers will influence consumer satisfaction. This study empirically explores how skilled nursing facility (SNF) managers affect consumer satisfaction by encouraging employee effectiveness and listening to employees to determine how to improve employee effectiveness. We extend previous research by proposing management as a form of internal support and demonstrating its relationship to service process integration, as a distinct form of internal support. The results of our individual-level investigation of 630 nursing assistants from 45 SNFs provide support for our two-part hypothesis. First, active management support and process integration, as elements of internal support, do lead to increased employee satisfaction and employee effectiveness. Second, the increased employee satisfaction and effectiveness was positively related to consumer satisfaction, as evaluated by the service workers. Thus, there is a positive influence of management's internal support of nursing assistants on perceived consumer satisfaction.

  13. LUMIS Interactive graphics operating instructions and system specifications

    NASA Technical Reports Server (NTRS)

    Bryant, N. A.; Yu, T. C.; Landini, A. J.

    1976-01-01

    The LUMIS program has designed an integrated geographic information system to assist program managers and planning groups in metropolitan regions. The system is designed to interactively interrogate a data base, graphically display a portion of the region enclosed in the data base, and perform cross tabulations of variables within each city block, block group, or census tract. The system is designed to interface with U.S. Census DIME file technology, but can accept alternative districting conventions. The system is described on three levels: (1) an introduction to the system's concept and potential applications; (2) the method of operating the system on an interactive terminal; and (3) a detailed system specification for computer facility personnel.
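
    A modern equivalent of the cross-tabulations LUMIS performed interactively can be expressed in a few lines of pandas. The sketch below uses a synthetic parcel table, and all column names are illustrative.

    ```python
    # Small sketch of the kind of cross-tabulation described above, here with
    # pandas over a synthetic parcel table; column names are illustrative.
    import pandas as pd

    parcels = pd.DataFrame({
        "census_tract": ["101", "101", "102", "102", "102"],
        "land_use":     ["residential", "commercial", "residential",
                         "industrial", "residential"],
        "area_acres":   [1.2, 0.8, 2.0, 3.5, 0.9],
    })

    # Counts of parcels by tract and land use...
    counts = pd.crosstab(parcels["census_tract"], parcels["land_use"])
    # ...and total acreage by tract and land use.
    acreage = pd.crosstab(parcels["census_tract"], parcels["land_use"],
                          values=parcels["area_acres"], aggfunc="sum").fillna(0)
    print(counts, acreage, sep="\n\n")
    ```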

  14. Emerging CAE technologies and their role in Future Ambient Intelligence Environments

    NASA Astrophysics Data System (ADS)

    Noor, Ahmed K.

    2011-03-01

    Dramatic improvements are on the horizon in Computer Aided Engineering (CAE) and various simulation technologies. The improvements are due, in part, to the developments in a number of leading-edge technologies and their synergistic combinations/convergence. The technologies include ubiquitous, cloud, and petascale computing; ultra high-bandwidth networks, pervasive wireless communication; knowledge based engineering; networked immersive virtual environments and virtual worlds; novel human-computer interfaces; and powerful game engines and facilities. This paper describes the frontiers and emerging simulation technologies, and their role in the future virtual product creation and learning/training environments. The environments will be ambient intelligence environments, incorporating a synergistic combination of novel agent-supported visual simulations (with cognitive learning and understanding abilities); immersive 3D virtual world facilities; development chain management systems and facilities (incorporating a synergistic combination of intelligent engineering and management tools); nontraditional methods; intelligent, multimodal and human-like interfaces; and mobile wireless devices. The Virtual product creation environment will significantly enhance the productivity and will stimulate creativity and innovation in future global virtual collaborative enterprises. The facilities in the learning/training environment will provide timely, engaging, personalized/collaborative and tailored visual learning.

  15. Computers Help Technicians Become Managers.

    ERIC Educational Resources Information Center

    Instructional Innovator, 1984

    1984-01-01

    Briefly describes the Academy of Advanced Traffic's use of the Numerax electronic tariff library in financial management, business logistics management, and warehousing courses to familiarize future traffic managers with time saving computer-based information systems that will free them to become integral members of their company's decision-making…

  16. Development of an Integrated Leachate Treatment Solution for the Port Granby Waste Management Facility - 12429

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conroy, Kevin W.; Vandergaast, Gerald

    2012-07-01

    The Port Granby Project (the Project) is located near the north shore of Lake Ontario in the Municipality of Clarington, Ontario, Canada. The Project consists of relocating approximately 450,000 m³ of historic Low-Level Radioactive Waste (LLRW) and contaminated soil from the existing Port Granby Waste Management Facility (WMF) to a proposed Long-Term Waste Management Facility (LTWMF) located adjacent to the WMF. The LTWMF will include an engineered waste containment facility, a Wastewater Treatment Plant (WTP), and other ancillary facilities. A series of bench- and pilot-scale test programs have been conducted to identify preferred treatment processes to be incorporated into the WTP to treat wastewater generated during the construction, closure and post-closure periods at the WMF/LTWMF. (authors)

  17. EPA Facility Registry Service (FRS): CERCLIS

    EPA Pesticide Factsheets

    This data provides location and attribute information on facilities regulated under the Comprehensive Environmental Response, Compensation, and Liability Information System (CERCLIS) for an intranet web feature service. The data provided in this service are obtained from EPA's Facility Registry Service (FRS). The FRS is an integrated source of comprehensive (air, water, and waste) environmental information about facilities, sites or places. This service connects directly to the FRS database to provide this data as a feature service. FRS creates high-quality, accurate, and authoritative facility identification records through rigorous verification and management procedures that incorporate information from program national systems, state master facility records, data collected from EPA's Central Data Exchange registrations and data management personnel. Additional information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs.
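
    Consuming such a feature service typically amounts to an HTTP query returning a JSON feature set. The Python sketch below shows the general pattern; the endpoint URL and field names are placeholders, not the actual FRS service addresses.

    ```python
    # Hypothetical sketch of querying a map/feature service for facility records.
    # The endpoint URL and attribute field name below are placeholders, not the
    # actual FRS service addresses.
    import requests

    SERVICE_URL = "https://example.gov/arcgis/rest/services/FRS/CERCLIS/MapServer/0/query"

    params = {
        "where": "STATE_CODE = 'NJ'",   # attribute filter (field name assumed)
        "outFields": "*",
        "f": "json",                    # ask for a JSON feature set
    }
    resp = requests.get(SERVICE_URL, params=params, timeout=30)
    resp.raise_for_status()
    for feature in resp.json().get("features", []):
        print(feature["attributes"])
    ```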

  18. Computer-Aided Design Speeds Development of Safe, Affordable, and Efficient

    Science.gov Websites

    Photo captions: the Systems Integration Facility's 3-D visualization room; researchers from industry, academia, national laboratories, and other research institutions (photos by Dennis Schroeder, NREL). Bringing CAEBAT to the Next Level: CAEBAT teams are now working to...

  19. 34 CFR 607.10 - What activities may and may not be carried out under a grant?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., including the integration of computer technology into institutional facilities to create smart buildings... academic programs or methodology, including computer-assisted instruction, that strengthen the academic... new technology or methodology to increase student success and retention or to retain accreditation; or...

  20. 34 CFR 607.10 - What activities may and may not be carried out under a grant?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., including the integration of computer technology into institutional facilities to create smart buildings... academic programs or methodology, including computer-assisted instruction, that strengthen the academic... new technology or methodology to increase student success and retention or to retain accreditation; or...

  1. Implementing and measuring the level of laboratory service integration in a program setting in Nigeria.

    PubMed

    Mbah, Henry; Negedu-Momoh, Olubunmi Ruth; Adedokun, Oluwasanmi; Ikani, Patrick Anibbe; Balogun, Oluseyi; Sanwo, Olusola; Ochei, Kingsley; Ekanem, Maurice; Torpey, Kwasi

    2014-01-01

    The surge of donor funds to fight the HIV/AIDS epidemic inadvertently resulted in the setup of laboratories as parallel structures to rapidly respond to the identified need. However, these parallel structures are a threat to the existing fragile laboratory systems. Laboratory service integration is critical to remedy this situation. This paper describes an approach to quantitatively measure and track integration of HIV-related laboratory services into the mainstream laboratory services and highlights some key intervention steps taken to enhance service integration. A quantitative before-and-after study was conducted in 122 Family Health International (FHI360)-supported health facilities across Nigeria. A minimum service package was identified, including management structure; trainings; equipment utilization and maintenance; and information, commodity and quality management for laboratory integration. A checklist was used to assess facilities at baseline and at 3 months follow-up. Level of integration was assessed on an ordinal scale (0 = no integration, 1 = partial integration, 2 = full integration) for each service package. A composite score, expressed as a percentage of the total obtainable score of 14, was defined and used to classify facilities (≥80% FULL, 25% to 79% PARTIAL, and <25% NO integration). Weaknesses were noted and addressed. We analyzed 9 (7.4%) primary, 104 (85.2%) secondary and 9 (7.4%) tertiary level facilities. There were statistically significant differences in integration levels between baseline and the 3-month follow-up period (p<0.01). The baseline median total integration score was 4 (IQR 3 to 5) compared to 7 (IQR 4 to 9) at 3 months follow-up (p = 0.000). Partially and fully integrated laboratory systems numbered 64 (52.5%) and 0 (0.0%) at baseline, compared to 100 (82.0%) and 3 (2.4%) respectively at 3 months follow-up (p = 0.000). This project showcases our novel approach to measuring the status of each laboratory on the integration continuum.
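
    The scoring rule described above is simple enough to state directly in code. The sketch below assumes the minimum service package decomposes into seven items (consistent with the two-point ordinal scale and the 14-point maximum); the item names are paraphrased from the abstract and the example scores are invented.

    ```python
    # Sketch of the scoring rule described above: seven service-package items,
    # each rated 0 (none), 1 (partial) or 2 (full), summed to a maximum of 14
    # and expressed as a percentage to classify the facility's integration level.
    def classify(scores: dict[str, int]) -> tuple[float, str]:
        if len(scores) != 7 or any(s not in (0, 1, 2) for s in scores.values()):
            raise ValueError("expected seven items scored 0, 1, or 2")
        pct = 100 * sum(scores.values()) / 14
        if pct >= 80:
            level = "FULL"
        elif pct >= 25:
            level = "PARTIAL"
        else:
            level = "NO integration"
        return pct, level

    facility = {"management structure": 1, "trainings": 2, "equipment utilization": 1,
                "equipment maintenance": 0, "information management": 1,
                "commodity management": 1, "quality management": 1}
    print(classify(facility))   # (50.0, 'PARTIAL')
    ```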

  2. The palliative care scorecard as an innovative approach in long-term care

    PubMed Central

    Esslinger, Adelheid Sussanne; Alzinger, Dagmar; Rager, Edeltraud

    2009-01-01

    Introduction: In long-term care facilities, professional concepts for palliative care are of great interest, as the individual needs of clients (residents, relatives, and friends) are the focus of services. Case: Within a long-term care facility of the Red Cross Organization in Germany, we developed a palliative care concept in 2008. It is integrated in the strategy of the whole organization. As the strategic management concept is based on the balanced scorecard, we introduced a palliative care scorecard. The facility offers 200 places for residents. It has established 27 strategic targets to achieve. One of these is to provide individual care. Another is to integrate relatives of residents. One more deals with the integration of volunteers. We decided to implement a palliative care concept within the target system (e.g. develop individual pain therapy, create and coordinate interdisciplinary palliative care teams). Results: The case shows how it is possible to integrate and strengthen the subject of palliative care within the existing management system of the organization. In order to translate the concept into action, it will be necessary to change the organizational culture into an 'open minded house'. This especially means that all members of the organization have to be trained and sensitized for the matters of care at the end of life. Conclusion: The development and implementation of an integrated concept of palliative care, which fits into the existing management system, is the base of a sustainable offer of specialized care for the residents and their social network. Therefore, not only the quality of care and life of the clients, but also the survival of the facility on the market of care will be assured.

  3. Systems Check: Community Colleges Turn to Facilities Assessments to Plan Capital Projects and Avoid Expensive Emergency Repairs

    ERIC Educational Resources Information Center

    Joch, Alan

    2014-01-01

    With an emphasis on planning and cutting costs to make better use of resources, facilities managers at community colleges across the nation have undertaken facilities audits usually with the help of outside engineers. Such assessments analyze the history and structural integrity of buildings and core components on campus, including heating…

  4. Federated data storage and management infrastructure

    NASA Astrophysics Data System (ADS)

    Zarochentsev, A.; Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Hristov, P.

    2016-10-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate storage needs growing by orders of magnitude; this will require new approaches in data storage organization and data handling. In our project we address the fundamental problem of designing an architecture to integrate distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. We have prototyped a federated storage for Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and Grid for the ALICE and ATLAS experiments. We will present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within national academic facilities for High Energy and Nuclear Physics as well as for other data-intensive science applications, such as bio-informatics.

  5. An integrated decision support system for TRAC: A proposal

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Optimal allocation and usage of resources is a key to effective management. Resources of concern to TRAC are: manpower (PSY), money (travel, contracts), computing, data, models, etc. Management activities of TRAC include planning, programming, tasking, monitoring, updating, and coordinating. Existing systems are insufficient and not completely automated, they are manpower intensive, and the potential for data inconsistency exists. A system is proposed which suggests a means to integrate all project management activities of TRAC through the development of sophisticated software and by utilizing the existing computing systems and network resources. The systems integration proposal is examined in detail.

  6. Lowering the Barriers to Using Data: Enabling Desktop-based HPD Science through Virtual Environments and Web Data Services

    NASA Astrophysics Data System (ADS)

    Druken, K. A.; Trenham, C. E.; Steer, A.; Evans, B. J. K.; Richards, C. J.; Smillie, J.; Allen, C.; Pringle, S.; Wang, J.; Wyborn, L. A.

    2016-12-01

    The Australian National Computational Infrastructure (NCI) provides access to petascale data in climate, weather, Earth observations, and genomics, and terascale data in astronomy, geophysics, ecology and land use, as well as social sciences. The data is centralized in a closely integrated High Performance Computing (HPC), High Performance Data (HPD) and cloud facility. Despite this, there remain significant barriers for many users to find and access the data: simply hosting a large volume of data is not helpful if researchers are unable to find, access, and use the data for their particular need. Use cases demonstrate we need to support a diverse range of users who are increasingly crossing traditional research discipline boundaries. To support their varying experience, access needs and research workflows, NCI has implemented an integrated data platform providing a range of services that enable users to interact with our data holdings. These services include: - A GeoNetwork catalog built on standardized Data Management Plans to search collection metadata, and find relevant datasets; - Web data services to download or remotely access data via OPeNDAP, WMS, WCS and other protocols; - Virtual Desktop Infrastructure (VDI) built on a highly integrated on-site cloud with access to both the HPC peak machine and research data collections. The VDI is a fully featured environment allowing visualization, code development and analysis to take place in an interactive desktop environment; and - A Learning Management System (LMS) containing User Guides, Use Case examples and Jupyter Notebooks structured into courses, so that users can self-teach how to use these facilities with examples from our system across a range of disciplines. We will briefly present these components, and discuss how we engage with data custodians and consumers to develop standardized data structures and services that support the range of needs. We will also highlight some key developments that have improved user experience in utilizing the services, particularly enabling transdisciplinary science. This work combines with other developments at NCI to increase the confidence of scientists from any field to undertake research and analysis on these important data collections regardless of their preferred work environment or level of skill.
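
    As a concrete illustration of the OPeNDAP-style remote access mentioned in the services list, the hedged Python sketch below opens a remote dataset lazily and transfers only the selected slice. The URL and variable name are placeholders, and DAP access requires a netCDF4 build with OPeNDAP support.

    ```python
    # Hedged sketch of remote data access via OPeNDAP, one of the web data
    # services listed above; the dataset URL and variable name are placeholders.
    import xarray as xr

    URL = "https://example.org/thredds/dodsC/climate/tasmax_daily.nc"  # hypothetical

    ds = xr.open_dataset(URL)                   # lazily opens the remote dataset
    subset = ds["tasmax"].sel(time="2015-01")   # only this slice is transferred
    print(subset.mean().values)
    ```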

  7. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making an AUD 100 million investment in compute and storage for the academic community. The compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
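
    The dynamic allocation described here — launching base VM instances that are then configured and joined to the batch cluster — can be sketched with openstacksdk's cloud layer. Everything below (cloud name, image, flavor, instance count) is a placeholder assumption, not the paper's actual tooling.

    ```python
    # A minimal sketch, assuming openstacksdk's "cloud" layer, of launching
    # worker VM instances from a base image so they can later be configured
    # (e.g. by Puppet) and joined to a dynamic batch cluster. Names are placeholders.
    import openstack

    conn = openstack.connect(cloud="mycloud")   # credentials come from clouds.yaml

    for i in range(4):
        server = conn.create_server(
            name=f"worker-{i:02d}",
            image="scientific-linux-base",      # base VM image (hypothetical)
            flavor="m1.large",                  # instance size (hypothetical)
            wait=True,                          # block until the VM is active
        )
        print(server.name, conn.get_server(server.id).status)
    ```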

  8. Sustainability of the integrated chronic disease management model at primary care clinics in South Africa.

    PubMed

    Mahomed, Ozayr H; Asmall, Shaidah; Voce, Anna

    2016-11-17

    An integrated chronic disease management (ICDM) model consisting of four components (facility reorganisation, clinical supportive management, assisted self-supportive management, and strengthening of support systems and structures outside the facility) has been implemented across 42 primary health care clinics in South Africa with a view to improving operational efficiency and patient clinical outcomes. The aim of this study was to assess the sustainability of the facility reorganisation and clinical support components 18 months after initiation. The study was conducted at 37 of the initiating clinics across three districts in three provinces of South Africa. The National Health Service (NHS) Institute for Innovation and Improvement Sustainability Model (SM) self-assessment tool was used to assess sustainability. Bushbuckridge had the highest mean sustainability score of 71.79 (95% CI: 63.70-79.89), followed by West Rand Health District (70.25; 95% CI: 63.96-76.53) and Dr Kenneth Kaunda District (66.50; 95% CI: 55.17-77.83). Four facilities (11%) had an overall sustainability score of less than 55. The less-than-optimal involvement of clinical leadership (doctors), negative staff behaviour towards the ICDM, limited adaptability or flexibility of the model to respond to external factors, and infrastructure limitations have the potential to negatively affect the sustainability and scale-up of the model.
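
    For readers unfamiliar with the summary statistics reported above, the hypothetical sketch below shows how per-district summaries of that shape (mean score, a normal-approximation 95% CI, and a count of facilities under the 55-point threshold) might be computed; the scores are invented.

    ```python
    # Sketch (synthetic data) of summarizing sustainability self-assessment scores
    # per district: mean with a normal-approximation 95% CI, plus a count of
    # facilities below the 55-point threshold noted in the abstract.
    import math
    import statistics as st

    def summarize(district: str, scores: list[float]) -> None:
        mean = st.mean(scores)
        half = 1.96 * st.stdev(scores) / math.sqrt(len(scores))
        low = [s for s in scores if s < 55]
        print(f"{district}: mean {mean:.2f} (95% CI {mean - half:.2f}-{mean + half:.2f}), "
              f"{len(low)} facility(ies) below 55")

    summarize("Bushbuckridge", [71, 78, 65, 74, 70])   # illustrative scores only
    ```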

  9. Petabyte Class Storage at Jefferson Lab (CEBAF)

    NASA Technical Reports Server (NTRS)

    Chambers, Rita; Davis, Mark

    1996-01-01

    By 1997, the Thomas Jefferson National Accelerator Facility will collect over one Terabyte of raw information per day of Accelerator operation from three concurrently operating Experimental Halls. When post-processing is included, roughly 250 TB of raw and formatted experimental data will be generated each year. By the year 2000, a total of one Petabyte will be stored on-line. Critical to the experimental program at Jefferson Lab (JLab) is the networking and computational capability to collect, store, retrieve, and reconstruct data on this scale. The design criteria include support of a raw data stream of 10-12 MB/second from Experimental Hall B, which will operate the CEBAF (Continuous Electron Beam Accelerator Facility) Large Acceptance Spectrometer (CLAS). Keeping up with this data stream implies design strategies that provide storage guarantees during accelerator operation, minimize the number of times data is buffered, allow seamless access to specific data sets for the researcher, synchronize data retrievals with the scheduling of post-processing calculations on the data reconstruction CPU farms, and support the site capability to perform data reconstruction and reduction at the same overall rate at which new data is being collected. The current implementation employs state-of-the-art StorageTek Redwood tape drives and a robotics library integrated with the Open Storage Manager (OSM) Hierarchical Storage Management software (Computer Associates International), the use of Fibre Channel RAID disks dual-ported between Sun Microsystems SMP servers, and a network-based interface to a 10,000-SPECint92 data processing CPU farm. Issues of efficiency, scalability, and manageability will become critical to meet the year 2000 requirements for a Petabyte of near-line storage interfaced to over 30,000 SPECint92 of data processing power.
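
    The headline figure is easy to verify with back-of-envelope arithmetic; the snippet below is pure arithmetic on the rates quoted in the abstract.

    ```python
    # Back-of-envelope check of the quoted rates (pure arithmetic, no external data):
    rate_mb_s = 12                           # peak Hall B raw data stream, MB/s
    tb_per_day = rate_mb_s * 86_400 / 1e6    # 86,400 s/day; 1e6 MB per TB
    print(f"~{tb_per_day:.2f} TB per day of accelerator operation")  # ~1.04 TB/day,
    # consistent with "over one Terabyte of raw information per day"
    ```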

  10. Overview of the Life Science Glovebox (LSG) Facility and the Research Performed in the LSG

    NASA Technical Reports Server (NTRS)

    Cole, J. Michael; Young, Yancy

    2016-01-01

    The Life Science Glovebox (LSG) is a rack facility currently under development with a projected availability for International Space Station (ISS) utilization in the FY2018 timeframe. Development of the LSG is being managed by the Marshall Space Flight Center (MSFC) with support from Ames Research Center (ARC) and Johnson Space Center (JSC). MSFC will continue management of LSG operations, payload integration, and sustaining engineering following delivery to the ISS. The LSG will accommodate life science and technology investigations in a "workbench" type environment. The facility has an enclosed working volume that is held at a negative pressure with respect to the crew living area. This allows the facility to provide two levels of containment for handling Biohazard Level II and lower biological materials. This containment approach protects the crew from possibly hazardous operations that take place inside the LSG work volume. Research investigations operating inside the LSG are provided approximately 15 cubic feet of enclosed work space; 350 watts of 28 Vdc and 110 Vac power (combined); video and data recording; and real-time downlink. These capabilities will make the LSG a highly utilized facility on the ISS. The LSG will be used for biological studies including rodent research and cell biology. The LSG facility is operated by the Payload Operations Integration Center at MSFC. Payloads may also be operated remotely from different telescience centers located in the United States and other countries. The Investigative Payload Integration Manager (IPIM) is the focal point for assisting organizations that have payloads operating in the LSG facility. NASA provides an LSG qualification unit for payload developers to verify that their hardware is operating properly before actual operation on the ISS. This poster will provide an overview of the LSG facility and a synopsis of the research that will be accomplished in the LSG. The authors would like to acknowledge Ames Research Center, Johnson Space Center, Teledyne Brown Engineering, MOOG-Bradford Engineering and the entire LSG Team for their inputs into this abstract.

  11. Quality of child healthcare at primary healthcare facilities: a national assessment of the Integrated Management of Childhood Illnesses in Afghanistan.

    PubMed

    Mansoor, Ghulam Farooq; Chikvaidze, Paata; Varkey, Sherin; Higgins-Steele, Ariel; Safi, Najibullah; Mubasher, Adela; Yusufi, Khaksar; Alawi, Sayed Alisha

    2017-02-01

    To assess the quality of the national Integrated Management of Childhood Illness (IMCI) program services provided for sick children at primary health facilities in Afghanistan. Mixed methods including a cross-sectional study. Thirteen (of thirty-four) provinces in Afghanistan. Observation of case management and re-examination of 177 sick children, exit interviews with caretakers, and review of equipment/supplies at 44 health facilities. Introduction and scale-up of Integrated Management of Childhood Illness at primary health care facilities. Care of sick children according to IMCI guidelines, health worker skills and essential health system elements. Thirty-two (71%) of the health workers were trained in IMCI and five (11%) received supervision in clinical case management during the past 6 months. On average, 5.4 of 10 main assessment tasks were performed during the cases observed, the index being higher in children seen by trained providers than untrained (6.3 vs 3.5, 95% CI 5.8-6.8 vs 2.9-4.1). In all, 74% of the 104 children who needed oral antibiotics received prescriptions, while 30% received complete and correct advice and 30% were overprescribed, more so by untrained providers. Home care counseling was associated with provider training status (41.3% by trained and 24.5% by untrained). Essential oral and pre-referral injectable medicines and equipment/supplies were available in 66%, 23%, and 45% of health facilities, respectively. IMCI training improved assessment, rational use of antibiotics and counseling; further investment in IMCI in Afghanistan, continuing provider capacity building, and supportive supervision are needed for improved quality of care and counseling for sick children, especially given the high burden of treatable childhood illness.

  12. Satellite remote sensing for hydrology and water management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, E.C.; Power, C.H.; Micallef, A.

    Interest in satellite remote sensing is fast moving away from pure science and individual case studies towards truly operational applications. At the same time, the micro-computer revolution is ensuring that data reception and processing facilities need no longer be the preserve of a small number of global centers, but can be commonplace installations in smaller countries and even local regional agency offices or laboratories. As remote sensing matures and its applications proliferate, a new type of treatment is required to ensure both that decision makers, managers and engineers with problems to solve are informed of today's opportunities and that scientists are provided with integrated overviews of the ever-growing need for their services. This book addresses these needs, focusing uniquely on the area bounded by satellite remote sensing, pure and applied hydrological sciences, and a specific world region, namely the Mediterranean basin.

  13. Software Manages Documentation in a Large Test Facility

    NASA Technical Reports Server (NTRS)

    Gurneck, Joseph M.

    2001-01-01

    The 3MCS computer program assists an instrumentation engineer in performing the three essential functions of design, documentation, and configuration management of measurement and control systems in a large test facility. Services provided by 3MCS are acceptance of input from multiple engineers and technicians working at multiple locations; standardization of drawings; automated cross-referencing; identification of errors; listing of components and resources; downloading of test settings; and provision of information to customers.

  14. A secure file manager for UNIX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeVries, R.G.

    1990-12-31

    The development of a secure file management system for a UNIX-based computer facility with supercomputers and workstations is described. Specifically, UNIX in its usual form does not address: (1) operation which would satisfy rigorous security requirements; (2) online space management in an environment where total data demands would be many times the actual online capacity; and (3) making the file management system part of a computer network in which users of any computer in the local network could retrieve data generated on any other computer in the network. The characteristics of UNIX can be exploited to develop a portable, secure file manager which would operate on computer systems ranging from workstations to supercomputers. Implementation considerations making unusual use of UNIX features, rather than requiring extensive internal system changes, are described, and implementation using the Cray Research Inc. UNICOS operating system is outlined.

  15. Re-engineering Nascom's network management architecture

    NASA Technical Reports Server (NTRS)

    Drake, Brian C.; Messent, David

    1994-01-01

    The development of Nascom systems for ground communications began in 1958 with Project Vanguard. The low-speed systems (rates less than 9.6 Kbs) were developed following existing standards; but, there were no comparable standards for high-speed systems. As a result, these systems were developed using custom protocols and custom hardware. Technology has made enormous strides since the ground support systems were implemented. Standards for computer equipment, software, and high-speed communications exist and the performance of current workstations exceeds that of the mainframes used in the development of the ground systems. Nascom is in the process of upgrading its ground support systems and providing additional services. The Message Switching System (MSS), Communications Address Processor (CAP), and Multiplexer/Demultiplexer (MDM) Automated Control System (MACS) are all examples of Nascom systems developed using standards such as, X-windows, Motif, and Simple Network Management Protocol (SNMP). Also, the Earth Observing System (EOS) Communications (Ecom) project is stressing standards as an integral part of its network. The move towards standards has produced a reduction in development, maintenance, and interoperability costs, while providing operational quality improvement. The Facility and Resource Manager (FARM) project has been established to integrate the Nascom networks and systems into a common network management architecture. The maximization of standards and implementation of computer automation in the architecture will lead to continued cost reductions and increased operational efficiency. The first step has been to derive overall Nascom requirements and identify the functionality common to all the current management systems. The identification of these common functions will enable the reuse of processes in the management architecture and promote increased use of automation throughout the Nascom network. The MSS, CAP, MACS, and Ecom projects have indicated the potential value of commercial-off-the-shelf (COTS) and standards through reduced cost and high quality. The FARM will allow the application of the lessons learned from these projects to all future Nascom systems.

  16. Integration and use of Microgravity Research Facility: Lessons learned by the crystals by vapor transport experiment and Space Experiments Facility programs

    NASA Technical Reports Server (NTRS)

    Heizer, Barbara L.

    1992-01-01

    The Crystals by Vapor Transport Experiment (CVTE) and Space Experiments Facility (SEF) are materials processing facilities designed and built for use on the Space Shuttle mid deck. The CVTE was built as a commercial facility owned by the Boeing Company. The SEF was built under contract to the UAH Center for Commercial Development of Space (CCDS). Both facilities include up to three furnaces capable of reaching 850 C minimum, stand-alone electronics and software, and independent cooling control. In addition, the CVTE includes a dedicated stowage locker for cameras, a laptop computer, and other ancillary equipment. Both systems are designed to fly in a Middeck Accommodations Rack (MAR), though the SEF is currently being integrated into a Spacehab rack. The CVTE hardware includes two transparent furnaces capable of achieving temperatures in the 850 to 870 C range. The transparent feature allows scientists/astronauts to directly observe and affect crystal growth both on the ground and in space. Cameras mounted to the rack provide photodocumentation of the crystal growth. The basic design of the furnace allows for modification to accommodate techniques other than vapor crystal growth. Early in the CVTE program, the decision was made to assign a principal scientist to develop the experiment plan, affect the hardware/software design, run the ground and flight research effort, and interface with the scientific community. The principal scientist is responsible to the program manager and is a critical member of the engineering development team. As a result of this decision, the hardware/experiment requirements were established in such a way as to balance the engineering and science demands on the equipment. Program schedules for hardware development, experiment definition and material selection, and flight operations development and crew training, for both ground support personnel and astronauts, were all planned and carried out with the understanding that the success of the program science was as important as the hardware functionality. How the CVTE payload was designed and what it is capable of, the philosophy of including the scientists in design and operations decisions, and the lessons learned during the integration process are discussed.

  17. Bidding-based autonomous process planning and scheduling

    NASA Astrophysics Data System (ADS)

    Gu, Peihua; Balasubramanian, Sivaram; Norrie, Douglas H.

    1995-08-01

    Improving productivity through computer integrated manufacturing systems (CIMS) and concurrent engineering requires that the islands of automation in an enterprise be completely integrated. The first step in this direction is to integrate design, process planning, and scheduling. This can be achieved through a bidding-based process planning approach. The product is represented in a STEP model with detailed design and administrative information including design specifications, batch size, and due dates. Upon arrival at the manufacturing facility, the product is registered with the shop floor manager, which is essentially a coordinating agent. The shop floor manager broadcasts the product's requirements to the machines. The shop contains autonomous machines that have knowledge about their functionality, capabilities, tooling, and schedule. Each machine has its own process planner and responds to the product's request in a different way that is consistent with its capabilities and capacities. When more than one machine offers certain process(es) for the same requirements, they enter into negotiation. Based on processing time, due date, and cost, one of the machines wins the contract. The successful machine updates its schedule and advises the product to request raw material for processing. The concept was implemented using a multi-agent system, with task decomposition and planning achieved through contract nets, as sketched below. Examples are included to illustrate the approach.
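
    A toy contract-net round can convey the mechanism: the shop floor manager broadcasts a task, capable machine agents return bids, and the winning machine updates its schedule. The Python sketch below is illustrative only; the machines, the bid contents, and the scoring rule are assumptions, not the authors' implementation.

    ```python
    # Toy contract-net sketch of the bidding approach described above. The shop
    # floor manager broadcasts a task, each machine agent that offers the process
    # returns a bid, and the best-scoring bid (weighted finish time + cost) wins.
    from dataclasses import dataclass

    @dataclass
    class Machine:
        name: str
        processes: set[str]
        hourly_rate: float
        backlog_h: float                      # current schedule backlog, hours

        def bid(self, process: str, hours: float):
            if process not in self.processes:
                return None                   # machine declines to bid
            finish = self.backlog_h + hours
            cost = hours * self.hourly_rate
            return (finish, cost, self)

    def award(machines, process: str, hours: float):
        bids = [b for m in machines if (b := m.bid(process, hours))]
        if not bids:
            raise RuntimeError(f"no machine offers {process}")
        finish, cost, winner = min(bids, key=lambda b: 0.5 * b[0] + 0.5 * b[1])
        winner.backlog_h = finish             # the successful machine updates its schedule
        return winner.name, finish, cost

    shop = [Machine("mill-1", {"milling", "drilling"}, 60, 4),
            Machine("mill-2", {"milling"}, 45, 9),
            Machine("lathe-1", {"turning"}, 50, 2)]
    print(award(shop, "milling", hours=3))
    ```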

  18. Current Issues for Higher Education Information Resources Management.

    ERIC Educational Resources Information Center

    CAUSE/EFFECT, 1996

    1996-01-01

    Issues identified as important to the future of information resources management and use in higher education include information policy in a networked environment, distributed computing, integrating information resources and college planning, benchmarking information technology, integrated digital libraries, technology integration in teaching,…

  19. NASA Lighting Research, Test, & Analysis

    NASA Technical Reports Server (NTRS)

    Clark, Toni

    2015-01-01

    The Habitability and Human Factors Branch at Johnson Space Center in Houston, TX, provides technical guidance for the development of spaceflight lighting requirements, verification of light system performance, analysis of integrated environmental lighting systems, and research of lighting-related human performance issues. The Habitability & Human Factors Lighting Team maintains two physical facilities that are integrated to provide support. The Lighting Environment Test Facility (LETF) provides a controlled darkroom environment for physical verification of lighting systems with photometric and spectrographic measurement systems. The Graphics Research & Analysis Facility (GRAF) maintains the capability for computer-based analysis of operational lighting environments. The combined capabilities of the Lighting Team at Johnson Space Center have been used for a wide range of lighting-related issues.

  20. PLM in the context of the maritime virtual education

    NASA Astrophysics Data System (ADS)

    Raicu, Alexandra; Oanta, Emil M.

    2016-12-01

    This paper presents new approaches regarding the use of the Product Lifecycle Management (PLM) concept to achieve knowledge integration of the academic disciplines in the maritime education context. The philosophy of the educational system is changing rapidly worldwide and is in a continuous process of development. There is a demand to develop modern educational facilities for CAD/CAE/CAM training of future maritime engineers that offer collaborative environments between the academic disciplines and the teachers. It is well known that students must understand the importance of the connectivity between the academic disciplines and the computer-aided methods that interface them. Thus, besides the basic knowledge and competences acquired from the CAD courses, students learn how to increase design productivity, create parametric designs, use original instruments of automatic design and 3D printing methods, and interface the CAD/CAE/CAM applications. As an example, the Strength of Materials discipline briefly presents alternative computer-aided methods to compute the geometrical characteristics of cross sections using the CAD geometry, the creation of free-body diagrams, and the presentation of the deflected shapes of various educational models, including the rotational effect when the forces are not applied in the shear center, using the results of the FEM applications. During the computer-aided engineering academic disciplines, after the students design and analyze a virtual 3D model, they can convert it into a physical object using a 3D printing method. Constanta Maritime University offers a full understanding of the concept of Product Lifecycle Management: the collaborative creation, management and dissemination of product data.
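
    The geometrical characteristics mentioned for the Strength of Materials course are a good example of a computation students can cross-check against CAD output. The sketch below uses the standard shoelace-type formulas for a simple polygon; it is a generic illustration, not the course's actual tooling.

    ```python
    # Sketch of the computation mentioned above: geometric properties of a
    # polygonal cross section from its boundary vertices, via the standard
    # shoelace-type formulas (area, centroid, second moment about the centroidal x-axis).
    def section_properties(pts):
        """pts: counter-clockwise (x, y) boundary vertices of a simple polygon."""
        a = cx = cy = ixx = 0.0
        n = len(pts)
        for i in range(n):
            x0, y0 = pts[i]
            x1, y1 = pts[(i + 1) % n]
            cross = x0 * y1 - x1 * y0
            a += cross
            cx += (x0 + x1) * cross
            cy += (y0 + y1) * cross
            ixx += (y0**2 + y0 * y1 + y1**2) * cross
        a *= 0.5
        cx /= 6 * a
        cy /= 6 * a
        ixx = ixx / 12 - a * cy**2        # shift to the centroidal axis
        return a, (cx, cy), ixx

    # 100 mm x 50 mm rectangle: A = 5000 mm^2, centroid (50, 25), Ixx = b*h^3/12 ~ 1.0417e6 mm^4
    print(section_properties([(0, 0), (100, 0), (100, 50), (0, 50)]))
    ```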

  1. Cryogenic Fluid Management Facility

    NASA Technical Reports Server (NTRS)

    Eberhardt, R. N.; Bailey, W. J.; Symons, E. P.; Kroeger, E. W.

    1984-01-01

    The Cryogenic Fluid Management Facility (CFMF) is a reusable test bed which is designed to be carried into space in the Shuttle cargo bay to investigate systems and technologies required to efficiently and effectively manage cryogens in space. The facility hardware is configured to provide low-g verification of fluid and thermal models of cryogenic storage, transfer concepts and processes. Significant design data and criteria for future subcritical cryogenic storage and transfer systems will be obtained. Future applications include space-based and ground-based orbit transfer vehicles (OTV), space station life support, attitude control, power and fuel depot supply, resupply tankers, external tank (ET) propellant scavenging, space-based weapon systems and space-based orbit maneuvering vehicles (OMV). This paper describes the facility and discusses the cryogenic fluid management technology to be investigated. A brief discussion of the integration issues involved in loading and transporting liquid hydrogen within the Shuttle cargo bay is also included.

  2. Computer-Based Learning of Geometry from Integrated and Split-Attention Worked Examples: The Power of Self-Management

    ERIC Educational Resources Information Center

    Tindall-Ford, Sharon; Agostinho, Shirley; Bokosmaty, Sahar; Paas, Fred; Chandler, Paul

    2015-01-01

    This research investigated the viability of learning by self-managing split-attention worked examples as an alternative to learning by studying instructor-managed integrated worked examples. Secondary school students learning properties of angles on parallel lines were taught to integrate spatially separated text and diagrammatic information by…

  3. Integrating Free Computer Software in Chemistry and Biochemistry Instruction: An International Collaboration

    ERIC Educational Resources Information Center

    Cedeno, David L.; Jones, Marjorie A.; Friesen, Jon A.; Wirtz, Mark W.; Rios, Luz Amalia; Ocampo, Gonzalo Taborda

    2010-01-01

    At the Universidad de Caldas, Manizales, Colombia, we used their new computer facilities to introduce chemistry graduate students to biochemical database mining and quantum chemistry calculations using freeware. These hands-on workshops allowed the students a strong introduction to easily accessible software and how to use this software to begin…

  4. Computational biomedicine: a challenge for the twenty-first century.

    PubMed

    Coveney, Peter V; Shublaq, Nour W

    2012-01-01

    With the relentless increase of computer power and the widespread availability of digital patient-specific medical data, we are now entering an era when it is becoming possible to develop predictive models of human disease and pathology, which can be used to support and enhance clinical decision-making. The approach amounts to a grand challenge for computational science insofar as we need to be able to provide seamless yet secure access to large-scale heterogeneous personal healthcare data in a facile way, typically integrated into complex workflows (some parts of which may need to be run on high-performance computers) and into clinical decision support software. In this paper, we review the state of the art in terms of case studies drawn from neurovascular pathologies and HIV/AIDS. These studies are representative of a large number of projects currently being performed within the Virtual Physiological Human initiative. They make demands of information technology at many scales, from the desktop to national and international infrastructures for data storage and processing, linked by high-performance networks.

  5. Tufts academic health information network: concept and scenario.

    PubMed

    Stearns, N S

    1986-04-01

    Tufts University School of Medicine's new health sciences education building, the Arthur M. Sackler Center for Health Communications, will house a modern medical library and computer center, classrooms, auditoria, and media facilities. The building will also serve as the center for an information and communication network linking the medical school and adjacent New England Medical Center, Tufts' primary teaching hospital, with Tufts Associated Teaching Hospitals throughout New England. Ultimately, the Tufts network will join other gateway networks, information resource facilities, health care institutions, and medical schools throughout the world. The center and the network are intended to facilitate and improve the education of health professionals, the delivery of health care to patients, the conduct of research, and the implementation of administrative management approaches that should provide more efficient utilization of resources and save dollars. A model and scenario show how health care delivery and health care education are integrated through better use of information transfer technologies by health information specialists, practitioners, and educators.

  6. Tufts academic health information network: concept and scenario.

    PubMed Central

    Stearns, N S

    1986-01-01

    Tufts University School of Medicine's new health sciences education building, the Arthur M. Sackler Center for Health Communications, will house a modern medical library and computer center, classrooms, auditoria, and media facilities. The building will also serve as the center for an information and communication network linking the medical school and adjacent New England Medical Center, Tufts' primary teaching hospital, with Tufts Associated Teaching Hospitals throughout New England. Ultimately, the Tufts network will join other gateway networks, information resource facilities, health care institutions, and medical schools throughout the world. The center and the network are intended to facilitate and improve the education of health professionals, the delivery of health care to patients, the conduct of research, and the implementation of administrative management approaches that should provide more efficient utilization of resources and save dollars. A model and scenario show how health care delivery and health care education are integrated through better use of information transfer technologies by health information specialists, practitioners, and educators. PMID:3708191

  7. Wind Energy Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laurie, Carol

    2017-02-01

    This book takes readers inside the places where daily discoveries shape the next generation of wind power systems. Energy Department laboratory facilities span the United States and offer wind research capabilities to meet industry needs. The facilities described in this book make it possible for industry players to increase reliability, improve efficiency, and reduce the cost of wind energy -- one discovery at a time. Whether you require blade testing or resource characterization, grid integration or high-performance computing, Department of Energy laboratory facilities offer a variety of capabilities to meet your wind research needs.

  8. Wind Energy Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Office of Energy Efficiency and Renewable Energy

    This book takes readers inside the places where daily discoveries shape the next generation of wind power systems. Energy Department laboratory facilities span the United States and offer wind research capabilities to meet industry needs. The facilities described in this book make it possible for industry players to increase reliability, improve efficiency, and reduce the cost of wind energy -- one discovery at a time. Whether you require blade testing or resource characterization, grid integration or high-performance computing, Department of Energy laboratory facilities offer a variety of capabilities to meet your wind research needs.

  9. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform.

    PubMed

    Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N

    2017-03-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.
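
    To make the pipeline-triggering pattern concrete, here is a minimal sketch (not from the paper) of queuing a parameterized Jenkins build over Jenkins' standard buildWithParameters REST endpoint; the server URL, job name, parameter, and credentials are all hypothetical placeholders.

        # Sketch: kick off a parameterized Jenkins pipeline remotely.
        # Jenkins' buildWithParameters endpoint and API-token auth are
        # standard Jenkins features; all names/values here are invented.
        import requests

        JENKINS_URL = "https://jenkins.example.org"   # hypothetical server
        JOB = "cellprofiler-hcs-pipeline"             # hypothetical job name

        def trigger_image_pipeline(plate_id: str, user: str, api_token: str) -> int:
            """Queue a parameterized Jenkins build; return the HTTP status code."""
            resp = requests.post(
                f"{JENKINS_URL}/job/{JOB}/buildWithParameters",
                auth=(user, api_token),            # Jenkins user + API token
                params={"PLATE_ID": plate_id},     # pipeline parameter
                timeout=30,
            )
            resp.raise_for_status()
            return resp.status_code               # 201 means the build was queued

        if __name__ == "__main__":
            trigger_image_pipeline("HCS-PLATE-0042", "analyst", "s3cr3t-token")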

  10. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform

    PubMed Central

    Moutsatsos, Ioannis K.; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J.; Jenkins, Jeremy L.; Holway, Nicholas; Tallarico, John; Parker, Christian N.

    2016-01-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an “off-the-shelf,” open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community. PMID:27899692

  11. 2014 QuickCompass of TRICARE Child Beneficiaries: Utilization of Medicaid-Waivered Services. Tabulation of Responses

    DTIC Science & Technology

    2014-08-30

    management) Long term care (e.g., home health care, hospice, integrated personal care, intermediate care facilities for the mentally retarded, nurse aide training and testing, and nursing facilities) Medical equipment (e.g., medically necessary supplies, including oxygen, catheters, and reusable

  12. The Post-Dam System. Volume 5. Harvard Project Manager (HPM).

    DTIC Science & Technology

    1992-10-01

    collected and analyzed to determine structural integrity and usability. From this analysis, a repair schedule is developed. This is currently a time...information on mission-critical facility damage is collected and analyzed to determine structural integrity and usability. From this analysis, a repair...to determine repair strategies with an expert system, keep track of materials and equipment with a relational database management system, and

  13. Naver: a PC-cluster-based VR system

    NASA Astrophysics Data System (ADS)

    Park, ChangHoon; Ko, HeeDong; Kim, TaiYun

    2003-04-01

    In this paper, we present NAVER, a new framework for virtual reality applications. NAVER is based on a cluster of low-cost personal computers. Its goal is to provide a flexible, extensible, scalable and re-configurable framework for virtual environments, defined as the integration of a 3D virtual space with external modules. External modules are various input or output devices and applications on remote hosts. From a system viewpoint, the personal computers are divided into three servers according to their functions: Render Server, Device Server and Control Server. The Device Server hosts external modules requiring event-based communication, while the Control Server hosts external modules requiring synchronous communication every frame. The Render Server consists of five managers: Scenario Manager, Event Manager, Command Manager, Interaction Manager and Sync Manager. These managers support the declaration and operation of the virtual environment and its integration with external modules on remote servers.
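
    The two integration styles described above can be sketched in a few lines. The code below is illustrative only (not the authors' implementation): an event queue is drained each frame for the event-based Device Server path, while Control Server style modules are updated synchronously once per frame.

        # Illustrative contrast of event-based vs. frame-synchronous integration.
        import queue

        class EventManager:
            """Event-based path: device modules post events asynchronously."""
            def __init__(self):
                self.events = queue.Queue()
            def post(self, event):
                self.events.put(event)
            def dispatch(self, handler):
                while not self.events.empty():     # drain pending device events
                    handler(self.events.get())

        class SyncManager:
            """Synchronous path: control modules are updated once per frame."""
            def __init__(self, modules):
                self.modules = modules
            def update(self, frame):
                for module in self.modules:
                    module(frame)                  # frame-locked communication

        events = EventManager()
        sync = SyncManager([lambda f: print("tracker sync, frame", f)])
        events.post("wand-button-pressed")
        for frame in range(2):                     # simplified render loop
            events.dispatch(lambda e: print("event:", e))
            sync.update(frame)
            # ... render the 3D virtual space here ...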

  14. Consolidation and Centralization of Waste Operations Business Systems - 12319

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newton, D. Dean

    This abstract provides a comprehensive plan supporting the continued development and integration of all waste operations and waste management business systems. These include existing systems such as the ATMS (Automated Transportation Management System), RadCalc and RFITS (Radio Frequency Identification Transportation System) programs, as well as key components of existing government-developed waste management systems and COTS (commercial off-the-shelf) applications, in order to deliver a truly integrated waste tracking and management business system. Existing systems to be integrated include IWTS at Idaho National Laboratory, WIMS at Sandia National Laboratories and others. The aggregation of data and consolidation into a single comprehensive business system delivers best-practice lifecycle waste management processes across Department of Energy facilities. The concept exists to reduce operational costs to the federal government by combining key business systems into a centralized enterprise application, following the principle that as contractors change, the tools they use to manage DOE's assets do not. IWTS is one efficient representation of a sound architecture currently supporting multiple DOE sites with a waste management solution. The integration of ATMS, RadCalc, RFITS and a concept like IWTS into a single solution for DOE contractors would result in significant savings and increased efficiencies for DOE. Building continuity and solving collective problems can only be achieved through mass collaboration, resulting in an online community in which DOE contractors and subcontractors access common applications, allowing for the collection of business intelligence at an unprecedented level. This is a fundamental shift from a solely 'for profit' business model to a 'for purpose' business model. To the conventional-minded, putting values before profit is an unfamiliar and unnatural way for a contractor to operate - unless, however, your objective is to build a strong, strategic alliance across the enterprise in order to execute an unprecedented change in waste management, transportation and logistical operations. The success of such an initiative can be achieved by creating a responsible framework that enables key individuals to 'own' the sustainability of the program. This includes strategic collaboration among application developers, information owners and federal stakeholders to ensure that compliance, security and risk management are 'baked' into the process and that sustainability is fostered through continued innovation in both technology and application functionality. This ensures that working software can adapt to changing circumstances, which is the principal measure of the success of the program. The consolidation of waste management business systems must be achieved in order to realize efficiencies in information technology portfolio management, data integrity, business intelligence and the lifecycle management of hazardous materials within the DOE enterprise architecture. By identifying best practices across the enterprise and aggregating computational and application development resources, a unified, holistic solution can be provided that is serviceable from a single location while being accessible from anywhere. The business impact of integrating and delivering a unified solution would reduce costs to the Department of Energy within the first year of deployment, with increased savings annually. (author)

  15. Comparison of groundwater flow in Southern California coastal aquifers

    USGS Publications Warehouse

    Hanson, Randall T.; Izbicki, John A.; Reichard, Eric G.; Edwards, Brian D.; Land, Michael; Martin, Peter

    2009-01-01

    Maintaining the sustainability of Southern California coastal aquifers requires joint management of surface water and groundwater (conjunctive use). This requires new data collection and analyses (including research drilling, modern geohydrologic investigations, and development of detailed computer groundwater models that simulate the supply and demand components separately), implementation of new facilities (including spreading and injection facilities for artificial recharge), and establishment of new institutions and policies that help to sustain the water resources and better manage regional development.

  16. Lewis Wooten in the MSFC Payload Operations Integration facility.

    NASA Image and Video Library

    2015-04-13

    Lewis Wooten, new director of the Mission Operations Laboratory at NASA's Marshall Space Flight Center in Huntsville, Alabama, manages operations in the Payload Operations Integration Center, the command post for all science and research activities on the International Space Station.

  17. NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2014-09-01

    NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.

  18. Integrated Geo Hazard Management System in Cloud Computing Technology

    NASA Astrophysics Data System (ADS)

    Hanifah, M. I. M.; Omar, R. C.; Khalid, N. H. N.; Ismail, A.; Mustapha, I. S.; Baharuddin, I. N. Z.; Roslan, R.; Zalam, W. M. Z.

    2016-11-01

    Geo-hazards can degrade environmental health and cause huge economic losses, especially in mountainous areas. To mitigate geo-hazards effectively, cloud computing technology is introduced for managing a geo-hazard database. Cloud computing technology and its services can provide stakeholders with geo-hazard information in near real time for effective environmental management and decision-making. The UNITEN Integrated Geo Hazard Management System comprises network management and operations to monitor geo-hazard disasters, especially landslides, in our study area at the Kelantan River Basin and the boundary between Hulu Kelantan and Hulu Terengganu. The system provides an easily managed, flexible measuring system whose data management operates autonomously and which can be controlled by commands to collect data remotely using "cloud" computing. This paper aims to document the above relationship by identifying the special features and needs associated with effective geo-hazard database management using a "cloud" system. The system will later be used as part of development activities, with the aim of minimizing the frequency of geo-hazards and the risk in the research area.

  19. Screensaver: an open source lab information management system (LIMS) for high throughput screening facilities

    PubMed Central

    2010-01-01

    Background Shared-usage high throughput screening (HTS) facilities are becoming more common in academe as large-scale small molecule and genome-scale RNAi screening strategies are adopted for basic research purposes. These shared facilities require a unique informatics infrastructure that must not only provide access to and analysis of screening data, but must also manage the administrative and technical challenges associated with conducting numerous, interleaved screening efforts run by multiple independent research groups. Results We have developed Screensaver, a free, open source, web-based lab information management system (LIMS), to address the informatics needs of our small molecule and RNAi screening facility. Screensaver supports the storage and comparison of screening data sets, as well as the management of information about screens, screeners, libraries, and laboratory work requests. To our knowledge, Screensaver is one of the first applications to support the storage and analysis of data from both genome-scale RNAi screening projects and small molecule screening projects. Conclusions The informatics and administrative needs of an HTS facility may be best managed by a single, integrated, web-accessible application such as Screensaver. Screensaver has proven useful in meeting the requirements of the ICCB-Longwood/NSRB Screening Facility at Harvard Medical School, and has provided similar benefits to other HTS facilities. PMID:20482787

  20. Screensaver: an open source lab information management system (LIMS) for high throughput screening facilities.

    PubMed

    Tolopko, Andrew N; Sullivan, John P; Erickson, Sean D; Wrobel, David; Chiang, Su L; Rudnicki, Katrina; Rudnicki, Stewart; Nale, Jennifer; Selfors, Laura M; Greenhouse, Dara; Muhlich, Jeremy L; Shamu, Caroline E

    2010-05-18

    Shared-usage high throughput screening (HTS) facilities are becoming more common in academe as large-scale small molecule and genome-scale RNAi screening strategies are adopted for basic research purposes. These shared facilities require a unique informatics infrastructure that must not only provide access to and analysis of screening data, but must also manage the administrative and technical challenges associated with conducting numerous, interleaved screening efforts run by multiple independent research groups. We have developed Screensaver, a free, open source, web-based lab information management system (LIMS), to address the informatics needs of our small molecule and RNAi screening facility. Screensaver supports the storage and comparison of screening data sets, as well as the management of information about screens, screeners, libraries, and laboratory work requests. To our knowledge, Screensaver is one of the first applications to support the storage and analysis of data from both genome-scale RNAi screening projects and small molecule screening projects. The informatics and administrative needs of an HTS facility may be best managed by a single, integrated, web-accessible application such as Screensaver. Screensaver has proven useful in meeting the requirements of the ICCB-Longwood/NSRB Screening Facility at Harvard Medical School, and has provided similar benefits to other HTS facilities.

  1. Faster response time : effective use of resources : integrating transportation systems and emergency management systems

    DOT National Transportation Integrated Search

    1999-01-01

    This brochure discusses how coordinating the efforts of emergency dispatchers with transportation management agencies can improve efficiency and response times. It is noted that when emergency services agencies share facilities and traffic monitoring...

  2. Criteria for Solid Waste Disposal Facilities: A Guide for Owners/Operators

    EPA Pesticide Factsheets

    EPA's continuing mission to establish the minimum national standards for landfill design, operation, and management that will enhance landfill safety and boost public confidence in landfills as a component of a workable integrated waste management system.

  3. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    NASA Astrophysics Data System (ADS)

    Evans, D.; Fisk, I.; Holzman, B.; Melo, A.; Metson, S.; Pordes, R.; Sheldon, P.; Tiradani, A.

    2011-12-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long-term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly "on-demand", as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a university, and we conclude that it is most cost-effective to purchase dedicated resources for the "baseline" needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage occur.
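
    As a hedged illustration of the "bursting" pattern the authors describe, the sketch below requests extra worker nodes using boto3, the current AWS SDK for Python (which postdates this 2011 paper); the AMI ID, instance type, and region are invented placeholders, not values from the CMS study.

        # Sketch: burst batch capacity onto EC2 during a usage spike.
        import boto3

        def burst_worker_nodes(n: int) -> list[str]:
            """Request n extra worker-node instances and return their IDs."""
            ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region
            resp = ec2.run_instances(
                ImageId="ami-0123456789abcdef0",   # hypothetical worker-node image
                InstanceType="c5.xlarge",          # placeholder instance type
                MinCount=n,
                MaxCount=n,
            )
            return [i["InstanceId"] for i in resp["Instances"]]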

  4. Resource Aware Intelligent Network Services (RAINS) Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehman, Tom; Yang, Xi

    The Resource Aware Intelligent Network Services (RAINS) project conducted research and developed technologies in the area of cyberinfrastructure resource modeling and computation. The goal of this work was to provide a foundation for intelligent, software-defined services that span the network and the resources that connect to the network. A Multi-Resource Service Plane (MRSP) was defined, which allows resource owners/managers to locate and place themselves, from a topology and service availability perspective, within the dynamic networked cyberinfrastructure ecosystem. The MRSP enables the presentation of integrated topology views and computation results that can include resources across the spectrum of compute, storage, and networks. The RAINS-developed MRSP includes the following key components: (i) the Multi-Resource Service (MRS) Ontology/Multi-Resource Markup Language (MRML), (ii) the Resource Computation Engine (RCE), and (iii) a Modular Driver Framework (to allow integration of a variety of external resources). The MRS/MRML is a general and extensible modeling framework that allows resource owners to model, or describe, a wide variety of resource types. All resources are described using three categories of elements: resources, services, and relationships between the elements. This modeling framework defines a common method for the transformation of cyberinfrastructure resources into data in the form of MRML models. To realize this infrastructure datification, the RAINS project developed a model-based computation system, the RAINS Computation Engine (RCE). The RCE can ingest, process, integrate, and compute based on automatically generated MRML models. The RCE interacts with resources through system drivers that are specific to the type of external network or resource controller. The RAINS project developed a modular and pluggable driver system that allows a variety of resource controllers to automatically generate, maintain, and distribute MRML-based resource descriptions. Once all of the resource topologies are absorbed by the RCE, a connected graph of the full distributed system topology is constructed, which forms the basis for computation and workflow processing. The RCE includes a Modular Computation Element (MCE) framework that allows tailoring of the computation process to the specific set of resources under control and the services desired. The input and output of an MCE are both model data based on the MRS/MRML ontology and schema. RAINS project accomplishments include: development of a general and extensible multi-resource modeling framework; design of the RCE with the ability to absorb a variety of multi-resource model types and build integrated models; a novel architecture that uses model-based communications across the full stack; flexible provision of abstract or intent-based user-facing interfaces; workflow processing based on model descriptions; release of the RCE as open source software; deployment of the RCE in the University of Maryland/Mid-Atlantic Crossroads ScienceDMZ in prototype mode, with a plan under way to transition to production; deployment at the Argonne National Laboratory DTN facility in prototype mode; and selection of the RCE by the DOE SENSE (SDN for End-to-end Networked Science at the Exascale) project as the basis for their orchestration service.
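
    A minimal sketch of the style of model-based computation the RCE performs, with MRML's ontology simplified to a plain graph: resources become nodes, relationships become edges, and a service request becomes a path query. Node names and capacities are hypothetical, and this is not RAINS code.

        # Sketch: resources and relationships as a graph; a request as a path query.
        import networkx as nx

        g = nx.Graph()
        # Compute, storage, and network resources linked by "connectedTo" edges.
        g.add_edge("dtn-umd", "switch-max", capacity_gbps=100)
        g.add_edge("switch-max", "switch-anl", capacity_gbps=100)
        g.add_edge("switch-anl", "storage-anl", capacity_gbps=40)

        # Turn an abstract request (DTN to remote storage) into concrete resources,
        # analogous to the RCE computing over an integrated topology model.
        path = nx.shortest_path(g, "dtn-umd", "storage-anl")
        bottleneck = min(g[u][v]["capacity_gbps"] for u, v in zip(path, path[1:]))
        print(path, "bottleneck:", bottleneck, "Gb/s")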

  5. High Energy Physics Exascale Requirements Review. An Office of Science review sponsored jointly by Advanced Scientific Computing Research and High Energy Physics, June 10-12, 2015, Bethesda, Maryland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Roser, Robert; Gerber, Richard

    The U.S. Department of Energy (DOE) Office of Science (SC) Offices of High Energy Physics (HEP) and Advanced Scientific Computing Research (ASCR) convened a programmatic Exascale Requirements Review on June 10-12, 2015, in Bethesda, Maryland. This report summarizes the findings, results, and recommendations derived from that meeting. The high-level findings and observations are as follows. Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude, and in some cases greater, than that available currently. The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. Data rates and volumes from experimental facilities are also straining the current HEP infrastructure in its ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. A close integration of high-performance computing (HPC) simulation and data analysis will greatly aid in interpreting the results of HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs (1) an established, long-term plan for access to ASCR computational and data resources, (2) the ability to map workflows to HPC resources, (3) the ability for ASCR facilities to accommodate workflows run by collaborations potentially comprising thousands of individual members, (4) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and (5) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.

  6. Modular space station, phase B extension. Information management advanced development. Volume 5: Software assembly

    NASA Technical Reports Server (NTRS)

    Gerber, C. R.

    1972-01-01

    The development of uniform computer program standards and conventions for the modular space station is discussed. The accomplishments analyzed are: (1) development of a computer program specification hierarchy, (2) definition of a computer program development plan, and (3) recommendations for utilization of all operating on-board space station related data processing facilities.

  7. Service management at CERN with Service-Now

    NASA Astrophysics Data System (ADS)

    Toteva, Z.; Alvarez Alonso, R.; Alvarez Granda, E.; Cheimariou, M.-E.; Fedorko, I.; Hefferman, J.; Lemaitre, S.; Clavo, D. Martin; Martinez Pedreira, P.; Pera Mira, O.

    2012-12-01

    The Information Technology (IT) and the General Services (GS) departments at CERN have decided to combine their extensive experience in support for IT and non-IT services towards a common goal - to bring the services closer to the end user based on Information Technology Infrastructure Library (ITIL) best practice. The collaborative efforts have so far produced definitions for the incident and the request fulfilment processes, which are based on a unique two-dimensional service catalogue that combines both the user and the support-team views of all services. After an extensive evaluation of the available industrial solutions, Service-Now was selected as the tool to implement the CERN service-management processes. The initial release of the tool provided an attractive web portal for the users and successfully implemented two basic ITIL processes: incident management and request fulfilment. It also integrated with the CERN personnel databases and the LHC GRID ticketing system. Subsequent releases continued to integrate with other third-party tools, such as the facility management systems of CERN, and to implement new processes such as change management. Independently of those new development activities, it was decided to simplify the request fulfilment process in order to achieve easier acceptance by the CERN user community. We believe that, due to the high modularity of the Service-Now tool, the parallel design of ITIL processes (e.g., event management) and non-ITIL processes (e.g., computer centre hardware management) will be easily achieved. This presentation describes the experience that we have acquired and the techniques that were followed to achieve the CERN customization of the Service-Now tool.
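
    For flavor, here is a sketch of opening an incident programmatically through the ServiceNow REST Table API. The instance URL and credentials are placeholders, and this modern Table API reflects the current product rather than necessarily what the 2012 CERN deployment used.

        # Sketch: create an incident record via the ServiceNow Table API.
        import requests

        def create_incident(short_description: str, user: str, password: str) -> str:
            url = "https://example.service-now.com/api/now/table/incident"  # placeholder instance
            resp = requests.post(
                url,
                auth=(user, password),
                headers={"Content-Type": "application/json", "Accept": "application/json"},
                json={"short_description": short_description},
                timeout=30,
            )
            resp.raise_for_status()
            return resp.json()["result"]["number"]   # e.g. an INC-prefixed ticket number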

  8. Attitude Towards Computers and Classroom Management of Language School Teachers

    ERIC Educational Resources Information Center

    Jalali, Sara; Panahzade, Vahid; Firouzmand, Ali

    2014-01-01

    Computer-assisted language learning (CALL) refers to the use of computers in schools and universities, which has the potential to enhance the language learning experience inside the classroom. The integration of these technologies into the classroom demands that teachers adopt a number of classroom management procedures to maintain a more…

  9. Senator Doug Jones (D-AL) Tour of MSFC Facilities

    NASA Image and Video Library

    2018-02-22

    Senator Doug Jones (D-AL) and his wife, Louise, tour Marshall Space Flight Center facilities. With Steve Doering, manager of the Stages Element for the Space Launch System (SLS) program at MSFC, they also tour the Payload Operations Integration Center (POIC), where Marshall controllers oversee stowage requirements aboard the International Space Station (ISS) as well as scientific experiments.

  10. Integration of Russian Tier-1 Grid Center with High Performance Computers at NRC-KI for LHC experiments and beyond HENP

    NASA Astrophysics Data System (ADS)

    Belyaev, A.; Berezhnaya, A.; Betev, L.; Buncic, P.; De, K.; Drizhuk, D.; Klimentov, A.; Lazin, Y.; Lyalin, I.; Mashinistov, R.; Novikov, A.; Oleynik, D.; Polyakov, A.; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tkachenko, I.; Yasnopolskiy, L.

    2015-12-01

    The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by higher LHC energies from April 2015 (LHC Run 2). The needs for simulation, data processing and analysis would overwhelm the expected capacity of the grid infrastructure computing facilities deployed by the Worldwide LHC Computing Grid (WLCG). To meet this challenge, the integration of opportunistic resources into the LHC computing model is highly important. The Tier-1 facility at the Kurchatov Institute (NRC-KI) in Moscow is part of the WLCG, and it will process, simulate and store up to 10% of the total data obtained from the ALICE, ATLAS and LHCb experiments. In addition, the Kurchatov Institute has supercomputers with a peak performance of 0.12 PFLOPS. Delegating even a fraction of these supercomputing resources to LHC computing will notably increase the total capacity. In 2014, development of a portal combining the Tier-1 and a supercomputer at the Kurchatov Institute was started, to provide common interfaces and storage. The portal will be used not only for HENP experiments but also by other data- and compute-intensive sciences, such as biology (genome sequencing analysis) and astrophysics (cosmic ray analysis, antimatter and dark matter searches).

  11. Handheld Computers in the Classroom: Integration Strategies for Social Studies Educators.

    ERIC Educational Resources Information Center

    Ray, Beverly

    Handheld computers have gone beyond the world of business and are now finding their way into the hands of social studies teachers and students. This paper discusses how social studies teachers can use handheld computers to aid anytime/anywhere course management. The integration of handheld technology into the classroom provides social studies…

  12. The DYNES Instrument: A Description and Overview

    NASA Astrophysics Data System (ADS)

    Zurawski, Jason; Ball, Robert; Barczyk, Artur; Binkley, Mathew; Boote, Jeff; Boyd, Eric; Brown, Aaron; Brown, Robert; Lehman, Tom; McKee, Shawn; Meekhof, Benjeman; Mughal, Azher; Newman, Harvey; Rozsa, Sandor; Sheldon, Paul; Tackett, Alan; Voicu, Ramiro; Wolff, Stephen; Yang, Xi

    2012-12-01

    Scientific innovation continues to increase requirements for the computing and networking infrastructures of the world. Collaborative partners, instrumentation, storage, and processing facilities are often geographically and topologically separated, as is the case with LHC virtual organizations. These separations challenge the technology used to interconnect available resources, often delivered by Research and Education (R&E) networking providers, and lead to complications in the overall process of end-to-end data management. Capacity and traffic management are key concerns of R&E network operators; a delicate balance is required to serve both long-lived, high-capacity network flows and more traditional end-user activities. The advent of dynamic circuit services, a technology that enables the creation of variable-duration, guaranteed-bandwidth networking channels, allows for the efficient use of common network infrastructures. These gains are seen particularly in locations where overall capacity is scarce compared to the (sustained peak) needs of user communities. Related efforts, including those of the LHCOPN [3] operations group and the emerging LHCONE [4] project, may take advantage of available resources by designating specific network activities as “high priority”, allowing reservation of dedicated bandwidth or optimizing for deadline scheduling and predictable delivery patterns. This paper presents the DYNES instrument, an NSF-funded cyberinfrastructure project designed to facilitate end-to-end dynamic circuit services [2]. This combination of hardware and software innovation is being deployed across R&E networks in the United States at selected end-sites located on university campuses. DYNES is peering with international efforts in other countries using similar solutions, and is increasing the reach of this emerging technology. This global data movement solution could be integrated into computing paradigms such as cloud and grid computing platforms, and through the use of APIs can be integrated into existing data movement software.

  13. Integration of XRootD into the cloud infrastructure for ALICE data analysis

    NASA Astrophysics Data System (ADS)

    Kompaniets, Mikhail; Shadura, Oksana; Svirin, Pavlo; Yurchenko, Volodymyr; Zarochentsev, Andrey

    2015-12-01

    Cloud technologies allow easy load balancing between different tasks and projects. From the viewpoint of data analysis in the ALICE experiment, a cloud makes it possible to deploy software using the CERN Virtual Machine (CernVM) and the CernVM File System (CVMFS), to run different (including outdated) versions of software for long-term data preservation, and to dynamically allocate resources for different computing activities, e.g. a grid site, an ALICE Analysis Facility (AAF), and possible usage for local projects or other LHC experiments. We present a cloud solution for Tier-3 sites based on OpenStack and Ceph distributed storage with an integrated XRootD-based storage element (SE). A key feature of the solution is that Ceph is used both as a backend for the Cinder Block Storage service of OpenStack and, at the same time, as a storage backend for XRootD, with redundancy and availability of data preserved by the Ceph settings. For faster and easier OpenStack deployment, the Packstack solution, which is based on the Puppet configuration management system, was applied. Ceph installation and configuration operations are structured, converted to Puppet manifests describing node configurations, and integrated into Packstack. This solution can be easily deployed, maintained and used even by small groups with limited computing resources and by small organizations, which often lack IT support. The proposed infrastructure has been tested on two different clouds (SPbSU and BITP) and integrates successfully with the ALICE data analysis model.

  14. Operation of the 25kW NASA Lewis Research Center Solar Regenerative Fuel Cell Testbed Facility

    NASA Technical Reports Server (NTRS)

    Moore, S. H.; Voecks, G. E.

    1997-01-01

    Assembly of the NASA Lewis Research Center (LeRC) Solar Regenerative Fuel Cell (RFC) Testbed Facility has been completed and system testing has proceeded. This facility includes the integration of two 25 kW photovoltaic solar cell arrays, a 25 kW proton exchange membrane (PEM) electrolysis unit, four 5 kW PEM fuel cells, high-pressure hydrogen and oxygen storage vessels, high-purity water storage containers, and computer monitoring, control and data acquisition.

  15. Preparing No-Migration Demonstrations for Municipal Solid Waste Disposal Facilities: A Screening Tool

    EPA Pesticide Factsheets

    EPA's mission to establish the minimum national standards for landfill design, operation, and management that will enhance landfill safety and boost public confidence in landfills as a component of a workable integrated waste management system.

  16. Resilient workflows for computational mechanics platforms

    NASA Astrophysics Data System (ADS)

    Nguyên, Toàn; Trifan, Laurentiu; Désidéri, Jean-Antoine

    2010-06-01

    Workflow management systems have recently been the focus of much interest and of many research and deployment efforts for scientific applications worldwide [26, 27]. Their ability to abstract applications by wrapping application codes has also underscored the usefulness of such systems for multidiscipline applications [23, 24]. When complex applications need to provide seamless interfaces hiding the technicalities of the computing infrastructures, their high-level modeling, monitoring and execution functionalities help give production teams seamless and effective facilities [25, 31, 33]. Software integration infrastructures based on programming paradigms such as Python, Matlab and Scilab have also provided evidence of the usefulness of such approaches for the tight coupling of multidiscipline application codes [22, 24]. Further, high-performance computing based on multi-core, multi-cluster infrastructures opens new opportunities for more accurate, more extensive and more robust multi-discipline simulations in the decades to come [28]. This supports the goal of full flight dynamics simulation for 3D aircraft models within the next decade, opening the way to virtual flight tests and certification of aircraft in the future [23, 24, 29].

  17. Sustainability of the integrated chronic disease management model at primary care clinics in South Africa

    PubMed Central

    Asmall, Shaidah

    2016-01-01

    Background: An integrated chronic disease management (ICDM) model consisting of four components (facility reorganisation, clinical supportive management, assisted self-supportive management, and strengthening of support systems and structures outside the facility) has been implemented across 42 primary health care clinics in South Africa with a view to improving operational efficiency and patient clinical outcomes. Aim: The aim of this study was to assess the sustainability of the facility reorganisation and clinical support components 18 months after initiation. Setting: The study was conducted at 37 of the initiating clinics across three districts in three provinces of South Africa. Methods: The National Health Service (NHS) Institute for Innovation and Improvement Sustainability Model (SM) self-assessment tool was used to assess sustainability. Results: Bushbuckridge had the highest mean sustainability score of 71.79 (95% CI: 63.70–79.89), followed by West Rand Health District (70.25; 95% CI: 63.96–76.53) and Dr Kenneth Kaunda District (66.50; 95% CI: 55.17–77.83). Four facilities (11%) had an overall sustainability score of less than 55. Conclusion: The less-than-optimal involvement of clinical leadership (doctors), negative staff behaviour towards the ICDM, limited adaptability of the model to external factors, and infrastructure limitations have the potential to negatively affect the sustainability and scale-up of the model. PMID:28155314

  18. Recent Accomplishments and Future Directions in US Fusion Safety & Environmental Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David A. Petti; Brad J. Merrill; Phillip Sharpe

    2006-07-01

    The US fusion program has long recognized that the safety and environmental (S&E) potential of fusion can be attained by prudent materials selection, judicious design choices, and integration of safety requirements into the design of the facility. To achieve this goal, S&E research is focused on understanding the behavior of the largest sources of radioactive and hazardous materials in a fusion facility, understanding how energy sources in a fusion facility could mobilize those materials, developing integrated state-of-the-art S&E computer codes and risk tools for safety assessment, and evaluating S&E issues associated with current fusion designs. In this paper, recent accomplishments are reviewed and future directions outlined.

  19. LaRC design analysis report for National Transonic Facility for 304 stainless steel tunnel shell. Volume 1S: Finite difference analysis of cone/cylinder junction

    NASA Technical Reports Server (NTRS)

    Ramsey, J. W., Jr.; Taylor, J. T.; Wilson, J. F.; Gray, C. E., Jr.; Leatherman, A. D.; Rooker, J. R.; Allred, J. W.

    1976-01-01

    The results of extensive computer (finite element, finite difference and numerical integration), thermal, fatigue, and special analyses of critical portions of a large pressurized, cryogenic wind tunnel (National Transonic Facility) are presented. The computer models, loading and boundary conditions are described. Graphic capability was used to display model geometry, section properties, and stress results. Stress criteria are presented for evaluating the results of the analyses. Thermal analyses were performed for major critical and typical areas. Fatigue analyses of the entire tunnel circuit are presented.

  20. National remote computational flight research facility

    NASA Technical Reports Server (NTRS)

    Rediess, Herman A.

    1989-01-01

    The extension of the NASA Ames-Dryden remotely augmented vehicle (RAV) facility to accommodate flight testing of a hypersonic aircraft utilizing the continental United States as a test range is investigated. The development and demonstration of an automated flight test management system (ATMS) that uses expert system technology for flight test planning, scheduling, and execution is documented.

  1. Integration Process for Payloads in the Fluids and Combustion Facility

    NASA Technical Reports Server (NTRS)

    Free, James M.; Nall, Marsha M.

    2001-01-01

    The Fluids and Combustion Facility (FCF) is an ISS research facility located in the United States Laboratory (US Lab), Destiny. The FCF is a multi-discipline facility that performs microgravity research, primarily in fluids physics and combustion science. The facility remains on-orbit and provides accommodations for multi-user and principal investigator (PI)-unique hardware. The FCF is designed to accommodate 15 PIs per year. To support this number of payloads per year, the FCF has developed an end-to-end analytical and physical integration process. The process includes the provision of integration tools, products and interface management throughout the life of the payload. The payload is given a single point of contact from the facility and works with that interface from PI selection through post-flight processing. The process utilizes electronic tools for the creation of interface documents/agreements, storage of payload data, and rollup for facility submittals to ISS. Additionally, the process provides integration with, and testing against, flight-like simulators prior to payload delivery to KSC. These simulators allow the payload to be tested in the flight configuration and final facility interface and science verifications to be performed. The process also provides support to the payload from the FCF through the Payload Safety Review Panel (PSRP). Finally, the process includes support in the development of operational products and the operation of the payload on-orbit.

  2. Fluids and Combustion Facility: Fluids Integrated Rack Modal Model Correlation

    NASA Technical Reports Server (NTRS)

    McNelis, Mark E.; Suarez, Vicente J.; Sullivan, Timothy L.; Otten, Kim D.; Akers, James C.

    2005-01-01

    The Fluids Integrated Rack (FIR) is one of two racks in the Fluids and Combustion Facility on the International Space Station. The FIR is dedicated to the scientific investigation of space system fluids management, supporting NASA's Exploration of Space Initiative. The FIR hardware was modal tested and the FIR finite element model was updated to satisfy the International Space Station model correlation criteria. The final cross-orthogonality results between the correlated model and the test mode shapes were greater than 90 percent for all primary target modes.
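
    For readers unfamiliar with the correlation metric, the sketch below computes a cross-orthogonality matrix XOR = Phi_test^T * M * Phi_fem using mass-normalized mode shapes; diagonal terms of at least 0.9 (90 percent) indicate acceptable test/model agreement for target modes. The matrices are tiny invented stand-ins, not FIR data.

        # Sketch: cross-orthogonality check used in modal model correlation.
        import numpy as np

        M = np.diag([2.0, 1.0, 3.0])                # reduced (test-DOF) mass matrix
        phi_test = np.array([[0.5, 0.3],
                             [0.7, -0.4],
                             [0.2, 0.5]])           # measured mode shapes (columns)
        phi_fem = phi_test + 0.01                   # well-correlated FEM shapes (illustrative)

        def mass_normalize(phi, m):
            """Scale each mode shape to unit generalized mass."""
            gen_mass = np.sqrt(np.diag(phi.T @ m @ phi))
            return phi / gen_mass

        xor = mass_normalize(phi_test, M).T @ M @ mass_normalize(phi_fem, M)
        print(np.abs(np.diag(xor)) >= 0.9)          # target-mode correlation criterion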

  3. Integration of High-Performance Computing into Cloud Computing Services

    NASA Astrophysics Data System (ADS)

    Vouk, Mladen A.; Sills, Eric; Dreher, Patrick

    High-Performance Computing (HPC) projects span a spectrum of computer hardware implementations, ranging from petaflop supercomputers and high-end teraflop facilities running a variety of operating systems and applications to mid-range and smaller computational clusters used for HPC application development, pilot runs and prototype staging. What they all have in common is that they operate as stand-alone systems rather than as a scalable, shared, user-reconfigurable resource. The advent of cloud computing has changed the traditional HPC implementation. In this article, we discuss a very successful production-level architecture and policy framework for supporting HPC services within a more general cloud computing infrastructure. This integrated environment, called the Virtual Computing Lab (VCL), has been operating at NC State since fall 2004. Nearly 8,500,000 HPC CPU-hours were delivered by this environment to NC State faculty and students during 2009. In addition, we present and discuss operational data showing that integration of HPC and non-HPC (or general VCL) services in a cloud can substantially reduce the cost of delivering cloud services (down to cents per CPU-hour).
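
    To make the "cents per CPU-hour" claim concrete, a toy unit-economics calculation follows. Only the 8,500,000 delivered CPU-hours comes from the abstract; the annual cost figure is an invented placeholder, not a VCL number.

        # Toy cost-per-CPU-hour calculation (all costs hypothetical).
        hpc_cpu_hours = 8_500_000            # delivered by VCL to NC State in 2009
        assumed_annual_cost_usd = 850_000    # invented blended cost of the service

        cents_per_cpu_hour = 100 * assumed_annual_cost_usd / hpc_cpu_hours
        print(f"{cents_per_cpu_hour:.1f} cents per CPU-hour")   # -> 10.0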

  4. CMS Connect

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B.; Gardner, R., Jr.; Hurtado Anampa, K.; Jayatilaka, B.; Aftab Khan, F.; Lannon, K.; Larson, K.; Letts, J.; Marra Da Silva, J.; Mascheroni, M.; Mason, D.; Perez-Calero Yzquierdo, A.; Tiradani, A.

    2017-10-01

    The CMS experiment collects and analyzes large amounts of data coming from high energy particle collisions produced by the Large Hadron Collider (LHC) at CERN. This involves a huge amount of real and simulated data processing that needs to be handled in batch-oriented platforms. The CMS Global Pool of computing resources provides more than 100K dedicated CPU cores, plus another 50K to 100K CPU cores from opportunistic resources, for these kinds of tasks. Even though production and event-processing analysis workflows are already managed by existing tools, there is still a lack of support for submitting the final-stage, Condor-like analysis jobs familiar to Tier-3 or local computing facility users into these distributed resources in a friendly way that is integrated with other CMS services. CMS Connect is a set of computing tools and services designed to augment existing services in the CMS physics community, focusing on these kinds of Condor analysis jobs. It is based on the CI-Connect platform developed by the Open Science Grid and uses the CMS GlideinWMS infrastructure to transparently plug CMS global grid resources into a virtual pool accessed via a single submission machine. This paper describes the specific developments and deployment of CMS Connect beyond the CI-Connect platform in order to integrate the service with CMS-specific needs, including site-specific submission, accounting of jobs and automated reporting to standard CMS monitoring resources, effortlessly for its users.
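
    A minimal sketch of the kind of "Condor-like" final-stage submission CMS Connect targets, using the HTCondor Python bindings; the executable and file names are placeholders, and the actual CMS Connect submission environment may differ.

        # Sketch: queue a small cluster of analysis jobs via HTCondor bindings.
        import htcondor

        sub = htcondor.Submit({
            "executable": "run_analysis.sh",        # user's analysis wrapper (placeholder)
            "arguments": "$(ProcId)",
            "output": "job.$(ProcId).out",
            "error": "job.$(ProcId).err",
            "log": "analysis.log",
            "request_cpus": "1",
        })

        schedd = htcondor.Schedd()                  # local submit node's scheduler
        result = schedd.submit(sub, count=10)       # queue 10 jobs into the pool
        print("cluster", result.cluster())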

  5. Flat-plate solar array project. Volume 8: Project analysis and integration

    NASA Technical Reports Server (NTRS)

    Mcguire, P.; Henry, P.

    1986-01-01

    Project Analysis and Integration (PA&I) performed planning and integration activities to support management of the various Flat-Plate Solar Array (FSA) Project R&D activities. Technical and economic goals were established by PA&I for each R&D task within the project to coordinate the thrust toward the National Photovoltaic Program goals. A sophisticated computer modeling capability was developed to assess technical progress toward meeting the economic goals. These models included a manufacturing facility simulation, a photovoltaic power station simulation and a decision aid model incorporating uncertainty. This family of analysis tools was used to track the progress of the technology and to explore the effects of alternative technical paths. Numerous studies conducted by PA&I signaled the achievement of milestones or formed the foundation of major FSA project and national program decisions. The most important PA&I activities during the project history are summarized. The PA&I planning function is discussed, including how it relates to project direction, and the important analytical models developed by PA&I for its analysis and assessment activities are reviewed.

  6. Computer assisted operations in Petroleum Development Oman (PDO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Hinai, S.H.; Mutimer, K.

    1995-10-01

    Petroleum Development Oman (PDO) currently produces some 750,000 bopd and 900,000 bwpd from some 74 fields in a large geographical area and diverse operating conditions. A key corporate objective is to reduce operating costs by exploiting productivity gains from proven technology. Automation is seen as a means of managing the rapid growth of the well population and production facilities. The overall objective is to improve field management through continuous monitoring of wells and facilities and dissemination of data throughout the whole organization. A major upgrade of PDO's field Supervisory Control and Data Acquisition (SCADA) system is complete, providing a platform to exploit new initiatives, particularly production optimization of artificial lift systems and automatic well testing using multi-selector valves, Coriolis flow meter measurements and multi-component (oil, gas, water) flowmeters. The paper describes PDO's experience, including the benefits and challenges which have to be managed when developing Computer Assisted Operations (CAO).

  7. Improving antimicrobial use among health workers in first-level facilities: results from the multi-country evaluation of the Integrated Management of Childhood Illness strategy.

    PubMed Central

    Gouws, Eleanor; Bryce, Jennifer; Habicht, Jean-Pierre; Amaral, João; Pariyo, George; Schellenberg, Joanna Armstrong; Fontaine, Olivier

    2004-01-01

    OBJECTIVE: The objective of this study was to assess the effect of Integrated Management of Childhood Illness (IMCI) case management training on the use of antimicrobial drugs among health-care workers treating young children at first-level facilities. Antimicrobial drugs are an essential child-survival intervention. Ensuring that children younger than five who need these drugs receive them promptly and correctly can save their lives. Prescribing these drugs only when necessary and ensuring that those who receive them complete the full course can slow the development of antimicrobial resistance. METHODS: Data collected through observation-based surveys in randomly selected first-level health facilities in Brazil, Uganda and the United Republic of Tanzania were statistically analysed. The surveys were carried out as part of the multi-country evaluation of IMCI effectiveness, cost and impact (MCE). FINDINGS: Results from three MCE sites show that children receiving care from health workers trained in IMCI are significantly more likely to receive correct prescriptions for antimicrobial drugs than those receiving care from workers not trained in IMCI. They are also more likely to receive the first dose of the drug before leaving the health facility, to have their caregiver advised how to administer the drug, and to have caregivers who are able to describe correctly how to give the drug at home as they leave the health facility. CONCLUSIONS: IMCI case management training is an effective intervention to improve the rational use of antimicrobial drugs for sick children visiting first-level health facilities in low-income and middle-income countries. PMID:15508195

  8. High-Performance Computing Data Center Power Usage Effectiveness

    Science.gov Websites

    When the Energy Systems Integration Facility (ESIF) was conceived, NREL set an aggressive power usage effectiveness (PUE) target for its high-performance computing data center. The PUE accounting captures heating, ventilation, and air conditioning (HVAC) loads, including the fan walls and fan coils that support the data center.
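
    The metric behind this fact sheet, power usage effectiveness, is total facility energy divided by IT equipment energy (1.0 is ideal). The sketch below computes it for invented annual figures chosen to land near the roughly 1.06 PUE that NREL has publicized for the ESIF data center.

        # Sketch: power usage effectiveness (PUE) calculation.
        def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
            """PUE = total facility energy / IT equipment energy (ideal is 1.0)."""
            return total_facility_kwh / it_equipment_kwh

        # Illustrative annual figures (invented), near ESIF's publicized ~1.06 PUE.
        print(round(pue(total_facility_kwh=5_300_000, it_equipment_kwh=5_000_000), 2))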

  9. Development of a model forecasting Dermanyssus gallinae's population dynamics for advancing Integrated Pest Management in laying hen facilities.

    PubMed

    Mul, Monique F; van Riel, Johan W; Roy, Lise; Zoons, Johan; André, Geert; George, David R; Meerburg, Bastiaan G; Dicke, Marcel; van Mourik, Simon; Groot Koerkamp, Peter W G

    2017-10-15

    The poultry red mite, Dermanyssus gallinae, is the most significant pest of egg-laying hens in many parts of the world. Control of D. gallinae could be greatly improved with advanced Integrated Pest Management (IPM) for D. gallinae in laying hen facilities. The development of a model forecasting the pest's population dynamics in laying hen facilities, both without treatment and post-treatment, will contribute to this advanced IPM and could consequently improve implementation of IPM by farmers. The current work describes the development and demonstration of a model which can follow and forecast the population dynamics of D. gallinae in laying hen facilities given the variation of the population growth of D. gallinae within and between flocks. This high variation could partly be explained by house temperature, flock age, treatment, and hen house. The total variation in population growth within and between flocks, however, was only in part explained by temporal variation; a substantial part remained unexplained. A dynamic adaptive model (DAP) was consequently developed, as models of this type are able to handle such temporal variations. The developed DAP model can forecast the population dynamics of D. gallinae, requiring only current flock population monitoring data, temperature data, and the dates of any D. gallinae treatment. Importantly, the DAP model forecasted treatment effects while compensating for location- and time-specific interactions, handling the variability of these parameters. The characteristics of this DAP model, and its compatibility with different mite monitoring methods, represent progress beyond existing approaches for forecasting D. gallinae and could contribute to advancing improved IPM for D. gallinae in laying hen facilities.
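
    A minimal stand-in for the forecasting idea described above, assuming weekly monitoring counts: estimate the recent exponential growth rate and project it forward. The real DAP model additionally adapts to temperature, flock age, and treatment dates; the window size and counts here are invented.

```python
# Crude adaptive forecast: average the recent log-growth rate of mite trap
# counts and extrapolate. Assumes weekly sampling and positive counts.
import math

def forecast_mites(counts: list[float], steps: int, window: int = 3) -> list[float]:
    recent = counts[-(window + 1):]
    rates = [math.log(b / a) for a, b in zip(recent, recent[1:])]
    r = sum(rates) / len(rates)          # mean growth rate over the window
    last = counts[-1]
    return [last * math.exp(r * (k + 1)) for k in range(steps)]

# Weekly trap counts before treatment, then a 4-week forecast:
print([round(x) for x in forecast_mites([120, 150, 190, 235, 300], steps=4)])
```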

  10. MANTECH project book

    NASA Astrophysics Data System (ADS)

    The Integration Technology Division (MTI) manages the computer-based integration of the processes, systems, and procedures used in the production of aerospace systems. Under its auspices are the Information Management Branch, which is actively involved with information management, information sciences and integration, and the Implementation Branch, whose technology areas include computer-integrated manufacturing, engineering design, operations research, and material handling and assembly. The Integration Technology Division combines design, manufacturing, and supportability functions within the same organization. The Processing and Fabrication Division manages programs to improve structural and nonstructural materials processing and fabrication. Within this division, the Metals Branch directs the manufacturing methods program for metals and metal matrix composites processing and fabrication. The Nonmetals Branch directs the manufacturing methods programs, which include all manufacturing processes for producing and utilizing propellants, plastics, resins, fibers, composites, fluid elastomers, ceramics, glasses, and coatings. The objective of the Industrial Base Analysis Division is to act as the focal point for the USAF industrial base program for productivity, responsiveness, and preparedness planning.

  11. Integrated management of thesis using clustering method

    NASA Astrophysics Data System (ADS)

    Astuti, Indah Fitri; Cahyadi, Dedy

    2017-02-01

    Thesis work is one of the major requirements for students pursuing a bachelor's degree. In practice, finishing a thesis involves a long process that includes consultation, manuscript writing, carrying out the chosen method, seminar scheduling, searching for references, and appraisal by the board of mentors and examiners. Unfortunately, most students find it hard to match all the lecturers' free time so they can sit together in a seminar room to examine the thesis. Therefore, the seminar scheduling process should be the top priority to solve. A manual mechanism for this task no longer fulfills the need. People on campus, including students, staff, and lecturers, demand a system in which all stakeholders can interact with each other and manage the thesis process without timetable conflicts. A branch of computer science named Management Information Systems (MIS) could provide a breakthrough in dealing with thesis management. This research applies a clustering method to distinguish categories using mathematical formulas. A system was then developed along with the method to create a well-managed tool providing key facilities such as seminar scheduling, consultation and review tracking, thesis approval, assessment, and a reliable database of theses. The database plays an important role for present and future purposes.
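
    A hedged sketch of the scheduling sub-problem the abstract highlights: finding a seminar slot that every mentor and examiner can attend by intersecting availability sets. The names and slots are invented for illustration; the paper's system combines this kind of matching with its clustering method.

```python
# Find seminar slots common to every board member's availability.
# Participants and time slots are hypothetical examples.

def common_slots(availability: dict[str, set[str]]) -> set[str]:
    """Intersect each participant's set of free time slots."""
    sets = iter(availability.values())
    result = set(next(sets))
    for s in sets:
        result &= s
    return result

board = {
    "mentor_1":   {"Mon-10", "Tue-13", "Wed-09"},
    "mentor_2":   {"Tue-13", "Wed-09", "Thu-10"},
    "examiner_1": {"Wed-09", "Tue-13"},
}
print(sorted(common_slots(board)))  # -> ['Tue-13', 'Wed-09']
```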

  12. HOLISTIC APPROACH TO ENVIRONMENTAL MANAGEMENT OF MUNICIPAL SOLID WASTE

    EPA Science Inventory

    The paper presents results from the application of a new municipal solid waste (MSW) management planning aid to EPA's new facility in the Research Triangle Park, NC. This planning aid, or decision support tool, is computer software that analyzes the cost and environmental impact ...

  13. The 10 MWe solar thermal central receiver pilot plant solar facilities design integration, RADL item 1-10

    NASA Astrophysics Data System (ADS)

    1980-08-01

    Work on the plant support subsystems and engineering services is reported. The master control system, thermal storage subsystem, receiver unit, and the beam characterization system were reviewed. Progress in program management and system integration is highlighted.

  14. POLLUTION PREVENTION OPPORTUNITY ASSESSMENT - U.S. POSTAL SERVICE OPERATIONS, MERRIFIELD, VIRGINIA

    EPA Science Inventory

    The United States Postal Service (USPS) in cooperation with EPA’s National Risk Management Research Laboratory (NRMRL) is engaged in an effort to integrate waste prevention and recycling activities into the waste management programs at Postal facilities. This report describ...

  15. Ambient air monitoring of Beijing MSW logistics facilities in 2006.

    PubMed

    Li, Chun-Ping; Li, Guo-Xue; Luo, Yi-Ming; Li, Yan-Fu

    2008-11-01

    In China, "green" integrated waste management methods are being implemented in response to environmental concerns. We measured the air quality at several municipal solid waste (MSW) sites to provide information for the incorporation of logistics facilities within the current integrated waste management system. We monitored ambient air quality at eight MSW collecting stations, five transfer stations, one composting plant, and five disposal sites in Beijing during April 2006. Composite air samples were collected and analyzed for levels of odor, ammonia (NH3), hydrogen sulfide (H2S), total suspended particles (TSPs), carbon monoxide (CO), sulfur dioxide (SO2), and nitrogen dioxide (NO2). The results of our atmospheric monitoring demonstrated that although CO and SO2 were within acceptable emission levels according to ambient standards, levels of H2S, TSPs, and NO2 in the ambient air at most MSW logistics facilities far exceeded ambient limits established for China. The primary pollutants in the ambient air at Beijing MSW logistics facilities were H2S, TSPs, NO2, and odor. To improve current environmental conditions at MSW logistics facilities, the Chinese government encourages the separation of biogenic waste from MSW at the source.
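
    A hedged sketch of the exceedance screening this kind of monitoring implies: compare measured concentrations against ambient limits and report the ratios. The limit values below are placeholders, not China's actual ambient standards.

```python
# Screen a composite air sample against ambient limits (mg/m^3).
# Limits here are placeholder values for illustration only.

LIMITS_MG_M3 = {"H2S": 0.01, "TSP": 0.30, "NO2": 0.12, "SO2": 0.50, "CO": 10.0}

def exceedances(sample: dict[str, float]) -> dict[str, float]:
    """Return pollutant -> measured/limit ratio for every exceedance."""
    return {p: round(v / LIMITS_MG_M3[p], 2)
            for p, v in sample.items()
            if p in LIMITS_MG_M3 and v > LIMITS_MG_M3[p]}

transfer_station = {"H2S": 0.05, "TSP": 0.41, "NO2": 0.15, "SO2": 0.08, "CO": 2.1}
print(exceedances(transfer_station))  # H2S, TSP, NO2 exceed; CO and SO2 do not
```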

  16. Associations Among Health Care Workplace Safety, Resident Satisfaction, and Quality of Care in Long-Term Care Facilities.

    PubMed

    Boakye-Dankwa, Ernest; Teeple, Erin; Gore, Rebecca; Punnett, Laura

    2017-11-01

    We performed an integrated cross-sectional analysis of relationships between long-term care work environments, employee and resident satisfaction, and quality of patient care. Facility-level data came from a network of 203 skilled nursing facilities in 13 states in the eastern United States owned or managed by one company. K-means cluster analysis was applied to investigate clustered associations between safe resident handling program (SRHP) performance, resident care outcomes, employee satisfaction, rates of workers' compensation claims, and resident satisfaction. Facilities in the better-performing cluster were found to have better patient care outcomes and resident satisfaction; lower rates of workers' compensation claims; better SRHP performance; higher employee retention; and greater worker job satisfaction and engagement. The observed clustered relationships support the utility of integrated performance assessment in long-term care facilities.
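
    A minimal from-scratch k-means sketch in the spirit of the study's analysis: grouping facilities by two illustrative metrics, workers' compensation claim rate and resident satisfaction. The data points and k = 2 are invented, not the study's dataset.

```python
# Tiny k-means over 2-D facility metrics: (claim rate, satisfaction score).
import math, random

def kmeans(points, k=2, iters=20, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest center
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)
        centers = [tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

facilities = [(2.1, 88), (1.9, 91), (2.3, 85), (5.2, 70), (4.8, 74), (5.5, 68)]
centers, groups = kmeans(facilities)
print(centers)  # one low-claims/high-satisfaction cluster, one the reverse
```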

  17. Operator Finds Control at His Fingertips.

    ERIC Educational Resources Information Center

    Goscicki, Edward

    1979-01-01

    Discussed are the advantages associated with the use of computer systems in wastewater treatment facilities. The system parallels plant organization and considers operations, maintenance, and plant management. (CS)

  18. Final-Approach-Spacing Subsystem For Air Traffic

    NASA Technical Reports Server (NTRS)

    Davis, Thomas J.; Erzberger, Heinz; Bergeron, Hugh

    1992-01-01

    Automation subsystem of computers, computer workstations, communication equipment, and radar helps air-traffic controllers in terminal radar approach-control (TRACON) facility manage sequence and spacing of arriving aircraft for both efficiency and safety. Called FAST (Final Approach Spacing Tool), subsystem enables controllers to choose among various levels of automation.
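
    One spacing computation a tool of this kind must make is converting a required in-trail separation distance into a time gap at the trailing aircraft's ground speed. A minimal sketch with illustrative values; FAST's actual sequencing and spacing algorithms are more elaborate.

```python
# Convert a required in-trail separation (nautical miles) into the time
# gap the trailing aircraft needs at its approach ground speed (knots).

def time_gap_seconds(separation_nmi: float, trail_speed_knots: float) -> float:
    """Time for the trailing aircraft to cover the required separation."""
    return separation_nmi / trail_speed_knots * 3600.0

# Example: 4 nmi behind a heavy, trailing aircraft at 140 knots ground speed.
print(round(time_gap_seconds(4.0, 140.0)))  # -> 103 seconds
```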

  19. TQM in a Computer Lab.

    ERIC Educational Resources Information Center

    Swanson, Dewey A.; Phillips, Julie A.

    At the Purdue University School of Technology (PST) at Columbus, Indiana, the Total Quality Management (TQM) philosophy was used in the computer laboratories to better meet student needs. A customer satisfaction survey was conducted to gather data on lab facilities, lab assistants, and hardware/software; other sections of the survey included…

  20. Challenges and opportunities of integration of community based Management of Acute Malnutrition into the government health system in Bangladesh: a qualitative study.

    PubMed

    Ireen, Santhia; Raihan, Mohammad Jyoti; Choudhury, Nuzhat; Islam, M Munirul; Hossain, Md Iqbal; Islam, Ziaul; Rahman, S M Mustafizur; Ahmed, Tahmeed

    2018-04-10

    Severe acute malnutrition (SAM) in children is the most serious form of malnutrition and is associated with very high rates of morbidity and mortality. For sustainable SAM management, the United Nations recommends integration of community-based management of acute malnutrition (CMAM) into the health system. The objective of the study was to assess the preparedness of the health system to implement CMAM in Bangladesh. The assessment was undertaken from January to May 2014 through document review, key informant interviews, and direct observation. A total of 38 key informant interviews were conducted among government policy makers and program managers (n = 4), nutrition experts (n = 2), health and nutrition implementing partners (n = 2), development partner (n = 1), government health system staff (n = 5), government front-line field workers (n = 22), and community members (n = 2). The assessment was based on: workforce, service delivery, financing, governance, information systems, medical supplies, and the broad socio-political context. The government of Bangladesh has developed inpatient and outpatient guidelines for the management of SAM. There are cadres of community health workers of government and non-government actors who can be adequately trained to conduct CMAM. Inpatient management of SAM is available in 288 facilities across the country. However, only 2.7% of doctors and 3.3% of auxiliary staff are trained on facility-based management of SAM. In functional facilities, uninterrupted supply of medicines and therapeutic diet are not available. There is resistance and disagreement among nutrition stakeholders regarding import or local production of ready-to-use therapeutic food (RUTF). Nutrition coordination is fragile and there is no functional supra-ministerial coordination platform for multi-sectoral and multi-stakeholder nutrition. There is an enabling environment for CMAM intervention in Bangladesh, although health system strengthening is needed considering the barriers that have been identified. Training of facility-based health staff and government community workers, and ensuring uninterrupted supply of medicines and logistics to the functional facilities, should be the immediate priorities. Availability of RUTF is a critical component of CMAM, and the government should promote in-country production of RUTF for effective integration of CMAM into the health system in Bangladesh.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Currie, Bob; Miller, Jeremiah; Anderson, Art

    Smarter Grid Solutions used the National Renewable Energy Laboratory’s (NREL’s) simulation capabilities at the Energy Systems Integration Facility to expand its Active Network Management technology for smart campus power control.

  2. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diachin, L F; Garaizar, F X; Henson, V E

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  3. Object migration and authentication. [in computer operating systems design

    NASA Technical Reports Server (NTRS)

    Gligor, V. D.; Lindsay, B. G.

    1979-01-01

    The paper presents a mechanism permitting a type manager to fabricate a migrated object representation which can be entrusted to other subsystems or transmitted outside of the control of a local computer system. The migrated object representation is signed by the type manager in such a way that the type manager's signature cannot be forged and the manager is able to authenticate its own signature. Subsequently, the type manager can retrieve the migrated representation and validate its contents before reconstructing the object in its original representation. This facility allows type managers to authenticate the contents of off-line or network storage and solves problems stemming from the hierarchical structure of the system itself.

  4. IPAD 2: Advances in Distributed Data Base Management for CAD/CAM

    NASA Technical Reports Server (NTRS)

    Bostic, S. W. (Compiler)

    1984-01-01

    The Integrated Programs for Aerospace-Vehicle Design (IPAD) Project objective is to improve engineering productivity through better use of computer-aided design and manufacturing (CAD/CAM) technology. The focus is on development of technology and associated software for integrated company-wide management of engineering information. The objectives of this conference are as follows: to provide a greater awareness of the critical need by U.S. industry for advancements in distributed CAD/CAM data management capability; to present industry experiences and current and planned research in distributed data base management; and to summarize IPAD data management contributions and their impact on U.S. industry and computer hardware and software vendors.

  5. Nasreya: a treatment and disposal facility for industrial hazardous waste in Alexandria, Egypt: phase I.

    PubMed

    Ramadan, Adham R; Kock, Per; Nadim, Amani

    2005-04-01

    A facility for the treatment and disposal of industrial hazardous waste has been established in Alexandria, Egypt. Phase I of the facility, encompassing a secure landfill and solar evaporation ponds, is ready to receive waste, and Phase II, encompassing physico-chemical treatment, solidification, and interim storage, is underway. The facility, the Nasreya Centre, is the first of its kind in Egypt, and represents the nucleus for the integration, improvement and further expansion of different hazardous waste management practices and services in Alexandria. It has been developed within the overall legal framework of the Egyptian Law for the Environment, and is expected to improve prospects for enforcement of the regulatory requirements specified in this law. It has been developed with the overall aim of promoting the establishment of an integrated industrial hazardous waste management system in Alexandria, serving as a demonstration to be replicated elsewhere in Egypt. For Phase I, the Centre only accepts inorganic industrial wastes. In this respect, a waste acceptance policy has been developed, which is expected to be reviewed during Phase II, with an expansion of the waste types accepted.

  6. Estimation of water withdrawal and distribution, water use, and wastewater collection and return flow in Cumberland, Rhode Island, 1988

    USGS Publications Warehouse

    Horn, M.A.; Craft, P.A.; Bratton, Lisa

    1994-01-01

    Water-use data collected in Rhode Island by different State agencies or maintained by different public suppliers and wastewater-treatment facilities need to be integrated if these data are to be used in making water-resource management decisions. Water-use data for the town of Cumberland, a small area in northeastern Rhode Island, were compiled and integrated to provide an example of how the procedure could be applied. Integration and reliability assessment of water-use data could be facilitated if public suppliers, wastewater-treatment facilities, and State agencies used a number of standardized procedures for data collection and computer storage. The total surface water and ground water withdrawn in the town of Cumberland during 1988 is estimated to be 15.39 million gallons per day, of which 11.20 million gallons per day was exported to other towns. Water use in Cumberland included 2.51 million gallons per day for domestic use, 0.68 million gallons per day for industrial use, 0.27 million gallons per day for commercial use, and 0.73 million gallons per day for other use, most of which was unmetered use. Disposal of wastewater in Cumberland included 2.03 million gallons per day returned to the hydrologic system and 1.73 million gallons per day exported from Cumberland for wastewater treatment. Consumptive use during 1988 is estimated to be 0.43 million gallons per day.
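
    The reported figures are internally consistent, which a quick balance check confirms (all values in million gallons per day, as given above):

```python
# Balance check on the 1988 Cumberland figures reported in the abstract.
withdrawn = 15.39
exported_water = 11.20
uses = {"domestic": 2.51, "industrial": 0.68, "commercial": 0.27, "other": 0.73}
returned = 2.03
exported_wastewater = 1.73

in_town_use = sum(uses.values())               # 4.19 MGD used locally
print(round(withdrawn - exported_water, 2))    # 4.19 MGD retained in town
# Local use minus wastewater returned and exported equals consumptive use:
print(round(in_town_use - returned - exported_wastewater, 2))  # -> 0.43
```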

  7. Analysis of Department of Defense Organic Depot Maintenance Capacity Management and Facility Utilization Factors

    DTIC Science & Technology

    1991-09-01

    ... System (CAPMS) in lieu of using DODI 4151.15H. Facility utilization rate computation is not explicitly defined; it is merely identified as a ratio of ... front of a bottleneck buffers the critical resource and protects against disruption of the system. This approach optimizes facility utilization by ... run titled BUFFERED BASELINE. Three different levels of inventory were used to evaluate the effect of increasing the inventory level on critical ...

  8. Enterprise-wide worklist management.

    PubMed

    Locko, Roberta C; Blume, Hartwig; Goble, John C

    2002-01-01

    Radiologists in multi-facility health care delivery networks must serve not only their own departments but also departments of associated clinical facilities. We describe our experience with a picture archiving and communication system (PACS) implementation that provides a dynamic view of relevant radiological workload across multiple facilities. We implemented a distributed query system that permits management of enterprise worklists based on modality, body part, exam status, and other criteria that span multiple compatible PACSs. Dynamic worklists, with lesser flexibility, can be constructed if the incompatible PACSs support specific DICOM functionality. Enterprise-wide worklists were implemented across Generations Plus/Northern Manhattan Health Network, linking radiology departments of three hospitals (Harlem, Lincoln, and Metropolitan) with 1465 beds and 4260 ambulatory patients per day. Enterprise-wide, dynamic worklist management improves utilization of radiologists and enhances the quality of care across large multi-facility health care delivery organizations. Integration of other workflow-related components remains a significant challenge.
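
    A hedged sketch of the worklist filtering described above: selecting studies by modality, exam status, and other criteria across facilities. The records and field names are invented; a real implementation would issue DICOM queries against each PACS rather than filter an in-memory list.

```python
# Filter an enterprise worklist on arbitrary exact-match criteria.
# Records and field names are hypothetical placeholders.

worklist = [
    {"facility": "Harlem",       "modality": "CR", "body_part": "CHEST", "status": "UNREAD"},
    {"facility": "Lincoln",      "modality": "CT", "body_part": "HEAD",  "status": "UNREAD"},
    {"facility": "Metropolitan", "modality": "CR", "body_part": "CHEST", "status": "READ"},
]

def filter_worklist(items, **criteria):
    """Keep items whose fields match every supplied criterion."""
    return [it for it in items
            if all(it.get(k) == v for k, v in criteria.items())]

print(filter_worklist(worklist, modality="CR", status="UNREAD"))
```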

  9. Computer-aided dispatch--traffic management center field operational test : Washington State final report

    DOT National Transportation Integrated Search

    2006-05-01

    This document provides the final report for the evaluation of the USDOT-sponsored Computer-Aided Dispatch - Traffic Management Center Integration Field Operations Test in the State of Washington. The document discusses evaluation findings in the foll...

  10. GIS Facility and Services at the Ronald Greeley Center for Planetary Studies

    NASA Astrophysics Data System (ADS)

    Nelson, D. M.; Williams, D. A.

    2017-06-01

    At the RGCPS, we established a Geographic Information Systems (GIS) computer laboratory, where we instruct researchers how to use GIS and image processing software. Seminars demonstrate viewing, integrating, and digitally mapping planetary data.

  11. Assessing healthcare market trends and capital needs: 1996-2000.

    PubMed

    Coile, R C

    1995-08-01

    An analysis of recent data suggests several significant trends for the next five years, including a continuation of market-based reform, increases in managed care penetration, growth of Medicare and Medicaid health maintenance organizations, and erosion of hospital profits. A common response to these trends is to create integrated delivery systems, which can require significant capital investment. The wisest capital investment strategy may be to avoid asset-based integration in favor of "virtual integration," which emphasizes coordination through patient-management agreements, provider incentives, and information systems, rather than investment in a large number of facilities.

  12. Development of automated electromagnetic compatibility test facilities at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Harrison, Cecil A.

    1986-01-01

    The efforts to automate the electromagnetic compatibility (EMC) test facilities at Marshall Space Flight Center were examined. A battery of nine standard tests is to be integrated by means of a desktop computer-controller in order to provide near real-time data assessment, store the data acquired during testing on flexible disk, and provide computer production of the certification report.

  13. Bug Off

    ERIC Educational Resources Information Center

    Copps, Patrick T.

    2007-01-01

    Insects and rodents in education facilities can cause structural damage, and they carry diseases that threaten food safety and the health of students and employees. To effectively prevent infestations and manage pests in a safe and eco-sensitive manner, many schools turn to integrated pest management (IPM) programs that emphasize environmentally…

  14. Bethany Sparn | NREL

    Science.gov Websites

    Bethany Sparn, Researcher IV, Systems Engineering (Bethany.Sparn@nrel.gov). Her research areas include residential HVAC equipment, heat pump water heaters, automated home energy management devices, and whole-house systems at the Energy Systems Integration Facility, which provides a test bed for evaluating home energy management.

  15. POLLUTION PREVENTION OPPORTUNITY ASSESSMENT - U.S. POSTAL SERVICE POST OFFICES, PITTSBURGH, PA AREA

    EPA Science Inventory

    The United States Postal Service (USPS) in cooperation with EPA’s National Risk Management Research Laboratory (NRMRL) is engaged in an effort to integrate waste prevention and recycling activities into the waste management programs at Postal facilities. This report describ...

  16. POLLUTION PREVENTION OPPORTUNITY ASSESSMENT - U.S. POSTAL SERVICE BULK MAIL CENTER, DALLAS, TEXAS

    EPA Science Inventory

    The United States Postal Service (USPS) in cooperation with EPA’s National Risk Management Research Laboratory (NRMRL) is engaged in an effort to integrate waste prevention and recycling activities into the waste management programs at Postal facilities. This report describ...

  17. Management of CAD/CAM information: Key to improved manufacturing productivity

    NASA Technical Reports Server (NTRS)

    Fulton, R. E.; Brainin, J.

    1984-01-01

    A key element to improved industry productivity is effective management of CAD/CAM information. To stimulate advancements in this area, a joint NASA/Navy/Industry project designated Integrated Programs for Aerospace-Vehicle Design (IPAD) is underway with the goal of raising aerospace industry productivity through advancement of technology to integrate and manage information involved in the design and manufacturing process. The project complements traditional NASA/DOD research to develop aerospace design technology and the Air Force's Integrated Computer-Aided Manufacturing (ICAM) program to advance CAM technology. IPAD research is guided by an Industry Technical Advisory Board (ITAB) composed of over 100 representatives from aerospace and computer companies. The IPAD accomplishments to date in development of requirements and prototype software for various levels of company-wide CAD/CAM data management are summarized, and plans for development of technology for management of distributed CAD/CAM data and information required to control future knowledge-based CAD/CAM systems are discussed.

  18. Guidelines for developing NASA (National Aeronautics and Space Administration) ADP security risk management plans

    NASA Technical Reports Server (NTRS)

    Tompkins, F. G.

    1983-01-01

    This report presents guidance to NASA computer security officials for developing ADP security risk management plans. The six components of the risk management process are identified and discussed. Guidance is presented on how to manage security risks that have been identified during a risk analysis performed at a data processing facility or during the security evaluation of an application system.

  19. Integrated Component-based Data Acquisition Systems for Aerospace Test Facilities

    NASA Technical Reports Server (NTRS)

    Ross, Richard W.

    2001-01-01

    The Multi-Instrument Integrated Data Acquisition System (MIIDAS), developed by the NASA Langley Research Center, uses commercial off the shelf (COTS) products, integrated with custom software, to provide a broad range of capabilities at a low cost throughout the system's entire life cycle. MIIDAS combines data acquisition capabilities with online and post-test data reduction computations. COTS products lower purchase and maintenance costs by reducing the level of effort required to meet system requirements. Object-oriented methods are used to enhance modularity, encourage reusability, and to promote adaptability, reducing software development costs. Using only COTS products and custom software supported on multiple platforms reduces the cost of porting the system to other platforms. The post-test data reduction capabilities of MIIDAS have been installed at four aerospace testing facilities at NASA Langley Research Center. The systems installed at these facilities provide a common user interface, reducing the training time required for personnel that work across multiple facilities. The techniques employed by MIIDAS enable NASA to build a system with a lower initial purchase price and reduced sustaining maintenance costs. With MIIDAS, NASA has built a highly flexible next generation data acquisition and reduction system for aerospace test facilities that meets customer expectations.

  20. Integrated approach to modeling long-term durability of concrete engineered barriers in LLRW disposal facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, J.H.; Roy, D.M.; Mann, B.

    1995-12-31

    This paper describes an integrated approach to developing a predictive computer model for long-term performance of concrete engineered barriers utilized in LLRW and ILRW disposal facilities. The model development concept consists of three major modeling schemes: hydration modeling of the binder phase, pore solution speciation, and transport modeling in the concrete barrier and service environment. Although still at an early stage, the model development approach demonstrated that the chemical and physical properties of complex cementitious materials and their interactions with service environments can be described quantitatively. Applying the integrated model development approach to modeling alkali (Na and K) leaching from a concrete pad barrier in an above-grade tumulus disposal unit, it is predicted that, in a near-surface land disposal facility where water infiltration through the facility is normally minimal, the alkalis control the pore solution pH of the concrete barriers for much longer than most previous concrete barrier degradation studies assumed. The results also imply that a highly alkaline condition created by the alkali leaching will result in alteration of the soil mineralogy in the vicinity of the disposal facility.
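
    A minimal sketch of the transport-modeling piece in isolation: explicit finite-difference diffusion of a normalized alkali concentration out of a concrete slab, with a leached boundary at the exposed face. The diffusivity and geometry are placeholder values; the paper's integrated model couples transport to hydration and pore-solution speciation, which this sketch omits.

```python
# 1-D explicit finite-difference diffusion of normalized alkali
# concentration out of a concrete slab. All parameters are assumptions.

D = 1.0e-12          # m^2/s, assumed effective diffusivity of Na/K
L, n = 0.30, 30      # 0.30 m slab discretized into 30 cells
dx = L / n
dt = 0.4 * dx * dx / D          # stable time step for the explicit scheme
c = [1.0] * n                   # initial concentration, normalized to 1

years = 100
steps = int(years * 3.15e7 / dt)
for _ in range(steps):
    new = c[:]
    new[0] = 0.0                            # leached (exposed) face
    for i in range(1, n - 1):
        new[i] = c[i] + D * dt / dx**2 * (c[i-1] - 2*c[i] + c[i+1])
    new[-1] = new[-2]                       # no-flux inner face
    c = new

print(f"fraction of alkali remaining after {years} y: {sum(c)/n:.2f}")
```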

  1. Priority setting and the ethics of resource allocation within VA healthcare facilities: results of a survey.

    PubMed

    Foglia, Mary Beth; Pearlman, Robert A; Bottrell, Melissa M; Altemose, Jane A; Fox, Ellen

    2008-01-01

    Setting priorities and the subsequent allocation of resources is a major ethical issue facing healthcare facilities, including the Veterans Health Administration (VHA), the largest integrated healthcare delivery network in the United States. Yet despite the importance of priority setting and its impact on those who receive and those who provide care, we know relatively little about how clinicians and managers view allocation processes within their facilities. The purpose of this secondary analysis of survey data was to characterize staff members' perceptions regarding the fairness of healthcare ethics practices related to resource allocation in Veterans Administration (VA) facilities. The specific aim of the study was to compare the responses of clinicians, clinician managers, and non-clinician managers with respect to these survey items. We utilized a paper- and web-based survey and a cross-sectional design of VHA clinicians and managers. Our sample consisted of a purposive stratified sample of 109 managers and a stratified random sample of 269 clinicians employed 20 or more hours per week in one of four VA medical centers. The four medical centers were participating as field sites selected to test the logistics of administering and reporting results of the Integrated Ethics Staff Survey, an assessment tool aimed at characterizing a broad range of ethical practices within a healthcare organization. In general, clinicians were more critical than clinician managers or non-clinician managers of the institutions' allocation processes and of the impact of resource decisions on patient care. Clinicians commonly reported that they did not (a) understand their facility's decision-making processes, (b) receive explanations from management regarding the reasons behind important allocation decisions, or (c) perceive that they were influential in allocation decisions. In addition, clinicians and managers both perceived that education related to the ethics of resource allocation was insufficient and that their facilities could increase their effectiveness in identifying and resolving ethical problems related to resource allocation. How well a healthcare facility ensures fairness in the way it allocates its resources across programs and services depends on multiple factors, including awareness by decision makers that setting priorities and allocating resources is a moral enterprise (moral awareness), the availability of a consistent process that includes important stakeholder groups (procedural justice), and concurrence by stakeholders that decisions represent outcomes that fairly balance competing interests and have a positive net effect on the quality of care (distributive justice). In this study, clinicians and managers alike identified the need for improvement in healthcare ethics practices related to resource allocation.

  2. IPAD project overview

    NASA Technical Reports Server (NTRS)

    Fulton, R. E.

    1980-01-01

    To respond to national needs for improved productivity in engineering design and manufacturing, a NASA supported joint industry/government project is underway denoted Integrated Programs for Aerospace-Vehicle Design (IPAD). The objective is to improve engineering productivity through better use of computer technology. It focuses on development of technology and associated software for integrated company-wide management of engineering information. The project has been underway since 1976 under the guidance of an Industry Technical Advisory Board (ITAB) composed of representatives of major engineering and computer companies and in close collaboration with the Air Force Integrated Computer-Aided Manufacturing (ICAM) program. Results to date on the IPAD project include an in-depth documentation of a representative design process for a large engineering project, the definition and design of computer-aided design software needed to support that process, and the release of prototype software to integrate selected design functions. Ongoing work concentrates on development of prototype software to manage engineering information, and initial software is nearing release.

  3. Automation of electromagnetic compatability (EMC) test facilities

    NASA Technical Reports Server (NTRS)

    Harrison, C. A.

    1986-01-01

    Efforts to automate electromagnetic compatibility (EMC) test facilities at Marshall Space Flight Center are discussed. The present facility is used to accomplish a battery of nine standard tests (with limited variations) designed to certify EMC of Shuttle payload equipment. Prior to this project, some EMC tests were partially automated, but others were performed manually. Software was developed to integrate all testing by means of a desk-top computer-controller. Near real-time data reduction and onboard graphics capabilities permit immediate assessment of test results. Provisions for disk storage of test data permit computer production of the test engineer's certification report. Software flexibility permits variation in the test procedures, the ability to examine more closely those frequency bands which indicate compatibility problems, and the capability to incorporate additional test procedures.

  4. Computer-aided dispatch--traffic management center field operational test : state of Utah final report

    DOT National Transportation Integrated Search

    2006-07-01

    This document provides the final report for the evaluation of the USDOT-sponsored Computer-Aided Dispatch Traffic Management Center Integration Field Operations Test in the State of Utah. The document discusses evaluation findings in the followin...

  5. Kevin Regimbal | NREL

    Science.gov Websites

    Kevin Regimbal oversees NREL's High Performance Computing (HPC) Systems & Operations, spanning engineering and operations. He is interested in data center design and computing, as well as data center integration and optimization. Professional experience: HPC oversight as program manager, project manager, center…

  6. 26 CFR 1.141-5 - Private loan financing test.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... lease or other contractual arrangement (for example, a management contract or an output contract) may in... person. Similarly, an output contract or a management contract with respect to a financed facility... (2) Updates or maintenance or support services with respect to computer software; and (B) The same...

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, K.L.

    This document has been developed to provide guidance in the interchange of electronic CAD data with Martin Marietta Energy Systems, Inc., Oak Ridge, Tennessee. It is not meant to be as comprehensive as the existing standards and specifications, but to provide a minimum set of practices that will enhance the success of the CAD data exchange. It is now a Department of Energy (DOE) Oak Ridge Field Office requirement that Architect-Engineering (A-E) firms prepare all new drawings using a Computer Aided Design (CAD) system that is compatible with the Facility Manager's (FM) CAD system. For Oak Ridge facilities, the CAD system used for facility design by the FM, Martin Marietta Energy Systems, Inc., is Intergraph. The format for interchange of CAD data for Oak Ridge facilities will be the Intergraph MicroStation/IGDS format.

  8. POLLUTION PREVENTION OPPORTUNITY ASSESSMENT - U.S. POSTAL SERVICE MATERIALS DISTRIBUTION CENTER, TOPEKA, KANSAS

    EPA Science Inventory

    The United States Postal Service (USPS) in cooperation with EPA's National Risk Management Research Laboratory (NRMRL) is engaged in an effort to integrate waste prevention and recycling activities into the waste management programs at Postal facilities. In this report, the findi...

  9. POLLUTION PREVENTION OPPORTUNITY ASSESSMENT - U.S. POSTAL SERVICE STAMP DISTRIBUTION NETWORK, KANSAS CITY, MISSOURI

    EPA Science Inventory

    The United States Postal Service (USPS) in cooperation with EPA's National Risk Management Research Laboratory (NRMRL) is engaged in an effort to integrate waste prevention and recycling activities into the waste management programs at Postal facilities. In this report, the findi...

  10. High-Performance Computing in Neuroscience for Data-Driven Discovery, Integration, and Dissemination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouchard, Kristofer E.; Aimone, James B.; Chun, Miyoung

    A lack of coherent plans to analyze, manage, and understand data threatens the various opportunities offered by new neuro-technologies. High-performance computing will allow exploratory analysis of massive datasets stored in standardized formats, hosted in open repositories, and integrated with simulations.

  11. High-Performance Computing in Neuroscience for Data-Driven Discovery, Integration, and Dissemination

    DOE PAGES

    Bouchard, Kristofer E.; Aimone, James B.; Chun, Miyoung; ...

    2016-11-01

    A lack of coherent plans to analyze, manage, and understand data threatens the various opportunities offered by new neuro-technologies. High-performance computing will allow exploratory analysis of massive datasets stored in standardized formats, hosted in open repositories, and integrated with simulations.

  12. Borehole Disposal and the Cradle-To-Grave Management Program for Radioactive Sealed Sources in Egypt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, J.R.; Carson, S.D.; El-Adham, K.

    2006-07-01

    The Integrated Management Program for Radioactive Sealed Sources (IMPRSS) is greatly improving the management of radioactive sealed sources (RSSs) in Egypt. When completed, IMPRSS will protect the people and the environment from another radioactive incident. The Government of Egypt and Sandia National Laboratories are collaboratively implementing IMPRSS. The integrated activities are divided into three broad areas: the safe management of RSSs in use, the safe management of unwanted RSSs, and crosscutting infrastructure. Taken together, these work elements comprise a cradle-to-grave program. To ensure sustainability, the IMPRSS emphasizes such activities as human capacity development through technology transfer and training, and development of a disposal facility. As a key step in the development of a disposal facility, IMPRSS is conducting a safety assessment for intermediate-depth borehole disposal in thick arid alluvium in Egypt based on experience with the U.S.'s Greater Confinement Disposal boreholes. This safety assessment of borehole disposal is being supported by the International Atomic Energy Agency (IAEA) through an IAEA Technical Cooperation Project.

  13. Development of an integrated transuranic waste management system for a large research facility: NUCEF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mineo, Hideaki; Matsumura, Tatsuro; Takeshita, Isao

    1997-03-01

    The Nuclear Fuel Cycle Safety Engineering Research Facility (NUCEF) is a large complex of research facilities where transuranic (TRU) elements are used. Liquid and solid waste containing TRU elements is generated mainly in the treatment of fuel for critical experiments and in the research of reprocessing and TRU waste management in hot cells and glove boxes. The rational management of TRU wastes is a very important issue not only for NUCEF but also for Japan. An integrated TRU waste management system is being developed with NUCEF as the test bed. The basic policy for establishing the system is to classify wastes by TRU concentration, to reduce waste volume, and to maximize reuse of TRU elements. The principal approach of the development program is to apply the outcomes of the research carried out in NUCEF. Key technologies are TRU measurement for classification of solid wastes and TRU separation and volume reduction for organic and aqueous wastes. Some technologies required for treating the wastes specific to the research activities in NUCEF need further development. Specifically, the separation and stabilization technologies for americium recovery from concentrated aqueous waste, which is generated in dissolution of mixed oxide when preparing fuel for critical experiments, need further research.

  14. ODU-CAUSE: Computer Based Learning Lab.

    ERIC Educational Resources Information Center

    Sachon, Michael W.; Copeland, Gary E.

    This paper describes the Computer Based Learning Lab (CBLL) at Old Dominion University (ODU) as a component of the ODU-Comprehensive Assistance to Undergraduate Science Education (CAUSE) Project. Emphasis is directed to the structure and management of the facility and to the software under development by the staff. Serving the ODU-CAUSE User Group…

  15. Integrated software system for low level waste management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Worku, G.

    1995-12-31

    In the continually changing and uncertain world of low level waste management, many generators in the US are faced with the prospect of having to store their waste on site for the indefinite future. This consequently increases the set of tasks performed by the generators in the areas of packaging, characterizing, classifying, screening (if a set of acceptance criteria applies), and managing the inventory for the duration of onsite storage. When disposal sites become available, it is expected that the work will require re-evaluating the waste packages, including possible re-processing, re-packaging, or re-classifying in preparation for shipment for disposal under the regulatory requirements of the time. In this day and age, when there is wide use of computers and computer literacy is at high levels, an important waste management tool would be an integrated software system that aids waste management personnel in conducting these tasks quickly and accurately. It has become evident that such an integrated radwaste management software system offers great benefits to radwaste generators both in the US and other countries. This paper discusses one such approach to integrated radwaste management utilizing some globally accepted radiological assessment software applications.
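
    A hedged sketch of the concentration-based classification step mentioned above. The 10 and 100 nCi/g breakpoints follow the familiar 10 CFR 61.55 limits for long-lived alpha-emitting transuranic nuclides and are used purely as illustrative thresholds; the paper's actual classification rules are not given in the abstract.

```python
# Classify a waste package by TRU concentration (nCi/g). Thresholds follow
# the 10 CFR 61.55 limits for long-lived alpha-emitting TRU nuclides and
# are illustrative, not the software system's actual acceptance criteria.

def classify_tru(conc_nci_per_g: float) -> str:
    if conc_nci_per_g <= 10:
        return "Class A"
    if conc_nci_per_g <= 100:
        return "Class C"
    return "TRU waste - exceeds near-surface disposal limits"

for drum in (4.0, 55.0, 320.0):
    print(f"{drum:6.1f} nCi/g -> {classify_tru(drum)}")
```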

  16. Integrated exhaust gas analysis system for aircraft turbine engine component testing

    NASA Technical Reports Server (NTRS)

    Summers, R. L.; Anderson, R. C.

    1985-01-01

    An integrated exhaust gas analysis system was designed and installed in the hot-section facility at the Lewis Research Center. The system is designed to operate either manually or automatically and also to be operated from a remote station. The system measures oxygen, water vapor, total hydrocarbons, carbon monoxide, carbon dioxide, and oxides of nitrogen. Two microprocessors control the system and the analyzers, collect data and process them into engineering units, and present the data to the facility computers and the system operator. Within the design of this system there are innovative concepts and procedures that are of general interest and application to other gas analysis tasks.

  17. NASA Lewis Wind Tunnel Model Systems Criteria

    NASA Technical Reports Server (NTRS)

    Soeder, Ronald H.; Haller, Henry C.

    1994-01-01

    This report describes criteria for the design, analysis, quality assurance, and documentation of models or test articles that are to be tested in the aeropropulsion facilities at the NASA Lewis Research Center. The report presents three methods for computing model allowable stresses on the basis of the yield stress or ultimate stress, and it gives quality assurance criteria for models tested in Lewis' aeropropulsion facilities. Both customer-furnished model systems and in-house model systems are discussed. The functions of the facility manager, project engineer, operations engineer, research engineer, and facility electrical engineer are defined. The format for pretest meetings, prerun safety meetings, and the model criteria review is outlined. Then, the format for the model systems report (a requirement for each model that is to be tested at NASA Lewis) is described, the engineers who are responsible for developing the model systems report are listed, and the timetable for its delivery to the facility manager is given.
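
    A minimal sketch of an allowable-stress computation of the kind the report describes: the governing allowable is the lower of the yield and ultimate strengths divided by their respective safety factors. The factors of 3 on yield and 4 on ultimate are illustrative assumptions, not necessarily the report's criteria.

```python
# Allowable stress as the minimum of factored yield and ultimate strengths.
# Safety factors are assumed values for illustration.

def allowable_stress(yield_mpa: float, ultimate_mpa: float,
                     sf_yield: float = 3.0, sf_ultimate: float = 4.0) -> float:
    """Return the governing (lowest) allowable stress in MPa."""
    return min(yield_mpa / sf_yield, ultimate_mpa / sf_ultimate)

# 6061-T6 aluminum, nominal properties: ultimate governs here.
print(round(allowable_stress(yield_mpa=276, ultimate_mpa=310), 1))  # -> 77.5
```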

  18. IPAD: A unique approach to government/industry cooperation for technology development and transfer

    NASA Technical Reports Server (NTRS)

    Fulton, Robert E.; Salley, George C.

    1985-01-01

    A key element to improved industry productivity is effective management of Computer Aided Design / Computer Aided Manufacturing (CAD/CAM) information. To stimulate advancement, a unique joint government/industry project designated Integrated Programs for Aerospace-Vehicle Design (IPAD) was carried out from 1971 to 1984. The goal was to raise aerospace industry productivity through advancement of computer based technology to integrate and manage information involved in the design and manufacturing process. IPAD research was guided by an Industry Technical Advisory Board (ITAB) composed of over 100 representatives from aerospace and computer companies. The project complemented traditional NASA/DOD research to develop aerospace design technology and the Air Force's Integrated Computer Aided Manufacturing (ICAM) program to advance CAM technology. IPAD had unprecedented industry support and involvement and served as a unique approach to government industry cooperation in the development and transfer of advanced technology. The IPAD project background, approach, accomplishments, industry involvement, technology transfer mechanisms and lessons learned are summarized.

  19. Management and development of local area network upgrade prototype

    NASA Technical Reports Server (NTRS)

    Fouser, T. J.

    1981-01-01

    Given the situation of having management and development users accessing a central computing facility and given the fact that these same users have the need for local computation and storage, the utilization of a commercially available networking system such as CP/NET from Digital Research provides the building blocks for communicating intelligent microsystems to file and print services. The major problems to be overcome in the implementation of such a network are the dearth of intelligent communication front-ends for the microcomputers and the lack of a rich set of management and software development tools.

  20. US Air Force Behavioral Health Optimization Program: team members' satisfaction and barriers to care.

    PubMed

    Landoll, Ryan R; Nielsen, Matthew K; Waggoner, Kathryn K

    2017-02-01

    Research has shown the significant contribution of integrated behavioural health care; however, less is known about the perceptions of primary care providers toward behavioural health professionals. The current study examined barriers to care and satisfaction with integrated behavioural health care from the perspective of primary care team members. This study utilized archival data from 42 treatment facilities as part of ongoing program evaluation of the Air Force Medical Service's Behavioral Health Optimization Program. This study was conducted in a large managed health care organization for active duty military and their families, with specific clinic settings that varied considerably with regard to geographic location, population diversity and size of patient empanelment. De-identified archival data on 534 primary care team members were examined. Team members at larger facilities rated access and acuity concerns as greater barriers than those from smaller facilities (t(533) = 2.57, P < 0.05). Primary Care Managers (PCMs) not only identified more barriers to integrated care (β = -0.07, P < 0.01) but also found services more helpful to the primary care team (t(362.52) = 1.97, P = 0.05). Barriers to care negatively impacted the perceived helpfulness of integrated care services for patients (β = -0.12, P < 0.01) and team members, particularly among non-PCMs (β = -0.11, P < 0.01). Findings highlight the potential benefits of targeted training that differs in facilities of larger empanelment and is mindful of team members' individual roles in a Patient Centered Medical Home. In particular, although generally few barriers were perceived, given the impact these barriers have on perception of care, efforts should be made to decrease perceived barriers to integrated behavioural health care among non-PCM team members.

  1. Computational fluid dynamics for propulsion technology: Geometric grid visualization in CFD-based propulsion technology research

    NASA Technical Reports Server (NTRS)

    Ziebarth, John P.; Meyer, Doug

    1992-01-01

    The coordination of the necessary resources, facilities, and special personnel to provide technical integration activities in the area of computational fluid dynamics applied to propulsion technology is examined. This involves the coordination of CFD activities among government, industry, and universities. Current geometry modeling, grid generation, and graphical methods are established for use in the analysis of CFD design methodologies.

  2. Functional Analysis and Preliminary Specifications for a Single Integrated Central Computer System for Secondary Schools and Junior Colleges. Interim Report.

    ERIC Educational Resources Information Center

    1968

    The present report proposes a central computing facility and presents the preliminary specifications for such a system. It is based, in part, on the results of earlier studies by two previous contractors on behalf of the U.S. Office of Education. The recommendations are based upon the present contractor's considered evaluation of the earlier…

  3. EPRI Guide to Managing Nuclear Utility Protective Clothing Programs. PCEVAL User's Manual, a computer code for evaluating the economics of nuclear plant protective clothing programs: Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, J.J.; Kelly, D.M.

    1993-10-01

    The Electric Power Research Institute (EPRI) commissioned a radioactive waste related project (RP2414-34) in 1989 to produce a guide for developing and managing nuclear plant protective clothing programs. Every nuclear facility must coordinate some type of protective clothing program for its radiation workers to ensure proper and safe protection for the wearer and to maintain control over the spread of contamination. Yet, every nuclear facility has developed its own unique program for managing such clothing. Accordingly, a need existed for a reference guide to assist with standardizing protective clothing programs and with controlling the potentially escalating economics of such programs. The initial Guide to Managing Nuclear Utility Protective Clothing Programs, NP-7309, was published in May 1991. Since that time, a number of utilities have reviewed and/or used the report to enhance their protective clothing programs. Some of these utilities requested that a computer program be developed to assist utilities in evaluating the economics of protective clothing programs consistent with the guidance in NP-7309. The PCEVAL computer code responds to that industry need. This report, the PCEVAL User's Manual, provides detailed instruction on use of the software.
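
    A hedged sketch of the reusable-versus-disposable comparison a code like PCEVAL supports. All costs and the laundering-cycle count are invented placeholders, not values from NP-7309.

```python
# Compare per-use cost of reusable versus disposable protective clothing.
# Every number below is a hypothetical placeholder for illustration.

def cost_per_use_reusable(purchase: float, launderings: int,
                          laundry_cost: float, disposal: float) -> float:
    uses = launderings + 1          # garment worn once per laundering cycle
    return (purchase + launderings * laundry_cost + disposal) / uses

def cost_per_use_disposable(unit_cost: float, radwaste_disposal: float) -> float:
    return unit_cost + radwaste_disposal

print(round(cost_per_use_reusable(60.0, 50, 2.5, 15.0), 2))   # -> 3.92
print(round(cost_per_use_disposable(4.0, 1.2), 2))            # -> 5.2
```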

  4. IPM for Schools: A How-To Manual.

    ERIC Educational Resources Information Center

    Daar, Sheila; Drlik, Tanya; Olkowski, Helga; Olkowski, William

    This report presents guidelines for developing an Integrated Pest Management (IPM) approach for educational facilities, and discusses the unique opportunities an IPM program can provide in the school science curriculum. This includes the hands-on experience IPM affords to students in the areas of biology, ecology, and least-toxic management of…

  5. POLLUTION PREVENTION OPPORTUNITY ASSESSMENT - U.S. POSTAL INSPECTION SERVICE FORENSIC & TECHNICAL SERVICES DIVISION - NATIONAL FORENSIC LABORATORY, DULLES, VIRGINIA

    EPA Science Inventory

    The United States Postal Service (USPS) in cooperation with EPA's National Risk Management Research Laboratory (NRMRL) is engaged in an effort to integrate waste prevention and recycling activities into the waste management programs at Postal facilities. This report describes the...

  6. Advanced Simulation and Computing Fiscal Year 14 Implementation Plan, Rev. 0.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meisner, Robert; McCoy, Michel; Archer, Bill

    2013-09-11

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Moreover, ASC's business model is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools.

  7. iTools: a framework for classification, categorization and integration of computational biology resources.

    PubMed

    Dinov, Ivo D; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H V; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D Stott; Toga, Arthur W

    2008-05-28

    The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management. We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu.
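
    To make the resource meta-data idea concrete, here is a minimal sketch of a typed resource record and a filtered catalog query. The field names and example entries are illustrative assumptions, not the actual iTools schema (which is documented at the project page above).

```python
# Minimal sketch of a resource meta-data record and a filtered query, in the
# spirit of the iTools repository. Fields and entries are invented examples.

from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    kind: str                      # "data", "software tool", or "web-service"
    biomedical_problems: set = field(default_factory=set)
    url: str = ""

catalog = [
    Resource("ExampleAligner", "software tool", {"sequence analysis"}),
    Resource("BrainAtlasData", "data", {"neuroimaging"}),
    Resource("GenePredictSvc", "web-service", {"sequence analysis"}),
]

def find(catalog, kind=None, problem=None):
    """Filter the resource catalog by type and/or biomedical problem."""
    return [r for r in catalog
            if (kind is None or r.kind == kind)
            and (problem is None or problem in r.biomedical_problems)]

for r in find(catalog, kind="software tool", problem="sequence analysis"):
    print(r.name)
```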

  8. iTools: A Framework for Classification, Categorization and Integration of Computational Biology Resources

    PubMed Central

    Dinov, Ivo D.; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H. V.; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D. Stott; Toga, Arthur W.

    2008-01-01

    The advancement of the computational biology field hinges on progress in three fundamental directions – the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources–data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management. We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu. PMID:18509477

  9. Electric Power Research Institute | Energy Systems Integration Facility |

    Science.gov Websites

    -10 megawatts of aggregated generation capacity. EPRI and Schneider Electric ...

  10. ELECTRICAL RESISTIVITY TECHNIQUE TO ASSESS THE INTEGRITY OF GEOMEMBRANE LINERS

    EPA Science Inventory

    Two-dimensional electrical modeling of a liner system was performed using computer techniques. The modeling effort examined the voltage distributions in cross sections of lined facilities with different leak locations. Results confirmed that leaks in the liner influenced voltage ...
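
    As a rough illustration of this kind of modeling, the sketch below relaxes a 2D voltage field over a cross section in which an insulating liner is breached at one cell; the voltage just below the liner then peaks near the leak. The geometry, boundary values, and laterally periodic edges are simplifying assumptions for brevity, not the study's actual model.

```python
import numpy as np

# Illustrative cross section: energized electrode at the surface, grounded
# return electrode at the bottom, an insulating liner across the middle with
# a single conductive gap (the leak).
n = 60
liner_row, leak_col = n // 2, 40
blocked = np.zeros((n, n), dtype=bool)
blocked[liner_row, :] = True
blocked[liner_row, leak_col] = False   # the leak: one conductive gap

def shifted(v, shift, axis):
    """Neighbor values; where the neighbor is liner, reflect (no current)."""
    return np.where(np.roll(blocked, shift, axis), v, np.roll(v, shift, axis))

v = np.zeros((n, n))
for _ in range(8000):                  # Jacobi relaxation of Laplace's equation
    v = 0.25 * (shifted(v, 1, 0) + shifted(v, -1, 0)
                + shifted(v, 1, 1) + shifted(v, -1, 1))
    v[0, :] = 100.0                    # energized surface electrode
    v[-1, :] = 0.0                     # grounded return electrode
    v[blocked] = 0.0                   # liner cells carry no field of interest

# Voltage sampled just below the liner peaks near the leak column (40):
print(np.round(v[liner_row + 1, ::10], 1))
```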

  11. Oak Ridge National Laboratory Core Competencies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberto, J.B.; Anderson, T.D.; Berven, B.A.

    1994-12-01

    A core competency is a distinguishing integration of capabilities which enables an organization to deliver mission results. Core competencies represent the collective learning of an organization and provide the capacity to perform present and future missions. Core competencies are distinguishing characteristics which offer comparative advantage and are difficult to reproduce. They exhibit customer focus, mission relevance, and vertical integration from research through applications. They are demonstrable by metrics such as level of investment, uniqueness of facilities and expertise, and national impact. The Oak Ridge National Laboratory (ORNL) has identified four core competencies which satisfy the above criteria. Each core competency represents an annual investment of at least $100M and is characterized by an integration of Laboratory technical foundations in physical, chemical, and materials sciences; biological, environmental, and social sciences; engineering sciences; and computational sciences and informatics. The ability to integrate broad technical foundations to develop and sustain core competencies in support of national R&D goals is a distinguishing strength of the national laboratories. The ORNL core competencies are: Energy Production and End-Use Technologies; Biological and Environmental Sciences and Technology; Advanced Materials Synthesis, Processing, and Characterization; and Neutron-Based Science and Technology. The distinguishing characteristics of each ORNL core competency are described. In addition, written material is provided for two emerging competencies: Manufacturing Technologies and Computational Science and Advanced Computing. Distinguishing institutional competencies in the Development and Operation of National Research Facilities, R&D Integration and Partnerships, Technology Transfer, and Science Education are also described. Finally, financial data for the ORNL core competencies are summarized in the appendices.

  1. Freeway management handbook

    DOT National Transportation Integrated Search

    1997-08-01

    This handbook, 1997 Freeway Management Handbook, is an update of the 1983 Freeway Management Handbook and reflects the tremendous developments in computing and communications technology. It also reflects the importance of Integrated Transportation Ma...

  2. Tele-Medicine Applications of an ISDN-Based Tele-Working Platform

    DTIC Science & Technology

    2001-10-25

    developed over the Hellenic Integrated Services Digital Network (ISDN), is based on user terminals (personal computers), networking apparatus, and a...key infrastructure, ready to offer enhanced message switching and translation in response to market trends [8]. Three (3) years ago, the Hellenic PTT...should outcome to both an integrated Tele-Working platform, a main central database (completed with maintenance facilities), and a ready-to-be

  3. Development of Integrated Programs for Aerospace-vechicle Design (IPAD). IPAD user requirements: Implementation (first-level IPAD)

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The requirements implementation strategy for first-level development of the Integrated Programs for Aerospace Vehicle Design (IPAD) computing system is presented. The capabilities of first-level IPAD are sufficient to demonstrate management of engineering data on two computers (CDC CYBER 170/720 and DEC VAX 11/780) using the IPAD system in a distributed network environment.

  4. Atmospheric concentrations of polybrominated diphenyl ethers at near-source sites.

    PubMed

    Cahill, Thomas M; Groskova, Danka; Charles, M Judith; Sanborn, James R; Denison, Michael S; Baker, Lynton

    2007-09-15

    Concentrations of polybrominated diphenyl ethers (PBDEs) were determined in air samples from near suspected sources, namely an indoor computer laboratory, indoors and outdoors at an electronics recycling facility, and outdoors at an automotive shredding and metal recycling facility. The results showed that (1) PBDE concentrations in the computer laboratory were higher with the computers on than with the computers off, (2) indoor concentrations at an electronics recycling facility were as high as 650,000 pg/m3 for decabromodiphenyl ether (PBDE 209), and (3) PBDE 209 concentrations were up to 1900 pg/m3 at the downwind fenceline at an automotive shredding/metal recycling facility. The inhalation exposure estimates for all the sites were typically below 110 pg/kg/day with the exception of the indoor air samples adjacent to the electronics shredding equipment, which gave exposure estimates upward of 40,000 pg/kg/day. Although there were elevated inhalation exposures at the three source sites, the exposure was not expected to cause adverse health effects based on the lowest reference dose (RfD) currently in the Integrated Risk Information System (IRIS), although these RfD values are currently being re-evaluated by the U.S. Environmental Protection Agency. More research is needed on the potential health effects of PBDEs.
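
    For context, the screening formula behind such inhalation exposure estimates is dose = air concentration x inhalation rate x exposure-time fraction / body weight. The sketch below applies it with generic default parameters (20 m3/day, 70 kg), which are assumptions rather than the paper's values, and lands in the same order of magnitude as the estimate quoted above.

```python
# Generic screening estimate (assumed parameters, not the paper's):
#   dose [pg/kg/day] = C_air [pg/m3] * inhalation [m3/day] * (hours/24) / kg

def inhalation_dose(c_air_pg_m3, hours_per_day,
                    inhalation_m3_per_day=20.0, body_weight_kg=70.0):
    return (c_air_pg_m3 * inhalation_m3_per_day
            * (hours_per_day / 24.0) / body_weight_kg)

# A worker spending 8 h/day at 650,000 pg/m3 near the electronics shredder:
print(f"{inhalation_dose(650_000, 8):,.0f} pg/kg/day")  # ~62,000 pg/kg/day
```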

  5. Solid waste management complex site development plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greager, T.M.

    1994-09-30

    The main purpose of this Solid Waste Management Complex Site Development Plan is to optimize the location of future solid waste treatment and storage facilities and the infrastructure required to support them. An overall site plan is recommended. Further, a series of layouts are included that depict site conditions as facilities are constructed at the SWMC site. In this respect the report serves not only as the siting basis for future projects, but provides siting guidance for Project W-112, as well. The plan is intended to function as a template for expected growth of the site over the next 30 years so that future facilities and infrastructure will be properly integrated.

  6. Material Protection, Accounting, and Control Technologies (MPACT) Advanced Integration Roadmap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Mike; Cipiti, Ben; Demuth, Scott Francis

    2017-01-30

    The development of sustainable advanced nuclear fuel cycles is a long-term goal of the Office of Nuclear Energy’s (DOE-NE) Fuel Cycle Technologies program. The Material Protection, Accounting, and Control Technologies (MPACT) campaign is supporting research and development (R&D) of advanced instrumentation, analysis tools, and integration methodologies to meet this goal (Miller, 2015). This advanced R&D is intended to facilitate safeguards and security by design of fuel cycle facilities. The lab-scale demonstration of a virtual facility (a distributed test bed) that connects the individual tools being developed at National Laboratories and university research establishments is a key program milestone for 2020. These tools will consist of instrumentation and devices as well as computer software for modeling, simulation and integration.

  7. Material Protection, Accounting, and Control Technologies (MPACT) Advanced Integration Roadmap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durkee, Joe W.; Cipiti, Ben; Demuth, Scott Francis

    The development of sustainable advanced nuclear fuel cycles is a long-term goal of the Office of Nuclear Energy’s (DOE-NE) Fuel Cycle Technologies program. The Material Protection, Accounting, and Control Technologies (MPACT) campaign is supporting research and development (R&D) of advanced instrumentation, analysis tools, and integration methodologies to meet this goal (Miller, 2015). This advanced R&D is intended to facilitate safeguards and security by design of fuel cycle facilities. The lab-scale demonstration of a virtual facility (a distributed test bed) that connects the individual tools being developed at National Laboratories and university research establishments is a key program milestone for 2020. These tools will consist of instrumentation and devices as well as computer software for modeling, simulation and integration.

  8. 2017 Joint Annual NDIA/AIA Industrial Security Committee Fall Conference

    DTIC Science & Technology

    2017-11-15

    beyond credit data to offer the insights that government professionals need to make informed decisions and ensure citizen safety, manage compliance... business that provides information technology and professional services. We specialize in managing business processes and systems integration for both... Acronyms: ... Information Security System; ISFD, Industrial Security Facilities Database; OBMS, ODAA Business Management System; STEPP, Security, Training, Education and ...

  9. A survey of the computer literacy of undergraduate dental students at a University Dental School in Ireland during the academic year 1997-98.

    PubMed

    Ray, N J; Hannigan, A

    1999-05-01

    As dental practice management becomes more computer-based, the efficient functioning of the dentist will become dependent on adequate computer literacy. A survey has been carried out into the computer literacy of a cohort of 140 undergraduate dental students at a University Dental School in Ireland (years 1-5), in the academic year 1997-98. Aspects investigated by anonymous questionnaire were: (1) keyboard skills; (2) computer skills; (3) access to computer facilities; (4) software competencies and (5) use of medical library computer facilities. The students are relatively unfamiliar with basic computer hardware and software: 51.1% considered their expertise with computers as "poor"; 34.3% had taken a formal typewriting or computer keyboarding course; 7.9% had taken a formal computer course at university level and 67.2% were without access to computer facilities at their term-time residences. A majority of students had never used either word-processing, spreadsheet, or graphics programs. Programs relating to "informatics" were more popular, such as literature searching, accessing the Internet and the use of e-mail, which represent the major use of the computers in the medical library. The lack of experience with computers may be addressed by including suitable computing courses at the secondary level (age 13-18 years) and/or tertiary level (FE/HE) education programmes. Such training may promote greater use of generic software, particularly in the library, with a more electronic-based approach to data handling.

  10. Real-Time Rocket/Vehicle System Integrated Health Management Laboratory For Development and Testing of Health Monitoring/Management Systems

    NASA Technical Reports Server (NTRS)

    Aguilar, R.

    2006-01-01

    Pratt & Whitney Rocketdyne has developed a real-time engine/vehicle system integrated health management laboratory, or testbed, for developing and testing health management system concepts. This laboratory simulates components of an integrated system such as the rocket engine, rocket engine controller, vehicle or test controller, as well as a health management computer on separate general purpose computers. These general purpose computers can be replaced with more realistic components such as actual electronic controllers and valve actuators for hardware-in-the-loop simulation. Various engine configurations and propellant combinations are available. Fault or failure insertion capability on-the-fly using direct memory insertion from a user console is used to test system detection and response. The laboratory is currently capable of simulating the flow-path of a single rocket engine but work is underway to include structural and multiengine simulation capability as well as a dedicated data acquisition system. The ultimate goal is to simulate as accurately and realistically as possible the environment in which the health management system will operate including noise, dynamic response of the engine/engine controller, sensor time delays, and asynchronous operation of the various components. The rationale for the laboratory is also discussed including limited alternatives for demonstrating the effectiveness and safety of a flight system.

  11. Reduced prevalence and severity of wounds following implementation of the Champions for Skin Integrity model to facilitate uptake of evidence-based practice in aged care.

    PubMed

    Edwards, Helen E; Chang, Anne M; Gibb, Michelle; Finlayson, Kathleen J; Parker, Christina; O'Reilly, Maria; McDowell, Jan; Shuter, Patricia

    2017-12-01

    To evaluate the implementation of the Champions for Skin Integrity model on facilitating uptake of evidence-based wound management and improving skin integrity in residents of aged care facilities. The incidence of skin tears, pressure injuries and leg ulcers increases with age, and such wounds can be a serious issue in aged care facilities. Older adults are not only at higher risk for wounds related to chronic disease but also injuries related to falls and manual handling requirements. A longitudinal, pre-post design. The Champions for Skin Integrity model was developed using evidence-based strategies for transfer of evidence into practice. Data were collected before and six months after implementation of the model. Data on wound management and skin integrity were obtained from two random samples of residents (n = 200 pre; n = 201 post) from seven aged care facilities. A staff survey was also undertaken (n = 126 pre; n = 143 post) of experience, knowledge and evidence-based wound management. Descriptive statistics were calculated for all variables. Where relevant, chi-square tests for independence or t-tests were used to identify differences between the pre- and post-implementation data. There was a significant decrease in the number of residents with a wound of any type (54% pre vs 43% post, χ² = 4.2, p = 0.041), as well as a significant reduction in specific wound types, for example pressure injuries (24% pre vs 10% post, χ² = 14.1, p < 0.001), following implementation of the model. An increase in implementation of evidence-based wound management and prevention strategies was observed in the postimplementation sample in comparison with the preimplementation sample. This included use of limb protectors and/or protective clothing (6% pre vs 20% post, χ² = 17.0, p < 0.001) and use of an emollient or soap alternative for bathing residents (50% pre vs 74% post, χ² = 13.9, p = 0.001). Implementation of the model in this sample fostered an increase in implementation of evidence-based wound management and prevention strategies, which was associated with a decrease in the prevalence and severity of wounds. This study suggests the Champions for Skin Integrity model has the potential to improve uptake of evidence-based wound management and improve skin integrity for older adults. © 2017 John Wiley & Sons Ltd.
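
    For readers who want to reproduce the style of these pre/post comparisons, the sketch below runs a 2x2 chi-square test on counts reconstructed from the rounded percentages reported (54% of 200 vs 43% of 201 residents with a wound). Because the counts are reconstructed, the statistic only approximates the published χ² = 4.2, p = 0.041.

```python
from scipy.stats import chi2_contingency

# Counts reconstructed from the rounded percentages in the abstract.
pre_wound, pre_total = round(0.54 * 200), 200    # 108 of 200
post_wound, post_total = round(0.43 * 201), 201  # 86 of 201

table = [[pre_wound, pre_total - pre_wound],
         [post_wound, post_total - post_wound]]

chi2, p, dof, expected = chi2_contingency(table)  # Yates-corrected for 2x2
print(f"chi-square = {chi2:.1f}, p = {p:.3f}")    # approximates 4.2, 0.041
```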

  12. Production Experiences with the Cray-Enabled TORQUE Resource Manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ezell, Matthew A; Maxwell, Don E; Beer, David

    High performance computing resources utilize batch systems to manage the user workload. Cray systems are uniquely different from typical clusters due to Cray's Application Level Placement Scheduler (ALPS). ALPS manages binary transfer, job launch and monitoring, and error handling. Batch systems require special support to integrate with ALPS using an XML protocol called BASIL. Previous versions of Adaptive Computing's TORQUE and Moab batch suite integrated with ALPS from within Moab, using Perl scripts to interface with BASIL. This would occasionally lead to problems when all the components became unsynchronized. Version 4.1 of the TORQUE Resource Manager introduced new features that allow it to directly integrate with ALPS using BASIL. This paper describes production experiences at Oak Ridge National Laboratory using the new TORQUE software versions, as well as ongoing and future work to improve TORQUE.
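
    To illustrate the shape of that integration, the sketch below composes a BASIL-style XML query and parses a node-inventory response. The element and attribute names are mocked up for illustration; they are not quoted from the authoritative BASIL specification.

```python
# Illustrative sketch of the kind of XML exchange BASIL involves: the batch
# system sends a request and parses the scheduler's inventory response.
# Element and attribute names below are assumptions, not the real schema.

import xml.etree.ElementTree as ET

request = '<BasilRequest protocol="1.0" method="QUERY" type="INVENTORY"/>'

# A response of the general shape an inventory query might return (mocked up):
response = """
<BasilResponse protocol="1.0">
  <Inventory>
    <NodeArray>
      <Node node_id="40" state="UP"/>
      <Node node_id="41" state="UP"/>
      <Node node_id="42" state="DOWN"/>
    </NodeArray>
  </Inventory>
</BasilResponse>
"""

print("sending:", request)
root = ET.fromstring(response)
up = [n.get("node_id") for n in root.iter("Node") if n.get("state") == "UP"]
print("schedulable nodes:", up)   # ['40', '41']
```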

  13. Securing PCs and Data in Libraries and Schools: A Handbook with Menuing, Anti-Virus, and Other Protective Software.

    ERIC Educational Resources Information Center

    Benson, Allen C.

    This handbook is designed to help readers identify and eliminate security risks, with sound recommendations and library-tested security software. Chapter 1 "Managing Your Facilities and Assessing Your Risks" addresses fundamental management responsibilities including planning for a secure system, organizing computer-related information, assessing…

  14. Application of Computer Assisted Energy Analysis Seminar (Pittsburgh, Pennsylvania, April 12-14, 1977).

    ERIC Educational Resources Information Center

    Association of Physical Plant Administrators of Universities and Colleges, Washington, DC.

    The intent of this seminar presentation was to demonstrate that with proper care in selecting and managing energy analysis programs, or in choosing commercial services to accomplish the same purposes, universities and colleges may derive significant benefits from efficient and economical use and management of their facilities. The workbook begins…

  15. Overview of the Integrated Programs for Aerospace Vehicle Design (IPAD) project

    NASA Technical Reports Server (NTRS)

    Venneri, S. L.

    1983-01-01

    To respond to national needs for improved productivity in engineering design and manufacturing, a NASA-supported joint industry/government project denoted Integrated Programs for Aerospace Vehicle Design (IPAD) is underway. The objective is to improve engineering productivity through better use of computer technology. It focuses on development of database management technology and associated software for integrated, company-wide management of engineering and manufacturing information. Results to date on the IPAD project include an in-depth documentation of a representative design process for a large engineering project, the definition and design of computer-aided design software needed to support that process, and the release of prototype software to manage engineering information. This paper provides an overview of the IPAD project and summarizes progress to date and future plans.

  16. Integration of prevention of mother-to-child HIV transmission into maternal health services in Senegal.

    PubMed

    Cisse, C

    2017-06-01

    The objective of this study was to assess the level of integration of prevention of mother-to-child HIV transmission (PMTCT) in facilities providing services for maternal, newborn, and child health (MNCH) and reproductive health (RH) in Senegal. The survey, conducted from August through November 2014, comprised five parts: a literature review to assess the place of this integration in the health policies, standards, and protocols in effect in Senegal; an analysis by direct observation of attitudes and practices of 25 healthcare providers at 5 randomly-selected obstetrics and gynecology departments representative of different levels of the health pyramid; a questionnaire evaluating knowledge and attitudes of 10 providers about the integration of PMTCT services into MNCH/RH facilities; interviews to collect the opinions of 70 clients, including 16 who were HIV-positive, about the quality of PMTCT services they received; and a questionnaire evaluating knowledge and opinions of 14 policy-makers/managers of health programs focusing on mothers and children about this integration. The literature review revealed several constraints impeding this integration: the policy documents, standards, and protocols of each of the programs involved do not clearly indicate the modalities of this integration; the programs are housed in two different divisions while the national Program against the Human Immunodeficiency Virus reports directly to the Prime Minister; program operations remain generally vertical; the resources for the different programs are not sufficiently shared; there is no integrated training module covering integrated management of pregnancy and delivery; and supervision for each of the different programs is organized separately. The observation of the providers supporting women during pregnancy, during childbirth, and in the postpartum period showed an effort to integrate PMTCT into the MNCH/RH services delivered daily to clients. But this desire is hampered by many problems, including the inconsistent availability of HIV testing and antiretroviral drugs at program sites and the deficit in training and supervision for PMTCT. Clients interviewed after their contact with providers often complained about the lack of information received about PMTCT. They considered that effective integration of these services would provide them with better quality care while reducing their waiting time, costs, and trips to health facilities. The responses of policymakers and program managers interviewed mostly revealed gaps in their understanding and implementation of the integration. There is an effort to integrate MNCH/RH and PMTCT services at the healthcare facilities we visited. But our investigation revealed many shortcomings in both the screening and support of new or expectant HIV+ mothers. To improve this situation it is necessary to improve the skills and motivation of PMTCT providers, enhance the level of equipment, and empower local maternity wards.

  17. Energy Systems Integration News | Energy Systems Integration Facility |

    Science.gov Websites

    capabilities, and new methodologies that allowed NREL to model operations of the Eastern Interconnection ...

  18. Industry Day Workshops | Energy Systems Integration Facility | NREL

    Science.gov Websites

    , 2017: Siemens-OMNETRIC Industry Day. OMNETRIC Group demonstrated a distributed control hierarchy, based ... Grid Edge Communications and Control Utilizing an OpenFMB ... Presenters included Murali Baggu (Manager, Power Systems Operations and Control Group, NREL) and Santosh Veda (Research ...

  19. Tier2 Submit Software

    EPA Pesticide Factsheets

    Download this tool for Windows or Mac, which helps facilities prepare a Tier II electronic chemical inventory report. The data can also be exported into the CAMEOfm (Computer-Aided Management of Emergency Operations) emergency planning software.

  20. OpenTopography: Addressing Big Data Challenges Using Cloud Computing, HPC, and Data Analytics

    NASA Astrophysics Data System (ADS)

    Crosby, C. J.; Nandigam, V.; Phan, M.; Youn, C.; Baru, C.; Arrowsmith, R.

    2014-12-01

    OpenTopography (OT) is a geoinformatics-based data facility initiated in 2009 for democratizing access to high-resolution topographic data, derived products, and tools. Hosted at the San Diego Supercomputer Center (SDSC), OT utilizes cyberinfrastructure, including large-scale data management, high-performance computing, and service-oriented architectures to provide efficient Web-based access to large, high-resolution topographic datasets. OT collocates data with processing tools to enable users to quickly access custom data and derived products for their application. OT's ongoing R&D efforts aim to solve emerging technical challenges associated with exponential growth in data, higher order data products, as well as user base. Optimization of data management strategies can be informed by a comprehensive set of OT user access metrics that allows us to better understand usage patterns with respect to the data. By analyzing the spatiotemporal access patterns within the datasets, we can map areas of the data archive that are highly active (hot) versus the ones that are rarely accessed (cold). This enables us to architect a tiered storage environment consisting of high performance disk storage (SSD) for the hot areas and less expensive slower disk for the cold ones, thereby optimizing price-to-performance. From a compute perspective, OT is looking at cloud-based solutions such as the Microsoft Azure platform to handle sudden increases in load. An OT virtual machine image in Microsoft's VM Depot can be invoked and deployed quickly in response to increased system demand. OT has also integrated SDSC HPC systems like the Gordon supercomputer into our infrastructure tier to enable compute intensive workloads like parallel computation of hydrologic routing on high resolution topography. This capability also allows OT to scale to HPC resources during high loads to meet user demand and provide more efficient processing. With a growing user base and maturing scientific user community comes new requests for algorithms and processing capabilities. To address this demand, OT is developing an extensible service-based architecture for integrating community-developed software. This "pluggable" approach to Web service deployment will enable new processing and analysis tools to run collocated with OT hosted data.
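
    A toy version of the hot/cold tiering decision described above might look like the following; the access log, threshold, and tier names are invented for illustration.

```python
# Sketch of access-driven storage tiering: count hits per dataset region over
# a review period and place frequently accessed regions on fast (SSD) storage.

from collections import Counter

access_log = ["ds1/tileA", "ds1/tileA", "ds1/tileB", "ds2/tileX",
              "ds1/tileA", "ds2/tileX", "ds3/tileQ"]

counts = Counter(access_log)
HOT_THRESHOLD = 2   # accesses per review period; tunable assumption

tiers = {tile: ("ssd" if n >= HOT_THRESHOLD else "disk")
         for tile, n in counts.items()}
print(tiers)   # e.g. {'ds1/tileA': 'ssd', 'ds2/tileX': 'ssd', ...}
```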

  1. Research on Key Technologies of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Zhang, Shufen; Yan, Hongcan; Chen, Xuebin

    With the development of multi-core processors, virtualization, distributed storage, broadband Internet, and automatic management, a new computing mode named cloud computing has emerged. It distributes computation tasks across a resource pool consisting of massive numbers of computers, so that application systems can obtain computing power, storage space, and software services according to demand. It concentrates all the computing resources and manages them automatically through software, without human intervention. This frees application providers from tedious details and lets them concentrate on their business, which is advantageous to innovation and reduces cost. The ultimate goal of cloud computing is to provide calculation, services, and applications as a public facility, so that people can use computer resources just as they use water, electricity, gas, and the telephone. Currently, the understanding of cloud computing is developing and changing constantly, and cloud computing still has no unanimous definition. This paper describes the three main service forms of cloud computing: SaaS, PaaS, and IaaS; compares the definitions of cloud computing given by Google, Amazon, IBM, and other companies; summarizes the basic characteristics of cloud computing; and emphasizes key technologies such as data storage, data management, virtualization, and the programming model.
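
    The "programming model" highlighted as a key technology is typified by MapReduce; below is a minimal single-process sketch of the idea (real systems distribute the map and reduce phases across the resource pool).

```python
# Minimal MapReduce-style word count: a map phase emitting (key, 1) pairs and
# a reduce phase summing values per key, here run in a single process.

from collections import defaultdict

def map_phase(docs):
    for doc in docs:
        for word in doc.split():
            yield word, 1

def reduce_phase(pairs):
    acc = defaultdict(int)
    for key, value in pairs:
        acc[key] += value
    return dict(acc)

print(reduce_phase(map_phase(["cloud computing", "cloud storage"])))
# {'cloud': 2, 'computing': 1, 'storage': 1}
```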

  2. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    PubMed

    Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.
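
    Underneath a front-end like JMS sits this kind of submit-and-poll loop against the cluster resource manager. The sketch below drives the TORQUE qsub/qstat command-line tools; the wrapper code and the script name are illustrative assumptions, not JMS's actual implementation.

```python
# Generic batch-job wrapper: submit a script via qsub, then poll qstat until
# the job leaves the queue. Assumes a TORQUE environment is available.

import subprocess
import time

def submit(script_path):
    """Submit a batch script; qsub prints the new job id on stdout."""
    out = subprocess.run(["qsub", script_path],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def wait(job_id, poll_seconds=30):
    """Poll qstat until the job no longer appears in the queue."""
    while True:
        q = subprocess.run(["qstat", job_id], capture_output=True, text=True)
        if q.returncode != 0:       # job has left the queue
            return
        time.sleep(poll_seconds)

job = submit("pipeline_stage1.sh")   # hypothetical workflow stage
wait(job)
print(f"{job} finished; collect results for the next stage")
```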

  3. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing

    PubMed Central

    Brown, David K.; Penkler, David L.; Musyoka, Thommas M.; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS. PMID:26280450

  4. National meeting to review IPAD status and goals. [Integrated Programs for Aerospace-vehicle Design

    NASA Technical Reports Server (NTRS)

    Fulton, R. E.

    1980-01-01

    A joint NASA/industry project called Integrated Programs for Aerospace-vehicle Design (IPAD) is described, which has the goal of raising aerospace-industry productivity through the application of computers to integrate company-wide management of engineering data. Basically a general-purpose interactive computing system developed to support engineering design processes, the IPAD design is composed of three major software components: the executive, data management, and geometry and graphics software. Results of IPAD activities include a comprehensive description of a future representative aerospace vehicle design process and its interface to manufacturing, and requirements and preliminary design of a future IPAD software system to integrate engineering activities of an aerospace company having several products under simultaneous development.

  5. KSC-02pd0695

    NASA Image and Video Library

    2002-05-15

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, FAA Administrator Patti Smith (second from left) listens to Jim Halsell (right), manager of KSC's Space Shuttle Program Launch Integration, during a tour of KSC.

  6. Theoretical and computational foundations of management class simulation

    Treesearch

    Denie Gerold

    1978-01-01

    Investigations on complicated, complex, and not well-ordered systems are possible only with the aid of mathematical methods and electronic data processing. Simulation as a method of operations research is particularly suitable for this purpose. Theoretical and computational foundations of management class simulation must be integrated into the planning systems of...

  7. Developing mobile- and BIM-based integrated visual facility maintenance management system.

    PubMed

    Lin, Yu-Cheng; Su, Yu-Chih

    2013-01-01

    Facility maintenance management (FMM) has become an important topic for research on the operation phase of the construction life cycle. Managing FMM effectively is extremely difficult owing to various factors and environments. One of the difficulties is the limited ability of 2D graphics to depict maintenance services. Building information modeling (BIM) uses precise geometry and relevant data to support the maintenance service of facilities depicted in 3D object-oriented CAD. This paper proposes a new and practical methodology with application to FMM using BIM technology. Using BIM technology, this study proposes a BIM-based facility maintenance management (BIMFMM) system for maintenance staff in the operation and maintenance phase. The BIMFMM system is then applied in a selected case study of a commercial building project in Taiwan to verify the proposed methodology and demonstrate its effectiveness in FMM practice. Using the BIMFMM system, maintenance staff can access and review 3D BIM models for updating related maintenance records in a digital format. Moreover, this study presents a generic system architecture and its implementation. The combined results demonstrate that a BIMFMM-like system can be an effective visual FMM tool.
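
    The core data linkage in such a system is keying maintenance records to BIM element identifiers so staff can pull history from the 3D model. A minimal sketch follows, with an invented GUID and invented fields rather than the paper's actual schema.

```python
# Sketch of linking maintenance records to BIM element ids. The GUID string
# and record fields are invented examples.

from dataclasses import dataclass
from datetime import date

@dataclass
class MaintenanceRecord:
    bim_element_guid: str     # identifier of the element in the BIM model
    performed_on: date
    description: str

records = [
    MaintenanceRecord("2O2Fr$t4X7Zf8NOew3FLOH", date(2012, 3, 2),
                      "Replaced AHU-3 supply fan belt"),
    MaintenanceRecord("2O2Fr$t4X7Zf8NOew3FLOH", date(2012, 9, 14),
                      "Quarterly filter change"),
]

def history(guid, records):
    """All maintenance performed on one BIM element, newest first."""
    return sorted((r for r in records if r.bim_element_guid == guid),
                  key=lambda r: r.performed_on, reverse=True)

for r in history("2O2Fr$t4X7Zf8NOew3FLOH", records):
    print(r.performed_on, r.description)
```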

  8. Fully integrated automated security surveillance system: managing a changing world through managed technology and product applications

    NASA Astrophysics Data System (ADS)

    Francisco, Glen; Brown, Todd

    2012-06-01

    Integrated security systems are essential to pre-empting criminal assaults. Nearly 500,000 sites have been identified (source: US DHS) as critical infrastructure sites that would suffer severe damage if a security breach should occur. One major breach in any of 123 U.S. facilities, identified as "most critical", threatens more than 1,000,000 people. The vulnerabilities of critical infrastructure are expected to continue and even heighten over the coming years.

  9. Institutional Transformation Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-10-19

    Reducing the energy consumption of large institutions with dozens to hundreds of existing buildings, while maintaining and improving existing infrastructure, is a critical economic and environmental challenge. Sandia National Laboratories' (SNL) Institutional Transformation (IX) work integrates facilities and infrastructure sustainability technology capabilities with collaborative decision-support modeling approaches to help SNL facilities managers simulate different future energy reduction strategies and meet long-term energy conservation goals.
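
    A toy version of the scenario comparison such decision-support modeling enables is sketched below; the baseline consumption and savings fractions are invented for illustration.

```python
# Compare projected campus energy use under alternative retrofit strategies.
# Baseline and savings rates are hypothetical assumptions.

baseline_mwh = 300_000           # assumed annual campus consumption
strategies = {                   # assumed fraction of load saved per strategy
    "lighting retrofit": 0.04,
    "HVAC recommissioning": 0.07,
    "building envelope upgrades": 0.05,
}

for name, saving in strategies.items():
    print(f"{name}: {baseline_mwh * (1 - saving):,.0f} MWh/yr")

combined = baseline_mwh
for saving in strategies.values():   # apply multiplicatively, not additively
    combined *= 1 - saving
print(f"all three combined: {combined:,.0f} MWh/yr")
```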

  10. Web Policies

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  11. Research Opportunities

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  12. Business opportunities

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  13. Emergency Communication

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  14. Civilian Nuclear Program

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  15. Radical Supercomputing

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  16. Media Contacts

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  17. Capabilities: Science Pillars

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  18. Social Media

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  19. Location and Infrastructure

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  20. Dual Career Services

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  1. Science Briefs

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  2. Teachers (K-12)

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  3. Career Videos

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  4. Students (K-12)

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  5. About Us

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  6. Energy Sustainability

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  7. Energy Security Solutions

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  8. Reusing Water

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  9. Community Leaders Survey

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  10. Green Purchasing

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  11. Mission, Vision, Values

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  12. News Releases

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  13. Office of Science

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  14. Regional Education Partners

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  15. Invoicing, Payments Info

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  16. Obeying Environmental Laws

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  17. Education Office Housing

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  18. Looking inside plutonium

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  19. Community Videos

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  20. Cultural Preservation

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  1. Speakers Bureau

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  2. Copyright, Legal

    Science.gov Websites

    Los Alamos National Laboratory Search Site submit About Mission Business Newsroom Publications Los Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale Computing Project (ECP) User Facilities Center for Integrated Nanotechnologies (CINT) Los Alamos Neutron

  3. Protecting Wildlife

    Science.gov Websites

  4. Community Feature Stories

    Science.gov Websites

  5. Lab Organizations

    Science.gov Websites

  6. Economic Development

    Science.gov Websites

  7. Higher Education

    Science.gov Websites

  8. Leadership, Governance

    Science.gov Websites

  9. Quantum Institute

    Science.gov Websites

  10. STEM Education Programs

    Science.gov Websites

  11. October 2015

    Science.gov Websites

  12. LANL Contacts

    Science.gov Websites

  13. Applied Energy Program

    Science.gov Websites

  14. STEM Education

    Science.gov Websites

  15. Bradbury Science Museum

    Science.gov Websites

  16. Our History

    Science.gov Websites

  17. Travel Reimbursement

    Science.gov Websites

  18. Operational Excellence

    Science.gov Websites

  19. Management of CAD/CAM information: Key to improved manufacturing productivity

    NASA Technical Reports Server (NTRS)

    Fulton, R. E.; Brainin, J.

    1984-01-01

    A key element of improved industry productivity is effective management of CAD/CAM information. To stimulate advancements in this area, a joint NASA/Navy/industry project designated Integrated Programs for Aerospace-Vehicle Design (IPAD) is underway, with the goal of raising aerospace industry productivity through advancement of technology to integrate and manage the information involved in the design and manufacturing process. The project complements traditional NASA/DOD research to develop aerospace design technology and the Air Force's Integrated Computer-Aided Manufacturing (ICAM) program to advance CAM technology. IPAD research is guided by an Industry Technical Advisory Board (ITAB) composed of over 100 representatives from aerospace and computer companies.

  20. Enforcing compatibility and constraint conditions and information retrieval at the design action

    NASA Technical Reports Server (NTRS)

    Woodruff, George W.

    1990-01-01

    The design of complex entities is a multidisciplinary process involving several interacting groups and disciplines. There is a need to integrate the data in such environments to enhance collaboration between these groups and to enforce compatibility between dependent data entities. This paper discusses the implementation of a workstation-based CAD system that is integrated with a DBMS and an expert system (CLIPS), both implemented on a minicomputer, to provide such collaboration and compatibility-enforcement capabilities. The current implementation allows for a three-way link between the CAD system, the DBMS, and CLIPS. The engineering design process associated with the design and fabrication of sheet metal housings for computers in a large computer manufacturing facility provides the basis for this prototype system.
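
    The abstract gives no implementation details, but the core compatibility-enforcement idea - flagging dependent design entities for re-validation whenever an upstream entity changes - can be sketched in a few lines. The Python below is a minimal illustration; the class and entity names are invented, not taken from the paper.

        # Minimal sketch of compatibility enforcement between dependent design
        # entities; DesignDB and the entity names are illustrative only.
        class DesignDB:
            def __init__(self):
                self.values = {}      # entity -> current value
                self.dependents = {}  # entity -> entities derived from it
                self.stale = set()    # entities needing re-validation

            def depends_on(self, child, parent):
                self.dependents.setdefault(parent, set()).add(child)

            def update(self, entity, value):
                self.values[entity] = value
                self._mark_stale(entity)   # dependents must be re-checked

            def _mark_stale(self, entity):
                for child in self.dependents.get(entity, ()):
                    if child not in self.stale:
                        self.stale.add(child)
                        self._mark_stale(child)  # propagate transitively

        db = DesignDB()
        db.depends_on("flat_pattern", "sheet_metal_housing")
        db.depends_on("nc_program", "flat_pattern")
        db.update("sheet_metal_housing", {"thickness_mm": 1.5})
        print(db.stale)  # {'flat_pattern', 'nc_program'}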

  1. Applications integration in a hybrid cloud computing environment: modelling and platform

    NASA Astrophysics Data System (ADS)

    Li, Qing; Wang, Ze-yuan; Li, Wei-hua; Li, Jun; Wang, Cheng; Du, Rui-yang

    2013-08-01

    With the development of application service providers and cloud computing, more and more small- and medium-sized business enterprises use software services and even infrastructure services provided by professional information service companies to replace all or part of their information systems (ISs). These information service companies provide applications, such as data storage, computing processes, document sharing and even management information system services, as public resources to support the business process management of their customers. However, no cloud computing service vendor can satisfy the full functional IS requirements of an enterprise. As a result, enterprises often have to simultaneously use systems distributed across different clouds alongside their intra-enterprise ISs. Thus, this article presents a framework to integrate applications deployed in public clouds with intra-enterprise ISs. A run-time platform is developed, and a cross-computing-environment process modelling technique is also developed, to improve the feasibility of ISs under hybrid cloud computing environments.
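
    As a rough illustration of the integration problem the article addresses, the sketch below hides an intra-enterprise system and a public-cloud service behind one interface so a business process step can run against either. All class and endpoint names are assumptions for illustration; the article's actual platform is not described at this level of detail.

        # A common interface over an in-house IS and a public-cloud service;
        # every class and endpoint here is hypothetical.
        from abc import ABC, abstractmethod

        class DocumentStore(ABC):
            @abstractmethod
            def put(self, doc_id: str, body: bytes) -> None: ...

        class InHouseStore(DocumentStore):
            """Intra-enterprise IS, reduced to an in-memory stand-in."""
            def __init__(self):
                self._docs = {}
            def put(self, doc_id, body):
                self._docs[doc_id] = body

        class CloudStore(DocumentStore):
            """Public-cloud service reached over HTTP (endpoint invented)."""
            def __init__(self, base_url):
                self.base_url = base_url
            def put(self, doc_id, body):
                raise NotImplementedError("wire to the vendor's real API")

        def archive_step(outputs, store: DocumentStore):
            """One process step writes through whichever store is deployed."""
            for doc_id, body in outputs.items():
                store.put(doc_id, body)

        archive_step({"invoice-001": b"..."}, InHouseStore())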

  2. Chemical Safety Alert: Safer Technology and Alternatives

    EPA Pesticide Factsheets

    This alert introduces safer technology concepts and general approaches, explains the underlying principles, and gives brief examples of integrating safer technologies into facility risk management activities.

  3. The Hawaiian Electric Companies | Energy Systems Integration Facility |

    Science.gov Websites

    Verification of Voltage Regulation Operating Strategies: NREL has studied how the Hawaiian Electric Companies can best manage voltage regulation functions from distributed technologies.

  4. Integrating repositories with fuel cycles: The airport authority model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forsberg, C.

    2012-07-01

    The organization of the fuel cycle is a legacy of World War II and the Cold War. Fuel cycle facilities were developed and deployed without consideration of the waste management implications. This led to the fuel cycle model of a geological repository site with a single owner, a single function (disposal), and no other facilities on site. Recent studies indicate large economic, safety, repository performance, nonproliferation, and institutional incentives to collocate and integrate all back-end facilities. Site functions could include geological disposal of spent nuclear fuel (SNF) with the option for future retrievability, disposal of other wastes, reprocessing with fuel fabrication, radioisotope production, other facilities that generate significant radioactive wastes, SNF inspection (navy and commercial), and related services such as SNF safeguards equipment testing and training. This implies a site with multiple facilities whose different owners share some facilities and use common facilities - the repository and SNF receiving - and it requires a different repository site institutional structure. We propose the development of repository site authorities modeled after airport authorities. Airport authorities manage airports with government-owned runways, collocated or shared public and private airline terminals, commercial and federal military facilities, aircraft maintenance bases, and related operations - all enabled by, and benefiting from, the high-value runway asset and access to it via taxiways. With a repository site authority, the high-value asset is the repository. The SNF and HLW receiving and storage facilities (equivalent to the airport terminal) serve the repository, any future reprocessing plants, and others with needs for access to SNF and other wastes. Non-public special-built roadways and on-site rail lines (equivalent to taxiways) connect facilities. Airport authorities are typically chartered by state governments and managed by commissions with members appointed by the state governor, county governments, and city governments. This structure (1) enables state and local governments to work together to maximize job and tax benefits to local communities and the state, (2) provides a mechanism to address local concerns such as airport noise, and (3) creates an institutional structure with large incentives to maximize the value of the common asset, the runway. A repository site authority would have a similar structure and be the local interface to any national waste management authority. (authors)

  5. Microvax-based data management and reduction system for the regional planetary image facilities

    NASA Technical Reports Server (NTRS)

    Arvidson, R.; Guinness, E.; Slavney, S.; Weiss, B.

    1987-01-01

    Presented is a progress report for the Regional Planetary Image Facilities (RPIF) prototype image data management and reduction system being jointly implemented by Washington University and the USGS, Flagstaff. The system will consist of a MicroVAX with a high-capacity (approximately 300 megabyte) disk drive, a compact disk player, an image display buffer, a videodisk player, USGS image processing software, and SYSTEM 1032 - a commercial relational database management package. The USGS, Flagstaff, will transfer their image processing software, including radiometric and geometric calibration routines, to the MicroVAX environment. Washington University will have primary responsibility for developing the database management aspects of the system and for integrating the various aspects into a working system.
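
    The abstract names SYSTEM 1032 as the relational package; as a present-day stand-in, the same image-cataloguing idea can be sketched with sqlite3. The schema and rows below are invented for illustration and are not the RPIF schema.

        # The relational-catalog idea, with sqlite3 standing in for the
        # SYSTEM 1032 package named in the abstract; schema is invented.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE images (
            image_id INTEGER PRIMARY KEY,
            mission TEXT, target TEXT, filter TEXT,
            lat_deg REAL, lon_deg REAL,
            media TEXT)""")  # media: e.g. 'videodisk' or 'disk'
        conn.executemany("INSERT INTO images VALUES (?,?,?,?,?,?,?)",
                         [(1, "Viking", "Mars", "RED", 22.5, 48.0, "videodisk"),
                          (2, "Voyager", "Io", "CLEAR", -5.0, 155.0, "disk")])

        # A typical facility query: all Mars frames in a latitude band.
        rows = conn.execute("""SELECT image_id, filter, media FROM images
                               WHERE target = 'Mars'
                               AND lat_deg BETWEEN 0 AND 45""").fetchall()
        print(rows)  # [(1, 'RED', 'videodisk')]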

  6. CFD Simulations of the IHF Arc-Jet Flow: Compression-Pad/Separation Bolt Wedge Tests

    NASA Technical Reports Server (NTRS)

    Gokcen, Tahir; Skokova, Kristina A.

    2017-01-01

    This paper reports computational analyses in support of two wedge tests in a high enthalpy arc-jet facility at NASA Ames Research Center. These tests were conducted using two different wedge models, each placed in a free jet downstream of a corresponding different conical nozzle in the Ames 60-MW Interaction Heating Facility. Panel test articles included a metallic separation bolt embedded in the compression-pad and heat shield materials, resulting in a circular protuberance over a flat plate. As part of the test calibration runs, surface pressure and heat flux measurements on water-cooled calibration plates integrated with the wedge models were also obtained. Surface heating distributions on the test articles as well as arc-jet test environment parameters for each test configuration are obtained through computational fluid dynamics simulations, consistent with the facility and calibration measurements. The present analysis comprises simulations of the non-equilibrium flow field in the facility nozzle, test box, and flow field over the test articles, and comparisons with the measured calibration data.

  7. Fusion interfaces for tactical environments: An application of virtual reality technology

    NASA Technical Reports Server (NTRS)

    Haas, Michael W.

    1994-01-01

    The term Fusion Interface is defined as a class of interface that integrally incorporates both virtual and nonvirtual concepts and devices across the visual, auditory, and haptic sensory modalities. A fusion interface is a multisensory, virtually augmented synthetic environment. A new facility dedicated to exploratory development of fusion interface concepts has been developed within the Human Engineering Division of the Armstrong Laboratory. This new facility, the Fusion Interfaces for Tactical Environments (FITE) Facility, is a specialized flight simulator enabling efficient concept development through rapid prototyping and direct experience of new fusion concepts. The FITE Facility also supports evaluation of fusion concepts by operational fighter pilots in an air combat environment. The facility is utilized by a multidisciplinary design team composed of human factors engineers, electronics engineers, computer scientists, experimental psychologists, and operational pilots. The FITE computational architecture is composed of twenty-five 80486-based microcomputers operating in real time. The microcomputers generate out-the-window visuals, in-cockpit and head-mounted visuals, localized auditory presentations, and haptic displays on the stick and rudder pedals, as well as executing weapons models, aerodynamic models, and threat models.

  8. The FIFE Project at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Box, D.; Boyd, J.; Di Benedetto, V.

    2016-01-01

    The FabrIc for Frontier Experiments (FIFE) project is an initiative within the Fermilab Scientific Computing Division designed to steer the computing model for non-LHC Fermilab experiments across multiple physics areas. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying size, needs, and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of solutions for high throughput computing, data management, database access and collaboration management within an experiment. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid compute sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating into experiment computing operations several services including a common job submission service, software and reference data distribution through CVMFS repositories, flexible and robust data transfer clients, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken the leading role in defining the computing model for Fermilab experiments, aided in the design of experiments beyond those hosted at Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.

  9. An Integrated Decision Support System for Planning and Measuring Institutional Efficiency. AIR 1992 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Minnaar, Phil C.

    This paper presents a model for obtaining and organizing management information for decision making in university planning, developed by the Bureau for Management Information of the University of South Africa. The model identifies the fundamental entities of the university as environment, finance, physical facilities, assets, personnel, and…

  10. Development of Onboard Computer Complex for Russian Segment of ISS

    NASA Technical Reports Server (NTRS)

    Branets, V.; Brand, G.; Vlasov, R.; Graf, I.; Clubb, J.; Mikrin, E.; Samitov, R.

    1998-01-01

    This report presents a description of the Onboard Computer Complex (CC) that was developed during 1994-1998 for the Russian Segment of the ISS. The system was developed in cooperation with NASA and ESA. ESA developed a new computation system under the RSC Energia Technical Assignment, called DMS-R. The CC also includes elements developed by Russian experts and organizations. A general architecture of the computer system and the characteristics of its primary elements are described. The system was integrated at RSC Energia with the participation of American and European specialists. The report contains information on software simulators and verification and debugging facilities which were developed for both stand-alone and integrated tests and verification. This CC serves as the basis for the Russian Segment Onboard Control Complex on the ISS.

  11. SDN-NGenIA, a software defined next generation integrated architecture for HEP and data intensive science

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Hendricks, T. W.; Kcira, D.; Mughal, A.; Newman, H.; Spiropulu, M.; Vlimant, J. R.

    2017-10-01

    The SDN Next Generation Integrated Architecture (SDN-NGenIA) project addresses some of the key challenges facing the present and next generations of science programs in HEP, astrophysics, and other fields, whose potential discoveries depend on their ability to distribute, process, and analyze globally distributed petascale to exascale datasets. The SDN-NGenIA system under development by Caltech and partner HEP and network teams focuses on the coordinated use of network, computing, and storage infrastructures. Its developments build on experience from previous and recently completed projects that use dynamic circuits with bandwidth guarantees to support major network flows, as demonstrated across the LHC Open Network Environment [1] and in large-scale demonstrations over the last three years, and recently integrated with the PhEDEx and Asynchronous Stage Out data management applications of the CMS experiment at the Large Hadron Collider. In addition to the general program goal of supporting the network needs of the LHC and other science programs with similar needs, a recent focus is the use of the leadership HPC facility at Argonne National Lab (ALCF) for data-intensive applications.

  12. Long-Term Preservation and Advanced Access Services to Archived Data: The Approach of a System Integrator

    NASA Astrophysics Data System (ADS)

    Petitjean, Gilles; de Hauteclocque, Bertrand

    2004-06-01

    EADS Defence and Security Systems (EADS DS SA) has developed expertise as an integrator of archive management systems for both commercial and defence customers (ESA, CNES, EC, EUMETSAT, French MOD, US DOD, etc.), especially in the Earth observation and meteorology fields. The concern of valuable data owners is both the long-term preservation of their data and the integration of the archive into their information system, in particular with efficient access to archived data for their user community. The system integrator meets this requirement with a methodology combining an understanding of user needs, exhaustive knowledge of existing hardware and software solutions, and development and integration ability. The system integrator completes the facility development with support activities. The long-term preservation of archived data obviously involves a pertinent selection of storage media and archive library. This selection relies on a storage technology survey, but the selection criteria depend on the analysis of user needs. The system integrator will recommend the best compromise for implementing an archive management facility, thanks to its knowledge of and independence from the storage market and through analysis of the user requirements, and will provide a solution able to evolve to take advantage of storage technology progress. But preserving data for the long term is not only a question of storage technology. Some functions are required to secure the archive management system against contingency situations: multiple data set copies maintained using operational procedures, active quality control of the archived data, and a migration policy optimising the cost of ownership.
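
    One of the contingency functions listed - active quality control over multiple data set copies - reduces to comparing fixed digests across replicas. A minimal sketch, assuming locally readable copies and hypothetical paths:

        # Active quality control over replicated archive copies by comparing
        # digests; the paths and re-copy policy are assumptions.
        import hashlib
        from pathlib import Path

        def sha256(path):
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        def copies_consistent(copies):
            """True only if every replica exists and all carry one digest."""
            if not all(p.exists() for p in copies):
                return False
            return len({sha256(p) for p in copies}) == 1

        # if not copies_consistent([Path("/archive/a/ds1"),
        #                           Path("/archive/b/ds1")]):
        #     ...schedule a re-copy from a replica that still verifies...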

  13. The flight robotics laboratory

    NASA Technical Reports Server (NTRS)

    Tobbe, Patrick A.; Williamson, Marlin J.; Glaese, John R.

    1988-01-01

    The Flight Robotics Laboratory of the Marshall Space Flight Center is described in detail. This facility, containing an eight-degree-of-freedom manipulator, precision air-bearing floor, teleoperated motion base, reconfigurable operator's console, and VAX 11/750 computer system, provides simulation capability to study human/system interactions of remote systems. The facility hardware and software, and the subsequent integration of these components into a real-time man-in-the-loop simulation for the evaluation of spacecraft contact proximity and dynamics, are described.

  14. The drainage information and control system of smart city

    NASA Astrophysics Data System (ADS)

    Mao, Tonglei; Li, Lei; Liu, JiChang; Cheng, Liang; Zhang, Jing; Song, Zengzhong; Liu, Lianhai; Hu, Zichen

    2018-03-01

    As cities continue to expand and municipal drainage facilities multiply, management and operations staff are in increasingly short supply, and the existing production management model can no longer meet the new requirements. This paper introduces a WebGIS-based information management system for smart-city water planning and design in Linyi, built around the differing business requirements of river drainage management, flood control, water management, auditing, and administrative licensing. The system gathers information from gate dams, water pumps, bridge sensors, traffic guidance terminals, and other nodes into one place. Practical application shows that the system not only enables sharing, resource integration, and collaborative use of regional water information, but also improves the level of integrated water management.
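
    The paper does not publish its data model, but the collection step it describes - pooling readings from heterogeneous nodes such as gate dams, pumps, and bridge sensors into one shared record stream - might look like the following sketch; the node types and field names are invented.

        # Pooling readings from heterogeneous drainage-network nodes into one
        # shared record shape; node types and fields are invented.
        import json, time

        def normalize(node_type, node_id, payload):
            """Map a device's native payload onto the shared record shape."""
            return {
                "ts": time.time(),
                "type": node_type,
                "id": node_id,
                "level_m": payload.get("water_level"),
                "flow_m3s": payload.get("flow_rate"),
                "status": payload.get("status", "ok"),
            }

        readings = [
            normalize("gate_dam", "GD-07", {"water_level": 2.4, "status": "open"}),
            normalize("pump", "P-12", {"flow_rate": 0.8}),
        ]
        print(json.dumps(readings, indent=2))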

  15. Complex ambulatory settings demand scheduling systems.

    PubMed

    Ross, K M

    1998-01-01

    Practice management systems are becoming more and more complex, as they are asked to integrate all aspects of patient and resource management. Although patient scheduling is a standard expectation in any ambulatory environment, facilities and equipment resource scheduling are additional functionalities of scheduling systems. Because these functions were not typically handled in manual patient scheduling, the result was often resource mismanagement, along with a potential negative impact on utilization, patient flow, and provider productivity. As ambulatory organizations have become more seasoned users of practice management software, the value of resource scheduling has become apparent. Appointment scheduling within a fully integrated practice management system is recognized as an enhancement of scheduling itself and provides additional tools to manage other information needs. Scheduling, as one component of patient information management, provides additional tools in these areas.
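
    The resource-scheduling behaviour described here amounts to booking every resource a visit needs, or none of them. A toy sketch of that all-or-nothing check, with invented resource names:

        # Book a visit only if the provider, room, and equipment are all free
        # for the interval; resource names are invented.
        from datetime import datetime, timedelta

        bookings = {}  # resource -> list of (start, end) intervals

        def is_free(resource, start, end):
            return all(end <= s or start >= e
                       for s, e in bookings.get(resource, []))

        def book_visit(resources, start, minutes):
            end = start + timedelta(minutes=minutes)
            if all(is_free(r, start, end) for r in resources):
                for r in resources:
                    bookings.setdefault(r, []).append((start, end))
                return True
            return False  # one busy resource blocks the whole visit

        t = datetime(2024, 5, 1, 9, 0)
        print(book_visit(["dr_lee", "exam_room_2", "ultrasound_1"], t, 30))  # True
        print(book_visit(["dr_kim", "exam_room_2"], t, 15))                  # False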

  16. The Minds Behind the Schools.

    ERIC Educational Resources Information Center

    Hawkins, Beth Leibson

    2001-01-01

    Highlights three individuals whose ideas have contributed to some groundbreaking educational facilities. Two have developed schools that are centers of their communities, while the third is an expert at designing integrated pest management systems. (GR)

  17. Light Microscopy Module Imaging Tested and Demonstrated

    NASA Technical Reports Server (NTRS)

    Gati, Frank

    2004-01-01

    The Fluids Integrated Rack (FIR), a facility-class payload, and the Light Microscopy Module (LMM), a subrack payload, are integrated research facilities that will fly in the U.S. Laboratory module, Destiny, aboard the International Space Station. Both facilities are being engineered, designed, and developed at the NASA Glenn Research Center by Northrop Grumman Information Technology. The FIR is a modular, multiuser scientific research facility that is one of two racks that make up the Fluids and Combustion Facility (the other being the Combustion Integrated Rack). The FIR has a large volume dedicated to experimental hardware; easily reconfigurable diagnostics, power, and data systems that allow for unique experiment configurations; and customizable software. The FIR will also provide imagers, light sources, power management and control, command and data handling for facility and experiment hardware, and data processing and storage. The first payload in the FIR will be the LMM. The LMM integrated with the FIR is a remotely controllable, automated, on-orbit microscope subrack facility with key diagnostic capabilities for meeting science requirements--including video microscopy to observe microscopic phenomena and dynamic interactions, interferometry to make thin-film measurements with nanometer resolution, laser tweezers to manipulate micrometer-sized particles, confocal microscopy to provide enhanced three-dimensional visualization of structures, and spectrophotometry to measure the photonic properties of materials. Vibration disturbances were identified early in the LMM development phase as a high risk for contaminating the microgravity science environment. An integrated FIR-LMM test was conducted in Glenn's Acoustics Test Laboratory to assess mechanical sources of vibration and their impact on microscopic imaging. The primary purpose of the test was to characterize the LMM response at the sample location, the x-y stage within the microscope, to vibration emissions from the FIR and LMM support structures.

  18. Integrated, long term, sustainable, cost effective biosolids management at a large Canadian wastewater treatment facility.

    PubMed

    Leblanc, R J; Allain, C J; Laughton, P J; Henry, J G

    2004-01-01

    The Greater Moncton Sewerage Commission's 115,000 m3/d advanced, chemically assisted primary wastewater treatment facility located in New Brunswick, Canada, has developed an integrated, long-term, sustainable, cost-effective programme for the management and beneficial utilization of biosolids from lime-stabilized raw sludge. The paper overviews biosolids production, lime stabilization, conveyance, and odour control, followed by an in-depth discussion of the wastewater-sludge-as-a-resource programme, namely: composting, mine site reclamation, landfill cover, land application for agricultural use, tree farming, sod farm base as a soil enrichment, and topsoil manufacturing. The paper also addresses the issues of metals, pathogens, and organic compounds, and the quality control programme along with the regulatory requirements. Biosolids capital and operating costs are presented. Research results are presented on the removal of metals from primary sludge using a unique biological process known as BIOSOL, developed at the University of Toronto, Canada, to remove metals and destroy pathogens. The paper also discusses an ongoing cooperative research project with the Université de Moncton in which various mixtures of plant biosolids are composted with low-quality soil. Integration, the approach to sustainability, and "cumulative effects" as part of the overall biosolids management strategy are also discussed.

  19. Laboratory Directed Research & Development (LDRD)

    Science.gov Websites

  20. Payments to the Lab

    Science.gov Websites

  1. Nuclear Deterrence and Stockpile Stewardship

    Science.gov Websites

  2. Emerging Threats and Opportunities

    Science.gov Websites

  3. Living in Los Alamos

    Science.gov Websites

  4. Protecting Against Nuclear Threats

    Science.gov Websites

  5. Ion Beam Materials Lab

    Science.gov Websites

  6. Frontiers in Science Lectures

    Science.gov Websites

  7. 70+ Years of Innovations

    Science.gov Websites

  8. Center for Nonlinear Studies

    Science.gov Websites

  9. Taking Care of our Trails

    Science.gov Websites

  10. What We Monitor & Why

    Science.gov Websites

  11. The application of virtual reality systems as a support of digital manufacturing and logistics

    NASA Astrophysics Data System (ADS)

    Golda, G.; Kampa, A.; Paprocka, I.

    2016-08-01

    Modern trends in the development of computer-aided techniques are heading toward integrating the design of competitive products with so-called "digital manufacturing and logistics," supported by computer simulation software. All phases of the product lifecycle: from the design of a new product, through planning and control of manufacturing, assembly, internal logistics and repairs, quality control, distribution to customers and after-sale service, up to recycling or disposal, should be aided and managed by advanced product lifecycle management software packages. This paper describes important problems in providing an efficient flow of materials in supply chain management across the whole product lifecycle using computer simulation. The authors pay particular attention to the processes of acquiring relevant information and correct data, which are necessary for virtual modelling and computer simulation of integrated manufacturing and logistics systems. The article describes possible applications of virtual reality software for modelling and simulating production and logistics processes in the enterprise across different aspects of product lifecycle management. The authors demonstrate an effective method of creating computer simulations for digital manufacturing and logistics, show modelled and programmed examples and solutions, and discuss development trends, presenting options for applications that go beyond the enterprise.

  12. The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform

    NASA Astrophysics Data System (ADS)

    Xie, Qingyun

    2016-06-01

    This paper summarizes the general requirements and specific characteristics of both geospatial raster database management system and raster data processing platform from a domain-specific perspective as well as from a computing point of view. It also discusses the need of tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global scale and high performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. Basically, as a database management system, GeoRaster defines an integrated raster data model, supports image compression, data manipulation, general and spatial indices, content and context based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.
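
    GeoRaster itself is a commercial Oracle component, so the sketch below only illustrates the generic tile-and-process-in-parallel pattern behind the HPC features mentioned (locality, concurrent and parallel processing); it is not GeoRaster code, and the raster operation is invented.

        # Generic tile-and-process-in-parallel pattern; not GeoRaster code.
        from concurrent.futures import ProcessPoolExecutor

        def tile_bounds(w, h, tile):
            for y0 in range(0, h, tile):
                for x0 in range(0, w, tile):
                    yield (x0, y0, min(x0 + tile, w), min(y0 + tile, h))

        def stretch_tile(args):
            raster, (x0, y0, x1, y1) = args
            # each tile is independent work: a simple contrast stretch
            return [(x, y, min(255, raster[y][x] * 2))
                    for y in range(y0, y1) for x in range(x0, x1)]

        if __name__ == "__main__":
            w = h = 64
            raster = [[(x + y) % 256 for x in range(w)] for y in range(h)]
            jobs = [(raster, b) for b in tile_bounds(w, h, 16)]
            with ProcessPoolExecutor() as pool:
                for pixels in pool.map(stretch_tile, jobs):
                    for x, y, v in pixels:
                        raster[y][x] = v
            print(raster[0][:8])  # [0, 2, 4, 6, 8, 10, 12, 14]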

  13. Issues and recommendations associated with distributed computation and data management systems for the space sciences

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The primary purpose of the report is to explore management approaches and technology developments for computation and data management systems designed to meet future needs in the space sciences. The report builds on work presented in previous solar-terrestrial and planetary reports, broadening the outlook to all of the space sciences and considering policy aspects related to coordination between data centers, missions, and ongoing research activities, because the rapid growth of data and the wide geographic distribution of relevant facilities are expected to present especially troublesome problems for data archiving, distribution, and analysis.

  14. Hazardous Materials Pharmacies - A Vital Component of a Robust P2 Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCarter, S.

    2006-07-01

    Integrating pollution prevention (P2) into the Department of Energy Integrated Safety Management (ISM) - Environmental Management System (EMS) approach, required by DOE Order 450.1, leads to an enhanced ISM program at large and complex installations and facilities. One of the building blocks of integrating P2 into a comprehensive environmental and safety program is the control and tracking of the amounts, types, and flow of hazardous materials used at a facility. Hazardous materials pharmacies (typically called HazMarts) provide a solid approach to resolving this issue through business practice changes that reduce use, avoid excess, and redistribute surplus. If understood from concept to implementation, the HazMart is a powerful tool for reducing pollution at the source, tracking inventory storage, controlling usage and flow, and summarizing data for reporting requirements. Pharmacy options can range from a strict, single control point for all hazardous materials to a virtual system, where the inventory is user controlled and reported over a common system. Designing and implementing HazMarts on large, diverse installations or facilities presents a unique set of issues. This is especially true of research and development (R and D) facilities where the chemical use requirements are extensive and often classified. There are often multiple sources of supply; a wide variety of chemical requirements; a mix of containers ranging from small ampoules to large bulk storage tanks; and a wide range of tools used to track hazardous materials, ranging from simple purchase inventories to sophisticated tracking software. Computer systems are often not uniform in capacity, capability, or operating systems, making it difficult to use a server-based unified tracking system. Each of these issues has a solution or set of solutions tied to fundamental business practices. Each requires an understanding of the problem at hand, which, in turn, requires good communication among all potential users. A key attribute of a successful HazMart is that everybody must use the same program. That requirement often runs directly into the biggest issue of all... institutional resistance to change. To be successful, the program has to be both a top-down and bottom-up driven process. The installation or facility must set the policy and the requirement, but all of the players have to buy in and participate in building and implementing the program. Dynamac's years of experience assessing hazardous materials programs, providing business case analyses, and recommending and implementing pharmacy approaches for federal agencies have provided key insights into the issues, problems, and the array of solutions available. This paper presents the key steps required to implement a HazMart, explores the advantages and pitfalls associated with a HazMart, and presents some options for implementing a pharmacy or HazMart at complex installations and R and D facilities. (authors)
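
    The redistribution rule at the heart of a HazMart - fill a request from local stock, then from surplus elsewhere on site, and only then raise a purchase order - can be sketched directly. The chemical names and quantities below are invented for illustration.

        # "Redistribute surplus before buying new"; names and quantities
        # are invented.
        inventory = {  # (building, chemical) -> litres on hand
            ("bldg_12", "acetone"): 8.0,
            ("bldg_30", "acetone"): 0.5,
        }

        def request(building, chemical, litres):
            """Fill from local stock, then surplus elsewhere, then purchase."""
            local = inventory.get((building, chemical), 0.0)
            if local >= litres:
                inventory[(building, chemical)] = local - litres
                return "issued from local stock"
            for (bldg, chem), qty in inventory.items():
                if chem == chemical and bldg != building and qty >= litres:
                    inventory[(bldg, chem)] = qty - litres
                    return f"redistributed surplus from {bldg}"
            return "purchase order raised"  # last resort: new procurement

        print(request("bldg_30", "acetone", 2.0))
        # -> redistributed surplus from bldg_12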

  15. Design of Center-TRACON Automation System

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Davis, Thomas J.; Green, Steven

    1993-01-01

    A system for the automated management and control of terminal area traffic, referred to as the Center-TRACON Automation System (CTAS), is being developed at NASA Ames Research Center. In a cooperative program, NASA and the FAA have efforts underway to install and evaluate the system at the Denver area and Dallas/Ft. Worth area air traffic control facilities. This paper reviews the CTAS architecture and automation functions, as well as the integration of CTAS into the existing operational system. CTAS consists of three types of integrated tools that provide computer-generated advisories for both en-route and terminal area controllers to guide them in managing and controlling arrival traffic efficiently. One tool, the Traffic Management Advisor (TMA), generates runway assignments, landing sequences, and landing times for all arriving aircraft, including those originating from nearby feeder airports. TMA also assists in runway configuration control and flow management. Another tool, the Descent Advisor (DA), generates clearances for the en-route controllers handling arrival flows to metering gates. The DA's clearances ensure fuel-efficient and conflict-free descents to the metering gates at specified crossing times. In the terminal area, the Final Approach Spacing Tool (FAST) provides heading and speed advisories that help controllers produce an accurately spaced flow of aircraft on the final approach course. Databases consisting of several hundred aircraft performance models, airline-preferred operational procedures, and a three-dimensional wind model support the operation of CTAS. The first component of CTAS, the Traffic Management Advisor, is being evaluated at the Denver TRACON and the Denver Air Route Traffic Control Center. The second component, the Final Approach Spacing Tool, will be evaluated in several stages at the Dallas/Fort Worth Airport beginning in October 1993. An initial stage of the Descent Advisor tool is being prepared for testing at the Denver Center in late 1994. Operational evaluations of all three integrated CTAS tools are expected to begin at the two field sites in 1995.
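
    The operational TMA scheduler is far more sophisticated, but the basic idea of assigning landing times subject to a minimum runway separation can be shown in a toy first-come-first-served form; the separation value and flight IDs below are assumptions, not CTAS data.

        # Toy first-come-first-served landing-time assignment with a minimum
        # runway separation; loosely inspired by, not taken from, TMA.
        MIN_SEP_S = 90  # assumed single-runway separation, in seconds

        def assign_landing_times(etas):
            """etas: flight -> earliest feasible arrival time in seconds."""
            stas, runway_free_at = {}, 0
            for flight, eta in sorted(etas.items(), key=lambda kv: kv[1]):
                sta = max(eta, runway_free_at)  # delay only if runway busy
                stas[flight] = sta
                runway_free_at = sta + MIN_SEP_S
            return stas

        print(assign_landing_times({"AA12": 0, "UA7": 30, "DL9": 400}))
        # {'AA12': 0, 'UA7': 90, 'DL9': 400}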

  16. A Survey of Knowledge Management Skills Acquisition in an Online Team-Based Distributed Computing Course

    ERIC Educational Resources Information Center

    Thomas, Jennifer D. E.

    2007-01-01

    This paper investigates students' perceptions of their acquisition of knowledge management skills, namely thinking and team-building skills, resulting from the integration of various resources and technologies into an entirely team-based, online upper level distributed computing (DC) information systems (IS) course. Results seem to indicate that…

  17. Energy Systems Integration Facility News | Energy Systems Integration

    Science.gov Websites

    2018 News Release: NREL Taps Young to Oversee Geothermal Energy Program. In her new role, Young will work closely with NREL management to establish the lab's geothermal energy portfolio, including research and development geared toward advancing the use of geothermal energy as a renewable power source.

  18. 10 CFR Appendix D to Subpart D of... - Classes of Actions That Normally Require EISs

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    .../operation/decommissioning of reactors D5. Main transmission system additions D6. Integrating transmission... waste) D1. Strategic Systems, as defined in DOE Order 430.1, “Life-Cycle Asset Management,” and designated... facilities (that is, transmission system additions for integrating major new sources of generation into a...

  19. Bringing the CMS distributed computing system into scalable operations

    NASA Astrophysics Data System (ADS)

    Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.

    2010-04-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning, and scale testing of the data and workload management tools, the various computing workflows, and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis, including the sites and the workload and data management tools; validating the distributed production system by performing functionality, reliability, and scale tests; helping sites to commission, configure, and optimize their networking and storage through scale-testing data transfers and data processing; and improving the efficiency of accessing data across the CMS computing system, from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing, as well as the improvements accomplished towards efficient, reliable, and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers that stress the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, and activities targeted at the commissioning of the distributed production, user analysis, and monitoring systems.
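
    As an illustration of the load-generator concept mentioned above, the sketch below drives a pluggable submit function at a target rate; the submit stub is a stand-in and does not call any real Grid submission API.

        # Rate-controlled load generator; `fake_submit` is a stand-in and
        # does not call any real Grid job-submission API.
        import random, time

        def run_load(submit, rate_hz, duration_s):
            """Call submit() roughly rate_hz times per second."""
            sent, t_end = 0, time.monotonic() + duration_s
            while time.monotonic() < t_end:
                submit(job_id=sent, payload_mb=random.choice([10, 100, 1000]))
                sent += 1
                time.sleep(1.0 / rate_hz)
            return sent

        def fake_submit(job_id, payload_mb):
            print(f"submit job {job_id}: transfer {payload_mb} MB")

        run_load(fake_submit, rate_hz=5, duration_s=2)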

  20. Mesoscale and severe storms (Mass) data management and analysis system

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.; Karitani, S.; Dickerson, M.

    1984-01-01

    Progress on the Mesoscale and Severe Storms (MASS) data management and analysis system is described. An interactive atmospheric database management software package that converts four types of data (Sounding, Single Level, Grid, Image) into standard random-access formats has been implemented and integrated with the MASS AVE80 Series general-purpose plotting and graphics display data analysis software package. An interactive analysis and display graphics software package (AVE80) for analyzing large volumes of conventional and satellite-derived meteorological data has been enhanced to provide imaging and color graphics display, utilizing color video hardware integrated into the MASS computer system. Local and remote smart-terminal capability is provided by APPLE III computer systems installed in individual scientists' offices and integrated with the MASS system, providing color video, graphics, and character display of the four data types.
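
    The phrase "standard random access formats" refers to layouts where the i-th record can be read without scanning its predecessors. A minimal sketch with fixed-length binary records follows; the record layout is invented, not the MASS format.

        # Fixed-length binary records give O(1) random access by index; the
        # record layout here is invented, not the MASS format.
        import struct

        REC = struct.Struct("<ifff")  # station id, pressure, temp, dewpoint

        def write_records(path, records):
            with open(path, "wb") as f:
                for r in records:
                    f.write(REC.pack(*r))

        def read_record(path, index):
            with open(path, "rb") as f:
                f.seek(index * REC.size)  # jump straight to record `index`
                return REC.unpack(f.read(REC.size))

        write_records("sounding.dat", [(72469, 850.0, 12.25, 4.0),
                                       (72476, 700.0, 3.5, -8.25)])
        print(read_record("sounding.dat", 1))  # (72476, 700.0, 3.5, -8.25)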
