Closely Spaced Independent Parallel Runway Simulation.
1984-10-01
facility consists of the Central Computer Facility, the Controller Laboratory, and the Simulator Pilot Complex. CENTRAL COMPUTER FACILITY. The Central Computer Facility consists of a group of mainframes, minicomputers, and associated peripherals which host the operational and data acquisition...in the Controller Laboratory and convert their verbal directives into a keyboard entry which is transmitted to the Central Computer Complex, where
Central Computational Facility CCF communications subsystem options
NASA Technical Reports Server (NTRS)
Hennigan, K. B.
1979-01-01
A MITRE study which investigated the communication options available to support both the remaining Central Computational Facility (CCF) computer systems and the proposed U1108 replacements is presented. The facilities utilized to link the remote user terminals with the CCF were analyzed and guidelines to provide more efficient communications were established.
Brief Survey of TSC Computing Facilities
DOT National Transportation Integrated Search
1972-05-01
The Transportation Systems Center (TSC) has four essentially separate in-house computing facilities: the Honeywell Facility, the Hybrid Facility, the Multimode Simulation Facility, and the Central Facility. In addition to these four,...
NASA Technical Reports Server (NTRS)
Redhed, D. D.
1978-01-01
Three possible goals for the Numerical Aerodynamic Simulation Facility (NASF) are: (1) a computational fluid dynamics (as opposed to aerodynamics) algorithm development tool; (2) a specialized research laboratory facility for nearly intractable aerodynamics problems that industry encounters; and (3) a facility for industry to use in its normal aerodynamics design work that requires high computing rates. The central system issue for industry use of such a computer is the quality of the user interface as implemented in some kind of a front end to the vector processor.
Michael Ernst
2017-12-09
As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide, Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,
Introduction to the LaRC central scientific computing complex
NASA Technical Reports Server (NTRS)
Shoosmith, John N.
1993-01-01
The computers and associated equipment that make up the Central Scientific Computing Complex of the Langley Research Center are briefly described. The electronic networks that provide access to the various components of the complex and a number of areas that can be used by Langley and contractor staff for special applications (scientific visualization, image processing, software engineering, and grid generation) are also described. Flight simulation facilities that use the central computers are described. Management of the complex, procedures for its use, and available services and resources are discussed. This document is intended for new users of the complex, for current users who wish to keep apprised of changes, and for visitors who need to understand the role of central scientific computers at Langley.
LaRC local area networks to support distributed computing
NASA Technical Reports Server (NTRS)
Riddle, E. P.
1984-01-01
The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, work stations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there has been a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the work load on the central resources has increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, work stations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peavler, J.
1979-06-01
This publication gives details about hardware, software, procedures, and services of the Central Computing Facility, as well as information about how to become an authorized user. The available languages, compilers, libraries, and applications packages are described. 17 tables. (RWR)
ERIC Educational Resources Information Center
1968
The present report proposes a central computing facility and presents the preliminary specifications for such a system. It is based, in part, on the results of earlier studies by two previous contractors on behalf of the U.S. Office of Education. The recommendations are based upon the present contractor's considered evaluation of the earlier…
Energy consumption and load profiling at major airports. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kennedy, J.
1998-12-01
This report describes the results of energy audits at three major US airports. These studies developed load profiles and quantified energy usage at these airports while identifying procedures and electrotechnologies that could reduce their power consumption. The major power consumers at the airports studied included central plants, runway and taxiway lighting, fuel farms, terminals, people mover systems, and hangar facilities. Several major findings emerged during the study. The amount of energy efficient equipment installed at an airport is directly related to the age of the facility. Newer facilities had more energy efficient equipment, while older facilities had much of the original electric and natural gas equipment still in operation. As redesign, remodeling, and/or replacement projects proceed, responsible design engineers are selecting more energy efficient equipment to replace original devices. The use of computer-controlled energy management systems varies. At airports, the primary purpose of these systems is to monitor and control the lighting and environmental air conditioning and heating of the facility. Of the facilities studied, one used computer management extensively, one used it only marginally, and one had no computer controlled management devices. At all of the facilities studied, natural gas is used to provide heat and hot water. Natural gas consumption is at its highest in the months of November, December, January, and February. The Central Plant contains most of the inductive load at an airport and is also a major contributor to power consumption inefficiency. Power factor correction equipment was used at one facility but was not installed at the other two facilities due to high power factor and/or lack of need.
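The power-factor correction mentioned in the findings above follows a standard sizing rule: the capacitor bank must supply the difference in reactive power between the old and target power factors. A minimal sketch, with entirely hypothetical load numbers:

```python
import math

def correction_kvar(p_kw, pf_old, pf_new):
    """kVAR of capacitors needed to raise the displacement power factor of
    a load drawing p_kw of real power from pf_old to pf_new:
    Q = P * (tan(acos(pf_old)) - tan(acos(pf_new)))."""
    return p_kw * (math.tan(math.acos(pf_old)) - math.tan(math.acos(pf_new)))

# A hypothetical 1000 kW central-plant load corrected from 0.80 to 0.95:
kvar = correction_kvar(1000.0, 0.80, 0.95)
```

A facility that already runs near unity power factor gets a near-zero result from the same formula, which is consistent with the report's note that two airports had no need for correction equipment.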
CRYSNET manual. Informal report. [Hardware and software of crystallographic computing network
DOE Office of Scientific and Technical Information (OSTI.GOV)
None,
1976-07-01
This manual describes the hardware and software which together make up the crystallographic computing network (CRYSNET). The manual is intended as a users' guide and also provides general information for persons without any experience with the system. CRYSNET is a network of intelligent remote graphics terminals that are used to communicate with the CDC Cyber 70/76 computing system at the Brookhaven National Laboratory (BNL) Central Scientific Computing Facility. Terminals are in active use by four research groups in the field of crystallography. A protein data bank has been established at BNL to store in machine-readable form atomic coordinates and other crystallographic data for macromolecules. The bank currently includes data for more than 20 proteins. This structural information can be accessed at BNL directly by the CRYSNET graphics terminals. More than two years of experience has been accumulated with CRYSNET. During this period, it has been demonstrated that the terminals, which provide access to a large, fast third-generation computer, plus stand-alone interactive graphics capability, are useful for computations in crystallography, and in a variety of other applications as well. The terminal hardware, the actual operations of the terminals, and the operations of the BNL Central Facility are described in some detail, and documentation of the terminal and central-site software is given. (RWR)
1993-06-01
administering contractual support for lab-wide or multiple buys of ADP systems, software, and services. Computer systems located in the Central Computing Facility...
Protocols for Handling Messages Between Simulation Computers
NASA Technical Reports Server (NTRS)
Balcerowski, John P.; Dunnam, Milton
2006-01-01
Practical Simulator Network (PSimNet) is a set of data-communication protocols designed especially for use in handling messages between computers that are engaging cooperatively in real-time or nearly-real-time training simulations. In a typical application, computers that provide individualized training at widely dispersed locations would communicate, by use of PSimNet, with a central host computer that would provide a common computational-simulation environment and common data. Originally intended for use in supporting interfaces between training computers and computers that simulate the responses of spacecraft scientific payloads, PSimNet could be especially well suited for a variety of other applications -- for example, group automobile-driver training in a classroom. Another potential application might lie in networking of automobile-diagnostic computers at repair facilities to a central computer that would compile the expertise of numerous technicians and engineers and act as an expert consulting technician.
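The abstract does not describe PSimNet's actual wire format; as a generic illustration of the kind of message handling such protocols involve, here is a hypothetical length-prefixed JSON framing (all names and fields are invented for the example):

```python
import json
import struct

def pack_message(msg_type, payload):
    """Frame a simulation message as a 4-byte big-endian length followed
    by a JSON body, so a stream receiver knows where each message ends."""
    body = json.dumps({"type": msg_type, "data": payload}).encode("utf-8")
    return struct.pack(">I", len(body)) + body

def unpack_message(frame):
    """Inverse of pack_message: read the length prefix, decode the body."""
    (length,) = struct.unpack(">I", frame[:4])
    msg = json.loads(frame[4:4 + length].decode("utf-8"))
    return msg["type"], msg["data"]
```

Length prefixing is one common way to delimit messages on a byte stream; a real training-simulation protocol would also need sequence numbers and timing fields for the "nearly-real-time" case.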
Scientific Computing Strategic Plan for the Idaho National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiting, Eric Todd
Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory’s (INL’s) challenge and charge, and is central to INL’s ongoing success. Computing is an essential part of INL’s future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase-like data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.
Automatic Estimation of the Radiological Inventory for the Dismantling of Nuclear Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia-Bermejo, R.; Felipe, A.; Gutierrez, S.
The estimation of the radiological inventory of nuclear facilities to be dismantled is a process that includes information related to the physical inventory of the whole plant and to the radiological survey. The radiological inventory for all the components and civil structures of the plant can be estimated with statistical mathematical models. A computer application has been developed to obtain the radiological inventory automatically. Results: A computer application that estimates the radiological inventory from the radiological measurements or the characterization program has been developed. This application includes the statistical functions needed to estimate central tendency and variability, e.g., mean, median, variance, confidence intervals, and variation coefficients. It is a necessary tool for estimating the radiological inventory of a nuclear facility and a powerful aid to decision making in future sampling surveys.
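The central-tendency and variability statistics named in the abstract above (mean, median, variance, confidence intervals, variation coefficients) can be sketched in a few lines. This is an illustrative example, not the cited application's code, and the readings are hypothetical:

```python
import math
import statistics

def summarize(measurements, z=1.96):
    """Central tendency and variability for a set of activity measurements,
    with an approximate 95% confidence interval for the mean (normal
    approximation, z = 1.96)."""
    n = len(measurements)
    mean = statistics.mean(measurements)
    median = statistics.median(measurements)
    var = statistics.variance(measurements)  # sample variance (n - 1 divisor)
    sd = math.sqrt(var)
    half = z * sd / math.sqrt(n)             # half-width of the interval
    return {"mean": mean, "median": median, "variance": var,
            "cv": sd / mean,                 # coefficient of variation
            "ci95": (mean - half, mean + half)}

# Hypothetical surface-activity readings (Bq/cm^2) from one survey unit:
stats = summarize([0.8, 1.1, 0.9, 1.3, 1.0, 0.7, 1.2])
```

For the small sample sizes typical of characterization surveys, a Student-t multiplier rather than the fixed z = 1.96 would widen the interval appropriately.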
Refurbishment and Automation of Thermal Vacuum Facilities at NASA/GSFC
NASA Technical Reports Server (NTRS)
Dunn, Jamie; Gomez, Carlos; Donohue, John; Johnson, Chris; Palmer, John; Sushon, Janet
1999-01-01
The thermal vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the eleven facilities, currently ten of the systems are scheduled for refurbishment or replacement as part of a five-year implementation. Expected return on investment includes the reduction in test schedules, improvements in safety of facility operations, and reduction in the personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering, and for the automation of thermal vacuum facilities and tests. Automation of the thermal vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs), the use of Supervisory Control and Data Acquisition (SCADA) systems, and the development of a centralized Test Data Management System. These components allow the computer control and automation of mechanical components such as valves and pumps. The project of refurbishment and automation began in 1996 and has resulted in complete computer control of one facility (Facility 281), and the integration of electronically controlled devices and PLCs in multiple others.
Ni, Jianhua; Qian, Tianlu; Xi, Changbai; Rui, Yikang; Wang, Jiechen
2016-08-18
The spatial distribution of urban service facilities is largely constrained by the road network. In this study, network point pattern analysis and correlation analysis were used to analyze the relationship between road network and healthcare facility distribution. The weighted network kernel density estimation method proposed in this study identifies significant differences between the outside and inside areas of the Ming city wall. The results of network K-function analysis show that private hospitals are more evenly distributed than public hospitals, and pharmacy stores tend to cluster around hospitals along the road network. After performing the correlation analysis between different categories of hospitals and street centrality, we find that the distribution of these hospitals correlates highly with the street centralities, and that the correlations are higher with private and small hospitals than with public and large hospitals. The comprehensive analysis results could help examine the reasonableness of the existing urban healthcare facility distribution and optimize the location of new healthcare facilities.
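The correlation step described above reduces, in its simplest form, to a Pearson coefficient between street-centrality values and facility densities per unit of the network; a minimal sketch with made-up series:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series,
    e.g. street-centrality values vs. hospital densities per network cell."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Perfectly proportional series correlate at r = 1:
r = pearson_r([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

The study's weighted network kernel density estimation is considerably more involved (densities are computed along network distance, not Euclidean distance), but the final correlation comparison is of this form.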
A user view of office automation or the integrated workstation
NASA Technical Reports Server (NTRS)
Schmerling, E. R.
1984-01-01
Central data bases are useful only if they are kept up to date and easily accessible in an interactive (query) mode rather than in monthly reports that may be out of date and must be searched by hand. The concepts of automatic data capture, data base management, and query languages require good communications and readily available work stations to be useful. The minimal necessary work station is a personal computer, which can be an important office tool if connected to other office machines and properly integrated into an office system. It has a great deal of flexibility and can often be tailored to suit the tastes, work habits, and requirements of the user. Unlike dumb terminals, there is less tendency to saturate a central computer, since its free-standing capabilities are available after downloading a selection of data. The PC also permits the sharing of many other facilities, like larger computing power, sophisticated graphics programs, laser printers, and communications. It can provide rapid access to common data bases able to provide more up-to-date information than printed reports. Portable computers can access the same familiar office facilities from anywhere in the world where a telephone connection can be made.
2012-02-17
Industrial Area Construction: Located 5 miles south of Launch Complex 39, construction of the main buildings -- Operations and Checkout Building, Headquarters Building, and Central Instrumentation Facility – began in 1963. In 1992, the Space Station Processing Facility was designed and constructed for the pre-launch processing of International Space Station hardware that was flown on the space shuttle. Along with other facilities, the industrial area provides spacecraft assembly and checkout, crew training, computer and instrumentation equipment, hardware preflight testing and preparations, as well as administrative offices. Poster designed by Kennedy Space Center Graphics Department/Greg Lee. Credit: NASA
NASA Technical Reports Server (NTRS)
1974-01-01
The specifications and functions of the Central Data Processing Facility (CDPF) which supports the Earth Observatory Satellite (EOS) are discussed. The CDPF will receive the EOS sensor data and spacecraft data through the Spaceflight Tracking and Data Network (STDN) and the Operations Control Center (OCC). The CDPF will process the data and produce high density digital tapes, computer compatible tapes, film and paper print images, and other data products. The specific aspects of data inputs and data processing are identified. A block diagram of the CDPF to show the data flow and interfaces of the subsystems is provided.
NASA Astrophysics Data System (ADS)
Clayton, R. W.; Kohler, M. D.; Massari, A.; Heaton, T. H.; Guy, R.; Chandy, M.; Bunn, J.; Strand, L.
2014-12-01
The CSN is now in its 3rd year of operation and has expanded to 400 stations in the Los Angeles region. The goal of the network is to produce a map of strong shaking immediately following a major earthquake as a proxy for damage and a guide for first responders. We have also instrumented a number of buildings with the goal of determining the state of health of these structures before and after they have been shaken. In one 15-story structure, our sensors, distributed two per floor, show body waves propagating in the structure after a moderate local earthquake (M4.4 in Encino, CA). Sensors in a 52-story structure, which we plan to instrument with two sensors per floor as well, show the modes of the building (see Figure) down to the fundamental mode at 5 sec due to a M5.1 earthquake in La Habra, CA. The CSN utilizes a number of technologies that will likely be important in building robust low-cost networks. These include: Distributed computing - the sensors themselves are smart sensors that perform the basic detection and size estimation in the onboard computers and send the results immediately (without packetization latency) to the central facility. Cloud computing - the central facility is housed in the cloud, which means it is more robust than a local site and has expandable computing resources available, so that it can operate with minimal resources during quiet times but still exploit a very large computing facility during an earthquake. Low-cost/low-maintenance sensors - the MEMS sensors are capable of staying on scale to +/- 2 g and can measure events in the Los Angeles Basin as low as magnitude 3.
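The "basic detection" done on the smart sensors is not specified in the abstract; a common choice for an onboard seismic picker is a short-term/long-term average (STA/LTA) ratio, sketched here as a hypothetical example rather than the CSN's actual algorithm:

```python
def sta_lta_ratio(samples, short_win=5, long_win=50):
    """Ratio of short-term to long-term mean absolute amplitude at the end
    of the sample buffer; a high ratio suggests an event onset."""
    sta = sum(abs(v) for v in samples[-short_win:]) / short_win
    lta = sum(abs(v) for v in samples[-long_win:]) / long_win
    return sta / lta if lta > 0 else 0.0

def should_report(samples, threshold=3.0):
    """Smart-sensor logic: send a pick to the central (cloud) facility only
    when the detector fires, so quiet stations generate almost no traffic."""
    return sta_lta_ratio(samples) >= threshold
```

Running the trigger on the sensor and reporting only picks is what lets the cloud-hosted central facility idle cheaply between events and scale up during one.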
Template Interfaces for Agile Parallel Data-Intensive Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramakrishnan, Lavanya; Gunter, Daniel; Pastorello, Gilberto Z.
Tigres provides a programming library to compose and execute large-scale data-intensive scientific workflows from desktops to supercomputers. DOE User Facilities and large science collaborations are increasingly generating data sets large enough that it is no longer practical to download them to a desktop to operate on them. They are instead stored at centralized compute and storage resources such as high performance computing (HPC) centers. Analysis of this data requires an ability to run on these facilities, but with current technologies, scaling an analysis to an HPC center and to a large data set is difficult even for experts. Tigres is addressing the challenge of enabling collaborative analysis of DOE science data through a new concept of reusable "templates" that enable scientists to easily compose, run, and manage collaborative computational tasks. These templates define common computation patterns used in analyzing a data set.
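The "template" concept (common computation patterns such as running stages in sequence, or mapping one stage over many inputs in parallel) can be illustrated generically. This sketch is not the actual Tigres API, whose names and signatures differ:

```python
from concurrent.futures import ThreadPoolExecutor

def sequence(data, *stages):
    """Sequence template: feed the output of each stage into the next."""
    for stage in stages:
        data = stage(data)
    return data

def parallel_map(stage, items, workers=4):
    """Parallel template: apply one stage to many inputs concurrently."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(stage, items))

# Hypothetical analysis: filter each file's records in parallel, then reduce.
cleaned = parallel_map(lambda xs: [x for x in xs if x >= 0],
                       [[1, -2, 3], [4, -5]])
total = sequence(cleaned, lambda groups: [sum(g) for g in groups], sum)
```

The value of the template idea is that the same composition runs unchanged whether the executor is a local thread pool or an HPC batch system; only the execution backend is swapped.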
Expansion of Enterprise Requirements and Acquisition Model
2012-06-04
upgrades in technology that made it more lethal with a smaller force. Computer technology, GPS, and stealth are just a few examples that allowed...The facility consists of banks of networked computers and large displays, all built around a centralized workspace. It can be seen in Figure 3. The...first was to meet a gap in UHF satellite communications for the Navy. This was satisfied as a Tier-1 program by purchasing additional bandwidth
NASA Technical Reports Server (NTRS)
Felberg, F. H.
1984-01-01
The Jet Propulsion Laboratory, a research and development organization with about 5,000 employees, presents a complicated set of requirements for an institutional system of computing and informational services. The approach taken by JPL in meeting this challenge is one of controlled flexibility. A central communications network is provided, together with selected computing facilities for common use. At the same time, staff members are given considerable discretion in choosing the mini- and microcomputers that they believe will best serve their needs. Consultation services, computer education, and other support functions are also provided.
The Electronic Cottage. State-of-the-Art Paper.
ERIC Educational Resources Information Center
Morf, Martin; Alexander, Philip
This paper provides an overview of the information currently available on the prospects of electronic work at home. The first major section examines the technological environment that makes electronic home work possible. Central and dispersed computer facilities, internal and external means of communication, work stations, software, and security…
Residential, personal, indoor, and outdoor sampling of particulate matter was conducted at a retirement center in the Towson area of northern Baltimore County in 1998. Concurrent sampling was conducted at a central community site. Computer-controlled scanning electron microsco...
NASA Technical Reports Server (NTRS)
Dundas, T. R.
1981-01-01
The development and capabilities of the Montana geodata system are discussed. The system is entirely dependent on the state's central data processing facility which serves all agencies and is therefore restricted to batch mode processing. The computer graphics equipment is briefly described along with its application to state lands and township mapping and the production of water quality interval maps.
NASA Technical Reports Server (NTRS)
Lillesand, T. M.; Meisner, D. E. (Principal Investigator)
1980-01-01
An investigation was conducted into ways to improve the involvement of state and local user personnel in the digital image analysis process by isolating those elements of the analysis process which require extensive involvement by field personnel and providing means for performing those activities apart from a computer facility. In this way, the analysis procedure can be converted from a centralized activity focused on a computer facility to a distributed activity in which users can interact with the data at the field office level or in the field itself. General image processing software was developed on the University of Minnesota computer system (Control Data Cyber models 172 and 74). The use of color hardcopy image data as a primary medium in supervised training procedures was investigated, and digital display equipment and a coordinate digitizer were procured.
NASA Astrophysics Data System (ADS)
James, C. M.; Gildfind, D. E.; Lewis, S. W.; Morgan, R. G.; Zander, F.
2018-03-01
Expansion tubes are an important type of test facility for the study of planetary entry flow-fields, being the only type of impulse facility capable of simulating the aerothermodynamics of superorbital planetary entry conditions from 10 to 20 km/s. However, the complex flow processes involved in expansion tube operation make it difficult to fully characterise flow conditions, with two-dimensional full facility computational fluid dynamics simulations often requiring tens or hundreds of thousands of computational hours to complete. In an attempt to simplify this problem and provide a rapid flow condition prediction tool, this paper presents a validated and comprehensive analytical framework for the simulation of an expansion tube facility. It identifies central flow processes and models them from state to state through the facility using established compressible and isentropic flow relations, and equilibrium and frozen chemistry. How the model simulates each section of an expansion tube is discussed, as well as how the model can be used to simulate situations where flow conditions diverge from ideal theory. The model is then validated against experimental data from the X2 expansion tube at the University of Queensland.
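The "established compressible and isentropic flow relations" the abstract refers to are standard textbook results; a minimal sketch, assuming a calorically perfect gas with γ = 1.4 (real expansion-tube flows depart from this, which is why the cited model also carries equilibrium and frozen chemistry):

```python
import math

GAMMA = 1.4  # ratio of specific heats; a perfect-gas assumption

def isentropic_ratios(mach, gamma=GAMMA):
    """Stagnation-to-static ratios T0/T, p0/p, rho0/rho for isentropic flow."""
    t = 1.0 + 0.5 * (gamma - 1.0) * mach ** 2
    return t, t ** (gamma / (gamma - 1.0)), t ** (1.0 / (gamma - 1.0))

def normal_shock_mach(m1, gamma=GAMMA):
    """Mach number behind a normal shock with upstream Mach number m1."""
    num = 1.0 + 0.5 * (gamma - 1.0) * m1 ** 2
    den = gamma * m1 ** 2 - 0.5 * (gamma - 1.0)
    return math.sqrt(num / den)
```

Chaining relations like these from state to state through the facility sections is what lets an analytical model replace hours of full-facility CFD for condition prediction.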
NASA Technical Reports Server (NTRS)
Cushman, Paula P.
1993-01-01
Research will be undertaken in this contract in the area of Modeling Resource and Facilities Enhancement to include computer, technical and educational support to NASA investigators to facilitate model implementation, execution and analysis of output; to provide facilities linking USRA and the NASA/EADS Computer System as well as resident work stations in ESAD; and to provide a centralized location for documentation, archival and dissemination of modeling information pertaining to NASA's program. Additional research will be undertaken in the area of Numerical Model Scale Interaction/Convective Parameterization Studies to include implementation of the comparison of cloud and rain systems and convective-scale processes between the model simulations and what was observed; and to incorporate the findings of these and related research findings in at least two refereed journal articles.
NASA Astrophysics Data System (ADS)
Roslyakov, P. V.; Morozov, I. V.; Zaychenko, M. N.; Sidorkin, V. T.
2016-04-01
Various variants of the structure of low-emission burner facilities intended for char gas burning in an operating TP-101 boiler of the Estonia power plant are considered. The planned increase in the volume of shale reprocessing and, correspondingly, in char gas volumes makes co-combustion of these gases necessary. Hence the need to develop a burner facility of a given capacity that burns char gas effectively while meeting reliability and environmental requirements. To this end, the burner design was based on staged combustion of fuel with gas recirculation. From a preliminary analysis of possible design variants, three types of proven burner facilities were chosen: a vortex burner with supply of recirculation gases into the secondary air, a vortex burner with baffled supply of recirculation gases between the primary and secondary air flows, and a burner facility with a vortex pilot burner. Optimum structural characteristics and operating parameters were determined by numerical experiments. These experiments, carried out with the ANSYS CFX computational fluid dynamics package, simulated the mixing, ignition, and burning of char gas. For each type of burner facility, the numerical experiments determined the structural and operating parameters that give effective char gas burning and meet the required environmental standard on nitrogen oxide emissions. Based on the computation results, a burner facility for char gas burning with a pilot diffusion burner in its central part was developed and built. Preliminary verification field tests on the TP-101 boiler showed that the actual content of nitrogen oxides in char gas burner flames did not exceed the declared concentration of 150 ppm (200 mg/m3).
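The two emission figures quoted (150 ppm and 200 mg/m3) are consistent if the nitrogen oxides are expressed as NO (M ≈ 30 g/mol) at normal conditions; that basis is an assumption here, since the abstract does not state it. A conversion sketch:

```python
def ppm_to_mg_per_m3(ppm, molar_mass_g_mol, molar_volume_l=22.414):
    """Convert a volumetric gas concentration in ppm to mg/m^3, using the
    molar volume of an ideal gas at normal conditions (0 degC, 1 atm)."""
    return ppm * molar_mass_g_mol / molar_volume_l

# 150 ppm expressed as NO (30.01 g/mol) comes out near 200 mg/m^3:
nox = ppm_to_mg_per_m3(150.0, 30.01)
```

Expressed as NO2 (46 g/mol) the same 150 ppm would exceed 300 mg/m3, so the reporting basis matters when comparing against an emission standard.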
Real-time data reduction capabilities at the Langley 7 by 10 foot high speed tunnel
NASA Technical Reports Server (NTRS)
Fox, C. H., Jr.
1980-01-01
The 7 by 10 foot high speed tunnel performs a wide range of tests employing a variety of model installation methods. To support the reduction of static data from this facility, a generalized wind tunnel data reduction program had been developed for use on the Langley central computer complex. The capabilities of a version of this generalized program adapted for real time use on a dedicated on-site computer are discussed. The input specifications, instructions for the console operator, and full descriptions of the algorithms are included.
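Static wind-tunnel data reduction of the kind described centers on normalizing measured pressures and forces by the freestream dynamic pressure; the program's actual algorithms are documented in the report itself, so the following is only a generic sketch:

```python
def dynamic_pressure(rho, v):
    """Freestream dynamic pressure q = 0.5 * rho * V^2."""
    return 0.5 * rho * v ** 2

def pressure_coefficient(p, p_inf, q_inf):
    """Cp = (p - p_inf) / q_inf, the nondimensional form tabulated in
    reduced static data."""
    return (p - p_inf) / q_inf

# Hypothetical sea-level air at 100 m/s; a stagnation tap reads p_inf + q,
# so its pressure coefficient should come out at 1:
q = dynamic_pressure(1.225, 100.0)
cp_stag = pressure_coefficient(101325.0 + q, 101325.0, q)
```

Performing these reductions on a dedicated on-site computer, rather than batching them to the central complex, is what made the real-time capability described above possible.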
ERA 1103 UNIVAC 2 Calculating Machine
1955-09-21
The new 10-by 10-Foot Supersonic Wind Tunnel at the Lewis Flight Propulsion Laboratory included high tech data acquisition and analysis systems. The reliable gathering of pressure, speed, temperature, and other data from test runs in the facilities was critical to the research process. Throughout the 1940s and early 1950s female employees, known as computers, recorded all test data and performed initial calculations by hand. The introduction of punch card computers in the late 1940s gradually reduced the number of hands-on calculations. In the mid-1950s new computational machines were installed in the office building of the 10-by 10-Foot tunnel. The new systems included this UNIVAC 1103 vacuum tube computer—the lab’s first centralized computer system. The programming was done on paper tape and fed into the machine. The 10-by 10 computer center also included the Lewis-designed Computer Automated Digital Encoder (CADDE) and Digital Automated Multiple Pressure Recorder (DAMPR) systems which converted test data to binary-coded decimal numbers and recorded test pressures automatically, respectively. The systems primarily served the 10-by 10, but were also applied to the other large facilities. Engineering Research Associates (ERA) developed the initial UNIVAC computer for the Navy in the late 1940s. In 1952 the company designed a commercial version, the UNIVAC 1103. The 1103 was the first computer designed by Seymour Cray and the first commercially successful computer.
Data Recording Room in the 10-by 10-Foot Supersonic Wind Tunnel
1973-04-21
The test data recording equipment located in the office building of the 10-by 10-Foot Supersonic Wind Tunnel at the NASA Lewis Research Center. The data system was the state of the art when the facility began operating in 1955 and was upgraded over time. NASA engineers used solenoid valves to measure pressures from different locations within the test section. Up to 48 measurements could be fed into a single transducer, and the 10-by 10 data recorders could handle up to 200 data channels at once. The Central Automatic Digital Data Encoder (CADDE) converted the direct-current raw data from the test section into digital format on magnetic tape. The digital information was sent to the Lewis Central Computer Facility for additional processing; it could also be displayed in the control room via strip charts or oscillographs. The 16- by 56-foot ERA 1103 UNIVAC mainframe computer processed most of the digital data. The paper tape with the raw data was fed into the ERA 1103, which performed the needed calculations, and the information was then sent back to the control room. There was a lag of several minutes before the computed information was available, but it was still dramatically faster than the hand calculations performed by the female computers. The 10- by 10-foot tunnel, which had its official opening in May 1956, was built under the Unitary Wind Tunnel Plan Act, which coordinated wind tunnel construction among the NACA, the Air Force, industry, and universities. The 10- by 10 was the largest of the three NACA tunnels built under the act.
Microcosm to Cosmos: The Growth of a Divisional Computer Network
Johannes, R.S.; Kahane, Stephen N.
1987-01-01
In 1982, we reported the deployment of a network of microcomputers in the Division of Gastroenterology[1]. This network was based upon Corvus Systems Omninet®; Corvus was one of the very first firms to offer networking products for PCs. This PC development occurred coincident with the planning phase of the Johns Hopkins Hospital's multisegment Ethernet project, and a rich communications infrastructure is now in place at the Johns Hopkins Medical Institutions[2,3]. Shortly after hospital development began under the direction of the Operational and Clinical Systems Division (OCS), the Johns Hopkins School of Medicine began an Integrated Academic Information Management Systems (IAIMS) planning effort. We now present a model that uses aspects of all three planning efforts (PC networks, hospital information systems, and IAIMS) to build a divisional computing facility. This facility is viewed as a terminal leaf on the institutional network diagram. Nevertheless, it is noteworthy that this leaf, the divisional resource in the Division of Gastroenterology (GASNET), has a rich substructure and functionality of its own, perhaps revealing the recursive nature of network architecture. The current status, design, and function of the GASNET computational facility are discussed. Among the major positive aspects of this design are the sharing and centralization of MS-DOS software and the high-speed DOS/Unix link that makes available most of our institution's computing resources.
Tele-Medicine Applications of an ISDN-Based Tele-Working Platform
2001-10-25
developed over the Hellenic Integrated Services Digital Network (ISDN), is based on user terminals (personal computers), networking apparatus, and a...key infrastructure, ready to offer enhanced message switching and translation in response to market trends [8]. Three (3) years ago, the Hellenic PTT...should result in an integrated Tele-Working platform, a main central database (complete with maintenance facilities), and a ready-to-be
Baumgart, André; Denz, Christof; Bender, Hans-Joachim; Schleppers, Alexander
2009-01-01
The complexity of the operating room (OR) requires that both structural (eg, department layout) and behavioral (eg, staff interactions) patterns of work be considered when developing quality improvement strategies. In our study, we investigated how these contextual factors influence outpatient OR processes and the quality of care delivered. The study setting was a German university-affiliated hospital performing approximately 6000 outpatient surgeries annually. During the 3-year study period, the hospital significantly changed its outpatient OR facility layout from a decentralized (ie, ORs in adjacent areas of the building) to a centralized (ie, ORs in immediate vicinity of each other) design. To study the impact of the facility change on OR processes, we used a mixed methods approach, including process analysis, process modeling, and social network analysis of staff interactions. The change in facility layout was seen to influence OR processes in ways that could substantially affect patient outcomes. For example, we found a potential for more errors during handovers in the new centralized design due to greater interdependency between tasks and staff. Utilization of the mixed methods approach in our analysis, as compared with that of a single assessment method, enabled a deeper understanding of the OR work context and its influence on outpatient OR processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lampley, C.M.
1981-01-01
This report describes many of the computational methods employed within the SKYSHINE-II program. A brief description of the new data base is included, as is a description of the input data requirements and formats needed to properly execute a SKYSHINE-II problem. Utilization instructions for the program are provided for operation of the SKYSHINE-II Code on the Brookhaven National Laboratory Central Scientific Computing Facility (See NUREG/CR-0781, RRA-T7901 for complete information).
Management and development of local area network upgrade prototype
NASA Technical Reports Server (NTRS)
Fouser, T. J.
1981-01-01
Given management and development users who access a central computing facility and who also need local computation and storage, a commercially available networking system such as CP/NET from Digital Research provides the building blocks for connecting intelligent microsystems to file and print services. The major problems to be overcome in implementing such a network are the dearth of intelligent communication front ends for the microcomputers and the lack of a rich set of management and software development tools.
Mullaney, John R.; Schwarz, Gregory E.
2013-01-01
The total nitrogen load to Long Island Sound from Connecticut and contributing areas to the north was estimated for October 1998 to September 2009. Discrete measurements of total nitrogen concentrations and continuous flow data from 37 water-quality monitoring stations in the Long Island Sound watershed were used to compute total annual nitrogen yields and loads. Total annual computed yields and basin characteristics were used to develop a generalized-least squares regression model for use in estimating the total nitrogen yields from unmonitored areas in coastal and central Connecticut. Significant variables in the regression included the percentage of developed land, percentage of row crops, point-source nitrogen yields from wastewater-treatment facilities, and annual mean streamflow. Computed annual median total nitrogen yields at individual monitoring stations ranged from less than 2,000 pounds per square mile in mostly forested basins (typically less than 10 percent developed land) to more than 13,000 pounds per square mile in urban basins (greater than 40 percent developed) with wastewater-treatment facilities and in one agricultural basin. Medians of computed total annual nitrogen yields for water years 1999–2009 at most stations were similar to those previously computed for water years 1988–98. However, computed medians of annual yields at several stations, including the Naugatuck River, Quinnipiac River, and Hockanum River, were lower than during 1988–98. Nitrogen yields estimated for 26 unmonitored areas downstream from monitoring stations ranged from less than 2,000 pounds per square mile to 34,000 pounds per square mile. Computed annual total nitrogen loads at the farthest downstream monitoring stations were combined with the corresponding estimates for the downstream unmonitored areas for a combined estimate of the total nitrogen load from the entire study area. 
Resulting combined total nitrogen loads ranged from 38 to 68 million pounds per year during water years 1999–2009. Total annual loads from the monitored basins represent 63 to 74 percent of the total load. Computed annual nitrogen loads from four stations near the Massachusetts border with Connecticut represent 52 to 54 percent of the total nitrogen load during water years 2008–9, the only years with data for all the border sites. During the latter part of the 1999–2009 study period, total nitrogen loads to Long Island Sound from the study area appeared to increase slightly. The apparent increase in loads may be due to higher than normal streamflows, which consequently increased nonpoint nitrogen loads during the study, offsetting major reductions of nitrogen from wastewater-treatment facilities. Nitrogen loads from wastewater treatment facilities declined as much as 2.3 million pounds per year in areas of Connecticut upstream from the monitoring stations and as much as 5.8 million pounds per year in unmonitored areas downstream in coastal and central Connecticut.
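The study above estimated yields for unmonitored areas with a generalized-least-squares regression on basin characteristics (percent developed land, percent row crops, point-source nitrogen yield, and annual mean streamflow). As an illustration only, the following sketch fits a model of that form with ordinary least squares on synthetic data; the coefficients and data are invented and stand in for the report's actual GLS fit.

```python
import numpy as np

def fit_yield_model(X, y):
    """Fit a linear total-nitrogen-yield model by least squares.

    Columns of X (illustrative, mirroring the abstract's predictors):
    percent developed land, percent row crops, point-source N yield,
    annual mean streamflow. Returns [intercept, coefficients...]."""
    A = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Synthetic basins: yields rise with development and point sources.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(50, 4))
true = np.array([1500.0, 9000.0, 2500.0, 4000.0, 1200.0])
y = true[0] + X @ true[1:] + rng.normal(0, 50, size=50)
coef = fit_yield_model(X, y)
```

With low noise the least-squares fit recovers the generating coefficients closely, which is the sense in which the report's regression transfers monitored-basin relationships to unmonitored areas.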
Code of Federal Regulations, 2010 CFR
2010-01-01
... ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.1 Scope. This part contains the regulations implementing the National Credit Union Central Liquidity Facility Act, subchapter III of the Federal Credit Union Act. The National Credit Union Administration Central Liquidity Facility is a mixed-ownership Government corporation...
High-Performance Computing User Facility | Computational Science | NREL
The High-Performance Computing (HPC) User Facility at NREL provides advanced computing systems for computational science, including the Peregrine supercomputer and the Gyrfalcon Mass Storage System, along with information on how to access them.
Distributed computing for macromolecular crystallography
Krissinel, Evgeny; Uski, Ville; Lebedev, Andrey; Ballard, Charles
2018-01-01
Modern crystallographic computing is characterized by the growing role of automated structure-solution pipelines, which represent complex expert systems utilizing a number of program components, decision makers and databases. They also require considerable computational resources and regular database maintenance, which is increasingly more difficult to provide at the level of individual desktop-based CCP4 setups. On the other hand, there is a significant growth in data processed in the field, which brings up the issue of centralized facilities for keeping both the data collected and structure-solution projects. The paradigm of distributed computing and data management offers a convenient approach to tackling these problems, which has become more attractive in recent years owing to the popularity of mobile devices such as tablets and ultra-portable laptops. In this article, an overview is given of developments by CCP4 aimed at bringing distributed crystallographic computations to a wide crystallographic community. PMID:29533240
Distributed computing for macromolecular crystallography.
Krissinel, Evgeny; Uski, Ville; Lebedev, Andrey; Winn, Martyn; Ballard, Charles
2018-02-01
NASA Technical Reports Server (NTRS)
1994-01-01
In the mid-1980s, Kinetic Systems and Langley Research Center determined that high speed CAMAC (Computer Automated Measurement and Control) data acquisition systems could significantly improve Langley's ARTS (Advanced Real Time Simulation) system. The ARTS system supports flight simulation R&D, and the CAMAC equipment allowed 32 high performance simulators to be controlled by centrally located host computers. This technology broadened Kinetic Systems' capabilities and led to several commercial applications. One of them is General Atomics' fusion research program. Kinetic Systems equipment allows tokamak data to be acquired four to 15 times more rapidly. Ford Motor Company uses the same technology to control and monitor transmission testing facilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dearing, J F; Rose, S D; Nelson, W R
The predicted computational results of two well-known sub-channel analysis codes, COBRA-III-C and SABRE-I (wire wrap version), have been evaluated by comparison with steady state temperature data from the THORS Facility at ORNL. Both codes give good predictions of transverse and axial temperatures when compared with wire wrap thermocouple data. The crossflow velocity profiles predicted by these codes are similar, which is encouraging since the wire wrap models are based on different assumptions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nikolic, R J
This month's issue has the following articles: (1) Dawn of a New Era of Scientific Discovery - Commentary by Edward I. Moses; (2) At the Frontiers of Fundamental Science Research - Collaborators from national laboratories, universities, and international organizations are using the National Ignition Facility to probe key fundamental science questions; (3) Livermore Responds to Crisis in Post-Earthquake Japan - More than 70 Laboratory scientists provided round-the-clock expertise in radionuclide analysis and atmospheric dispersion modeling as part of the nation's support to Japan following the March 2011 earthquake and nuclear accident; (4) A Comprehensive Resource for Modeling, Simulation, and Experiments - A new Web-based resource called MIDAS is a central repository for material properties, experimental data, and computer models; and (5) Finding Data Needles in Gigabit Haystacks - Livermore computer scientists have developed a novel computer architecture based on 'persistent' memory to ease data-intensive computations.
12 CFR 741.210 - Central liquidity facility.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Central liquidity facility. 741.210 Section 741... Unions That Also Apply to Federally Insured State-Chartered Credit Unions § 741.210 Central liquidity... Liquidity Facility, shall adhere to the requirements stated in part 725 of this chapter. ...
Design of a radiation facility for very small specimens used in radiobiology studies
NASA Astrophysics Data System (ADS)
Rodriguez, Manuel; Jeraj, Robert
2008-06-01
A design of a radiation facility for very small specimens used in radiobiology is presented. This micro-irradiator has been primarily designed to irradiate partial bodies in zebrafish embryos 3-4 mm in length. A miniature x-ray tube producing a 50 kV photon beam is used as the radiation source. The source is inserted in a cylindrical brass collimator that has a pinhole of 1.0 mm in diameter along the central axis to produce a pencil photon beam. The collimator with the source is attached underneath a computer-controlled movable table which holds the specimens. Using a 45° tilted mirror, a digital camera, connected to the computer, takes pictures of the specimen and the pinhole collimator. From the image provided by the camera, the relative distance from the specimen to the pinhole axis is calculated and coordinates are sent to the movable table to properly position the samples in the beam path. Due to its monitoring system, the characteristics of the radiation beam, the accuracy and precision of specimen positioning, and automatic image-based specimen recognition, this radiation facility is a suitable tool for irradiating partial bodies in zebrafish embryos, cell cultures, or any other small specimen used in radiobiology research.
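The image-based positioning step above (pixel offset between specimen and pinhole axis converted into a table move) can be sketched as a small helper. The function name, pixel coordinates, and calibration factor below are hypothetical illustrations, not values from the published design.

```python
def stage_move(specimen_px, pinhole_px, mm_per_px):
    """Compute the (x, y) table move, in mm, that places the specimen
    on the pinhole (beam) axis, given the pixel positions of both in
    the mirror-camera image. mm_per_px is the camera calibration
    factor relating image pixels to millimetres at the table plane."""
    dx = (pinhole_px[0] - specimen_px[0]) * mm_per_px
    dy = (pinhole_px[1] - specimen_px[1]) * mm_per_px
    return dx, dy

# Example: specimen imaged 60 px right of and 60 px below the axis,
# with an assumed calibration of 0.05 mm per pixel.
dx, dy = stage_move((100, 100), (160, 40), 0.05)
```

The real system would send (dx, dy) to the movable table's motion controller; here the calculation is shown in isolation.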
1983-12-01
while at the same time improving its operational efficiency. Through their integration and use, System Program Managers have a comprehensive analytical... systems. The NRLA program is hosted on the CREATE Operating System and contains approximately 5500 lines of computer code. It consists of a main...associated with C alternative maintenance plans. As the technological complexity of weapons systems has increased, new and innovative logistical support
An electric propulsion long term test facility
NASA Technical Reports Server (NTRS)
Trump, G.; James, E.; Vetrone, R.; Bechtel, R.
1979-01-01
An existing test facility was modified to provide for extended testing of multiple electric propulsion thruster subsystems. A program to document thruster subsystem characteristics as a function of time is currently in progress. The facility is capable of simultaneously operating three 2.7-kW, 30-cm mercury ion thrusters and their power processing units. Each thruster is installed via a separate air lock so that it can be extended into the 7m x 10m main chamber without violating vacuum integrity. The thrusters exhaust into a 3m x 5m frozen mercury target. An array of cryopanels collects sputtered target material. Power processor units are tested in an adjacent 1.5m x 2m vacuum chamber or an accompanying forced convection enclosure. The thruster subsystems and the test facility are designed for automatic unattended operation, with thruster operation computer controlled. Test data are recorded by a central data collection system that scans 200 channels of data a second every two minutes. Results of the Systems Demonstration Test, a short shakedown test of 500 hours, and facility performance during the first year of testing are presented.
Plancton: an opportunistic distributed computing project based on Docker containers
NASA Astrophysics Data System (ADS)
Concas, Matteo; Berzano, Dario; Bagnasco, Stefano; Lusso, Stefano; Masera, Massimo; Puccio, Maximiliano; Vallero, Sara
2017-10-01
The computing power of most modern commodity computers is far from being fully exploited by standard usage patterns. In this work we describe the development and setup of a virtual computing cluster based on Docker containers used as worker nodes. The facility is based on Plancton: a lightweight fire-and-forget background service. Plancton spawns and controls a local pool of Docker containers on a host with free resources by constantly monitoring its CPU utilisation. It is designed to release the opportunistically allocated resources whenever another demanding task is run by the host user, according to configurable policies. This is attained by killing a number of running containers. One of the advantages of a thin virtualization layer such as Linux containers is that they can be started almost instantly upon request. We show how the fast start-up and disposal of containers enables us to implement an opportunistic cluster based on Plancton daemons without a central control node, where the spawned Docker containers behave as job pilots. Finally, we show how Plancton was configured to run up to 10 000 concurrent opportunistic jobs on the ALICE High-Level Trigger facility, giving a considerable management advantage over virtual machines.
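The opportunistic behaviour described above (grow the container pool when the host is idle, shrink it when a demanding task appears) can be sketched as a pure decision function that a daemon would evaluate on each monitoring tick. The thresholds and names here are illustrative assumptions, not Plancton's actual configuration defaults.

```python
def containers_to_run(cpu_utilisation, n_running, max_containers,
                      start_below=0.3, stop_above=0.8):
    """Return how many worker containers the daemon should keep alive,
    given the host's current CPU utilisation in [0, 1].

    Hypothetical policy: back off one container at a time when the
    host is busy, opportunistically add one when it is idle, and hold
    steady in between."""
    if cpu_utilisation > stop_above:    # host busy: release resources
        return max(0, n_running - 1)
    if cpu_utilisation < start_below:   # host idle: grow the pool
        return min(max_containers, n_running + 1)
    return n_running                    # in between: no change
```

A real daemon would act on the returned target by starting or killing Docker containers; keeping the policy as a side-effect-free function makes it easy to test and to swap for other configurable policies.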
Propulsion/flight control integration technology (PROFIT) software system definition
NASA Technical Reports Server (NTRS)
Carlin, C. M.; Hastings, W. J.
1978-01-01
The Propulsion Flight Control Integration Technology (PROFIT) program is designed to develop a flying testbed dedicated to controls research. The control software for PROFIT is defined. Maximum flexibility, needed for long term use of the flight facility, is achieved through a modular design. The Host program, processes inputs from the telemetry uplink, aircraft central computer, cockpit computer control and plant sensors to form an input data base for use by the control algorithms. The control algorithms, programmed as application modules, process the input data to generate an output data base. The Host program formats the data for output to the telemetry downlink, the cockpit computer control, and the control effectors. Two applications modules are defined - the bill of materials F-100 engine control and the bill of materials F-15 inlet control.
Computer-Aided Facilities Management Systems (CAFM).
ERIC Educational Resources Information Center
Cyros, Kreon L.
Computer-aided facilities management (CAFM) refers to a collection of software used with increasing frequency by facilities managers. The six major CAFM components are discussed with respect to their usefulness and popularity in facilities management applications: (1) computer-aided design; (2) computer-aided engineering; (3) decision support…
Centralized Fabric Management Using Puppet, Git, and GLPI
NASA Astrophysics Data System (ADS)
Smith, Jason A.; De Stefano, John S., Jr.; Fetzko, John; Hollowell, Christopher; Ito, Hironori; Karasawa, Mizuki; Pryor, James; Rao, Tejas; Strecker-Kellogg, William
2012-12-01
Managing the infrastructure of a large and complex data center can be extremely difficult without taking advantage of recent technological advances in administrative automation. Puppet is a seasoned open-source tool that is designed for enterprise class centralized configuration management. At the RHIC and ATLAS Computing Facility (RACF) at Brookhaven National Laboratory, we use Puppet along with Git, GLPI, and some custom scripts as part of our centralized configuration management system. In this paper, we discuss how we use these tools for centralized configuration management of our servers and services, change management requiring authorized approval of production changes, a complete version controlled history of all changes made, separation of production, testing and development systems using puppet environments, semi-automated server inventory using GLPI, and configuration change monitoring and reporting using the Puppet dashboard. We will also discuss scalability and performance results from using these tools on a 2,000+ node cluster and 400+ infrastructure servers with an administrative staff of approximately 25 full-time employees (FTEs).
NASA Astrophysics Data System (ADS)
Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.
2015-12-01
During recent years, several Grid computing centres chose virtualization as a better way to manage diverse use cases with self-consistent environments on the same bare infrastructure. The maturity of control interfaces (such as OpenNebula and OpenStack) opened the possibility to easily change the amount of resources assigned to each use case by simply turning virtual machines on and off. Some of those private clouds use, in production, copies of the Virtual Analysis Facility, a fully virtualized and self-contained batch analysis cluster capable of expanding and shrinking automatically upon need; however, resource starvation occurs frequently, as expansion has to compete with other virtual machines running long-lived batch jobs. Such batch nodes cannot relinquish their resources in a timely fashion: the more jobs they run, the longer it takes to drain them and shut off, and making one-job virtual machines introduces a non-negligible virtualization overhead. By improving several components of the Virtual Analysis Facility we have realized an experimental “Docked” Analysis Facility for ALICE, which leverages containers instead of virtual machines to provide performance and security isolation. We present the techniques we have used to address practical problems, such as software provisioning through CVMFS, as well as our considerations on the maturity of containers for High Performance Computing. As the abstraction layer is thinner, our Docked Analysis Facilities may feature more fine-grained sizing, down to single-job node containers; we show how this approach positively impacts automatic cluster resizing by deploying lightweight pilot containers instead of replacing central queue polls.
NASA Technical Reports Server (NTRS)
Montag, Bruce C.; Bishop, Alfred M.; Redfield, Joe B.
1989-01-01
The findings of a preliminary investigation by Southwest Research Institute (SwRI) into simulation host computer concepts are presented. The investigation is designed to aid NASA in evaluating simulation technologies for use in spaceflight training. The focus of the investigation is on the next generation of space simulation systems that will be utilized in training personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer system architecture for the Space Station Training Facility (SSTF) rather than a centralized mainframe-based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems are available today that offer real-time capabilities for time-critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for more computing power.
Redirecting Under-Utilised Computer Laboratories into Cluster Computing Facilities
ERIC Educational Resources Information Center
Atkinson, John S.; Spenneman, Dirk H. R.; Cornforth, David
2005-01-01
Purpose: To provide administrators at an Australian university with data on the feasibility of redirecting under-utilised computer laboratories facilities into a distributed high performance computing facility. Design/methodology/approach: The individual log-in records for each computer located in the computer laboratories at the university were…
Mean PB To Failure - Initial results from a long-term study of disk storage patterns at the RACF
NASA Astrophysics Data System (ADS)
Caramarcu, C.; Hollowell, C.; Rao, T.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, S. A.
2015-12-01
The RACF (RHIC-ATLAS Computing Facility) has operated a large, multi-purpose dedicated computing facility since the mid-1990s, serving a worldwide, geographically diverse scientific community that is a major contributor to various HEPN projects. A central component of the RACF is the Linux-based worker node cluster that is used for both computing and data storage purposes. It currently has nearly 50,000 computing cores and over 23 PB of storage capacity distributed over 12,000+ (non-SSD) disk drives. The majority of the 12,000+ disk drives provide a cost-effective solution for dCache/XRootD-managed storage, and a key concern is the reliability of this solution over the lifetime of the hardware, particularly as the number of disk drives and the storage capacity of individual drives grow. We report initial results of a long-term study to measure lifetime PB read/written to disk drives in the worker node cluster. We discuss the historical disk drive mortality rate and disk drive manufacturers' published MPTF (Mean PB to Failure) data, and how they are correlated to our results. The results help the RACF understand the productivity and reliability of its storage solutions and have implications for other highly available storage systems (NFS, GPFS, CVMFS, etc.) with large I/O requirements.
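Treating Mean PB to Failure as the data-written analogue of MTBF, a minimal fleet-level estimator might look like the sketch below. This is an assumed definition for illustration, not necessarily the exact methodology of the RACF study or of manufacturers' published figures.

```python
def mean_pb_to_failure(pb_written, failed):
    """Estimate Mean PB to Failure for a drive fleet, by analogy with
    MTBF: total petabytes written across all drives divided by the
    number of drive failures observed over the same period.

    pb_written: PB written per drive; failed: parallel list of bools
    marking which drives failed. Returns inf if no drive failed."""
    n_failures = sum(failed)
    if n_failures == 0:
        return float('inf')  # no failures observed yet
    return sum(pb_written) / n_failures

# Example: three drives with 1, 2, and 3 PB written; two failed.
mptf = mean_pb_to_failure([1.0, 2.0, 3.0], [False, True, True])
```

Comparing such a fleet estimate against manufacturers' MPTF claims is one simple way to check whether observed mortality tracks the published reliability data.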
Multi-modality molecular imaging: pre-clinical laboratory configuration
NASA Astrophysics Data System (ADS)
Wu, Yanjun; Wellen, Jeremy W.; Sarkar, Susanta K.
2006-02-01
In recent years, the prevalence of in vivo molecular imaging applications has rapidly increased. Here we report on the construction of a multi-modality imaging facility in a pharmaceutical setting that is expected to further advance existing capabilities for in vivo imaging of drug distribution and the interaction with their target. The imaging instrumentation in our facility includes a microPET scanner, a four-wavelength time-domain optical imaging scanner, a 9.4T/30cm MRI scanner and a SPECT/X-ray CT scanner. An electronics shop and a computer room dedicated to image analysis are additional features of the facility. The layout of the facility was designed with a central animal preparation room surrounded by separate laboratory rooms for each of the major imaging modalities to accommodate the work-flow of simultaneous in vivo imaging experiments. This report will focus on the design of and anticipated applications for our microPET and optical imaging laboratory spaces. Additionally, we will discuss efforts to maximize the daily throughput of animal scans through development of efficient experimental work-flows and the use of multiple animals in a single scanning session.
Issues central to a useful image understanding environment
NASA Astrophysics Data System (ADS)
Beveridge, J. Ross; Draper, Bruce A.; Hanson, Allen R.; Riseman, Edward M.
1992-04-01
A recent DARPA initiative has sparked interest in software environments for computer vision. The goal is a single environment to support both basic research and technology transfer. This paper lays out six fundamental attributes such a system must possess: (1) support for both C and Lisp, (2) extensibility, (3) data sharing, (4) data query facilities tailored to vision, (5) graphics, and (6) code sharing. The first three attributes fundamentally constrain the system design. Support for both C and Lisp demands some form of database or data-store for passing data between languages. Extensibility demands that system support facilities, such as spatial retrieval of data, be readily extended to new user-defined datatypes. Finally, data sharing demands that data saved by one user, including data of a user-defined type, must be readable by another user.
ERIC Educational Resources Information Center
New York State Education Dept., Albany.
Planning outdoor physical education facilities for the central school serving pupils from kindergarten through high school should take into account the needs and interests of all pupils during the school year and should provide for recreation needs during vacation periods. Provision for recreational facilities for adults should also be made. The…
Unique life sciences research facilities at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Mulenburg, G. M.; Vasques, M.; Caldwell, W. F.; Tucker, J.
1994-01-01
The Life Science Division at NASA's Ames Research Center has a suite of specialized facilities that enable scientists to study the effects of gravity on living systems. This paper describes some of these facilities and their use in research. Seven centrifuges, each with its own unique abilities, allow testing of a variety of parameters on test subjects ranging from single cells through hardware to humans. The Vestibular Research Facility allows the study of both centrifugation and linear acceleration on animals and humans. The Biocomputation Center uses computers for 3D reconstruction of physiological systems, and interactive research tools for virtual reality modeling. Psychophysiological, cardiovascular, exercise physiology, and biomechanical studies are conducted in the 12-bed Human Research Facility, and samples are analyzed in the certified Central Clinical Laboratory and other laboratories at Ames. Human bedrest, water immersion and lower body negative pressure equipment are also available to study physiological changes associated with weightlessness. These and other weightlessness models are used in specialized laboratories for the study of basic physiological mechanisms, metabolism and cell biology. Visual-motor performance, perception, and adaptation are studied using ground-based models as well as short term weightlessness experiments (parabolic flights). The unique combination of Life Science research facilities, laboratories, and equipment at Ames Research Center is described in detail in relation to its research contributions.
ALMA test interferometer control system: past experiences and future developments
NASA Astrophysics Data System (ADS)
Marson, Ralph G.; Pokorny, Martin; Kern, Jeff; Stauffer, Fritz; Perrigouard, Alain; Gustafsson, Birger; Ramey, Ken
2004-09-01
The Atacama Large Millimeter Array (ALMA) will, when it is completed in 2012, be the world's largest millimeter & sub-millimeter radio telescope. It will consist of 64 antennas, each one 12 meters in diameter, connected as an interferometer. The ALMA Test Interferometer Control System (TICS) was developed as a prototype for the ALMA control system. Its initial task was to provide sufficient functionality for the evaluation of the prototype antennas. The main antenna evaluation tasks include surface measurements via holography and pointing accuracy, measured at both optical and millimeter wavelengths. In this paper we will present the design of TICS, which is a distributed computing environment. In the test facility there are four computers: three real-time computers running VxWorks (one on each antenna and a central one) and a master computer running Linux. These computers communicate via Ethernet, and each of the real-time computers is connected to the hardware devices via an extension of the CAN bus. We will also discuss our experience with this system and outline changes we are making in light of our experiences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. L. Sisterson
2010-01-12
Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real-time. Raw and processed data are then sent approximately daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the first quarter of FY 2010 for the North Slope Alaska (NSA) locale is 1,987.20 hours (0.90 x 2,208); for the Southern Great Plains (SGP) site is 2,097.60 hours (0.95 x 2,208); and for the Tropical Western Pacific (TWP) locale is 1,876.8 hours (0.85 x 2,208). The ARM Mobile Facility (AMF) deployment in Graciosa Island, the Azores, Portugal, continues; its OPSMAX time this quarter is 2,097.60 hours (0.95 x 2,208). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive are the result of downtime (scheduled or unplanned) of the individual instruments.
Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 92 days for this quarter) the instruments were operating this quarter. The Site Access Request System is a web-based database used to track visitors to the fixed and mobile sites, all of which have facilities that can be visited. The NSA locale has the Barrow and Atqasuk sites. The SGP locale has historically had a central facility, 23 extended facilities, 4 boundary facilities, and 3 intermediate facilities. Beginning this quarter, the SGP began a transition to a smaller footprint (150 km x 150 km) by rearranging the original and new instrumentation made available through the American Recovery and Reinvestment Act (ARRA). The central facility and 4 extended facilities will remain, but there will be up to 16 new surface characterization facilities, 4 radar facilities, and 3 profiler facilities sited in the smaller domain. This new configuration will provide observations at scales more appropriate to current and future climate models. The TWP locale has the Manus, Nauru, and Darwin sites. These sites will also have expanded measurement capabilities with the addition of new instrumentation made available through ARRA funds. It is anticipated that the new instrumentation at all the fixed sites will be in place within the next 12 months. The AMF continues its 20-month deployment in Graciosa Island, Azores, Portugal, that started May 1, 2009. The AMF will also have additional observational capabilities within the next 12 months. Users can participate in field experiments at the sites and mobile facility, or they can participate remotely. Therefore, a variety of mechanisms are provided to users to access site information. Users who have immediate (real-time) needs for data access can request a research account on the local site data systems.
This access is particularly useful to users for quick decisions in executing time-dependent activities associated with field campaigns at the fixed sites and mobile facility locations. The eight computers for the research accounts are located at the Barrow and Atqasuk sites; the SGP central facility; the TWP Manus, Nauru, and Darwin sites; the AMF; and the DMF at PNNL. However, users are warned that the data provided at the time of collection have not been fully screened for quality and therefore are not considered to be official ACRF data. Hence, these accounts are considered part of the facility's field campaign activities, and users are tracked. In addition, users who visit sites can connect their computer or instrument to an ACRF site data system network, which requires an on-site device account. Remote (off-site) users can also have remote access to any ACRF instrument or computer system at any ACRF site, which requires an off-site device account. These accounts are also managed and tracked.
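The DOE time-based operating metrics defined in the abstract above (ACTUAL, OPSMAX, and VARIANCE) reduce to simple arithmetic. The following is a minimal sketch: the quarterly hour total (24 h × 92 days = 2,208 hours) and the per-site uptime goals are taken from the report, while the function names are illustrative.

```python
# Sketch of the DOE time-based operating metrics described above.
# Function names are illustrative; the constants come from the report.

HOURS_IN_QUARTER = 24 * 92  # 2,208 hours in the 92-day quarter

def opsmax(uptime_goal: float, hours: int = HOURS_IN_QUARTER) -> float:
    """Estimated maximum operation, accounting for planned downtime."""
    return uptime_goal * hours

def variance(actual: float, opsmax_hours: float) -> float:
    """Unplanned-downtime fraction: 1 - (ACTUAL / OPSMAX)."""
    return 1 - actual / opsmax_hours

# Per-site uptime goals for the first quarter of FY 2010:
sgp_opsmax = opsmax(0.95)  # Southern Great Plains    -> 2,097.60 hours
nsa_opsmax = opsmax(0.90)  # North Slope Alaska       -> 1,987.20 hours
twp_opsmax = opsmax(0.85)  # Tropical Western Pacific -> 1,876.80 hours
```

A site that actually operated 1,887.84 hours against the SGP goal would report a VARIANCE of 0.10, i.e., 10% unplanned downtime.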
Code of Federal Regulations, 2010 CFR
2010-01-01
... ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.2 Definitions. As used in this part: (a) Agent means an Agent... loan means an advance of funds by an Agent to a member natural person credit union to meet liquidity... or Central Liquidity Facility means the National Credit Union Administration Central Liquidity...
Decentralized School vs. Centralized School. Investigation No. 3.
ERIC Educational Resources Information Center
Paseur, C. Herbert
A report is presented of a comparative investigation of a decentralized and a centralized school facility. Comparative data are provided regarding costs of the facilities, amount of educational area provided by the facilities, and types of educational areas provided. Evaluative comments are included regarding cost savings versus educational…
Code of Federal Regulations, 2011 CFR
2011-10-01
..., Central Office (except Office of Construction and Facilities Management), the National Acquisition Center... facilities, Central Office (except Office of Construction and Facilities Management), the National... takes exception to the accord and satisfaction language VA specifies, assignment of claims, changes to...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael G. Lewis
2012-02-01
This report describes conditions, as required by the state of Idaho Wastewater Reuse Permit (LA-000141-03), for the wastewater land application site at Idaho National Laboratory Site's Central Facilities Area Sewage Treatment Plant from November 1, 2010, through October 31, 2011. The report contains the following information: (1) Site description; (2) Facility and system description; (3) Permit required monitoring data and loading rates; (4) Status of special compliance conditions and activities; and (5) Discussion of the facility's environmental impacts. During the 2011 permit year, approximately 1.22 million gallons of treated wastewater was land-applied to the irrigation area at Central Facilities Area Sewage Treatment Plant.
Target recognition based on the moment functions of radar signatures
NASA Astrophysics Data System (ADS)
Kim, Kyung-Tae; Kim, Hyo-Tae
2002-03-01
In this paper, we present the results of target recognition research based on the moment functions of various radar signatures, such as time-frequency signatures, range profiles, and scattering centers. The proposed approach utilizes geometrical moments or central moments of the obtained radar signatures. In particular, we derived exact and closed-form expressions for the geometrical moments of the adaptive Gaussian representation (AGR), which is one of the adaptive joint time-frequency techniques, and also computed the central moments of range profiles and one-dimensional (1-D) scattering centers on a target, which are obtained by various super-resolution techniques. The obtained moment functions are further processed to provide small-dimensional and redundancy-free feature vectors, and classified via a neural network approach or a Bayes classifier. The performance of the proposed technique is demonstrated using a simulated radar cross section (RCS) data set or a measured RCS data set of various scaled aircraft models, obtained at the Pohang University of Science and Technology (POSTECH) compact range facility. Results show that the techniques in this paper can not only provide reliable classification accuracy, but also save computational resources.
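The central-moment features described above can be illustrated with a short sketch. This is a generic k-th central moment of a 1-D range profile, treating the normalized magnitude as a discrete distribution over range bins; it is not the authors' exact feature-extraction pipeline, and the sample profile is made up for illustration.

```python
import numpy as np

def central_moment(profile: np.ndarray, k: int) -> float:
    """k-th central moment of a 1-D radar range profile.

    The normalized magnitude is treated as a discrete probability
    distribution over range bins. A generic illustration, not the
    paper's exact feature extraction.
    """
    bins = np.arange(len(profile))
    p = np.abs(profile) / np.abs(profile).sum()  # normalize to unit mass
    mean = np.sum(bins * p)                      # first geometrical moment
    return float(np.sum(((bins - mean) ** k) * p))

# A small, redundancy-free feature vector of low-order central moments:
profile = np.array([0.1, 0.5, 2.0, 0.7, 0.2])  # hypothetical range profile
features = [central_moment(profile, k) for k in (2, 3, 4)]
```

Because central moments are computed about the profile's own mean, they are invariant to shifts of the target along the range axis, which is one reason such moments make convenient classification features.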
Symplectic multi-particle tracking on GPUs
NASA Astrophysics Data System (ADS)
Liu, Zhicong; Qiang, Ji
2018-05-01
A symplectic multi-particle tracking model is implemented on Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) language. The symplectic tracking model can preserve phase space structure and reduce non-physical effects in long-term simulation, which is important for beam property evaluation in particle accelerators. Though this model is computationally expensive, it is very suitable for parallelization and can be accelerated significantly by using GPUs. In this paper, we optimized the implementation of the symplectic tracking model on both a single GPU and multiple GPUs. Using a single GPU processor, the code achieves a factor of 2-10 speedup for a range of problem sizes compared with the time on a single state-of-the-art Central Processing Unit (CPU) node with similar power consumption and semiconductor technology. It also shows good scalability on a multi-GPU cluster at the Oak Ridge Leadership Computing Facility. In an application to beam dynamics simulation, the GPU implementation reduces total computing time by more than a factor of two in comparison to the CPU implementation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mike Lewis
2013-02-01
This report describes conditions, as required by the state of Idaho Wastewater Reuse Permit (#LA-000141-03), for the wastewater land application site at Idaho National Laboratory Site’s Central Facilities Area Sewage Treatment Plant from November 1, 2011, through October 31, 2012. The report contains the following information: • Site description • Facility and system description • Permit required monitoring data and loading rates • Status of compliance conditions and activities • Discussion of the facility’s environmental impacts. During the 2012 permit year, no wastewater was land-applied to the irrigation area of the Central Facilities Area Sewage Treatment Plant.
Apollo experience report: Real-time auxiliary computing facility development
NASA Technical Reports Server (NTRS)
Allday, C. E.
1972-01-01
The Apollo real time auxiliary computing function and facility were an extension of the facility used during the Gemini Program. The facility was expanded to include support of all areas of flight control, and computer programs were developed for mission and mission-simulation support. The scope of the function was expanded to include prime mission support functions in addition to engineering evaluations, and the facility became a mandatory mission support facility. The facility functioned as a full scale mission support activity until after the first manned lunar landing mission. After the Apollo 11 mission, the function and facility gradually reverted to a nonmandatory, offline, on-call operation because the real time program flexibility was increased and verified sufficiently to eliminate the need for redundant computations. The evaluation of the facility and function and recommendations for future programs are discussed in this report.
ERIC Educational Resources Information Center
Cornforth, David; Atkinson, John; Spennemann, Dirk H. R.
2006-01-01
Purpose: Many researchers require access to computer facilities beyond those offered by desktop workstations. Traditionally, these are offered either through partnerships, to share the cost of supercomputing facilities, or through purpose-built cluster facilities. However, funds are not always available to satisfy either of these options, and…
12 CFR 725.6 - Termination of membership.
Code of Federal Regulations, 2010 CFR
2010-01-01
... CREDIT UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.6 Termination of membership. (a) A member of... member has failed to comply with any provision of the National Credit Union Central Liquidity Facility...
Three-axis electron-beam test facility
NASA Technical Reports Server (NTRS)
Dayton, J. A., Jr.; Ebihara, B. T.
1981-01-01
An electron beam test facility, which consists of a precision multidimensional manipulator built into an ultra-high-vacuum bell jar, was designed, fabricated, and operated at Lewis Research Center. The position within the bell jar of a Faraday cup, which samples current in the electron beam under test, is controlled by the manipulator. Three orthogonal axes of motion are controlled by stepping motors driven by digital indexers, and the positions are displayed on electronic totalizers. In the transverse directions, the limits of travel are approximately ±2.5 cm from the center with a precision of 2.54 micron (0.0001 in.); in the axial direction, approximately 15.0 cm of travel are permitted with an accuracy of 12.7 micron (0.0005 in.). In addition, two manually operated motions are provided: the pitch and yaw of the Faraday cup with respect to the electron beam can be adjusted to within a few degrees. The current is sensed by pulse transformers and the data are processed by a dual-channel boxcar averager with a digital output. The beam tester can be operated manually or it can be programmed for automated operation. In the automated mode, the beam tester is controlled by a microcomputer (installed at the test site) which communicates with a minicomputer at the central computing facility. The data are recorded and later processed by computer to obtain the desired graphical presentations.
Development and applications of nondestructive evaluation at Marshall Space Flight Center
NASA Technical Reports Server (NTRS)
Whitaker, Ann F.
1990-01-01
A brief description of facility design and equipment, facility usage, and typical investigations are presented for the following: Surface Inspection Facility; Advanced Computer Tomography Inspection Station (ACTIS); NDE Data Evaluation Facility; Thermographic Test Development Facility; Radiographic Test Facility; Realtime Radiographic Test Facility; Eddy Current Research Facility; Acoustic Emission Monitoring System; Advanced Ultrasonic Test Station (AUTS); Ultrasonic Test Facility; and Computer Controlled Scanning (CONSCAN) System.
NASA Technical Reports Server (NTRS)
Mogilevsky, M.
1973-01-01
The Category A computer systems at KSC (Al and A2) which perform scientific and business/administrative operations are described. This data division is responsible for scientific requirements supporting Saturn, Atlas/Centaur, Titan/Centaur, Titan III, and Delta vehicles, and includes realtime functions, Apollo-Soyuz Test Project (ASTP), and the Space Shuttle. The work is performed chiefly on the GEL-635 (Al) system located in the Central Instrumentation Facility (CIF). The Al system can perform computations and process data in three modes: (1) real-time critical mode; (2) real-time batch mode; and (3) batch mode. The Division's IBM-360/50 (A2) system, also at the CIF, performs business/administrative data processing such as personnel, procurement, reliability, financial management and payroll, real-time inventory management, GSE accounting, preventive maintenance, and integrated launch vehicle modification status.
Atmospheric Radiation Measurement Program facilities newsletter, July 2000.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisterson, D. L.; Holdridge, D. J., ed.
2000-08-03
For improved safety in and around the ARM SGP CART site, the ARM Program recently purchased and installed an aircraft detection radar system at the central facility near Lamont, Oklahoma. The new system will enhance safety measures already in place at the central facility. The SGP CART site, especially the central facility, houses several instruments employing laser technology. These instruments are designed to be eye-safe and are not a hazard to personnel at the site or pilots of low-flying aircraft over the site. However, some of the specialized equipment brought to the central facility by visiting scientists during scheduled intensive observation periods (IOPs) might use higher-power laser beams that point skyward to make measurements of clouds or aerosols in the atmosphere. If these beams were to strike the eye of a person in an aircraft flying above the instrument, damage to the person's eyesight could result. During IOPs, CART site personnel have obtained Federal Aviation Administration (FAA) approval to temporarily close the airspace directly over the central facility and keep aircraft from flying into the path of the instrument's laser beam. Information about the blocked airspace is easily transmitted to commercial aircraft, but that does not guarantee that the airspace remains completely plane-free. For this reason, during IOPs in which non-eye-safe lasers were in use in the past, ARM technicians watched for low-flying aircraft in and around the airspace over the central facility. If the technicians spotted such an aircraft, they would manually trigger a safety shutter to block the laser beam's path skyward until the plane had cleared the area.
Integrated waste management system costs in a MPC system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Supko, E.M.
1995-12-01
The impact on system costs of including a centralized interim storage facility as part of an integrated waste management system based on multi-purpose canister (MPC) technology was assessed in analyses by Energy Resources International, Inc. A system cost savings of $1 to $2 billion occurs if the Department of Energy begins spent fuel acceptance in 1998 at a centralized interim storage facility. That is, the savings associated with decreased utility spent fuel management costs will be greater than the cost of constructing and operating a centralized interim storage facility.
Applications of Modeling and Simulation for Flight Hardware Processing at Kennedy Space Center
NASA Technical Reports Server (NTRS)
Marshall, Jennifer L.
2010-01-01
The Boeing Design Visualization Group (DVG) is responsible for the creation of highly-detailed representations of both on-site facilities and flight hardware using computer-aided design (CAD) software, with a focus on the ground support equipment (GSE) used to process and prepare the hardware for space. Throughout my ten weeks at this center, I have had the opportunity to work on several projects: the modification of the Multi-Payload Processing Facility (MPPF) High Bay, weekly mapping of the Space Station Processing Facility (SSPF) floor layout, kinematics applications for the Orion Command Module (CM) hatches, and the design modification of the Ares I Upper Stage hatch for maintenance purposes. The main goal of each of these projects was to generate an authentic simulation or representation using DELMIA V5 software. This allowed for evaluation of facility layouts, support equipment placement, and greater process understanding once it was used to demonstrate future processes to customers and other partners. As such, I have had the opportunity to contribute to a skilled team working on diverse projects with a central goal of providing essential planning resources for future center operations.
FACILITY 847, DETAIL OF A CENTRAL STAIRWAY FROM COURTYARD, QUADRANGLE ...
FACILITY 847, DETAIL OF A CENTRAL STAIRWAY FROM COURTYARD, QUADRANGLE J, VIEW FACING NORTHEAST. - Schofield Barracks Military Reservation, Quadrangles I & J Barracks Type, Between Wright-Smith & Capron Avenues near Williston Avenue, Wahiawa, Honolulu County, HI
ERIC Educational Resources Information Center
Smith, Ernest K.; And Others
The system control facilities in broadband communication systems are discussed in this report. These facilities consist of head-ends and central processors. The first section summarizes technical problems and needs, and the second offers a cursory overview of systems, along with an incidental mention of processors. Section 3 looks at the question…
Executive control systems in the engineering design environment
NASA Technical Reports Server (NTRS)
Hurst, P. W.; Pratt, T. W.
1985-01-01
Executive Control Systems (ECSs) are software structures for the unification of various engineering design application programs into comprehensive systems with a central user interface (uniform access) method and a data management facility. Attention is presently given to the most significant determinations of a research program conducted for 24 ECSs, used in government and industry engineering design environments to integrate CAD/CAE applications programs. Characterizations are given for the systems' major architectural components and the alternative design approaches considered in their development. Attention is given to ECS development prospects in the areas of interdisciplinary usage, standardization, knowledge utilization, and computer science technology transfer.
Outline of Toshiba Business Information Center
NASA Astrophysics Data System (ADS)
Nagata, Yoshihiro
Toshiba Business Information Center gathers and stores in-house and external business information used in common within the Toshiba Corp., and provides companywide circulation, reference, and other services. The Center established a centralized information management system by employing decentralized computers, electronic file apparatus (30 cm laser disc), and other office automation equipment. Online retrieval through the LAN is available to search the stored documents, and increasing copying requests are processed by electronic file. This paper describes the purpose of establishment of the Center, the facilities, the management scheme, the systematization of the files, and the present situation and plans of each information service.
Space station dynamics, attitude control and momentum management
NASA Technical Reports Server (NTRS)
Sunkel, John W.; Singh, Ramen P.; Vengopal, Ravi
1989-01-01
The Space Station Attitude Control System software test-bed provides a rigorous environment for the design, development and functional verification of GN and C algorithms and software. The approach taken for the simulation of the vehicle dynamics and environmental models using a computationally efficient algorithm is discussed. The simulation includes capabilities for docking/berthing dynamics, prescribed motion dynamics associated with the Mobile Remote Manipulator System (MRMS) and microgravity disturbances. The vehicle dynamics module interfaces with the test-bed through the central Communicator facility which is in turn driven by the Station Control Simulator (SCS) Executive. The Communicator addresses issues such as the interface between the discrete flight software and the continuous vehicle dynamics, and multi-programming aspects such as the complex flow of control in real-time programs. Combined with the flight software and redundancy management modules, the facility provides a flexible, user-oriented simulation platform.
FACILITY 847, DETAIL OF A CENTRAL STAIRWELL BETWEEN SECOND AND ...
FACILITY 847, DETAIL OF A CENTRAL STAIRWELL BETWEEN SECOND AND THIRD FLOORS, QUADRANGLE J, VIEW FACING SOUTHEAST. - Schofield Barracks Military Reservation, Quadrangles I & J Barracks Type, Between Wright-Smith & Capron Avenues near Williston Avenue, Wahiawa, Honolulu County, HI
Academic Computing Facilities and Services in Higher Education--A Survey.
ERIC Educational Resources Information Center
Warlick, Charles H.
1986-01-01
Presents statistics about academic computing facilities based on data collected over the past six years from 1,753 institutions in the United States, Canada, Mexico, and Puerto Rico for the "Directory of Computing Facilities in Higher Education." Organizational, functional, and financial characteristics are examined as well as types of…
The grand challenge of managing the petascale facility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aiken, R. J.; Mathematics and Computer Science
2007-02-28
This report is the result of a study of networks and how they may need to evolve to support petascale leadership computing and science. As Dr. Ray Orbach, director of the Department of Energy's Office of Science, says in the spring 2006 issue of SciDAC Review, 'One remarkable example of growth in unexpected directions has been in high-end computation'. In the same article Dr. Michael Strayer states, 'Moore's law suggests that before the end of the next cycle of SciDAC, we shall see petaflop computers'. Given the Office of Science's strong leadership and support for petascale computing and facilities, we should expect to see petaflop computers in operation in support of science before the end of the decade, and DOE/SC Advanced Scientific Computing Research programs are focused on making this a reality. This study took its lead from this strong focus on petascale computing and the networks required to support such facilities, but it grew to include almost all aspects of the DOE/SC petascale computational and experimental science facilities, all of which will face daunting challenges in managing and analyzing the voluminous amounts of data expected. In addition, trends indicate the increased coupling of unique experimental facilities with computational facilities, along with the integration of multidisciplinary datasets and high-end computing with data-intensive computing; and we can expect these trends to continue at the petascale level and beyond. Coupled with recent technology trends, they clearly indicate the need for including capability petascale storage, networks, and experiments, as well as collaboration tools and programming environments, as integral components of the Office of Science's petascale capability metafacility. The objective of this report is to recommend a new cross-cutting program to support the management of petascale science and infrastructure.
The appendices of the report document current and projected DOE computation facilities, science trends, and technology trends, whose combined impact can affect the manageability and stewardship of DOE's petascale facilities. This report is not meant to be all-inclusive. Rather, the facilities, science projects, and research topics presented are to be considered examples to clarify a point.
The minitrack tracking function description, volume 1
NASA Technical Reports Server (NTRS)
Englar, T. S., Jr.; Mango, S. A.; Roettcher, C. A.; Watters, D. L.
1973-01-01
The treatment of tracking data by the Minitrack system is described from the transmission of the nominal 136-MHz radio beacon energy from a satellite and the reception of this signal by the interferometer network through the ultimate derivation of the direction cosines (the angular coordinates of the vector from the tracking station to the spacecraft) as a function of time. Descriptions of some of the lesser-known functions operating on the system, such as the computer preprocessing program, are included. A large part of the report is devoted to the preprocessor, which provides for the data compression, smoothing, calibration correction, and ambiguity resolution of the raw interferometer phase tracking measurements teletyped from each of the worldwide Minitrack tracking stations to the central computer facility at Goddard Space Flight Center. An extensive bibliography of Minitrack hardware and theory is presented.
Specialized computer architectures for computational aerodynamics
NASA Technical Reports Server (NTRS)
Stevenson, D. K.
1978-01-01
In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relatively high cost of performing these computations on commercially available general purpose computers, a cost that is high in terms of both dollar expenditure and elapsed time. Today's computing technology will support a program designed to create specialized computing facilities to be dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.
ERIC Educational Resources Information Center
Siu, Kin Wai Michael; Lam, Mei Seung
2012-01-01
Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…
Public acceptance for centralized storage and repositories of low-level waste session (Panel)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lutz, H.R.
1995-12-31
Participants from various parts of the world will provide a summary of their particular country's approach to low-level waste management and the cost of public acceptance for low-level waste management facilities. Participants will discuss the number, geographic location, and type of low-level waste repositories and centralized storage facilities located in their countries. Each will discuss the amount, distribution, and duration of funds to gain public acceptance of these facilities. Participants will provide an estimated $/meter for centralized storage facilities and repositories. The panel will include a brief discussion about the ethical aspects of public acceptance costs, approaches for negotiating acceptance, and lessons learned in each country. The audience is invited to participate in the discussion.
Evolution in a centralized transfusion service.
AuBuchon, James P; Linauts, Sandra; Vaughan, Mimi; Wagner, Jeffrey; Delaney, Meghan; Nester, Theresa
2011-12-01
The metropolitan Seattle area has utilized a centralized transfusion service model throughout the modern era of blood banking. This approach has used four laboratories to serve over 20 hospitals and clinics, providing greater capabilities for all at a lower consumption of resources than if each depended on its own laboratory and staff for these functions. In addition, this centralized model has facilitated wider use of the medical capabilities of the blood center's physicians, and a county-wide network of transfusion safety officers is now being developed to increase the impact of the blood center's transfusion expertise at the patient's bedside. Medical expectations and traffic have led the blood center to evolve the centralized model to include on-site laboratories at facilities with complex transfusion requirements (e.g., a children's hospital) and to implement in all the others a system of remote allocation. This new capability places a refrigerator stocked with uncrossmatched units in the hospital but retains control over the dispensing of these through the blood center's computer system; the correct unit can be electronically cross-matched and released on demand, obviating the need for transportation to the hospital and thus speeding transfusion. This centralized transfusion model has withstood the test of time and continues to evolve to meet new situations and ensure optimal patient care. © 2011 American Association of Blood Banks.
CENTRAL FOOD STORE FACILITIES FOR COLLEGES AND UNIVERSITIES.
ERIC Educational Resources Information Center
BLOOMFIELD, BYRON C.
INSPECTION OF A NUMBER OF INSTALLATIONS WAS ORIENTED TOWARD ARCHITECTURAL AND PLANNING QUESTIONS INVOLVING ECONOMICS AND SERVICES OF CENTRAL FOOD STORE FACILITIES. COMMENCING WITH THE PURCHASING PHILOSOPHY WHICH OVERVIEWS THE ORGANIZATION OF FOODS PURCHASING, SELECTION OF PERSONNEL, SPECIFICATIONS FOR PURCHASING, TECHNIQUES FOR PURCHASING, AND…
The Facility Registry System (FRS) is a centrally managed database that identifies facilities, sites or places subject to environmental regulations or of environmental interest. FRS creates high-quality, accurate, and authoritative facility identification records through rigorous...
Status report of the end-to-end ASKAP software system: towards early science operations
NASA Astrophysics Data System (ADS)
Guzman, Juan Carlos; Chapman, Jessica; Marquarding, Malte; Whiting, Matthew
2016-08-01
The Australian SKA Pathfinder (ASKAP) is a novel centimetre radio synthesis telescope currently in the commissioning phase and located in the midwest region of Western Australia. It comprises 36 x 12 m diameter reflector antennas, each equipped with state-of-the-art, award-winning Phased Array Feed (PAF) technology. The PAFs provide a wide, 30 square degree field of view by forming up to 36 separate dual-polarisation beams at once. This results in a high data rate: 70 TB of correlated visibilities in an 8-hour observation, requiring custom-written, high-performance software running in dedicated High Performance Computing (HPC) facilities. The first six antennas equipped with first-generation PAF technology (Mark I), named the Boolardy Engineering Test Array (BETA), have been in use since 2014 as a platform to test PAF calibration and imaging techniques, and have produced notable science results along the way. Commissioning of ASKAP Array Release 1, the first six antennas with second-generation PAFs (Mark II), is currently under way. An integral part of the instrument is the Central Processor platform hosted at the Pawsey Supercomputing Centre in Perth, which executes custom-written software pipelines designed specifically to meet the ASKAP imaging requirements of wide field of view and high dynamic range. There are three key hardware components of the Central Processor: the ingest nodes (a 16-node cluster), the fast temporary storage (a 1 PB Lustre file system), and the processing supercomputer (a 200 TFlop system). This HPC platform is managed and supported by the Pawsey support team. Due to the limited amount of data generated by BETA and the first ASKAP Array Release, the Central Processor platform has been running in a more "traditional", user-interactive mode.
But this is about to change: integration and verification of the online ingest pipeline, required to support the full 300 MHz bandwidth for Array Release 1, starts in early 2016, followed by the deployment of the real-time data processing components. In addition to the Central Processor, the first production release of the CSIRO ASKAP Science Data Archive (CASDA) has also been deployed in one of the Pawsey Supercomputing Centre facilities and is integrated into the end-to-end ASKAP data flow system. This paper describes the current status of the end-to-end data flow software system, from preparing observations to data acquisition, processing, and archiving, and the challenges of integrating an HPC facility as a key part of the instrument. It also shares some lessons learned since the start of integration activities and the challenges ahead in preparation for the start of the Early Science program.
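The quoted figures (70 TB per 8-hour observation, a 1 PB Lustre buffer) imply the sustained ingest rate the pipeline must handle. A small back-of-the-envelope sketch, using only the numbers given in the abstract:

```python
def sustained_rate_gbps(volume_tb: float, hours: float) -> float:
    """Average ingest rate in GB/s implied by an observation's data volume."""
    return volume_tb * 1e12 / (hours * 3600) / 1e9

rate = sustained_rate_gbps(70, 8)                 # ~2.43 GB/s sustained
days_to_fill_1pb = 1e15 / (rate * 1e9) / 86400    # ~4.8 days of continuous observing
```

At roughly 2.4 GB/s, back-to-back observations would fill the 1 PB temporary store in under five days, which is why near-real-time processing rather than user-interactive batch work becomes necessary at full bandwidth.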
Hagos, Goshu; Tura, Gurmesa; Kahsay, Gizienesh; Haile, Kebede; Grum, Teklit; Araya, Tsige
2018-06-05
Abortion remains among the leading causes of maternal death worldwide. Post-abortion contraception is highly effective in preventing unintended pregnancy and repeat abortion if provided before women leave the health facility. However, the status of post-abortion family planning (PAFP) utilization and its contributing factors are not well studied in the Tigray region, so we conducted a study of family planning utilization and associated factors among women receiving abortion services. A facility-based cross-sectional study was conducted among women receiving abortion services in the central zone of Tigray from December 2015 to February 2016, with a total sample size of 416. Women who came for abortion services were selected using a systematic random sampling technique. The data were collected using a pre-tested, interviewer-administered questionnaire, coded, entered into Epi Info 7, and exported to SPSS for analysis. Descriptive statistics such as frequencies and means were computed to display the results. Both bivariable and multivariable logistic regression were used in the analysis. Variables statistically significant at p < 0.05 in the bivariable analysis were entered into multivariable logistic regression, and variables significant at p < 0.05 in the multivariable analysis were declared significantly associated factors. A total of 409 abortion clients were interviewed, a response rate of 98.3%. The majority, 290 (70.9%), of study participants used contraceptives after abortion. Type of health facility, the decision maker on the timing of having a child, knowledge that pregnancy can happen soon after abortion, and husband's opposition to contraceptives were significantly associated with post-abortion family planning utilization.
About one-third of women failed to receive a contraceptive before leaving the facility. Private facilities should strengthen contraceptive provision in post-abortion care. Health providers should counsel women on the timing of fertility return following abortion before they leave the facility after receiving abortion care. Women's empowerment, through enhancing community awareness of women's own decision making in family planning utilization and involving the partner, should be strengthened.
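The headline proportions in this record can be checked directly from the reported counts (416 sampled, 409 interviewed, 290 contraceptive users):

```python
interviewed, sampled, users = 409, 416, 290
response_rate = round(100 * interviewed / sampled, 1)   # matches the reported 98.3%
utilization = round(100 * users / interviewed, 1)       # matches the reported 70.9%
```

The complement of the utilization figure (about 29%) is the "about one-third" of women the conclusion says left without a contraceptive.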
High-Performance Computing Data Center | Energy Systems Integration Facility | NREL
The Energy Systems Integration Facility's High-Performance Computing Data Center is home to Peregrine, the largest high-performance computing system in the world exclusively dedicated to advancing…
Multiphasic Health Testing in the Clinic Setting
LaDou, Joseph
1971-01-01
The economy of automated multiphasic health testing (AMHT) activities patterned after the high-volume Kaiser program can be realized in low-volume settings. AMHT units have been operated at daily volumes of 20 patients in three separate clinical environments. These programs have displayed economics entirely compatible with cost figures published by the established high-volume centers. This experience, plus the expanding capability of small, general-purpose digital computers (minicomputers), indicates that a group of six or more physicians generating 20 laboratory appraisals per day can economically justify a completely automated multiphasic health testing facility. Such a system would reside in the clinic or hospital where it is used and can be configured to perform analyses such as electrocardiography, generate laboratory reports, and communicate with large computer systems in university medical centers. Experience indicates that the most effective means of implementing these benefits of automation is to make them directly available to the medical community with the physician playing the central role. Economic justification of a dedicated computer through low-volume health testing then allows, as a side benefit, automation of administrative as well as other diagnostic activities, for example, patient billing, computer-aided diagnosis, and computer-aided therapeutics. PMID:4935771
PDS: A Performance Database Server
Berry, Michael W.; Dongarra, Jack J.; Larose, Brian H.; ...
1994-01-01
The process of gathering, archiving, and distributing computer benchmark data is a cumbersome task usually performed by computer users and vendors with little coordination. Most important, there is no publicly available central depository of performance data for all ranges of machines from personal computers to supercomputers. We present an Internet-accessible performance database server (PDS) that can be used to extract current benchmark data and literature. As an extension to the X-Windows-based user interface (Xnetlib) to the Netlib archival system, PDS provides an on-line catalog of public domain computer benchmarks such as the LINPACK benchmark, Perfect benchmarks, and the NAS parallel benchmarks. PDS does not reformat or present the benchmark data in any way that conflicts with the original methodology of any particular benchmark; it is thereby devoid of any subjective interpretations of machine performance. We believe that all branches (research laboratories, academia, and industry) of the general computing community can use this facility to archive performance metrics and make them readily available to the public. PDS can provide a more manageable approach to the development and support of a large dynamic database of published performance metrics.
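The core idea of PDS, a queryable catalog of raw benchmark figures served without reinterpretation, can be sketched with a relational table. This is purely illustrative: PDS itself was an Xnetlib front end to the Netlib archive, not a SQL system, and the schema and MFLOPS figures below are placeholders.

```python
import sqlite3

# Illustrative only: PDS was an Xnetlib/Netlib service, not SQLite;
# table name, columns, and performance figures are invented for the sketch.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE results (benchmark TEXT, machine TEXT, mflops REAL)")
con.executemany(
    "INSERT INTO results VALUES (?, ?, ?)",
    [("LINPACK", "Machine A", 90.0), ("LINPACK", "Machine B", 26.0)],
)
# Serve the raw figures as-is, preserving each benchmark's own methodology
rows = con.execute(
    "SELECT machine, mflops FROM results WHERE benchmark = ? ORDER BY mflops DESC",
    ("LINPACK",),
).fetchall()
```

The design point the abstract stresses is in the query, not the storage: results are returned verbatim and ranked only by the benchmark's own metric, with no derived "performance score" layered on top.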
Replication of Space-Shuttle Computers in FPGAs and ASICs
NASA Technical Reports Server (NTRS)
Ferguson, Roscoe C.
2008-01-01
A document discusses the replication of the functionality of the onboard space-shuttle general-purpose computers (GPCs) in field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). The purpose of the replication effort is to enable utilization of proven space-shuttle flight software and software-development facilities to the extent possible during development of software for flight computers for a new generation of launch vehicles derived from the space shuttles. The replication involves specifying the instruction set of the central processing unit and the input/output processor (IOP) of the space-shuttle GPC in a hardware description language (HDL). The HDL is synthesized to form a "core" processor in an FPGA or, less preferably, in an ASIC. The core processor can be used to create a flight-control card to be inserted into a new avionics computer. The IOP of the GPC as implemented in the core processor could be designed to support data-bus protocols other than that of a multiplexer interface adapter (MIA) used in the space shuttle. Hence, a computer containing the core processor could be tailored to communicate via the space-shuttle GPC bus and/or one or more other buses.
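The replication approach, specifying an instruction set precisely enough that existing flight software runs unmodified, can be illustrated with a toy interpreter. This is a loose software analogy only: the real work targets an HDL core in an FPGA/ASIC, and the shuttle GPC (AP-101) instruction set is far larger than the three made-up opcodes here.

```python
def run(program, acc=0):
    """Interpret a tiny, hypothetical 3-instruction set over an accumulator.
    Analogy for ISA replication; not the actual AP-101 instruction set."""
    for op, arg in program:
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "SHL":
            acc <<= arg
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return acc

result = run([("LOAD", 3), ("ADD", 4), ("SHL", 1)])  # (3 + 4) << 1 = 14
```

The fidelity requirement is the same in either medium: any binary valid for the original machine must produce identical results on the replica, which is what lets the proven flight software and its development facilities carry over.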
Centralization vs. Decentralization: A Location Analysis Approach for Librarians
ERIC Educational Resources Information Center
Raffel, Jeffrey; Shishko, Robert
1972-01-01
An application of location theory to the question of centralized versus decentralized library facilities for a university, with relevance for special libraries is presented. The analysis provides models for a single library, for two or more libraries, or for decentralized facilities. (6 references) (Author/NH)
Flying a College on the Computer. The Use of the Computer in Planning Buildings.
ERIC Educational Resources Information Center
Saint Louis Community Coll., MO.
Upon establishment of the St. Louis Junior College District, it was decided to make use of computer simulation facilities of a nearby aerospace contractor to develop a master schedule for facility planning purposes. Projected enrollments and course offerings were programmed with idealized student-teacher ratios to project facility needs. In…
Quantification of rectifications for the Northwestern University Flexible Sub-Ischial Vacuum Socket.
Fatone, Stefania; Johnson, William Brett; Tran, Lilly; Tucker, Kerice; Mowrer, Christofer; Caldwell, Ryan
2017-06-01
The fit and function of a prosthetic socket depend on the prosthetist's ability to design the socket's shape to distribute load comfortably over the residual limb. We recently developed a sub-ischial socket for persons with transfemoral amputation: the Northwestern University Flexible Sub-Ischial Vacuum Socket. This study aimed to quantify the rectifications required to fit the Northwestern University Flexible Sub-Ischial Vacuum Socket to teach the technique to prosthetists as well as provide a computer-aided design-computer-aided manufacturing option. Development project. A program was used to align scans of unrectified and rectified negative molds and calculate shape change as a result of rectification. Averaged rectifications were used to create a socket template, which was shared with a central fabrication facility engaged in provision of Northwestern University Flexible Sub-Ischial Vacuum Sockets to early clinical adopters. Feedback regarding quality of fitting was obtained. Rectification maps created from 30 cast pairs of successfully fit Northwestern University Flexible Sub-Ischial Vacuum Sockets confirmed that material was primarily removed from the positive mold in the proximal-lateral and posterior regions. The template was used to fabricate check sockets for 15 persons with transfemoral amputation. Feedback suggested that the template provided a reasonable initial fit with only minor adjustments. Rectification maps and template were used to facilitate teaching and central fabrication of the Northwestern University Flexible Sub-Ischial Vacuum Socket. Minor issues with quality of initial fit achieved with the template may be due to inability to adjust the template to patient characteristics (e.g. tissue type, limb shape) and/or the degree to which it represented a fully mature version of the technique. 
Clinical relevance Rectification maps help communicate an important step in the fabrication of the Northwestern University Flexible Sub-Ischial Vacuum Socket facilitating dissemination of the technique, while the average template provides an alternative fabrication option via computer-aided design-computer-aided manufacturing and central fabrication.
Method and computer program product for maintenance and modernization backlogging
Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M
2013-02-19
According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
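The claimed computation is a straightforward sum of three period-specific terms, which can be written directly (the dollar figures below are hypothetical):

```python
def future_facility_condition(maintenance_cost, modernization_factor, backlog_factor):
    """Future facility conditions = time-period-specific maintenance cost
    + modernization factor + backlog factor, per the claimed method."""
    return maintenance_cost + modernization_factor + backlog_factor

# Hypothetical period figures, e.g. in thousands of dollars
projected = future_facility_condition(120.0, 45.0, 30.0)  # 195.0
```

The patent's contribution is in how each term is calculated for a given time period; the combination step itself is just this addition.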
Centralization and Decentralization of Schools' Physical Facilities Management in Nigeria
ERIC Educational Resources Information Center
Ikoya, Peter O.
2008-01-01
Purpose: This research aims to examine the difference in the availability, adequacy and functionality of physical facilities in centralized and decentralized schools districts, with a view to making appropriate recommendations to stakeholders on the reform programmes in the Nigerian education sector. Design/methodology/approach: Principals,…
75 FR 30421 - Central Utah Project Completion Act
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-01
... facilities of the Wasatch County Water Efficiency Project (WCWEP), Bonneville Unit, Central Utah Project (CUP... conservation and wise use of water, all of which are objectives of the CUP Completion Act. The proposed action would allow recycled water to be conveyed and used in WCWEP facilities and through exchange become CUP...
NASA Technical Reports Server (NTRS)
1983-01-01
An assessment was made of the impact of developments in computational fluid dynamics (CFD) on the traditional role of aerospace ground test facilities over the next fifteen years. With the improvements in CFD and more powerful scientific computers projected over this period, it is expected that the flow over a complete aircraft can be computed at a unit cost three orders of magnitude lower than presently possible. Over the same period, improvements in ground test facilities will progress by application of computational techniques, including CFD, to data acquisition, facility operational efficiency, and simulation of the flight envelope; however, no dramatic change in unit cost is expected, as greater efficiency will be countered by higher energy and labor costs.
Survey of solar thermal test facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masterson, K.
The facilities that are presently available for testing solar thermal energy collection and conversion systems are briefly described. Facilities that are known to meet ASHRAE standard 93-77 for testing flat-plate collectors are listed. The DOE programs and test needs for distributed concentrating collectors are identified. Existing and planned facilities that meet these needs are described and continued support for most of them is recommended. The needs and facilities that are suitable for testing components of central receiver systems, several of which are located overseas, are identified. The central contact point for obtaining additional details and test procedures for these facilities is the Solar Thermal Test Facilities Users' Association in Albuquerque, N.M. The appendices contain data sheets and tables which give additional details on the technical capabilities of each facility. Also included is the 1975 Aerospace Corporation report on test facilities that is frequently referenced in the present work.
Meir, Arie; Rubinsky, Boris
2009-01-01
Medical technologies are indispensable to modern medicine. However, they have become exceedingly expensive and complex and are not available to the economically disadvantaged majority of the world population in underdeveloped as well as developed parts of the world. For example, according to the World Health Organization about two thirds of the world population does not have access to medical imaging. In this paper we introduce a new medical technology paradigm centered on wireless technology and cloud computing that was designed to overcome the problems of increasing health technology costs. We demonstrate the value of the concept with an example; the design of a wireless, distributed network and central (cloud) computing enabled three-dimensional (3-D) ultrasound system. Specifically, we demonstrate the feasibility of producing a 3-D high end ultrasound scan at a central computing facility using the raw data acquired at the remote patient site with an inexpensive low end ultrasound transducer designed for 2-D, through a mobile device and wireless connection link between them. Producing high-end 3D ultrasound images with simple low-end transducers reduces the cost of imaging by orders of magnitude. It also removes the requirement of having a highly trained imaging expert at the patient site, since the need for hand-eye coordination and the ability to reconstruct a 3-D mental image from 2-D scans, which is a necessity for high quality ultrasound imaging, is eliminated. This could enable relatively untrained medical workers in developing nations to administer imaging and a more accurate diagnosis, effectively saving the lives of people. PMID:19936236
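The division of labor the paper proposes, raw 2-D frames acquired on a cheap device and shipped over a wireless link, with 3-D assembly done at the central computing facility, can be sketched end to end. The frame format and transport encoding below are assumptions for illustration, not the authors' actual protocol.

```python
import json
import zlib

# Device side: raw 2-D scan frames (here a synthetic 3-frame stack of 4x4
# intensity values); in practice these come from a low-end 2-D transducer.
frames = [[[i + j + k for j in range(4)] for i in range(4)] for k in range(3)]
payload = zlib.compress(json.dumps(frames).encode())  # uplink payload

# Central (cloud) side: decode and stack frames into a 3-D volume.
# Real reconstruction also needs per-frame position/orientation tracking,
# which is omitted here.
volume = json.loads(zlib.decompress(payload))
depth, rows, cols = len(volume), len(volume[0]), len(volume[0][0])
```

The economic argument follows from this split: the expensive parts (compute, reconstruction expertise) live once at the central facility, while the patient site needs only the transducer and a mobile uplink.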
CAD/CAM transtibial prosthetic sockets from central fabrication facilities: How accurate are they?
Sanders, Joan E.; Rogers, Ellen L.; Sorenson, Elizabeth A.; Lee, Gregory S.; Abrahamson, Daniel C.
2014-01-01
This research compares transtibial prosthetic sockets made by central fabrication facilities with their corresponding American Academy of Orthotists and Prosthetists (AAOP) electronic shape files and assesses the central fabrication process. We ordered three different socket shapes from each of 10 manufacturers. Then we digitized the sockets using a very accurate custom mechanical digitizer. Results showed that quality varied considerably among the different manufacturers. Four of the companies consistently made sockets within ±1.1% volume (approximately 1 sock ply) of the AAOP electronic shape file, while six other companies did not. Six of the companies showed consistent undersizing or oversizing in their sockets, which suggests a consistent calibration or manufacturing error. Other companies showed inconsistent sizing or shape distortion, a difficult problem that represents a most challenging limitation for central fabrication facilities. PMID:18247236
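The study's acceptance criterion, delivered socket volume within ±1.1% of the electronic shape file (roughly one sock ply), is a simple signed-percent-error check. The volumes below are made up for illustration:

```python
def volume_error_pct(measured_cm3: float, reference_cm3: float) -> float:
    """Signed percent volume difference from the AAOP electronic shape file."""
    return 100 * (measured_cm3 - reference_cm3) / reference_cm3

# Hypothetical example: a 1000 cm^3 reference socket delivered at 1009 cm^3
# falls inside the +/-1.1% (~1 sock ply) band used in the study.
within_one_ply = abs(volume_error_pct(1009, 1000)) <= 1.1
```

A consistently positive or negative error across a manufacturer's sockets is the signature of the calibration/oversizing problem the abstract describes, whereas errors that change sign socket to socket indicate the harder shape-distortion problem.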
Instrument Systems Analysis and Verification Facility (ISAVF) users guide
NASA Technical Reports Server (NTRS)
Davis, J. F.; Thomason, J. O.; Wolfgang, J. L.
1985-01-01
The ISAVF facility is primarily an interconnected system of computers, special-purpose real-time hardware, and associated generalized software systems which will permit instrument system analysts, design engineers, and instrument scientists to perform trade-off studies, specification development, instrument modeling, and verification of instrument hardware performance. It is not the intent of the ISAVF to duplicate or replace existing special-purpose facilities such as the Code 710 Optical Laboratories or the Code 750 Test and Evaluation facilities. The ISAVF will provide data acquisition and control services for these facilities, as needed, using remote computer stations attached to the main ISAVF computers via dedicated communication lines.
ERIC Educational Resources Information Center
RENO, MARTIN; AND OTHERS
A STUDY WAS UNDERTAKEN TO EXPLORE IN A QUALITATIVE WAY THE POSSIBLE UTILIZATION OF COMPUTER AND DATA PROCESSING METHODS IN HIGH SCHOOL EDUCATION. OBJECTIVES WERE--(1) TO ESTABLISH A WORKING RELATIONSHIP WITH A COMPUTER FACILITY SO THAT ABLE STUDENTS AND THEIR TEACHERS WOULD HAVE ACCESS TO THE FACILITIES, (2) TO DEVELOP A UNIT FOR THE UTILIZATION…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Michael George
This report describes conditions, as required by the state of Idaho Wastewater Reuse Permit (#LA-000141-03), for the wastewater land application site at the Idaho National Laboratory Site’s Central Facilities Area Sewage Treatment Plant from November 1, 2014, through October 31, 2015.
Facilities | Integrated Energy Solutions | NREL
…strategies needed to optimize our entire energy system. High-Performance Computing Data Center: High-performance computing facilities at NREL provide high-speed…
Experience with a UNIX based batch computing facility for H1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhards, R.; Kruener-Marquis, U.; Szkutnik, Z.
1994-12-31
A UNIX based batch computing facility for the H1 experiment at DESY is described. The ultimate goal is to replace the DESY IBM mainframe by a multiprocessor SGI Challenge series computer, using the UNIX operating system, for most of the computing tasks in H1.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
...; (Formerly FDA-2007D-0393)] Guidance for Industry: Blood Establishment Computer System Validation in the User... Industry: Blood Establishment Computer System Validation in the User's Facility'' dated April 2013. The... document entitled ``Guidance for Industry: Blood Establishment Computer System Validation in the User's...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mike Lewis
2014-02-01
This report describes conditions, as required by the state of Idaho Wastewater Reuse Permit (#LA-000141-03), for the wastewater land application site at the Idaho National Laboratory Site's Central Facilities Area Sewage Treatment Plant from November 1, 2012, through October 31, 2013. The report contains, as applicable, the following information: • Site description • Facility and system description • Permit required monitoring data and loading rates • Status of compliance conditions and activities • Discussion of the facility's environmental impacts. During the 2013 permit year, no wastewater was land-applied to the irrigation area of the Central Facilities Area Sewage Treatment Plant and therefore, no effluent flow volumes or samples were collected from wastewater sampling point WW-014102. However, soil samples were collected in October from soil monitoring unit SU-014101.
Aliganyira, Patrick; Kerber, Kate; Davy, Karen; Gamache, Nathalie; Sengendo, Namaala Hanifah; Bergh, Anne-Marie
2014-01-01
Introduction: Prematurity is the leading cause of newborn death in Uganda, accounting for 38% of the nation's 39,000 annual newborn deaths. Kangaroo mother care is a high-impact, cost-effective intervention that has been prioritized in policy in Uganda, but implementation has been limited. Methods: A standardised, cross-sectional, mixed-method evaluation design was used, employing semi-structured key-informant interviews and observations in 11 health care facilities implementing kangaroo mother care in Uganda. Results: The facilities visited scored between 8.28 and 21.72 out of a possible 30 points, with a median score of 14.71. Two of the 3 highest-scoring hospitals were private, not-for-profit hospitals, whereas the second-highest-scoring hospital was a central teaching hospital. Facilities with KMC services are not equally distributed throughout the country: only 4 regions (Central 1, Central 2, East-Central and Southwest) plus the City of Kampala were identified as having facilities providing KMC services. Conclusion: KMC services are not instituted with consistent levels of quality and are often dependent on private partner support. With increasing attention globally and in country, Uganda is in a unique position to accelerate access to and quality of health services for small babies across the country. PMID:25667699
7 CFR 3565.206 - Ineligible uses of loan proceeds.
Code of Federal Regulations, 2010 CFR
2010-01-01
... transient residents; (d) Nursing homes, special care facilities and institutional type homes that require licensing as a medical care facility; (e) Operating capital for central dining facilities or for any items...
7 CFR 3565.206 - Ineligible uses of loan proceeds.
Code of Federal Regulations, 2011 CFR
2011-01-01
... transient residents; (d) Nursing homes, special care facilities and institutional type homes that require licensing as a medical care facility; (e) Operating capital for central dining facilities or for any items...
7 CFR 3565.206 - Ineligible uses of loan proceeds.
Code of Federal Regulations, 2014 CFR
2014-01-01
... transient residents; (d) Nursing homes, special care facilities and institutional type homes that require licensing as a medical care facility; (e) Operating capital for central dining facilities or for any items...
7 CFR 3565.206 - Ineligible uses of loan proceeds.
Code of Federal Regulations, 2013 CFR
2013-01-01
... transient residents; (d) Nursing homes, special care facilities and institutional type homes that require licensing as a medical care facility; (e) Operating capital for central dining facilities or for any items...
7 CFR 3565.206 - Ineligible uses of loan proceeds.
Code of Federal Regulations, 2012 CFR
2012-01-01
... transient residents; (d) Nursing homes, special care facilities and institutional type homes that require licensing as a medical care facility; (e) Operating capital for central dining facilities or for any items...
Future Computer Requirements for Computational Aerodynamics
NASA Technical Reports Server (NTRS)
1978-01-01
Recent advances in computational aerodynamics are discussed as well as motivations for and potential benefits of a National Aerodynamic Simulation Facility having the capability to solve fluid dynamic equations at speeds two to three orders of magnitude faster than presently possible with general computers. Two contracted efforts to define processor architectures for such a facility are summarized.
African Braille Production: A Statistical Review and Evaluation of Countries and Costs.
ERIC Educational Resources Information Center
Mayer, Marc; Cylke, Frank Kurt
A study was conducted in 52 African countries to determine the extent of braille facilities for the blind, with the aim of choosing a location for a central braille producing facility. To make the selection, the factors of ease of communication (i.e., central location), political stability, and extent of already existing organizations for the…
ERIC Educational Resources Information Center
Toyn, Thomas David
The author sought to evaluate the feasibility of developing a centralized instructional television (ITV) production facility for institutions of higher learning in the state of Utah. He considered economic factors, availability of qualified personnel, space and physical plant, potential to provide the required service, and the degree of acceptance…
A Brief Study of Cafeteria Facilities and Operations, with Recommendations for Implementation.
ERIC Educational Resources Information Center
Okamura, James T.
The facilities and operations of the school lunch program in the public schools of Hawaii are reviewed. Several types of school lunch programs are described including--(1) traditional school lunch programs, (2) kitchen and classroom dining, (3) central and decentralized dining, (4) home school-feeder school system, (5) central kitchen, and (6) the…
The report, in three parts, describes the characteristics of the Cleveland (OH) area electroplating industry and an approach and design for a centralized facility to treat cyanide and heavy metal wastes generated by this industry. The facility is termed the Resource Recovery Park...
Facilities Management via Computer: Information at Your Fingertips.
ERIC Educational Resources Information Center
Hensey, Susan
1996-01-01
Computer-aided facilities management is a software program consisting of a relational database of facility information--such as occupancy, usage, student counts, etc.--attached to or merged with computerized floor plans. This program can integrate data with drawings, thereby allowing the development of "what if" scenarios. (MLF)
Situational Lightning Climatologies for Central Florida, Phase 2, Part 3
NASA Technical Reports Server (NTRS)
Bauman, William H., III
2007-01-01
The threat of lightning is a daily concern during the warm season in Florida. The forecasters at the Spaceflight Meteorology Group (SMG) at Johnson Space Center in Houston, TX consider lightning in their landing forecasts for space shuttles at the Kennedy Space Center (KSC), FL Shuttle Landing Facility (SLF). The forecasters at the National Weather Service in Melbourne, FL (NWS MLB) do the same in their routine Terminal Aerodrome Forecasts (TAFs) for seven airports in the NWS MLB County Warning Area (CWA). The Applied Meteorology Unit created flow-regime climatologies of lightning probability in the 5-, 10-, 20-, and 30-n mi circles surrounding the SLF and all airports in the NWS MLB CWA in 1-, 3-, and 6-hour increments. The results were presented in tabular and graphical format and incorporated into a web-based graphical user interface, usable in any web browser regardless of operating system, so forecasters could easily navigate through the data.
LEMON - LHC Era Monitoring for Large-Scale Infrastructures
NASA Astrophysics Data System (ADS)
Marian, Babik; Ivan, Fedorko; Nicholas, Hook; Hector, Lansdale Thomas; Daniel, Lenkes; Miroslav, Siket; Denis, Waldron
2011-12-01
At the present time computer centres are facing a massive rise in virtualization and cloud computing, as these solutions bring advantages to service providers and consolidate computer centre resources. As a result, however, monitoring complexity is increasing. Computer centre management requires not only monitoring servers, network equipment and associated software, but also collecting additional environment and facilities data (e.g. temperature, power consumption, cooling efficiency, etc.) to maintain a good overview of infrastructure performance. The LHC Era Monitoring (Lemon) system addresses these requirements for a very large-scale infrastructure. The Lemon agent, which collects data on every client and forwards the samples to the central measurement repository, provides a flexible interface that allows rapid development of new sensors. The system can also report on behalf of remote devices such as switches and power supplies. Online and historical data can be visualized via a web-based interface or retrieved via command-line tools. The Lemon Alarm System component can be used to notify the operator about error situations. In this article, an overview of Lemon monitoring is provided together with a description of the CERN LEMON production instance. No direct comparison is made with other monitoring tools.
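The agent-plus-central-repository pattern the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the real Lemon API: the sensor name, the placeholder reading, and the `Agent` class are invented here; in Lemon itself, sensors are agent plug-ins and samples travel over the network to the central measurement repository.

```python
import time

class TemperatureSensor:
    """Hypothetical sensor; a real one would read IPMI/SNMP hardware data."""
    name = "facility.temperature"

    def sample(self):
        return 21.5  # placeholder reading in degrees Celsius

class Agent:
    """Sketch of an agent that samples sensors and queues measurements
    for forwarding to a central repository."""
    def __init__(self, sensors):
        self.sensors = sensors
        self.outbox = []  # stands in for the central measurement repository

    def collect(self):
        for s in self.sensors:
            # (metric name, timestamp, value) per sample
            self.outbox.append((s.name, time.time(), s.sample()))

agent = Agent([TemperatureSensor()])
agent.collect()
print(agent.outbox[0][0])  # facility.temperature
```

The key design point the abstract emphasizes is the narrow sensor interface (here, just `name` and `sample()`), which is what makes adding new metrics cheap.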
Fault tolerant computer control for a Maglev transportation system
NASA Technical Reports Server (NTRS)
Lala, Jaynarayan H.; Nagle, Gail A.; Anagnostopoulos, George
1994-01-01
Magnetically levitated (Maglev) vehicles operating on dedicated guideways at speeds of 500 km/hr are an emerging transportation alternative to short-haul air and high-speed rail. They have the potential to offer service significantly more dependable than air and at lower operating cost than both air and high-speed rail. Maglev transportation derives these benefits by using magnetic forces to suspend a vehicle 8 to 200 mm above the guideway. Magnetic forces are also used for propulsion and guidance. The combination of high speed, short headways, stringent ride-quality requirements, and a distributed offboard propulsion system necessitates high levels of automation for Maglev control and operation. Very high levels of safety and availability will be required for the Maglev control system. This paper describes the mission scenario, functional requirements, and dependability and performance requirements of the Maglev command, control, and communications system. A distributed hierarchical architecture consisting of vehicle on-board computers, wayside zone computers, a central computer facility, and communication links between these entities was synthesized to meet the functional and dependability requirements of the Maglev system. Two variations of the basic architecture are described: the Smart Vehicle Architecture (SVA) and the Zone Control Architecture (ZCA). Preliminary dependability modeling results are also presented.
Computational Tools and Facilities for the Next-Generation Analysis and Design Environment
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)
1997-01-01
This document contains presentations from the joint UVA/NASA Workshop on Computational Tools and Facilities for the Next-Generation Analysis and Design Environment, held at the Virginia Consortium of Engineering and Science Universities in Hampton, Virginia on September 17-18, 1996. The presentations focused on computational tools and facilities for the analysis and design of engineering systems, including real-time simulations, immersive systems, collaborative engineering environments, Web-based tools, and interactive media for technical training. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the level of maturity of a number of computational tools and facilities and their potential for application to the next-generation integrated design environment.
VIEW OF BUILDING 122 EXAMINATION FACILITIES THAT SUPPORT ROUTINE EMPLOYEE ...
VIEW OF BUILDING 122 EXAMINATION FACILITIES THAT SUPPORT ROUTINE EMPLOYEE AND SUBCONTRACTOR PHYSICAL EXAMINATIONS. (10/85) - Rocky Flats Plant, Emergency Medical Services Facility, Southwest corner of Central & Third Avenues, Golden, Jefferson County, CO
Doing Your Science While You're in Orbit
NASA Astrophysics Data System (ADS)
Green, Mark L.; Miller, Stephen D.; Vazhkudai, Sudharshan S.; Trater, James R.
2010-11-01
Large-scale neutron facilities such as the Spallation Neutron Source (SNS) located at Oak Ridge National Laboratory need easy-to-use access to Department of Energy Leadership Computing Facilities and experiment repository data. The Orbiter thick- and thin-client and its supporting Service Oriented Architecture (SOA) based services (available at https://orbiter.sns.gov) consist of standards-based components that are reusable and extensible for easily accessing high performance computing, data and computational grid infrastructure, and cluster-based resources from a user-configurable interface. The primary Orbiter system goals are (1) developing infrastructure for the creation and automation of virtual instrumentation experiment optimization, (2) developing user interfaces for thin- and thick-client access, (3) providing a prototype incorporating major instrument simulation packages, and (4) facilitating neutron science community access and collaboration. Secure Orbiter SOA authentication and authorization is achieved through the developed Virtual File System (VFS) services, which use Role-Based Access Control (RBAC) for data repository file access, thin- and thick-client functionality and application access, and computational job workflow management. The VFS Relational Database Management System (RDMS) consists of approximately 45 database tables describing 498 user accounts and 495 groups over 432,000 directories with 904,077 repository files. Over 59 million NeXus file metadata records are associated with the 12,800 unique NeXus file field/class names generated from the 52,824 repository NeXus files.
Services are currently available that enable (a) summary dashboards of data repository status with Quality of Service (QoS) metrics, (b) full-text search of data repository NeXus file field/class names within a Google-like interface, (c) a fully functional RBAC browser for the read-only data repository and shared areas, (d) user/group-defined and shared metadata for data repository files, and (e) user, group, repository, and Web 2.0 based global positioning with additional service capabilities. The SNS-based Orbiter SOA integration progress with the Distributed Data Analysis for Neutron Scattering Experiments (DANSE) software development project is summarized, with an emphasis on DANSE Central Services and the Virtual Neutron Facility (VNF). Additionally, the DANSE utilization of the Orbiter SOA authentication, authorization, and data transfer services best-practice implementations is presented.
Jack Rabbit Pretest 2021E PT6 Photonic Doppler Velocimetry Data Volume 6 Section 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, M M; Strand, O T; Bosson, S T
The Jack Rabbit Pretest (PT) 2021E PT6 experiment was fired on April 1, 2008 at the Contained Firing Facility, Site 300, Lawrence Livermore National Laboratory. This experiment is part of an effort to determine the properties of LX-17 in a regime where corner-turning behavior and dead-zone formation are not well understood. Photonic Doppler Velocimetry (PDV) measured diagnostic plate velocities confirming the presence of a persistent LX-17 dead-zone formation and the resultant impulse gradient applied under the diagnostic plate. The Jack Rabbit Pretest 2021E PT6, 160 millimeter diameter experiment returned data on all eight PDV probes. The probes measured on the central axis and at 20, 30, 35, 45, 55, 65, 75 millimeters from the central axis. The experiment was shot at an ambient room temperature of 65 degrees Fahrenheit. The earliest PDV signal extinction was 54.2 microseconds at 30 millimeters. The latest PDV signal extinction time was 64.5 microseconds at the central axis. The measured velocity ranged from meters per second to thousands of meters per second. First detonation wave induced jump-off was measured at 55 millimeters at 14.1 microseconds. The PDV data provided an unambiguous indication of dead-zone formation and an impulse gradient applied to the diagnostic plate. The central axis had a last measured velocity of 1860 meters per second. At 55 millimeters the last measured velocity was 2408 meters per second. The low-to-high velocity ratio was 0.77. Velocity data was integrated to compute diagnostic plate cross section profiles. Velocity data was differentiated to compute a peak pressure under the diagnostic plate at the central axis of 227 kilobars at 20.1 microseconds, indicating a late time chemical reaction in the LX-17 dead-zone. Substantial motion (>1 m/s) of the diagnostic plate over the dead-zone is followed by detonation region motion within approximately 1.7 microseconds.
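The reported low-to-high velocity ratio follows directly from the two last-measured velocities quoted in the abstract; a one-line computation reproduces the 0.77 figure:

```python
# Consistency check of the reported low-to-high velocity ratio,
# using the last measured velocities quoted in the abstract.
v_dead_zone = 1860.0   # m/s, central axis (over the dead zone)
v_detonation = 2408.0  # m/s, at 55 mm (detonation region)
ratio = v_dead_zone / v_detonation
print(round(ratio, 2))  # prints 0.77, matching the reported value
```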
2014 Annual Report - Argonne Leadership Computing Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, James R.; Papka, Michael E.; Cerny, Beth A.
The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.
2015 Annual Report - Argonne Leadership Computing Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, James R.; Papka, Michael E.; Cerny, Beth A.
The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.
Conceptualization and design of a variable-gravity research facility
NASA Technical Reports Server (NTRS)
1987-01-01
The goal is to provide facilities for studying the effects of variable-gravity levels in reducing the physiological stresses imposed on humans by long-term stays in zero-g. The designs studied include a twin-tethered, two-module system with a central despun module with docking port and winch gear, and a rigid-arm tube facility using Shuttle external tanks. Topics examined included despun central capsule configuration, docking clearances, EVA requirements, crew selection, crew scheduling, food supply and preparation, waste handling, leisure use, biomedical issues, and psycho-social issues.
Computer-socket manufacturing error: How much before it is clinically apparent?
Sanders, Joan E.; Severance, Michael R.; Allyn, Kathryn J.
2015-01-01
The purpose of this research was to pursue quality standards for computer-manufacturing of prosthetic sockets for people with transtibial limb loss. Thirty-three duplicates of study participants’ normally used sockets were fabricated using central fabrication facilities. Socket-manufacturing errors were compared with clinical assessments of socket fit. Of the 33 sockets tested, 23 were deemed clinically to need modification. All 13 sockets with mean radial error (MRE) greater than 0.25 mm were clinically unacceptable, and 11 of those were deemed in need of sizing reduction. Of the remaining 20 sockets, 5 sockets with interquartile range (IQR) greater than 0.40 mm were deemed globally or regionally oversized and in need of modification. Of the remaining 15 sockets, 5 sockets with closed contours of elevated surface normal angle error (SNAE) were deemed clinically to need shape modification at those closed contour locations. The remaining 10 sockets were deemed clinically acceptable and not in need of modification. MRE, IQR, and SNAE may serve as effective metrics to characterize quality of computer-manufactured prosthetic sockets, helping facilitate the development of quality standards for the socket manufacturing industry. PMID:22773260
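The sequential screening the abstract describes (MRE first, then IQR, then SNAE contours) amounts to a simple triage rule. A minimal sketch follows; the function and parameter names are invented here, and the thresholds are taken from the reported results, not from a published standard:

```python
def assess_socket(mre_mm, iqr_mm, has_snae_contour):
    """Triage sketch for a computer-manufactured socket.

    mre_mm: mean radial error (mm) between manufactured and design shape.
    iqr_mm: interquartile range (mm) of the radial error.
    has_snae_contour: True if a closed contour of elevated surface
    normal angle error (SNAE) is present.
    """
    if mre_mm > 0.25:
        # All such sockets were clinically unacceptable; most needed sizing reduction.
        return "unacceptable: likely needs sizing reduction"
    if iqr_mm > 0.40:
        # Globally or regionally oversized sockets fell in this band.
        return "oversized: needs global or regional modification"
    if has_snae_contour:
        return "needs shape modification at contour locations"
    return "clinically acceptable"
```

For example, `assess_socket(0.30, 0.10, False)` flags the socket as unacceptable on the MRE criterion alone, mirroring how the 13 worst sockets in the study were screened out first.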
NASA Astrophysics Data System (ADS)
Yue, S. S.; Wen, Y. N.; Lv, G. N.; Hu, D.
2013-10-01
In recent years, the increasing development of cloud computing technologies has laid a critical foundation for efficiently solving complicated geographic issues. However, it is still difficult to realize the cooperative operation of massive heterogeneous geographical models. Traditional cloud architecture tends to provide centralized solutions to end users, while all the required resources are often offered by large enterprises or special agencies; it is thus a closed framework from the perspective of resource utilization. Solving comprehensive geographic issues requires integrating multifarious heterogeneous geographical models and data. In this case, an open computing platform is needed, with which model owners can package and deploy their models into the cloud conveniently, while model users can search, access and utilize those models with cloud facilities. Based on this concept, open cloud service strategies for the sharing of heterogeneous geographic analysis models are studied in this article. The key technologies, namely a unified cloud interface strategy, a sharing platform based on cloud service, and a computing platform based on cloud service, are discussed in detail, and related experiments are conducted for further verification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drewmark Communications; Sartor, Dale; Wilson, Mark
2010-07-01
High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.
Computer Operating System Maintenance.
1982-06-01
FACILITY The Computer Management Information Facility (CMIF) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on...computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 database (CRFMGMT) which stores and permits access
ELECTRICAL LINES ARRIVE FROM CENTRAL FACILITIES AREA, SOUTH OF MTR. ...
ELECTRICAL LINES ARRIVE FROM CENTRAL FACILITIES AREA, SOUTH OF MTR. EXCAVATION RUBBLE IN FOREGROUND. CONTRACTOR CRAFT SHOPS, CRANES, AND OTHER MATERIALS ON SITE. CAMERA FACES EAST, WITH LITTLE BUTTE AND MIDDLE BUTTE IN DISTANCE. INL NEGATIVE NO. 335. Unknown Photographer, 7/1/1950 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
NASA Technical Reports Server (NTRS)
Rogers, R. H. (Principal Investigator)
1980-01-01
Major first-year accomplishments are summarized and plans are provided for the next 12-month period for a program established by NASA with the Environmental Research Institute of Michigan to investigate methods of making LANDSAT technology readily available to a broader set of private sector firms through local community colleges. The program applies a network in which the major participants are NASA, universities or research institutes, and community colleges. Participants obtain hands-on training in LANDSAT data analysis techniques using a desk-top, interactive remote analysis station that communicates with a central computing facility via telephone line and provides for generation of land cover maps and data products via remote command.
SISSY: An example of a multi-threaded, networked, object-oriented databased application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scipioni, B.; Liu, D.; Song, T.
1993-05-01
The Systems Integration Support SYstem (SISSY) is presented and its capabilities and techniques are discussed. It is a fully automated data collection and analysis system supporting the SSCL's systems analysis activities as they relate to the Physics Detector and Simulation Facility (PDSF). SISSY itself is a paradigm of effective computing on the PDSF. It uses home-grown code (C++), network programming (RPC, SNMP), relational (SYBASE) and object-oriented (ObjectStore) DBMSs, UNIX operating system services (IRIX threads, cron, system utilities, shell scripts, etc.), and third-party software applications (NetCentral Station, Wingz, DataLink), all of which act together as a single application to monitor and analyze the PDSF.
NASA Technical Reports Server (NTRS)
Devito, D. M.
1981-01-01
A low-cost GPS civil-user mobile terminal whose purchase cost is essentially an order of magnitude less than estimates for the military counterpart is considered, with focus on ground station requirements for position monitoring of civil users requiring this capability and on civil-user navigation and location-monitoring requirements. Existing survey literature was examined to ascertain the potential users of a low-cost NAVSTAR receiver and to estimate their number, function, and accuracy requirements. System concepts are defined for low-cost user equipment for in-situ navigation and for the retransmission of low-data-rate positioning data via a geostationary satellite to a central computing facility.
Arvand, M; Jungkind, K; Hack, A
2011-04-21
German water guidelines do not recommend routine assessment of cold water for Legionella in healthcare facilities, except if the water temperature at distal sites exceeds 25°C. This study evaluates Legionella contamination in cold and warm water supplies of healthcare facilities in Hesse, Germany, and analyses the relationship between cold water temperature and Legionella contamination. Samples were collected from four facilities, with cases of healthcare-associated Legionnaires' disease or notable contamination of their water supply. Fifty-nine samples were from central lines and 625 from distal sites, comprising 316 cold and 309 warm water samples. Legionella was isolated from central lines in two facilities and from distal sites in four facilities. 17% of all central and 32% of all distal samples were contaminated. At distal sites, cold water samples were more frequently contaminated with Legionella (40% vs 23%, p<0.001) and with higher concentrations of Legionella (≥1,000 colony-forming units/100 ml) (16% vs 6%, p<0.001) than warm water samples. There was no clear correlation between the cold water temperature at sampling time and the contamination rate. 35% of cold water samples under 20°C at collection were contaminated. Our data highlight the importance of assessing the cold water supply of healthcare facilities for Legionella in the context of an intensified analysis.
21 CFR 1305.24 - Central processing of orders.
Code of Federal Regulations, 2013 CFR
2013-04-01
... or more registered locations and maintains a central processing computer system in which orders are... order with all linked records on the central computer system. (b) A company that has central processing... the company owns and operates. ...
21 CFR 1305.24 - Central processing of orders.
Code of Federal Regulations, 2012 CFR
2012-04-01
... or more registered locations and maintains a central processing computer system in which orders are... order with all linked records on the central computer system. (b) A company that has central processing... the company owns and operates. ...
21 CFR 1305.24 - Central processing of orders.
Code of Federal Regulations, 2011 CFR
2011-04-01
... or more registered locations and maintains a central processing computer system in which orders are... order with all linked records on the central computer system. (b) A company that has central processing... the company owns and operates. ...
21 CFR 1305.24 - Central processing of orders.
Code of Federal Regulations, 2010 CFR
2010-04-01
... or more registered locations and maintains a central processing computer system in which orders are... order with all linked records on the central computer system. (b) A company that has central processing... the company owns and operates. ...
21 CFR 1305.24 - Central processing of orders.
Code of Federal Regulations, 2014 CFR
2014-04-01
... or more registered locations and maintains a central processing computer system in which orders are... order with all linked records on the central computer system. (b) A company that has central processing... the company owns and operates. ...
Monitoring techniques and alarm procedures for CMS services and sites in WLCG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molina-Perez, J.; Bonacorsi, D.; Gutsche, O.
2012-01-01
The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS; the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, while the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel contributes to detecting and reacting in a timely manner to any unexpected error, and hence ensures that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS computing facilities and infrastructure to operate at high reliability levels.
On Laminar to Turbulent Transition of Arc-Jet Flow in the NASA Ames Panel Test Facility
NASA Technical Reports Server (NTRS)
Gokcen, Tahir; Alunni, Antonella I.
2012-01-01
This paper provides experimental evidence and supporting computational analysis to characterize the laminar to turbulent flow transition in a high enthalpy arc-jet facility at NASA Ames Research Center. The arc-jet test data obtained in the 20 MW Panel Test Facility include measurements of surface pressure and heat flux on a water-cooled calibration plate, and measurements of surface temperature on a reaction-cured glass coated tile plate. Computational fluid dynamics simulations are performed to characterize the arc-jet test environment and estimate its parameters consistent with the facility and calibration measurements. The present analysis comprises simulations of the nonequilibrium flowfield in the facility nozzle, test box, and flowfield over test articles. Both laminar and turbulent simulations are performed, and the computed results are compared with the experimental measurements, including Stanton number dependence on Reynolds number. Comparisons of computed and measured surface heat fluxes (and temperatures), along with the accompanying analysis, confirm that the boundary layer in the Panel Test Facility flow is transitional at certain arc-heater conditions.
Health service providers in Somalia: their readiness to provide malaria case-management
Noor, Abdisalan M; Rage, Ismail A; Moonen, Bruno; Snow, Robert W
2009-01-01
Background Studies have highlighted the inadequacies of the public health sector in sub-Saharan African countries in providing appropriate malaria case management. The readiness of the public health sector to provide malaria case-management in Somalia, a country where there has been no functioning central government for almost two decades, was investigated. Methods Three districts were purposively sampled in each of the two self-declared states of Puntland and Somaliland and the south-central region of Somalia, in April-November 2007. A survey and mapping of all public and private health service providers was undertaken. Information was recorded on services provided, types of anti-malarial drugs used and stock, numbers and qualifications of staff, sources of financial support and presence of malaria diagnostic services, new treatment guidelines and job aides for malaria case-management. All settlements were mapped and a semi-quantitative approach was used to estimate their population size. Distances from settlements to public health services were computed. Results There were 45 public health facilities, 227 public health professionals, and 194 private pharmacies for approximately 0.6 million people in the three districts. The median distance to public health facilities was 6 km. 62.3% of public health facilities prescribed the nationally recommended anti-malarial drug and 37.7% prescribed chloroquine as first-line therapy. 66.7% of public facilities did not have in stock the recommended first-line malaria therapy. Diagnosis of malaria using rapid diagnostic tests (RDT) or microscopy was performed routinely in over 90% of the recommended public facilities but only 50% of these had RDT in stock at the time of survey. National treatment guidelines were available in 31.3% of public health facilities recommended by the national strategy. 
Only 8.8% of the private pharmacies prescribed artesunate plus sulphadoxine/pyrimethamine, while 53.1% prescribed chloroquine as first-line therapy. 31.4% of private pharmacies also provided malaria diagnosis using RDT or microscopy. Conclusion Geographic access to the public health sector is relatively low, and there were major shortages of the appropriate guidelines, anti-malarials and diagnostic tests required for appropriate malaria case management. Efforts to strengthen the readiness of the health sector in Somalia to provide malaria case management should improve availability of drugs and diagnostic kits; provide appropriate information and training; and engage and regulate the private sector to scale up malaria control. PMID:19439097
2016 Annual Report - Argonne Leadership Computing Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, Jim; Papka, Michael E.; Cerny, Beth A.
The Argonne Leadership Computing Facility (ALCF) helps researchers solve some of the world’s largest and most complex problems, while also advancing the nation’s efforts to develop future exascale computing systems. This report presents some of the ALCF’s notable achievements in key strategic areas over the past year.
NASA Astrophysics Data System (ADS)
Miller, Stephen D.; Herwig, Kenneth W.; Ren, Shelly; Vazhkudai, Sudharshan S.; Jemian, Pete R.; Luitz, Steffen; Salnikov, Andrei A.; Gaponenko, Igor; Proffen, Thomas; Lewis, Paul; Green, Mark L.
2009-07-01
The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research [1]. We trace back almost 30 years of history across selected user facilities, illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data, challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility-produced data. Trends indicate that this will continue to be the case for some time yet. Thus users face a quandary: how to manage today's data complexity and size when these may exceed the computing resources available to them. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing, thereby giving users access to the resources they need [2]. Portal-based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next-tier cross-instrument, cross-facility scientific research fuelled by smart applications residing upon user computer resources.
We can learn from the medical imaging community, which has been working since the early 1990s to integrate data from across multiple modalities to achieve better diagnoses [3]; similarly, data fusion across BES facilities will lead to new scientific discoveries.
EPA FRS Facilities Combined File CSV Download for the Marshall Islands
The Facility Registry System (FRS) identifies facilities, sites, or places subject to environmental regulation or of environmental interest to EPA programs or delegated states. Using vigorous verification and data management procedures, FRS integrates facility data from program national systems, state master facility records, tribal partners, and other federal agencies and provides the Agency with a centrally managed, single source of comprehensive and authoritative information on facilities.
EPA FRS Facilities Single File CSV Download for the Marshall Islands
The Facility Registry System (FRS) identifies facilities, sites, or places subject to environmental regulation or of environmental interest to EPA programs or delegated states. Using vigorous verification and data management procedures, FRS integrates facility data from program national systems, state master facility records, tribal partners, and other federal agencies and provides the Agency with a centrally managed, single source of comprehensive and authoritative information on facilities.
Neilson, Christine J
2010-01-01
The Saskatchewan Health Information Resources Partnership (SHIRP) provides library instruction to Saskatchewan's health care practitioners and students on placement in health care facilities as part of its mission to provide province-wide access to evidence-based health library resources. A portable computer lab was assembled in 2007 to provide hands-on training in rural health facilities that do not have computer labs of their own. Aside from some minor inconveniences, the introduction and operation of the portable lab has gone smoothly. The lab has been well received by SHIRP patrons and continues to be an essential part of SHIRP outreach.
2004-02-19
KENNEDY SPACE CENTER, FLA. - NASA Administrator Sean O’Keefe (center) is welcomed to the Central Florida Research Park, near Orlando. Central Florida leaders are proposing the research park as the site for the new NASA Shared Services Center. The center would centralize NASA’s payroll, accounting, human resources, facilities and procurement offices that are now handled at each field center. The consolidation is part of the One NASA focus. Six sites around the U.S. are under consideration by NASA.
A large-scale computer facility for computational aerodynamics
NASA Technical Reports Server (NTRS)
Bailey, F. R.; Ballhaus, W. F., Jr.
1985-01-01
As a result of advances related to the combination of computer system technology and numerical modeling, computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. NASA has, therefore, initiated the Numerical Aerodynamic Simulation (NAS) Program with the objective to provide a basis for further advances in the modeling of aerodynamic flowfields. The Program is concerned with the development of a leading-edge, large-scale computer facility. This facility is to be made available to Government agencies, industry, and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. Attention is given to the requirements for computational aerodynamics, the principal specific goals of the NAS Program, the high-speed processor subsystem, the workstation subsystem, the support processing subsystem, the graphics subsystem, the mass storage subsystem, the long-haul communication subsystem, the high-speed data-network subsystem, and software.
NIF ICCS network design and loading analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tietbohl, G; Bryant, R
The National Ignition Facility (NIF) is housed within a large facility about the size of two football fields. The Integrated Computer Control System (ICCS) is distributed throughout this facility and requires the integration of about 40,000 control points and over 500 video sources. This integration is provided by approximately 700 control computers distributed throughout the NIF facility and a network that provides the communication infrastructure. A main control room houses a set of seven computer consoles providing operator access and control of the various distributed front-end processors (FEPs). There are also remote workstations distributed within the facility that provide operator console functions while personnel are testing and troubleshooting throughout the facility. The operator workstations communicate with the FEPs, which implement the localized control and monitoring functions. There are different types of FEPs for the various subsystems being controlled. This report describes the design of the NIF ICCS network and how it meets the expected traffic loads and the requirements of the Sub-System Design Requirements (SSDRs). This document supersedes the earlier reports entitled Analysis of the National Ignition Facility Network, dated November 6, 1996, and The National Ignition Facility Digital Video and Control Network, dated July 9, 1996. For an overview of the ICCS, refer to the document NIF Integrated Computer Controls System Description (NIF-3738).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Mike
This renewal application for a Recycled Water Reuse Permit is being submitted in accordance with the Idaho Administrative Procedures Act 58.01.17 “Recycled Water Rules” and the Municipal Wastewater Reuse Permit LA-000141-03 for continuing the operation of the Central Facilities Area Sewage Treatment Plant located at the Idaho National Laboratory. The permit expires March 16, 2015. The permit requires a renewal application to be submitted six months prior to the expiration date of the existing permit. For the Central Facilities Area Sewage Treatment Plant, the renewal application must be submitted by September 16, 2014. The information in this application is consistent with the Idaho Department of Environmental Quality’s Guidance for Reclamation and Reuse of Municipal and Industrial Wastewater and discussions with Idaho Department of Environmental Quality personnel.
EOS MLS Science Data Processing System: A Description of Architecture and Capabilities
NASA Technical Reports Server (NTRS)
Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.
2006-01-01
This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.
Facility Registry Service (FRS)
This is a centrally managed database that identifies facilities either subject to environmental regulations or of environmental interest, providing an integrated source of air, water, and waste environmental data.
NASA Technical Reports Server (NTRS)
Cottrell, Dinna L.
2011-01-01
The Stennis Space Center (SSC) Records Retention Facility is a centralized location for all SSC records, Records Management staff, and the SSC History Office. The building is a storm-resistant facility and provides a secure environment for records housing. The Records Retention Facility was constructed in accordance with the National Archives and Records Administration (NARA) requirements for records storage, making it the first NARA-compliant facility in the agency. Stennis Space Center's Records Retention Facility became operational in May 2010. The SSC Records Retention Facility ensures that the required federal records are preserved, managed and accessible to all interested personnel. The facility provides 20,000 cubic feet of records storage capacity for the purpose of managing the center's consolidated records within a central, protected environment. Records housed in the facility are in the form of paper, optical, film and magnetic media. Located within the SSC Records Retention Facility, the Records Management Office provides comprehensive records management services in the form of: a) Storage and life-cycle management of inactive records of all media types; b) Digitizing/scanning of records and documents; c) Non-textual/digital electronic records media storage, migration and transfer; d) Records remediation.
NASA Astrophysics Data System (ADS)
Warner, N. R.; Menio, E. C.; Landis, J. D.; Vengosh, A.; Lauer, N.; Harkness, J.; Kondash, A.
2014-12-01
Recent public interest in high volume slickwater hydraulic fracturing (HVHF) has drawn increased attention to wastewater management practices from the public, researchers, industry, and regulators. The management of wastes, including both fluids and solids, poses many engineering challenges, including elevated total dissolved solids and elevated activities of naturally occurring radioactive materials (NORM). One management option for wastewater in particular, used in western Pennsylvania, USA, is treatment at centralized waste treatment facilities [1]. Previous studies conducted from 2010-2012 indicated that one centralized facility, the Josephine Brine Treatment facility, removed the majority of radium from produced water and hydraulic fracturing flowback fluid (HFFF) during treatment, but low activities of radium remained in treated effluent and were discharged to surface water [2]. Despite the treatment process and radium reduction, high activities (200 times higher than upstream/background) accumulated in stream sediments at the point of effluent discharge. Here we present new results from sampling conducted in June 2014 at two additional centralized waste treatment facilities (the Franklin and Hart Brine Treatment facilities) and the Josephine Brine Treatment facility. Preliminary results indicate radium is released to surface water at very low (<50 pCi/L) to non-detectable activities; however, radium continues to accumulate in sediments surrounding the area of effluent release. Combined, the data indicate that 1) radium continues to be released to surface water streams in western Pennsylvania despite oil and gas operators' voluntary ban on treatment and disposal of HFFF in centralized waste treatment facilities, and 2) radium accumulation in sediments occurred at multiple brine treatment facilities and is not isolated to a single accidental release of contaminants or a single facility. [1] Wilson, J. M. and J. M. VanBriesen (2012).
"Oil and Gas Produced Water Management and Surface Drinking Water Sources in Pennsylvania." Environmental Practice 14(04): 288-300. [2] Warner, N. R., C. A. Christie, R. B. Jackson and A. Vengosh (2013). "Impacts of Shale Gas Wastewater Disposal on Water Quality in Western Pennsylvania." ES&T 47(20): 11849-11857.
Grid Computing at GSI for ALICE and FAIR - present and future
NASA Astrophysics Data System (ADS)
Schwarz, Kilian; Uhlig, Florian; Karabowicz, Radoslaw; Montiel-Gonzalez, Almudena; Zynovyev, Mykhaylo; Preuss, Carsten
2012-12-01
The future FAIR experiments CBM and PANDA have computing requirements that currently cannot be satisfied by a single computing centre. A larger, distributed computing infrastructure is needed to cope with the amount of data to be simulated and analysed. Since 2002, GSI has operated a tier2 centre for ALICE@CERN. The central component of the GSI computing facility, and hence the core of the ALICE tier2 centre, is an LSF/SGE batch farm, currently split into three subclusters with a total of 15,000 CPU cores shared by the participating experiments, and accessible both locally and soon also completely via Grid. In terms of data storage, a 5.5 PB Lustre file system, directly accessible from all worker nodes, is maintained, as well as a 300 TB xrootd-based Grid storage element. Based on this existing expertise, and utilising ALICE's middleware ‘AliEn’, the Grid infrastructure for PANDA and CBM is being built. Besides a tier0 centre at GSI, the computing Grids of the two FAIR collaborations now encompass more than 17 sites in 11 countries and are constantly expanding. The operation of the distributed FAIR computing infrastructure benefits significantly from the experience gained with the ALICE tier2 centre. A close collaboration between ALICE Offline and FAIR provides mutual advantages. The employment of a common Grid middleware as well as compatible simulation and analysis software frameworks ensures significant synergy effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darrow, Ken; Hedman, Bruce
Data centers represent a rapidly growing and very energy-intensive activity in commercial, educational, and government facilities. In the last five years the growth of this sector was the electric-power equivalent of seven new coal-fired power plants. Data centers consume 1.5% of the total power in the U.S. Growth over the next five to ten years is expected to require a similar increase in power generation. This energy consumption is concentrated in buildings that are 10-40 times more energy intensive than a typical office building. The sheer size of the market, the concentrated energy consumption per facility, and the tendency of facilities to cluster in 'high-tech' centers all contribute to a potential power infrastructure crisis for the industry. Meeting the energy needs of data centers is a moving target. Computing power is advancing rapidly, which reduces the energy requirements for data centers. Considerable work is going into improving the computing power of servers and other processing equipment. However, this increase in computing power is increasing the power densities of this equipment. While fewer pieces of equipment may be needed to meet a given data processing load, the energy density of a facility designed to house this higher-efficiency equipment will be as high as or higher than it is today. In other words, while the data center of the future may have the IT power of ten data centers of today, it is also going to have higher power requirements and higher power densities. This report analyzes the opportunities for CHP technologies to assist primary power in making the data center more cost-effective and energy efficient. Broader application of CHP will lower the demand for electricity from central stations and reduce the pressure on electric transmission and distribution infrastructure.
This report is organized into the following sections: (1) Data Center Market Segmentation--a description of the overall size of the market, the size and types of facilities involved, and the geographic distribution. (2) Data Center Energy Use Trends--a discussion of energy use and expected energy growth and the typical energy consumption and uses in data centers. (3) CHP Applicability--potential configurations, CHP case studies, applicable equipment, heat recovery opportunities (cooling), cost and performance benchmarks, and power reliability benefits. (4) CHP Drivers and Hurdles--evaluation of user benefits, social benefits, market structural issues and attitudes toward CHP, and regulatory hurdles. (5) CHP Paths to Market--discussion of technical needs, education, and strategic partnerships needed to promote CHP in the IT community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wheeler, W.
1998-12-01
This report estimates the economic and financial effects and the benefits of compliance with the proposed effluent limitations guidelines and standards for the Centralized Waste Treatment (CWT) industry. The Environmental Protection Agency (EPA) has measured these impacts in terms of changes in the profitability of waste treatment operations at CWT facilities, changes in market prices of CWT services, and changes in the quantities of waste managed at CWT facilities in six geographic regions. EPA has also examined the impacts on companies owning CWT facilities (including impacts on small entities), on communities in which CWT facilities are located, and on environmental justice. EPA examined the benefits to society of the CWT effluent limitations guidelines and standards by examining cancer and non-cancer health effects of the regulation, recreational benefits, and cost savings to publicly owned treatment works (POTWs) to which indirect-discharging CWT facilities send their wastewater.
Distributed health care imaging information systems
NASA Astrophysics Data System (ADS)
Thompson, Mary R.; Johnston, William E.; Guojun, Jin; Lee, Jason; Tierney, Brian; Terdiman, Joseph F.
1997-05-01
We have developed an ATM network-based system to collect and catalogue cardio-angiogram videos from the source at a Kaiser central facility and make them available for viewing by doctors at primary care Kaiser facilities. This is an example of the general problem of diagnostic data being generated at tertiary facilities, while the images, or other large data objects they produce, need to be used from a variety of other locations such as doctors' offices or local hospitals. We describe the use of a highly distributed computing and storage architecture to provide all aspects of collecting, storing, analyzing, and accessing such large data-objects in a metropolitan area ATM network. Our large data-object management system provides the network interface between the object sources, the data management system and the users of the data. As the data is being stored, a cataloguing system automatically creates and stores condensed versions of the data, textual metadata and pointers to the original data. The catalogue system provides a Web-based graphical interface to the data. The user is able to view the low-resolution data with a standard Internet connection and Web browser. If high resolution is required, a high-speed connection and special application programs can be used to view the high-resolution original data.
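The cataloguing step described in this abstract, storing a condensed version, textual metadata, and a pointer to the original large data object, can be sketched as follows. The record fields and the `store://` location string are illustrative assumptions, not the system's actual schema.

```python
import hashlib

def catalogue_object(data: bytes, location: str, preview_bytes: int = 64) -> dict:
    """Build a catalogue record for a large data object: a condensed
    preview, simple textual metadata, and a pointer back to the original."""
    return {
        "object_id": hashlib.sha256(data).hexdigest()[:16],  # stable content-derived id
        "preview": data[:preview_bytes],                     # condensed version for browsing
        "size_bytes": len(data),                             # textual metadata
        "pointer": location,                                 # where the full object lives
    }

# Hypothetical location string; a real system would use its own addressing scheme.
record = catalogue_object(b"frame-data" * 100_000, "store://central/angio/0001")
print(record["size_bytes"], record["pointer"])
```

A browser-facing interface would serve only the small `preview` and metadata over an ordinary connection, following the `pointer` to the full-resolution object only when a high-speed link is available.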
ERIC Educational Resources Information Center
Enriquez, Judith Guevarra
2010-01-01
In this article, centrality is explored as a measure of computer-mediated communication (CMC) in networked learning. Centrality measure is quite common in performing social network analysis (SNA) and in analysing social cohesion, strength of ties and influence in CMC, and computer-supported collaborative learning research. It argues that measuring…
The UK Human Genome Mapping Project online computing service.
Rysavy, F R; Bishop, M J; Gibbs, G P; Williams, G W
1992-04-01
This paper presents an overview of computing and networking facilities developed by the Medical Research Council to provide online computing support to the Human Genome Mapping Project (HGMP) in the UK. The facility is connected to a number of other computing facilities in various centres of genetics and molecular biology research excellence, either directly via high-speed links or through national and international wide-area networks. The paper describes the design and implementation of the current system, a 'client/server' network of Sun, IBM, DEC and Apple servers, gateways and workstations. A short outline of online computing services currently delivered by this system to the UK human genetics research community is also provided. More information about the services and their availability could be obtained by a direct approach to the UK HGMP-RC.
Cloud CPFP: a shotgun proteomics data analysis pipeline using cloud and high performance computing.
Trudgian, David C; Mirzaei, Hamid
2012-12-07
We have extended the functionality of the Central Proteomics Facilities Pipeline (CPFP) to allow use of remote cloud and high performance computing (HPC) resources for shotgun proteomics data processing. CPFP has been modified to include modular local and remote scheduling for data processing jobs. The pipeline can now be run on a single PC or server, a local cluster, a remote HPC cluster, and/or the Amazon Web Services (AWS) cloud. We provide public images that allow easy deployment of CPFP in its entirety in the AWS cloud. This significantly reduces the effort necessary to use the software, and allows proteomics laboratories to pay for compute time ad hoc, rather than obtaining and maintaining expensive local server clusters. Alternatively the Amazon cloud can be used to increase the throughput of a local installation of CPFP as necessary. We demonstrate that cloud CPFP allows users to process data at higher speed than local installations but with similar cost and lower staff requirements. In addition to the computational improvements, the web interface to CPFP is simplified, and other functionalities are enhanced. The software is under active development at two leading institutions and continues to be released under an open-source license at http://cpfp.sourceforge.net.
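CPFP's modular local/remote scheduling is not specified in detail in this abstract; a minimal sketch of the general idea, choosing between a local cluster and a cloud backend based on available capacity, might look like the following. The function name, threshold, and backend labels are hypothetical, not CPFP's actual API.

```python
def choose_backend(job_core_hours: float, local_free_cores: int,
                   cloud_enabled: bool = True) -> str:
    """Pick an execution backend for a processing job: prefer the local
    cluster while it has headroom, otherwise burst to the cloud (or queue
    locally if no cloud backend is configured)."""
    # Heuristic: assume each free local core can absorb ~4 core-hours promptly.
    if local_free_cores > 0 and job_core_hours <= local_free_cores * 4:
        return "local"
    return "cloud" if cloud_enabled else "local-queue"

print(choose_backend(8, 16))    # small job, plenty of local headroom
print(choose_backend(500, 16))  # too large for the local cluster
```

This kind of dispatch rule is what lets a pipeline pay for compute time ad hoc: routine jobs stay on local hardware, and only overflow work incurs cloud charges.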
32 CFR 1906.150 - Program accessibility: Existing facilities.
Code of Federal Regulations, 2012 CFR
2012-07-01
... INTELLIGENCE AGENCY ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE CENTRAL INTELLIGENCE AGENCY § 1906.150 Program accessibility: Existing facilities. (a...
32 CFR 1906.150 - Program accessibility: Existing facilities.
Code of Federal Regulations, 2013 CFR
2013-07-01
... INTELLIGENCE AGENCY ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE CENTRAL INTELLIGENCE AGENCY § 1906.150 Program accessibility: Existing facilities. (a...
32 CFR 1906.150 - Program accessibility: Existing facilities.
Code of Federal Regulations, 2014 CFR
2014-07-01
... INTELLIGENCE AGENCY ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE CENTRAL INTELLIGENCE AGENCY § 1906.150 Program accessibility: Existing facilities. (a...
32 CFR 1906.150 - Program accessibility: Existing facilities.
Code of Federal Regulations, 2011 CFR
2011-07-01
... INTELLIGENCE AGENCY ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE CENTRAL INTELLIGENCE AGENCY § 1906.150 Program accessibility: Existing facilities. (a...
32 CFR 1906.150 - Program accessibility: Existing facilities.
Code of Federal Regulations, 2010 CFR
2010-07-01
... INTELLIGENCE AGENCY ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE CENTRAL INTELLIGENCE AGENCY § 1906.150 Program accessibility: Existing facilities. (a...
Computational Science at the Argonne Leadership Computing Facility
NASA Astrophysics Data System (ADS)
Romero, Nichols
2014-03-01
The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.
Wireless remote monitoring of critical facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, Hanchung; Anderson, John T.; Liu, Yung Y.
A method, apparatus, and system are provided for monitoring environment parameters of critical facilities. A Remote Area Modular Monitoring (RAMM) apparatus is provided for monitoring environment parameters of critical facilities. The RAMM apparatus includes a battery power supply and a central processor. The RAMM apparatus includes a plurality of sensors monitoring the associated environment parameters and at least one communication module for transmitting one or more monitored environment parameters. The RAMM apparatus is powered by the battery power supply and controlled by the central processor operating a wireless sensor network (WSN) platform when the facility condition is disrupted. The RAMM apparatus includes a housing prepositioned at a strategic location, for example, where a dangerous build-up of contamination and radiation may preclude subsequent manned entrance and surveillance.
The utilization of intravenous therapy programs in community long-term care nursing facilities.
Weinberg, A D; Pals, J K; Wei, J Y
1997-01-01
To determine if non-federal Boston-area long-term care nursing facilities are actively using intravenous (IV) therapy as a form of treatment, to describe the specific design of such programs, and to assess the availability of central line IVs, percutaneous endoscopic gastrostomy (PEG) tubes and hypodermoclysis for hydration in this setting. DESIGN/SETTINGS: A prospective telephone survey of 100 Boston-area skilled nursing facilities, each with a minimum of 50 beds and representing a total of 12,763 beds, certified to provide both Medicaid (Title-19) and Medicare services, to ascertain their ability to provide IV and other modes of hydration for their residents. A series of questions was asked of a member of the staff knowledgeable in the operations of the nursing facility. Questions included whether an IV program was in existence, duration of the program, provider of IV training for nurses, presence of a subacute unit, whether IVs were administered in non-subacute areas, frequency of IV usage, the ability to manage central lines and the use of PEG tubes or hypodermoclysis for hydration. A total of 100 nursing facilities were surveyed between September and October of 1996. A total of 79 nursing facilities had active IV programs (79%) and 54 of those (68%) also managed central lines. However, in those facilities with active IV programs, 73% (N = 58) reported administering a total of less than five IVs per month. Training for 82% of the nursing facilities (N = 65) was by an outside vendor pharmacy and initial training ranged from one to three days in duration. Of the 19 nursing facilities with IV programs available only in subacute or equivalent units, only 26% (N = 5) did not allow direct transfer of residents from other wards into these units. Of the 79 nursing facilities having IV capability, a total of 91% (N = 72) have also used PEG tubes for hydration and nutritional needs, although only 6% (N = 5) have ever used hypodermoclysis for hydration.
The majority of nursing facilities in the Boston area provide IV programs for their residents, although in limited numbers on a monthly basis. Residents with central lines are admitted in the majority of these nursing facilities although total staff training time is only one to three days. The use of PEG tubes for hydration is quite frequent, although the use of hypodermoclysis was extremely low. Further work is necessary to fully elucidate the clinical implications of whether these programs decrease the need for acute hospitalization or are used mainly in the post-hospitalization (Medicare A-covered) period.
Corrosion Control of Central Vehicle Wash Facility Pump Components Using Alternative Alloy Coatings
2016-07-01
military installations are essential for supporting the readiness of tactical vehicles. Steel wash-rack pumps are vulnerable to accelerated...Management Command (IMCOM). The technical monitors were Daniel J. Dunmire (OUSD(AT&L)), Bernie Rodriguez (IMPW-FM), and Valerie D. Hines (DAIM-ODF...statement Large steel water pumps are used to pump water into the Central Vehicle Wash Facility (CVWF) for vehicle washing at Fort Polk, LA. The interior
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fliermans, C.B.; Hazen, T.C.; Bledsoe, H.
1993-10-01
The contamination of subsurface terrestrial environments by organic contaminants is a global phenomenon. The remediation of such environments requires innovative assessment techniques and strategies for successful clean-ups. The Central Shops Diesel Storage Facility at Savannah River Site was characterized using innovative approaches to determine the extent of subsurface diesel fuel contamination, and effective bioremediation techniques for clean-up of the contaminant plume have been established.
2014-09-12
CAPE CANAVERAL, Fla. – Inside the Horizontal Integration Facility at Space Launch Complex 37 at Cape Canaveral Air Force Station in Florida, preparations are underway to mate the second stage of a Delta IV Heavy rocket to the central core booster of the three booster stages for the unpiloted Exploration Flight Test-1, or EFT-1. During the mission, Orion will travel farther into space than any human spacecraft has gone in more than 40 years. The data gathered during the flight will influence design decisions, validate existing computer models and innovative new approaches to space systems development, as well as reduce overall mission risks and costs for later Orion flights. Liftoff of Orion on the first flight test is planned for December 2014. Photo credit: NASA/Daniel Casper
Control of a solar-energy-supplied electrical-power system without intermediate circuitry
NASA Astrophysics Data System (ADS)
Leistner, K.
A computer control system is developed for electric-power systems comprising solar cells and small numbers of users with individual centrally controlled converters (and storage facilities when needed). Typical system structures are reviewed; the advantages of systems without an intermediate network are outlined; the demands on a control system in such a network (optimizing generator working point and power distribution) are defined; and a flexible modular prototype system is described in detail. A charging station for lead batteries used in electric automobiles is analyzed as an example. The power requirements of the control system (30 W for generator control and 50 W for communications and distribution control) are found to limit its use to larger networks.
Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center
NASA Astrophysics Data System (ADS)
Adakin, A.; Anisenkov, A.; Belov, S.; Chubarov, D.; Kalyuzhny, V.; Kaplin, V.; Korol, A.; Kuchin, N.; Lomakin, S.; Nikultsev, V.; Skovpen, K.; Sukharev, A.; Zaytsev, A.
2012-12-01
Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, several computing facilities are currently hosted by NSC institutes, each optimized for a particular set of tasks; the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM&MG), and the Grid Computing Facility of BINP. A dedicated optical network with an initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on the XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livny, Miron; Shank, James; Ernst, Michael
Under this SciDAC-2 grant the project’s goal was to stimulate new discoveries by providing scientists with effective and dependable access to an unprecedented national distributed computational facility: the Open Science Grid (OSG). We proposed to achieve this through the work of the Open Science Grid Consortium: a unique hands-on multi-disciplinary collaboration of scientists, software developers and providers of computing resources. Together the stakeholders in this consortium sustain and use a shared distributed computing environment that transforms simulation and experimental science in the US. The OSG consortium is an open collaboration that actively engages new research communities. We operate an open facility that brings together a broad spectrum of compute, storage, and networking resources and interfaces to other cyberinfrastructures, including the US XSEDE (previously TeraGrid), the European Grids for ESciencE (EGEE), as well as campus and regional grids. We leverage middleware provided by computer science groups, facility IT support organizations, and computing programs of application communities for the benefit of consortium members and the US national CI.
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...
2017-09-29
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.
NASA Astrophysics Data System (ADS)
Venkateswarlu, P.
2017-07-01
Reforms in the undergraduate engineering curriculum to produce engineers with entrepreneurial skills should address real-world problems relevant to industry and society, with active industry support. Technology-assisted, hands-on projects involving experimentation, design simulation and prototyping will transform graduates into professionals with the necessary skills to create and advance knowledge that meets global standards. To achieve this goal, this paper proposes establishing a central facility, the 'Centre for Engineering Experimentation and Design Simulation' (CEEDS), in autonomous engineering colleges in India. The centre will be equipped with the most recent technology resources and computational facilities, where students execute novel interdisciplinary product-oriented projects benefiting both industry and society. Students undertake two projects: a short-term project aimed at an engineering solution to a problem in energy, health, or the environment, and a major industry-supported project devoted to a product that enhances innovation and creativity. The paper presents the current status, the theoretical and pedagogical foundation for the centre's relevance, an activity plan and its implementation in the centre for product-based learning, with illustrative examples.
Slew maneuvers on the SCOLE Laboratory Facility
NASA Technical Reports Server (NTRS)
Williams, Jeffrey P.
1987-01-01
The Spacecraft Control Laboratory Experiment (SCOLE) was conceived to provide a physical test bed for the investigation of control techniques for large flexible spacecraft. The control problems studied are slewing maneuvers and pointing operations. The slew is defined as a minimum time maneuver to bring the antenna line-of-sight (LOS) pointing to within an error limit of the pointing target. The second objective is to rotate about the LOS within the 0.02 degree error limit. The SCOLE problem is defined as two design challenges: control laws for a mathematical model of a large antenna attached to the Space Shuttle by a long flexible mast; and implementation of a control scheme, based on those control laws, on a laboratory representation of the structure. Control sensors and actuators are typical of those which the control designer would have to deal with on an actual spacecraft. Computational facilities consist of microcomputer based central processing units with appropriate analog interfaces for implementation of the primary control system and the attitude estimation algorithm. Preliminary results of some slewing control experiments are given.
LBNL Computational Research and Theory Facility Groundbreaking - Full Press Conference. Feb 1st, 2012
Yelick, Kathy
2018-01-24
Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.
LBNL Computational Research and Theory Facility Groundbreaking. February 1st, 2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yelick, Kathy
2012-02-02
Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.
12 CFR 725.18 - Creditworthiness.
Code of Federal Regulations, 2010 CFR
2010-01-01
... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.18 Creditworthiness. (a) Prior to Facility... advances for its liquidity needs. [44 FR 49437, Aug. 23, 1979, as amended at 69 FR 27829, May 17, 2004] ...
Jack Rabbit Pretest 2021E PT7 Photonic Doppler Velocimetry Data Volume 7 Section 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, M M; Strand, O T; Bosson, S T
The Jack Rabbit Pretest (PT) 2021E PT7 experiment was fired on April 3, 2008 at the Contained Firing Facility, Site 300, Lawrence Livermore National Laboratory. This experiment is part of an effort to determine the properties of LX-17 in a regime where corner-turning behavior and dead-zone formation are not well understood. Photonic Doppler Velocimetry (PDV) measured diagnostic plate velocities confirming the presence of a persistent LX-17 dead-zone formation and the resultant impulse gradient applied under the diagnostic plate. The Jack Rabbit Pretest 2021E PT7, 160 millimeter diameter experiment returned data on all eight PDV probes. The probes measured on the central axis and at 20, 30, 35, 45, 55, 65, 75 millimeters from the central axis. The experiment was shot at an ambient room temperature of 65 degrees Fahrenheit. The PDV earliest signal extinction was 50.7 microseconds at 45 millimeters. The latest PDV signal extinction time was 65.0 microseconds at 20 millimeters. The measured velocity ranged from meters per second to thousands of meters per second. First detonation wave induced jump-off was measured at 55 millimeters and at 15.2 microseconds. The PDV data provided an unambiguous indication of dead-zone formation and an impulse gradient applied to the diagnostic plate. The central axis had a last measured velocity of 1447 meters per second. At 65 millimeters the last measured velocity was 2360 meters per second. The low-to-high velocity ratio was 0.61. Velocity data was integrated to compute diagnostic plate cross section profiles. Velocity data was differentiated to compute a peak pressure under the diagnostic plate at the central axis of 49 kilobars at 23.3 microseconds. Substantial motion (>1 m/s) of the diagnostic plate over the dead-zone is followed by detonation region motion within approximately 4.6 microseconds.
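The two post-processing steps named in the abstract, integrating a velocity trace (for plate profiles) and differentiating it (for the pressure estimate), can be sketched numerically. The trace below is synthetic illustrative data and the helper functions are ours, not the experiment's analysis code:

```python
# Sketch: integrate a PDV velocity trace to get displacement, and
# differentiate it to get acceleration. Synthetic data, illustration only.

def integrate_trapezoid(t, v):
    """Cumulative displacement from velocity samples (trapezoid rule)."""
    x = [0.0]
    for i in range(1, len(t)):
        x.append(x[-1] + 0.5 * (v[i] + v[i - 1]) * (t[i] - t[i - 1]))
    return x

def differentiate_central(t, v):
    """Acceleration via central differences (one-sided at the ends)."""
    n = len(t)
    a = []
    for i in range(n):
        j, k = max(i - 1, 0), min(i + 1, n - 1)
        a.append((v[k] - v[j]) / (t[k] - t[j]))
    return a

t = [i * 1e-6 for i in range(6)]                 # seconds
v = [0.0, 300.0, 800.0, 1200.0, 1400.0, 1447.0]  # metres per second
print(integrate_trapezoid(t, v)[-1])  # total displacement (~4.42 mm)
print(differentiate_central(t, v)[2]) # mid-trace acceleration, m/s^2
```

In practice a pressure estimate would additionally require a material model for the diagnostic plate; this sketch stops at the kinematics.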
Simulation Of Seawater Intrusion With 2D And 3D Models: Nauru Island Case Study
NASA Astrophysics Data System (ADS)
Ghassemi, F.; Jakeman, A. J.; Jacobson, G.; Howard, K. W. F.
1996-03-01
With the advent of large computing capacities during the past few decades, sophisticated models have been developed for the simulation of seawater intrusion in coastal and island aquifers. Currently, several models are commercially available for the simulation of this problem. This paper describes the mathematical basis and application of the SUTRA and HST3D models to simulate seawater intrusion in Nauru Island, in the central Pacific Ocean. A comparison of the performance and limitations of these two models in simulating a real problem indicates that three-dimensional simulation of seawater intrusion with the HST3D model has the major advantage of being able to specify natural boundary conditions as well as pumping stresses. However, HST3D requires a small grid size and short time steps in order to maintain numerical stability and accuracy. These requirements lead to solution of a large set of linear equations that requires the availability of powerful computing facilities in terms of memory and computing speed. Combined results of the two simulation models indicate a safe pumping rate of 400 m3/d for the aquifer on Nauru Island, where additional fresh water is presently needed for the rehabilitation of mined-out land.
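The grid-size/time-step trade-off noted for HST3D is a generic stability constraint; a minimal sketch in terms of an advective Courant-number limit follows. The numbers are invented and HST3D's actual stability criteria are more involved; this shows only the qualitative scaling.

```python
# Advective Courant-number stability limit, C = v * dt / dx <= C_max:
# refining the grid (smaller dx) forces a proportionally smaller time step.

def max_stable_dt(dx_m, velocity_m_per_day, courant_max=1.0):
    """Largest time step (days) keeping C = v*dt/dx <= courant_max."""
    return courant_max * dx_m / velocity_m_per_day

print(max_stable_dt(10.0, 2.0))  # 5.0 days at 10 m spacing
print(max_stable_dt(5.0, 2.0))   # 2.5 days: halving dx halves dt
```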
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dearing, J.F.
The Subchannel Analysis of Blockages in Reactor Elements (SABRE) computer code, developed by the United Kingdom Atomic Energy Authority, is currently the only practical tool available for performing detailed analyses of velocity and temperature fields in the recirculating flow regions downstream of blockages in liquid-metal fast breeder reactor (LMFBR) pin bundles. SABRE is a subchannel analysis code; that is, it accurately represents the complex geometry of nuclear fuel pins arranged on a triangular lattice. The results of SABRE computational models are compared here with temperature data from two out-of-pile 19-pin test bundles from the Thermal-Hydraulic Out-of-Reactor Safety (THORS) Facility at Oak Ridge National Laboratory. One of these bundles has a small central flow blockage (bundle 3A), while the other has a large edge blockage (bundle 5A). Values that give best agreement with experiment for the empirical thermal mixing correlation factor, FMIX, in SABRE are suggested. These values of FMIX are Reynolds-number dependent, however, indicating that the coded turbulent mixing correlation is not appropriate for wire-wrap pin bundles.
Ethics and the 7 P's of computer use policies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, T.J.; Voss, R.B.
1994-12-31
A Computer Use Policy (CUP) defines who can use the computer facilities for what. The CUP is the institution's official position on the ethical use of computer facilities. The authors believe that writing a CUP provides an ideal platform to develop a group ethic for computer users. In prior research, the authors have developed a seven-phase model for writing CUPs, entitled the 7 P's of Computer Use Policies. The purpose of this paper is to present the model and discuss how the 7 P's can be used to identify and communicate a group ethic for the institution's computer users.
2004-02-19
KENNEDY SPACE CENTER, FLA. - NASA Administrator Sean O’Keefe (left) greets U.S. Representative Ric Keller during a tour of the Central Florida Research Park, near Orlando. Central Florida leaders are proposing the research park as the site for the new NASA Shared Services Center. The center would centralize NASA’s payroll, accounting, human resources, facilities and procurement offices that are now handled at each field center. The consolidation is part of the One NASA focus. Six sites around the U.S. are under consideration by NASA.
2004-02-19
KENNEDY SPACE CENTER, FLA. - NASA Administrator Sean O’Keefe (right) greets Florida Congressman Tom Feeney during a tour of the Central Florida Research Park, near Orlando. Central Florida leaders are proposing the research park as the site for the new NASA Shared Services Center. The center would centralize NASA’s payroll, accounting, human resources, facilities and procurement offices that are now handled at each field center. The consolidation is part of the One NASA focus. Six sites around the U.S. are under consideration by NASA.
ARN Integrated Retail Module (IRM) & 3D Whole Body Scanner System at Fort Carson, Colorado
2006-12-01
the Central Issue Facility (CIF), Ft. Carson, CO; and, 4) Develop and validate dynamic local tariffs. Additional information on Apparel...Scanner; 3) Integrate 3D Whole Body scanning technology with the ARN Integrated Retail Module (IRM) for clothing issue at the Central Issue Facility ...CIF), Ft. Carson, CO; and, 4) Develop and validate dynamic local tariffs. The main goals of the ARN 3D scanning research initiative at the Ft
78 FR 44972 - Notice of Lodging of Proposed Consent Decree Under the Clean Air Act
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-25
... (``Title V'') and the regulations promulgated thereunder at 40 CFR part 71, at a facility known as PLA-9 Central Deliver Point, also known as PLA-9 CDP (the ``PLA-9 Facility''). The PLA-9 Facility is located... Indian Reservation. The PLA-9 Facility is now shut down. The Decree requires Williams pay a $63,000 civil...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dahlburg, Jill; Corones, James; Batchelor, Donald
Fusion is potentially an inexhaustible energy source whose exploitation requires a basic understanding of high-temperature plasmas. The development of a science-based predictive capability for fusion-relevant plasmas is a challenge central to fusion energy science, in which numerical modeling has played a vital role for more than four decades. A combination of the very wide range in temporal and spatial scales, extreme anisotropy, the importance of geometric detail, and the requirement of causality, which makes it impossible to parallelize over time, makes this problem one of the most challenging in computational physics. Sophisticated computational models are under development for many individual features of magnetically confined plasmas, and increases in the scope and reliability of feasible simulations have been enabled by increased scientific understanding and improvements in computer technology. However, full predictive modeling of fusion plasmas will require qualitative improvements and innovations to enable cross coupling of a wider variety of physical processes and to allow solution over a larger range of space and time scales. The exponential growth of computer speed, coupled with the high cost of large-scale experimental facilities, makes an integrated fusion simulation initiative a timely and cost-effective opportunity. Worldwide progress in laboratory fusion experiments provides the basis for a recent FESAC recommendation to proceed with a burning plasma experiment (see FESAC Review of Burning Plasma Physics Report, September 2001). Such an experiment, at the frontier of the physics of complex systems, would be a huge step in establishing the potential of magnetic fusion energy to contribute to the world’s energy security. An integrated simulation capability would dramatically enhance the utilization of such a facility and lead to optimization of toroidal fusion plasmas in general.
This science-based predictive capability, which was cited in the FESAC integrated planning document (IPPA, 2000), represents a significant opportunity for the DOE Office of Science to further the understanding of fusion plasmas to a level unparalleled worldwide.
[On the history of the creation of the central anatomicopathological facility of the Ministry of Defence].
Chirskiĭ, V S; Sibirev, S A; Bushurov, S E
2012-12-01
The system of anatomicopathological facilities was created in the 1930s and the first years of the Great Patriotic War. The goal of this system was to increase the effectiveness of the Sanitary Corps of the Red Army. These anatomicopathological facilities analyzed the causes of death of injured soldiers during all stages of the treatment-evacuation support of troops, as well as mistakes made by medical specialists during first aid treatment. Organisational forms of anatomicopathological activity were changed and developed according to acquired battle experience. The main stage in the formation of the anatomicopathological service of the Red Army, and in fact the concluding period in its organisational formation, was the establishment of the Central anatomicopathological facility, the main methodological, organisational, coordinating and monitoring center of anatomicopathological activity of the Armed Forces of the Russian Federation.
Expanding the Scope of High-Performance Computing Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uram, Thomas D.; Papka, Michael E.
The high-performance computing centers of the future will expand their roles as service providers, and as the machines scale up, so should the sizes of the communities they serve. National facilities must cultivate their users as much as they focus on operating machines reliably. The authors present five interrelated topic areas that are essential to expanding the value provided to those performing computational science.
Asah, Flora
2013-04-01
This study discusses factors inhibiting computer usage for work-related tasks among computer-literate professional nurses within rural healthcare facilities in South Africa. In the past two decades computer literacy courses have not been part of the nursing curricula. Computer courses are offered by the State Information Technology Agency. Despite this, there seems to be limited use of computers by professional nurses in the rural context. Focus group interviews were held with 40 professional nurses from three government hospitals in northern KwaZulu-Natal. Contributing factors were found to be a lack of information technology infrastructure, restricted access to computers, and deficits in technical and nursing management support. The physical location of computers within the health-care facilities and a lack of relevant software emerged as specific obstacles to usage. Provision of continuous and active support from nursing management could positively influence computer usage among professional nurses. A closer integration of information technology and computer literacy skills into existing nursing curricula would foster a positive attitude towards computer usage through early exposure. Responses indicated that a change of mindset may be needed on the part of nursing management so that they begin to actively promote ready access to computers as a means of creating greater professionalism and collegiality. © 2011 Blackwell Publishing Ltd.
EPA Facility Registry System (FRS): NEPT
This web feature service contains location and facility identification information from EPA's Facility Registry System (FRS) for the subset of facilities that link to the National Environmental Performance Track (NEPT) Program dataset. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using vigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. Additional information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs
EPA Facility Registry Service (FRS): NEI
This web feature service contains location and facility identification information from EPA's Facility Registry Service (FRS) for the subset of facilities that link to the National Emissions Inventory (NEI) Program dataset. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using vigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. Additional information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs
An overview of current activities at the National Solar Thermal Test Facility
NASA Astrophysics Data System (ADS)
Cameron, C. P.; Klimas, P. C.
This paper is a description of the United States Department of Energy's National Solar Thermal Test Facility, highlighting current test programs. In the central receiver area, research underway supports commercialization of molten nitrate salt technology, including receivers, thermal energy transport, and corrosion experiments. Concentrator research includes large-area, glass-metal heliostats and stretched-membrane heliostats and dishes. Test activities in support of dish-Stirling systems with reflux receivers are described. Research on parabolic troughs includes characterization of several receiver configurations. Other test facility activities include solar detoxification experiments, design assistance testing of commercially-available solar hardware, and non-DOE-funded work, including thermal exposure tests and testing of volumetric and PV central receiver concepts.
MIP models for connected facility location: A theoretical and computational study
Gollowitzer, Stefan; Ljubić, Ivana
2011-01-01
This article comprises the first theoretical and computational study on mixed integer programming (MIP) models for the connected facility location problem (ConFL). ConFL combines facility location and Steiner trees: given a set of customers, a set of potential facility locations and some inter-connection nodes, ConFL searches for the minimum-cost way of assigning each customer to exactly one open facility, and connecting the open facilities via a Steiner tree. The costs needed for building the Steiner tree, facility opening costs and the assignment costs need to be minimized. We model ConFL using seven compact and three mixed integer programming formulations of exponential size. We also show how to transform ConFL into the Steiner arborescence problem. A full hierarchy between the models is provided. For two exponential size models we develop a branch-and-cut algorithm. An extensive computational study is based on two benchmark sets of randomly generated instances with up to 1300 nodes and 115,000 edges. We empirically compare the presented models with respect to the quality of obtained bounds and the corresponding running time. We report optimal values for all but 16 instances for which the obtained gaps are below 0.6%. PMID:25009366
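To make the ConFL objective concrete, here is a toy brute-force evaluation of its three cost terms (opening + assignment + connection). All data is invented; with only two candidate facilities the connection cost reduces to a single inter-facility distance standing in for the Steiner tree, so this illustrates the objective itself, not the MIP models studied in the article.

```python
from itertools import combinations

# Invented toy instance: two candidate facilities, two customers.
open_cost = {"f1": 3.0, "f2": 4.0}                 # facility opening costs
assign = {("c1", "f1"): 1.0, ("c1", "f2"): 5.0,    # customer->facility costs
          ("c2", "f1"): 4.0, ("c2", "f2"): 1.0}
dist = {frozenset({"f1", "f2"}): 2.0}              # inter-facility distance
customers = ["c1", "c2"]
facilities = sorted(open_cost)

def connect_cost(open_facilities):
    """Cost of the tree joining the open facilities.

    Exact here because at most two facilities can be open; in general this
    term is a Steiner tree over the open facilities and must be optimized.
    """
    fs = list(open_facilities)
    if len(fs) <= 1:
        return 0.0
    return sum(dist[frozenset(p)] for p in combinations(fs, 2))

def total_cost(open_facilities):
    opening = sum(open_cost[f] for f in open_facilities)
    assignment = sum(min(assign[(c, f)] for f in open_facilities)
                     for c in customers)
    return opening + assignment + connect_cost(open_facilities)

# Enumerate every nonempty facility subset and keep the cheapest.
best = min((total_cost(fs), fs)
           for r in range(1, len(facilities) + 1)
           for fs in combinations(facilities, r))
print(best)  # (8.0, ('f1',)): open f1 only, assign both customers to it
```

The MIP formulations in the article replace this enumeration with binary opening, assignment, and edge variables so that the same minimum can be found at scale.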
Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.
2014-01-01
The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computer tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019
Eisenlohr, William Stewart; Stewart, J.E.
1952-01-01
During the night of August 4-5, 1943, a violent thunderstorm of unusual intensity occurred in parts of Braxton, Calhoun, Gilmer, Ritchie, and Wirt Counties in the Little Kanawha River Basin in central West Virginia. Precipitation amounted to as much as 15 inches in 2 hours in some sections. As a result, many small streams and a reach of the Little Kanawha River in the vicinity of Burnsville and Gilmer reached the highest stages known. Computations based on special surveys made at suitable sites on representative small streams in the areas of intense flooding indicate that peak discharges closely approach 50 percent of the Jarvis scale. Twenty-three lives were lost on the small tributaries as numerous homes were swept away by the flood, which developed with incredible rapidity during the early morning hours. Damage estimated at $1,300,000 was sustained by farm buildings, crops, land, livestock, railroads, highways, and gas- and oil-producing facilities. Considerable permanent land damage resulted from erosion and deposition of sand and gravel.
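Indirect peak-discharge determinations from post-flood surveys of this kind are classically computed by the slope-area method with Manning's equation. The channel geometry, slope, and roughness coefficient below are invented for illustration and are not taken from the report.

```python
# Slope-area sketch of an indirect peak-discharge estimate via Manning's
# equation. All input values are hypothetical.

def manning_discharge(area_m2, wetted_perimeter_m, slope, n):
    """Q = (1/n) * A * R^(2/3) * S^(1/2) in SI units (m^3/s)."""
    r = area_m2 / wetted_perimeter_m  # hydraulic radius R = A / P
    return (1.0 / n) * area_m2 * r ** (2.0 / 3.0) * slope ** 0.5

# a hypothetical surveyed cross section near the flood peak
q = manning_discharge(area_m2=40.0, wetted_perimeter_m=22.0,
                      slope=0.002, n=0.035)
print(round(q, 1))  # estimated peak discharge, m^3/s (roughly 76)
```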
Adopting a corporate perspective on databases. Improving support for research and decision making.
Meistrell, M; Schlehuber, C
1996-03-01
The Veterans Health Administration (VHA) is at the forefront of designing and managing health care information systems that accommodate the needs of clinicians, researchers, and administrators at all levels. Rather than using one single-site, centralized corporate database, VHA has constructed several large databases with different configurations to meet the needs of users with different perspectives. The largest VHA database is the Decentralized Hospital Computer Program (DHCP), a multisite, distributed data system that uses decoupled hospital databases. The centralization of DHCP policy has promoted data coherence, whereas the decentralization of DHCP management has permitted system development to be done with maximum relevance to the users' local practices. A more recently developed VHA data system, the Event Driven Reporting system (EDR), uses multiple, highly coupled databases to provide workload data at facility, regional, and national levels. The EDR automatically posts a subset of DHCP data to local and national VHA management. The development of the EDR illustrates how adoption of a corporate perspective can offer significant database improvements at reasonable cost and with modest impact on the legacy system.
The Fermilab Accelerator control system
NASA Astrophysics Data System (ADS)
Bogert, Dixon
1986-06-01
With the advent of the Tevatron, considerable upgrades have been made to the controls of all the Fermilab Accelerators. The current system is based on making as much data as possible available to many operators or end-users. Specifically, there are about 100,000 separate readings, settings, and status and control registers in the various machines, all of which can be accessed by seventeen consoles, some in the Main Control Room and others distributed throughout the complex. A "Host" computer network of approximately eighteen PDP-11/34's, seven PDP-11/44's, and three VAX-11/785's supports a distributed data acquisition system including Lockheed MAC-16's left from the original Main Ring and Booster instrumentation and upwards of 1000 Z80, Z8002, and M68000 microprocessors in dozens of configurations. Interaction of the various parts of the system is via a central database stored on the disk of one of the VAXes. The primary computer-hardware communication is via CAMAC for the new Tevatron and Antiproton Source; certain subsystems, among them vacuum, refrigeration, and quench protection, reside in the distributed microprocessors and communicate via GAS, an in-house protocol. An important hardware feature is an accurate clock system making a large number of encoded "events" in the accelerator supercycle available for both hardware modules and computers. System software features include the ability to save the current state of the machine or any subsystem and later restore it or compare it with the state at another time, a general logging facility to keep track of specific variables over long periods of time, detection of "exception conditions" and the posting of alarms, and a central file-sharing capability in which files on VAX disks are available for access by any of the "Host" processors.
Computer graphics and the graphic artist
NASA Technical Reports Server (NTRS)
Taylor, N. L.; Fedors, E. G.; Pinelli, T. E.
1985-01-01
A centralized computer graphics system is being developed at the NASA Langley Research Center. This system was required to satisfy multiuser needs, ranging from presentation quality graphics prepared by a graphic artist to 16-mm movie simulations generated by engineers and scientists. While the major thrust of the central graphics system was directed toward engineering and scientific applications, hardware and software capabilities to support the graphic artists were integrated into the design. This paper briefly discusses the importance of computer graphics in research; the central graphics system in terms of systems, software, and hardware requirements; the application of computer graphics to graphic arts, discussed in terms of the requirements for a graphic arts workstation; and the problems encountered in applying computer graphics to the graphic arts. The paper concludes by presenting the status of the central graphics system.
Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud
NASA Astrophysics Data System (ADS)
Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde
2014-06-01
The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
View looking SE inside Electrical Shop Central of Georgia ...
View looking SE inside Electrical Shop - Central of Georgia Railway, Savannah Repair Shops & Terminal Facilities, Electrical Shop, Bounded by West Broad, Jones, West Boundary & Hull Streets, Savannah, Chatham County, GA
9. Smoke flue coming through Roundhouse roof. Central of ...
9. Smoke flue coming through Roundhouse roof. - Central of Georgia Railway, Savannah Repair Shops & Terminal Facilities, Roundhouse, Site Bounded by West Broad, Jones, West Boundary & Hull, Savannah, Chatham County, GA
The OSG Open Facility: an on-ramp for opportunistic scientific computing
NASA Astrophysics Data System (ADS)
Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.
2017-10-01
The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.
The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jayatilaka, B.; Levshina, T.; Sehgal, C.
FACILITY 713, HALLWAY LOOKING TOWARDS MASTER BEDROOM END, VIEW FACING ...
FACILITY 713, HALLWAY LOOKING TOWARDS MASTER BEDROOM END, VIEW FACING NORTH. - Schofield Barracks Military Reservation, Central-Entry Single-Family Housing Type, Between Bragg & Grime Streets near Ayres Avenue, Wahiawa, Honolulu County, HI
12 CFR 725.23 - Other advances.
Code of Federal Regulations, 2010 CFR
2010-01-01
... ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.23 Other advances. (a) The NCUA Board may authorize extensions of credit to members of the Facility for purposes other than liquidity needs if the NCUA Board, the Board of...
Centralization Versus Decentralization: A Location Analysis Approach for Librarians.
ERIC Educational Resources Information Center
Shishko, Robert; Raffel, Jeffrey
One of the questions that seems to perplex many university and special librarians is whether to move in the direction of centralizing or decentralizing the library's collections and facilities. Presented is a theoretical approach, employing location theory, to the library centralization-decentralization question. Location theory allows the analyst…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerber, Richard; Hack, James; Riley, Katherine
The mission of the U.S. Department of Energy Office of Science (DOE SC) is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security missions of the United States. To achieve these goals in today's world requires investments in not only the traditional scientific endeavors of theory and experiment, but also in computational science and the facilities that support large-scale simulation and data analysis. The Advanced Scientific Computing Research (ASCR) program addresses these challenges in the Office of Science. ASCR's mission is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE. ASCR supports research in computational science, three high-performance computing (HPC) facilities — the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and Leadership Computing Facilities at Argonne (ALCF) and Oak Ridge (OLCF) National Laboratories — and the Energy Sciences Network (ESnet) at Berkeley Lab. ASCR is guided by science needs as it develops research programs, computers, and networks at the leading edge of technologies. As we approach the era of exascale computing, technology changes are creating challenges for science programs in SC for those who need to use high performance computing and data systems effectively. Numerous significant modifications to today's tools and techniques will be needed to realize the full potential of emerging computing systems and other novel computing architectures. To assess these needs and challenges, ASCR held a series of Exascale Requirements Reviews in 2015–2017, one with each of the six SC program offices, and a subsequent Crosscut Review that sought to integrate the findings from each.
Participants at the reviews were drawn from the communities of leading domain scientists, experts in computer science and applied mathematics, ASCR facility staff, and DOE program managers in ASCR and the respective program offices. The purpose of these reviews was to identify mission-critical scientific problems within the DOE Office of Science (including experimental facilities) and determine the requirements for the exascale ecosystem that would be needed to address those challenges. The exascale ecosystem includes exascale computing systems, high-end data capabilities, efficient software at scale, libraries, tools, and other capabilities. This effort will contribute to the development of a strategic roadmap for ASCR compute and data facility investments and will help the ASCR Facility Division establish partnerships with Office of Science stakeholders. It will also inform the Office of Science research needs and agenda. The results of the six reviews have been published in reports available on the web at http://exascaleage.org/. This report presents a summary of the individual reports and of common and crosscutting findings, and it identifies opportunities for productive collaborations among the DOE SC program offices.
Refurbishment and Automation of the Thermal/Vacuum Facilities at the Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Donohue, John T.; Johnson, Chris; Ogden, Rick; Sushon, Janet
1998-01-01
The thermal/vacuum facilities located at the Goddard Space Flight Center (GSFC) have supported both manned and unmanned space flight since the 1960s. Of the 11 facilities, currently 10 of the systems are scheduled for refurbishment and/or replacement as part of a 5-year implementation. Expected return on investment includes the reduction in test schedules, improvements in the safety of facility operations, reduction in the complexity of a test and the reduction in personnel support required for a test. Additionally, GSFC will become a global resource renowned for expertise in thermal engineering, mechanical engineering and for the automation of thermal/vacuum facilities and thermal/vacuum tests. Automation of the thermal/vacuum facilities includes the utilization of Programmable Logic Controllers (PLCs) and the use of Supervisory Control and Data Acquisition (SCADA) systems. These components allow the computer control and automation of mechanical components such as valves and pumps. In some cases, the chamber and chamber shroud require complete replacement while others require only mechanical component retrofit or replacement. The project of refurbishment and automation began in 1996 and has resulted in the computer control of one Facility (Facility #225) and the integration of electronically controlled devices and PLCs within several other facilities. Facility 225 has been successfully controlled by PLC and SCADA for over one year. Insignificant anomalies have occurred and were resolved with minimal impact to testing and operations. The amount of work remaining to be performed will occur over the next four to five years. Fiscal year 1998 includes the complete refurbishment of one facility, computer control of the thermal systems in two facilities, implementation of SCADA and PLC systems to support multiple facilities and the implementation of a Database server to allow efficient test management and data analysis.
24. INTERIOR OF CENTRAL ROOM. BASE POWER PANEL VISIBLE ON ...
24. INTERIOR OF CENTRAL ROOM. BASE POWER PANEL VISIBLE ON RIGHT WALL OF HALLWAY. - Chollas Heights Naval Radio Transmitting Facility, Transmitter Building, 6410 Zero Road, San Diego, San Diego County, CA
Code of Federal Regulations, 2014 CFR
2014-07-01
... Secretary, has waived certain requirements of the Computer Matching and Privacy Protection Act of 1988, 5 U... process known as centralized salary offset computer matching, identify Federal employees who owe delinquent nontax debt to the United States. Centralized salary offset computer matching is the computerized...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodges, G.
2005-03-18
There are currently twenty-four Multi-Filter Rotating Shadowband Radiometers (MFRSR) operating within the Atmospheric Radiation Measurement (ARM) program. Eighteen are located within the Southern Great Plains (SGP) region; there is one at each of the North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites; and one is part of the instrumentation of the ARM Mobile Facility. At this time there are four sites, all extended facilities within the SGP, that are equipped for a MFRSR but do not have one due to instrument failure and a lack of spare instruments. In addition to the MFRSRs, there are three other MFRSR-derived instruments that ARM operates: the Multi-Filter Radiometer (MFR), the Normal Incidence Multi-Filter Radiometer (NIMFR), and the Narrow Field of View (NFOV) radiometer. All are essentially just the head of a MFRSR used in innovative ways. The MFR is mounted on a tower and pointed at the surface. At the SGP Central Facility there is one at ten meters and one at twenty-five meters. The NSA has a MFR at each station, both at the ten-meter level. ARM operates three NIMFRs: one at the SGP Central Facility and one at each of the NSA stations. There are two NFOVs, both at the SGP Central Facility. One is a single channel (870) and the other utilizes two channels (673 and 870).
A Bioinformatics Facility for NASA
NASA Technical Reports Server (NTRS)
Schweighofer, Karl; Pohorille, Andrew
2006-01-01
Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases, and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney of mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.
Designing Facilities for Collaborative Operations
NASA Technical Reports Server (NTRS)
Norris, Jeffrey; Powell, Mark; Backes, Paul; Steinke, Robert; Tso, Kam; Wales, Roxana
2003-01-01
A methodology for designing operational facilities for collaboration by multiple experts has begun to take shape as an outgrowth of a project to design such facilities for scientific operations of the planned 2003 Mars Exploration Rover (MER) mission. The methodology could also be applicable to the design of military "situation rooms" and other facilities for terrestrial missions. It was recognized in this project that modern mission operations depend heavily upon the collaborative use of computers. It was further recognized that tests have shown that the layout of a facility exerts a dramatic effect on the efficiency and endurance of the operations staff. The facility designs (for example, see figure) and the methodology developed during the project reflect this recognition. One element of the methodology is a metric, called effective capacity, that was created for use in evaluating proposed MER operational facilities and may also be useful for evaluating other collaboration spaces, including meeting rooms and military situation rooms. The effective capacity of a facility is defined as the number of people in the facility who can be meaningfully engaged in its operations. A person is considered to be meaningfully engaged if the person can (1) see, hear, and communicate with everyone else present; (2) see the material under discussion (typically data on a piece of paper, computer monitor, or projection screen); and (3) provide input to the product under development by the group. The effective capacity of a facility is less than the number of people that can physically fit in the facility. For example, a typical office that contains a desktop computer has an effective capacity of 4, while a small conference room that contains a projection screen has an effective capacity of around 10.
Little or no benefit would be derived from allowing the number of persons in an operational facility to exceed its effective capacity: At best, the operations staff would be underutilized; at worst, operational performance would deteriorate. Elements of this methodology were applied to the design of three operations facilities for a series of rover field tests. These tests were observed by human-factors researchers and their conclusions are being used to refine and extend the methodology to be used in the final design of the MER operations facility. Further work is underway to evaluate the use of personal digital assistant (PDA) units as portable input interfaces and communication devices in future mission operations facilities. A PDA equipped for wireless communication and Ethernet, Bluetooth, or another networking technology would cost less than a complete computer system, and would enable a collaborator to communicate electronically with computers and with other collaborators while moving freely within the virtual environment created by a shared immersive graphical display.
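The effective-capacity metric described in the abstract above reduces to a simple count over the three engagement criteria. A minimal sketch, assuming a hypothetical `Person` record whose field names are illustrative and not from the paper:

```python
from dataclasses import dataclass

@dataclass
class Person:
    # Illustrative fields mirroring the paper's three criteria for
    # "meaningful engagement"; the names are assumptions, not from the source.
    can_communicate_with_all: bool   # (1) see, hear, and communicate with everyone
    can_see_material: bool           # (2) see the material under discussion
    can_provide_input: bool          # (3) provide input to the group's product

def effective_capacity(occupants):
    """Count the occupants who satisfy all three engagement criteria."""
    return sum(
        1 for p in occupants
        if p.can_communicate_with_all and p.can_see_material and p.can_provide_input
    )
```

Under this reading, a person who can see the shared display but cannot provide input (for example, an overflow observer) does not add to the facility's effective capacity.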
The Ames Power Monitoring System
NASA Technical Reports Server (NTRS)
Osetinsky, Leonid; Wang, David
2003-01-01
The Ames Power Monitoring System (APMS) is a centralized system of power meters, computer hardware, and special-purpose software that collects and stores electrical power data from various facilities at Ames Research Center (ARC). This system is needed because of the large and varying nature of the overall ARC power demand, which has been observed to range from 20 to 200 MW. Large portions of peak demand can be attributed to only three wind tunnels (60, 180, and 100 MW, respectively). The APMS helps ARC avoid or minimize costly demand charges by enabling wind-tunnel operators, test engineers, and the power manager to monitor total demand for the center in real time. These persons receive the information they need to manage and schedule energy-intensive research in advance and to adjust loads in real time to ensure that the overall maximum allowable demand is not exceeded. The APMS (see figure) includes a server computer running the Windows NT operating system and can, in principle, include an unlimited number of power meters and client computers. As configured at the time of reporting the information for this article, the APMS includes more than 40 power meters monitoring all the major research facilities, plus 15 Windows-based client personal computers that display real-time and historical data to users via graphical user interfaces (GUIs). The power meters and client computers communicate with the server using Transmission Control Protocol/Internet Protocol (TCP/IP) on Ethernet networks, variously, through dedicated fiber-optic cables or through the pre-existing ARC local-area network (ARCLAN). The APMS has enabled ARC to achieve significant savings ($1.2 million in 2001) in the cost of power and electric energy by helping personnel to maintain total demand below monthly allowable levels, to manage the overall power factor to avoid low-power-factor penalties, and to use historical system data to identify opportunities for additional energy savings.
The APMS also provides power engineers and electricians with the information they need to plan modifications in advance and perform day-to-day maintenance of the ARC electric-power distribution system.
FACILITY 712, EXTERIOR DETAIL OF FIREPLACE AND LEADED-GLASS WINDOWS, VIEW ...
FACILITY 712, EXTERIOR DETAIL OF FIREPLACE AND LEADED-GLASS WINDOWS, VIEW FACING WEST. - Schofield Barracks Military Reservation, Central-Entry Single-Family Housing Type, Between Bragg & Grime Streets near Ayres Avenue, Wahiawa, Honolulu County, HI
Planning and Designing School Computer Facilities. Interim Report.
ERIC Educational Resources Information Center
Alberta Dept. of Education, Edmonton. Finance and Administration Div.
This publication provides suggestions and considerations that may be useful for school jurisdictions developing facilities for computers in schools. An interim report for both use and review, it is intended to assist school system planners in clarifying the specifications needed by the architects, other design consultants, and purchasers involved.…
Molecular Modeling and Computational Chemistry at Humboldt State University.
ERIC Educational Resources Information Center
Paselk, Richard A.; Zoellner, Robert W.
2002-01-01
Describes a molecular modeling and computational chemistry (MM&CC) facility for undergraduate instruction and research at Humboldt State University. This facility complex allows the introduction of MM&CC throughout the chemistry curriculum with tailored experiments in general, organic, and inorganic courses as well as a new molecular modeling…
Deployment and early experience with remote-presence patient care in a community hospital.
Petelin, J B; Nelson, M E; Goodman, J
2007-01-01
The introduction of the RP6 (InTouch Health, Santa Barbara, CA, USA) remote-presence "robot" appears to offer a useful telemedicine device. The authors describe the deployment and early experience with the RP6 in a community hospital and provided a live demonstration of the system on April 16, 2005 during the Emerging Technologies Session of the 2005 SAGES Meeting in Fort Lauderdale, Florida. The RP6 is a 5-ft 4-in. tall, 215-pound robot that can be remotely controlled from an appropriately configured computer located anywhere on the Internet (i.e., on this planet). The system is composed of a control station (a computer at the central station), a mechanical robot, a wireless network (at the remote facility: the hospital), and a high-speed Internet connection at both the remote (hospital) and central locations. The robot itself houses a rechargeable power supply. Its hardware and software allows communication over the Internet with the central station, interpretation of commands from the central station, and conversion of the commands into mechanical and nonmechanical actions at the remote location, which are communicated back to the central station over the Internet. The RP6 system allows the central party (e.g., physician) to control the movements of the robot itself, see and hear at the remote location (hospital), and be seen and heard at the remote location (hospital) while not physically there. Deployment of the RP6 system at the hospital was accomplished in less than a day. The wireless network at the institution was already in place. The control station setup time ranged from 1 to 4 h and was dependent primarily on the quality of the Internet connection (bandwidth) at the remote locations. Patients who visited with the RP6 on their discharge day could be discharged more than 4 h earlier than with conventional visits, thereby freeing up hospital beds on a busy med-surg floor. 
Patient visits during "off hours" (nights and weekends) were three times more efficient than conventional visits during these times (20 min per visit vs 40-min round trip travel + 20-min visit). Patients and nursing personnel both expressed tremendous satisfaction with the remote-presence interaction. The authors' early experience suggests a significant benefit to patients, hospitals, and physicians with the use of RP6. The implications for future development are enormous.
Central Facilities Area Sewage Lagoon Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giesbrecht, Alan
2015-03-01
The Central Facilities Area (CFA), located in Butte County, Idaho at Idaho National Laboratory (INL), has an existing wastewater system to collect and treat sanitary wastewater and non-contact cooling water from the facility. The existing treatment facility consists of three cells: Cell 1 has a surface area of 1.7 acres, Cell 2 has a surface area of 10.3 acres, and Cell 3 has a surface area of 0.5 acres. If flows exceed the evaporative capacity of the cells, wastewater is discharged to a 73.5-acre land application site that utilizes a center-pivot irrigation sprinkler system. The purpose of this current study is to update the analysis and conclusions of the December 2013 study. In this current study, the new seepage rate and influent flow rate data have been used to update the calculations, model, and analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hewett, R.
1997-12-31
This paper describes the strategy and computer processing system that NREL and the Virginia Department of Mines, Minerals and Energy (DMME), the state energy office, are developing for computing solar attractiveness scores for state agencies and the individual facilities or buildings within each agency. In the case of an agency, solar attractiveness is a measure of that agency's having a significant number of facilities for which solar has the potential to be promising. In the case of a facility, solar attractiveness is a measure of its potential for being a good, economically viable candidate for a solar water heating system. Virginia State agencies are charged with reducing fossil energy and electricity use and expense. DMME is responsible for working with them to achieve the goals and for managing the state's energy consumption and cost monitoring program. This is done using the Fast Accounting System for Energy Reporting (FASER) computerized energy accounting and tracking system and database. Agencies report energy use and expenses (by individual facility and energy type) to DMME quarterly. DMME is also responsible for providing technical and other assistance services to agencies and facilities interested in investigating use of solar. Since Virginia has approximately 80 agencies operating over 8,000 energy-consuming facilities and since DMME's resources are limited, it is interested in being able to determine: (1) on which agencies to focus; (2) specific facilities on which to focus within each high-priority agency; and (3) irrespective of agency, which facilities are the most promising potential candidates for solar.
The computer processing system described in this paper computes numerical solar attractiveness scores for the state's agencies and the individual facilities using the energy use and cost data in the FASER system database and the state's and NREL's experience in implementing, testing, and evaluating solar water heating systems in commercial and government facilities.
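The scoring scheme described in this abstract can be sketched in outline. The linear facility score and the threshold-count agency score below are illustrative assumptions (the actual NREL/DMME weighting is not given in the abstract); they show only the two-level structure: score each facility from its FASER energy-use and cost data, then rank agencies by how many promising facilities they contain.

```python
def facility_score(annual_energy_use_kwh: float, unit_cost_per_kwh: float) -> float:
    """Hypothetical facility solar-attractiveness score: assumed to grow
    with the annual energy spend that a solar water heating system could
    displace (usage and cost as reported quarterly via FASER)."""
    return annual_energy_use_kwh * unit_cost_per_kwh

def agency_score(facility_scores, threshold: float) -> int:
    """Hypothetical agency score: the number of that agency's facilities
    whose score exceeds a 'promising candidate' threshold."""
    return sum(1 for s in facility_scores if s >= threshold)
```

With scores of this shape, agencies can be ranked by `agency_score` to decide where to focus, and facilities ranked by `facility_score` across all agencies to find the most promising solar candidates overall.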
Measuring School Facility Conditions: An Illustration of the Importance of Purpose
ERIC Educational Resources Information Center
Roberts, Lance W.
2009-01-01
Purpose: The purpose of this paper is to argue that taking the educational purposes of schools into account is central to understanding the place and importance of facilities to learning outcomes. The paper begins by observing that the research literature connecting facility conditions to student outcomes is mixed. A closer examination of this…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-28
... authorizes the operation of CBR's existing in situ uranium recovery (ISR) facility near Crawford, Nebraska... operate a satellite ISR facility, the Marsland Expansion Area (MEA) site, which is located in Dawes County, Nebraska, some eleven miles to the southeast of CBR's Crawford central processing facility. In response to...
Code of Federal Regulations, 2010 CFR
2010-07-01
... requirements of the Computer Matching and Privacy Protection Act of 1988, 5 U.S.C. 552a, as amended, for... known as centralized salary offset computer matching, identify Federal employees who owe delinquent nontax debt to the United States. Centralized salary offset computer matching is the computerized...
Code of Federal Regulations, 2011 CFR
2011-07-01
... requirements of the Computer Matching and Privacy Protection Act of 1988, 5 U.S.C. 552a, as amended, for... known as centralized salary offset computer matching, identify Federal employees who owe delinquent nontax debt to the United States. Centralized salary offset computer matching is the computerized...
Code of Federal Regulations, 2012 CFR
2012-07-01
... requirements of the Computer Matching and Privacy Protection Act of 1988, 5 U.S.C. 552a, as amended, for... known as centralized salary offset computer matching, identify Federal employees who owe delinquent nontax debt to the United States. Centralized salary offset computer matching is the computerized...
Code of Federal Regulations, 2013 CFR
2013-07-01
... requirements of the Computer Matching and Privacy Protection Act of 1988, 5 U.S.C. 552a, as amended, for... known as centralized salary offset computer matching, identify Federal employees who owe delinquent nontax debt to the United States. Centralized salary offset computer matching is the computerized...
Radar Detection Models in Computer Supported Naval War Games
1979-06-08
revealed a requirement for the effective centralized management of computer supported war game development and employment in the U.S. Navy. A...considerations and supports the requirement for centralized management of computerized war game development. Therefore it is recommended that a central...managerial and fiscal authority be established for computerized tactical war game development. This central authority should ensure that new games
ERIC Educational Resources Information Center
Poos, Bradley W.
2015-01-01
Central High School in Kansas City, Missouri is one of the oldest schools west of the Mississippi and the first public high school built in Kansas City. Kansas City's magnet plan resulted in Central High School being rebuilt as the Central Computers Unlimited/Classical Greek Magnet High School, a school that was designed to offer students an…
2014-09-12
CAPE CANAVERAL, Fla. – Inside the Horizontal Integration Facility at Space Launch Complex 37 at Cape Canaveral Air Force Station in Florida, a United Launch Alliance technician on a scissor lift monitors the progress as the second stage of a Delta IV Heavy rocket is mated to the central core booster of the three booster stages for the unpiloted Exploration Flight Test-1, or EFT-1. During the mission, Orion will travel farther into space than any human spacecraft has gone in more than 40 years. The data gathered during the flight will influence design decisions, validate existing computer models and innovative new approaches to space systems development, as well as reduce overall mission risks and costs for later Orion flights. Liftoff of Orion on the first flight test is planned for December 2014. Photo credit: NASA/Daniel Casper
2014-09-12
CAPE CANAVERAL, Fla. – Inside the Horizontal Integration Facility at Space Launch Complex 37 at Cape Canaveral Air Force Station in Florida, United Launch Alliance technicians monitor the progress as the second stage of a Delta IV Heavy rocket is mated to the central core booster of the three booster stages for the unpiloted Exploration Flight Test-1, or EFT-1. During the mission, Orion will travel farther into space than any human spacecraft has gone in more than 40 years. The data gathered during the flight will influence design decisions, validate existing computer models and innovative new approaches to space systems development, as well as reduce overall mission risks and costs for later Orion flights. Liftoff of Orion on the first flight test is planned for December 2014. Photo credit: NASA/Daniel Casper
An easily implemented static condensation method for structural sensitivity analysis
NASA Technical Reports Server (NTRS)
Gangadharan, S. N.; Haftka, R. T.; Nikolaidis, E.
1990-01-01
A black-box approach to static condensation for sensitivity analysis is presented with illustrative examples of a cube and a car structure. The sensitivity of the structural response with respect to joint stiffness parameter is calculated using the direct method, forward-difference, and central-difference schemes. The efficiency of the various methods for identifying joint stiffness parameters from measured static deflections of these structures is compared. The results indicate that the use of static condensation can reduce computation times significantly and the black-box approach is only slightly less efficient than the standard implementation of static condensation. The ease of implementation of the black-box approach recommends it for use with general-purpose finite element codes that do not have a built-in facility for static condensation.
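The forward- and central-difference schemes compared in the abstract are standard finite-difference estimates. As a hedged illustration only (the response function, parameter values, and step size are invented for the example, not taken from the paper), the forward scheme is first-order accurate in the step size while the central scheme is second-order:

```python
# Illustrative sketch: forward- vs. central-difference sensitivity of a
# scalar response u(k) with respect to a stiffness parameter k.

def forward_diff(f, k, h=1e-6):
    """First-order forward-difference estimate of df/dk."""
    return (f(k + h) - f(k)) / h

def central_diff(f, k, h=1e-6):
    """Second-order central-difference estimate of df/dk."""
    return (f(k + h) - f(k - h)) / (2.0 * h)

# Toy response: deflection of a unit-load spring, u(k) = 1/k, so the
# exact sensitivity is du/dk = -1/k**2 = -0.25 at k = 2.
u = lambda k: 1.0 / k
print(forward_diff(u, 2.0))  # close to -0.25
print(central_diff(u, 2.0))  # closer still to -0.25
```

The direct method the paper also evaluates instead differentiates the governing equations analytically, avoiding the step-size tradeoff that both difference schemes above share.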
NASA Technical Reports Server (NTRS)
1976-01-01
System specifications to be used by the mission control center (MCC) for the shuttle orbital flight test (OFT) time frame are described. The three support systems discussed are the communication interface system (CIS), the data computation complex (DCC), and the display and control system (DCS), all of which may interface with, and share processing facilities with, other applications processing supporting current MCC programs. The MCC shall provide centralized control of the space shuttle OFT from launch through orbital flight, entry, and landing until the Orbiter comes to a stop on the runway. This control shall include the functions of vehicle management in the areas of hardware configuration (verification), flight planning, communication and instrumentation configuration management, trajectory, software and consumables, payloads management, flight safety, and verification of test conditions/environment.
Retrievable payload carrier, next generation Long Duration Exposure Facility: Update 1992
NASA Technical Reports Server (NTRS)
Perry, A. T.; Cagle, J. A.; Newman, S. C.
1993-01-01
Access to space and cost have been two major inhibitors of low Earth orbit research. The Retrievable Payload Carrier (RPC) Program is a commercial space program which strives to overcome these two barriers to space experimentation. The RPC Program's fleet of spacecraft, ground communications station, payload processing facility, and experienced integration and operations team will provide a convenient 'one-stop shop' for investigators seeking to use the unique vantage point and environment of low Earth orbit for research. The RPC is a regularly launched and retrieved, free-flying spacecraft providing resources adequate to meet modest payload/experiment requirements, and presenting ample surface area, volume, mass, and growth capacity for investigator usage. Enhanced capabilities of ground communications, solar-array-supplied electrical power, central computing, and on-board data storage pick up on the path where NASA's Long Duration Exposure Facility (LDEF) blazed the original technology trail. Mission lengths of 6-18 months, or longer, are envisioned. The year 1992 was designated as the 'International Space Year' and coincides with the 500th anniversary of Christopher Columbus's voyage to the New World. This is a fitting year in which to launch the full scale development of our unique shop of discovery whose intent is to facilitate retrieving technological rewards from another new world: space. Presented is an update on progress made on the RPC Program's development since the November 1991 LDEF Materials Workshop.
Ray, N J; Hannigan, A
1999-05-01
As dental practice management becomes more computer-based, the efficient functioning of the dentist will become dependent on adequate computer literacy. A survey has been carried out into the computer literacy of a cohort of 140 undergraduate dental students at a University Dental School in Ireland (years 1-5), in the academic year 1997-98. Aspects investigated by anonymous questionnaire were: (1) keyboard skills; (2) computer skills; (3) access to computer facilities; (4) software competencies; and (5) use of medical library computer facilities. The students are relatively unfamiliar with basic computer hardware and software: 51.1% considered their expertise with computers as "poor"; 34.3% had taken a formal typewriting or computer keyboarding course; 7.9% had taken a formal computer course at university level and 67.2% were without access to computer facilities at their term-time residences. A majority of students had never used word-processing, spreadsheet, or graphics programs. Programs relating to "informatics" were more popular, such as literature searching, accessing the Internet and the use of e-mail, which represent the major uses of the computers in the medical library. The lack of experience with computers may be addressed by including suitable computing courses in secondary-level (age 13-18 years) and/or tertiary-level (FE/HE) education programmes. Such training may promote greater use of generic software, particularly in the library, with a more electronic-based approach to data handling.
Wong, Bonny Yee-Man; Cerin, Ester; Ho, Sai-Yin; Mak, Kwok-Kei; Lo, Wing-Sze; Lam, Tai-Hing
2010-04-01
To examine the independent, competing, and interactive effects of perceived availability of specific types of media in the home and neighborhood sport facilities on adolescents' leisure-time physical activity (PA). Survey data from 34 369 students in 42 Hong Kong secondary schools were collected (2006-07). Respondents reported moderate-to-vigorous leisure-time PA, presence of sport facilities in the neighborhood and of media equipment in the home. Being sufficiently physically active was defined as engaging in at least 30 minutes of non-school leisure-time PA on a daily basis. Logistic regression and post-estimation linear combinations of regression coefficients were used to examine the independent and competing effects of sport facilities and media equipment on leisure-time PA. Perceived availability of sport facilities was positively (OR(boys) = 1.17; OR(girls) = 1.26), and that of computer/Internet negatively (OR(boys) = 0.48; OR(girls) = 0.41), associated with being sufficiently active. A significant positive association between video game console and being sufficiently active was found in girls (OR(girls) = 1.19) but not in boys. Compared with adolescents without sport facilities and media equipment, those who reported sport facilities only were more likely to be physically active (OR(boys) = 1.26; OR(girls) = 1.34), while those who additionally reported computer/Internet were less likely to be physically active (OR(boys) = 0.60; OR(girls) = 0.54). Perceived availability of sport facilities in the neighborhood may positively impact on adolescents' level of physical activity. However, having computer/Internet may cancel out the effects of active opportunities in the neighborhood. This suggests that physical activity programs for adolescents need to consider limiting the access to computer-mediated communication as an important intervention component.
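The combined effects reported above follow from the post-estimation linear combinations of regression coefficients the authors describe: adding logistic-regression coefficients multiplies the corresponding odds ratios, since OR = exp(beta). A small sketch of that arithmetic, using the paper's rounded ORs (so the last digit can differ from the published values):

```python
import math

def combined_or(*ors):
    """OR for a linear combination of coefficients: exp(b1 + b2 + ...)
    equals the product of the individual odds ratios."""
    return math.exp(sum(math.log(o) for o in ors))

# Boys reporting both sport facilities (OR 1.26 vs. neither exposure)
# and computer/Internet (OR 0.48):
print(round(combined_or(1.26, 0.48), 2))  # ~0.60, as reported for boys
# Girls: 1.34 combined with 0.41
print(round(combined_or(1.34, 0.41), 2))  # ~0.55 (paper reports 0.54; inputs are rounded)
```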
Description and operational status of the National Transonic Facility computer complex
NASA Technical Reports Server (NTRS)
Boyles, G. B., Jr.
1986-01-01
This paper describes the National Transonic Facility (NTF) computer complex and its support of tunnel operations. The capabilities of the research data acquisition and reduction systems are discussed along with the types of data that can be acquired and presented. Pretest, test, and posttest capabilities are also outlined, along with a discussion of the computer complex's capability to monitor the tunnel control processes and provide the tunnel operators with information needed to control the tunnel. Planned enhancements to the computer complex for support of future testing are presented.
The OSG open facility: A sharing ecosystem
Jayatilaka, B.; Levshina, T.; Rynge, M.; ...
2015-12-23
The Open Science Grid (OSG) ties together individual experiments' computing power, connecting their resources to create a large, robust computing grid. This computing infrastructure started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero. In the years since, the OSG has broadened its focus to also address the needs of other US researchers and increased delivery of Distributed High Throughput Computing (DHTC) to users from a wide variety of disciplines via the OSG Open Facility. Presently, the Open Facility delivers about 100 million computing wall hours per year to researchers who are not already associated with the owners of the computing sites. This is primarily accomplished by harvesting and organizing the temporarily unused capacity (i.e., opportunistic cycles) from the sites in the OSG. Using these methods, OSG resource providers and scientists share computing hours with researchers in many other fields to enable their science, striving to make sure that this computing power is used with maximal efficiency. Furthermore, we believe that expanded access to DHTC is an essential tool for scientific innovation, and work continues in expanding this service.
ERIC Educational Resources Information Center
Teicholz, Eric
1997-01-01
Reports research on trends in computer-aided facilities management using the Internet and geographic information system (GIS) technology for space utilization research. Proposes that facility assessment software holds promise for supporting facility management decision making, and outlines four areas for its use: inventory; evaluation; reporting;…
Peripherally inserted central catheters. Guidewire versus nonguidewire use: a comparative study.
Loughran, S C; Edwards, S; McClure, S
1992-01-01
To date, no research articles have been published that explore the practice of using guidewires for placement of peripherally inserted central catheters. The literature contains speculations regarding the pros and cons of guidewire use. However, no studies to date have compared patient outcomes when peripherally inserted central catheter lines are inserted with and without guidewires. To examine the use of guidewires for peripherally inserted central lines, a comparative study was conducted at two acute care facilities, one using guidewires for insertion and one inserting peripherally inserted central catheter lines without guidewires. 109 catheters were studied between January 1, 1990 and January 1, 1991. The primary focus of this study was to examine whether guidewire use places patients at higher risk for catheter-related complications, particularly phlebitis. No significant differences in phlebitis rates between the two study sites were found. Other catheter-related and noncatheter-related complications were similar between the two facilities. The results of this study do not support the belief that guidewire use increases complication rates.
State University of New York. Central Administration Costs. Report 92-S-104.
ERIC Educational Resources Information Center
New York State Office of the Comptroller, Albany. Div. of Management Audit.
An evaluation was done of State University of New York (SUNY) Central Administration costs by comparing them to peer systems and by evaluating how economically its duties were carried out. Central Administration provides oversight and executive leadership to the system and manages budgeting, accounting, capital facilities, student affairs and…
2004-02-19
KENNEDY SPACE CENTER, FLA. - KSC Director Jim Kennedy (center) makes a presentation to NASA and other officials about the benefits of locating NASA’s new Shared Services Center in the Central Florida Research Park, near Orlando. Central Florida leaders are proposing the research park as the site for the NASA Shared Services Center. The center would centralize NASA’s payroll, accounting, human resources, facilities and procurement offices that are now handled at each field center. The consolidation is part of the One NASA focus. Six sites around the U.S. are under consideration by NASA.
2004-02-19
KENNEDY SPACE CENTER, FLA. - NASA Administrator Sean O’Keefe (center) listens to Congressman Tom Feeney (second from left) during a tour of the Central Florida Research Park, near Orlando. At right is U.S. Congressman Dave Weldon. Central Florida leaders are proposing the research park as the site for the new NASA Shared Services Center. The center would centralize NASA’s payroll, accounting, human resources, facilities and procurement offices that are now handled at each field center. The consolidation is part of the One NASA focus. Six sites around the U.S. are under consideration by NASA.
The Careful Puppet Master: Reducing risk and fortifying acceptance testing with Jenkins CI
NASA Astrophysics Data System (ADS)
Smith, Jason A.; Richman, Gabriel; DeStefano, John; Pryor, James; Rao, Tejas; Strecker-Kellogg, William; Wong, Tony
2015-12-01
Centralized configuration management, including the use of automation tools such as Puppet, can greatly increase provisioning speed and efficiency when configuring new systems or making changes to existing systems, reduce duplication of work, and improve automated processes. However, centralized management also brings with it a level of inherent risk: a single change in just one file can quickly be pushed out to thousands of computers and, if that change is not properly and thoroughly tested and contains an error, could result in catastrophic damage to many services, potentially bringing an entire computer facility offline. Change management procedures can—and should—be formalized in order to prevent such accidents. However, like the configuration management process itself, if such procedures are not automated, they can be difficult to enforce strictly. Therefore, to reduce the risk of merging potentially harmful changes into our production Puppet environment, we have created an automated testing system, which includes the Jenkins CI tool, to manage our Puppet testing process. This system includes the proposed changes and runs Puppet on a pool of dozens of RedHat Enterprise Virtualization (RHEV) virtual machines (VMs) that replicate most of our important production services for the purpose of testing. This paper describes our automated test system and how it hooks into our production approval process for automatic acceptance testing. All pending changes that have been pushed to production must pass this validation process before they can be approved and merged into production.
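The gating logic described above reduces to a simple invariant. The following is a hypothetical sketch (VM names and function invented, not BNL's Jenkins configuration): a pending change may be merged into production only if the trial Puppet run succeeds on every VM in the test pool.

```python
def may_merge(trial_runs):
    """trial_runs maps test-VM name -> True if the trial Puppet run on
    that VM succeeded. A change is approved only when every VM passes
    (and at least one VM actually ran the test)."""
    return bool(trial_runs) and all(trial_runs.values())

pool = {"web-vm": True, "db-vm": True, "batch-vm": False}
print(may_merge(pool))  # False: one failing VM blocks the merge
```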
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lampley, C.M.
1979-01-01
An updated version of the SKYSHINE Monte Carlo procedure has been developed. The new computer code, SKYSHINE-II, provides a substantial increase in versatility in that the program possesses the ability to address three types of point-isotropic radiation sources: (1) primary gamma rays, (2) neutrons, and (3) secondary gamma rays. In addition, the emitted radiation may now be characterized by an energy emission spectrum, supported by a new energy-dependent atmospheric transmission data base developed by Radiation Research Associates, Inc. for each of the three source types described above. Most of the computational options present in the original program have been retained in the new version. Hence, the SKYSHINE-II computer code provides a versatile and viable tool for the analysis of the radiation environment in the vicinity of a building structure containing radiation sources, situated within the confines of a nuclear power plant. This report describes many of the calculational methods employed within the SKYSHINE-II program. A brief description of the new data base is included. Utilization instructions for the program are provided for operation of the SKYSHINE-II code on the Brookhaven National Laboratory Central Scientific Computing Facility. A listing of the source decks, block data routines, and the new atmospheric transmission data base are provided in the appendices of the report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, Richard P.
2017-07-01
Sandia National Laboratories has developed a broad set of capabilities in quantum information science (QIS), including elements of quantum computing, quantum communications, and quantum sensing. The Sandia QIS program is built atop unique DOE investments at the laboratories, including the MESA microelectronics fabrication facility, the Center for Integrated Nanotechnologies (CINT) facilities (joint with LANL), the Ion Beam Laboratory, and ASC High Performance Computing (HPC) facilities. Sandia has invested $75 M of LDRD funding over 12 years to develop unique, differentiating capabilities that leverage these DOE infrastructure investments.
Muller, George; Perkins, Casey J.; Lancaster, Mary J.; MacDonald, Douglas G.; Clements, Samuel L.; Hutton, William J.; Patrick, Scott W.; Key, Bradley Robert
2015-07-28
Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture are described. According to one aspect, a computer-implemented security evaluation method includes accessing information regarding a physical architecture and a cyber architecture of a facility, building a model of the facility comprising a plurality of physical areas of the physical architecture, a plurality of cyber areas of the cyber architecture, and a plurality of pathways between the physical areas and the cyber areas, identifying a target within the facility, executing the model a plurality of times to simulate a plurality of attacks against the target by an adversary traversing at least one of the areas in the physical domain and at least one of the areas in the cyber domain, and using results of the executing, providing information regarding a security risk of the facility with respect to the target.
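As a rough, hypothetical sketch of the patented idea (all area names, pathways, and defense probabilities are invented; this is not the described implementation), the facility model can be treated as a directed graph of physical and cyber areas, with repeated simulated traversals toward a target yielding a risk estimate:

```python
import random

# Invented facility model: areas joined by pathways, each entered area
# having some probability of stopping the adversary.
pathways = {
    "gate":        ["lobby"],
    "lobby":       ["server_room", "office_lan"],  # physical-to-cyber pathway
    "office_lan":  ["scada_net"],
    "server_room": ["scada_net"],
    "scada_net":   [],
}
defense = {"lobby": 0.5, "server_room": 0.8, "office_lan": 0.3, "scada_net": 0.7}

def attack_succeeds(start, target, rng):
    """One simulated traversal from start toward target."""
    area = start
    while area != target:
        options = pathways.get(area, [])
        if not options:
            return False                      # dead end: attack fails
        nxt = rng.choice(options)
        if rng.random() < defense.get(nxt, 0.0):
            return False                      # adversary stopped entering nxt
        area = nxt
    return True

def risk(start, target, trials=10000, seed=1):
    """Fraction of simulated attacks that reach the target."""
    rng = random.Random(seed)
    return sum(attack_succeeds(start, target, rng) for _ in range(trials)) / trials

print(risk("gate", "scada_net"))  # rough security-risk score in [0, 1]
```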
ERIC Educational Resources Information Center
Stifle, Jack
The PLATO IV computer-based instructional system consists of a large scale centrally located CDC 6400 computer and a large number of remote student terminals. This is a brief and general description of the proposed input/output hardware necessary to interface the student terminals with the computer's central processing unit (CPU) using available…
Detail of bricked up storage vault opening Central of ...
Detail of bricked up storage vault opening - Central of Georgia Railway, Savannah Repair Shops & Terminal Facilities, Brick Storage Vaults under Jones Street, Bounded by West Broad, Jones, West Boundary & Hull Streets, Savannah, Chatham County, GA
Poonam Khanijo Ahluwalia; Nema, Arvind K
2011-07-01
Selection of optimum locations for new facilities and decisions regarding capacities at the proposed facilities are a major concern for municipal authorities and managers. The decision as to whether a single facility is preferred over multiple facilities of smaller capacities would vary with the priorities given to cost and associated risks, such as environmental or health risk or risk perceived by society. Currently, management of waste streams such as computer waste is carried out using rudimentary practices and flourishes as an unorganized sector, mainly as backyard workshops in many cities of developing nations such as India. Uncertainty in the quantification of computer waste generation is another major concern due to the informal setup of the present computer waste management scenario. Hence, there is a need to simultaneously address uncertainty in waste generation quantities while analyzing the tradeoffs between cost and associated risks. The present study aimed to address the above-mentioned issues in a multi-time-step, multi-objective decision-support model, which can address multiple objectives of cost, environmental risk, socially perceived risk and health risk, while selecting the optimum configuration of existing and proposed facilities (location and capacities).
Guidance on the Stand Down, Mothball, and Reactivation of Ground Test Facilities
NASA Technical Reports Server (NTRS)
Volkman, Gregrey T.; Dunn, Steven C.
2013-01-01
The development of aerospace and aeronautics products typically requires three distinct types of testing resources across research, development, test, and evaluation: experimental ground testing, computational "testing" and development, and flight testing. Over the last twenty-plus years, computational methods have replaced some physical experiments, and this trend is continuing. The result is decreased utilization of ground test capabilities, which, along with market forces, industry consolidation, and other factors, has led to the stand down and oftentimes closure of many ground test facilities. Ground test capabilities are (and very likely will continue to be for many years) required to verify computational results and to provide information for regimes where computational methods remain immature. Ground test capabilities are very costly to build and to maintain, so once constructed and operational it may be desirable to retain access to those capabilities even if they are not currently needed. One means of doing this while reducing ongoing sustainment costs is to stand down the facility into a "mothball" status - keeping it alive to bring it back when needed. Both NASA and the US Department of Defense have policies to accomplish the mothball of a facility, but with little detail. This paper offers a generic process to follow that can be tailored based on the needs of the owner and the applicable facility.
Improving ATLAS grid site reliability with functional tests using HammerCloud
NASA Astrophysics Data System (ADS)
Elmsheuser, Johannes; Legger, Federica; Medrano Llamas, Ramon; Sciacca, Gianfranco; van der Ster, Dan
2012-12-01
With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short lightweight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site performances. Sites that fail or are unable to run the tests are automatically excluded from the PanDA brokerage system, thereby avoiding sending user or production jobs to problematic sites.
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Nark, Douglas M.; Nguyen, Duc T.; Tungkahotara, Siroj
2006-01-01
A finite element solution to the convected Helmholtz equation in a nonuniform flow is used to model the noise field within 3-D acoustically treated aero-engine nacelles. Options to select linear or cubic Hermite polynomial basis functions and isoparametric elements are included. However, the key feature of the method is a domain decomposition procedure that is based upon the inter-mixing of an iterative and a direct solve strategy for solving the discrete finite element equations. This procedure is optimized to take full advantage of sparsity and exploit the increased memory and parallel processing capability of modern computer architectures. Example computations are presented for the Langley Flow Impedance Test facility and a rectangular mapping of a full scale, generic aero-engine nacelle. The accuracy and parallel performance of this new solver are tested on both model problems using a supercomputer that contains hundreds of central processing units. Results show that the method gives extremely accurate attenuation predictions, achieves super-linear speedup over hundreds of CPUs, and solves upward of 25 million complex equations in a quarter of an hour.
A central storage facility to reduce pesticide suicides--a feasibility study from India.
Vijayakumar, Lakshmi; Jeyaseelan, Lakshmanan; Kumar, Shuba; Mohanraj, Rani; Devika, Shanmugasundaram; Manikandan, Sarojini
2013-09-16
Pesticide suicides are considered the single most important means of suicide worldwide. Centralized pesticide storage facilities have the possible advantage of delaying access to pesticides, thereby reducing suicides. We undertook this study to examine the feasibility and acceptability of a centralized pesticide storage facility as a preventive intervention strategy in reducing pesticide suicides. A community randomized controlled feasibility study using a mixed methods approach, involving a household survey, focus group discussions (FGDs), and surveillance, was undertaken. The study was carried out in a district in southern India. Eight villages that engaged in floriculture were identified. Using the lottery method, two were randomized to be the intervention sites and two villages constituted the control site. Two centralized storage facilities were constructed with local involvement and lockable storage boxes were constructed. The household survey conducted at baseline and one and a half years later documented information on sociodemographic data, pesticide usage, storage and suicides. At baseline 4446 individuals (1097 households) in the intervention and 3307 individuals (782 households) in the control sites were recruited, while at follow up there were 4308 individuals (1063 households) in the intervention and 2673 individuals (632 households) in the control sites. There were differences in baseline characteristics and imbalances in the prevalence of suicides between intervention and control sites as this was a small feasibility study. The results from the FGDs revealed that most participants found the storage facility to be both useful and acceptable. In addition to protecting against wastage, they felt that it had also helped prevent pesticide suicides as the pesticides stored here were not as easily and readily accessible. The primary analyses were done on an intention-to-treat basis.
Following the intervention, the differences between sites in changes in combined, completed and attempted suicide rates per 100,000 person-years were 295 (95% CI: 154.7, 434.8; p < 0.001) for pesticide suicide and 339 (95% CI: 165.3, 513.2, p < 0.001) for suicide of all methods. Suicide by pesticides poisoning is a major public health problem and needs innovative interventions to address it. This study, the first of its kind in the world, examined the feasibility of a central storage facility as a means of limiting access to pesticides and, has provided preliminary results on its usefulness. These results need to be interpreted with caution in view of the imbalances between sites. The facility was found to be acceptable, thereby underscoring the need for larger studies for a longer duration. ISRCTN04912407.
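For readers unfamiliar with the unit, a rate per 100,000 person-years divides the event count by the total population-time observed. The arithmetic, with invented numbers rather than the study's data:

```python
# Toy calculation of a rate per 100,000 person-years (all figures
# illustrative, not from the study above).

def rate_per_100k(events, persons, years):
    """Events per 100,000 person-years of observation."""
    person_years = persons * years
    return events / person_years * 100_000

# e.g. 13 events among 4308 people followed for 1.5 years
# (about 6,462 person-years):
print(rate_per_100k(13, 4308, 1.5))
```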
Mohanraj, Rani; Kumar, Shuba; Manikandan, Sarojini; Kannaiyan, Veerapandian; Vijayakumar, Lakshmi
2014-08-01
Widespread use of pesticides among farmers in rural India, provides an easy means for suicide. A public health initiative involving storage of pesticides in a central storage facility could be a possible strategy for reducing mortality and morbidity related to pesticide poisoning. This qualitative study explored community perceptions towards a central pesticide storage facility in villages in rural South India. Sixteen focus group discussions held with consenting adults from intervention and control villages were followed by eight more a year after initiation of the storage facility. Analysis revealed four themes, namely, reasons for committing suicide and methods used, exposure to pesticides and first-aid practices, storage and disposal of pesticides, and perceptions towards the storage facility. The facility was appreciated as a means of preventing suicides and for providing a safe haven for pesticide storage. The participatory process that guided its design, construction and location ensured its acceptability. Use of qualitative methods helped provide deep insights into the phenomenon of pesticide suicide and aided the understanding of community perceptions towards the storage facility. The study suggests that communal storage of pesticides could be an important step towards reducing pesticide suicides in rural areas.
Green Supercomputing at Argonne
Beckman, Pete
2018-02-07
Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), talks about Argonne National Laboratory's green supercomputing: everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently. Argonne was recognized for green computing in the 2009 HPCwire Readers Choice Awards. More at http://www.anl.gov/Media_Center/News/2009/news091117.html Read more about the Argonne Leadership Computing Facility at http://www.alcf.anl.gov/
Harrington, Susan S.; Walker, Bonnie L.
2010-01-01
Background Older adults in small residential board and care facilities are at a particularly high risk of fire death and injury because of their characteristics and environment. Methods The authors investigated computer-based instruction as a way to teach fire emergency planning to owners, operators, and staff of small residential board and care facilities. Participants (N = 59) were randomly assigned to a treatment or control group. Results Study participants who completed the training significantly improved their scores from pre- to posttest when compared to a control group. Participants indicated on the course evaluation that the computers were easy to use for training (97%) and that they would like to use computers for future training courses (97%). Conclusions This study demonstrates the potential for using interactive computer-based training as a viable alternative to instructor-led training to meet the fire safety training needs of owners, operators, and staff of small board and care facilities for the elderly. PMID:19263929
ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peisert, Sean; Potok, Thomas E.; Jones, Todd
At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long-term (10 to 20+ year) fundamental cybersecurity research and development challenges, strategies, and roadmaps facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts.
The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the three topics and a representative of each of the four major DOE Office of Science Advanced Scientific Computing Research Facilities: the Argonne Leadership Computing Facility (ALCF), the Energy Sciences Network (ESnet), the National Energy Research Scientific Computing Center (NERSC), and the Oak Ridge Leadership Computing Facility (OLCF). The rest of the workshop consisted of topical breakout discussions and focused writing periods that produced much of this report.
Central Plateau Cleanup at DOE's Hanford Site - 12504
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dowell, Jonathan
The discussion of Hanford's Central Plateau includes significant work in and around the center of the Hanford Site - located about 7 miles from the Columbia River. The Central Plateau is the area to which operations will be shrunk in 2015 when River Corridor cleanup is complete. This work includes retrieval and disposal of buried waste from miles of trenches; the cleanup and closure of massive processing canyons; the clean-out and demolition to 'slab on grade' of the high-hazard Plutonium Finishing Plant; installation of key groundwater treatment facilities to contain and shrink plumes of contaminated groundwater; demolition of all other unneeded facilities; and the completion of decisions about remaining Central Plateau waste sites. A stated goal of EM has been to shrink the footprint of active cleanup to less than 10 square miles by 2020. By the end of FY2011, Hanford will have reduced the active footprint of cleanup by 64 percent, exceeding the goal of 49 percent. By 2015, Hanford will reduce the active footprint of cleanup by more than 90 percent. The remaining footprint reduction will occur between 2015 and 2020. The Central Plateau is a 75-square-mile region near the center of the Hanford Site including the area designated in the Hanford Comprehensive Land Use Plan Environmental Impact Statement (DOE 1999) and Record of Decision (64 FR 61615) as the Industrial-Exclusive Area, a rectangular area of about 20 square miles in the center of the Central Plateau. The Industrial-Exclusive Area contains the 200 East and 200 West Areas that have been used primarily for Hanford's nuclear fuel processing and waste management and disposal activities. The Central Plateau also encompasses the 200 Area CERCLA National Priorities List site.
The Central Plateau has a large physical inventory of chemical processing and support facilities, tank systems, liquid and solid waste disposal and storage facilities, utility systems, administrative facilities, and groundwater monitoring wells. As a companion to the Hanford Site Cleanup Completion Framework document, DOE issued its draft Central Plateau Cleanup Completion Strategy in September 2009 to provide an outline of DOE's vision for completion of cleanup activities across the Central Plateau. As major elements of the Hanford cleanup along the Columbia River Corridor near completion, DOE believed it appropriate to articulate the agency vision for the remainder of the cleanup mission. The Central Plateau Cleanup Completion Strategy and the Hanford Site Cleanup Completion Framework were provided to the regulatory community, the Tribal Nations, political leaders, the public, and Hanford stakeholders to promote dialogue on Hanford's future. The Central Plateau Cleanup Completion Strategy describes DOE's vision for completion of Central Plateau cleanup and outlines the decisions needed to achieve the vision. The Central Plateau strategy involves steps to: (1) contain and remediate contaminated groundwater, (2) implement a geographic cleanup approach that guides remedy selection from a plateau-wide perspective, (3) evaluate and deploy viable treatment methods for deep vadose contamination to provide long-term protection of the groundwater, and (4) conduct essential waste management operations in coordination with cleanup actions. The strategy will also help optimize Central Plateau readiness to use funding when it is available upon completion of River Corridor cleanup projects. One aspect of the Central Plateau strategy is to put in place the process to identify the final footprint for permanent waste management and containment of residual contamination within the 20-square-mile Industrial-Exclusive Area. 
The final footprint identified for permanent waste management and containment of residual contamination should be as small as practical and remain under federal ownership and control for as long as a potential hazard exists. Outside the final footprint, the remainder of the Central Plateau will be available for other uses consistent with the Hanford Comprehensive Land-Use Plan (DOE 1999), while maintained under federal ownership and control. (author)
ISTP Science Data Systems and Products
NASA Astrophysics Data System (ADS)
Mish, William H.; Green, James L.; Reph, Mary G.; Peredo, Mauricio
1995-02-01
The International Solar-Terrestrial Physics (ISTP) program will provide simultaneous coordinated scientific measurements from most of the major areas of geospace, including specific locations on the Earth's surface. This paper describes the comprehensive ISTP ground science data handling system which has been developed to promote optimal mission planning and efficient data processing, analysis and distribution. The essential components of this ground system are the ISTP Central Data Handling Facility (CDHF), the Information Processing Division's Data Distribution Facility (DDF), the ISTP/Global Geospace Science (GGS) Science Planning and Operations Facility (SPOF) and the NASA Data Archive and Distribution Service (NDADS). The ISTP CDHF is the one place in the program where measurements from this wide variety of geospace and ground-based instrumentation and theoretical studies are brought together. Subsequently, these data will be distributed, along with ancillary data, in a unified fashion to the ISTP Principal Investigator (PI) and Co-Investigator (CoI) teams for analysis on their local systems. The CDHF ingests the telemetry streams, orbit, attitude, and command history from the GEOTAIL, WIND, POLAR, SOHO, and IMP-8 spacecraft; computes summary data sets, called Key Parameters (KPs), for each scientific instrument; ingests pre-computed KPs from other spacecraft and ground-based investigations; provides a computational platform for parameterized modeling; and provides a number of "data services" for the ISTP community of investigators. The DDF organizes the KPs, decommutated telemetry, and associated ancillary data into products for distribution to the ISTP community on CD-ROMs. The SPOF is the component of the GGS program responsible for the development and coordination of ISTP science planning operations.
The SPOF operates under the direction of the ISTP Project Scientist and is responsible for the development and coordination of the science plan for ISTP spacecraft. Instrument command requests for the WIND and POLAR investigations are submitted by the PIs to the SPOF, where they are checked for science conflicts, forwarded to the GSFC Command Management System/Payload Operations Control Center (CMS/POCC) for engineering conflict validation, and finally incorporated into the conflict-free science operations plan. Conflict resolution is accomplished through iteration between the PIs, SPOF and CMS, and in consultation with the Project Scientist when necessary. The long-term archival of ISTP KP and level-zero data will be undertaken by NASA's National Space Science Data Center using the NASA Data Archive and Distribution Service (NDADS). This on-line archive facility will provide rapid access to archived KPs and event data and includes security features to restrict access to the data during the time they are proprietary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brasseur, James G.
The central aims of the DOE-supported "Cyber Wind Facility" project center on the recognition that wind turbines over land and ocean generate power from atmospheric winds that are inherently turbulent and strongly varying, both spatially over the rotor disk and temporally as the rotating blades pass through atmospheric eddies embedded within the mean wind. The daytime unstable atmospheric boundary layer (ABL) is particularly variable in space and time as solar heating generates buoyancy-driven motions that interact with strong mean shear in the ABL "surface layer," the lowest 200-300 m where wind turbines reside in farms. With the "Cyber Wind Facility" (CWF) program we initiate a research and technology direction in which "cyber data" are generated from "computational experiments" within a "facility" akin to a wind tunnel, but with true space-time atmospheric turbulence that drives utility-scale wind turbines at full-scale Reynolds numbers. With DOE support we generated the key "modules" within a computational framework to create a first generation Cyber Wind Facility (CWF) for single wind turbines in the daytime ABL---both over land, where the ABL is globally unstable, and over water, with closer-to-neutral atmospheric conditions but with time response strongly affected by wave-induced forcing of the wind turbine platform (here a buoy configuration). The CWF program has significantly improved the accuracy of actuator line models, evaluated with the Cyber Wind Facility in full blade-boundary-layer-resolved mode. The application of the CWF made in this program showed the existence of important ramp-like response events that likely contribute to bearing fatigue failure on the main shaft, and showed that the advanced ALM method developed here captures the primary nonsteady response characteristics. Long-time analysis uncovered distinctive key dynamics that explain primary mechanisms that underlie potentially deleterious load transients.
We also showed that blade bend-twist coupling plays a central role in the elastic responses of the blades to atmospheric turbulence, impacting turbine power.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Mike
This report describes conditions, as required by the state of Idaho Wastewater Reuse Permit (#LA-000141-03), for the wastewater land application site at the Idaho National Laboratory Site's Central Facilities Area Sewage Treatment Plant from November 1, 2013, through October 31, 2014. The report contains, as applicable, the following information: site description; facility and system description; permit-required monitoring data and loading rates; status of compliance conditions and activities; and discussion of the facility's environmental impacts. The current permit expires on March 16, 2015. A permit renewal application was submitted to the Idaho Department of Environmental Quality on September 15, 2014. During the 2014 permit year, no wastewater was land-applied to the irrigation area of the Central Facilities Area Sewage Treatment Plant and therefore no effluent flow volumes or samples were collected from wastewater sampling point WW-014102. Seepage testing of the three lagoons was performed between August 26, 2014 and September 22, 2014. Seepage rates from Lagoons 1 and 2 were below the 0.25 inches/day requirement; however, Lagoon 3 was above 0.25 inches/day. Lagoon 3 has been isolated and is being evaluated for future use or permanent removal from service.
Computing at DESY — current setup, trends and strategic directions
NASA Astrophysics Data System (ADS)
Ernst, Michael
1998-05-01
Since the HERA experiments H1 and ZEUS started data taking in '92, the computing environment at DESY has changed dramatically. After running mainframe-centred computing for more than 20 years, DESY switched to a heterogeneous, fully distributed computing environment within only about two years, in almost every corner where computing has its applications. The computing strategy was highly influenced by the needs of the user community. The collaborations are usually limited by current technology, and their ever-increasing demands are the driving force for central computing to always move close to the technology edge. While DESY's central computing has multidecade experience in running Central Data Recording/Central Data Processing for HEP experiments, the most challenging task today is to provide clear and homogeneous concepts in the desktop area. Given that lowest-level commodity hardware draws more and more attention, combined with the financial constraints we already face today, we quickly need concepts for integrated support of a versatile device which has the potential to move into basically any computing area in HEP. Though commercial solutions, especially those addressing PC management and support issues, are expected to come to market in the next 2-3 years, we need to provide suitable solutions now. Buying PCs at DESY, currently at a rate of about 30 per month, will otherwise absorb any available manpower in central computing and still leave hundreds of unhappy people alone. Though certainly not the only area, the desktop issue is one of the most important ones where we need HEP-wide collaboration to a large extent, and right now. Taking into account that there is traditionally no room for R&D at DESY, collaboration, meaning sharing experience and development resources within the HEP community, is a predominant factor for us.
The ICCB Computer Based Facilities Inventory & Utilization Management Information Subsystem.
ERIC Educational Resources Information Center
Lach, Ivan J.
The Illinois Community College Board (ICCB) Facilities Inventory and Utilization subsystem, a part of the ICCB management information system, was designed to provide decision makers with needed information to better manage the facility resources of Illinois community colleges. This subsystem, dependent upon facilities inventory data and course…
Integrating Micro-computers with a Centralized DBMS: ORACLE, SEED AND INGRES
NASA Technical Reports Server (NTRS)
Hoerger, J.
1984-01-01
Users of ADABAS, a relational-like data base management system with its data base programming language (NATURAL), are acquiring microcomputers with hopes of solving their individual word processing, office automation, decision support, and simple data processing problems. As processor speeds, memory sizes, and disk storage capacities increase, individual departments begin to maintain "their own" data base on "their own" microcomputer. This situation can adversely affect several of the primary goals set for implementing a centralized DBMS. In order to avoid this potential problem, these microcomputers must be integrated with the centralized DBMS. An easy to use and flexible means for transferring logical data base files between the central data base machine and microcomputers must be provided. Some of the problems encountered in an effort to accomplish this integration, and possible solutions, are discussed.
Computer validation in toxicology: historical review for FDA and EPA good laboratory practice.
Brodish, D L
1998-01-01
The application of computer validation principles to Good Laboratory Practice is a fairly recent phenomenon. As automated data collection systems have become more common in toxicology facilities, the U.S. Food and Drug Administration and the U.S. Environmental Protection Agency have begun to focus inspections in this area. This historical review documents the development of regulatory guidance on computer validation in toxicology over the past several decades. An overview of the components of a computer life cycle is presented, including the development of systems descriptions, validation plans, validation testing, system maintenance, SOPs, change control, security considerations, and system retirement. Examples are provided for implementation of computer validation principles on laboratory computer systems in a toxicology facility.
A large high vacuum, high pumping speed space simulation chamber for electric propulsion
NASA Technical Reports Server (NTRS)
Grisnik, Stanley P.; Parkes, James E.
1994-01-01
Testing high power electric propulsion devices poses unique requirements on space simulation facilities. Very high pumping speeds are required to maintain high vacuum levels while handling large volumes of exhaust products. These pumping speeds are significantly higher than those available in most existing vacuum facilities. There is also a requirement for relatively large vacuum chamber dimensions to minimize facility wall/thruster plume interactions and to accommodate far field plume diagnostic measurements. A 4.57 m (15 ft) diameter by 19.2 m (63 ft) long vacuum chamber at NASA Lewis Research Center is described. The chamber utilizes oil diffusion pumps in combination with cryopanels to achieve high vacuum pumping speeds at high vacuum levels. The facility is computer controlled for all phases of operation from start-up, through testing, to shutdown. The computer control system increases the utilization of the facility and reduces the manpower requirements needed for facility operations.
Hand-held computer operating system program for collection of resident experience data.
Malan, T K; Haffner, W H; Armstrong, A Y; Satin, A J
2000-11-01
To describe a system for recording resident experience involving hand-held computers with the Palm Operating System (3 Com, Inc., Santa Clara, CA). Hand-held personal computers (PCs) are popular, easy to use, inexpensive, portable, and can share data among other operating systems. Residents in our program carry individual hand-held database computers to record Residency Review Committee (RRC) reportable patient encounters. Each resident's data is transferred to a single central relational database compatible with Microsoft Access (Microsoft Corporation, Redmond, WA). Patient data entry and subsequent transfer to a central database is accomplished with commercially available software that requires minimal computer expertise to implement and maintain. The central database can then be used for statistical analysis or to create required RRC resident experience reports. As a result, the data collection and transfer process takes less time for residents and program director alike, than paper-based or central computer-based systems. The system of collecting resident encounter data using hand-held computers with the Palm Operating System is easy to use, relatively inexpensive, accurate, and secure. The user-friendly system provides prompt, complete, and accurate data, enhancing the education of residents while facilitating the job of the program director.
FACILITY 713, DINING ROOM CABINET DOORS AND DOORS FROM LIVING ...
FACILITY 713, DINING ROOM CABINET DOORS AND DOORS FROM LIVING ROOM TO ENTRY PORCH IN RIGHT BACKGROUND, VIEW FACING NORTHWEST. - Schofield Barracks Military Reservation, Central-Entry Single-Family Housing Type, Between Bragg & Grime Streets near Ayres Avenue, Wahiawa, Honolulu County, HI
24 CFR 266.205 - Ineligible projects.
Code of Federal Regulations, 2013 CFR
2013-04-01
... designed for the elderly with extensive services and luxury accommodations that provide for central kitchens and dining rooms with food service or mandatory services. (d) Nursing homes or intermediate care facilities. Nursing homes and intermediate care facilities licensed and regulated by State or local...
24 CFR 266.205 - Ineligible projects.
Code of Federal Regulations, 2012 CFR
2012-04-01
... designed for the elderly with extensive services and luxury accommodations that provide for central kitchens and dining rooms with food service or mandatory services. (d) Nursing homes or intermediate care facilities. Nursing homes and intermediate care facilities licensed and regulated by State or local...
24 CFR 266.205 - Ineligible projects.
Code of Federal Regulations, 2010 CFR
2010-04-01
... designed for the elderly with extensive services and luxury accommodations that provide for central kitchens and dining rooms with food service or mandatory services. (d) Nursing homes or intermediate care facilities. Nursing homes and intermediate care facilities licensed and regulated by State or local...
24 CFR 266.205 - Ineligible projects.
Code of Federal Regulations, 2011 CFR
2011-04-01
... designed for the elderly with extensive services and luxury accommodations that provide for central kitchens and dining rooms with food service or mandatory services. (d) Nursing homes or intermediate care facilities. Nursing homes and intermediate care facilities licensed and regulated by State or local...
24 CFR 266.205 - Ineligible projects.
Code of Federal Regulations, 2014 CFR
2014-04-01
... designed for the elderly with extensive services and luxury accommodations that provide for central kitchens and dining rooms with food service or mandatory services. (d) Nursing homes or intermediate care facilities. Nursing homes and intermediate care facilities licensed and regulated by State or local...
Computer-Assisted School Facility Planning with ONPASS.
ERIC Educational Resources Information Center
Urban Decision Systems, Inc., Los Angeles, CA.
The analytical capabilities of ONPASS, an on-line computer-aided school facility planning system, are described by its developers. This report describes how, using the Canoga Park-Winnetka-Woodland Hills Planning Area as a test case, the Department of City Planning of the city of Los Angeles employed ONPASS to demonstrate how an on-line system can…
ERIC Educational Resources Information Center
Bender, Evelyn
The American Library Association's Carroll Preston Baber Research Award supported this project on the use, impact and feasibility of a computer assisted writing facility located in the library of Stetson Middle School in Philadelphia, an inner city school with a population of minority, "at risk" students. The writing facility consisted…
Sigma 2 Graphic Display Software Program Description
NASA Technical Reports Server (NTRS)
Johnson, B. T.
1973-01-01
A general purpose, user oriented graphic support package was implemented. A comprehensive description of the two software components comprising this package is given: Display Librarian and Display Controller. These programs have been implemented in FORTRAN on the XDS Sigma 2 Computer Facility. This facility consists of an XDS Sigma 2 general purpose computer coupled to a Computek Display Terminal.
Progressive fracture of fiber composites
NASA Technical Reports Server (NTRS)
Irvin, T. B.; Ginty, C. A.
1983-01-01
Refined models and procedures are described for determining progressive composite fracture in graphite/epoxy angleplied laminates. Lewis Research Center capabilities are utilized including the Real Time Ultrasonic C Scan (RUSCAN) experimental facility and the Composite Durability Structural Analysis (CODSTRAN) computer code. The CODSTRAN computer code is used to predict the fracture progression based on composite mechanics, finite element stress analysis, and fracture criteria modules. The RUSCAN facility, CODSTRAN computer code, and scanning electron microscope are used to determine durability and identify failure mechanisms in graphite/epoxy composites.
VIEW OF BUILDING 122 WHICH HOUSES THE ONSITE MEDICAL FACILITIES ...
VIEW OF BUILDING 122 WHICH HOUSES THE ON-SITE MEDICAL FACILITIES OF THE ROCKY FLATS PLANT AND THE OCCUPATIONAL HEALTH AND INTERNAL DOSIMETRY ORGANIZATIONS. EMERGENCY MEDICAL SERVICES, DIAGNOSIS, DECONTAMINATION, FIRST AID, X-RAY, MINOR SURGICAL TREATMENT, AND AMBULATORY ACTIVITIES ARE CARRIED OUT IN THIS BUILDING. (1/98) - Rocky Flats Plant, Emergency Medical Services Facility, Southwest corner of Central & Third Avenues, Golden, Jefferson County, CO
In-Plant Reuse of Pollution Abated Waters.
1984-08-01
Carbon Treatment Facility Prefilters D-10 Spent Carbon Receiving Tank EZ D-11 Powdered Carbon Feeder System E. Process Chemical Assay/Monitoring...PBA manufacturing complex, several wastewater treatment facilities were built to treat wastewater from various plants. This task deals with...all of which discharge to the Central Treatment Facility (Appendix K-I). The plant is permitted (Appendix I-I) by EPA and consists of a lime/alum
NASA Technical Reports Server (NTRS)
Montegani, F. J.
1974-01-01
Methods of handling one-third-octave band noise data originating from the outdoor full-scale fan noise facility and the engine acoustic facility at the Lewis Research Center are presented. Procedures for standardizing, retrieving, extrapolating, and reporting these data are explained. Computer programs are given which are used to accomplish these and other noise data analysis tasks. This information is useful as background for interpretation of data from these facilities appearing in NASA reports and can aid data exchange by promoting standardization.
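One-third-octave band analysis of the kind described here rests on a simple geometric series of band centre frequencies. As background only (the abstract does not state which convention the Lewis programs used, and both base-2 and base-10 conventions exist), a minimal sketch of the base-2 convention relative to the 1 kHz reference band:

```python
def third_octave_centers(n_low=-17, n_high=13):
    """Exact one-third-octave band centre frequencies in Hz under the
    base-2 convention: f_n = 1000 * 2**(n/3), with n = 0 at the 1 kHz
    reference band. In practice these are rounded to nominal values
    (20, 25, 31.5, ... Hz) for reporting.
    """
    return [1000.0 * 2.0 ** (n / 3.0) for n in range(n_low, n_high + 1)]

# n = -17..13 spans roughly the 20 Hz to 20 kHz audio range
bands = third_octave_centers()
```

Three adjacent bands span exactly one octave (a doubling of frequency), which is why the n = 3 band sits at 2 kHz.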
Code of Federal Regulations, 2010 CFR
2010-01-01
..., which has been approved by RUS, for improving the telecommunications network of those Telecommunications... plant—The facilities that conduct electrical or optical signals between the central office and the subscriber's network interface or between central offices. Performance bond—A surety bond on a form...
Process control charts in infection prevention: Make it simple to make it happen.
Wiemken, Timothy L; Furmanek, Stephen P; Carrico, Ruth M; Mattingly, William A; Persaud, Annuradha K; Guinn, Brian E; Kelley, Robert R; Ramirez, Julio A
2017-03-01
Quality improvement is central to Infection Prevention and Control (IPC) programs. Challenges may occur when applying quality improvement methodologies like process control charts, often due to the limited exposure of typical infection preventionists (IPs). Because of this, our team created an open-source database with a process control chart generator for IPC programs. The objectives of this report are to outline the development of the application and demonstrate application using simulated data. We used Research Electronic Data Capture (REDCap Consortium, Vanderbilt University, Nashville, TN), R (R Foundation for Statistical Computing, Vienna, Austria), and R Studio Shiny (R Foundation for Statistical Computing) to create an open source data collection system with automated process control chart generation. We used simulated data to test and visualize both in-control and out-of-control processes for commonly used metrics in IPC programs. The R code for implementing the control charts and Shiny application can be found on our Web site (https://github.com/ul-research-support/spcapp). Screen captures of the workflow and simulated data indicating both common cause and special cause variation are provided. Process control charts can be easily developed based on individual facility needs using freely available software. Through providing our work free to all interested parties, we hope that others will be able to harness the power and ease of use of the application for improving the quality of care and patient safety in their facilities. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
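The underlying computation for one common chart type, the u-chart for event rates, is compact: three-sigma limits are drawn around the pooled rate, with the limit width depending on each point's exposure denominator. The sketch below is an illustrative Python reimplementation (the authors' actual tool is written in R/Shiny, and the counts here are hypothetical), not their code.

```python
import math

def u_chart_limits(counts, exposures):
    """Three-sigma u-chart centre line and per-point control limits
    for rates such as infections per 1000 device-days.

    counts:    event count per time period.
    exposures: denominator (e.g., device-days) per time period.
    """
    ubar = sum(counts) / sum(exposures)   # pooled rate = centre line
    limits = []
    for n in exposures:
        se = math.sqrt(ubar / n)          # Poisson standard error of the rate
        lcl = max(0.0, ubar - 3.0 * se)   # rates cannot go below zero
        ucl = ubar + 3.0 * se
        limits.append((lcl, ucl))
    return ubar, limits

# Hypothetical monthly data: events and device-days
ubar, limits = u_chart_limits([2, 3, 1, 6], [100, 110, 95, 105])
```

A point is flagged as special-cause variation when its observed rate falls outside its (lcl, ucl) pair; because the limits depend on each period's exposure, periods with less exposure get wider limits.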
Hazardous Waste Cleanup: HOVENSA, LLC in Christiansted, U.S. Virgin Islands
The HOVENSA facility (the facility) is located at Limetree Bay, St. Croix, U.S. Virgin Islands. It is a petroleum refinery covering 1,500 acres in what is known as South Industrial Complex, on the south central coast of St. Croix.
7 CFR 1710.251 - Construction work plans-distribution borrowers.
Code of Federal Regulations, 2011 CFR
2011-01-01
... generation facilities; (11) Load management equipment, automatic sectionalizing facilities, and centralized... transmission plant, and improvements, replacements, and retirements of any generation plant. Construction of new generation capacity need not be included in a CWP but must be specified and supported by specific engineering...
2004-02-19
KENNEDY SPACE CENTER, FLA. -- U.S. Representative Ric Keller (left) listens intently to a presentation proposing the use of the Central Florida Research Park, near Orlando, as the site of NASA’s new Shared Services Center. NASA and Florida officials toured the research park as well. Central Florida leaders are proposing the research park as the site for the center, which would centralize NASA’s payroll, accounting, human resources, facilities and procurement offices that are now handled at each field center. The consolidation is part of the One NASA focus. Six sites around the U.S. are under consideration by NASA.
NASA Technical Reports Server (NTRS)
Romere, Paul O.; Brown, Steve Wesley
1995-01-01
Development of the Space Shuttle necessitated an extensive wind tunnel test program, with the cooperation of all the major wind tunnels in the United States. The result was approximately 100,000 hours of Space Shuttle wind tunnel testing conducted for aerodynamics, heat transfer, and structural dynamics. The test results were converted into Chrysler DATAMAN computer program format to facilitate use by analysts, a very cost effective method of collecting the wind tunnel test results from many test facilities into one centralized location. This report provides final documentation of the Space Shuttle wind tunnel program. The two-volume set covers the evolution of Space Shuttle aerodynamic configurations and gives wind tunnel test data, titles of wind tunnel data reports, sample data sets, and instructions for accessing the digital data base.
Dial-in flow cytometry data analysis.
Battye, Francis L
2002-02-01
As listmode data files continue to grow larger, access via any kind of network connection becomes more and more troublesome because of the enormous traffic generated. The limited speed of transmission via modem makes analysis almost impossible. This unit presents a solution to these problems, one that involves installing, at the central storage facility, a small computer program called a Web servlet. Operating in concert with a Web server, the servlet assists the analysis by extracting the display array from the data file and organizing its transmission over the network to a remote client program that creates the data display. The author discusses a recent implementation of this solution and the results for model transmission of two typical data files. The system greatly speeds access to remotely stored data yet retains the flexibility of manipulation expected with local access.
Battery cars on superconducting magnetically levitated carriers: One commuting solution
NASA Technical Reports Server (NTRS)
Briggs, B. Mike; Oman, Henry
1992-01-01
Commuting to work in an urban-suburban metropolitan environment is becoming an unpleasant, time-wasting process. We applied the technology of communication management to this commuting problem. Communication management is a system-engineering tool that produced today's efficient telephone network. The resulting best commuting option is magnetically levitated carriers of two-passenger, battery-powered, personally owned local-travel cars. A commuter drives a car to a nearby station, selects a destination, drives onto a waiting carrier, and enters an accelerating ramp. A central computer selects an optimum 100-mile-per-hour trunk route, considering existing and forecast traffic, assigns the commuter a travel slot, and subsequently orders switching-station actions. The commuter uses the expensive facilities for only a few minutes during each trip. The cost of travel could be less than 6 cents per mile.
Nonequilibrium Supersonic Freestream Studied Using Coherent Anti-Stokes Raman Spectroscopy
NASA Technical Reports Server (NTRS)
Cutler, Andrew D.; Cantu, Luca M.; Gallo, Emanuela C. A.; Baurle, Rob; Danehy, Paul M.; Rockwell, Robert; Goyne, Christopher; McDaniel, Jim
2015-01-01
Measurements were conducted at the University of Virginia Supersonic Combustion Facility of the flow in a constant-area duct downstream of a Mach 2 nozzle. The airflow was heated to approximately 1200 K in the facility heater upstream of the nozzle. Dual-pump coherent anti-Stokes Raman spectroscopy was used to measure the rotational and vibrational temperatures of N2 and O2 at two planes in the duct. The expectation was that the vibrational temperature would be in equilibrium, because most scramjet facilities are vitiated air facilities and are in vibrational equilibrium. However, with a flow of clean air, the vibrational temperature of N2 along a streamline remains approximately constant between the measurement plane and the facility heater, the vibrational temperature of O2 in the duct is about 1000 K, and the rotational temperature is consistent with the isentropic flow. The measurements of N2 vibrational temperature enabled cross-stream nonuniformities in the temperature exiting the facility heater to be documented. The measurements are in agreement with computational fluid dynamics models employing separate lumped vibrational and translational/rotational temperatures. Measurements and computations are also reported for a few percent steam addition to the air. The effect of the steam is to bring the flow to thermal equilibrium, also in agreement with the computational fluid dynamics models.
JESS facility modification and environmental/power plans
NASA Technical Reports Server (NTRS)
Bordeaux, T. A.
1984-01-01
Preliminary plans for facility modifications and environmental/power systems for the JESS (Joint Exercise Support System) computer laboratory and Freedom Hall are presented. Blueprints are provided for each of the facilities and an estimate of the air conditioning requirements is given.
Ergonomic and Anthropometric Considerations of the Use of Computers in Schools by Adolescents
ERIC Educational Resources Information Center
Jermolajew, Anna M.; Newhouse, C. Paul
2003-01-01
Over the past decade there has been an explosion in the provision of computing facilities in schools for student use. However, there is concern that the development of these facilities has often given little regard to the ergonomics of the design for use by children, particularly adolescents. This paper reports on a study that investigated the…
47 CFR 73.208 - Reference points and distance computations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... SERVICES RADIO BROADCAST SERVICES FM Broadcast Stations § 73.208 Reference points and distance computations... filed no later than: (i) The last day of a filing window if the application is for a new FM facility or...(d) and 73.3573(e) if the application is for a new FM facility or a major change in the reserved band...
47 CFR 73.208 - Reference points and distance computations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SERVICES RADIO BROADCAST SERVICES FM Broadcast Stations § 73.208 Reference points and distance computations... filed no later than: (i) The last day of a filing window if the application is for a new FM facility or...(d) and 73.3573(e) if the application is for a new FM facility or a major change in the reserved band...
117. Back side technical facilities S.R. radar transmitter & computer ...
117. Back side technical facilities S.R. radar transmitter & computer building no. 102, "building sections - sheet I" - architectural, AS-BLT AW 35-46-04, sheet 12, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
122. Back side technical facilities S.R. radar transmitter & computer ...
122. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "elevations & details" - structural, AS-BLT AW 35-46-04, sheet 73, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
118. Back side technical facilities S.R. radar transmitter & computer ...
118. Back side technical facilities S.R. radar transmitter & computer building no. 102, "building sections - sheet I" - architectural, AS-BLT AW 35-46-04, sheet 13, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
121. Back side technical facilities S.R. radar transmitter & computer ...
121. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "sections & elevations" - structural, AS-BLT AW 35-46-04, sheet 72, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
40 CFR 437.1 - General applicability.
Code of Federal Regulations, 2010 CFR
2010-07-01
... centralized silver recovery from used photographic or x-ray materials activities. The discharge resulting from centralized silver recovery from used photographic or x-ray materials that is treated at a CWT facility along... Nickel Subcategory), Subpart X (Secondary Precious Metals Subcategory), Subpart Z (Secondary Tantalum...
40 CFR 437.1 - General applicability.
Code of Federal Regulations, 2011 CFR
2011-07-01
... centralized silver recovery from used photographic or x-ray materials activities. The discharge resulting from centralized silver recovery from used photographic or x-ray materials that is treated at a CWT facility along... Nickel Subcategory), subpart X (Secondary Precious Metals Subcategory), subpart Z (Secondary Tantalum...
KSC Headquarters Building Groundbreaking Ceremony
2014-10-07
Groundbreaking for the new Central Campus will take place in the Industrial Area at NASA's Kennedy Space Center in Florida. A scale model of the new facility and landscaping is on display for the ceremony. Kennedy is transforming into a multi-user, 21st century spaceport supporting both commercial and government users and operations. Central Campus Phase I includes construction of a new Headquarters Building as one of the major components of the strategy. The new Headquarters Building will be a seven-story, 200,000-square-foot facility that will house about 500 NASA civil service and contractor employees.
Making Cloud Computing Available For Researchers and Innovators (Invited)
NASA Astrophysics Data System (ADS)
Winsor, R.
2010-12-01
High Performance Computing (HPC) facilities exist in most academic institutions but are almost invariably over-subscribed. Access is allocated based on academic merit, the only practical method of assigning valuable finite compute resources. Cloud computing, on the other hand, and particularly the commercial cloud, draws flexibly on an almost limitless resource as long as the user has sufficient funds to pay the bill. How can the commercial cloud model be applied to scientific computing? Is there a case to be made for a publicly available research cloud, and how would it be structured? This talk will explore these themes and describe how Cybera, a not-for-profit non-governmental organization in Alberta, Canada, aims to leverage its high speed research and education network to provide cloud computing facilities for a much wider user base.
The development of the Canadian Mobile Servicing System Kinematic Simulation Facility
NASA Technical Reports Server (NTRS)
Beyer, G.; Diebold, B.; Brimley, W.; Kleinberg, H.
1989-01-01
Canada will develop a Mobile Servicing System (MSS) as its contribution to the U.S./International Space Station Freedom. Components of the MSS will include a remote manipulator (SSRMS), a Special Purpose Dexterous Manipulator (SPDM), and a mobile base (MRS). In order to support requirements analysis and the evaluation of operational concepts related to the use of the MSS, a graphics-based kinematic simulation/human-computer interface facility has been created. The facility consists of the following elements: (1) A two-dimensional graphics editor allowing the rapid development of virtual control stations; (2) Kinematic simulations of the space station remote manipulators (SSRMS and SPDM), and mobile base; and (3) A three-dimensional graphics model of the space station, MSS, orbiter, and payloads. These software elements combined with state-of-the-art computer graphics hardware provide the capability to prototype MSS workstations, evaluate MSS operational capabilities, and investigate the human-computer interface in an interactive simulation environment. The graphics technology involved in the development and use of this facility is described.
High-Performance Computing and Visualization | Energy Systems Integration Facility | NREL
High-performance computing (HPC) and visualization at NREL propel technology innovation. NREL is home to Peregrine, the largest high-performance computing system...
EPA Facility Registry Service (FRS): OIL
This dataset contains location and facility identification information from EPA's Facility Registry Service (FRS) for the subset of facilities that link to the Oil database. The Oil database contains information on Spill Prevention, Control, and Countermeasure (SPCC) and Facility Response Plan (FRP) subject facilities to prevent and respond to oil spills. FRP facilities are referred to as substantial harm facilities due to the quantities of oil stored and facility characteristics. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using vigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. This data set contains the subset of FRS integrated facilities that link to Oil facilities once the Oil data has been integrated into the FRS database. Additional information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs.
Room To Grow? Facilities Programming for Colleges and Universities.
ERIC Educational Resources Information Center
Thompson, Roger; Adams, Tom
2001-01-01
Asserts that campus space needs could be remedied by moving centrally located service delivery organizations, such as fleet vehicle maintenance facilities. Describes the process of operational and space needs assessment; this process provides information that enables architects to plan for appropriate adjacencies, correct space allocation, and…
7 CFR 3560.408 - Lease of security property.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Lease of security property. 3560.408 Section 3560.408 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING SERVICE, DEPARTMENT OF... facilities related to a housing project (e.g., central kitchens, recreation facilities, laundry rooms, and...
FACILITY 713, LIVING ROOM SHOWING DIAMOND-PANED WINDOWS FLANKING THE FIREPLACE, ...
FACILITY 713, LIVING ROOM SHOWING DIAMOND-PANED WINDOWS FLANKING THE FIREPLACE, AND LEADED-GLASS WINDOWS IN DINING ROOM IN RIGHT BACKGROUND, VIEW FACING SOUTHEAST. - Schofield Barracks Military Reservation, Central-Entry Single-Family Housing Type, Between Bragg & Grime Streets near Ayres Avenue, Wahiawa, Honolulu County, HI
Public computing options for individuals with cognitive impairments: survey outcomes.
Fox, Lynn Elizabeth; Sohlberg, McKay Moore; Fickas, Stephen; Lemoncello, Rik; Prideaux, Jason
2009-09-01
To examine availability and accessibility of public computing for individuals with cognitive impairment (CI) who reside in the USA. A telephone survey was administered as a semi-structured interview to 145 informants representing seven types of public facilities across three geographically distinct regions using a snowball sampling technique. An Internet search of wireless (Wi-Fi) hotspots supplemented the survey. Survey results showed the availability of public computer terminals and Internet hotspots was greatest in the urban sample, followed by the mid-sized and rural cities. Across seven facility types surveyed, libraries had the highest percentage of access barriers, including complex queue procedures, login and password requirements, and limited technical support. University assistive technology centres and facilities with a restricted user policy, such as brain injury centres, had the lowest incidence of access barriers. Findings suggest optimal outcomes for people with CI will result from a careful match of technology and the user that takes into account potential barriers and opportunities to computing in an individual's preferred public environments. Trends in public computing, including the emergence of widespread Wi-Fi and limited access to terminals that permit auto-launch applications, should guide development of technology designed for use in public computing environments.
A Benders based rolling horizon algorithm for a dynamic facility location problem
Marufuzzaman,, Mohammad; Gedik, Ridvan; Roni, Mohammad S.
2016-06-28
This study presents a well-known capacitated dynamic facility location problem (DFLP) that satisfies customer demand at minimum cost by determining the time period for opening, closing, or retaining an existing facility in a given location. To solve this challenging NP-hard problem, this paper develops a unique hybrid solution algorithm that combines a rolling horizon algorithm with an accelerated Benders decomposition algorithm. Extensive computational experiments are performed on benchmark test instances to evaluate the hybrid algorithm’s efficiency and robustness in solving the DFLP. Computational results indicate that the hybrid Benders based rolling horizon algorithm consistently offers high quality feasible solutions in a much shorter computational time than the standalone rolling horizon and accelerated Benders decomposition algorithms over the experimental range.
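The rolling-horizon idea can be illustrated on a toy single-facility problem: solve a short look-ahead window, commit only the first period's decision, and roll forward. The window solver here is brute-force enumeration rather than the paper's accelerated Benders decomposition, and the costs, window length, and function names are all illustrative assumptions:

```python
# Toy rolling horizon for a dynamic facility location sketch: one open
# facility per period, per-period serving costs, and a fixed cost for
# relocating between sites.
from itertools import product

def solve_window(serve, switch_cost, prev):
    """Brute-force the cheapest facility sequence for one window."""
    sites = range(len(serve[0]))
    best, best_seq = float("inf"), None
    for seq in product(sites, repeat=len(serve)):
        cost, last = 0.0, prev
        for t, f in enumerate(seq):
            cost += serve[t][f]
            if last is not None and f != last:
                cost += switch_cost      # relocation penalty
            last = f
        if cost < best:
            best, best_seq = cost, seq
    return best_seq

def rolling_horizon(serve, switch_cost=5.0, window=2):
    plan, prev, t = [], None, 0
    while t < len(serve):
        seq = solve_window(serve[t:t + window], switch_cost, prev)
        plan.append(seq[0])              # fix only the first decision, then roll
        prev = seq[0]
        t += 1
    return plan

# Serving cost of each of two candidate sites in each of 4 periods.
costs = [[1.0, 9.0], [1.0, 9.0], [9.0, 1.0], [9.0, 1.0]]
print(rolling_horizon(costs))            # → [0, 0, 1, 1]
```

The look-ahead lets the heuristic see the cost shift coming and relocate once, which is the behavior the hybrid algorithm exploits at scale.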
Development and application of computational aerothermodynamics flowfield computer codes
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj
1994-01-01
Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermo-chemical nonequilibrium flow (Aerothermodynamics) problems. A number of computational fluid dynamic (CFD) codes were developed and applied to simulate high altitude rocket-plume, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp model (SERN) connected with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock induced combustion phenomena, high enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows on interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling. In addition, computational tools were developed and applied to predict the radiative heat flux and spectra that reach the model surface.
Key Issues in Instructional Computer Graphics.
ERIC Educational Resources Information Center
Wozny, Michael J.
1981-01-01
Addresses key issues facing universities which plan to establish instructional computer graphics facilities, including computer-aided design/computer aided manufacturing systems, role in curriculum, hardware, software, writing instructional software, faculty involvement, operations, and research. Thirty-seven references and two appendices are…
EPA'S METAL FINISHING FACILITY POLLUTION PREVENTION TOOL - 2002
To help metal finishing facilities meet the goal of profitable pollution prevention, the USEPA is developing the Metal Finishing Facility Pollution Prevention Tool (MFFP2T), a computer program that estimates the rates of solid and liquid waste generation and air emissions. This progr...
Telecommunications and Data Communication in Korea.
ERIC Educational Resources Information Center
Ahn, Moon-Suk
All facilities of the Ministry of Communications of Korea, which monopolizes telecommunications services in the country, are listed and described. Both domestic facilities, including long-distance telephone and telegraph circuits, and international connections are included. Computer facilities are also listed. The nation's regulatory policies are…
AstroGrid-D: Grid technology for astronomical science
NASA Astrophysics Data System (ADS)
Enke, Harry; Steinmetz, Matthias; Adorf, Hans-Martin; Beck-Ratzka, Alexander; Breitling, Frank; Brüsemeister, Thomas; Carlson, Arthur; Ensslin, Torsten; Högqvist, Mikael; Nickelt, Iliya; Radke, Thomas; Reinefeld, Alexander; Reiser, Angelika; Scholl, Tobias; Spurzem, Rainer; Steinacker, Jürgen; Voges, Wolfgang; Wambsganß, Joachim; White, Steve
2011-02-01
We present status and results of AstroGrid-D, a joint effort of astrophysicists and computer scientists to employ grid technology for scientific applications. AstroGrid-D provides access to a network of distributed machines with a set of commands as well as software interfaces. It allows simple use of compute and storage facilities, and supports scheduling and monitoring of compute tasks and data management. It is based on the Globus Toolkit middleware (GT4). Chapter 1 describes the context which led to the demand for advanced software solutions in Astrophysics, and we state the goals of the project. We then present characteristic astrophysical applications that have been implemented on AstroGrid-D in chapter 2. We describe simulations of different complexity, compute-intensive calculations running on multiple sites (Section 2.1), and advanced applications for specific scientific purposes (Section 2.2), such as a connection to robotic telescopes (Section 2.2.3). We can show from these examples how grid execution improves e.g. the scientific workflow. Chapter 3 explains the software tools and services that we adapted or newly developed. Section 3.1 is focused on the administrative aspects of the infrastructure, to manage users and monitor activity. Section 3.2 characterises the central components of our architecture: the AstroGrid-D information service to collect and store metadata, a file management system, the data management system, and a job manager for automatic submission of compute tasks. We summarise the successfully established infrastructure in chapter 4, concluding with our future plans to establish AstroGrid-D as a platform of modern e-Astronomy.
Use of cloud computing technology in natural hazard assessment and emergency management
NASA Astrophysics Data System (ADS)
Webley, P. W.; Dehn, J.
2015-12-01
During a natural hazard event, the most up-to-date data needs to be in the hands of those on the front line. Decision support system tools can be developed to provide access to pre-made outputs to quickly assess the hazard and potential risk. However, with the ever growing availability of new satellite data as well as ground and airborne data generated in real-time, there is a need to analyze the large volumes of data in an easy-to-access and effective environment. With the growth in the use of cloud computing, where the analysis and visualization system can grow with the needs of the user, these facilities can be used to provide this real-time analysis. Think of a central command center uploading the data to the cloud compute system and then researchers in-the-field connecting to a web-based tool to view the newly acquired data. New data can be added by any user and then viewed instantly by anyone else in the organization through the cloud computing interface. This provides the ideal tool for collaborative data analysis, hazard assessment and decision making. We present the rationale for developing a cloud computing system and illustrate how this tool can be developed for use in real-time environments. Users would have access to an interactive online image analysis tool without the need for specific remote sensing software on their local system, thereby increasing their understanding of the ongoing hazard and mitigating its impact on the surrounding region.
Percolation Centrality: Quantifying Graph-Theoretic Impact of Nodes during Percolation in Networks
Piraveenan, Mahendra; Prokopenko, Mikhail; Hossain, Liaquat
2013-01-01
A number of centrality measures are available to determine the relative importance of a node in a complex network, and betweenness is prominent among them. However, the existing centrality measures are not adequate in network percolation scenarios (such as during infection transmission in a social network of individuals, spreading of computer viruses on computer networks, or transmission of disease over a network of towns) because they do not account for the changing percolation states of individual nodes. We propose a new measure, percolation centrality, that quantifies relative impact of nodes based on their topological connectivity, as well as their percolation states. The measure can be extended to include random walk based definitions, and its computational complexity is shown to be of the same order as that of betweenness centrality. We demonstrate the usage of percolation centrality by applying it to a canonical network as well as simulated and real world scale-free and random networks. PMID:23349699
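The measure weights each source's shortest-path contribution by its percolation state. A brute-force sketch for small unweighted graphs follows, using the standard identity that the shortest s-r paths through v number σ(s,v)·σ(v,r) when d(s,v)+d(v,r)=d(s,r); the graph, percolation states, and helper names are illustrative assumptions, not the paper's implementation:

```python
# Brute-force percolation centrality:
#   PC(v) = 1/(N-2) * sum_{s != v != r} (sigma_sr(v)/sigma_sr)
#                                       * x_s / (sum_i x_i - x_v)
# where x_i is node i's percolation state in [0, 1].
from collections import deque

def bfs_counts(adj, src):
    """Shortest-path distances and path counts from src (unweighted)."""
    dist, paths, q = {src: 0}, {src: 1}, deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                paths[w] = 0
                q.append(w)
            if dist[w] == dist[u] + 1:
                paths[w] += paths[u]
    return dist, paths

def percolation_centrality(adj, x):
    nodes = list(adj)
    n = len(nodes)
    bfs = {s: bfs_counts(adj, s) for s in nodes}
    total_x = sum(x.values())
    pc = {}
    for v in nodes:
        denom = total_x - x[v]
        if n <= 2 or denom <= 0:
            pc[v] = 0.0
            continue
        d_v, sig_v = bfs[v]
        score = 0.0
        for s in nodes:
            if s == v:
                continue
            d_s, sig_s = bfs[s]
            for r in nodes:
                if r in (s, v) or r not in d_s:
                    continue
                # sigma_sr(v): shortest s-r paths passing through v.
                if v in d_s and r in d_v and d_s[v] + d_v[r] == d_s[r]:
                    score += (sig_s[v] * sig_v[r] / sig_s[r]) * x[s] / denom
        pc[v] = score / (n - 2)
    return pc

adj = {0: [1], 1: [0, 2], 2: [1]}        # simple path graph 0-1-2
print(percolation_centrality(adj, {0: 1.0, 1: 0.5, 2: 0.0}))
# → {0: 0.0, 1: 1.0, 2: 0.0}
```

With uniform states the measure reduces to (normalized) betweenness; here the middle node scores highest because it sits on the only path out of the percolated source.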
Overview of the NASA Dryden Flight Research Facility aeronautical flight projects
NASA Technical Reports Server (NTRS)
Meyer, Robert R., Jr.
1992-01-01
Several principal aerodynamics flight projects of the NASA Dryden Flight Research Facility are discussed. Key vehicle technology areas from a wide range of flight vehicles are highlighted. These areas include flight research data obtained for ground facility and computation correlation, applied research in areas not well suited to ground facilities (wind tunnels), and concept demonstration.
Sea/Lake Water Air Conditioning at Naval Facilities.
1980-05-01
ECONOMICS AT TWO FACILITIES: Facilities; Computer Models ... of an operational test at Naval Security Group Activity (NSGA) Winter Harbor, Me., and the economics of Navywide application. In FY76 an assessment of ... economics of Navywide application of sea/lake water AC indicated that cost and energy savings at the sites of some Naval facilities are possible, depending...
Ogata, Y; Nishizawa, K
1995-10-01
An automated smear counting and data processing system for a life science laboratory was developed to facilitate routine surveys and eliminate human errors by using a notebook computer. This system was composed of a personal computer, a liquid scintillation counter and a well-type NaI(Tl) scintillation counter. The radioactivity of smear samples was automatically measured by these counters. The personal computer received raw signals from the counters through an interface of RS-232C. The software for the computer evaluated the surface density of each radioisotope and printed out that value along with other items as a report. The software was programmed in Pascal language. This system was successfully applied to routine surveys for contamination in our facility.
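The surface-density evaluation such a system performs reduces to a standard wipe-test formula: net count rate, corrected for counter efficiency and the fraction of removable activity picked up by the wipe, divided by the area wiped. The efficiency, removal fraction, and wipe area below are assumed illustrative values, not those of the cited facility:

```python
# Hedged sketch of a smear (wipe) survey surface-density calculation.
# All parameter defaults are assumptions for illustration.

def surface_density_bq_per_cm2(gross_cpm, bkg_cpm,
                               efficiency=0.30,        # counts per decay (assumed)
                               removal_fraction=0.10,  # activity transferred to wipe
                               area_cm2=100.0):        # surface area wiped
    net_cps = (gross_cpm - bkg_cpm) / 60.0       # counts/s above background
    activity_bq = net_cps / efficiency           # decays/s collected on the smear
    return activity_bq / (removal_fraction * area_cm2)

# 1200 cpm gross, 300 cpm background on a 100 cm^2 wipe:
print(surface_density_bq_per_cm2(1200, 300))     # → 5.0  (Bq/cm^2)
```

A reporting program like the one described would apply this per radioisotope and flag any value exceeding the facility's contamination limit.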
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-23
... Facility: Central New York Oil and Gas Company, LLC (Susquehanna River), Wilmot Township, Bradford County...: Central New York Oil and Gas Company, LLC, Wilmot Township, Bradford County, Pa. Application for... hearing are both new projects and certain projects that were acted upon at the Commission's December 15...
Icing simulation: A survey of computer models and experimental facilities
NASA Technical Reports Server (NTRS)
Potapczuk, M. G.; Reinmann, J. J.
1991-01-01
A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include computer code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focused on upgrading the simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced airfoil aerodynamics is examined.
Jack Rabbit Pretest 2021E PT3 Photonic Doppler Velocimetry Data Volume 3 Section 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, M M; Strand, O T; Bosson, S T
The Jack Rabbit Pretest (PT) 2021E PT3 was fired on March 12, 2008 at the Contained Firing Facility, Site 300, Lawrence Livermore National Laboratory. This experiment is part of an effort to determine the properties of LX-17 in a regime where corner-turning behavior and dead-zone formation are not well understood. Photonic Doppler Velocimetry (PDV) measured diagnostic plate velocities confirming the presence of a persistent LX-17 dead-zone formation and the resultant impulse gradient applied under the diagnostic plate. The Jack Rabbit Pretest 2021E PT3, 120 millimeter diameter experiment returned data on all eight PDV probes. The probes measured on the central axis and at 10, 20, 25, 30, 35, 40, 50 millimeters from the central axis. The experiment was shot at an ambient room temperature of 65 degrees Fahrenheit. The earliest PDV signal extinction was 41.7 microseconds at 30 millimeters. The latest PDV signal extinction time was 65.0 microseconds at 10 millimeters. The measured velocity ranged from meters per second to thousands of meters per second. First detonation wave induced jump-off was measured at 40 millimeters at 10.9 microseconds. The PDV data provided an unambiguous indication of dead-zone formation and an impulse gradient applied to the diagnostic plate. The central axis had a last measured velocity of 1636 meters per second. At 40 millimeters the last measured velocity was 2056 meters per second. The low-to-high velocity ratio was 0.80. Velocity data was integrated to compute diagnostic plate cross section profiles. Velocity data was differentiated to compute a peak pressure under the diagnostic plate at the central axis of 64.6 kilobars at 15.7 microseconds. Substantial motion (>1 m/s) of the diagnostic plate over the dead-zone is followed by detonation region motion within approximately 2.2 microseconds.
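The velocity reduction described in this abstract — integrate the PDV record to get plate displacement, differentiate it to estimate pressure — can be sketched numerically. This is a hedged illustration only: the waveform, plate density, and thickness below are invented, and the pressure model p = ρ·h·dv/dt (areal mass times acceleration) is an assumption, not necessarily the authors' actual reduction.

```python
import numpy as np

# Hypothetical PDV record for one probe: time (s) vs plate velocity (m/s).
t = np.linspace(0.0, 50e-6, 501)              # 0-50 microseconds
v = 1600.0 * (1.0 - np.exp(-(t / 5e-6)))      # illustrative ramp toward ~1600 m/s

# Integrate velocity (trapezoid rule) to get plate displacement, one point
# of a cross-section profile.
displacement = np.concatenate(
    ([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * np.diff(t))))

# Differentiate velocity to get acceleration, then estimate pressure as
# p = rho * h * dv/dt (areal mass times acceleration) -- assumed values.
rho = 8930.0        # plate density, kg/m^3 (assumed copper)
h = 2.0e-3          # plate thickness, m (assumed)
accel = np.gradient(v, t)
pressure_kbar = rho * h * accel / 1e8   # 1 kbar = 1e8 Pa
peak_kbar = pressure_kbar.max()
```

With these invented parameters the peak lands in the tens-of-kilobars range, the same order as the values quoted in the abstracts, but that agreement is incidental to the illustration.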
Jack Rabbit Pretest 2021E PT4 Photonic Doppler Velocimetry Data Volume 4 Section 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, M M; Strand, O T; Bosson, S T
The Jack Rabbit Pretest (PT) 2021E PT4 was fired on March 19, 2008 at the Contained Firing Facility, Site 300, Lawrence Livermore National Laboratory. This experiment is part of an effort to determine the properties of LX-17 in a regime where corner-turning behavior and dead-zone formation are not well understood. Photonic Doppler Velocimetry (PDV) measured diagnostic plate velocities confirming the presence of a persistent LX-17 dead-zone formation and the resultant impulse gradient applied under the diagnostic plate. The Jack Rabbit Pretest 2021E PT4, 120 millimeter diameter experiment returned data on all eight PDV probes. The probes measured on the central axis and at 10, 20, 25, 30, 35, 40, 50 millimeters from the central axis. The experiment was shot at an ambient room temperature of 64 degrees Fahrenheit. The earliest PDV signal extinction was 44.9 microseconds at 30 millimeters. The latest PDV signal extinction time was 69.5 microseconds at 10 millimeters. The measured velocity ranged from meters per second to thousands of meters per second. First detonation wave induced jump-off was measured at 50 millimeters at 13.3 microseconds. The PDV data provided an unambiguous indication of dead-zone formation and an impulse gradient applied to the diagnostic plate. The central axis had a last measured velocity of 1558 meters per second. At 40 millimeters the last measured velocity was 2019 meters per second. The low-to-high velocity ratio was 0.77. Velocity data was integrated to compute diagnostic plate cross section profiles. Velocity data was differentiated to compute a peak pressure under the diagnostic plate at the central axis of 98.6 kilobars at 15.0 microseconds. Substantial motion (>1 m/s) of the diagnostic plate over the dead-zone is followed by detonation region motion within approximately 0.7 microseconds.
Jack Rabbit Pretest 2021E PT5 Photonic Doppler Velocimetry Data Volume 5 Section 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, M M; Strand, O T; Bosson, S T
The Jack Rabbit Pretest (PT) 2021E PT5 was fired on March 17, 2008 at the Contained Firing Facility, Site 300, Lawrence Livermore National Laboratory. This experiment is part of an effort to determine the properties of LX-17 in a regime where corner-turning behavior and dead-zone formation are not well understood. Photonic Doppler Velocimetry (PDV) measured diagnostic plate velocities confirming the presence of a persistent LX-17 dead-zone formation and the resultant impulse gradient applied under the diagnostic plate. The Jack Rabbit Pretest 2021E PT5, 160 millimeter diameter experiment returned data on all eight PDV probes. The probes measured on the central axis and at 20, 30, 35, 45, 55, 65, 75 millimeters from the central axis. The experiment was shot at an ambient room temperature of 65 degrees Fahrenheit. The earliest PDV signal extinction was 40.0 microseconds at 45 millimeters. The latest PDV signal extinction time was 64.9 microseconds at 20 millimeters. The measured velocity ranged from meters per second to thousands of meters per second. First detonation wave induced jump-off was measured at 55 millimeters at 12.8 microseconds. The PDV data provided an unambiguous indication of dead-zone formation and an impulse gradient applied to the diagnostic plate. The central axis had a last measured velocity of 1877 meters per second. At 65 millimeters the last measured velocity was 2277 meters per second. The low-to-high velocity ratio was 0.82. Velocity data was integrated to compute diagnostic plate cross section profiles. Velocity data was differentiated to compute a peak pressure under the diagnostic plate at the central axis of 78 kilobars at 11.9 and 21.2 microseconds. Substantial motion (>1 m/s) of the diagnostic plate over the dead-zone is followed by detonation region motion within approximately 4.1 microseconds.
Parallel computing in enterprise modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.
2008-08-01
This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
Singh, Tarundeep; Roy, Pritam; Jamir, Limalemla; Gupta, Saurav; Kaur, Navpreet; Jain, D. K.; Kumar, Rajesh
2016-01-01
Objective: A rapid survey was carried out in Shaheed Bhagat Singh Nagar District of Punjab state in India to ascertain health-seeking behavior and out-of-pocket health expenditures. Methods: Using a multistage cluster sampling design, 1,008 households (28 clusters x 36 households in each cluster) were selected proportionately from urban and rural areas. Households whose members had (a) experienced illness in the past 30 days, (b) had illness lasting longer than 30 days, (c) been hospitalized in the past 365 days, or (d) included women who were currently pregnant or had experienced childbirth in the past two years were selected through a house-to-house survey during April and May 2014. In these selected households, trained investigators, using a tablet computer-based structured questionnaire, enquired about socio-demographics, nature of illness, source of healthcare, and healthcare and household expenditure. The data was transmitted daily to a central server over a wireless communication network. Mean healthcare expenditures were computed for various health conditions. Catastrophic healthcare expenditure was defined as spending more than 10% of the total annual household expenditure on healthcare. The chi-square test for trend was used to compare catastrophic expenditures on hospitalization between households classified into expenditure quartiles. Results: The mean monthly household expenditure was 15,029 Indian Rupees (USD 188.2). Nearly 14.2% of the household expenditure was on healthcare. Fever, respiratory tract diseases, and gastrointestinal diseases were the common acute illnesses, while heart disease, diabetes mellitus, and respiratory diseases were the more common chronic diseases. Hospitalizations were mainly due to cardiovascular diseases, gastrointestinal problems, and accidents. Only 17%, 18%, 20% and 31% of the healthcare for acute illnesses, chronic illnesses, hospitalizations and childbirth, respectively, was sought in government health facilities.
Average expenditure in government health facilities was 16.6% less for acute care, 15% less for hospitalization, and 50% less for childbirth than in private healthcare facilities. Out-of-pocket expenditure was mostly on medicines, followed by diagnostic and laboratory tests. Among households experiencing hospitalization, 56.5% had incurred catastrophic expenditures, which were significantly more common in the poorest compared to the richest household expenditure quartile (p < 0.002). Conclusions: Expenditure on healthcare remains high in Punjab state of India. Efforts to increase utilization of the public sector could decrease out-of-pocket healthcare expenditure. PMID:27351743
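The catastrophic-expenditure rule used in this survey (health spending above 10% of total annual household expenditure) is simple threshold arithmetic; a minimal sketch, with illustrative figures rather than survey data:

```python
# A household's health spending is "catastrophic" when it exceeds 10% of
# total annual household expenditure (the survey's definition).
CATASTROPHIC_SHARE = 0.10

def is_catastrophic(annual_health_spend, annual_total_spend,
                    threshold=CATASTROPHIC_SHARE):
    """True when health spending exceeds the threshold share of total spend."""
    return annual_health_spend > threshold * annual_total_spend

# Using the survey's mean monthly expenditure of 15,029 INR as the base:
annual_total = 15029 * 12                      # 180,348 INR per year
print(is_catastrophic(25000, annual_total))    # 25,000 > 18,034.8 -> True
print(is_catastrophic(15000, annual_total))    # 15,000 < 18,034.8 -> False
```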
Automatic Mexican sign language and digits recognition using normalized central moments
NASA Astrophysics Data System (ADS)
Solís, Francisco; Martínez, David; Espinosa, Oscar; Toxqui, Carina
2016-09-01
This work presents a framework for automatic Mexican sign language and digit recognition based on a computer vision system using normalized central moments and artificial neural networks. Images are captured by a digital IP camera with four LED reflectors and a green background in order to reduce computational cost and avoid the use of special gloves. 42 normalized central moments are computed per frame and fed to a Multi-Layer Perceptron to recognize each database. Four versions per sign and digit were used in the training phase. Recognition rates of 93% and 95% were achieved for Mexican sign language and digits, respectively.
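Normalized central moments of the kind used as features here are straightforward to compute: η_pq = μ_pq / μ_00^(1+(p+q)/2), which makes the features invariant to translation and scale. A minimal numpy sketch; the paper's exact 42-feature set is not specified, so the moment orders below are illustrative:

```python
import numpy as np

def normalized_central_moments(img, orders):
    """Normalized central moments eta_pq of a 2-D grayscale image."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()                       # zeroth moment (total intensity)
    xbar = (x * img).sum() / m00          # intensity centroid
    ybar = (y * img).sum() / m00
    feats = []
    for p, q in orders:
        mu_pq = (((x - xbar) ** p) * ((y - ybar) ** q) * img).sum()
        feats.append(mu_pq / m00 ** (1 + (p + q) / 2.0))
    return feats

# Translation invariance check on a toy blob shifted within the frame:
blob = np.zeros((20, 20)); blob[5:9, 5:9] = 1.0
shifted = np.zeros((20, 20)); shifted[10:14, 11:15] = 1.0
orders = [(2, 0), (1, 1), (0, 2), (3, 0)]
a = normalized_central_moments(blob, orders)
b = normalized_central_moments(shifted, orders)
print(np.allclose(a, b))  # True: the features ignore where the blob sits
```

In practice a segmented hand silhouette would replace the toy blob, and the feature vector would feed the Multi-Layer Perceptron described above.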
Hazardous Waste Cleanup: TAPI Puerto Rico Incorporated in Guayama, Puerto Rico
The TAPI facility is located on the southeastern coastal plain of Puerto Rico. The facility is about 1.1 miles north of the Caribbean Sea and 3.5 miles south of the foothills of the Cordillera Central Mountains. The Town of Guayama is located approximately
18 CFR 367.1850 - Account 185, Temporary facilities.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Account 185, Temporary facilities. 367.1850 Section 367.1850 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY... POWER ACT AND NATURAL GAS ACT UNIFORM SYSTEM OF ACCOUNTS FOR CENTRALIZED SERVICE COMPANIES SUBJECT TO...
Report of the National Libraries Committee.
ERIC Educational Resources Information Center
Department of Education and Science, London (England).
The study was undertaken to examine the functions and organization of the British Museum Library, the National Central Library, the National Lending Library for Science and Technology, and the Science Museum Library in providing national library facilities; to consider whether in the interests of efficiency and economy such facilities should be…
12 CFR 725.17 - Applications for extensions of credit.
Code of Federal Regulations, 2010 CFR
2010-01-01
... NATIONAL CREDIT UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.17 Applications for extensions of credit. (a) A Regular member may apply for a Facility advance to meet its liquidity needs by filing an... Agent by its member natural person credit unions for pending loans to meet liquidity needs; or (ii...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Capital stock. 725.5 Section 725.5 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS NATIONAL CREDIT UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.5 Capital stock. (a) The capital stock of the Facility is divided...
Overview taken from the corner of Avenue E (Russell Avenue) ...
Overview taken from the corner of Avenue E (Russell Avenue) and Central Avenue. Facility 167 in background. View facing northwest - U.S. Naval Base, Pearl Harbor, Administration Annex, Near Russell Avenue (previously Avenue E), between of Facility Nos. 1C & 1E , Pearl City, Honolulu County, HI
ERIC Educational Resources Information Center
Kaleba, Frank
2008-01-01
The central problem for the facility manager of large portfolios is not the accuracy of data, but rather data integrity. Data integrity means that it's (1) acceptable to the users; (2) based upon an objective source; (3) reproducible; and (4) internally consistent. Manns and Katsinas, in their January/February 2006 Facilities Manager article…
ERIC Educational Resources Information Center
WITMER, DAVID R.
Wisconsin state universities have been using the computer as a management tool to study physical facilities inventories, space utilization, and enrollment and plant projections. Examples are shown graphically and described for different types of analysis, showing the card format, coding systems, and printout. Equations are provided for determining…
Artificial intelligence issues related to automated computing operations
NASA Technical Reports Server (NTRS)
Hornfeck, William A.
1989-01-01
Large data processing installations represent target systems for effective applications of artificial intelligence (AI) constructs. The system organization of a large data processing facility at the NASA Marshall Space Flight Center is presented. The methodology and the issues which are related to AI application to automated operations within a large-scale computing facility are described. Problems to be addressed and initial goals are outlined.
120. Back side technical facilities S.R. radar transmitter & computer ...
120. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "foundation & first floor plan" - structural, AS-BLT AW 35-46-04, sheet 65, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
119. Back side technical facilities S.R. radar transmitter & computer ...
119. Back side technical facilities S.R. radar transmitter & computer building no. 102, section I "tower plan, sections & details" - structural, AS-BLT AW 35-46-04, sheet 62, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
McGuire, Megan; Pinoges, Loretxu; Kanapathipillai, Rupa; Munyenyembe, Tamika; Huckabee, Martha; Makombe, Simon; Szumilin, Elisabeth; Heinzelmann, Annette; Pujades-Rodríguez, Mar
2012-01-01
To describe patient combination antiretroviral therapy (cART) outcomes associated with intensive decentralization of services in a rural HIV program in Malawi. Longitudinal analysis of data from HIV-infected patients starting cART between August 2001 and December 2008 and of a cross-sectional immunovirological assessment conducted 12 (±2) months after therapy start. One-year mortality, lost to follow-up, and attrition (deaths and lost to follow-up) rates were estimated with exact Poisson 95% confidence intervals (CI) by type of care delivery and year of initiation. Association of virological suppression (<50 copies/mL) and immunological success (CD4 gain ≥100 cells/µL) with type of care was investigated using multiple logistic regression. During the study period, 4322 cART patients received centralized care and 11,090 received decentralized care. At therapy start, patients treated in decentralized health facilities had higher median CD4 count levels (167 vs. 130 cells/µL, P<0.0001) than other patients. Two years after cART start, program attrition was lower in decentralized than centralized facilities (9.9 per 100 person-years, 95% CI: 9.5-10.4 vs. 20.8 per 100 person-years, 95% CI: 19.7-22.0). One year after treatment start, differences in immunological success (adjusted OR=1.23, 95% CI: 0.83-1.83) and viral suppression (adjusted OR=0.80, 95% CI: 0.56-1.14) between patients followed at centralized and decentralized facilities were not statistically significant. In rural Malawi, 1- and 2-year program attrition was lower in decentralized than in centralized health facilities, and no statistically significant differences in one-year immunovirological outcomes were observed between the two health care levels. Longer follow-up is needed to confirm these results.
33 CFR 106.305 - Facility Security Assessment (FSA) requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., including computer systems and networks; (vi) Existing agreements with private security companies; (vii) Any... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Facility Security Assessment (FSA... SECURITY MARITIME SECURITY MARINE SECURITY: OUTER CONTINENTAL SHELF (OCS) FACILITIES Outer Continental...
33 CFR 106.305 - Facility Security Assessment (FSA) requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., including computer systems and networks; (vi) Existing agreements with private security companies; (vii) Any... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Facility Security Assessment (FSA... SECURITY MARITIME SECURITY MARINE SECURITY: OUTER CONTINENTAL SHELF (OCS) FACILITIES Outer Continental...
33 CFR 106.305 - Facility Security Assessment (FSA) requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., including computer systems and networks; (vi) Existing agreements with private security companies; (vii) Any... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Facility Security Assessment (FSA... SECURITY MARITIME SECURITY MARINE SECURITY: OUTER CONTINENTAL SHELF (OCS) FACILITIES Outer Continental...
33 CFR 106.305 - Facility Security Assessment (FSA) requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., including computer systems and networks; (vi) Existing agreements with private security companies; (vii) Any... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Facility Security Assessment (FSA... SECURITY MARITIME SECURITY MARINE SECURITY: OUTER CONTINENTAL SHELF (OCS) FACILITIES Outer Continental...
2016-03-23
cleaned so that they are free of dust, dirt, lint and human waste, and trash.” However, the contract did not explicitly state that the facilities...be free of mold/mildew. ACC–RI and ARCENT should review and modify the basic life support services contract, as necessary, to include measures...Responsibility, “The Sand Book,” July 18, 2014. 16 Unified Facility Criteria 1-202-01, “Host Nation Facilities in Support of Military Operations,” September 1
The Hydrologic Instrumentation Facility of the U.S. Geological Survey
Wagner, C.R.; Jeffers, Sharon
1984-01-01
The U.S. Geological Survey Water Resources Division has improved support to the agency's field offices by consolidating all instrumentation support services in a single facility. This facility, known as the Hydrologic Instrumentation Facility (HIF), is located at the National Space Technology Laboratory, Mississippi, about 50 miles east of New Orleans, Louisiana. The HIF is responsible for design and development, testing, evaluation, procurement, warehousing, distribution, and repair of a variety of specialized hydrologic instrumentation. The centralization has resulted in more efficient and effective support of the Survey's hydrologic programs. (USGS)
NASA Center for Computational Sciences: History and Resources
NASA Technical Reports Server (NTRS)
2000-01-01
The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.
Reasons for Discontinuing Hashish Use in a Group of Central European Athletes.
ERIC Educational Resources Information Center
Duncan, David F.
1988-01-01
Examined self-reported reasons for discontinuing marijuana use among 61 former marijuana-using students at a central European sports training facility. The most common reasons given for discontinuing marijuana use were dislike of effects, athletic training regimen, health reasons, and mental/emotional problems. (Author/NB)
View of first level from east looking at the central ...
View of first level from east looking at the central bay. Interstitial structure is in the foreground center, main structure is in background left and right of view. - Marshall Space Flight Center, Saturn V Dynamic Test Facility, East Test Area, Huntsville, Madison County, AL
The Education Value of Cloud Computing
ERIC Educational Resources Information Center
Katzan, Harry, Jr.
2010-01-01
Cloud computing is a technique for supplying computer facilities and providing access to software via the Internet. Cloud computing represents a contextual shift in how computers are provisioned and accessed. One of the defining characteristics of cloud software service is the transfer of control from the client domain to the service provider.…
Writing Apprehension, Computer Anxiety and Telecomputing: A Pilot Study.
ERIC Educational Resources Information Center
Harris, Judith; Grandgenett, Neal
1992-01-01
A study measured graduate students' writing apprehension and computer anxiety levels before and after using electronic mail, computer conferencing, and remote database searching facilities during an educational technology course. Results indicated postcourse computer anxiety levels were significantly related to usage statistics. Precourse writing…
Los Alamos Plutonium Facility Waste Management System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, K.; Montoya, A.; Wieneke, R.
1997-02-01
This paper describes the new computer-based transuranic (TRU) Waste Management System (WMS) being implemented at the Plutonium Facility at Los Alamos National Laboratory (LANL). The Waste Management System is a distributed computer processing system stored in a Sybase database and accessed by a graphical user interface (GUI) written in Omnis7. It resides on the local area network at the Plutonium Facility and is accessible by authorized TRU waste originators, count room personnel, radiation protection technicians (RPTs), quality assurance personnel, and waste management personnel for data input and verification. Future goals include bringing outside groups like the LANL Waste Management Facility on-line to participate in this streamlined system. The WMS is changing the TRU paper trail into a computer trail, saving time and eliminating errors and inconsistencies in the process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Michael G.
This quality assurance project plan describes the technical requirements and quality assurance activities of the environmental data collection and analysis operations to close Central Facilities Area Sewage Treatment Plant Lagoon 3 and the land application area. It describes the organization and persons involved, the data quality objectives, the analytical procedures, and the specific quality control measures to be employed. All quality assurance project plan activities are implemented to determine whether the results of the sampling and monitoring performed are of the right type, quantity, and quality to satisfy the requirements for closing Lagoon 3 and the land application area.
The Organization and Evaluation of a Computer-Assisted, Centralized Immunization Registry.
ERIC Educational Resources Information Center
Loeser, Helen; And Others
1983-01-01
Evaluation of a computer-assisted, centralized immunization registry after one year shows that 93 percent of eligible health practitioners initially agreed to provide data and that 73 percent continue to do so. Immunization rates in audited groups have improved significantly. (GC)
2004-02-19
KENNEDY SPACE CENTER, FLA. - KSC Director Jim Kennedy makes a presentation to NASA and other officials about the benefits of locating NASA’s new Shared Services Center in the Central Florida Research Park, near Orlando. At the far left is Pamella J. Dana, Ph.D., director, Office of Tourism, Trade, and Economic Development in Florida. Central Florida leaders are proposing the research park as the site for the NASA Shared Services Center. The center would centralize NASA’s payroll, accounting, human resources, facilities and procurement offices that are now handled at each field center. The consolidation is part of the One NASA focus. Six sites around the U.S. are under consideration by NASA.
NASA Technical Reports Server (NTRS)
1979-01-01
A comprehensive review of all NASA airfoil research, conducted both in-house and under grant and contract, as well as a broad spectrum of airfoil research outside of NASA is presented. Emphasis is placed on the development of computational aerodynamic codes for airfoil analysis and design, the development of experimental facilities and test techniques, and all types of airfoil applications.
Continuous optical monitoring of a near-shore sea-water column
NASA Astrophysics Data System (ADS)
Bensky, T. J.; Neff, B.
2006-12-01
Cal Poly San Luis Obispo runs the Central Coast Marine Sciences Center, a south-facing, 1-km-long pier in San Luis Bay on the west coast of California, midway between Los Angeles and San Francisco. The facility is secure and dedicated to marine science research. We have constructed an automated optical profiling system that collects sunlight samples, in half-foot increments, from a 30-foot vertical column of sea-water below the pier. Our implementation lowers a high-quality, optically pure fiber cable into the water at 30-minute intervals. Light collected by the submersed fiber aperture is routed to the pier surface, where it is spectrally analyzed using an Ocean Optics HR2000 spectrometer. The spectrometer instantly yields the spectrum of the light collected at a given depth. The "spectrum" here is light intensity as a function of wavelength between 200 and 1100 nm in increments of 0.1 nm. Each dive of the instrument takes approximately 80 seconds, lowers the fiber from the surface to a depth of 30 feet, and yields approximately 60 spectra, each taken at a successively larger depth. A computer logs each spectrum as a function of depth. From such data, we are able to extract total downward photon flux, quantify ocean color, and compute attenuation coefficients. The system is entirely autonomous, includes an integrated data browser, and can be checked on, or even controlled over the Internet, using a web browser. Linux runs the computer, data is logged directly to a mySQL database for easy extraction, and a PHP script ties the system together. Current work involves studying light-energy deposition trends and effects of surface action on downward photon flux. This work has been funded by the Office of Naval Research (ONR) and the California Central Coast Research Park Initiative (C3RP).
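Attenuation coefficients like those mentioned above are conventionally obtained from the depth decay of downward irradiance, E(z) = E(0)·exp(-Kd·z), so Kd at each wavelength is the negative slope of ln E against depth. A minimal sketch with synthetic, noiseless data; the station's actual processing chain is not described in the abstract:

```python
import numpy as np

# Depth grid matching the abstract: half-foot increments over 30 feet.
depths_ft = np.arange(0.0, 30.5, 0.5)
depths_m = depths_ft * 0.3048

# Synthetic single-wavelength intensity profile with an assumed Kd.
Kd_true = 0.25                                # 1/m, illustrative value
E = 1000.0 * np.exp(-Kd_true * depths_m)      # counts at each depth

# Fit ln(E) vs depth; the slope recovers -Kd.
slope, intercept = np.polyfit(depths_m, np.log(E), 1)
Kd_est = -slope
print(round(Kd_est, 3))  # 0.25
```

With real spectrometer data the same fit would be repeated per wavelength bin (and with noise, a robust fit over a restricted depth range would be more appropriate).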
Taylor, Michael J; Taylor, Dave; Vlaev, Ivo; Elkin, Sarah
2017-01-01
Recent advances in communication technologies enable potential provision of remote education for patients using computer-generated environments known as virtual worlds. Previous research has revealed highly variable levels of patient receptiveness to using information technologies for healthcare-related purposes. This preliminary study involved implementing a questionnaire investigating attitudes toward and access to computer technologies among respiratory outpatients, in order to assess the potential for using virtual worlds to facilitate health-related education for this sample. Ninety-four patients with a chronic respiratory condition completed surveys, which were distributed at a Chest Clinic. In accordance with our prediction, younger participants were more likely to be able to use, and have access to, a computer, and some patients were keen to explore using virtual worlds for healthcare-related purposes: of those with access to computer facilities, 14.50% expressed a willingness to attend a virtual world focus group. Results indicate future virtual world health education facilities should be designed to cater for younger patients, because this group is most likely to accept and use such facilities. Within the study sample, this is likely to comprise people diagnosed with asthma. Future work could investigate the potential of creating a virtual world asthma education facility.
Atmospheric concentrations of polybrominated diphenyl ethers at near-source sites.
Cahill, Thomas M; Groskova, Danka; Charles, M Judith; Sanborn, James R; Denison, Michael S; Baker, Lynton
2007-09-15
Concentrations of polybrominated diphenyl ethers (PBDEs) were determined in air samples collected near suspected sources, namely an indoor computer laboratory, indoors and outdoors at an electronics recycling facility, and outdoors at an automotive shredding and metal recycling facility. The results showed that (1) PBDE concentrations in the computer laboratory were higher with the computers on than with the computers off, (2) indoor concentrations at the electronics recycling facility were as high as 650,000 pg/m3 for decabromodiphenyl ether (PBDE 209), and (3) PBDE 209 concentrations were up to 1900 pg/m3 at the downwind fenceline of the automotive shredding/metal recycling facility. The inhalation exposure estimates for all the sites were typically below 110 pg/kg/day, with the exception of the indoor air samples adjacent to the electronics shredding equipment, which gave exposure estimates upward of 40,000 pg/kg/day. Although there were elevated inhalation exposures at the three source sites, the exposure was not expected to cause adverse health effects based on the lowest reference dose (RfD) currently in the Integrated Risk Information System (IRIS), although these RfD values are currently being re-evaluated by the U.S. Environmental Protection Agency. More research is needed on the potential health effects of PBDEs.
Coal-Fired Boilers at Navy Bases, Navy Energy Guidance Study, Phase II and III.
1979-05-01
several sizes were performed. Central plants containing four equal-sized boilers and central flue gas desulfurization facilities were shown to be less...Conceptual design and parametric cost studies of steam and power generation systems using coal-fired stoker boilers and stack gas scrubbers in
12 CFR 725.4 - Agent membership.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Agent membership. 725.4 Section 725.4 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS NATIONAL CREDIT UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.4 Agent membership. (a) A central credit union or a group of...
Central Libraries in Uncertain Times.
ERIC Educational Resources Information Center
Kenney, Brian J.
2001-01-01
Discusses security and safety issues for public libraries, especially high-profile central facilities, in light of the September 11 terrorist attacks. Highlights include inspecting bags as patrons enter as well as exit; the need for security guidelines for any type of disaster or emergency; building design; and the importance of communication.…
Chibuye, Peggy S; Bazant, Eva S; Wallon, Michelle; Rao, Namratha; Fruhauf, Timothee
2018-01-25
Luapula Province has the highest maternal mortality and one of the lowest rates of facility-based birth in Zambia. The distance to facilities limits facility-based births for women in rural areas. In 2013, the government incorporated maternity homes into the health system at the community level to increase facility-based births and reduce maternal mortality. To examine experiences with maternity homes, formative research was undertaken in four districts of Luapula Province to assess women's and communities' needs, use patterns, collaboration between maternity homes, facilities and communities, and promising practices and models in Central and Lusaka Provinces. A cross-sectional, mixed-methods design was used. In Luapula Province, qualitative data were collected through 21 focus group discussions with 210 pregnant women, mothers, elderly women, and Safe Motherhood Action Groups (SMAGs) and 79 interviews with health workers, traditional leaders, couples and partner agency staff. Health facility assessment tools, service abstraction forms and registers from 17 facilities supplied quantitative data. Additional qualitative data were collected from 26 SMAGs and 10 health workers in Central and Lusaka Provinces to contextualise findings. Qualitative transcripts were analysed thematically using Atlas-ti. Quantitative data were analysed descriptively using Stata. Women who used maternity homes recognized the advantages of facility-based births. However, women and community groups requested better infrastructure, services, food, security, privacy, and transportation. SMAGs led the construction of maternity homes and advocated the benefits to women and communities in collaboration with health workers, but management responsibilities of the homes remained unassigned to SMAGs or staff. Community norms often influenced women's decisions to use maternity homes.
Successful maternity homes in Central Province also relied on SMAGs for financial support, but the sustainability of these models was not certain. Women and communities in the selected facilities accept and value maternity homes. However, interventions are needed to address women's needs for better infrastructure, services, food, security, privacy and transportation. Strengthening relationships between the managers of the homes and their communities can serve as the foundation to meet the needs and expectations of pregnant women. Particular attention should be paid to ensuring that maternity homes meet quality standards and remain sustainable.
Providing security for automated process control systems at hydropower engineering facilities
NASA Astrophysics Data System (ADS)
Vasiliev, Y. S.; Zegzhda, P. D.; Zegzhda, D. P.
2016-12-01
This article suggests the concept of a cyberphysical system to manage computer security of automated process control systems at hydropower engineering facilities. According to the authors, this system consists of a set of information processing tools and computer-controlled physical devices. Examples of cyber attacks on power engineering facilities are provided, and a strategy of improving cybersecurity of hydropower engineering systems is suggested. The architecture of the multilevel protection of the automated process control system (APCS) of power engineering facilities is given, including security systems, control systems, access control, encryption, secure virtual private network of subsystems for monitoring and analysis of security events. The distinctive aspect of the approach is consideration of interrelations and cyber threats, arising when SCADA is integrated with the unified enterprise information system.
Controlling Infrastructure Costs: Right-Sizing the Mission Control Facility
NASA Technical Reports Server (NTRS)
Martin, Keith; Sen-Roy, Michael; Heiman, Jennifer
2009-01-01
Johnson Space Center's Mission Control Center is a space vehicle, space program agnostic facility. The current operational design is essentially identical to the original facility architecture that was developed and deployed in the mid-1990s. In an effort to streamline the support costs of the mission critical facility, the Mission Operations Division (MOD) of Johnson Space Center (JSC) has sponsored an exploratory project to evaluate and inject current state-of-the-practice Information Technology (IT) tools, processes and technology into legacy operations. The general push in the IT industry has been trending towards a data-centric computer infrastructure for the past several years. Organizations facing challenges with facility operations costs are turning to creative solutions combining hardware consolidation, virtualization and remote access to meet and exceed performance, security, and availability requirements. The Operations Technology Facility (OTF) organization at the Johnson Space Center has been chartered to build and evaluate a parallel Mission Control infrastructure, replacing the existing, thick-client distributed computing model and network architecture with a data center model utilizing virtualization to provide the MCC Infrastructure as a Service. The OTF will design a replacement architecture for the Mission Control Facility, leveraging hardware consolidation through the use of blade servers, increasing utilization rates for compute platforms through virtualization while expanding connectivity options through the deployment of secure remote access. The architecture demonstrates the maturity of the technologies generally available in industry today and the ability to successfully abstract the tightly coupled relationship between thick-client software and legacy hardware into a hardware agnostic "Infrastructure as a Service" capability that can scale to meet future requirements of new space programs and spacecraft.
This paper discusses the benefits and difficulties that a migration to cloud-based computing philosophies has uncovered when compared to the legacy Mission Control Center architecture. The team consists of system and software engineers with extensive experience with the MCC infrastructure and software currently used to support the International Space Station (ISS) and Space Shuttle program (SSP).
IMPLEMENTATION OF USEPA'S METAL FINISHING FACILITY POLLUTION PREVENTION TOOL (MFFP2T) - 2003
To help metal finishing facilities meet the goal of profitable pollution prevention, the USEPA is developing the Metal Finishing Facility Pollution Prevention Tool (MFFP2T), a computer program that estimates the rate of solid, liquid waste generation and air emissions. This progr...
Do Poor Students Benefit from China's Merger Program? Transfer Path and Educational Performance
ERIC Educational Resources Information Center
Chen, Xinxin; Yi, Hongmei; Zhang, Linxiu; Mo, Di; Chu, James; Rozelle, Scott
2014-01-01
Aiming to provide better education facilities and improve the educational attainment of poor rural students, China's government has been merging remote rural primary schools into centralized village, town, or county schools since the late 1990s. To accompany the policy, boarding facilities have been constructed that allow (mandate) primary…
ICD Complex Operations and Maintenance Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibson, P. L.
2007-06-25
This Operations and Maintenance (O&M) Plan describes how the Idaho National Laboratory (INL) conducts operations, winterization, and startup of the Idaho CERCLA Disposal Facility (ICDF) Complex. The ICDF Complex is the centralized INL facility responsible for the receipt, storage, treatment (as necessary), and disposal of INL Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) remediation waste.
The Development of a Measure of the Parenting Alliance.
ERIC Educational Resources Information Center
Abidin, Richard R.; Brunner, John F.
The Parenting Alliance Inventory (PAI) was administered to 186 mothers and 75 fathers with a wide range of socioeconomic backgrounds who had at least one child between 2 and 6 years of age. Subjects were recruited from child care facilities, pediatric practices, and public recreational facilities in central Virginia. Extrafamilial child caregivers…
1. VIEW LOOKING SOUTH AT BUILDING 771 UNDER CONSTRUCTION. BUILDING ...
1. VIEW LOOKING SOUTH AT BUILDING 771 UNDER CONSTRUCTION. BUILDING 771 WAS ONE OF THE FIRST FOUR MAJOR BUILDINGS AT THE ROCKY FLATS PLANT AND WAS ORIGINALLY THE PRIMARY FACILITY FOR PLUTONIUM OPERATIONS. (5/29/52) - Rocky Flats Plant, Plutonium Recovery & Fabrication Facility, North-central section of plant, Golden, Jefferson County, CO
Many recent pilot tests have demonstrated the benefits and cost effectiveness of point-of-use treatment technologies as opposed to centralized wastewater treatment for all sizes of plating facilities. A 9-month case study at a small plating facility in Cincinnati, OH utilizing po...
Long Range Development Plan, University of California, San Diego, October 1963.
ERIC Educational Resources Information Center
Alexander, Robert E.
The academic and physical development plans of the University of California at San Diego are outlined. Facilities for 27,500 anticipated students are divided into twelve colleges of about 2300 students each. The twelve colleges are arranged into three clusters of four each, grouped around the central academic and administrative facilities, in…
EPA Facility Registry Service (FRS): CERCLIS
This data provides location and attribute information on facilities regulated under the Comprehensive Environmental Response, Compensation, and Liability Information System (CERCLIS) for an intranet web feature service. The data provided in this service are obtained from EPA's Facility Registry Service (FRS). The FRS is an integrated source of comprehensive (air, water, and waste) environmental information about facilities, sites, or places. This service connects directly to the FRS database to provide this data as a feature service. FRS creates high-quality, accurate, and authoritative facility identification records through rigorous verification and management procedures that incorporate information from program national systems, state master facility records, data collected from EPA's Central Data Exchange registrations, and data management personnel. Additional information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs.
Performance Assessment Institute-NV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lombardo, Joseph
2012-12-31
The National Supercomputing Center for Energy and the Environment intends to purchase a multi-purpose computer cluster in support of the Performance Assessment Institute (PA Institute). The PA Institute will serve as a research consortium located in Las Vegas, Nevada, with membership that includes national laboratories, universities, industry partners, and domestic and international governments. This center will provide a one-of-a-kind centralized facility for the accumulation of information for use by institutions of higher learning, the U.S. Government, regulatory agencies, and approved users. This initiative will enhance and extend High Performance Computing (HPC) resources in Nevada to support critical national and international needs in "scientific confirmation". The PA Institute will be promoted as the leading modeling, learning, and research center worldwide. The program proposes to utilize the existing supercomputing capabilities and alliances of the University of Nevada, Las Vegas as a base, and to extend these resources and capabilities through a collaborative relationship with its membership. The PA Institute will provide an academic setting for interactive sharing, learning, mentoring, and monitoring of multi-disciplinary performance assessment and performance confirmation information. The role of the PA Institute is to facilitate research, knowledge-increase, and knowledge-sharing among users.
Use of imagery and GIS for humanitarian demining management
NASA Astrophysics Data System (ADS)
Gentile, Jack; Gustafson, Glen C.; Kimsey, Mary; Kraenzle, Helmut; Wilson, James; Wright, Stephen
1997-11-01
In the Fall of 1996, the Center for Geographic Information Science at James Madison University became involved in a project for the Department of Defense evaluating the data needs and data management systems for humanitarian demining in the Third World. In particular, the effort focused on the information needs of demining in Cambodia and in Bosnia. In the first phase of the project one team attempted to identify all sources of unclassified country data, image data and map data. Parallel with this, another group collected information and evaluations on most of the commercial off-the-shelf computer software packages for the management of such geographic information. The result was a design for the kinds of data and the kinds of systems necessary to establish and maintain such a database as a humanitarian demining management tool. The second phase of the work involved acquiring the recommended data and systems, integrating the two, and producing a demonstration of the system. In general, the configuration involves ruggedized portable computers for field use with a greatly simplified graphical user interface, supported by a more capable central facility based on Pentium workstations and appropriate technical expertise.
Kernodle, J.M.
1996-01-01
This report presents the computer input files required to run the three-dimensional ground-water-flow model of the Albuquerque Basin, central New Mexico, documented in Kernodle and others (Kernodle, J.M., McAda, D.P., and Thorn, C.R., 1995, Simulation of ground-water flow in the Albuquerque Basin, central New Mexico, 1901-1994, with projections to 2020: U.S. Geological Survey Water-Resources Investigations Report 94-4251, 114 p.). Output files resulting from the computer simulations are included for reference.
Composite analysis E-area vaults and saltstone disposal facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, J.R.
1997-09-01
This report documents the Composite Analysis (CA) performed on the two active Savannah River Site (SRS) low-level radioactive waste (LLW) disposal facilities. The facilities are the Z-Area Saltstone Disposal Facility and the E-Area Vaults (EAV) Disposal Facility. The analysis calculated potential releases to the environment from all sources of residual radioactive material expected to remain in the General Separations Area (GSA). The GSA is the central part of SRS and contains all of the waste disposal facilities, chemical separations facilities, and associated high-level waste storage facilities, as well as numerous other sources of radioactive material. The analysis considered 114 potential sources of radioactive material containing 115 radionuclides. The results of the CA clearly indicate that continued disposal of low-level waste in the saltstone and EAV facilities, consistent with their respective radiological performance assessments, will have no adverse impact on future members of the public.
NASA Astrophysics Data System (ADS)
Sabaibang, S.; Lekchaum, S.; Tipayakul, C.
2015-05-01
This study is part of on-going work to develop a computational model of the Thai Research Reactor (TRR-1/M1) capable of accurately predicting the neutron flux level and spectrum. The computational model was created with the MCNPX program, and the CT (Central Thimble) in-core irradiation facility was selected as the location for validation. The comparison was performed with the typical flux measurement method routinely practiced at TRR-1/M1, that is, the foil activation technique. In this technique, gold foil is irradiated for a certain period of time and the activity of the irradiated target is measured to derive the thermal neutron flux. Additionally, flux measurement with an SPND (self-powered neutron detector) was also performed for comparison. The thermal neutron flux from the MCNPX simulation was found to be 1.79×10^13 neutron/cm^2·s, while that from the foil activation measurement was 4.68×10^13 neutron/cm^2·s. On the other hand, the thermal neutron flux from the measurement using the SPND was 2.47×10^13 neutron/cm^2·s. An assessment of the differences among the three methods was made. The difference of the MCNPX result from the foil activation technique was found to be 67.8%, and the difference of the MCNPX result from the SPND was found to be 27.8%.
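A quick sketch of the kind of comparison the abstract reports. The convention used here (absolute difference relative to the measured value) is an assumption; the paper's own percentages may follow a different convention, so these numbers need not reproduce the quoted 67.8% and 27.8%.

```python
# Relative differences between the three thermal-flux estimates quoted
# above (units: neutron/cm^2/s). The normalization (difference relative
# to the measured value) is an assumed convention, not the paper's.
def rel_diff_pct(model, measured):
    return abs(model - measured) / measured * 100.0

mcnpx = 1.79e13   # MCNPX simulation
foil  = 4.68e13   # gold-foil activation measurement
spnd  = 2.47e13   # self-powered neutron detector

print(round(rel_diff_pct(mcnpx, foil), 1))  # 61.8 under this convention
print(round(rel_diff_pct(mcnpx, spnd), 1))  # 27.5 under this convention
```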
Experiments in Computing: A Survey
Tedre, Matti; Moisseinen, Nella
2014-01-01
Experiments play a central role in science. The role of experiments in computing is, however, unclear. Questions about the relevance of experiments in computing attracted little attention until the 1980s. As the discipline then saw a push towards experimental computer science, a variety of technically, theoretically, and empirically oriented views on experiments emerged. As a consequence of those debates, today's computing fields use experiments and experiment terminology in a variety of ways. This paper analyzes experimentation debates in computing. It presents five ways in which debaters have conceptualized experiments in computing: feasibility experiment, trial experiment, field experiment, comparison experiment, and controlled experiment. This paper has three aims: to clarify experiment terminology in computing; to contribute to disciplinary self-understanding of computing; and, due to computing's centrality in other fields, to promote understanding of experiments in modern science in general. PMID:24688404
Performance evaluation of molten salt thermal storage systems
NASA Astrophysics Data System (ADS)
Kolb, G. J.; Nikolai, U.
1987-09-01
The molten salt thermal storage system located at the Central Receiver Test Facility (CRTF) was recently subjected to thermal performance tests. The system is composed of a hot storage tank containing molten nitrate salt at a temperature of 1050 F and a cold tank containing 550 F salt, with associated valves and controls. It is rated at 7 MWht and was designed and installed by Martin Marietta Corporation in 1982. The results of these tests were used to accomplish four objectives: (1) to compare the current thermal performance of the system with the performance of the system soon after it was installed, (2) to validate a dynamic computer model of the system, (3) to obtain an estimate of annual system efficiency for a hypothetical commercial-scale 1200 MWht system, and (4) to compare the performance of the CRTF system with thermal storage systems developed by the European solar community.
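A back-of-the-envelope sensible-heat check of the 7 MWht rating and the 1050 F / 550 F tank temperatures quoted above. The nitrate-salt heat capacity (~1.5 kJ/kg·K) is an assumed textbook value, not a figure from the report, so the resulting salt inventory is only an order-of-magnitude sketch.

```python
# Sensible heat: Q = m * cp * dT, solved for the salt mass m.
# cp ~ 1.5 kJ/kg.K for nitrate salt is an assumption, not from the report.
def salt_mass_kg(rating_mwh_t, t_hot_f, t_cold_f, cp_kj_per_kg_k=1.5):
    q_kj = rating_mwh_t * 3600.0 * 1000.0     # MWh(t) -> kJ
    dt_k = (t_hot_f - t_cold_f) * 5.0 / 9.0   # Fahrenheit span -> kelvin span
    return q_kj / (cp_kj_per_kg_k * dt_k)

mass = salt_mass_kg(7.0, 1050.0, 550.0)
print(round(mass / 1000.0, 1))  # approximate salt inventory in tonnes
```

The ~60-tonne result is consistent with a pilot-scale two-tank system; a 1200 MWht commercial system would scale the inventory by roughly the ratio of ratings.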
NASA Technical Reports Server (NTRS)
Rogers, R. H. (Principal Investigator)
1982-01-01
A program established by NASA with the Environmental Research Institute of Michigan (ERIM) applies a network where the major participants are NASA, universities or research institutes, community colleges, and local private and public organizations. Local users are given an opportunity to obtain "hands on" training in LANDSAT data analysis and Geographic Information System (GIS) techniques using a desk top, interactive remote analysis station (RAS). The RAS communicates with a central computing facility via telephone line, and provides for generation of land use and land suitability maps and other data products via remote command. During the period from 22 September 1980 - 6 March 1982, 15 workshops and other training activities were successfully conducted throughout Michigan providing hands on training on the RAS terminals for 250 or more people and user awareness activities such as exhibits and demonstrations for 2,000 or more participants.
Spatially Characterizing Effective Timber Supply
NASA Technical Reports Server (NTRS)
Berry, J. K.; Sailor, J.
1982-01-01
The structure of a computer-oriented cartographic model for assessing roundwood supply for generation of base-load electricity is discussed. The model provides an analytical procedure for coupling spatial information on harvesting economics and owner willingness to sell stumpage. Supply is characterized in terms of standing timber; of accessibility, considering various harvesting and hauling factors; and of availability, as affected by ownership and residential patterns. Factors governing accessibility to timber include effective harvesting distance to haul roads as modified by barriers and slopes. Haul distance is expressed in units that take into account the relative ease of travel along various road types to a central processing facility. Areas of accessible timber are grouped into spatial units, termed 'timbersheds', of common access to particular haul road segments that belong to unique 'transport zones'. Timber availability considerations include size of ownership parcels, housing density, and excluded areas. The analysis techniques are demonstrated for a cartographic data base in western Massachusetts.
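The "effective haul distance" idea above can be sketched as a shortest-path computation in which each road segment's length is weighted by a relative ease-of-travel factor for its road type. The graph, road types, and factors below are all illustrative assumptions, not values from the study.

```python
import heapq

# Illustrative ease-of-travel multipliers per road type (assumptions).
EASE = {"paved": 1.0, "gravel": 1.5, "skid_trail": 3.0}

def effective_distance(edges, facility):
    """Dijkstra over an undirected road graph.
    edges: (node_a, node_b, miles, road_type) tuples.
    Returns {node: effective miles to the central processing facility}."""
    graph = {}
    for a, b, miles, kind in edges:
        w = miles * EASE[kind]
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))
    dist, heap = {facility: 0.0}, [(0.0, facility)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return dist

edges = [("mill", "jct", 4.0, "paved"), ("jct", "stand", 2.0, "gravel")]
d = effective_distance(edges, "mill")
print(d["stand"])  # 4*1.0 + 2*1.5 = 7.0 effective miles
```

Grouping cells by which haul-road segment gives them their minimum effective distance is then exactly the "timbershed" partition the abstract describes.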
PhoneSat - The Smartphone Nanosatellite
NASA Technical Reports Server (NTRS)
Westley, Deborah; Yost, Bruce; Petro, Andrew
2013-01-01
PhoneSat 2.4, carried into space on November 19, 2013 aboard a Minotaur I rocket from the Mid-Atlantic Regional Spaceport at NASA's Wallops Flight Facility in Virginia, is the first of the PhoneSat family to use a two-way S-band radio to allow engineers to command the satellite from Earth. This mission also serves as a technology demonstration for a novel attitude determination and control system (ADCS) that establishes and stabilizes the satellite's attitude relative to Earth. Unlike the earlier PhoneSats that used a Nexus One, PhoneSat 2.4 uses the Nexus S smartphone, which runs Google's Android operating system and is made by Samsung Electronics Co., Suwon, South Korea. The smartphone provides many of the functions needed by the satellite, such as a central computer, data memory, and ready-made interfaces for communications, navigation, and power, all pre-assembled in a rugged electronics package.
NASA Technical Reports Server (NTRS)
1987-01-01
Every U.S. municipality must determine how much waste water it is processing and more importantly, how much is going unprocessed into lakes and streams either because of leaks in the sewer system or because the city's sewage facilities were getting more sewer flow than they were designed to handle. ADS Environmental Services, Inc.'s development of the Quadrascan Flow Monitoring System met the need for an accurate method of data collection. The system consists of a series of monitoring sensors and microcomputers that continually measure water depth at particular sewer locations and report their findings to a central computer. This provides precise information to city managers on overall flow, flow in any section of the city, location and severity of leaks and warnings of potential overload. The core technology has been expanded upon in terms of both technical improvements, and functionality for new applications, including event alarming and control for critical collection system management problems.
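The alerting behavior described above (per-site depth readings compared against capacity, with warnings before overload) can be sketched as a simple threshold check. The thresholds, site names, and readings below are made-up illustrations, not ADS product logic.

```python
# Flag sewer sections nearing or exceeding capacity from depth readings.
# All thresholds and readings here are illustrative assumptions.
def overload_alerts(readings, thresholds, warn_fraction=0.9):
    """readings: {site: measured depth, in}; thresholds: {site: max depth, in}.
    Returns {site: 'warning' | 'overload'} for sites above warn_fraction."""
    alerts = {}
    for site, depth in readings.items():
        frac = depth / thresholds[site]
        if frac >= 1.0:
            alerts[site] = "overload"
        elif frac >= warn_fraction:
            alerts[site] = "warning"
    return alerts

alerts = overload_alerts({"A": 10.0, "B": 27.5, "C": 31.0},
                         {"A": 30.0, "B": 30.0, "C": 30.0})
print(alerts)  # {'B': 'warning', 'C': 'overload'}
```

A central computer polling many such monitors would run this check per reporting interval and localize leaks by comparing flow between adjacent sections.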
Grid of Supergiant B[e] Models from HDUST Radiative Transfer
NASA Astrophysics Data System (ADS)
Domiciano de Souza, A.; Carciofi, A. C.
2012-12-01
By using the Monte Carlo radiative transfer code HDUST (developed by A. C. Carciofi and J. E. Bjorkman) we have built a grid of models for stars presenting the B[e] phenomenon and a bimodal outflowing envelope. The models are particularly adapted to the study of B[e] supergiants and FS CMa type stars. The adopted physical parameters of the calculated models make the grid well suited to interpreting high angular resolution and high spectral resolution observations, in particular spectro-interferometric data from the ESO-VLTI instruments AMBER (near-IR at low and medium spectral resolution) and MIDI (mid-IR at low spectral resolution). The grid models include, for example, a central B star with different effective temperatures and a gas (hydrogen) and silicate-dust circumstellar envelope with a bimodal mass loss presenting dust in the denser equatorial regions. The HDUST grid models were pre-calculated using the high-performance parallel computing facility Mésocentre SIGAMM, located at OCA, France.
Aerobiology and Its Role in the Transmission of Infectious Diseases
Fernstrom, Aaron; Goldblatt, Michael
2013-01-01
Aerobiology plays a fundamental role in the transmission of infectious diseases. As infectious disease and infection control practitioners continue employing contemporary techniques (e.g., computational fluid dynamics to study particle flow, polymerase chain reaction methodologies to quantify particle concentrations in various settings, and epidemiology to track the spread of disease), the central variables affecting the airborne transmission of pathogens are becoming better known. This paper reviews many of these aerobiological variables (e.g., particle size, particle type, the duration that particles can remain airborne, the distance that particles can travel, and meteorological and environmental factors), as well as the common origins of these infectious particles. We then review several real-world settings with known difficulties controlling the airborne transmission of infectious particles (e.g., office buildings, healthcare facilities, and commercial airplanes), while detailing the respective measures each of these industries is undertaking in its effort to ameliorate the transmission of airborne infectious diseases. PMID:23365758
On-site or off-site treatment of medical waste: a challenge
2014-01-01
Treating hazardous-infectious medical waste can be carried out on-site or off-site of health-care establishments. Nevertheless, the choice between on-site and off-site locations for treating medical waste is sometimes controversial. Currently in Iran, owing to Health Ministry policies, hospitals have adopted on-site treatment as the preferred method. The objectives of this study were to assess the current condition of on-site medical waste treatment facilities, compare on-site medical waste treatment facilities with off-site systems, and find the best location for medical waste treatment. To assess the current on-site facilities, four provinces (and 40 active hospitals) were selected to participate in the survey. For the comparison of on-site and off-site facilities (owing to the non-availability of an installed off-site facility), the Analytical Hierarchy Process (AHP) was employed. The results indicated that most on-site medical waste treatment systems have problems in financing, planning, determining the capacity of installations, operation, and maintenance. AHP synthesis (with an inconsistency ratio of 0.01 < 0.1) revealed that, in total, off-site treatment of medical waste had much higher priority than on-site treatment (64.1% versus 35.9%). According to the results of the study, it was concluded that off-site central treatment can be considered as an alternative. An amendment could be made to Iran's current medical waste regulations to have infectious-hazardous waste sent to a central off-site installation for treatment. To begin and test this plan and also receive official approval, a central off-site facility could be put into practice, at least as a pilot in one province. Then, if practically successful, it could be expanded to other provinces and cities. PMID:24739145
EPA’s National Center for Computational Toxicology is engaged in high-profile research efforts to improve the ability to more efficiently and effectively prioritize and screen thousands of environmental chemicals for potential toxicity. A central component of these efforts invol...
NASA Technical Reports Server (NTRS)
Pirrello, C. J.; Hardin, R. D.; Capelluro, L. P.; Harrison, W. D.
1971-01-01
The general purpose capabilities of government and industry in the area of real time engineering flight simulation are discussed. The information covers computer equipment, visual systems, crew stations, and motion systems, along with brief statements of facility capabilities. Facility construction and typical operational costs are included where available. The facilities provide for economical and safe solutions to vehicle design, performance, control, and flying qualities problems of manned and unmanned flight systems.
Waweru, Evelyn; Goodman, Catherine; Kedenge, Sarah; Tsofa, Benjamin; Molyneux, Sassy
2016-03-01
In many African countries, user fees have failed to achieve intended access and quality of care improvements. Subsequent user fee reduction or elimination policies have often been poorly planned, without alternative sources of income for facilities. We describe early implementation of an innovative national health financing intervention in Kenya; the health sector services fund (HSSF). In HSSF, central funds are credited directly into a facility's bank account quarterly, and facility funds are managed by health facility management committees (HFMCs) including community representatives. HSSF is therefore a finance mechanism with potential to increase access to funds for peripheral facilities, support user fee reduction and improve equity in access. We conducted a process evaluation of HSSF implementation based on a theory of change underpinning the intervention. Methods included interviews at national, district and facility levels, facility record reviews, a structured exit survey and a document review. We found impressive achievements: HSSF funds were reaching facilities; funds were being overseen and used in a way that strengthened transparency and community involvement; and health workers' motivation and patient satisfaction improved. Challenges or unintended outcomes included: complex and centralized accounting requirements undermining efficiency; interactions between HSSF and user fees leading to difficulties in accessing crucial user fee funds; and some relationship problems between key players. Although user fees charged had not increased, national reduction policies were still not being adhered to. Finance mechanisms can have a strong positive impact on peripheral facilities, and HFMCs can play a valuable role in managing facilities. Although fiduciary oversight is essential, mechanisms should allow for local decision-making and ensure that unmanageable paperwork is avoided. 
There are also limits to what can be achieved with relatively small funds in contexts of enormous need. Process evaluations tracking (un)intended consequences of interventions can contribute to regional financing and decentralization debates. © The Author 2015. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine.
EPA Facility Registry Service (FRS): CAMDBS
This web feature service contains location and facility identification information from EPA's Facility Registry Service (FRS) for the subset of facilities that link to the Clean Air Markets Division Business System (CAMDBS). Administered by the EPA Clean Air Markets Division, within the Office of Air and Radiation, CAMDBS supports the implementation of market-based air pollution control programs, including the Acid Rain Program and regional programs designed to reduce the transport of ozone. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using rigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. This data set contains the subset of FRS integrated facilities that link to CAMDBS facilities once the CAMDBS data has been integrated into the FRS database. Additional information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs.
MODIS Information, Data, and Control System (MIDACS) system specifications and conceptual design
NASA Technical Reports Server (NTRS)
Han, D.; Salomonson, V.; Ormsby, J.; Ardanuy, P.; Mckay, A.; Hoyt, D.; Jaffin, S.; Vallette, B.; Sharts, B.; Folta, D.
1988-01-01
The MODIS Information, Data, and Control System (MIDACS) Specifications and Conceptual Design Document discusses system level requirements, the overall operating environment in which requirements must be met, and a breakdown of MIDACS into component subsystems, which include the Instrument Support Terminal, the Instrument Control Center, the Team Member Computing Facility, the Central Data Handling Facility, and the Data Archive and Distribution System. The specifications include sizing estimates for the processing and storage capacities of each data system element, as well as traffic analyses of data flows between the elements internally, and also externally across the data system interfaces. The specifications for the data system, as well as for the individual planning and scheduling, control and monitoring, data acquisition and processing, calibration and validation, and data archive and distribution components, do not yet fully specify the data system in the complete manner needed to achieve the scientific objectives of the MODIS instruments and science teams. The teams have not yet been formed; however, it was possible to develop the specifications and conceptual design based on the present concept of EosDIS, the Level-1 and Level-2 Functional Requirements Documents, the Operations Concept, and through interviews and meetings with key members of the scientific community.
EnviroNET: On-line information for LDEF
NASA Technical Reports Server (NTRS)
Lauriente, Michael
1993-01-01
EnviroNET is an on-line, free-form database intended to provide a centralized repository for a wide range of technical information on environmentally induced interactions of use to Space Shuttle customers and spacecraft designers. It provides a user-friendly, menu-driven format on networks that are connected globally and is available twenty-four hours a day - every day. The information, updated regularly, includes expository text, tabular numerical data, charts and graphs, and models. The system pools space data collected over the years by NASA, USAF, other government research facilities, industry, universities, and the European Space Agency. The models accept parameter input from the user, then calculate and display the derived values corresponding to that input. In addition to the archive, interactive graphics programs are also available on space debris, the neutral atmosphere, radiation, magnetic fields, and the ionosphere. A user-friendly, informative interface is standard for all the models and includes a pop-up help window with information on inputs, outputs, and caveats. The system will eventually simplify mission analysis with analytical tools and deliver solutions for computationally intense graphical applications to do 'What if...' scenarios. A proposed plan for developing a repository of information from the Long Duration Exposure Facility (LDEF) for a user group is presented.
Communication network for decentralized remote tele-science during the Spacelab mission IML-2
NASA Technical Reports Server (NTRS)
Christ, Uwe; Schulz, Klaus-Juergen; Incollingo, Marco
1994-01-01
The ESA communication network for decentralized remote telescience during the Spacelab mission IML-2, called Interconnection Ground Subnetwork (IGS), provided data, voice conferencing, video distribution/conferencing and high rate data services to 5 remote user centers in Europe. The combination of services allowed the experimenters to interact with their experiments as they would normally do from the Payload Operations Control Center (POCC) at MSFC. In addition, to enhance their science results, they were able to make use of reference facilities and computing resources in their home laboratory, which typically are not available in the POCC. Characteristics of the IML-2 communications implementation were the adaptation to the different user needs based on modular service capabilities of IGS and the cost optimization for the connectivity. This was achieved by using a combination of traditional leased lines, satellite based VSAT connectivity and N-ISDN according to the simulation and mission schedule for each remote site. The central management system of IGS allows minimization of staffing and the involvement of communications personnel at the remote sites. The successful operation of IGS for IML-2 as a precursor network for the Columbus Orbital Facility (COF) has proven the concept for communications to support the operation of the COF decentralized scenario.
An Electronic Pressure Profile Display system for aeronautic test facilities
NASA Technical Reports Server (NTRS)
Woike, Mark R.
1990-01-01
The NASA Lewis Research Center has installed an Electronic Pressure Profile Display system. This system provides for the real-time display of pressure readings on high resolution graphics monitors. The Electronic Pressure Profile Display system will replace manometer banks currently used in aeronautic test facilities. The Electronic Pressure Profile Display system consists of an industrial type Digital Pressure Transmitter (DPT) unit which interfaces with a host computer. The host computer collects the pressure data from the DPT unit, converts it into engineering units, and displays the readings on a high resolution graphics monitor in bar graph format. Software was developed to accomplish the above tasks and also draw facility diagrams as background information on the displays. Data transfer between the host computer and the DPT unit is done with serial communications. Up to 64 channels are displayed with a one second update time. This paper describes the system configuration, its features, and its advantages over existing systems.
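The pipeline described above (raw DPT readings converted to engineering units, then rendered as a bar graph per channel) can be sketched as follows. The 12-bit count range, 15 psi full scale, and channel names are illustrative assumptions, not the actual DPT specifications.

```python
def counts_to_psi(raw_counts, full_scale_psi=15.0, max_counts=4095):
    """Convert a raw transmitter reading to engineering units (psi).

    The 0-4095 count range and 15 psi full scale are assumed values
    for illustration, not the real DPT calibration.
    """
    return raw_counts / max_counts * full_scale_psi

def to_bar_row(label, psi, full_scale_psi=15.0, width=40):
    """Render one channel as a text bar, mimicking the bar-graph display."""
    filled = round(psi / full_scale_psi * width)
    return f"{label:>6} |{'#' * filled:<{width}}| {psi:6.2f} psi"

# Display 3 of the 64 channels from hypothetical raw readings.
for channel, raw in [("CH01", 2048), ("CH02", 1024), ("CH03", 4095)]:
    print(to_bar_row(channel, counts_to_psi(raw)))
```

In the real system this loop would run once per one-second update, fed by the serial link rather than hard-coded readings.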
Distributed Computing with Centralized Support Works at Brigham Young.
ERIC Educational Resources Information Center
McDonald, Kelly; Stone, Brad
1992-01-01
Brigham Young University (Utah) has addressed the need for maintenance and support of distributed computing systems on campus by implementing a program patterned after a national business franchise, providing the support and training of a centralized administration but allowing each unit to operate much as an independent small business.…
Design of the central region in the Gustaf Werner cyclotron at the Uppsala university
NASA Astrophysics Data System (ADS)
Toprek, Dragan; Reistad, Dag; Lundstrom, Bengt; Wessman, Dan
2002-07-01
This paper describes the design of the central region in the Gustaf Werner cyclotron for h=1, 2 and 3 modes of acceleration. The electric field distribution in the inflector and in the four acceleration gaps has been numerically calculated from an electric potential map produced by the program RELAX3D. The geometry of the central region has been tested with the computations of orbits carried out by means of the computer code CYCLONE. The optical properties of the spiral inflector and the central region were studied by using the programs CASINO and CYCLONE, respectively.
ERIC Educational Resources Information Center
Zamora, Ramon M.
Alternative learning environments offering computer-related instruction are developing around the world. Storefront learning centers, museum-based computer facilities, and special theme parks are some of the new concepts. ComputerTown, USA! is a public access computer literacy project begun in 1979 to serve both adults and children in Menlo Park…
Race, Wealth, and Solid Waste Facilities in North Carolina
Norton, Jennifer M.; Wing, Steve; Lipscomb, Hester J.; Kaufman, Jay S.; Marshall, Stephen W.; Cravey, Altha J.
2007-01-01
Background Concern has been expressed in North Carolina that solid waste facilities may be disproportionately located in poor communities and in communities of color, that this represents an environmental injustice, and that solid waste facilities negatively impact the health of host communities. Objective Our goal in this study was to conduct a statewide analysis of the location of solid waste facilities in relation to community race and wealth. Methods We used census block groups to obtain racial and economic characteristics, and information on solid waste facilities was abstracted from solid waste facility permit records. We used logistic regression to compute prevalence odds ratios for 2003, and Cox regression to compute hazard ratios of facilities issued permits between 1990 and 2003. Results The adjusted prevalence odds of a solid waste facility was 2.8 times greater in block groups with ≥50% people of color compared with block groups with < 10% people of color, and 1.5 times greater in block groups with median house values < $60,000 compared with block groups with median house values ≥$100,000. Among block groups that did not have a previously permitted solid waste facility, the adjusted hazard of a new permitted facility was 2.7 times higher in block groups with ≥50% people of color compared with block groups with < 10% people of color. Conclusion Solid waste facilities present numerous public health concerns. In North Carolina solid waste facilities are disproportionately located in communities of color and low wealth. In the absence of action to promote environmental justice, the continued need for new facilities could exacerbate this environmental injustice. PMID:17805426
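As an illustration of the prevalence odds ratio reported above, the following sketch computes it from a 2x2 table of block-group counts. The counts below are invented for the example; they are not the study's data, and the study's reported ratios were additionally adjusted for covariates.

```python
def prevalence_odds_ratio(exposed_with, exposed_without,
                          referent_with, referent_without):
    """Odds of hosting a solid waste facility in the exposed group
    (e.g., block groups with >=50% people of color) divided by the
    odds in the referent group (e.g., <10% people of color)."""
    exposed_odds = exposed_with / exposed_without
    referent_odds = referent_with / referent_without
    return exposed_odds / referent_odds

# Hypothetical counts: 28 of 128 exposed block groups host a facility,
# versus 10 of 110 referent block groups.
print(prevalence_odds_ratio(28, 100, 10, 100))  # prints approximately 2.8
```

In the study itself, logistic regression produced covariate-adjusted versions of this ratio, and Cox regression produced the analogous hazard ratios for newly permitted facilities.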
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. NASA Administrator Sean O'Keefe (center) is welcomed to the Central Florida Research Park, near Orlando. Central Florida leaders are proposing the research park as the site for the new NASA Shared Services Center. The center would centralize NASA's payroll, accounting, human resources, facilities and procurement offices that are now handled at each field center. The consolidation is part of the One NASA focus. Six sites around the U.S. are under consideration by NASA.
Space technology test facilities at the NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Gross, Anthony R.; Rodrigues, Annette T.
1990-01-01
The major space research and technology test facilities at the NASA Ames Research Center are divided into five categories: General Purpose, Life Support, Computer-Based Simulation, High Energy, and Space Exploration Test Facilities. The paper discusses selected facilities within each of the five categories and highlights some of the major programs in which these facilities have been involved. Special attention is given to the 20-G Man-Rated Centrifuge, the Human Research Facility, the Plant Crop Growth Facility, the Numerical Aerodynamic Simulation Facility, the Arc-Jet Complex and Hypersonic Test Facility, the Infrared Detector and Cryogenic Test Facility, and the Mars Wind Tunnel. Each facility is described along with its objectives, test parameter ranges, and major current programs and applications.
Mineral facilities of Northern and Central Eurasia
Baker, Michael S.; Elias, Nurudeen; Guzman, Eric; Soto-Viruet, Yadira
2010-01-01
This map displays almost 900 records of mineral facilities within the countries that formerly constituted the Union of Soviet Socialist Republics (USSR). Each record represents one commodity and one facility type at a single geographic location. Facility types include mines, oil and gas fields, and plants, such as refineries, smelters, and mills. Common commodities of interest include aluminum, cement, coal, copper, gold, iron and steel, lead, nickel, petroleum, salt, silver, and zinc. Records include attributes, such as commodity, country, location, company name, facility type and capacity (if applicable), and latitude and longitude geographical coordinates (in both degrees-minutes-seconds and decimal degrees). The data shown on this map and in table 1 were compiled from multiple sources, including (1) the most recently available data from the U.S. Geological Survey (USGS) Minerals Yearbook (Europe and Central Eurasia volume), (2) mineral statistics and information from the USGS Minerals Information Web site (http://minerals.usgs.gov/minerals/pubs/country/europe.html), and (3) data collected by the USGS minerals information country specialists from sources, such as statistical publications of individual countries, annual reports and press releases of operating companies, and trade journals. Data reflect the most recent published table of industry structure for each country at the time of this publication. Additional information is available from the country specialists listed in table 2.
Operational summary of an electric propulsion long term test facility
NASA Technical Reports Server (NTRS)
Trump, G. E.; James, E. L.; Bechtel, R. T.
1982-01-01
An automated test facility capable of simultaneously operating three 2.5 kW, 30-cm mercury ion thrusters and their power processors is described, along with a test program conducted for the documentation of thruster characteristics as a function of time. Facility controls are analog, with full redundancy, so that in the event of malfunction the facility automatically activates a backup mode and notifies an operator. Test data are recorded by a central data collection system and processed as daily averages. The facility has operated continuously for a period of 37 months, over which nine mercury ion thrusters and four power processor units accumulated a total of over 14,500 hours of thruster operating time.
NASA Technical Reports Server (NTRS)
Hathaway, M. D.; Wood, J. R.; Wasserbauer, C. A.
1991-01-01
A low speed centrifugal compressor facility recently built by the NASA Lewis Research Center is described. The purpose of this facility is to obtain detailed flow field measurements for computational fluid dynamic code assessment and flow physics modeling in support of Army and NASA efforts to advance small gas turbine engine technology. The facility is heavily instrumented with pressure and temperature probes, both in the stationary and rotating frames of reference, and has provisions for flow visualization and laser velocimetry. The facility will accommodate rotational speeds to 2400 rpm and is rated at pressures to 1.25 atm. The initial compressor stage being tested is geometrically and dynamically representative of modern high-performance centrifugal compressor stages with the exception of Mach number levels. Preliminary experimental investigations of inlet and exit flow uniformity and measurement repeatability are presented. These results demonstrate the high quality of the data which may be expected from this facility. The significance of synergism between computational fluid dynamic analysis and experimentation throughout the development of the low speed centrifugal compressor facility is demonstrated.
Goetz, Matthew Bidwell; Hoang, Tuyen; Knapp, Herschel; Burgess, Jane; Fletcher, Michael D; Gifford, Allen L; Asch, Steven M
2013-10-01
Pilot data suggest that a multifaceted approach may increase HIV testing rates, but the scalability of this approach and the level of support needed for successful implementation remain unknown. The objective was to evaluate the effectiveness of a scaled-up multi-component intervention in increasing the rate of risk-based and routine HIV diagnostic testing in primary care clinics, and the impact of differing levels of program support. This was a three-arm, quasi-experimental implementation research study in Veterans Health Administration (VHA) facilities, covering persons receiving primary care between June 2009 and September 2011. A multimodal program, including a real-time electronic clinical reminder to facilitate HIV testing, provider feedback reports and provider education, was implemented in Central and Local Arm sites; sites in the Central Arm also received ongoing programmatic support, while Control Arm sites had no intervention. The main measure was the frequency of performing HIV testing during the 6 months before and after implementation of a risk-based clinical reminder (phase I) or routine clinical reminder (phase II). The adjusted rate of risk-based testing increased by 0.4 %, 5.6 % and 10.1 % in the Control, Local and Central Arms, respectively (all comparisons, p < 0.01). During phase II, the adjusted rate of routine testing increased by 1.1 %, 6.3 % and 9.2 % in the Control, Local and Central Arms, respectively (all comparisons, p < 0.01). At study end, 70-80 % of patients had been offered an HIV test. Use of clinical reminders, provider feedback, education and social marketing significantly increased the frequency at which HIV testing is offered and performed in VHA facilities. These findings support a multimodal approach toward achieving the goal of having every American know their HIV status as a matter of routine clinical practice.
Computers in Schools: White Boys Only?
ERIC Educational Resources Information Center
Hammett, Roberta F.
1997-01-01
Discusses the role of computers in today's world and the construction of computer use attitudes, such as gender gaps. Suggests how schools might close the gaps. Includes a brief explanation about how facility with computers is important for women in their efforts to gain equitable treatment in all aspects of their lives. (PA)
20. SITE BUILDING 002 SCANNER BUILDING IN COMPUTER ...
20. SITE BUILDING 002 - SCANNER BUILDING - IN COMPUTER ROOM LOOKING AT "CONSOLIDATED MAINTENANCE OPERATIONS CENTER" JOB AREA AND OPERATION WORK CENTER. TASKS INCLUDE RADAR MAINTENANCE, COMPUTER MAINTENANCE, CYBER COMPUTER MAINTENANCE AND RELATED ACTIVITIES. - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA
Meteorological annual report for 1995 at the Savannah River Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunter, C.H.; Tatum, C.P.
1996-12-01
The Environmental Technology Section (ETS) of the Savannah River Technology Center (SRTC) collects, archives, and analyzes basic meteorological data supporting a variety of activities at SRS. These activities include the design, construction, and operation of nuclear and non-nuclear facilities, emergency response, environmental compliance, resource management, and environmental research. This report contains tabular and graphical summaries of data collected during 1995 for temperature, precipitation, relative humidity, wind, barometric pressure, and solar radiation. Most of these data were collected at the Central Climatology Facility. Summaries of temperature and relative humidity were generated with data from the lowest level of measurement at the Central Climatology Site tower (13 feet above ground). (Relative humidity is calculated from measurements of dew-point temperature.) Wind speed summaries were generated with data from the second measurement level (58 feet above ground). Wind speed measurements from this level are believed to best represent open, well-exposed areas of the Site. Precipitation summaries were based on data from the Building 773-A site since quality control algorithms for the Central Climatology Facility rain gauge data were not finalized at the time this report was prepared. This report also contains seasonal and annual summaries of joint occurrence frequencies for selected wind speed categories by 22.5 degree wind direction sector (i.e., wind roses). Wind rose summaries are provided for the 200-foot level of the Central Climatology tower and for each of the eight 200-foot area towers.
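The joint occurrence frequencies behind such wind roses (the fraction of observations falling in each 22.5-degree direction sector and wind speed category) can be computed with a short sketch. The sector labels follow the usual 16-point compass; the speed-class bounds are illustrative, not the report's actual categories.

```python
from collections import Counter

# The sixteen 22.5-degree compass sectors, centered on N, NNE, NE, ...
SECTORS = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
           "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW"]

def sector(direction_deg):
    """Map a wind direction in degrees to its 22.5-degree compass sector."""
    return SECTORS[int(direction_deg / 22.5 + 0.5) % 16]

def speed_class(speed, bounds=(4, 8, 13, 19)):
    """Assign a speed category index 0..len(bounds); bounds are illustrative."""
    return sum(speed >= b for b in bounds)

def wind_rose(observations):
    """Joint occurrence frequency of (sector, speed class) pairs.

    observations: iterable of (direction_deg, speed) tuples.
    """
    counts = Counter((sector(d), speed_class(s)) for d, s in observations)
    n = len(observations)
    return {key: c / n for key, c in counts.items()}
```

For example, `wind_rose([(0, 5), (90, 5), (0, 5), (180, 20)])` reports that half the observations were light northerly winds.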
Central Radar System, Over-the-Horizon Backscatter
1990-03-09
[Garbled scanned excerpt; recoverable content: Table 41-6 (continued), Minnesota recommended allowable limits (RAL) for drinking water wells, including 1,2-dibromo-3-chloropropane (DBCP) at 0.3 ug/l; table of contents entries covering the Environmental Impact Analysis Process overview and Technical Study 1, Facilities, of the Central Radar System Over-the-Horizon Backscatter Radar Program environmental impact studies.]
This image, looking due south shows the central part of ...
This image, looking due south shows the central part of the north wing of the building, a 2 story facade. In the foreground are several utility chases which span this elevation of the building - Department of Energy, Mound Facility, Electronics Laboratory Building (E Building), One Mound Road, Miamisburg, Montgomery County, OH
2004-02-19
KENNEDY SPACE CENTER, FLA. - Pamella J. Dana, Ph.D., director, Office of Tourism, Trade, and Economic Development in Florida, takes part in the proposal for locating NASA’s new Shared Services Center in the Central Florida Research Park, near Orlando. The presentation was given to NASA Administrator Sean O’Keefe and other officials. The center would centralize NASA’s payroll, accounting, human resources, facilities and procurement offices that are now handled at each field center. The consolidation is part of the One NASA focus. Six sites around the U.S. are under consideration by NASA.
Launch Site Computer Simulation and its Application to Processes
NASA Technical Reports Server (NTRS)
Sham, Michael D.
1995-01-01
This paper provides an overview of computer simulation, the Lockheed developed STS Processing Model, and the application of computer simulation to a wide range of processes. The STS Processing Model is an icon driven model that uses commercial off the shelf software and a Macintosh personal computer. While it usually takes one year to process and launch 8 space shuttles, with the STS Processing Model this process is computer simulated in about 5 minutes. Facilities, orbiters, or ground support equipment can be added or deleted and the impact on launch rate, facility utilization, or other factors measured as desired. This same computer simulation technology can be used to simulate manufacturing, engineering, commercial, or business processes. The technology does not require an 'army' of software engineers to develop and operate, but instead can be used by the layman with only a minimal amount of training. Instead of making changes to a process and realizing the results after the fact, with computer simulation, changes can be made and processes perfected before they are implemented.
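A toy queueing sketch of the kind of ground-processing simulation described above (orbiters cycling through a limited number of processing bays before launch) might look like this. The bay count, flow time, and jitter are invented for illustration and bear no relation to the STS Processing Model's actual parameters.

```python
import heapq
import random

def simulate_launch_rate(n_orbiters=4, n_bays=3, flow_days=45,
                         horizon_days=365, seed=1):
    """Count launches in a horizon for a toy orbiter-processing queue.

    Each orbiter occupies a processing bay for roughly flow_days
    (jittered +/-20%), launches, and rejoins the queue. All parameter
    values are illustrative assumptions.
    """
    rng = random.Random(seed)
    events = []                    # heap of bay-completion times
    free_bays, waiting = n_bays, n_orbiters
    launches = 0
    # Assign waiting orbiters to free bays at time zero.
    while free_bays and waiting:
        heapq.heappush(events, rng.uniform(0.8, 1.2) * flow_days)
        free_bays -= 1
        waiting -= 1
    while events:
        t = heapq.heappop(events)
        if t > horizon_days:
            break
        launches += 1              # processing complete: orbiter launches
        # The launched orbiter returns and the freed bay starts the next one.
        heapq.heappush(events, t + rng.uniform(0.8, 1.2) * flow_days)
    return launches
```

Changing `n_bays` or `flow_days` and re-running shows the kind of "what if" facility-utilization question the STS Processing Model answers in minutes.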
Scale Space for Camera Invariant Features.
Puig, Luis; Guerrero, José J; Daniilidis, Kostas
2014-09-01
In this paper we propose a new approach to compute the scale space of any central projection system, such as catadioptric, fisheye or conventional cameras. Since these systems can be explained using a unified model, the single parameter that defines each type of system is used to automatically compute the corresponding Riemannian metric. This metric, combined with the partial differential equations framework on manifolds, allows us to compute the Laplace-Beltrami (LB) operator, enabling the computation of the scale space of any central projection system. Scale space is essential for the intrinsic scale selection and neighborhood description in features like SIFT. We perform experiments with synthetic and real images to validate the generalization of our approach to any central projection system. We compare our approach with the best-existing methods showing competitive results in all type of cameras: catadioptric, fisheye, and perspective.
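For a conventional perspective camera the construction above reduces to the familiar Gaussian scale space, which can be generated by diffusing the image under the heat equation; in the paper's generalization, the plain Laplacian below would be replaced by the Laplace-Beltrami operator induced by the camera's Riemannian metric. A minimal pure-Python sketch of the perspective special case:

```python
def heat_step(img, dt=0.2):
    """One explicit Euler step of the heat equation u_t = laplacian(u)
    on a 2-D grid (list of lists). Boundary pixels are held fixed.
    dt <= 0.25 keeps the explicit scheme stable."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            out[y][x] = img[y][x] + dt * lap
    return out

def scale_space(img, levels=4, steps_per_level=5):
    """Return [img, smoothed_1, ..., smoothed_levels]: a Gaussian scale
    space built by accumulating diffusion time between levels."""
    space, cur = [img], img
    for _ in range(levels):
        for _ in range(steps_per_level):
            cur = heat_step(cur)
        space.append(cur)
    return space
```

Diffusing an impulse image spreads its mass to neighboring pixels while (away from the boundary) conserving the total, which is exactly the Gaussian-blur behavior scale selection in SIFT-like features relies on.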
48 CFR 970.5227-1 - Rights in data-facilities.
Code of Federal Regulations, 2013 CFR
2013-10-01
...) Computer data bases, as used in this clause, means a collection of data in a form capable of, and for the purpose of, being stored in, processed, and operated on by a computer. The term does not include computer software. (2) Computer software, as used in this clause, means (i) computer programs which are data...
48 CFR 970.5227-1 - Rights in data-facilities.
Code of Federal Regulations, 2014 CFR
2014-10-01
...) Computer data bases, as used in this clause, means a collection of data in a form capable of, and for the purpose of, being stored in, processed, and operated on by a computer. The term does not include computer software. (2) Computer software, as used in this clause, means (i) computer programs which are data...
48 CFR 970.5227-1 - Rights in data-facilities.
Code of Federal Regulations, 2012 CFR
2012-10-01
...) Computer data bases, as used in this clause, means a collection of data in a form capable of, and for the purpose of, being stored in, processed, and operated on by a computer. The term does not include computer software. (2) Computer software, as used in this clause, means (i) computer programs which are data...
48 CFR 970.5227-1 - Rights in data-facilities.
Code of Federal Regulations, 2011 CFR
2011-10-01
...) Computer data bases, as used in this clause, means a collection of data in a form capable of, and for the purpose of, being stored in, processed, and operated on by a computer. The term does not include computer software. (2) Computer software, as used in this clause, means (i) computer programs which are data...
23. BUILDING NO. 452, ORDNANCE FACILITY (BAG CHARGE FILLING PLANT), ...
23. BUILDING NO. 452, ORDNANCE FACILITY (BAG CHARGE FILLING PLANT), INTERIOR, LOOKING SOUTH DOWN CENTRAL CORRIDOR. NOTE BINS IN WALLS ON EITHER SIDE OF CORRIDOR, USED FOR PASSING EXPLOSIVES AND LOADED ITEMS TO SIEVING ROOMS BEYOND WALLS. - Picatinny Arsenal, 400 Area, Gun Bag Loading District, State Route 15 near I-80, Dover, Morris County, NJ
NASA Astrophysics Data System (ADS)
1980-07-01
Accomplishments are reported in the areas of: program management, system integration, the beam characterization system, receiver unit, thermal storage subsystems, master control system, plant support subsystem and engineering services. A solar facilities design integration program action items update is included. Work plan changes and cost underruns are discussed briefly. (LEW)
Facility Planning for 21st Century. Technology, Industry, and Education.
ERIC Educational Resources Information Center
Hill, Franklin
When the Orange County School Board (Orlando, Florida) decided to build a new high school, they recognized Central Florida's high technology emphasis as a special challenge. The new facility needed to meet present instructional demands while being flexible enough to incorporate 21st century technologies. The final result is a new $30 million high…
ERIC Educational Resources Information Center
Gilliland, John W.
Development of a design for a new elementary school facility is traced through evaluation of various innovative facilities. Significant features include--(1) the spiral plan form, (2) centralized core levels including teacher work center, "perception" core, and interior stream aquarium, (3) the learning laboratory classroom suites, (4) a unique…
Jonathan W. Amy and the Amy Facility for Instrumentation Development.
Cooks, R Graham
2017-05-16
This Perspective describes the unique Jonathan Amy Facility for Chemical Instrumentation in the Department of Chemistry at Purdue University, tracing its history and mode of operation. It also describes aspects of the career of its namesake and some of his insights which have been central to analytical instrumentation development, improvement, and utilization, both at Purdue and nationally.
STATE OF NEW YORK STANDARD PLAN TYPE A-1, ONE-STORY 14-21 CLASSROOM ELEMENTARY SCHOOL.
ERIC Educational Resources Information Center
King and King, Syracuse, NY.
The program for an elementary school facility required 14 classrooms with the potential for accommodating an increase of seven classrooms. The expansion potential also involved addition of a considerable number of non-teaching areas. The design featured a central core containing administration, playroom, cafeteria, and kitchen facilities with two…
Naval Computer & Telecommunications Area Master Station, Eastern Pacific, Radio ...
Naval Computer & Telecommunications Area Master Station, Eastern Pacific, Radio Transmitter Facility Lualualei, Marine Barracks, Intersection of Tower Drive & Morse Street, Makaha, Honolulu County, HI
Human resources for refraction services in Central Nepal.
Kandel, Himal; Murthy, G V S; Bascaran, Covadonga
2015-07-01
Uncorrected refractive error is a public health problem globally and in Nepal. Planning of refraction services is hampered by a paucity of data. This study was conducted to determine availability and distribution of human resources for refraction, their efficiency, the type and extent of their training; the current service provision of refraction services and the unmet need in human resources for refraction in Central Nepal. This was a descriptive cross-sectional study. All refraction facilities in the Central Region were identified through an Internet search and interviews of key informants from the professional bodies and parent organisations of primary eye centres. A stratified simple random sampling technique was used to select 50 per cent of refraction facilities. The selected facilities were visited for primary data collection. Face-to-face interviews were conducted with the managers and the refractionists available in the facilities using a semi-structured questionnaire. Data was collected in 29 centres. All the managers (n=29; response rate 100 per cent) and 50 refractionists (Response rate 65.8 per cent) were interviewed. Optometrists and ophthalmic assistants were the main providers of refraction services (n=70, 92.11 per cent). They were unevenly distributed across the region, highly concentrated around urban areas. The median number of refractions per refractionist per year was 3,600 (IQR: 2,400 - 6,000). Interviewed refractionists stated that clients' knowledge, attitude and practice related factors such as lack of awareness of the need for refraction services and/or availability of existing services were the major barriers to the output of refraction services. The total number of refractions carried out in the Central Region per year was 653,176. An additional 170 refractionists would be needed to meet the unmet need of 1,323,234 refractions. 
The study findings demand a major effort to develop appropriately trained personnel when planning refraction services in the Central Region and in Nepal as a whole. The equitable distribution of the refractionists, their community-outreach services and awareness raising activities should be emphasised. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.
EPA Facility Registry System (FRS): NCES
This web feature service contains location and facility identification information from EPA's Facility Registry System (FRS) for the subset of facilities that link to the National Center for Education Statistics (NCES). The primary federal database for collecting and analyzing data related to education in the United States and other Nations, NCES is located in the U.S. Department of Education, within the Institute of Education Sciences. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using vigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. This data set contains the subset of FRS integrated facilities that link to NCES school facilities once the NCES data has been integrated into the FRS database. Additional information on FRS is available at the EPA website http://www.epa.gov/enviro/html/fii/index.html.
Bender, Désirée; Hollstein, Tina; Schweppe, Cornelia
2017-12-01
This paper presents findings from an ethnographic study of old age care facilities for German-speaking people in Thailand. It analyses the conditions and processes behind the development and specific designs of such facilities. It first looks at the intertwinement, at the socio-structural level, of different transborder developments in which the facilities' emergence is embedded. Second, it analyses the processes that accompany the emergence, development and organisation of these facilities at the local level. In this regard, it points out the central role of the facility operators as transnational actors who mediate between different frames of reference and groups of actors involved in these facilities. It concludes that the processes of mediation and intertwining are an important and distinctive feature of the emergence of these facilities, necessitated by the fact that, although the facilities are located in Thailand, their 'markets' are in the German-speaking countries of their target groups.
Logistics in the Computer Lab.
ERIC Educational Resources Information Center
Cowles, Jim
1989-01-01
Discusses ways to provide good computer laboratory facilities for elementary and secondary schools. Topics discussed include establishing the computer lab and selecting hardware; types of software; physical layout of the room; printers; networking possibilities; considerations relating to the physical environment; and scheduling methods. (LRW)
Computer-Aided Engineering Education at the K.U. Leuven.
ERIC Educational Resources Information Center
Snoeys, R.; Gobin, R.
1987-01-01
Describes some recent initiatives and developments in the computer-aided design program in the engineering faculty of the Katholieke Universiteit Leuven (Belgium). Provides a survey of the engineering curriculum, the computer facilities, and the main software packages available. (TW)
76 FR 59803 - Children's Online Privacy Protection Rule
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-27
...,'' covering the ``myriad of computer and telecommunications facilities, including equipment and operating..., Dir. and Professor of Computer Sci. and Pub. Affairs, Princeton Univ. (currently Chief Technologist at... data in the manner of a personal computer. See Electronic Privacy Information Center (``EPIC...
Use of the Decision Support System for VA cost-effectiveness research.
Barnett, P G; Rodgers, J H
1999-04-01
The Department of Veterans Affairs is adopting the Decision Support System (DSS), computer software and databases which include a cost-accounting system that determines the cost of health care products and patient encounters. A system for providing cost data for cost-effectiveness analysis should provide valid, detailed, and comprehensive data that can be aggregated. The design of DSS is described and compared with those criteria. Utilization data from DSS were compared with other VA utilization data. Aggregate DSS cost data from 35 medical centers were compared with relative resource weights developed for the Medicare program. Data on hospital stays at three facilities showed that 3.7% of the stays in DSS were not in the VA discharge database, whereas 7.6% of the stays in the discharge data were not in DSS. DSS reported between 68.8% and 97.1% of the outpatient encounters reported by six facilities in the ambulatory care database. Relative weights for each Diagnosis Related Group based on DSS data from 35 VA facilities correlated with Medicare weights (correlation coefficient of .853). DSS will be useful for research if certain problems are overcome. It is difficult to distinguish long-term from acute hospital care. VA does not have a complete database of all inpatient procedures, so DSS has not assigned them a specific cost. The authority to access encounter-level DSS data needs to be centralized. Researchers can provide the feedback needed to improve DSS cost estimates. A comprehensive encounter-level extract would facilitate use of DSS for research.
Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing
NASA Astrophysics Data System (ADS)
Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.
2015-05-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project, titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA), is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is under way to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integrating HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system.
We will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.
Influence of Computer-Aided Detection on Performance of Screening Mammography
Fenton, Joshua J.; Taplin, Stephen H.; Carney, Patricia A.; Abraham, Linn; Sickles, Edward A.; D'Orsi, Carl; Berns, Eric A.; Cutter, Gary; Hendrick, R. Edward; Barlow, William E.; Elmore, Joann G.
2011-01-01
Background Computer-aided detection identifies suspicious findings on mammograms to assist radiologists. Since the Food and Drug Administration approved the technology in 1998, it has been disseminated into practice, but its effect on the accuracy of interpretation is unclear. Methods We determined the association between the use of computer-aided detection at mammography facilities and the performance of screening mammography from 1998 through 2002 at 43 facilities in three states. We had complete data for 222,135 women (a total of 429,345 mammograms), including 2351 women who received a diagnosis of breast cancer within 1 year after screening. We calculated the specificity, sensitivity, and positive predictive value of screening mammography with and without computer-aided detection, as well as the rates of biopsy and breast-cancer detection and the overall accuracy, measured as the area under the receiver-operating-characteristic (ROC) curve. Results Seven facilities (16%) implemented computer-aided detection during the study period. Diagnostic specificity decreased from 90.2% before implementation to 87.2% after implementation (P<0.001), the positive predictive value decreased from 4.1% to 3.2% (P = 0.01), and the rate of biopsy increased by 19.7% (P<0.001). The increase in sensitivity from 80.4% before implementation of computer-aided detection to 84.0% after implementation was not significant (P = 0.32). The change in the cancer-detection rate (including invasive breast cancers and ductal carcinomas in situ) was not significant (4.15 cases per 1000 screening mammograms before implementation and 4.20 cases after implementation, P = 0.90). Analyses of data from all 43 facilities showed that the use of computer-aided detection was associated with significantly lower overall accuracy than was nonuse (area under the ROC curve, 0.871 vs. 0.919; P = 0.005). 
Conclusions The use of computer-aided detection is associated with reduced accuracy of interpretation of screening mammograms. The increased rate of biopsy with the use of computer-aided detection is not clearly associated with improved detection of invasive breast cancer. PMID:17409321
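The performance measures reported in this study (sensitivity, specificity, and positive predictive value) reduce to simple ratios over confusion-matrix counts. A minimal illustrative sketch follows; the counts are invented for illustration and are not the study's data:

```python
# Hedged sketch: standard screening-performance metrics as ratios over
# true/false positives and negatives. Counts below are hypothetical.

def screening_metrics(tp, fp, tn, fn):
    """Return (sensitivity, specificity, ppv) as fractions."""
    sensitivity = tp / (tp + fn)   # cancers correctly flagged / all cancers
    specificity = tn / (tn + fp)   # normals correctly passed / all normals
    ppv = tp / (tp + fp)           # true cancers / all positive recalls
    return sensitivity, specificity, ppv

# Illustrative counts only (not from the 429,345-mammogram dataset):
sens, spec, ppv = screening_metrics(tp=84, fp=2541, tn=97000, fn=16)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} ppv={ppv:.3f}")
```

Note how a modest drop in specificity (more false positives) can drag the PPV down sharply when disease prevalence is low, which is the dynamic the abstract describes.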
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Salman; Roser, Robert; Gerber, Richard
The U.S. Department of Energy (DOE) Office of Science (SC) Offices of High Energy Physics (HEP) and Advanced Scientific Computing Research (ASCR) convened a programmatic Exascale Requirements Review on June 10–12, 2015, in Bethesda, Maryland. This report summarizes the findings, results, and recommendations derived from that meeting. The high-level findings and observations are as follows. Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude — and in some cases greater — than that available currently. The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. Data rates and volumes from experimental facilities are also straining the current HEP infrastructure in its ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. A close integration of high-performance computing (HPC) simulation and data analysis will greatly aid in interpreting the results of HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. Long-range planning between HEP and ASCR will be required to meet HEP’s research needs.
To best use ASCR HPC resources, the experimental HEP program needs (1) an established, long-term plan for access to ASCR computational and data resources, (2) the ability to map workflows to HPC resources, (3) the ability for ASCR facilities to accommodate workflows run by collaborations potentially comprising thousands of individual members, (4) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, (5) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.
Economics of Computing: The Case of Centralized Network File Servers.
ERIC Educational Resources Information Center
Solomon, Martin B.
1994-01-01
Discusses computer networking and the cost effectiveness of decentralization, including local area networks. A planned experiment with a centralized approach to the operation and management of file servers at the University of South Carolina is described that hopes to realize cost savings and the avoidance of staffing problems. (Contains four…
51. VIEW OF LORAL ADS 100A COMPUTERS LOCATED CENTRALLY ON ...
51. VIEW OF LORAL ADS 100A COMPUTERS LOCATED CENTRALLY ON NORTH WALL OF TELEMETRY ROOM (ROOM 106). SLC-3W CONTROL ROOM IS VISIBLE IN BACKGROUND THROUGH WINDOW IN NORTH WALL. - Vandenberg Air Force Base, Space Launch Complex 3, Launch Operations Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
A Computer Program for Training Eccentric Reading in Persons with Central Scotoma
ERIC Educational Resources Information Center
Kasten, Erich; Haschke, Peggy; Meinhold, Ulrike; Oertel-Verweyen, Petra
2010-01-01
This article explores the effectiveness of a computer program--Xcentric viewing--for training eccentric reading in persons with central scotoma. The authors conducted a small study to investigate whether this program increases the reading capacities of individuals with age-related macular degeneration (AMD). Instead of a control group, they…
How Data Becomes Physics: Inside the RACF
Ernst, Michael; Rind, Ofer; Rajagopalan, Srini; Lauret, Jerome; Pinkenburg, Chris
2018-06-22
The RHIC & ATLAS Computing Facility (RACF) at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory sits at the center of a global computing network. It connects more than 2,500 researchers around the world with the data generated by millions of particle collisions taking place each second at Brookhaven Lab's Relativistic Heavy Ion Collider (RHIC, a DOE Office of Science User Facility for nuclear physics research), and the ATLAS experiment at the Large Hadron Collider in Europe. Watch this video to learn how the people and computing resources of the RACF serve these scientists to turn petabytes of raw data into physics discoveries.
NASA Technical Reports Server (NTRS)
Hollis, Brian R.
1996-01-01
A computational algorithm has been developed which can be employed to determine the flow properties of an arbitrary real (virial) gas in a wind tunnel. A multiple-coefficient virial gas equation of state and the assumption of isentropic flow are used to model the gas and to compute flow properties throughout the wind tunnel. This algorithm has been used to calculate flow properties for the wind tunnels of the Aerothermodynamics Facilities Complex at the NASA Langley Research Center, in which air, CF4, He, and N2 are employed as test gases. The algorithm is detailed in this paper and sample results are presented for each of the Aerothermodynamics Facilities Complex wind tunnels.
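As a rough illustration of the kind of equation of state such an algorithm evaluates (this is not the NASA code itself), a truncated virial EOS can be written p = rho*R*T*(1 + B(T)*rho + C(T)*rho^2), reducing to the ideal-gas law when B = C = 0. A minimal sketch with illustrative coefficients:

```python
# Hedged sketch (not the Langley algorithm): pressure from a truncated
# three-term virial equation of state. B and C values are illustrative;
# real coefficients are temperature-dependent and gas-specific.

R_N2 = 296.8  # specific gas constant for N2, J/(kg*K)

def virial_pressure(rho, T, B, C, R=R_N2):
    """Pressure [Pa] for density rho [kg/m^3] and temperature T [K].

    B has units m^3/kg, C has units m^6/kg^2.
    """
    Z = 1.0 + B * rho + C * rho**2   # compressibility factor
    return rho * R * T * Z

# With B = C = 0 this recovers the ideal-gas law p = rho*R*T:
print(virial_pressure(1.0, 300.0, B=0.0, C=0.0))
```

The real-gas correction enters entirely through the compressibility factor Z, which is also where such an algorithm departs from ideal-gas isentropic relations.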
Core commands across airway facilities systems.
DOT National Transportation Integrated Search
2003-05-01
This study takes a high-level approach to evaluate computer systems without regard to the specific method of : interaction. This document analyzes the commands that Airway Facilities (AF) use across different systems and : the meanings attributed to ...
Waweru, Evelyn; Goodman, Catherine; Kedenge, Sarah; Tsofa, Benjamin; Molyneux, Sassy
2016-01-01
In many African countries, user fees have failed to achieve intended access and quality of care improvements. Subsequent user fee reduction or elimination policies have often been poorly planned, without alternative sources of income for facilities. We describe early implementation of an innovative national health financing intervention in Kenya; the health sector services fund (HSSF). In HSSF, central funds are credited directly into a facility’s bank account quarterly, and facility funds are managed by health facility management committees (HFMCs) including community representatives. HSSF is therefore a finance mechanism with potential to increase access to funds for peripheral facilities, support user fee reduction and improve equity in access. We conducted a process evaluation of HSSF implementation based on a theory of change underpinning the intervention. Methods included interviews at national, district and facility levels, facility record reviews, a structured exit survey and a document review. We found impressive achievements: HSSF funds were reaching facilities; funds were being overseen and used in a way that strengthened transparency and community involvement; and health workers’ motivation and patient satisfaction improved. Challenges or unintended outcomes included: complex and centralized accounting requirements undermining efficiency; interactions between HSSF and user fees leading to difficulties in accessing crucial user fee funds; and some relationship problems between key players. Although user fees charged had not increased, national reduction policies were still not being adhered to. Finance mechanisms can have a strong positive impact on peripheral facilities, and HFMCs can play a valuable role in managing facilities. Although fiduciary oversight is essential, mechanisms should allow for local decision-making and ensure that unmanageable paperwork is avoided. 
There are also limits to what can be achieved with relatively small funds in contexts of enormous need. Process evaluations tracking (un)intended consequences of interventions can contribute to regional financing and decentralization debates. PMID:25920355
2004-02-19
KENNEDY SPACE CENTER, FLA. - NASA officials and government representatives are gathered to learn about the assets of the Central Florida Research Park, near Orlando. At the far end of the table is NASA Administrator Sean O’Keefe. He is flanked, on the left, by Florida Congressman Tom Feeney and U.S. Senator Bill Nelson; and on the right by U.S. Congressman Dave Weldon. Central Florida leaders are proposing the research park as the site for the NASA Shared Services Center. The center would centralize NASA’s payroll, accounting, human resources, facilities and procurement offices that are now handled at each field center. The consolidation is part of the One NASA focus. Six sites around the U.S. are under consideration by NASA.
Monitoring of IaaS and scientific applications on the Cloud using the Elasticsearch ecosystem
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.
2015-05-01
The private Cloud at the Torino INFN computing centre offers IaaS services to different scientific computing applications. The infrastructure is managed with the OpenNebula cloud controller. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BES-III collaboration, plus an increasing number of other small tenants. Besides keeping track of the usage, the automation of dynamic allocation of resources to tenants requires detailed monitoring and accounting of the resource usage. As a first investigation towards this, we set up a monitoring system to inspect the site activities both in terms of IaaS and applications running on the hosted virtual instances. For this purpose we used the Elasticsearch, Logstash and Kibana stack. In the current implementation, the heterogeneous accounting information is fed to different MySQL databases and sent to Elasticsearch via a custom Logstash plugin. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS-level information gathered through the API is sent to the MySQL database through an ad hoc RESTful web service, which is also used for other accounting purposes. At the application level, we used the ROOT plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BES-III virtual instances used to be monitored with Zabbix; as a proof of concept, we also retrieve the information contained in the Zabbix database. Each of these three cases is indexed separately in Elasticsearch. We are now considering dropping the intermediate layer provided by the SQL database and evaluating a NoSQL option as a single central database for all the monitoring information. We set up a set of Kibana dashboards with pre-defined queries in order to monitor the relevant information in each case.
In this way we have achieved a uniform monitoring interface for both the IaaS and the scientific applications, mostly leveraging off-the-shelf tools.
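The ingestion pattern described above, heterogeneous accounting records funneled into per-source Elasticsearch indices, can be sketched with the Elasticsearch bulk API; the index name and record fields below are illustrative assumptions, not the Torino site's actual schema:

```python
# Hedged sketch: render accounting records as newline-delimited JSON for
# the Elasticsearch _bulk endpoint, one index per data source (e.g. IaaS
# metering vs. PROOF vs. Zabbix). All names here are hypothetical.
import json

def to_bulk_lines(index_name, records):
    """Build an Elasticsearch bulk-API body indexing records into index_name."""
    lines = []
    for rec in records:
        lines.append(json.dumps({"index": {"_index": index_name}}))  # action line
        lines.append(json.dumps(rec))                                # document line
    return "\n".join(lines) + "\n"  # bulk bodies must end with a newline

payload = to_bulk_lines("iaas-accounting",
                        [{"vm": "alice-wn-042", "cpu_hours": 7.5}])
# POST this payload to http://<es-host>:9200/_bulk with
# Content-Type: application/x-ndjson to index the records.
```

Keeping one index per source, as the abstract describes, lets each Kibana dashboard query only its own mapping while still sharing one search backend.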
Yahoo! Compute Coop (YCC). A Next-Generation Passive Cooling Design for Data Centers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robison, AD; Page, Christina; Lytle, Bob
The purpose of the Yahoo! Compute Coop (YCC) project is to research, design, build and implement a greenfield "efficient data factory" and to specifically demonstrate that the YCC concept is feasible for large facilities housing tens of thousands of heat-producing computing servers. The project scope for the Yahoo! Compute Coop technology includes: - Analyzing and implementing ways in which to drastically decrease energy consumption and waste output. - Analyzing the laws of thermodynamics and implementing naturally occurring environmental effects in order to maximize the "free-cooling" for large data center facilities. "Free cooling" is the direct usage of outside air to cool the servers vs. traditional "mechanical cooling" which is supplied by chillers or other Dx units. - Redesigning and simplifying building materials and methods. - Shortening and simplifying build-to-operate schedules while at the same time reducing initial build and operating costs. Selected for its favorable climate, the greenfield project site is located in Lockport, NY. Construction on the 9.0 MW critical load data center facility began in May 2009, with the fully operational facility deployed in September 2010. The relatively low initial build cost, compatibility with current server and network models, and the efficient use of power and water are all key features that make it a highly compatible and globally implementable design innovation for the data center industry. Yahoo! Compute Coop technology is designed to achieve 99.98% uptime availability. This integrated building design allows for free cooling 99% of the year via the building's unique shape and orientation, as well as server physical configuration.
Design of the central region in the Warsaw K-160 cyclotron
NASA Astrophysics Data System (ADS)
Toprek, Dragan; Sura, Josef; Choinski, Jaroslav; Czosnyka, Tomas
2001-08-01
This paper describes the design of the central region for h=2 and 3 modes of acceleration in the Warsaw K-160 cyclotron. The central region is unique and compatible with the two above-mentioned harmonic modes of operation. Only one spiral type inflector will be used. The electric field distribution in the inflector and in the four acceleration gaps has been numerically calculated from an electric potential map produced by the program RELAX3D. The geometry of the central region has been tested with the computations of orbits carried out by means of the computer code CYCLONE. The optical properties of the spiral inflector and the central region were studied by using the programs CASINO and CYCLONE, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisterson, D. L.
2007-03-14
Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year dating back to 1998. Table 1 shows the accumulated maximum operation time (planned uptime), the actual hours of operation, and the variance (unplanned downtime) for the period October 1 through December 31, 2006, for the fixed and mobile sites. Although the AMF is currently up and running in Niamey, Niger, Africa, the AMF statistics are reported separately and not included in the aggregate average with the fixed sites. The first quarter comprises a total of 2,208 hours. For all fixed sites, the actual data availability (and therefore actual hours of operation) exceeded the individual operational goals (as well as the aggregate average of the fixed sites) for the first quarter of fiscal year (FY) 2007. The Site Access Request System is a web-based database used to track visitors to the fixed sites, all of which have facilities that can be visited. The NSA locale has the Barrow and Atqasuk sites. The SGP site has a Central Facility, 23 extended facilities, 4 boundary facilities, and 3 intermediate facilities. The TWP locale has the Manus, Nauru, and Darwin sites. NIM represents the AMF statistics for the current deployment in Niamey, Niger, Africa. PYE represents the AMF statistics for the Point Reyes, California, past deployment in 2005.
In addition, users who do not want to wait for data to be provided through the ACRF Archive can request an account on the local site data system. The eight research computers are located at the Barrow and Atqasuk sites; the SGP Central Facility; the TWP Manus, Nauru, and Darwin sites; the DMF at PNNL; and the AMF in Niger. This report provides the cumulative numbers of visitors and user accounts by site for the period January 1, 2006 - December 31, 2006. The U.S. Department of Energy requires national user facilities to report facility use by total visitor days-broken down by institution type, gender, race, citizenship, visitor role, visit purpose, and facility-for actual visitors and for active user research computer accounts. During this reporting period, the ACRF Archive did not collect data on user characteristics in this way. Work is under way to collect and report these data. Table 2 shows the summary of cumulative users for the period January 1, 2006 - December 31, 2006. For the first quarter of FY 2007, the overall number of users is up from the last reporting period. The historical data show that there is an apparent relationship between the total number of users and the 'size' of field campaigns, called Intensive Operation Periods (IOPs): larger IOPs draw more of the site facility resources, which are reflected by the number of site visits and site visit days, research accounts, and device accounts. These types of users typically collect and analyze data in near-real time for a site-specific IOP that is in progress. However, the Archive accounts represent persistent (year-to-year) ACRF data users that often mine from the entire collection of ACRF data, which mostly includes routine data from the fixed and mobile sites, as well as cumulative IOP data sets. Archive data users continue to show a steady growth, which is independent of the size of IOPs. For this quarter, the number of Archive data user accounts was 961, the highest since record-keeping began. 
For reporting purposes, the three ACRF sites and the AMF operate 24 hours per day, 7 days per week, and 52 weeks per year. Although the AMF is not officially collecting data this quarter, personnel are regularly involved with teardown, packing, shipping, unpacking, setup, and maintenance activities, so they are included in the safety statistics. Time is reported in days instead of hours. If any lost work time is incurred by any employee, it is counted as a workday loss. Table 3 reports the consecutive days since the last recordable or reportable injury or incident causing damage to property, equipment, or vehicle for the period October 1 - December 31, 2006. There were no recordable or lost workdays or incidents for the first quarter of FY 2007.
Cellular Automata-Based Application for Driver Assistance in Indoor Parking Areas.
Caballero-Gil, Cándido; Caballero-Gil, Pino; Molina-Gil, Jezabel
2016-11-15
This work proposes an adaptive recommendation mechanism for smart parking that takes advantage of the popularity of smartphones and the rise of the Internet of Things. The proposal includes a centralized system to forecast available indoor parking spaces, and a low-cost mobile application to obtain data of actual and predicted parking occupancy. The described scheme uses data from both sources bidirectionally so that the centralized forecast system is fed with data obtained with the distributed system based on smartphones, and vice versa. The mobile application uses different wireless technologies to provide the forecast system with actual parking data and receive from the system useful recommendations about where to park. Thus, the proposal can be used by any driver to easily find available parking spaces in indoor facilities. The client software developed for smartphones is a lightweight Android application that supplies precise indoor positioning systems based on Quick Response codes or Near Field Communication tags, and semi-precise indoor positioning systems based on Bluetooth Low Energy beacons. The performance of the proposed approach has been evaluated by conducting computer simulations and real experimentation with a preliminary implementation. The results have shown the strengths of the proposal in the reduction of the time and energy costs to find available parking spaces.
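As an illustration of the kind of centralized forecasting the abstract describes (the authors' actual model is not specified here), a per-zone occupancy estimate could be updated by exponential smoothing as reports arrive from smartphone clients. The function name, smoothing factor, and numbers below are hypothetical.

```python
# Hypothetical sketch of a centralized parking-occupancy forecaster:
# blend each new smartphone report into a running per-zone estimate.

def update_forecast(current_estimate, reported_free, alpha=0.3):
    """Exponential smoothing: weight alpha on the newest report."""
    return (1 - alpha) * current_estimate + alpha * reported_free

estimate = 40.0  # hypothetical zone starting with ~40 free spaces
for report in [38, 35, 36, 30]:  # successive client reports
    estimate = update_forecast(estimate, report)
print(round(estimate, 1))  # prints 35.2
```

A real deployment would keep one such estimate per parking zone and feed recommendations back to clients over the wireless channels the paper lists.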
NASA Technical Reports Server (NTRS)
Minnis, Patrick; Smith, William L., Jr.; Garber, Donald P.; Ayers, J. Kirk; Doelling, David R.
1995-01-01
This document describes the initial formulation (Version 1.0.0) of the Atmospheric Radiation Measurement (ARM) program satellite data analysis procedures. Techniques are presented for calibrating geostationary satellite data with Sun-synchronous satellite radiances and for converting narrowband radiances to top-of-the-atmosphere fluxes and albedos. A methodology is documented for combining geostationary visible and infrared radiances with surface-based temperature observations to derive cloud amount, optical depth, height, thickness, temperature, and albedo. The analysis is limited to two grids centered over the ARM Southern Great Plains central facility in north-central Oklahoma. Daytime data taken during 5 April - 1 May 1994 were analyzed on the 0.3 deg and 0.5 deg latitude-longitude grids that cover areas of 0.9 deg x 0.9 deg and 10 deg x 14 deg, respectively. Conditions ranging from scattered low cumulus to thin cirrus and thick cumulonimbus occurred during the study period. Detailed comparisons with hourly surface observations indicate that the mean cloudiness is within a few percent of the surface-derived sky cover. Formats of the results are also provided. The data can be accessed through the World Wide Web computer network.
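Narrowband-to-broadband conversion of the sort mentioned above is commonly done with an empirical linear fit F = a + b·L against coincident broadband measurements; the sketch below shows the idea with made-up numbers, not the coefficients or data of this report.

```python
# Hedged illustration: fit an empirical narrowband-to-broadband line
# F = a + b * L by ordinary least squares. All values are made up.

def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept a, slope b

narrowband = [50.0, 80.0, 110.0, 140.0]   # hypothetical radiances
broadband = [120.0, 180.0, 240.0, 300.0]  # hypothetical TOA fluxes
a, b = fit_linear(narrowband, broadband)
print(a, b)  # the made-up points lie exactly on F = 20 + 2*L
```

Operational procedures typically stratify such fits by scene type and viewing geometry; this sketch only shows the core regression step.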
EPA Facility Registry Service (FRS): TRI
This web feature service contains location and facility identification information from EPA's Facility Registry Service (FRS) for the subset of facilities that link to the Toxic Release Inventory (TRI) System. TRI is a publicly available EPA database reported annually by certain covered industry groups, as well as federal facilities. It contains information about more than 650 toxic chemicals that are being used, manufactured, treated, transported, or released into the environment, and includes information about waste management and pollution prevention activities. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using vigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. This data set contains the subset of FRS integrated facilities that link to TRI facilities once the TRI data has been integrated into the FRS database. Additional information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs.
A distributed data base management facility for the CAD/CAM environment
NASA Technical Reports Server (NTRS)
Balza, R. M.; Beaudet, R. W.; Johnson, H. R.
1984-01-01
Current IPAD research in the area of distributed data base management considers facilities for supporting CAD/CAM data management in a heterogeneous network of computers encompassing multiple data base managers supporting a variety of data models. These facilities include coordinated execution of multiple DBMSs to provide for administration of and access to data distributed across them.
1980-06-05
N-231 High Reynolds Number Channel Facility (An Example of a Versatile Wind Tunnel). Tunnel 1 is a blowdown facility that utilizes interchangeable test sections and nozzles. The facility provides experimental support for fluid mechanics research, including experimental verification of aerodynamic computer codes and boundary-layer and airfoil studies that require high Reynolds number simulation.
Evaluation of Visual Computer Simulator for Computer Architecture Education
ERIC Educational Resources Information Center
Imai, Yoshiro; Imai, Masatoshi; Moritoh, Yoshio
2013-01-01
This paper presents a trial evaluation of a visual computer simulator in 2009-2011, which has been developed to serve as both an instruction facility and a learning tool simultaneously. It also illustrates an example of Computer Architecture education for university students and the use of an e-Learning tool for Assembly Programming in order to…
Hybrid Computation at Louisiana State University.
ERIC Educational Resources Information Center
Corripio, Armando B.
Hybrid computation facilities have been in operation at Louisiana State University since the spring of 1969. In part, they consist of an Electronics Associates, Inc. (EAI) Model 680 analog computer, an EAI Model 693 interface, and a Xerox Data Systems (XDS) Sigma 5 digital computer. The hybrid laboratory is used in a course on hybrid computation…
Computer Augmented Video Education.
ERIC Educational Resources Information Center
Sousa, M. B.
1979-01-01
Describes project CAVE (Computer Augmented Video Education), an ongoing effort at the U.S. Naval Academy to present lecture material on videocassette tape, reinforced by drill and practice through an interactive computer system supported by a 12 channel closed circuit television distribution and production facility. (RAO)
Short Pulse Laser Applications Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Town, R J; Clark, D S; Kemp, A J
We are applying our recently developed, LDRD-funded computational simulation tool to optimize and develop applications of Fast Ignition (FI) for stockpile stewardship. This report summarizes the work performed during a one-year exploratory research LDRD to develop FI point designs for the National Ignition Facility (NIF). These results were sufficiently encouraging to successfully propose a strategic initiative LDRD to design and perform the definitive FI experiment on the NIF. Ignition experiments on the NIF will begin in 2010 using the central hot spot (CHS) approach, which relies on the simultaneous compression and ignition of a spherical fuel capsule. Unlike this approach, the fast ignition (FI) method separates fuel compression from the ignition phase. In the compression phase, a laser such as NIF is used to implode a shell either directly, or by x rays generated from the hohlraum wall, to form a compact dense (~300 g/cm^3) fuel mass with an areal density of ~3.0 g/cm^2. To ignite such a fuel assembly requires depositing ~20 kJ into a ~35 µm spot delivered in a time short compared to the fuel disassembly time (~20 ps). This energy is delivered during the ignition phase by relativistic electrons generated by the interaction of an ultra-short high-intensity laser. The main advantages of FI over the CHS approach are higher gain, a lower ignition threshold, and a relaxation of the stringent symmetry requirements of the CHS approach. There is worldwide interest in FI and its associated science. Major experimental facilities are being constructed which will enable 'proof of principle' tests of FI in integrated subignition experiments, most notably the OMEGA-EP facility at the University of Rochester's Laboratory for Laser Energetics and the FIREX facility at Osaka University in Japan.
Also, scientists in the European Union have recently proposed the construction of a new FI facility, called HiPER, designed to demonstrate FI. Our design work has focused on the NIF, which is the only facility capable of forming a full-scale hydro assembly, and could be adapted for full-scale FI by the conversion of additional beams to short-pulse operation.
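The igniter-beam numbers quoted in the abstract (~20 kJ in ~20 ps into a ~35 µm spot) imply petawatt-class power at ultra-relativistic intensity; the arithmetic can be checked directly:

```python
import math

# Back-of-envelope check of the fast-ignition igniter requirements
# quoted above: ~20 kJ delivered in ~20 ps into a ~35 um diameter spot.
energy = 20e3          # J
duration = 20e-12      # s
spot_diameter = 35e-6  # m

power = energy / duration                            # W
area_cm2 = math.pi * (spot_diameter / 2) ** 2 * 1e4  # m^2 -> cm^2
intensity = power / area_cm2                         # W/cm^2
print(f"{power:.1e} W, {intensity:.1e} W/cm^2")  # prints 1.0e+15 W, 1.0e+20 W/cm^2
```

That is, a petawatt beam at roughly 10^20 W/cm^2, consistent with the abstract's statement that the energy is carried by relativistic electrons from an ultra-short high-intensity laser.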
The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diachin, L F; Garaizar, F X; Henson, V E
2009-10-12
In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand its role as a NEAMS user facility.
Hydrocode simulations of air and water shocks for facility vulnerability assessments.
Clutter, J Keith; Stahl, Michael
2004-01-02
Hydrocodes are widely used in the study of explosive systems but their use in routine facility vulnerability assessments has been limited due to the computational resources typically required. These requirements are due to the fact that the majority of hydrocodes have been developed primarily for the simulation of weapon-scale phenomena. It is not practical to use these same numerical frameworks on the large domains found in facility vulnerability studies. Here, a hydrocode formulated specifically for facility vulnerability assessments is reviewed. Techniques used to accurately represent the explosive source while maintaining computational efficiency are described. Submodels for addressing other issues found in typical terrorist attack scenarios are presented. In terrorist attack scenarios, loads produced by shocks play an important role in vulnerability. Due to the difference in the material properties of water and air and interface phenomena, there exists a significant contrast in wave propagation between the two media. These physical variations also require that special attention be paid to the mathematical and numerical models used in the hydrocodes. Simulations for a variety of air and water shock scenarios are presented to validate the computational models used in the hydrocode and highlight the phenomenological issues.
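The air-water contrast the abstract points to can be illustrated with simple acoustics (a deliberately minimal sketch; real hydrocodes use full equations of state, not linear acoustics). With nominal densities and sound speeds, the acoustic impedances differ by several orders of magnitude, so a normally incident pressure wave at the interface is almost entirely reflected:

```python
# Minimal acoustic illustration of the air/water contrast, using
# nominal property values (not hydrocode inputs from this paper).

def reflection_coefficient(z1, z2):
    """Pressure reflection coefficient for normal incidence: (Z2-Z1)/(Z2+Z1)."""
    return (z2 - z1) / (z2 + z1)

z_air = 1.2 * 343          # kg/(m^2*s): rho ~1.2 kg/m^3, c ~343 m/s
z_water = 1000.0 * 1480.0  # kg/(m^2*s): rho ~1000 kg/m^3, c ~1480 m/s
print(round(reflection_coefficient(z_air, z_water), 4))  # prints 0.9994
```

This near-total reflection is one reason the paper argues air and water shocks need distinct mathematical and numerical treatment.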
Emerging CAE technologies and their role in Future Ambient Intelligence Environments
NASA Astrophysics Data System (ADS)
Noor, Ahmed K.
2011-03-01
Dramatic improvements are on the horizon in Computer Aided Engineering (CAE) and various simulation technologies. The improvements are due, in part, to the developments in a number of leading-edge technologies and their synergistic combinations/convergence. The technologies include ubiquitous, cloud, and petascale computing; ultra high-bandwidth networks; pervasive wireless communication; knowledge based engineering; networked immersive virtual environments and virtual worlds; novel human-computer interfaces; and powerful game engines and facilities. This paper describes the frontiers and emerging simulation technologies, and their role in the future virtual product creation and learning/training environments. The environments will be ambient intelligence environments, incorporating a synergistic combination of novel agent-supported visual simulations (with cognitive learning and understanding abilities); immersive 3D virtual world facilities; development chain management systems and facilities (incorporating a synergistic combination of intelligent engineering and management tools); nontraditional methods; intelligent, multimodal and human-like interfaces; and mobile wireless devices. The virtual product creation environment will significantly enhance productivity and will stimulate creativity and innovation in future global virtual collaborative enterprises. The facilities in the learning/training environment will provide timely, engaging, personalized/collaborative and tailored visual learning.
Facilities | Computational Science | NREL
Drone Defense System Architecture for U.S. Navy Strategic Facilities
2017-09-01
evaluation and weapons assignment (TEWA) to properly address threats. This report follows a systems engineering process to develop a software architecture...C-UAS requires a central system to connect these new and existing systems. The central system uses data fusion and threat evaluation and weapons...
Clarke Central High School: One Student at a Time
ERIC Educational Resources Information Center
Principal Leadership, 2013
2013-01-01
There is excitement in the air at Clarke Central High School in anticipation of a $28 million renovation planned on its 27-acre, urban campus located just minutes from the University of Georgia in Athens. This extensive construction aims to fulfill a board of education mandate to provide equity among the Clarke County school facilities and will…
McGuire, Megan; Pinoges, Loretxu; Kanapathipillai, Rupa; Munyenyembe, Tamika; Huckabee, Martha; Makombe, Simon; Szumilin, Elisabeth; Heinzelmann, Annette; Pujades-Rodríguez, Mar
2012-01-01
Objective: To describe patient antiretroviral therapy (cART) outcomes associated with intensive decentralization of services in a rural HIV program in Malawi. Methods: Longitudinal analysis of data from HIV-infected patients starting cART between August 2001 and December 2008 and of a cross-sectional immunovirological assessment conducted 12 (±2) months after therapy start. One-year mortality, lost-to-follow-up, and attrition (deaths and lost to follow-up) rates were estimated with exact Poisson 95% confidence intervals (CI) by type of care delivery and year of initiation. Association of virological suppression (<50 copies/mL) and immunological success (CD4 gain ≥100 cells/µL) with type of care was investigated using multiple logistic regression. Results: During the study period, 4322 cART patients received centralized care and 11,090 decentralized care. At therapy start, patients treated in decentralized health facilities had higher median CD4 count levels (167 vs. 130 cells/µL, P<0.0001) than other patients. Two years after cART start, program attrition was lower in decentralized than centralized facilities (9.9 per 100 person-years, 95% CI: 9.5-10.4 vs. 20.8 per 100 person-years, 95% CI: 19.7-22.0). One year after treatment start, differences in immunological success (adjusted OR = 1.23, 95% CI: 0.83-1.83) and viral suppression (adjusted OR = 0.80, 95% CI: 0.56-1.14) between patients followed at centralized and decentralized facilities were not statistically significant. Conclusions: In rural Malawi, 1- and 2-year program attrition was lower in decentralized than in centralized health facilities and no statistically significant differences in one-year immunovirological outcomes were observed between the two health care levels. Longer follow-up is needed to confirm these results. PMID:23077473
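The abstract's attrition rates carry exact Poisson confidence intervals, which require chi-square quantiles; a widely used large-sample approximation on the log scale can be sketched with the standard library alone. The event count and person-time below are made up, not taken from the study:

```python
import math

# Hedged sketch of a Poisson rate CI. The paper uses exact (chi-square
# based) intervals; shown here is the common log-scale normal
# approximation, with hypothetical counts.

def approx_poisson_ci(k, person_years, z=1.96):
    """95% CI for an event rate k/person_years, normal approx on log scale."""
    rate = k / person_years
    half_width = z / math.sqrt(k)
    return rate * math.exp(-half_width), rate * math.exp(half_width)

lo, hi = approx_poisson_ci(k=400, person_years=4000.0)
print(f"rate = 10.0 per 100 PY, 95% CI: {lo * 100:.1f}-{hi * 100:.1f}")
```

With large event counts, as in this cohort, the approximation is close to the exact interval; for small counts the exact chi-square form should be preferred.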
Sandia National Laboratories: Locations: Kauai Test Facility
Hu, Xiangen; Graesser, Arthur C
2004-05-01
The Human Use Regulatory Affairs Advisor (HURAA) is a Web-based facility that provides help and training on the ethical use of human subjects in research, based on documents and regulations in United States federal agencies. HURAA has a number of standard features of conventional Web facilities and computer-based training, such as hypertext, multimedia, help modules, glossaries, archives, links to other sites, and page-turning didactic instruction. HURAA also has these intelligent features: (1) an animated conversational agent that serves as a navigational guide for the Web facility, (2) lessons with case-based and explanation-based reasoning, (3) document retrieval through natural language queries, and (4) a context-sensitive Frequently Asked Questions segment, called Point & Query. This article describes the functional learning components of HURAA, specifies its computational architecture, and summarizes empirical tests of the facility on learners.
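Document retrieval through natural language queries, as HURAA provides, can be approximated in miniature with TF-IDF weighted keyword matching (a generic sketch, not HURAA's actual engine; the documents and query below are invented):

```python
import math
from collections import Counter

# Toy TF-IDF retrieval: score each document by the summed TF-IDF weight
# of the query's words, and return the best match. Purely illustrative.
docs = {
    "consent": "informed consent must be obtained from human subjects",
    "irb": "the review board approves research involving human subjects",
}

def tfidf(text, df, n_docs):
    tf = Counter(text.split())
    return {w: c * math.log(n_docs / df[w]) for w, c in tf.items()}

df = Counter(w for text in docs.values() for w in set(text.split()))
vecs = {name: tfidf(text, df, len(docs)) for name, text in docs.items()}

def score(query, vec):
    return sum(vec.get(w, 0.0) for w in query.split())

best = max(vecs, key=lambda name: score("informed consent", vecs[name]))
print(best)  # prints consent
```

Words shared by every document (here "human", "subjects") get zero IDF weight, so only distinctive terms drive the ranking; production systems add stemming, smoothing, and cosine normalization on top of this core idea.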
North Korea: Back on the Terrorism List?
2010-06-29
Sweden. North Korea has called the investigation’s conclusion a “fabrication.” Pyongyang Korean Central Broadcasting Station, “DPRK NDC Spokesman’s...trainers to southern Lebanon where they instructed Hezbollah cadre in the development of extensive underground military facilities, including tunnels ...Guard” that one such North Korean-assisted facility in southern Lebanon was a sophisticated, 25-kilometer, underground tunnel with numerous assembly