Science.gov

Sample records for facility distributed computer

  1. DNET: A communications facility for distributed heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Tole, John; Nagappan, S.; Clayton, J.; Ruotolo, P.; Williamson, C.; Solow, H.

    1989-01-01

    This document describes DNET, a heterogeneous data communications networking facility. DNET allows programs operating on hosts on dissimilar networks to communicate with one another without concern for computer hardware, network protocol, or operating system differences. The overall DNET network is defined as the collection of host machines/networks on which the DNET software is operating. Each underlying network is considered a DNET 'domain'. Data communications service is provided between any two processes on any two hosts on any of the networks (domains) that may be reached via DNET. DNET provides protocol-transparent, reliable, streaming data transmission between hosts (restricted initially to DECnet and TCP/IP networks). DNET also provides a variable-length datagram service with optional return receipts.
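
    The last sentence above describes a variable-length datagram service with optional return receipts. As a loose, generic illustration of that idea only (not DNET's actual protocol or API), the sketch below sends a UDP datagram with Python's standard socket module and waits for an acknowledgement when one is requested; the endpoint address and the one-byte receipt flag are invented for the example.

    ```python
    # Generic illustration of "datagram with optional return receipt".
    # Not DNET: the address and the 1-byte receipt flag are invented here.
    import socket

    ENDPOINT = ("127.0.0.1", 9999)   # hypothetical receiver address

    def receiver():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(ENDPOINT)
        data, sender = sock.recvfrom(65535)      # variable-length datagram
        if data[:1] == b"R":                     # first byte asks for a receipt
            sock.sendto(b"ACK", sender)          # the optional return receipt
        sock.close()

    def sender(payload: bytes, want_receipt: bool = True):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto((b"R" if want_receipt else b"-") + payload, ENDPOINT)
        if want_receipt:
            sock.settimeout(2.0)                 # don't wait forever for the receipt
            receipt, _ = sock.recvfrom(16)
            print("receipt:", receipt)
        sock.close()
    ```

    Running receiver() and sender(b"hello") in two separate processes shows the exchange; omitting the receipt request turns it back into plain fire-and-forget delivery.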

  2. Integration of distributed plant process computer systems to nuclear power generation facilities

    SciTech Connect

    Bogard, T.; Finlay, K.

    1996-11-01

    Many operating nuclear power generation facilities are replacing their plant process computers. Such replacement projects are driven by equipment obsolescence issues and associated objectives to improve plant operability, increase plant information access, improve man-machine interface characteristics, and reduce operation and maintenance costs. This paper describes a few recently completed and on-going replacement projects with emphasis upon the application of integrated distributed plant process computer systems. The variations in distributed system design across these recent projects show how various configurations can address needs for flexibility, open architecture, and integration of technological advancements in instrumentation and control technology. Architectural considerations for optimal integration of the plant process computer and the plant process instrumentation & control are evident from the variations in design features.

  3. The Overview of the National Ignition Facility Distributed Computer Control System

    SciTech Connect

    Lagin, L J; Bettenhausen, R C; Carey, R A; Estes, C M; Fisher, J M; Krammen, J E; Reed, R K; VanArsdall, P J; Woodruff, J P

    2001-10-15

    The Integrated Computer Control System (ICCS) for the National Ignition Facility (NIF) is a layered architecture of 300 front-end processors (FEP) coordinated by supervisor subsystems including automatic beam alignment and wavefront control, laser and target diagnostics, pulse power, and shot control timed to 30 ps. FEP computers incorporate either VxWorks on PowerPC or Solaris on UltraSPARC processors that interface to over 45,000 control points attached to VME-bus or PCI-bus crates, respectively. Typical devices are stepping motors, transient digitizers, calorimeters, and photodiodes. The front-end layer also includes a segment of an additional 14,000 control points for industrial controls including vacuum, argon, synthetic air, and safety interlocks, implemented with Allen-Bradley programmable logic controllers (PLCs). The computer network is augmented with asynchronous transfer mode (ATM), which delivers video streams from 500 sensor cameras monitoring the 192 laser beams to operator workstations. Software is based on an object-oriented framework using CORBA distribution that incorporates services for archiving, machine configuration, graphical user interface, monitoring, event logging, scripting, alert management, and access control. Software coding, using a mixed-language environment of Ada95 and Java, is one-third complete at over 300 thousand source lines. Control system installation is currently under way for the first 8 beams, with project completion scheduled for 2008.

  4. Distributed computer control system in the Nova Laser Fusion Test Facility

    SciTech Connect

    Not Available

    1985-09-01

    The EE Technical Review has two purposes - to inform readers of various activities within the Electronics Engineering Department and to promote the exchange of ideas. The articles, by design, are brief summaries of EE work. The articles included in this report are as follows: Overview - Nova Control System; Centralized Computer-Based Controls for the Nova Laser Facility; Nova Pulse-Power Control System; Nova Laser Alignment Control System; Nova Beam Diagnostic System; Nova Target-Diagnostics Control System; and Nova Shot Scheduler. The 7 papers are individually abstracted.

  5. Computer security in DOE distributed computing systems

    SciTech Connect

    Hunteman, W.J.

    1990-01-01

    The modernization of DOE facilities amid limited funding is creating pressure on DOE facilities to find innovative approaches to their daily activities. Distributed computing systems are becoming cost-effective solutions to improved productivity. This paper defines and describes typical distributed computing systems in the DOE. The special computer security problems present in distributed computing systems are identified and compared with traditional computer systems. The existing DOE computer security policy supports only basic networks and traditional computer systems and does not address distributed computing systems. A review of the existing policy requirements is followed by an analysis of the policy as it applies to distributed computing systems. Suggested changes in the DOE computer security policy are identified and discussed. The long lead time in updating DOE policy will require guidelines for applying the existing policy to distributed systems. Some possible interim approaches are identified and discussed. 2 refs.

  6. Redirecting Under-Utilised Computer Laboratories into Cluster Computing Facilities

    ERIC Educational Resources Information Center

    Atkinson, John S.; Spenneman, Dirk H. R.; Cornforth, David

    2005-01-01

    Purpose: To provide administrators at an Australian university with data on the feasibility of redirecting under-utilised computer laboratory facilities into a distributed high performance computing facility. Design/methodology/approach: The individual log-in records for each computer located in the computer laboratories at the university were…

  7. Physics Division computer facilities

    SciTech Connect

    Cyborski, D.R.; Teh, K.M.

    1995-08-01

    The Physics Division maintains several computer systems for data analysis, general-purpose computing, and word processing. While the VMS VAX clusters are still used, this past year saw a greater shift to the Unix Cluster with the addition of more RISC-based Unix workstations. The main Divisional VAX cluster, which consists of two VAX 3300s configured as a dual-host system, serves as boot nodes and disk servers to seven other satellite nodes consisting of two VAXstation 3200s, three VAXstation 3100 machines, a VAX-11/750, and a MicroVAX II. There are three 6250/1600 bpi 9-track tape drives, six 8-mm tapes and about 9.1 GB of disk storage served to the cluster by the various satellites. Also, two of the satellites (the MicroVAX and VAX-11/750) have DAPHNE front-end interfaces for data acquisition. Since the tape drives are accessible cluster-wide via a software package, they are, in addition to replay, used for tape-to-tape copies. There is, however, a satellite node outfitted with two 8-mm drives available for this purpose. Although not part of the main cluster, a DEC 3000 Alpha machine obtained for data acquisition is also available for data replay. In one case, users reported a performance increase by a factor of 10 when using this machine.

  8. AMRITA -- A computational facility

    SciTech Connect

    Shepherd, J.E.; Quirk, J.J.

    1998-02-23

    Amrita is a software system for automating numerical investigations. The system is driven using its own powerful scripting language, Amrita, which facilitates both the composition and archiving of complete numerical investigations, as distinct from isolated computations. Once archived, an Amrita investigation can later be reproduced by any interested party, and not just the original investigator, for no cost other than the raw CPU time needed to parse the archived script. In fact, this entire lecture can be reconstructed in such a fashion. To do this, the script: constructs a number of shock-capturing schemes; runs a series of test problems; generates the plots shown; outputs the LaTeX to typeset the notes; and performs a myriad of behind-the-scenes tasks to glue everything together. Thus Amrita has all the characteristics of an operating system and should not be mistaken for a common-or-garden code.

  9. Coping with distributed computing

    SciTech Connect

    Cormell, L.

    1992-09-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent, he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by providing some examples of the approaches taken at various HEP institutions. In addition, a brief review of commercial directions or products for distributed computing and management will be given.

  10. Distributed computing systems programme

    SciTech Connect

    Duce, D.

    1984-01-01

    Publication of this volume coincides with the completion of the U.K. Science and Engineering Research Council's coordinated programme of research in Distributed Computing Systems (DCS) which ran from 1977 to 1984. The volume is based on presentations made at the programme's final conference. The first chapter explains the origins and history of DCS and gives an overview of the programme and its achievements. The remaining sixteen chapters review particular research themes (including imperative and declarative languages, and performance modelling), and describe particular research projects in technical areas including local area networks, design, development and analysis of concurrent systems, parallel algorithm design, functional programming and non-von Neumann computer architectures.

  11. Quantum computing Hyper Terahertz Facility opens

    NASA Astrophysics Data System (ADS)

    Singh Chadha, Kulvinder

    2016-01-01

    A new facility has opened at the University of Surrey to use terahertz radiation for quantum computing. The Hyper Terahertz Facility (HTF) is a joint collaboration between the University of Surrey and the National Physical Laboratory (NPL).

  12. 2014 Annual Report - Argonne Leadership Computing Facility

    SciTech Connect

    Collins, James R.; Papka, Michael E.; Cerny, Beth A.; Coffey, Richard M.

    2014-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  13. 2015 Annual Report - Argonne Leadership Computing Facility

    SciTech Connect

    Collins, James R.; Papka, Michael E.; Cerny, Beth A.; Coffey, Richard M.

    2015-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  14. Central Computational Facility CCF communications subsystem options

    NASA Technical Reports Server (NTRS)

    Hennigan, K. B.

    1979-01-01

    A MITRE study which investigated the communication options available to support both the remaining Central Computational Facility (CCF) computer systems and the proposed U1108 replacements is presented. The facilities utilized to link the remote user terminals with the CCF were analyzed and guidelines to provide more efficient communications were established.

  15. America's most computer advanced healthcare facilities.

    PubMed

    1993-02-01

    Healthcare Informatics polled industry experts for nominations for this listing of America's Most Computer-Advanced Healthcare Facilities. Nominations were reviewed for extent of departmental automation, leading-edge applications, advanced point-of-care technologies, and networking communications capabilities. Additional consideration was given to smaller facilities automated beyond "normal expectations." Facility representatives who believe their organizations should be included in our next listing, please contact Healthcare Informatics for a nomination form.

  16. Distributed Computing Framework for Synthetic Radar Application

    NASA Technical Reports Server (NTRS)

    Gurrola, Eric M.; Rosen, Paul A.; Aivazis, Michael

    2006-01-01

    We are developing an extensible software framework, in response to Air Force and NASA needs for distributed computing facilities for a variety of radar applications. The objective of this work is to develop a Python-based software framework, that is, the framework elements of the middleware that allow developers to control processing flow on a grid in a distributed computing environment. Framework architectures to date allow developers to connect processing functions together as interchangeable objects, thereby allowing a data flow graph to be devised for a specific problem to be solved. The Pyre framework, developed at the California Institute of Technology (Caltech) and now being used as the basis for next-generation radar processing at JPL, is a Python-based software framework. We have extended the Pyre framework to include new facilities to deploy processing components as services, including components that monitor and assess the state of the distributed network for eventual real-time control of grid resources.
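
    The "processing functions connected together as interchangeable objects" idea can be pictured with a few lines of plain Python. The class and method names below are invented for illustration; they are not the Pyre framework's actual API.

    ```python
    # Invented, minimal sketch of a data-flow pipeline of interchangeable
    # components; this is not the Pyre framework's real API.
    class Component:
        """A processing stage that can be chained to downstream stages."""
        def __init__(self, func):
            self.func = func
            self.downstream = []

        def connect(self, other):
            self.downstream.append(other)
            return other                     # enables fluent chaining

        def run(self, data):
            result = self.func(data)
            for node in self.downstream:     # push the result along the graph
                node.run(result)

    # Toy pipeline: difference two (fake) radar acquisitions, then report size.
    difference = Component(lambda pair: [a - b for a, b in zip(*pair)])
    report = Component(lambda samples: print("output samples:", len(samples)))

    difference.connect(report)
    difference.run(([1.0, 2.0, 3.0], [0.5, 1.5, 2.5]))
    ```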

  17. The Fermilab Central Computing Facility architectural model

    SciTech Connect

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs.

  18. Distributed GPU Computing in GIScience

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.

    2013-12-01

    Geoscientists strive to discover potential principles and patterns hidden inside ever-growing Big Data for scientific discoveries. To better achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira et al., 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges caused by the increasing amount of datasets from different domains, such as social media, earth observation, and environmental sensing (Li et al., 2013). Meanwhile, CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, with GPU-based technology maturing in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to the traditional microprocessor, the modern GPU, as a compelling alternative microprocessor, offers outstanding parallel processing capability with cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphical rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) for each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build up a virtual supercomputer to support CPU-based and GPU-based computing in a distributed computing environment; 3) GPUs, as specific graphics-targeted devices, are used to greatly improve the rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies. Reference: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. Visualization and Computer Graphics, IEEE

  19. BESIII production with distributed computing

    NASA Astrophysics Data System (ADS)

    Zhang, X. M.; Yan, T.; Zhao, X. H.; Ma, Z. T.; Yan, X. F.; Lin, T.; Deng, Z. Y.; Li, W. D.; Belov, S.; Pelevanyuk, I.; Zhemchugov, A.; Cai, H.

    2015-12-01

    Distributed computing is necessary nowadays for high energy physics experiments to organize heterogeneous computing resources all over the world to process enormous amounts of data. The BESIII experiment in China has established its own distributed computing system, based on DIRAC, as a supplement to local clusters, collecting cluster, grid, desktop and cloud resources from collaborating member institutes around the world. The system consists of workload management and data management to deal with the BESIII Monte Carlo production workflow in a distributed environment. A dataset-based data transfer system has been developed to support data movements among sites. File and metadata management tools and a job submission frontend have been developed to provide a virtual layer for BESIII physicists to use distributed resources. Moreover, the paper describes the experience of coping with limited grid expertise and manpower within the BESIII community.

  20. The Laboratory for Oceans Computing Facility

    NASA Technical Reports Server (NTRS)

    Kao, R.

    1988-01-01

    The first VAX computer in the Laboratory for Oceans Computing Facility (LOCF) was installed and the facility was largely expanded. The growth is not only in hardware and software, but also in the number of users and in supporting research and development projects. The LOCF serves as a general purpose computing facility for: ocean color research projects, sea ice research projects, processing of the Nimbus-7 Coastal Zone Color Scanner data set, real time ingest and analysis of TIROS-N satellite data, study of the Synthetic Aperture Radar data, study of LANDSAT data, and many others. The physical space and the electrical power layout of the computing room were modified to accommodate all the equipment. The LOCF has several image processing stations which include two International Imaging Systems (IIS) model 75 processors and one Adage processor. The facility has the capability of ingesting the TIROS-N HRPT satellite data on a real time basis. More than 30 software packages were installed on the systems. System software packages, network software, FORTRAN and C compilers, database management software, image processing software, graphics, mathematics and statistics packages, TAE, Catalog Manager, GEMPAK, LAS, and other software developed on the LOCF computers, such as SEAPAK, have greatly advanced the capability of the LOCF.

  1. Distributed computing at the SSCL

    SciTech Connect

    Cormell, L.; White, R.

    1993-05-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent, he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory. In addition, a brief review of the future directions of commercial products for distributed computing and management will be given.

  2. Parallel distributed computing using Python

    NASA Astrophysics Data System (ADS)

    Dalcin, Lisandro D.; Paz, Rodrigo R.; Kler, Pablo A.; Cosimo, Alejandro

    2011-09-01

    This work presents two software components aimed at relieving the costs of accessing high-performance parallel computing resources within a Python programming environment: MPI for Python and PETSc for Python. MPI for Python is a general-purpose Python package that provides bindings for the Message Passing Interface (MPI) standard using any back-end MPI implementation. Its facilities allow parallel Python programs to easily exploit multiple processors using the message passing paradigm. PETSc for Python provides access to the Portable, Extensible Toolkit for Scientific Computation (PETSc) libraries. Its facilities allow sequential and parallel Python applications to exploit state-of-the-art algorithms and data structures readily available in PETSc for the solution of large-scale problems in science and engineering. MPI for Python and PETSc for Python are fully integrated into PETSc-FEM, an MPI and PETSc based parallel, multiphysics, finite elements code developed at CIMEC laboratory. This software infrastructure supports research activities related to simulation of fluid flows with applications ranging from the design of microfluidic devices for biochemical analysis to modeling of large-scale stream/aquifer interactions.
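
    As a minimal sketch of the message-passing style the package exposes (assuming mpi4py and a working MPI implementation are installed), the script below sums 0..999 across however many ranks it is launched with; the script name in the launch command is arbitrary.

    ```python
    # Minimal mpi4py sketch; launch with e.g. `mpiexec -n 4 python sum_demo.py`
    # (the script name is arbitrary).
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each rank sums a strided slice of 0..999, then the pieces are combined.
    local = np.arange(rank, 1000, size, dtype="d").sum()
    total = comm.allreduce(local, op=MPI.SUM)

    if rank == 0:
        print(f"sum of 0..999 over {size} ranks:", total)
    ```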

  3. Oak Ridge Leadership Computing Facility Position Paper

    SciTech Connect

    Oral, H Sarp; Hill, Jason J; Thach, Kevin G; Podhorszki, Norbert; Klasky, Scott A; Rogers, James H; Shipman, Galen M

    2011-01-01

    This paper discusses the business, administration, reliability, and usability aspects of storage systems at the Oak Ridge Leadership Computing Facility (OLCF). The OLCF has developed key competencies in architecting and administering large-scale Lustre deployments as well as HPSS archival systems. Additionally, as these systems are architected, deployed, and expanded over time, reliability and availability factors are a primary driver. This paper focuses on the implementation of the Spider parallel Lustre file system as well as the implementation of the HPSS archive at the OLCF.

  4. Computer modeling of commercial refrigerated warehouse facilities

    SciTech Connect

    Nicoulin, C.V.; Jacobs, P.C.; Tory, S.

    1997-07-01

    The use of computer models to simulate the energy performance of large commercial refrigeration systems typically found in food processing facilities is an area of engineering practice that has seen little development to date. Current techniques employed in predicting energy consumption by such systems have focused on temperature bin methods of analysis. Existing simulation tools such as DOE2 are designed to model commercial buildings and grocery store refrigeration systems. The HVAC and refrigeration system performance models in these simulation tools model equipment common to commercial buildings and groceries, and respond to energy-efficiency measures likely to be applied to these building types. The applicability of traditional building energy simulation tools to model refrigerated warehouse performance and analyze energy-saving options is limited. The paper will present the results of modeling work undertaken to evaluate energy savings resulting from incentives offered by a California utility to its Refrigerated Warehouse Program participants. The TRNSYS general-purpose transient simulation model was used to predict facility performance and estimate program savings. Custom TRNSYS components were developed to address modeling issues specific to refrigerated warehouse systems, including warehouse loading door infiltration calculations, an evaporator model, single-stage and multi-stage compressor models, evaporative condenser models, and defrost energy requirements. The main focus of the paper will be on the modeling approach. The results from the computer simulations, along with overall program impact evaluation results, will also be presented.

  5. Overlapping clusters for distributed computation.

    SciTech Connect

    Mirrokni, Vahab; Andersen, Reid; Gleich, David F.

    2010-11-01

    Scalable, distributed algorithms must address communication problems. We investigate overlapping clusters, or vertex partitions that intersect, for graph computations. This setup stores more of the graph than required but then affords the ease of implementation of vertex partitioned algorithms. Our hope is that this technique allows us to reduce communication in a computation on a distributed graph. The motivation above draws on recent work in communication avoiding algorithms. Mohiyuddin et al. (SC09) design a matrix-powers kernel that gives rise to an overlapping partition. Fritzsche et al. (CSC2009) develop an overlapping clustering for a Schwarz method. Both techniques extend an initial partitioning with overlap. Our procedure generates overlap directly. Indeed, Schwarz methods are commonly used to capitalize on overlap. Elsewhere, overlapping communities (Ahn et al, Nature 2009; Mishra et al. WAW2007) are now a popular model of structure in social networks. These have long been studied in statistics (Cole and Wishart, CompJ 1970). We present two types of results: (i) an estimated swapping probability ρ∞; and (ii) the communication volume of a parallel PageRank solution (link-following α = 0.85) using an additive Schwarz method. The volume ratio is the amount of extra storage for the overlap (2 means we store the graph twice). Below, as the ratio increases, the swapping probability and PageRank communication volume decrease.
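
    For context on the second result, the underlying computation is ordinary PageRank with link-following probability α = 0.85; the toy power iteration below shows that baseline iteration in plain Python. It is not the paper's distributed additive Schwarz solver over overlapping clusters, only the serial computation being parallelized.

    ```python
    # Baseline serial PageRank by power iteration (alpha = link-following
    # probability). Context only; not the paper's Schwarz method.
    def pagerank(adj, alpha=0.85, iters=100):
        n = len(adj)
        rank = [1.0 / n] * n
        for _ in range(iters):
            new = [(1.0 - alpha) / n] * n
            for u, neighbors in adj.items():
                if neighbors:
                    share = alpha * rank[u] / len(neighbors)
                    for v in neighbors:
                        new[v] += share
                else:                            # dangling node: spread uniformly
                    for v in range(n):
                        new[v] += alpha * rank[u] / n
            rank = new
        return rank

    # Toy 4-node directed graph as an adjacency dictionary.
    graph = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
    print([round(r, 3) for r in pagerank(graph)])
    ```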

  6. A distributed data base management facility for the CAD/CAM environment

    NASA Technical Reports Server (NTRS)

    Balza, R. M.; Beaudet, R. W.; Johnson, H. R.

    1984-01-01

    Current IPAD research in the area of distributed data base management considers facilities for supporting CAD/CAM data management in a heterogeneous network of computers encompassing multiple data base managers supporting a variety of data models. These facilities include coordinated execution of multiple DBMSs to provide for administration of and access to data distributed across them.

  7. Particle Size Distribution in Aluminum Manufacturing Facilities

    PubMed Central

    Liu, Sa; Noth, Elizabeth M.; Dixon-Ernst, Christine; Eisen, Ellen A.; Cullen, Mark R.; Hammond, S. Katharine

    2015-01-01

    As part of exposure assessment for an ongoing epidemiologic study of heart disease and fine particle exposures in the aluminum industry, area particle samples were collected in production facilities to assess instrument reliability and particle size distribution at different process areas. Personal modular impactors (PMI) and mini micro-orifice uniform deposit impactors (MiniMOUDI) were used. The coefficient of variation (CV) of co-located samples was used to evaluate the reproducibility of the samplers. PM2.5 measured by PMI was compared to PM2.5 calculated from MiniMOUDI data. Mass median aerodynamic diameter (MMAD) and concentrations of sub-micrometer (PM1.0) and quasi-ultrafine (PM0.56) particles were evaluated to characterize particle size distribution. Most of the CVs were less than 30%. The slope of the linear regression of PMI_PM2.5 versus MiniMOUDI_PM2.5 was 1.03 mg/m3 per mg/m3 (± 0.05), with a correlation coefficient of 0.97 (± 0.01). Particle size distribution varied substantively in smelters, whereas it was less variable in fabrication units, with significantly smaller MMADs (arithmetic mean of MMADs: 2.59 μm in smelters vs. 1.31 μm in fabrication units, p = 0.001). Although the total particle concentration was more than two times higher in the smelters than in the fabrication units, the fraction of PM10 which was PM1.0 or PM0.56 was significantly lower in the smelters than in the fabrication units (p < 0.001). Consequently, the concentrations of sub-micrometer and quasi-ultrafine particles were similar in these two types of facilities. It would appear that studies evaluating ultrafine particle exposure in the aluminum industry should focus not only on the smelters, but also on the fabrication facilities. PMID:26478760
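
    The two summary statistics named in the abstract (the CV of co-located samples and the regression of PMI PM2.5 on MiniMOUDI PM2.5) reduce to a few lines of NumPy; the numbers below are made up for illustration and are not data from the study.

    ```python
    # Illustrative only: made-up numbers, not data from the study.
    import numpy as np

    # Coefficient of variation (CV) of co-located duplicate samples.
    colocated = np.array([0.41, 0.46, 0.44])      # hypothetical PM2.5, mg/m3
    cv = colocated.std(ddof=1) / colocated.mean() * 100
    print(f"CV of co-located samples: {cv:.1f}%")

    # Slope and correlation of PMI PM2.5 regressed on MiniMOUDI PM2.5.
    mini = np.array([0.20, 0.45, 0.80, 1.10])     # hypothetical mg/m3
    pmi = np.array([0.22, 0.47, 0.81, 1.15])
    slope, intercept = np.polyfit(mini, pmi, 1)
    r = np.corrcoef(mini, pmi)[0, 1]
    print(f"slope = {slope:.2f}, r = {r:.2f}")
    ```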

  8. Cooperative Fault Tolerant Distributed Computing

    SciTech Connect

    Fagg, Graham E.

    2006-03-15

    HARNESS was proposed as a system that combined the best of emerging technologies found in current distributed computing research and commercial products into a very flexible, dynamically adaptable framework that could be used by applications to allow them to evolve and better handle their execution environment. The HARNESS system was designed using the considerable experience from previous projects such as PVM, MPI, IceT and Cumulvs. As such, the system was designed to avoid the common problems found with these current systems, providing no single point of failure and the ability to survive machine, node and software failures. Additional features included improved inter-component connectivity, with full support for dynamic downloading of additional components at run-time, thus reducing the stress on application developers to build in all the libraries they need in advance.

  9. A comparison of queueing, cluster and distributed computing systems

    NASA Technical Reports Server (NTRS)

    Kaplan, Joseph A.; Nelson, Michael L.

    1993-01-01

    Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost-effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster. Cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing Systems (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.

  10. Computational analysis of irradiation facilities at the JSI TRIGA reactor.

    PubMed

    Snoj, Luka; Zerovnik, Gašper; Trkov, Andrej

    2012-03-01

    Characterization and optimization of irradiation facilities in a research reactor is important for optimal performance. Nowadays this is commonly done with advanced Monte Carlo neutron transport computer codes such as MCNP. However, the computational model in such calculations should be verified and validated with experiments. In the paper we describe the irradiation facilities at the JSI TRIGA reactor and demonstrate their computational characterization to support experimental campaigns by providing information on the characteristics of the irradiation facilities. PMID:22154389

  11. National Ignition Facility integrated computer control system

    SciTech Connect

    Van Arsdall, P.J., LLNL

    1998-06-01

    The NIF design team is developing the Integrated Computer Control System (ICCS), which is based on an object-oriented software framework applicable to event-driven control systems. The framework provides an open, extensible architecture that is sufficiently abstract to construct future mission-critical control systems. The ICCS will become operational when the first 8 out of 192 beams are activated in mid 2000. The ICCS consists of 300 front-end processors attached to 60,000 control points coordinated by a supervisory system. Computers running either Solaris or VxWorks are networked over a hybrid configuration of switched fast Ethernet and asynchronous transfer mode (ATM). ATM carries digital motion video from sensors to operator consoles. Supervisory software is constructed by extending the reusable framework components for each specific application. The framework incorporates services for database persistence, system configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. More than twenty collaborating software applications are derived from the common framework. The framework is interoperable among different kinds of computers and functions as a plug-in software bus by leveraging a common object request brokering architecture (CORBA). CORBA transparently distributes the software objects across the network. Because of the pivotal role played, CORBA was tested to ensure adequate performance.

  12. The ATLAS computing model & distributed computing evolution

    NASA Astrophysics Data System (ADS)

    Jones, Roger W. L.; Atlas Collaboration

    2012-12-01

    Despite only a brief availability of beam-related data, the typical usage patterns and operational requirements of the ATLAS computing model have been exercised, and the model as originally constructed remains remarkably unchanged. Resource requirements have been revised, and cosmic ray running has exercised much of the model in both duration and volume. The operational model has been adapted in several ways to increase performance and meet the as-delivered functionality of the available middleware. There are also changes reflecting the emerging roles of the different data formats. The model continues to evolve with a heightened focus on end-user performance; the key tools developed in the operational system are outlined, with an emphasis on those under recent development.

  13. Computer Profile of School Facilities Energy Consumption.

    ERIC Educational Resources Information Center

    Oswalt, Felix E.

    This document outlines a computerized management tool designed to enable building managers to identify energy consumption as related to types and uses of school facilities for the purpose of evaluating and managing the operation, maintenance, modification, and planning of new facilities. Specifically, it is expected that the statistics generated…

  14. The Feasibility of Replacing Existing Central Computers with A Single Computer Facility.

    ERIC Educational Resources Information Center

    Richey, R. Wayne

    1979-01-01

    The feasibility of replacing existing central processing units with a single computer facility is discussed. An analysis of the computing facilities for the Iowa state universities is presented and supports the retention of decentralized facilities. Efficiency, costs, and operating considerations are examined. (SF)

  15. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, work stations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there was a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the work load on the central resources increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, work stations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  16. Challenges in large scale distributed computing: bioinformatics.

    SciTech Connect

    Disz, T.; Kubal, M.; Olson, R.; Overbeek, R.; Stevens, R.; Mathematics and Computer Science; Univ. of Chicago; The Fellowship for the Interpretation of Genomes

    2005-01-01

    The amount of genomic data available for study is increasing at a rate similar to that of Moore's law. This deluge of data is challenging bioinformaticians to develop newer, faster and better algorithms for analysis and examination of this data. The growing availability of large scale computing grids coupled with high-performance networking is challenging computer scientists to develop better, faster methods of exploiting parallelism in these biological computations and deploying them across computing grids. In this paper, we describe two computations that are required to be run frequently and which require large amounts of computing resource to complete in a reasonable time. The data for these computations are very large and the sequential computational time can exceed thousands of hours. We show the importance and relevance of these computations, the nature of the data and parallelism and we show how we are meeting the challenge of efficiently distributing and managing these computations in the SEED project.
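
    As a minimal sketch of the data-parallel decomposition this kind of work relies on (a per-sequence computation farmed out to many workers), the example below uses only the standard library; it is a generic illustration, not the SEED project's actual grid infrastructure, and the GC-content task is a lightweight stand-in for the much heavier comparisons described above.

    ```python
    # Generic illustration (not the SEED infrastructure): distribute a
    # per-sequence computation across local worker processes.
    from concurrent.futures import ProcessPoolExecutor

    def gc_content(seq: str) -> float:
        """Fraction of G/C bases in a DNA sequence (stand-in workload)."""
        return (seq.count("G") + seq.count("C")) / len(seq)

    sequences = ["ATGCGC", "TTTTAA", "GGGCCC", "ATATGC"]   # toy inputs

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            for seq, gc in zip(sequences, pool.map(gc_content, sequences)):
                print(seq, round(gc, 2))
    ```

    On a real grid the same map step would be split across machines by a scheduler rather than a local process pool, but the decomposition of the work is the same.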

  17. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    SciTech Connect

    Baker, Ann E; Barker, Ashley D; Bland, Arthur S Buddy; Boudwin, Kathlyn J.; Hack, James J; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; Wells, Jack C; White, Julia C; Hudson, Douglas L

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these we report the 300 in this review that are consistent with guidance provided. Scientific achievements by OLCF users cut across all range scales from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to do billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation billions of gallons of

  18. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    SciTech Connect

    Bland, Arthur S Buddy; Hack, James J; Baker, Ann E; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; White, Julia C

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next

  19. Spatial Distribution Characteristics of Healthcare Facilities in Nanjing: Network Point Pattern Analysis and Correlation Analysis

    PubMed Central

    Ni, Jianhua; Qian, Tianlu; Xi, Changbai; Rui, Yikang; Wang, Jiechen

    2016-01-01

    The spatial distribution of urban service facilities is largely constrained by the road network. In this study, network point pattern analysis and correlation analysis were used to analyze the relationship between road network and healthcare facility distribution. The weighted network kernel density estimation method proposed in this study identifies significant differences between the outside and inside areas of the Ming city wall. The results of network K-function analysis show that private hospitals are more evenly distributed than public hospitals, and pharmacy stores tend to cluster around hospitals along the road network. After computing the correlation analysis between different categorized hospitals and street centrality, we find that the distribution of these hospitals correlates highly with the street centralities, and that the correlations are higher with private and small hospitals than with public and large hospitals. The comprehensive analysis results could help examine the reasonability of existing urban healthcare facility distribution and optimize the location of new healthcare facilities. PMID:27548197
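
    The final correlation step (street centrality versus the density of each hospital category) is a plain Pearson correlation once the per-segment values are in hand; the sketch below uses hypothetical numbers, and the network kernel density estimation and network K-function stages require specialized spatial tooling that is not shown.

    ```python
    # Hypothetical values, only to show the correlation step of the analysis.
    import numpy as np

    street_centrality = np.array([0.82, 0.55, 0.91, 0.33, 0.67])  # per road segment
    private_hospitals = np.array([5, 2, 6, 1, 3])                 # nearby counts
    public_hospitals = np.array([2, 1, 3, 1, 2])

    for label, counts in [("private", private_hospitals), ("public", public_hospitals)]:
        r = np.corrcoef(street_centrality, counts)[0, 1]
        print(f"centrality vs {label} hospitals: r = {r:.2f}")
    ```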

  1. Next generation distributed computing for cancer research.

    PubMed

    Agarwal, Pankaj; Owzar, Kouros

    2014-01-01

    Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing.
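
    As a hedged sketch of the Hadoop Streaming style the review describes (mapper and reducer written as ordinary stdin/stdout filters), the script below counts aligned reads per reference sequence from SAM-format lines; it is illustrative only and is not the benchmark pipeline from the paper.

    ```python
    # Hadoop Streaming-style mapper/reducer (illustrative, not the paper's
    # pipeline): count aligned reads per reference from SAM-format lines.
    import sys

    def mapper():
        for line in sys.stdin:
            if line.startswith("@"):                # skip SAM header lines
                continue
            fields = line.rstrip("\n").split("\t")
            if len(fields) > 2 and fields[2] != "*":
                print(f"{fields[2]}\t1")            # key = reference name

    def reducer():
        current, count = None, 0
        for line in sys.stdin:                      # keys arrive sorted
            key, value = line.rstrip("\n").split("\t")
            if key != current:
                if current is not None:
                    print(f"{current}\t{count}")
                current, count = key, 0
            count += int(value)
        if current is not None:
            print(f"{current}\t{count}")

    if __name__ == "__main__":
        mapper() if sys.argv[1:] == ["map"] else reducer()
    ```

    Submitted through the Hadoop Streaming jar, the framework handles the shuffle and sort between the map and reduce phases and the distribution of work across the cluster.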

  4. National remote computational flight research facility

    NASA Technical Reports Server (NTRS)

    Rediess, Herman A.

    1989-01-01

    The extension of the NASA Ames-Dryden remotely augmented vehicle (RAV) facility to accommodate flight testing of a hypersonic aircraft utilizing the continental United States as a test range is investigated. The development and demonstration of an automated flight test management system (ATMS) that uses expert system technology for flight test planning, scheduling, and execution is documented.

  5. Facilities Management via Computer: Information at Your Fingertips.

    ERIC Educational Resources Information Center

    Hensey, Susan

    1996-01-01

    Computer-aided facilities management is a software program consisting of a relational database of facility information--such as occupancy, usage, student counts, etc.--attached to or merged with computerized floor plans. This program can integrate data with drawings, thereby allowing the development of "what if" scenarios. (MLF)

  6. Evaluation of distributed computing tools

    SciTech Connect

    Stanberry, L.

    1992-10-28

    The original goal stated in the collaboration agreement from LCC's perspective was "to show that networking tools available in UNICOS perform well enough to meet the requirements of LCC customers." This translated into evaluating how easy it was to port ELROS over CRI's ISO 2.0, which itself is a port of ISODE to the Cray. In addition we tested the interoperability of ELROS and ISO 2.0 programs running on the Cray, and communicating with each other, and with servers or clients running on other machines. To achieve these goals from LCC's side, we ported ELROS to the Cray, and also obtained and installed a copy of the ISO 2.0 distribution from CRI. CRI's goal for the collaboration was to evaluate the usability of ELROS. In particular, we were interested in their potential feedback on the use of ELROS in implementing ISO protocols--whether ELROS would be easier to use and perform better than other tools that form part of the standard ISODE system. To help achieve these goals for CRI, we provided them with a distribution tar file containing the ELROS system, once we had completed our port of ELROS to the Cray.

  8. Distributed computing support program's databases

    SciTech Connect

    Parsons, Amy

    1996-05-01

    The Distributed Computing Support Program (DCSP) is the current system for keeping track of computer hardware maintenance throughout the Lawrence Livermore National Laboratory. DCSP consists of four separate Ingres databases, each with its own support files. The process of updating and revising the support files to make the business process more efficient is described in this paper.

  9. A Software Rejuvenation Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chau, Savio

    2009-01-01

    A performability-oriented conceptual framework for software rejuvenation has been constructed as a means of increasing levels of reliability and performance in distributed stateful computing. As used here, performability-oriented signifies that the construction of the framework is guided by the concept of analyzing the ability of a given computing system to deliver services with gracefully degradable performance. The framework is especially intended to support applications that involve stateful replicas of server computers.
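
    As a conceptual sketch only (an invented example, not the framework described above), proactive rejuvenation of a stateful worker can be pictured as a supervisor that periodically restarts the worker process while carrying its checkpointed state across restarts.

    ```python
    # Invented sketch of proactive rejuvenation: restart a stateful worker on a
    # schedule, handing the last checkpoint to the fresh process.
    import multiprocessing as mp
    import time

    def worker(state, conn):
        counter = state.get("requests", 0)
        for _ in range(5):                    # pretend to serve requests
            counter += 1
            time.sleep(0.1)
        conn.send({"requests": counter})      # checkpoint state to the supervisor
        conn.close()

    def supervisor(rejuvenation_cycles=3):
        state = {"requests": 0}
        for cycle in range(rejuvenation_cycles):
            parent, child = mp.Pipe()
            proc = mp.Process(target=worker, args=(state, child))
            proc.start()                      # a fresh process = rejuvenated replica
            state = parent.recv()             # collect the checkpoint
            proc.join()
            print(f"cycle {cycle}: carried-over state -> {state}")

    if __name__ == "__main__":
        supervisor()
    ```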

  10. GRIMD: distributed computing for chemists and biologists

    PubMed Central

    Piotto, Stefano; Biasi, Luigi Di; Concilio, Simona; Castiglione, Aniello; Cattaneo, Giuseppe

    2014-01-01

    Motivation: Biologists and chemists are facing problems of high computational complexity that require the use of several computers organized in clusters or in specialized grids. Examples of such problems can be found in molecular dynamics (MD), in silico screening, and genome analysis. Grid Computing and Cloud Computing are becoming prevalent mainly because of their competitive performance/cost ratio. Regrettably, the diffusion of Grid Computing is strongly limited by two main factors: it is confined to scientists with a strong Computer Science background, and the analysis of the large amount of data produced can be cumbersome. We have developed a package named GRIMD to provide an easy and flexible implementation of distributed computing for the Bioinformatics community. GRIMD is very easy to install and maintain, and it does not require any specific Computer Science skill. Moreover, it permits preliminary analysis on the distributed machines to reduce the amount of data to transfer. GRIMD is very flexible because it shields the typical computational biologist from the need to write specific code for tasks such as molecular dynamics or docking calculations. Furthermore, it permits an efficient use of GPU cards whenever possible. GRIMD calculations scale almost linearly and, therefore, permit efficient exploitation of each machine in the network. Here, we provide a few examples of grid computing in computational biology (MD and docking) and bioinformatics (proteome analysis). Availability: GRIMD is available for free for noncommercial research at www.yadamp.unisa.it/grimd Supplementary information: www.yadamp.unisa.it/grimd/howto.aspx PMID:24516326

  11. Object-oriented Tools for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1993-01-01

    Distributed computing systems are proliferating, owing to the availability of powerful, affordable microcomputers and inexpensive communication networks. A critical problem in developing such systems is getting application programs to interact with one another across a computer network. Remote interprogram connectivity is particularly challenging across heterogeneous environments, where applications run on different kinds of computers and operating systems. NetWorks! (trademark) is an innovative software product that provides an object-oriented messaging solution to these problems. This paper describes the design and functionality of NetWorks! and illustrates how it is being used to build complex distributed applications for NASA and in the commercial sector.

  12. Distributed visualization for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Sosoka, Don J.; Facca, Anthony A.

    1992-01-01

    Distributed concurrent visualization and computation in computational fluid dynamics (CFD) is not a new concept. Specialized applications such as Realtime Interactive Particle-tracer (RIP) and vendor specific tools like Distributed Graphics Language (DGL) have been in use for some time. This paper describes a current project underway at NASA Lewis Research Center to provide the CFD researcher with an easy method for incorporating distributed processing concepts into program development. Details on the FORTRAN capable interface to a set of network and visualization functions are presented along with some results from initial CFD case studies that employ these techniques.

  13. Distributed visualization for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Sosoka, Don J.; Facca, Anthony A.

    1992-02-01

    Distributed concurrent visualization and computation in computational fluid dynamics (CFD) is not a new concept. Specialized applications such as Realtime Interactive Particle-tracer (RIP) and vendor specific tools like Distributed Graphics Language (DGL) have been in use for some time. This paper describes a current project underway at NASA Lewis Research Center to provide the CFD researcher with an easy method for incorporating distributed processing concepts into program development. Details on the FORTRAN capable interface to a set of network and visualization functions are presented along with some results from initial CFD case studies that employ these techniques.

  14. Status of the National Ignition Facility Integrated Computer Control System

    SciTech Connect

    Lagin, L; Bryant, R; Carey, R; Casavant, D; Edwards, O; Ferguson, W; Krammen, J; Larson, D; Lee, A; Ludwigsen, P; Miller, M; Moses, E; Nyholm, R; Reed, R; Shelton, R; Van Arsdall, P J; Wuest, C

    2003-10-13

    The National Ignition Facility (NIF), currently under construction at the Lawrence Livermore National Laboratory, is a stadium-sized facility containing a 192-beam, 1.8-Megajoule, 500-Terawatt, ultraviolet laser system together with a 10-meter diameter target chamber with room for nearly 100 experimental diagnostics. When completed, NIF will be the world's largest and most energetic laser experimental system, providing an international center to study inertial confinement fusion and the physics of matter at extreme energy densities and pressures. NIF's 192 energetic laser beams will compress fusion targets to conditions required for thermonuclear burn, liberating more energy than required to initiate the fusion reactions. Laser hardware is modularized into line replaceable units such as deformable mirrors, amplifiers, and multi-function sensor packages that are operated by the Integrated Computer Control System (ICCS). ICCS is a layered architecture of 300 front-end processors attached to nearly 60,000 control points and coordinated by supervisor subsystems in the main control room. The functional subsystems--beam control including automatic beam alignment and wavefront correction, laser pulse generation and pre-amplification, diagnostics, pulse power, and timing--implement automated shot control, archive data, and support the actions of fourteen operators at graphic consoles. Object-oriented software development uses a mixed language environment of Ada (for functional controls) and Java (for user interface and database backend). The ICCS distributed software framework uses CORBA to communicate between languages and processors. ICCS software is approximately 3/4 complete with over 750 thousand source lines of code having undergone off-line verification tests and deployed to the facility. NIF has entered the first phases of its laser commissioning program. NIF has now demonstrated the highest energy 1ω, 2ω, and 3ω beamlines in the world.

  15. Configuration and Management of a Cluster Computing Facility in Undergraduate Student Computer Laboratories

    ERIC Educational Resources Information Center

    Cornforth, David; Atkinson, John; Spennemann, Dirk H. R.

    2006-01-01

    Purpose: Many researchers require access to computer facilities beyond those offered by desktop workstations. Traditionally, these are offered either through partnerships, to share the cost of supercomputing facilities, or through purpose-built cluster facilities. However, funds are not always available to satisfy either of these options, and…

  16. The impact of distributed computing on education

    NASA Technical Reports Server (NTRS)

    Utku, S.; Lestingi, J.; Salama, M.

    1982-01-01

    In this paper, developments in digital computer technology since the early Fifties are reviewed briefly, and the parallelism which exists between these developments and developments in analysis and design procedures of structural engineering is identified. The recent trends in digital computer technology are examined in order to establish the fact that distributed processing is now an accepted philosophy for further developments. The impact of this on the analysis and design practices of structural engineering is assessed by first examining these practices from a data processing standpoint to identify the key operations and data bases, and then fitting them to the characteristics of distributed processing. The merits and drawbacks of the present philosophy in educating structural engineers are discussed and projections are made for the industry-academia relations in the distributed processing environment of structural analysis and design. An ongoing experiment of distributed computing in a university environment is described.

  17. Distributed computing testbed for a remote experimental environment

    SciTech Connect

    Butner, D.N.; Casper, T.A.; Howard, B.C.; Henline, P.A.; Davis, S.L.; Barnes, D.; Greenwood, D.E.

    1995-09-18

    Collaboration is increasing as physics research becomes concentrated on a few large, expensive facilities, particularly in magnetic fusion energy research, with national and international participation. These facilities are designed for steady state operation and interactive, real-time experimentation. We are developing tools to provide for the establishment of geographically distant centers for interactive operations; such centers would allow scientists to participate in experiments from their home institutions. A testbed is being developed for a Remote Experimental Environment (REE), a "Collaboratory." The testbed will be used to evaluate the ability of a remotely located group of scientists to conduct research on the DIII-D Tokamak at General Atomics. The REE will serve as a testing environment for advanced control and collaboration concepts applicable to future experiments. Process-to-process communications over high speed wide area networks provide real-time synchronization and exchange of data among multiple computer networks, while the ability to conduct research is enhanced by adding audio/video communication capabilities. The Open Software Foundation's Distributed Computing Environment is being used to test concepts in distributed control, security, naming, remote procedure calls and distributed file access using the Distributed File Services. We are exploring the technology and sociology of remotely participating in the operation of a large scale experimental facility.

  18. Biomedical computing facility interface design plan

    NASA Technical Reports Server (NTRS)

    Puckett, R. D.

    1971-01-01

    The results are presented of a design study performed to establish overall system interface requirements for the Biomedical Laboratories Division's Sigma-3 computer system. Emphasis has been placed upon the definition of an overall implementation plan and associated schedule to meet both near-term and long-range requirements within the constraints of available resources.

  19. Distribution of Corbicula fluminea at nuclear facilities

    SciTech Connect

    Counts, C.L. III

    1985-11-01

    A review of the zoogeographic records for the exotic Asian clam, Corbicula fluminea (Muller, 1774), reveals its presence in 27 states where nuclear powered electric generating plants are either operating or under construction. Nineteen plant sites reported infestation of varying severity in facilities, or source water bodies immediately adjacent to the facility, by C. fluminea. Thirteen plant sites are located within the zoogeographic limits of C. fluminea but have a low risk of infestation due to either salt water cooling systems or locations a great distance from known populations. Eighteen plant sites are located wholly outside of the known zoogeographic range of C. fluminea. Thirty plant sites are located in close proximity to known populations of C. fluminea and therefore should maintain surveillance of the source water body and within plant water systems for possible infestations by these bivalves. 27 figs.

  20. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    SciTech Connect

    Baker, Ann E; Bland, Arthur S Buddy; Hack, James J; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; Wells, Jack C; White, Julia C

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and where

  1. Review of Test Facilities for Distributed Energy Resources

    SciTech Connect

    AKHIL,ABBAS ALI; MARNAY,CHRIS; KIPMAN,TIMOTHY

    2003-05-01

    Since initiating research on integration of distributed energy resources (DER) in 1999, the Consortium for Electric Reliability Technology Solutions (CERTS) has been actively assessing and reviewing existing DER test facilities for possible demonstrations of advanced DER system integration concepts. This report is a compendium of information collected by the CERTS team on DER test facilities during this period.

  2. Science Computing Facility for ICESat/GLAS Data

    NASA Astrophysics Data System (ADS)

    Barbieri, K. A.; Brenner, A. C.; Dimarzio, J. P.; Fiegles, S. L.; Sidel, T.; Zwally, H. J.

    2003-12-01

    The Science Computing Facility (SCF) for ICESat/GLAS data consists of a database management system for temporal and geographic subsetting of GLAS products, autonomous tools for electronic distribution to geographically-distributed remote sites, and a data analysis and visualization package. The SCF also maintains a website that conveys instrument state and GLAS product information, displays browse products of all GLAS products produced to date, and hosts a bulletin board for communicating product and tool problems. The 15 GLAS science and engineering data products produced daily by the ICESat Science Investigator-led Processing System (I-SIPS) are distributed electronically by the SCF to GLAS Science Team members and their associates at nine remote sites, as requested. Interactive Graphical User Interfaces easily enable the remote users to subscribe to customized data sets from the standard products by defining the product, temporal spans, and geographic regions. As data are received from the I-SIPS, the individual subscriptions are automatically filled and the customized data sets are electronically transferred to the remote users. Normally, receipt of these data sets at the remote sites occurs within hours of the I-SIPS processing. Subsetting is accomplished using a Data Management System (DMS) that quickly determines if data for a subscription are present in the newly received product and then allows fast, direct, efficient access to each second of data that falls within the subscription's temporal and geographic span, instead of reading through the files sequentially. The distribution is accomplished via secure-shell communications, a MySQL database, a series of Perl and Tcl scripts, and a collection of IDL and Fortran 90 software.
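
    The subsetting step described above amounts to testing each second of data against a subscription's temporal span and geographic region. The short sketch below illustrates only that predicate; the function and variable names are hypothetical and are not taken from the actual SCF Data Management System.

    ```python
    from datetime import datetime

    def in_subscription(record_time, lat, lon, t_start, t_end, bbox):
        """Return True if a data record falls inside a subscription's
        temporal span and geographic bounding box.

        bbox is (lat_min, lat_max, lon_min, lon_max); names are illustrative
        only, not part of the real SCF software."""
        lat_min, lat_max, lon_min, lon_max = bbox
        return (t_start <= record_time <= t_end
                and lat_min <= lat <= lat_max
                and lon_min <= lon <= lon_max)

    # Example: a one-week subscription over a Greenland bounding box.
    bbox = (59.0, 84.0, -74.0, -10.0)
    print(in_subscription(datetime(2003, 10, 3, 12, 0), 72.5, -40.0,
                          datetime(2003, 10, 1), datetime(2003, 10, 8), bbox))
    ```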

  3. ATLAS distributed computing: experience and evolution

    NASA Astrophysics Data System (ADS)

    Nairz, A.; Atlas Collaboration

    2014-06-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb-1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centres around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics programme including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2015 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, energies and event complexities. An essential requirement will be the efficient utilisation of current and future processor technologies as well as a broad range of computing platforms, including supercomputing and cloud resources. We will report on experience gained thus far and our progress in preparing ATLAS computing for the future.

  4. Great Expectations: Distributed Financial Computing at Cornell.

    ERIC Educational Resources Information Center

    Schulden, Louise; Sidle, Clint

    1988-01-01

    The Cornell University Distributed Accounting (CUDA) system is an attempt to provide departments a software tool for better managing their finances, creating microcomputer standards, creating a vehicle for better administrative microcomputer support, and insuring local systems are consistent with central computer systems. (Author/MLW)

  5. Data Integration in Computer Distributed Systems

    NASA Astrophysics Data System (ADS)

    Kwiecień, Błażej

    In this article the author analyzes the problem of data integration in distributed computer systems. The exchange of information between the different levels of an integrated enterprise process pyramid is fundamental to efficient enterprise operation. Communication and data exchange between levels are not always uniform, because of differences in network protocols, communication media, system response times, etc.

  6. ATLAS Distributed Computing in LHC Run2

    NASA Astrophysics Data System (ADS)

    Campana, Simone

    2015-12-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run-2. An increase in both the data rate and the computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (Prodsys-2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward a flexible computing model. Flexible use of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with remote access, and the network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been introduced to better manage the lifecycle of the data. In this note, an overview of the operational experience with the new system and its evolution is presented.

  7. Research computing in a distributed cloud environment

    NASA Astrophysics Data System (ADS)

    Fransham, K.; Agarwal, A.; Armstrong, P.; Bishop, A.; Charbonneau, A.; Desmarais, R.; Hill, N.; Gable, I.; Gaudet, S.; Goliath, S.; Impey, R.; Leavett-Brown, C.; Ouellete, J.; Paterson, M.; Pritchet, C.; Penfold-Brown, D.; Podaima, W.; Schade, D.; Sobie, R. J.

    2010-11-01

    The recent increase in availability of Infrastructure-as-a-Service (IaaS) computing clouds provides a new way for researchers to run complex scientific applications. However, using cloud resources for a large number of research jobs requires significant effort and expertise. Furthermore, running jobs on many different clouds presents even more difficulty. In order to make it easy for researchers to deploy scientific applications across many cloud resources, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. In response to a user's job submission to a batch system, the Cloud Scheduler manages the distribution and deployment of user-customized virtual machines across multiple clouds. We describe the motivation for and implementation of a distributed cloud using the Cloud Scheduler that is spread across both commercial and dedicated private sites, and present some early results of scientific data analysis using the system.
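
    The abstract describes a Cloud Scheduler that reacts to batch job submissions by booting user-customized virtual machines on whichever cloud has capacity. The toy loop below sketches that matchmaking idea under simplifying assumptions; the Cloud class, the boot_vm placeholder, and the job fields are invented for illustration and do not correspond to the real Cloud Scheduler or to any IaaS API.

    ```python
    import queue

    class Cloud:
        """Minimal stand-in for an IaaS endpoint; boot_vm is a placeholder,
        not a call into any real cloud API."""
        def __init__(self, name, capacity):
            self.name, self.free_slots = name, capacity

        def boot_vm(self, image):
            self.free_slots -= 1
            print(f"booting VM from image '{image}' on cloud {self.name}")

    def schedule(job_queue, clouds):
        """Toy matchmaking loop: for each queued batch job, pick a cloud with
        free capacity and boot the user-specified VM image there."""
        while not job_queue.empty():
            job = job_queue.get()
            target = next((c for c in clouds if c.free_slots > 0), None)
            if target is None:
                print("no free capacity; job", job["id"], "stays queued")
                job_queue.put(job)
                break
            target.boot_vm(job["vm_image"])

    jobs = queue.Queue()
    for i in range(3):
        jobs.put({"id": i, "vm_image": "user-analysis.img"})
    schedule(jobs, [Cloud("site-A", 1), Cloud("site-B", 4)])
    ```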

  8. GEANT4 distributed computing for compact clusters

    NASA Astrophysics Data System (ADS)

    Harrawood, Brian P.; Agasthya, Greeshma A.; Lakshmanan, Manu N.; Raterman, Gretchen; Kapadia, Anuj J.

    2014-11-01

    A new technique for distribution of GEANT4 processes is introduced to simplify running a simulation in a parallel environment such as a tightly coupled computer cluster. Using a new C++ class derived from the GEANT4 toolkit, multiple runs forming a single simulation are managed across a local network of computers with a simple inter-node communication protocol. The class is integrated with the GEANT4 toolkit and is designed to scale from a single symmetric multiprocessing (SMP) machine to compact clusters ranging in size from tens to thousands of nodes. User designed 'work tickets' are distributed to clients using a client-server work flow model to specify the parameters for each individual run of the simulation. The new g4DistributedRunManager class was developed and well tested in the course of our Neutron Stimulated Emission Computed Tomography (NSECT) experiments. It will be useful for anyone running GEANT4 for large discrete data sets such as covering a range of angles in computed tomography, calculating dose delivery with multiple fractions or simply speeding the through-put of a single model.
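
    The work-ticket model described here splits one simulation into many independent runs whose parameters are farmed out to clients. The sketch below mimics that pattern on a single machine with a process pool; it is not the g4DistributedRunManager class itself, and the ticket fields (angle, seed) are illustrative assumptions.

    ```python
    from multiprocessing import Pool

    def run_simulation(ticket):
        """Stand-in for one simulation run; in the real system each 'work
        ticket' carries the parameters (seed, angle, ...) for that run."""
        angle, seed = ticket["angle"], ticket["seed"]
        # ... configure and execute the actual GEANT4 run here ...
        return {"angle": angle, "seed": seed, "events": 10000}

    if __name__ == "__main__":
        # One ticket per projection angle, as in a computed-tomography scan.
        tickets = [{"angle": a, "seed": 1000 + a} for a in range(0, 180, 5)]
        with Pool(processes=8) as pool:      # plays the role of the clients
            results = pool.map(run_simulation, tickets)
        print(f"collected {len(results)} runs")
    ```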

  9. Distributed Storage Systems for Data Intensive Computing

    SciTech Connect

    Vazhkudai, Sudharshan S; Butt, Ali R; Ma, Xiaosong

    2012-01-01

    In this chapter, the authors present an overview of the utility of distributed storage systems in supporting modern applications that are increasingly becoming data intensive. Their coverage of distributed storage systems is based on the requirements imposed by data intensive computing and not a mere summary of storage systems. To this end, they delve into several aspects of supporting data-intensive analysis, such as data staging, offloading, checkpointing, and end-user access to terabytes of data, and illustrate the use of novel techniques and methodologies for realizing distributed storage systems therein. The data deluge from scientific experiments, observations, and simulations is affecting all of the aforementioned day-to-day operations in data-intensive computing. Modern distributed storage systems employ techniques that can help improve application performance, alleviate I/O bandwidth bottleneck, mask failures, and improve data availability. They present key guiding principles involved in the construction of such storage systems, associated tradeoffs, design, and architecture, all with an eye toward addressing challenges of data-intensive scientific applications. They highlight the concepts involved using several case studies of state-of-the-art storage systems that are currently available in the data-intensive computing landscape.

  10. Optimal shrinking of the distribution chain: the facilities delocation decision

    NASA Astrophysics Data System (ADS)

    Bhaumik, Pradip K.

    2010-03-01

    Closure of facilities is quite common among both business firms and public sector institutions like hospitals and schools. Although the facilities location problem has been studied extensively in the literature, not much attention has been paid to the closure of facilities. Unlike the location problem, the existing facilities and the corresponding network impose additional constraints on the closure or elimination of facilities; to highlight the difference between the two, we have called this the facilities delocation problem. In this article, we study a firm with an existing distribution network with known retailer and distributor locations that needs to downsize or shrink its distribution chain due to other business reasons. However, it is not a reallocation of demand nodes among the retained distributors. An important condition stipulates that all demand nodes must continue to get their supplies from their respective current distributors except when the current source itself is delocated, and only such uprooted demand nodes will be supplied by a different supplier, chosen from among the retained ones. We first describe the delocation problem and discuss its characteristics. We formulate the delocation problem as an integer linear programming problem and demonstrate its formulation and solution on a small problem. Finally, we discuss the solution and its implications for the distribution network.
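
    The abstract says the delocation problem is cast as an integer linear program but does not reproduce the model, so the following is only one plausible sketch of such a formulation; all notation (x_ij, y_j, a(i), c_ij, p) is invented here for illustration and is not taken from the paper.

    ```latex
    % y_j = 1 if distributor j is retained, x_ij = 1 if demand node i is
    % assigned to distributor j, a(i) is the current distributor of node i,
    % c_ij the supply cost, p the number of distributors to retain.
    \begin{align*}
      \min \; & \sum_{i}\sum_{j} c_{ij}\, x_{ij} \\
      \text{s.t.}\;
        & \sum_{j} x_{ij} = 1      && \forall i \ \text{(each node keeps exactly one supplier)} \\
        & x_{ij} \le y_j           && \forall i,j \ \text{(only retained distributors may supply)} \\
        & x_{i,a(i)} \ge y_{a(i)}  && \forall i \ \text{(a node stays with its current distributor if retained)} \\
        & \sum_{j} y_j = p, \quad x_{ij}, y_j \in \{0,1\}.
    \end{align*}
    ```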

  11. Distributed computation of supremal conditionally controllable sublanguages

    NASA Astrophysics Data System (ADS)

    Komenda, Jan; Masopust, Tomáš

    2016-02-01

    In this paper, we further develop the coordination control framework for discrete-event systems with both complete and partial observations. First, a weaker sufficient condition for the computation of the supremal conditionally controllable sublanguage and conditionally normal sublanguage is presented. Then we show that this condition can be imposed by synthesising a-posteriori supervisors. The paper further generalises the previous study by considering general, non-prefix-closed languages. Moreover, we prove that for prefix-closed languages the supremal conditionally controllable sublanguage and conditionally normal sublanguage can always be computed in the distributed way without any restrictive conditions we have used in the past.

  12. Computation and Analysis of the Global Distribution of the Radioxenon Isotope 133Xe based on Emissions from Nuclear Power Plants and Radioisotope Production Facilities and its Relevance for the Verification of the Nuclear-Test-Ban Treaty

    NASA Astrophysics Data System (ADS)

    Wotawa, Gerhard; Becker, Andreas; Kalinowski, Martin; Saey, Paul; Tuma, Matthias; Zähringer, Matthias

    2010-05-01

    Monitoring of radioactive noble gases, in particular xenon isotopes, is a crucial element of the verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). The capability of the noble gas network, which is currently under construction, to detect signals from a nuclear explosion critically depends on the background created by other sources. Therefore, the global distribution of these isotopes based on emissions and transport patterns needs to be understood. A significant xenon background exists in the reactor regions of North America, Europe and Asia. An emission inventory of the four relevant xenon isotopes has recently been created, which specifies source terms for each power plant. As the major emitters of xenon isotopes worldwide, a few medical radioisotope production facilities have been recently identified, in particular the facilities in Chalk River (Canada), Fleurus (Belgium), Pelindaba (South Africa) and Petten (Netherlands). Emissions from these sites are expected to exceed those of the other sources by orders of magnitude. In this study, emphasis is put on 133Xe, which is the most prevalent xenon isotope. First, based on the emissions known, the resulting 133Xe concentration levels at all noble gas stations of the final CTBT verification network were calculated and found to be consistent with observations. Second, it turned out that emissions from the radioisotope facilities can explain a number of observed peaks, meaning that atmospheric transport modelling is an important tool for the categorization of measurements. Third, it became evident that Nuclear Power Plant emissions are more difficult to treat in the models, since their temporal variation is high and not generally reported. Fourth, there are indications that the assumed annual emissions may be underestimated by factors of two to ten, while the general emission patterns seem to be well understood. Finally, it became evident that 133Xe sources mainly influence the sensitivity of the

  13. Open Source Live Distributions for Computer Forensics

    NASA Astrophysics Data System (ADS)

    Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele

    Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment) that contains a collection of tools wrapped up into a user friendly environment. The CAINE forensic framework introduces novel important features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that drives digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.

  14. BES-III distributed computing status

    NASA Astrophysics Data System (ADS)

    Belov, S. D.; Deng, Z. Y.; Korenkov, V. V.; Li, W. D.; Lin, T.; Ma, Z. T.; Nicholson, C.; Pelevanyuk, I. S.; Suo, B.; Trofimov, V. V.; Tsaregorodtsev, A. U.; Uzhinskiy, A. V.; Yan, T.; Yan, X. F.; Zhang, X. M.; Zhemchugov, A. S.

    2016-09-01

    The BES-III experiment at the Institute of High Energy Physics (Beijing, China) is aimed at precision measurements in e+e- annihilation in the energy range from 2.0 to 4.6 GeV. The world's largest samples of J/psi and psi' events and unique samples of XYZ data have already been collected. The expected increase of the data volume in the coming years required a significant evolution of the computing model, namely a shift from centralized data processing to a distributed one. This report summarizes the current design of the BES-III distributed computing system, some of the key decisions, and the experience gained during two years of operations.

  15. Efficient computations with the likelihood ratio distribution.

    PubMed

    Kruijver, Maarten

    2015-01-01

    What is the probability that the likelihood ratio exceeds a threshold t, if a specified hypothesis is true? This question is asked, for instance, when performing power calculations for kinship testing, when computing true and false positive rates for familial searching and when computing the power of discrimination of a complex mixture. Answering this question is not straightforward, since there is a huge number of possible genotypic combinations to consider. Different solutions are found in the literature. Several authors estimate the threshold exceedance probability using simulation. Corradi and Ricciardi [1] propose a discrete approximation to the likelihood ratio distribution which yields a lower and upper bound on the probability. Nothnagel et al. [2] use the normal distribution as an approximation to the likelihood ratio distribution. Dørum et al. [3] introduce an algorithm that can be used for exact computation, but this algorithm is computationally intensive, unless the threshold t is very large. We present three new approaches to the problem. Firstly, we show how importance sampling can be used to make the simulation approach significantly more efficient. Importance sampling is a statistical technique that turns out to work well in the current context. Secondly, we present a novel algorithm for computing exceedance probabilities. The algorithm is exact, fast and can handle relatively large problems. Thirdly, we introduce an approach that combines the novel algorithm with the discrete approximation of Corradi and Ricciardi. This last approach can be applied to very large problems and yields a lower and upper bound on the exceedance probability. The use of the different approaches is illustrated with examples from forensic genetics, such as kinship testing, familial searching and mixture interpretation. The algorithms are implemented in an R-package called DNAprofiles, which is freely available from CRAN.
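
    The importance-sampling idea can be made concrete with a toy multi-locus model: because LR = P(data|H1)/P(data|H2), the exceedance probability under H2 satisfies P_H2(LR > t) = E_H1[1{LR > t}/LR], so sampling under H1 (where exceedances are common) gives a far more efficient estimator than naive simulation under H2. The model and numbers below are invented for illustration and do not come from the DNAprofiles package.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy model: n independent loci; outcome k has probability p1[k] under
    # hypothesis H1 and p2[k] under H2 (made-up values, purely illustrative).
    p1 = np.array([0.6, 0.3, 0.1])
    p2 = np.array([0.3, 0.4, 0.3])
    n_loci, t, n_sim = 15, 1000.0, 100_000

    def simulate_lr(probs):
        """Draw outcomes at every locus from `probs` and return overall LRs."""
        outcomes = rng.choice(len(probs), size=(n_sim, n_loci), p=probs)
        return np.prod(p1[outcomes] / p2[outcomes], axis=1)

    # Naive estimate of P(LR > t | H2): sample under H2, count exceedances.
    naive = np.mean(simulate_lr(p2) > t)

    # Importance sampling: sample under H1 and reweight each exceedance by 1/LR.
    lr_h1 = simulate_lr(p1)
    importance = np.mean((lr_h1 > t) / lr_h1)

    print(f"naive: {naive:.2e}   importance sampling: {importance:.2e}")
    ```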

  16. Distributed Computing for the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Chudoba, J.

    2015-12-01

    Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007 distributed resources connected via EGI grid are successfully used. The first and the second versions of production system based on bash scripts and MySQL database were able to submit jobs to all reliable sites supporting Virtual Organization auger. For many years VO auger belongs to top ten of EGI users based on the total used computing time. Migration of the production system to DIRAC interware started in 2014. Pilot jobs improve efficiency of computing jobs and eliminate problems with small and less reliable sites used for the bulk production. The new system has also possibility to use available resources in clouds. Dirac File Catalog replaced LFC for new files, which are organized in datasets defined via metadata. CVMFS is used for software distribution since 2014. In the presentation we give a comparison of the old and the new production system and report the experience on migrating to the new system.

  17. Icing simulation: A survey of computer models and experimental facilities

    NASA Technical Reports Server (NTRS)

    Potapczuk, M. G.; Reinmann, J. J.

    1991-01-01

    A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include a computer code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focussed on upgrading simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced airfoil aerodynamics is examined.

  18. Molecular Science Computing Facility Scientific Challenges: Linking Across Scales

    SciTech Connect

    De Jong, Wibe A.; Windus, Theresa L.

    2005-07-01

    The purpose of this document is to define the evolving science drivers for performing environmental molecular research at the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and to provide guidance associated with the next-generation high-performance computing center that must be developed at EMSL's Molecular Science Computing Facility (MSCF) in order to address this critical research. The MSCF is the pre-eminent computing facility, supported by the U.S. Department of Energy's (DOE's) Office of Biological and Environmental Research (BER), tailored to provide the fastest time-to-solution for current computational challenges in chemistry and biology, as well as providing the means for broad research in the molecular and environmental sciences. The MSCF provides integral resources and expertise to emerging EMSL Scientific Grand Challenges and Collaborative Access Teams that are designed to leverage the multiple integrated research capabilities of EMSL, thereby creating a synergy between computation and experiment to address environmental molecular science challenges critical to DOE and the nation.

  19. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  20. Cryogenic distribution for the Facility for Rare Isotope Beams

    SciTech Connect

    S. Jones, Dana Arenius, Adam Fila, P. Geutschow, Helmut Laumer, Matt Johnson, Cory S. Waltz, J. G. Weisend II

    2012-06-01

    The Facility for Rare Isotope Beams (FRIB) is a new National User Facility for nuclear science funded by the Department of Energy Office of Science and operated by Michigan State University. The FRIB accelerator linac consists of superconducting radio-frequency (SCRF) cavities operating at 2 K and SC magnets operating at 4.5 K all cooled by a large scale cryogenic refrigeration system. A major subsystem of the cryogenic system will be the distribution system whose primary components will include a distribution box, the transfer lines and the interconnect valve boxes at each cryogenic device. An overview of the conceptual design of the distribution system including engineering details, capabilities and schedule is described.

  1. Pseudo-interactive monitoring in distributed computing

    SciTech Connect

    Sfiligoi, I.; Bradley, D.; Livny, M.; /Wisconsin U., Madison

    2009-05-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.
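
    The essence of pseudo-interactive monitoring is that only non-interactive, read-only commands are run on behalf of the user inside the job's sandbox, and their output is shipped back. The wrapper below sketches that restriction in a generic way; it is not the Condor mechanism described in the paper, and the command whitelist and function names are assumptions made for illustration.

    ```python
    import shlex
    import subprocess

    # Read-only commands a monitoring request may run in the job sandbox;
    # interactive tools such as vi are rejected, mirroring the paper's idea.
    ALLOWED = {"ls", "cat", "top", "ps", "lsof", "netstat"}

    def run_monitoring_command(cmdline, sandbox_dir):
        """Execute a whitelisted, non-interactive command in the job sandbox
        and return its output to the requesting user."""
        argv = shlex.split(cmdline)
        if not argv or argv[0] not in ALLOWED:
            return f"refused: '{argv[0] if argv else ''}' is not a monitoring command"
        result = subprocess.run(argv, cwd=sandbox_dir, capture_output=True,
                                text=True, timeout=30)
        return result.stdout + result.stderr

    print(run_monitoring_command("ls -l", "/tmp"))
    print(run_monitoring_command("vi job.log", "/tmp"))  # rejected: interactive
    ```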

  2. Pseudo-interactive monitoring in distributed computing

    NASA Astrophysics Data System (ADS)

    Sfiligoi, I.; Bradley, D.; Livny, M.

    2010-04-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.

  3. Interoperable PKI Data Distribution in Computational Grids

    SciTech Connect

    Pala, Massimiliano; Cholia, Shreyas; Rea, Scott A.; Smith, Sean W.

    2008-07-25

    One of the most successful working examples of virtual organizations, computational grids need authentication mechanisms that inter-operate across domain boundaries. Public Key Infrastructures (PKIs) provide sufficient flexibility to allow resource managers to securely grant access to their systems in such distributed environments. However, as PKIs grow and services are added to enhance both security and usability, users and applications must struggle to discover available resources, particularly when the Certification Authority (CA) is alien to the relying party. This article presents how to overcome these limitations of the current grid authentication model by integrating the PKI Resource Query Protocol (PRQP) into the Grid Security Infrastructure (GSI).

  4. Airport Simulations Using Distributed Computational Resources

    NASA Technical Reports Server (NTRS)

    McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Daniel (Technical Monitor)

    2002-01-01

    The Virtual National Airspace Simulation (VNAS) will improve the safety of Air Transportation. In 2001, using simulation and information management software running over a distributed network of super-computers, researchers at NASA Ames, Glenn, and Langley Research Centers developed a working prototype of a virtual airspace. This VNAS prototype modeled daily operations of the Atlanta airport by integrating measured operational data and simulation data on up to 2,000 flights a day. The concepts and architecture developed by NASA for this prototype are integral to the National Airspace Simulation to support the development of strategies improving aviation safety, identifying precursors to component failure.

  5. Skyline View: Efficient Distributed Subspace Skyline Computation

    NASA Astrophysics Data System (ADS)

    Kim, Jinhan; Lee, Jongwuk; Hwang, Seung-Won

    Skyline queries have gained much attention as an alternative query semantics with pros (e.g., low query formulation overhead) and cons (e.g., little control over result size). To overcome the cons, subspace skyline queries have recently been studied, where users iteratively specify relevant feature subspaces of the search space. However, existing works mainly focus on centralized databases. This paper aims to extend subspace skyline computation to distributed environments such as the Web, where the most important issue is to minimize the cost of accessing vertically distributed objects. Toward this goal, we exploit prior skylines whose subspaces overlap the given subspace. In particular, we develop algorithms for three scenarios: when the subspace of the prior skylines is a superspace, a subspace, or neither. Our experimental results validate that our proposed algorithm shows significantly better performance than the state-of-the-art algorithms.
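
    For orientation, a skyline on a chosen subspace can be computed with a naive quadratic scan, as sketched below; the paper's contribution is to avoid such recomputation by reusing prior skylines with overlapping subspaces, which this sketch does not attempt. The hotel data and dimension indices are invented for illustration.

    ```python
    def dominates(a, b, dims):
        """True if point a dominates point b on the chosen subspace `dims`
        (assuming smaller values are better on every dimension)."""
        return (all(a[d] <= b[d] for d in dims)
                and any(a[d] < b[d] for d in dims))

    def subspace_skyline(points, dims):
        """Naive skyline of `points` restricted to the feature subspace `dims`."""
        return [p for p in points
                if not any(dominates(q, p, dims) for q in points if q is not p)]

    # Hotels as (price, distance, noise); the user first asks for the skyline
    # on (price, distance), then refines to the full space.
    hotels = [(120, 3.0, 2), (90, 5.0, 4), (150, 1.0, 1),
              (100, 4.0, 5), (130, 3.5, 2)]
    print(subspace_skyline(hotels, dims=(0, 1)))      # price/distance subspace
    print(subspace_skyline(hotels, dims=(0, 1, 2)))   # full space
    ```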

  6. Testing the CDF distributed computing framework

    SciTech Connect

    Bartsch, Valeria; Baranovski, Andrew; Belforte, Stefano; Burgon-Lyon, Morag; Garzoglio, Gabriele; Herber, Randolph; Illingworth, Robert; Kennedy, Rob; Kerzel, Ulrich; Kreymer, Art; Leslie, Matt; Loebel-Carpenter, Lauri; Lueking, Lee; Lyon, Adam; Merritt, Wyatt; Ratnikov, Fedor; Sill, Alan; St. Denis, Richard; Stonjek, Stefan; Terekhov, Igor; Trumbo, Julie; /Fermilab /Oxford U. /INFN, Trieste /Glasgow U. /Karlsruhe U. /Rutgers U., Piscataway /Texas Tech.

    2004-12-01

    A major source of CPU power for CDF (Collider Detector at Fermilab) is the CAF (Central Analysis Farm) [1] at Fermilab. The CAF is a farm of computers running Linux with access to the CDF data handling system and databases, allowing CDF collaborators to run batch analysis jobs. Besides providing CPU power, it has a good monitoring tool. The CAF software is a wrapper around a batch system, either fbsng [3] or Condor, that submits jobs in a uniform way, so submission to the CAF clusters inside and outside Fermilab from many computers with Kerberos authentication is possible. It is mainly used to access datasets that comprise a large number of files and to analyze the data. Up to now the DCache system has been used to access the files. In autumn 2004 some of the important datasets will only be readable with the help of the data handling system SAM (Sequential Access to data via Metadata) [2]. This will be done in order to switch to using only one data handling system at Fermilab and on the remote sites. SAM has been used in Run II to store, manage, deliver and track the processing of all data. It is designed to copy data to remote sites with remote analysis in mind. To prove that CAF and SAM could provide the required CPU power and data handling, stress tests of the combined system were carried out. A second goal of CDF is to distribute computing; in 2005, 50% of the computing shall be located outside of Fermilab. For this purpose CDF will use the DCAF (Decentralized CDF Analysis Farms) in combination with SAM. To achieve user friendliness, the SAM station environment has to be common to all stations and adaptations to the environment have to be made.

  7. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.

  8. The Argonne Leadership Computing Facility 2010 annual report.

    SciTech Connect

    Drugan, C.

    2011-05-09

    Researchers found more ways than ever to conduct transformative science at the Argonne Leadership Computing Facility (ALCF) in 2010. Both familiar initiatives and innovative new programs at the ALCF are now serving a growing, global user community with a wide range of computing needs. The Department of Energy's (DOE) INCITE Program remained vital in providing scientists with major allocations of leadership-class computing resources at the ALCF. For calendar year 2011, 35 projects were awarded 732 million supercomputer processor-hours for computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. Argonne also continued to provide Director's Discretionary allocations - 'start up' awards - for potential future INCITE projects. And DOE's new ASCR Leadership Computing (ALCC) Program allocated resources to 10 ALCF projects, with an emphasis on high-risk, high-payoff simulations directly related to the Department's energy mission, national emergencies, or for broadening the research community capable of using leadership computing resources. While delivering more science today, we've also been laying a solid foundation for high performance computing in the future. After a successful DOE Lehman review, a contract was signed to deliver Mira, the next-generation Blue Gene/Q system, to the ALCF in 2012. The ALCF is working with the 16 projects that were selected for the Early Science Program (ESP) to enable them to be productive as soon as Mira is operational. Preproduction access to Mira will enable ESP projects to adapt their codes to its architecture and collaborate with ALCF staff in shaking down the new system. We expect the 10-petaflops system to stoke economic growth and improve U.S. competitiveness in key areas such as advancing clean energy and addressing global climate change. Ultimately, we envision Mira as a stepping-stone to exascale-class computers that will be faster than petascale

  9. Uninstrumented assembly airflow testing in the Annular Flow Distribution facility

    SciTech Connect

    Kielpinski, A.L.

    1992-02-01

    During the Emergency Cooling System phase of a postulated large-break loss of coolant accident (ECS-LOCA), air enters the primary loop and is pumped down the reactor assemblies. One of the experiments performed to support the analysis of this accident was the Annular Flow Distribution (AFD) experiment, conducted in a facility built for this purpose at Babcock and Wilcox Alliance Research Center in Alliance, Ohio. As part of this experiment, a large body of airflow data were acquired in a prototypical mockup of the Mark 22 reactor assembly. This assembly was known as the AFD (or the I-AFD here) reference assembly. The I-AFD assembly was fully prototypical, having been manufactured in SRS's production fabrication facility. Similar Mark 22 mockup assemblies were tested in several test facilities in the SRS Heat Transfer Laboratory (HTL). Discrepancies were found. The present report documents further work done to address the discrepancy in airflow measurements between the AFD facility and HTL facilities. The primary purpose of this report is to disseminate the data from the U-AFD test, and to compare these test results to the I-AFD data and the U-AT data. A summary table of the test data and the B&W data transmittal letter are included as an attachment to this report. The full data transmittal volume from B&W (including time plots of the various instruments) is included as an appendix to this report. These data are further analyzed by comparing them to two other HTL tests, namely, SPRIHTE 1 and the Single Assembly Test Stand (SATS).

  10. ATLAS Distributed Computing Monitoring tools after full 2 years of LHC data taking

    NASA Astrophysics Data System (ADS)

    Schovancová, Jaroslava

    2012-12-01

    This paper details a variety of Monitoring tools used within ATLAS Distributed Computing during the first 2 years of LHC data taking. We discuss tools used to monitor data processing from the very first steps performed at the CERN Analysis Facility after data is read out of the ATLAS detector, through data transfers to the ATLAS computing centres distributed worldwide. We present an overview of monitoring tools used daily to track ATLAS Distributed Computing activities ranging from network performance and data transfer throughput, through data processing and readiness of the computing services at the ATLAS computing centres, to the reliability and usability of the ATLAS computing centres. The described tools provide monitoring for issues of varying levels of criticality: from identifying issues with the instant online monitoring to long-term accounting information.

  11. Distributed Design and Analysis of Computer Experiments

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria. Or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation
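
    The workflow sketched in this abstract (sample uncertain inputs over suspected ranges, run the application code, then analyze input/output relationships with correlation coefficients and a response-surface fit) can be illustrated in a few lines. The example below uses NumPy rather than the DDACE C++ library itself; the variable ranges and the toy application_code function are assumptions made for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical uncertain inputs with suspected ranges (not DDACE syntax):
    # temperature in [300, 400] K and a material conductivity in [10, 30] W/m-K.
    n = 200
    temperature = rng.uniform(300.0, 400.0, n)
    conductivity = rng.uniform(10.0, 30.0, n)

    def application_code(temp, cond):
        """Stand-in for the user's simulation; returns one output per sample."""
        return 0.02 * temp + 1.5 * cond + rng.normal(0.0, 1.0, len(temp))

    output = application_code(temperature, conductivity)

    # Input/output analysis in the spirit of DDACE's statistical tools:
    # correlation coefficients and a simple linear response-surface fit.
    inputs = np.column_stack([temperature, conductivity])
    corr = np.corrcoef(np.column_stack([inputs, output]), rowvar=False)[-1, :2]
    design = np.column_stack([np.ones(n), inputs])
    coeffs, *_ = np.linalg.lstsq(design, output, rcond=None)
    print("correlation with output:", corr)
    print("response surface  y ~ %.2f + %.3f*T + %.3f*k" % tuple(coeffs))
    ```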

  12. Dedicated EPROM-based computer for distributed instrumentation control

    SciTech Connect

    Hunt, D.N.; O'Brien, D.W.

    1981-10-14

    The LLNL Nuclear Chemistry Counting Facility (NCCF) is being converted to a modern production facility. A computer network has been designed and built to implement this conversion. The outermost node of the computer network is a dedicated EPROM-based controller. The controller handles the details of driving the attached nuclear instrumentation, providing a standard interface to the remainder of the network. This paper addresses the design and the implementation of the dedicated instrumentation controller.

  13. Computer model to simulate testing at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.

    1995-01-01

    A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.

  14. Concept for a distributed processor computer

    NASA Technical Reports Server (NTRS)

    Bogue, P. N.; Burnett, G. J.; Koczela, L. J.

    1970-01-01

    A future-generation computer utilizes cells, each a single metal oxide semiconductor wafer containing a general-purpose processor section and a small memory of approximately 512 words of 16 bits each. Cells are organized into groups, and groups are interconnected to form the computer.

  15. Distributing an executable job load file to compute nodes in a parallel computer

    DOEpatents

    Gooding, Thomas M.

    2016-09-13

    Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.
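
    As a rough illustration of the class-route idea in this abstract, the Python sketch below assumes the compute nodes form a tree in which each node knows its children: a node joins the route if it or any descendant participates in the job, and the load file is then broadcast only along that route. The Node class, the recursive traversal, and the payload are hypothetical stand-ins for the tree network, link reporting, and executable load file of the actual patent.

```python
# Minimal sketch of the class-route idea, under the assumption that the
# compute nodes form a tree and each node knows its children.
from dataclasses import dataclass, field


@dataclass
class Node:
    rank: int
    participating: bool
    children: list = field(default_factory=list)


def build_class_route(node, route):
    """Return True if node or any descendant participates; collect the route."""
    needed = node.participating
    for child in node.children:
        # In the real system the child reports this to its parent over a
        # data-communications link; here it is a recursive return value.
        needed |= build_class_route(child, route)
    if needed:
        route.add(node.rank)
    return needed


def broadcast_load_file(node, route, payload):
    if node.rank not in route:
        return
    print(f"node {node.rank} receives load file ({len(payload)} bytes)")
    for child in node.children:
        broadcast_load_file(child, route, payload)


leaves = [Node(3, True), Node(4, False)]
tree = Node(0, False, [Node(1, False, leaves), Node(2, True)])
route = set()
build_class_route(tree, route)
broadcast_load_file(tree, route, b"ELF...")   # hypothetical payload
```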

  16. Distributing an executable job load file to compute nodes in a parallel computer

    DOEpatents

    Gooding, Thomas M.

    2016-08-09

    Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.

  17. Distributed computing environments for future space control systems

    NASA Technical Reports Server (NTRS)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.

  18. Designing a model to minimize inequities in hemodialysis facilities distribution.

    PubMed

    Salgado, Teresa M; Moles, Rebekah; Benrimoj, Shalom I; Fernandez-Llimos, Fernando

    2011-11-01

    Portugal has an uneven, city-centered bias in the distribution of hemodialysis centers found to contribute to health care inequities. A model has been developed with the aim of minimizing access inequity through the identification of the best possible localization of new hemodialysis facilities. The model was designed under the assumption that individuals from different geographic areas, ceteris paribus, present the same likelihood of requiring hemodialysis in the future. Distances to reach the closest hemodialysis facility were calculated for every municipality lacking one. Regions were scored by aggregating weights of the "individual burden", defined as the burden for an individual living in a region lacking a hemodialysis center to reach one as often as needed, and the "population burden", defined as the burden for the total population living in such a region. The model revealed that the average travelling distance for inhabitants in municipalities without a hemodialysis center is 32 km and that 145,551 inhabitants (1.5%) live more than 60 min away from a hemodialysis center, while 1,393,770 (13.8%) live 30-60 min away. Multivariate analysis showed that the current localization of hemodialysis facilities is associated with major urban areas. The model developed recommends 12 locations for establishing hemodialysis centers that would result in drastically reduced travel for 34 other municipalities, leaving only six (34,800 people) with over 60 min of travel. The application of this model should facilitate the planning of future hemodialysis services as it takes into consideration the potential impact of travel time for individuals in need of dialysis, as well as the logistic arrangements required to transport all patients with end-stage renal disease. The model is applicable in any country and health care planners can opt to weigh these two elements differently in the model according to their priorities. PMID:22109858
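
    A hedged sketch of the scoring idea follows: each municipality lacking a centre receives a score that combines a normalized "individual burden" (travel time to the nearest existing centre) with a normalized "population burden" (that travel time weighted by the affected population), and planners can weigh the two terms differently. The municipalities, travel times, populations, and weights in the code are invented for illustration and do not reproduce the published model.

```python
# Hedged illustration of the scoring idea; all data and weights are hypothetical.
municipalities = [
    # (name, travel_minutes_to_nearest_centre, population)
    ("A", 75, 12_000),
    ("B", 40, 48_000),
    ("C", 90, 3_500),
]

W_INDIVIDUAL, W_POPULATION = 0.5, 0.5   # planners may weigh these differently


def normalise(values):
    top = max(values)
    return [v / top for v in values]


ind = normalise([minutes for _, minutes, _ in municipalities])
pop = normalise([minutes * people for _, minutes, people in municipalities])

scores = {name: W_INDIVIDUAL * i + W_POPULATION * p
          for (name, _, _), i, p in zip(municipalities, ind, pop)}

# Higher score = stronger candidate for a new hemodialysis facility.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```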

  19. Optimization of an interactive distributive computer network

    NASA Technical Reports Server (NTRS)

    Frederick, V.

    1985-01-01

    The activities under a cooperative agreement for the development of a computer network are briefly summarized. Research activities covered are: computer operating systems optimization and integration; software development and implementation of the IRIS (Infrared Imaging of Shuttle) Experiment; and software design, development, and implementation of the APS (Aerosol Particle System) Experiment.

  20. Equation solvers for distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    A large number of scientific and engineering problems require the rapid solution of large systems of simultaneous equations. The performance of parallel computers in this area now dwarfs traditional vector computers by nearly an order of magnitude. This talk describes the major issues involved in parallel equation solvers with particular emphasis on the Intel Paragon, IBM SP-1 and SP-2 processors.

  1. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    SciTech Connect

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C.

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to implement those algorithms

  2. SynapSense Wireless Environmental Monitoring System of the RHIC & ATLAS Computing Facility at BNL

    NASA Astrophysics Data System (ADS)

    Casella, K.; Garcia, E.; Hogue, R.; Hollowell, C.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, A.

    2014-06-01

    RHIC & ATLAS Computing Facility (RACF) at BNL is a 15000 sq. ft. facility hosting the IT equipment of the BNL ATLAS WLCG Tier-1 site, offline farms for the STAR and PHENIX experiments operating at the Relativistic Heavy Ion Collider (RHIC), the BNL Cloud installation, various Open Science Grid (OSG) resources, and many other small physics research oriented IT installations. The facility originated in 1990 and grew steadily up to the present configuration with 4 physically isolated IT areas with the maximum rack capacity of about 1000 racks and the total peak power consumption of 1.5 MW. In June 2012 a project was initiated with the primary goal to replace several environmental monitoring systems deployed earlier within RACF with a single commercial hardware and software solution by SynapSense Corporation based on wireless sensor groups and proprietary SynapSense™ MapSense™ software that offers a unified solution for monitoring the temperature and humidity within the rack/CRAC units as well as pressure distribution underneath the raised floor across the entire facility. The deployment was completed successfully in 2013. The new system also supports a set of additional features such as capacity planning based on measurements of total heat load, power consumption monitoring and control, CRAC unit power consumption optimization based on feedback from the temperature measurements and overall power usage efficiency estimations that are not currently implemented within RACF but may be deployed in the future.

  3. Improvement of the Computing - Related Procurement Process at a Government Research Facility

    SciTech Connect

    Gittins, C.

    2000-04-03

    The purpose of the project was to develop, implement, and market value-added services through the Computing Resource Center in an effort to streamline computing-related procurement processes across the Lawrence Livermore National Laboratory (LLNL). The power of the project lay in focusing attention on, and demonstrating the value of, centralizing the delivery of computer-related products and services to the institution. The project required a plan and marketing strategy that would drive attention to the facility's value-added offerings and services. A significant outcome of the project has been the change in the CRC internal organization. The realignment of internal policies and practices, together with additions to its product and service offerings, has brought an increased focus to the facility. This movement from a small, fractious organization into one that is still small yet well organized and focused on its mission and goals has been a significant transition. Indicative of this turnaround was the sharing of information. One-on-one and small group meetings, together with statistics showing work activity, were invaluable in gaining support for more equitable workload distribution and the removal of blame and finger pointing. Sharing monthly reports on sales and operating costs also had a positive impact.

  4. Space power distribution system technology. Volume 3: Test facility design

    NASA Technical Reports Server (NTRS)

    Decker, D. K.; Cannady, M. D.; Cassinelli, J. E.; Farber, B. F.; Lurie, C.; Fleck, G. W.; Lepisto, J. W.; Messner, A.; Ritterman, P. F.

    1983-01-01

    The AMPS test facility is a major tool in the attainment of more economical space power. The ultimate goals of the test facility, its primary functional requirements and conceptual design, and the major equipment it contains are discussed.

  5. ACTORS: A model of concurrent computation in distributed systems

    SciTech Connect

    Agha, G.

    1986-01-01

    The transition from sequential to parallel computation is an area of critical concern in today's computer technology, particularly in architecture, programming languages, systems, and artificial intelligence. This book addresses issues in concurrency, and by producing both a syntactic definition and a denotational model of Hewitt's actor paradigm - a model of computation specifically aimed at constructing and analyzing distributed large-scale parallel systems - it advances the understanding of parallel computation.

  6. UTILIZATION OF COMPUTER FACILITIES IN THE MATHEMATICS AND BUSINESS CURRICULUM IN A LARGE SUBURBAN HIGH SCHOOL.

    ERIC Educational Resources Information Center

    RENO, MARTIN; AND OTHERS

    A STUDY WAS UNDERTAKEN TO EXPLORE IN A QUALITATIVE WAY THE POSSIBLE UTILIZATION OF COMPUTER AND DATA PROCESSING METHODS IN HIGH SCHOOL EDUCATION. OBJECTIVES WERE--(1) TO ESTABLISH A WORKING RELATIONSHIP WITH A COMPUTER FACILITY SO THAT ABLE STUDENTS AND THEIR TEACHERS WOULD HAVE ACCESS TO THE FACILITIES, (2) TO DEVELOP A UNIT FOR THE UTILIZATION…

  7. Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning

    ERIC Educational Resources Information Center

    Siu, Kin Wai Michael; Lam, Mei Seung

    2012-01-01

    Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…

  8. A distributed data acquisition system for aeronautics test facilities

    NASA Technical Reports Server (NTRS)

    Fronek, Dennis L.; Setter, Robert N.; Blumenthal, Philip Z.; Smalley, Robert R.

    1987-01-01

    The NASA Lewis Research Center is in the process of installing a new data acquisition and display system. This new system will provide small- and medium-sized aeronautics test facilities with a state-of-the-art real-time data acquisition and display system. The new data system will provide for the acquisition of signals from a variety of instrumentation sources. They include analog measurements of temperatures, pressures, and other steady state voltage inputs; frequency inputs to measure speed and flow; discrete I/O for significant events; and modular instrument systems such as multiplexed pressure modules or electronic instrumentation with an IEEE 488 interface. The data system is designed to acquire data, convert it to engineering units, compute test-dependent performance calculations, limit-check selected channels or calculations, and display the information in alphanumeric or graphical form with a cycle time of one second for the alphanumeric data. This paper describes the system configuration, its salient features, and the expected impact on testing.
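
    The acquire/convert/limit-check/display cycle described above can be illustrated with a short sketch. The channel names, conversion coefficients, and limits below are hypothetical; the real system handles many instrumentation sources and a one-second alphanumeric update cycle.

```python
# Sketch of the acquire -> convert-to-engineering-units -> limit-check ->
# display cycle.  Channel names, coefficients, and limits are invented.
CHANNELS = {
    # name: (raw counts -> engineering units conversion, (low, high) limits)
    "inlet_temp_K": (lambda counts: 0.05 * counts + 250.0, (260.0, 420.0)),
    "plenum_psi":   (lambda counts: 0.01 * counts,         (0.0, 45.0)),
}


def acquire_raw():
    # Stand-in for the real front-end hardware (analog, frequency, IEEE 488 ...).
    return {"inlet_temp_K": 3100, "plenum_psi": 4700}


def one_cycle():
    raw = acquire_raw()
    for name, (convert, (lo, hi)) in CHANNELS.items():
        value = convert(raw[name])
        flag = "" if lo <= value <= hi else "  ** LIMIT **"
        print(f"{name:14s} {value:8.2f}{flag}")


one_cycle()   # in the facility this repeats on a one-second cycle
```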

  9. Distributing digital video to multiple computers

    PubMed Central

    Murray, James A.

    2004-01-01

    Video is an effective teaching tool, and live video microscopy is especially helpful in teaching dissection techniques and the anatomy of small neural structures. Digital video equipment is more affordable now and allows easy conversion from older analog video devices. I here describe a simple technique for bringing digital video from one camera to all of the computers in a single room. This technique allows students to view and record the video from a single camera on a microscope. PMID:23493464

  10. SETI@home, BOINC, and Volunteer Distributed Computing

    NASA Astrophysics Data System (ADS)

    Korpela, Eric J.

    2012-05-01

    Volunteer computing, also known as public-resource computing, is a form of distributed computing that relies on members of the public donating the processing power, Internet connection, and storage capabilities of their home computers. Projects that utilize this mode of distributed computation can potentially access millions of Internet-attached central processing units (CPUs) that provide PFLOPS (thousands of trillions of floating-point operations per second) of processing power. In addition, these projects can access the talents of the volunteers themselves. Projects span a wide variety of domains including astronomy, biochemistry, climatology, physics, and mathematics. This review provides an introduction to volunteer computing and some of the difficulties involved in its implementation. I describe the dominant infrastructure for volunteer computing in some depth and provide descriptions of a small number of projects as an illustration of the variety of projects that can be undertaken.

  11. Parallel Computing Environments and Methods for Power Distribution System Simulation

    SciTech Connect

    Lu, Ning; Taylor, Zachary T.; Chassin, David P.; Guttromson, Ross T.; Studham, Scott S.

    2005-11-10

    The development of cost-effective high-performance parallel computing on multi-processor supercomputers makes it attractive to port excessively time-consuming simulation software from personal computers (PCs) to supercomputers. The power distribution system simulator (PDSS) takes a bottom-up approach and simulates load at the appliance level, where detailed thermal models for appliances are used. This approach works well for a small power distribution system consisting of a few thousand appliances. When the number of appliances increases, the simulation uses up the PC memory and its run time increases to a point where the approach is no longer feasible for modeling a practical large power distribution system. This paper presents an effort made to port a PC-based power distribution system simulator (PDSS) to a 128-processor shared-memory supercomputer. The paper offers an overview of the parallel computing environment and a description of the modifications made to the PDSS model. The performance of the PDSS running on a standalone PC and on the supercomputer is compared. Future research directions for utilizing parallel computing in power distribution system simulation are also addressed.
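
    The bottom-up, appliance-level approach lends itself to a simple data-parallel decomposition: partition the appliance population across workers, evaluate each appliance model, and aggregate the feeder demand. The toy sketch below shows that pattern with Python's multiprocessing; the appliance model and the counts are placeholders, not the PDSS implementation.

```python
# Toy sketch (not the PDSS code): evaluate many appliance-level loads in
# parallel worker processes and aggregate the feeder demand.
from multiprocessing import Pool


def appliance_demand_kw(appliance_id):
    # Placeholder appliance/thermal model: deterministic pseudo-load in kW.
    return 0.5 + (appliance_id % 7) * 0.3


def chunk_demand(id_chunk):
    return sum(appliance_demand_kw(a) for a in id_chunk)


if __name__ == "__main__":
    appliance_ids = range(100_000)
    n_workers = 8
    chunks = [list(appliance_ids[i::n_workers]) for i in range(n_workers)]
    with Pool(n_workers) as pool:
        total_kw = sum(pool.map(chunk_demand, chunks))
    print(f"aggregate feeder demand: {total_kw / 1000:.1f} MW")
```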

  12. Moments of inclination error distribution computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.
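
    As a loose illustration of turning a flight-history error data base into inclination-error statistics, the sketch below propagates a few invented insertion-error records through hypothetical linearized sensitivities and reports the first two moments. The actual program uses a closed-form solution over the Scott flight-history data; the coefficients and records here are assumptions.

```python
# Hedged sketch only: invented error records and invented sensitivities.
import statistics

# Each record: insertion errors in (velocity m/s, flight path angle deg, azimuth deg).
ERROR_DATABASE = [
    (1.2, 0.010, 0.020),
    (-0.8, -0.004, 0.013),
    (0.3, 0.006, -0.017),
    (2.1, -0.012, 0.009),
]

# Hypothetical linearized sensitivities of inclination (deg) to each error term.
SENS = (0.001, 0.15, 0.80)

inclination_errors = [sum(s * e for s, e in zip(SENS, rec)) for rec in ERROR_DATABASE]
print(f"mean  = {statistics.mean(inclination_errors):+.4f} deg")
print(f"sigma = {statistics.stdev(inclination_errors):.4f} deg")
```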

  13. A distributed computing model for telemetry data processing

    NASA Astrophysics Data System (ADS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  14. Nonlinear Fluid Computations in a Distributed Environment

    NASA Technical Reports Server (NTRS)

    Atwood, Christopher A.; Smith, Merritt H.

    1995-01-01

    The performance of a loosely and tightly-coupled workstation cluster is compared against a conventional vector supercomputer for the solution of the Reynolds-averaged Navier-Stokes equations. The application geometries include a transonic airfoil, a tiltrotor wing/fuselage, and a wing/body/empennage/nacelle transport. Decomposition is of the manager-worker type, with solution of one grid zone per worker process coupled using the PVM message passing library. Task allocation is determined by grid size and processor speed, subject to available memory penalties. Each fluid zone is computed using an implicit diagonal scheme in an overset mesh framework, while relative body motion is accomplished using an additional worker process to re-establish grid communication.

  15. Clock distribution system for digital computers

    DOEpatents

    Wyman, Robert H.; Loomis, Jr., Herschel H.

    1981-01-01

    Apparatus for eliminating, in each clock distribution amplifier of a clock distribution system, sequential pulse catch-up error due to one pulse "overtaking" a prior clock pulse. The apparatus includes timing means to produce a periodic electromagnetic signal with a fundamental frequency having a fundamental frequency component V'_01(t); an array of N signal characteristic detector means, with detector means No. 1 receiving the timing means signal and producing a change-of-state signal V_1(t) in response to receipt of a signal above a predetermined threshold; N substantially identical filter means, one filter means being operatively associated with each detector means, for receiving the change-of-state signal V_n(t) and producing a modified change-of-state signal V'_n(t) (n = 1, ..., N) having a fundamental frequency component that is substantially proportional to V'_01(t - θ_n(t)) with a cumulative phase shift θ_n(t) having a time derivative that may be made uniformly and arbitrarily small; and with the detector means n+1 (1 ≤ n

  16. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  17. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them towards running large-scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computer resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.
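
    The queue-management idea, a relational table of pending model runs from which volunteer nodes are handed work, can be sketched in a few lines. The example below uses Python and SQLite purely for brevity (the framework described is JavaScript-based), and the table layout, watershed names, and status values are illustrative assumptions.

```python
# Minimal sketch of relational queue management for volunteer nodes.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE tasks (
    id INTEGER PRIMARY KEY,
    watershed TEXT,
    status TEXT DEFAULT 'pending')""")
db.executemany("INSERT INTO tasks (watershed) VALUES (?)",
               [("clear_creek",), ("squaw_creek",), ("iowa_river",)])


def assign_next_task(volunteer_id):
    """Hand one pending task to a volunteer node and mark it in progress."""
    row = db.execute(
        "SELECT id, watershed FROM tasks WHERE status = 'pending' LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    db.execute("UPDATE tasks SET status = ? WHERE id = ?",
               (f"running:{volunteer_id}", row[0]))
    return row


print(assign_next_task("browser-42"))
print(assign_next_task("browser-43"))
```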

  18. 32 CFR 766.8 - Procedure for review, approval, execution and distribution of aviation facility licenses.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... CIVIL AIRCRAFT § 766.8 Procedure for review, approval, execution and distribution of aviation facility... license and Certificate of Insurance to the Commander, Naval Facilities Engineering Command or his... Facilities Engineering Command or his designated representative. (1) Upon receipt, the Commander,...

  19. 32 CFR 766.8 - Procedure for review, approval, execution and distribution of aviation facility licenses.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... CIVIL AIRCRAFT § 766.8 Procedure for review, approval, execution and distribution of aviation facility... license and Certificate of Insurance to the Commander, Naval Facilities Engineering Command or his... Facilities Engineering Command or his designated representative. (1) Upon receipt, the Commander,...

  20. 32 CFR 766.8 - Procedure for review, approval, execution and distribution of aviation facility licenses.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... CIVIL AIRCRAFT § 766.8 Procedure for review, approval, execution and distribution of aviation facility... license and Certificate of Insurance to the Commander, Naval Facilities Engineering Command or his... Facilities Engineering Command or his designated representative. (1) Upon receipt, the Commander,...

  1. Computer Power: Part 1: Distribution of Power (and Communications).

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1988-01-01

    Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)

  2. Pladipus Enables Universal Distributed Computing in Proteomics Bioinformatics.

    PubMed

    Verheggen, Kenneth; Maddelein, Davy; Hulstaert, Niels; Martens, Lennart; Barsnes, Harald; Vaudel, Marc

    2016-03-01

    The use of proteomics bioinformatics substantially contributes to an improved understanding of proteomes, but this novel and in-depth knowledge comes at the cost of increased computational complexity. Parallelization across multiple computers, a strategy termed distributed computing, can be used to handle this increased complexity; however, setting up and maintaining a distributed computing infrastructure requires resources and skills that are not readily available to most research groups. Here we propose a free and open-source framework named Pladipus that greatly facilitates the establishment of distributed computing networks for proteomics bioinformatics tools. Pladipus is straightforward to install and operate thanks to its user-friendly graphical interface, allowing complex bioinformatics tasks to be run easily on a network instead of a single computer. As a result, any researcher can benefit from the increased computational efficiency provided by distributed computing, hence empowering them to tackle more complex bioinformatics challenges. Notably, it enables any research group to perform large-scale reprocessing of publicly available proteomics data, thus supporting the scientific community in mining these data for novel discoveries. PMID:26510693

  3. National facility for advanced computational science: A sustainable path to scientific discovery

    SciTech Connect

    Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

    2004-04-02

    Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

  4. Perspectives on distributed computing : thirty people, four user types, and the distributed computing user experience.

    SciTech Connect

    Childers, L.; Liming, L.; Foster, I.; Mathematics and Computer Science; Univ. of Chicago

    2008-10-15

    This report summarizes the methodology and results of a user perspectives study conducted by the Community Driven Improvement of Globus Software (CDIGS) project. The purpose of the study was to document the work-related goals and challenges facing today's scientific technology users, to record their perspectives on Globus software and the distributed-computing ecosystem, and to provide recommendations to the Globus community based on the observations. Globus is a set of open source software components intended to provide a framework for collaborative computational science activities. Rather than attempting to characterize all users or potential users of Globus software, our strategy has been to speak in detail with a small group of individuals in the scientific community whose work appears to be the kind that could benefit from Globus software, learn as much as possible about their work goals and the challenges they face, and describe what we found. The result is a set of statements about specific individuals' experiences. We do not claim that these are representative of a potential user community, but we do claim to have found commonalities and differences among the interviewees that may be reflected in the user community as a whole. We present these as a series of hypotheses that can be tested by subsequent studies, and we offer recommendations to Globus developers based on the assumption that these hypotheses are representative. Specifically, we conducted interviews with thirty technology users in the scientific community. We included both people who have used Globus software and those who have not. We made a point of including individuals who represent a variety of roles in scientific projects, for example, scientists, software developers, engineers, and infrastructure providers. The following material is included in this report: (1) A summary of the reported work-related goals, significant issues, and points of satisfaction with the use of Globus software; (2

  5. Distriblets: Java-Based Distributed Computing on the Web.

    ERIC Educational Resources Information Center

    Finkel, David; Wills, Craig E.; Brennan, Brian; Brennan, Chris

    1999-01-01

    Describes a system, written in the Java programming language, for using the World Wide Web to distribute computational tasks to multiple hosts on the Web. Describes the programs written to carry out the load distribution, the structure of a "distriblet" class, and experiences in using this system. (Author/LRW)

  6. Design and implementation of a UNIX based distributed computing system

    SciTech Connect

    Love, J.S.; Michael, M.W.

    1994-12-31

    We have designed, implemented, and are running a corporate-wide distributed processing batch queue on a large number of networked workstations using the UNIX® operating system. Atlas Wireline researchers and scientists have used the system for over a year. The large increase in available computer power has greatly reduced the time required for nuclear and electromagnetic tool modeling. Use of remote distributed computing has simultaneously reduced computation costs and increased usable computer time. The system integrates equipment from different manufacturers, using various CPU architectures, distinct operating system revisions, and even multiple processors per machine. Various differences between the machines have to be accounted for in the master scheduler. These differences include shells, command sets, swap spaces, memory sizes, CPU sizes, and OS revision levels. Remote processing across a network must be performed in a manner that is seamless from the users' perspective. The system currently uses IBM RISC System/6000®, SPARCstation™, HP9000s700, HP9000s800, and DEC Alpha AXP™ machines. Each CPU in the network has its own speed rating, allowed working hours, and workload parameters. The system is designed so that all of the computers in the network can be optimally scheduled without adversely impacting the primary users of the machines. The increase in the total usable computational capacity by means of distributed batch computing can change corporate computing strategy. The integration of disparate computer platforms eliminates the need to buy one type of computer for computations, another for graphics, and yet another for day-to-day operations. It might be possible, for example, to meet all research and engineering computing needs with existing networked computers.
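
    The per-machine speed ratings, allowed working hours, and workload parameters mentioned above suggest a simple scheduling rule: among hosts whose working-hour window is open, prefer the one with the best speed-to-load ratio. The sketch below is a hypothetical illustration of that rule, not the Atlas Wireline scheduler; host names, ratings, and windows are invented.

```python
# Sketch of a speed/load/working-hours scheduling decision; data is invented.
from datetime import datetime

HOSTS = [
    # name, relative speed rating, (allowed start hour, allowed end hour), current jobs
    ("rs6000-01", 1.0, (18, 8), 1),
    ("sparc-07",  0.6, (0, 24), 0),
    ("alpha-03",  1.8, (18, 8), 1),
]


def allowed_now(window, hour):
    start, end = window
    return start <= hour < end if start < end else (hour >= start or hour < end)


def pick_host(hour=None):
    hour = datetime.now().hour if hour is None else hour
    candidates = [(speed / (jobs + 1), name)
                  for name, speed, window, jobs in HOSTS
                  if allowed_now(window, hour)]
    return max(candidates)[1] if candidates else None


print(pick_host(hour=22))   # evening: the fast hosts become available
print(pick_host(hour=14))   # working hours: only the always-available host
```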

  7. 32 CFR 766.8 - Procedure for review, approval, execution and distribution of aviation facility licenses.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... distribution of aviation facility licenses. 766.8 Section 766.8 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY MISCELLANEOUS RULES USE OF DEPARTMENT OF THE NAVY AVIATION FACILITIES BY CIVIL AIRCRAFT § 766.8 Procedure for review, approval, execution and distribution of aviation...

  8. 32 CFR 766.8 - Procedure for review, approval, execution and distribution of aviation facility licenses.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... distribution of aviation facility licenses. 766.8 Section 766.8 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY MISCELLANEOUS RULES USE OF DEPARTMENT OF THE NAVY AVIATION FACILITIES BY CIVIL AIRCRAFT § 766.8 Procedure for review, approval, execution and distribution of aviation...

  9. Arcade: A Web-Java Based Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.

  10. One approach for evaluating the Distributed Computing Design System (DCDS)

    NASA Technical Reports Server (NTRS)

    Ellis, J. T.

    1985-01-01

    The Distributed Computer Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.

  11. Parallel and Distributed Computational Fluid Dynamics: Experimental Results and Challenges

    NASA Technical Reports Server (NTRS)

    Djomehri, Mohammad Jahed; Biswas, R.; VanderWijngaart, R.; Yarrow, M.

    2000-01-01

    This paper describes several results of parallel and distributed computing using a large scale production flow solver program. A coarse grained parallelization based on clustering of discretization grids combined with partitioning of large grids for load balancing is presented. An assessment is given of its performance on distributed and distributed-shared memory platforms using large scale scientific problems. An experiment with this solver, adapted to a Wide Area Network execution environment is presented. We also give a comparative performance assessment of computation and communication times on both the tightly and loosely-coupled machines.

  12. Experiment Dashboard for Monitoring of the LHC Distributed Computing Systems

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Devesas Campos, M.; Tarragon Cros, J.; Gaidioz, B.; Karavakis, E.; Kokoszkiewicz, L.; Lanciotti, E.; Maier, G.; Ollivier, W.; Nowotka, M.; Rocha, R.; Sadykov, T.; Saiz, P.; Sargsyan, L.; Sidorova, I.; Tuckett, D.

    2011-12-01

    LHC experiments are currently taking collisions data. A distributed computing model chosen by the four main LHC experiments allows physicists to benefit from resources spread all over the world. The distributed model and the scale of LHC computing activities increase the level of complexity of middleware, and also the chances of possible failures or inefficiencies in involved components. In order to ensure the required performance and functionality of the LHC computing system, monitoring the status of the distributed sites and services as well as monitoring LHC computing activities are among the key factors. Over the last years, the Experiment Dashboard team has been working on a number of applications that facilitate the monitoring of different activities: including following up jobs, transfers, and also site and service availabilities. This presentation describes Experiment Dashboard applications used by the LHC experiments and experience gained during the first months of data taking.

  13. Bringing the CMS distributed computing system into scalable operations

    NASA Astrophysics Data System (ADS)

    Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.

    2010-04-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.

  14. Optimized distributed computing environment for mask data preparation

    NASA Astrophysics Data System (ADS)

    Ahn, Byoung-Sup; Bang, Ju-Mi; Ji, Min-Kyu; Kang, Sun; Jang, Sung-Hoon; Choi, Yo-Han; Ki, Won-Tai; Choi, Seong-Woon; Han, Woo-Sung

    2005-11-01

    As the critical dimension (CD) becomes smaller, various resolution enhancement techniques (RET) are widely adopted. In developing sub-100nm devices, the complexity of optical proximity correction (OPC) is severely increased and applied OPC layers are expanded to non-critical layers. The transformation of designed pattern data by the OPC operation adds complexity, which causes runtime overheads in following steps such as mask data preparation (MDP) and collapses the existing design hierarchy. Therefore, many mask shops exploit the distributed computing method in order to reduce the runtime of mask data preparation rather than exploit the design hierarchy. Distributed computing uses a cluster of computers that are connected to a local network system. However, two things limit the benefit of the distributed computing method in MDP. First, a sequential MDP job that uses the maximum number of available CPUs is not efficient compared to parallel MDP job execution, due to the input data characteristics. Second, the runtime enhancement over input cost is not sufficient since the scalability of fracturing tools is limited. In this paper, we will discuss an optimum load-balancing environment that is useful in increasing the uptime of the distributed computing system by assigning an appropriate number of CPUs for each input design data set. We will also describe the distributed processing (DP) parameter optimization to obtain maximum throughput in MDP job processing.
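
    One way to picture the load-balancing point is to assign each incoming design a CPU count roughly proportional to its data volume, capped where the fracturing tool stops scaling, so several jobs can run concurrently instead of one job holding every CPU. The numbers and the allocation rule in the sketch below are assumptions for illustration, not the optimization described in the paper.

```python
# Toy CPU-allocation rule: proportional to data size, capped by scalability,
# while reserving at least one CPU for every remaining job.  Numbers invented.
TOTAL_CPUS = 64
SCALING_CAP = 16           # beyond this, assumed speedup per extra CPU is poor

jobs = {"metal1": 120.0, "via2": 15.0, "poly": 60.0}   # data size in GB


def assign_cpus(jobs, total=TOTAL_CPUS, cap=SCALING_CAP):
    total_size = sum(jobs.values())
    alloc = {}
    remaining = total
    for name, size in sorted(jobs.items(), key=lambda kv: -kv[1]):
        share = max(1, round(total * size / total_size))
        # Keep at least one CPU available for each job not yet allocated.
        alloc[name] = max(1, min(cap, share, remaining - (len(jobs) - len(alloc) - 1)))
        remaining -= alloc[name]
    return alloc


print(assign_cpus(jobs))   # e.g. {'metal1': 16, 'poly': 16, 'via2': 5}
```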

  15. An optimization model for energy generation and distribution in a dynamic facility

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.

    1981-01-01

    An analytical model is described using linear programming for the optimum generation and distribution of energy demands among competing energy resources and different economic criteria. The model, which will be used as a general engineering tool in the analysis of the Deep Space Network ground facility, considers several essential decisions for better design and operation. The decisions sought for the particular energy application include: the optimum time to build an assembly of elements, inclusion of a storage medium of some type, and the size or capacity of the elements that will minimize the total life-cycle cost over a given number of years. The model, which is structured in multiple time divisions, employs the decomposition principle for large-size matrices, the branch-and-bound method in mixed-integer programming, and the revised simplex technique for efficient and economic computer use.
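
    A miniature linear program in the same spirit, meeting a fixed demand at minimum cost from two sources with different unit costs and a capacity limit on the cheaper one, is sketched below using scipy. The demand, costs, and bounds are hypothetical, and the real model is far richer (multiple time divisions, storage elements, and mixed-integer build decisions).

```python
# Tiny LP sketch: minimize cost of meeting demand from two sources.
from scipy.optimize import linprog

demand_mwh = 100.0
costs = [30.0, 55.0]                 # $/MWh for source A and source B
A_ub = [[-1.0, -1.0]]                # -(xA + xB) <= -demand, i.e. meet demand
b_ub = [-demand_mwh]
bounds = [(0, 60.0), (0, None)]      # source A capped at 60 MWh

res = linprog(costs, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)                # -> approximately [60., 40.], 4000.0
```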

  16. Issues and recommendations associated with distributed computation and data management systems for the space sciences

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The primary purpose of the report is to explore management approaches and technology developments for computation and data management systems designed to meet future needs in the space sciences. The report builds on work presented in previous solar-terrestrial and planetary reports, broadening the outlook to all of the space sciences and considering policy aspects related to coordination between data centers, missions, and ongoing research activities, because it is perceived that the rapid growth of data and the wide geographic distribution of relevant facilities will present especially troublesome problems for data archiving, distribution, and analysis.

  17. Nonlinear structural analysis on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Watson, Brian C.; Noor, Ahmed K.

    1995-01-01

    A computational strategy is presented for the nonlinear static and postbuckling analyses of large complex structures on massively parallel computers. The strategy is designed for distributed-memory, message-passing parallel computer systems. The key elements of the proposed strategy are: (1) a multiple-parameter reduced basis technique; (2) a nested dissection (or multilevel substructuring) ordering scheme; (3) parallel assembly of global matrices; and (4) a parallel sparse equation solver. The effectiveness of the strategy is assessed by applying it to thermo-mechanical postbuckling analyses of stiffened composite panels with cutouts, and nonlinear large-deflection analyses of HSCT models on Intel Paragon XP/S computers. The numerical studies presented demonstrate the advantages of nested dissection-based solvers over traditional skyline-based solvers on distributed memory machines.

  18. Data distribution in the NBS Automated Manufacturing Research Facility

    NASA Technical Reports Server (NTRS)

    Mitchell, M. J.; Barkmeyer, E. J.

    1984-01-01

    The Automated Manufacturing Research Facility (AMRF) at the National Bureau of Standards was constructed as a testbed for research in the automation of small batch manufacturing, in particular for systems producing machined parts in lots of 1000 or less. Potential standard interfaces between existing and future components of small batch manufacturing and factory floor metrology in an automated environment are identified, delivering proven measurement techniques and standard reference materials to industry. Commercially available products are used to construct the facility to expedite the transfer of research results into the private sector.

  19. First Experiences with LHC Grid Computing and Distributed Analysis

    SciTech Connect

    Fisk, Ian

    2010-12-01

    In this presentation the experiences of the LHC experiments using grid computing were presented with a focus on experience with distributed analysis. After many years of development, preparation, exercises, and validation the LHC (Large Hadron Collider) experiments are in operations. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes employing the infrastructure for distributed analysis. At the end the expected evolution and future plans are outlined.

  20. Distributed computer taxonomy based on O/S structure

    NASA Technical Reports Server (NTRS)

    Foudriat, Edwin C.

    1985-01-01

    The taxonomy considers the resource structure at the operating system level. It compares a communication based taxonomy with the new taxonomy to illustrate how the latter does a better job when related to the client's view of the distributed computer. The results illustrate the fundamental features and what is required to construct fully distributed processing systems. The problem of using network computers on the space station is addressed. A detailed discussion of the taxonomy is not given here. Information is given in the form of charts and diagrams that were used to illustrate a talk.

  1. Computation of glint, glare, and solar irradiance distribution

    SciTech Connect

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2015-08-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.
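
    A heavily simplified reading of the approach is that the Sun image, together with a measured direct normal irradiance, yields a counts-to-flux scale factor that is then applied to the image of the receiver or reflective entity. The sketch below shows only that scaling step with synthetic images; real systems must also account for exposure, optics, and geometry, and none of the numbers here come from the patent.

```python
# Hedged sketch of the intensity-scaling idea with synthetic images.
import numpy as np

dni_w_per_m2 = 950.0
sun_image = np.random.default_rng(1).uniform(0, 4095, (64, 64))       # counts
receiver_image = np.random.default_rng(2).uniform(0, 4095, (128, 128))

scale = dni_w_per_m2 / sun_image.sum()       # toy W/m^2-per-count calibration
irradiance_map = receiver_image * scale

print(f"peak flux estimate: {irradiance_map.max():.3f} W/m^2 (illustrative only)")
```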

  2. Status Of The National Ignition Campaign And National Ignition Facility Integrated Computer Control System

    SciTech Connect

    Lagin, L; Brunton, G; Carey, R; Demaret, R; Fisher, J; Fishler, B; Ludwigsen, P; Marshall, C; Reed, R; Shelton, R; Townsend, S

    2011-03-18

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a stadium-sized facility that contains a 192-beam, 1.8-Megajoule, 500-Terawatt, ultraviolet laser system together with a 10-meter diameter target chamber with room for multiple experimental diagnostics. NIF is the world's largest and most energetic laser experimental system, providing a scientific center to study inertial confinement fusion (ICF) and matter at extreme energy densities and pressures. NIF's laser beams are designed to compress fusion targets to conditions required for thermonuclear burn. NIF is operated by the Integrated Computer Control System (ICCS) in an object-oriented, CORBA-based system distributed among over 1800 front-end processors, embedded controllers and supervisory servers. In the fall of 2010, a set of experiments began with deuterium and tritium filled targets as part of the National Ignition Campaign (NIC). At present, all 192 laser beams routinely fire to target chamber center to conduct fusion and high energy density experiments. During the past year, the control system was expanded to include automation of the cryogenic target system, and over 20 diagnostic systems were deployed and utilized to support fusion experiments. This talk discusses the current status of the NIC and the plan for controls and information systems to support these experiments on the path to ignition.

  3. Distributed interactive graphics applications in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Buning, Pieter G.; Merritt, Fergus J.

    1988-01-01

    Implementation of two interactive, distributed graphics programs used in Computational Fluid Dynamics is discussed. Both programs run on a Cray 2 supercomputer and use a Silicon Graphics Iris workstation as the graphics front-end machine. The hardware and supporting software is from the Numerical Aerodynamic Simulation project. Using this configuration, the supercomputer does all of the numerically intensive work and the workstation allows the user to perform real-time interactive transformations on the displayed data. The first program was written originally as a distributed program which computes particle traces for fluid flow solutions existing on the supercomputer. The second is an older post-processing and plotting program which was modified to run in a distributed mode. Both programs have realized a large increase in capability as a distributed process. Some graphical results are presented.

  4. Distributed computing system with dual independent communications paths between computers and employing split tokens

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)

    1990-01-01

    This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic balance of the loads. The system comprises a plurality of computers each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces for providing each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces, providing each computer with the ability to establish a communications link with another of the computers bypassing the remainder of the computers. Each computer is controlled by a resident copy of a common operating system. Communications between respective ones of the computers is by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers and wherein the location of the second portion is part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of the functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message whereby collisions between messages are detected and avoided.
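
    A minimal data-structure sketch of the split-token idea follows: a small moving portion travels between computers and carries the location of a larger resident portion left in some computer's memory. The class names, fields, and the in-memory store are illustrative assumptions, not the patented implementation.

```python
# Illustrative split-token data structure; names and fields are hypothetical.
from dataclasses import dataclass

resident_store = {          # per-host memory holding resident token portions
    ("host_b", 0x1000): {"mesh": "wing_grid", "payload": list(range(1_000))},
}


@dataclass
class MovingToken:
    function: str                  # function the receiving computer should run
    resident_location: tuple       # (host, address) of the resident portion


def execute(token):
    resident = resident_store[token.resident_location]
    print(f"running {token.function} on {resident['mesh']} "
          f"({len(resident['payload'])} values)")


execute(MovingToken("compute_loads", ("host_b", 0x1000)))
```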

  5. Have computers, will travel: providing on-site library instruction in rural health facilities using a portable computer lab.

    PubMed

    Neilson, Christine J

    2010-01-01

    The Saskatchewan Health Information Resources Partnership (SHIRP) provides library instruction to Saskatchewan's health care practitioners and students on placement in health care facilities as part of its mission to provide province-wide access to evidence-based health library resources. A portable computer lab was assembled in 2007 to provide hands-on training in rural health facilities that do not have computer labs of their own. Aside from some minor inconveniences, the introduction and operation of the portable lab has gone smoothly. The lab has been well received by SHIRP patrons and continues to be an essential part of SHIRP outreach.

  6. Effects of wind-energy facilities on grassland bird distributions

    USGS Publications Warehouse

    Shaffer, Jill A.; Buhl, Deb

    2016-01-01

    The contribution of renewable energy to meet worldwide demand continues to grow. Wind energy is one of the fastest growing renewable sectors, but new wind facilities are often placed in prime wildlife habitat. Long-term studies that incorporate a rigorous statistical design to evaluate the effects of wind facilities on wildlife are rare. We conducted a before-after-control-impact (BACI) assessment to determine if wind facilities placed in native mixed-grass prairies displaced breeding grassland birds. During 2003–2012, we monitored changes in bird density in 3 study areas in North Dakota and South Dakota (U.S.A.). We examined whether displacement or attraction occurred 1 year after construction (immediate effect) and the average displacement or attraction 2–5 years after construction (delayed effect). We tested for these effects overall and within distance bands of 100, 200, 300, and >300 m from turbines. We observed displacement for 7 of 9 species. One species was unaffected by wind facilities and one species exhibited attraction. Displacement and attraction generally occurred within 100 m and often extended up to 300 m. In a few instances, displacement extended beyond 300 m. Displacement and attraction occurred 1 year after construction and persisted at least 5 years. Our research provides a framework for applying a BACI design to displacement studies and highlights the erroneous conclusions that can be made without the benefit of adopting such a design. More broadly, species-specific behaviors can be used to inform management decisions about turbine placement and the potential impact to individual species. Additionally, the avoidance distance metrics we estimated can facilitate future development of models evaluating impacts of wind facilities under differing land-use scenarios.
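
    The BACI contrast at the heart of the study can be written down in a few lines: the displacement effect is the before-to-after change in density at impact sites minus the corresponding change at control sites. The densities in the sketch below are made up for illustration and are not the study's data.

```python
# BACI (difference-in-differences) contrast with made-up densities.
density = {
    # (site type, period): mean breeding density in birds per km^2
    ("impact",  "before"): 42.0,
    ("impact",  "after"):  31.0,
    ("control", "before"): 40.0,
    ("control", "after"):  39.0,
}

baci_effect = ((density[("impact", "after")] - density[("impact", "before")])
               - (density[("control", "after")] - density[("control", "before")]))
print(f"BACI effect: {baci_effect:+.1f} birds/km^2 (negative = displacement)")
```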

  7. High speed, wide area distributed computing for scientific imaging

    SciTech Connect

    Johnston, W.E.; Jacobson, V.L.; Loken, S.C.; Robertson, D.W.; Tierney, B.L.

    1992-09-01

    We present a scenario for a fully distributed computing environment in which computing, storage, and I/O elements are configured on demand into "virtual systems" that are optimal for the solution of a particular problem. We also present an experiment that illustrates some of the elements and issues of this scenario. The goal of this work is to make the most powerful computing systems those that are logically assembled from network based components, and to make those systems available independent of the geographic location of the constituent elements.

  9. The future of PanDA in ATLAS distributed computing

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.

  10. Integration of Cloud resources in the LHCb Distributed Computing

    NASA Astrophysics Data System (ADS)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack): it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.
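
    As an illustration of the kind of abstraction such an interware provides, the sketch below hides two cloud endpoints behind one common interface; the class and method names are hypothetical and are not VMDIRAC's actual API.

```python
# Illustrative only: a common interface over heterogeneous cloud endpoints,
# in the spirit of the cloud extension described above. Names are invented.
from abc import ABC, abstractmethod

class CloudEndpoint(ABC):
    @abstractmethod
    def start_vm(self, image: str, flavor: str) -> str: ...
    @abstractmethod
    def stop_vm(self, vm_id: str) -> None: ...

class OpenStackEndpoint(CloudEndpoint):
    def start_vm(self, image, flavor):
        # A real implementation would call the OpenStack API here.
        return f"openstack-vm-{image}-{flavor}"
    def stop_vm(self, vm_id):
        print(f"stopping {vm_id}")

class EC2Endpoint(CloudEndpoint):
    def start_vm(self, image, flavor):
        # A real implementation would call the EC2 API here.
        return f"ec2-vm-{image}-{flavor}"
    def stop_vm(self, vm_id):
        print(f"terminating {vm_id}")

def scale_out(endpoints, image, flavor, count):
    """Instantiate `count` VMs, round-robin across the available clouds."""
    return [endpoints[i % len(endpoints)].start_vm(image, flavor)
            for i in range(count)]

vms = scale_out([OpenStackEndpoint(), EC2Endpoint()], "slc6-worker", "m1.large", 4)
print(vms)
```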

  11. 78 FR 18353 - Guidance for Industry: Blood Establishment Computer System Validation in the User's Facility...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ... the Federal Register of October 29, 2007 (72 FR 61171), FDA announced the availability of the draft... HUMAN SERVICES Food and Drug Administration Guidance for Industry: Blood Establishment Computer System... ``Guidance for Industry: Blood Establishment Computer System Validation in the User's Facility'' dated...

  12. NNS computing facility manual P-17 Neutron and Nuclear Science

    SciTech Connect

    Hoeberling, M.; Nelson, R.O.

    1993-11-01

    This document describes basic policies and provides information and examples on using the computing resources provided by P-17, the Neutron and Nuclear Science (NNS) group. Information on user accounts, getting help, network access, electronic mail, disk drives, tape drives, printers, batch processing software, XSYS hints, PC networking hints, and Mac networking hints is given.

  13. Chandrasekhar equations and computational algorithms for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Ito, K.; Powers, R. K.

    1984-01-01

    The Chandrasekhar equations arising in optimal control problems for linear distributed parameter systems are considered. The equations are derived via approximation theory. This approach is used to obtain existence, uniqueness, and strong differentiability of the solutions and provides the basis for a convergent computation scheme for approximating feedback gain operators. A numerical example is presented to illustrate these ideas.

  14. MPWide: Light-weight communication library for distributed computing

    NASA Astrophysics Data System (ADS)

    Groen, Derek; Rieder, Steven; Grosso, Paola; de Laat, Cees; Portegies Zwart, Simon

    2012-12-01

    MPWide is a light-weight communication library for distributed computing. It is specifically developed to allow message passing over long-distance networks using path-specific optimizations. An early version of MPWide was used in the Gravitational Billion Body Project to allow simulations across multiple supercomputers.
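
    One idea behind long-distance message passing of this kind is to stripe a large message over several parallel streams so that the congestion window of a single path does not limit throughput. The sketch below illustrates that idea only; it is not MPWide's API, and it uses local socket pairs as stand-ins for wide-area links.

```python
# Illustration of striping one message over several parallel streams.
# socketpair() stands in for long-distance TCP connections.
import socket, threading

def recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        buf += sock.recv(n - len(buf))
    return buf

def send_chunk(sock, chunk):
    sock.sendall(len(chunk).to_bytes(4, "big") + chunk)

def recv_chunk(sock):
    n = int.from_bytes(recv_exact(sock, 4), "big")
    return recv_exact(sock, n)

message = bytes(range(256)) * 1000                        # payload to transfer
nstreams = 4
pairs = [socket.socketpair() for _ in range(nstreams)]    # stand-ins for WAN links

# Split the payload round-robin over the streams; send each part in a thread.
parts = [message[i::nstreams] for i in range(nstreams)]
senders = [threading.Thread(target=send_chunk, args=(tx, p))
           for (tx, _), p in zip(pairs, parts)]
for t in senders:
    t.start()

received = [recv_chunk(rx) for _, rx in pairs]
for t in senders:
    t.join()

# Reassemble by interleaving the per-stream parts back into the original order.
out = bytearray(len(message))
for i, part in enumerate(received):
    out[i::nstreams] = part
assert bytes(out) == message
```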

  15. Distributed Computing with Centralized Support Works at Brigham Young.

    ERIC Educational Resources Information Center

    McDonald, Kelly; Stone, Brad

    1992-01-01

    Brigham Young University (Utah) has addressed the need for maintenance and support of distributed computing systems on campus by implementing a program patterned after a national business franchise, providing the support and training of a centralized administration but allowing each unit to operate much as an independent small business.…

  16. SAGA: A standardized access layer to heterogeneous Distributed Computing Infrastructure

    NASA Astrophysics Data System (ADS)

    Merzky, Andre; Weidner, Ole; Jha, Shantenu

    2015-09-01

    Distributed Computing Infrastructure is characterized by interfaces that are heterogeneous, both syntactically and semantically. SAGA represents the most comprehensive community effort to date to address the heterogeneity by defining a simple, uniform access layer. In this paper, we describe the basic concepts underpinning its design and development. We also discuss RADICAL-SAGA, which is the most widely used implementation of SAGA.

  17. Distributed Educational Influence and Computer-Supported Collaborative Learning

    ERIC Educational Resources Information Center

    Coll, César; Bustos, Alfonso; Engel, Anna; de Gispert, Inés; Rochera, María José

    2013-01-01

    This article introduces a line of research on distributed educational influence (DEI) that has recently been developed by the research group to which the authors belong. The main hypothesis is that in computer-supported collaborative learning contexts, all participants are potential sources of educational influence (EI). According to this…

  18. EFFECTS OF MIXING AND AGING ON WATER QUALITY IN DISTRIBUTION SYSTEM STORAGE FACILITIES

    EPA Science Inventory

    Aging of water in distribution system storage facilities can lead to deterioration of the water quality due to loss of disinfectant residual and bacterial regrowth. Facilities should be operated to ensure that the age of the water is not excessive, taking into account the quality...

  19. Influence of computational fluid dynamics on experimental aerospace facilities: A fifteen year projection

    NASA Technical Reports Server (NTRS)

    1983-01-01

    An assessment was made of the impact of developments in computational fluid dynamics (CFD) on the traditional role of aerospace ground test facilities over the next fifteen years. With the improvements in CFD and more powerful scientific computers projected over this period, the capability is expected to exist to compute the flow over a complete aircraft at a unit cost three orders of magnitude lower than presently possible. Over the same period, improvements in ground test facilities will progress by application of computational techniques, including CFD, to data acquisition, facility operational efficiency, and simulation of the flight envelope; however, no dramatic change in unit cost is expected, as greater efficiency will be countered by higher energy and labor costs.

  20. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    SciTech Connect

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.

    2015-05-22

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, O(10^3) users, and ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as

  1. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE PAGESBeta

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; et al

    2015-05-22

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, O(10^3) users, and ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system.

  2. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.

    2015-05-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, O(10^3) users, and ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data’ (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. We

  3. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall parallel execution time T_par of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
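
    The limitation described here is the familiar Amdahl bound: with a sequential fraction s, the speedup on p processors cannot exceed 1 / (s + (1 - s) / p). A short worked example:

```python
# Worked example of the effect described above (Amdahl's law): if a fraction
# `serial_fraction` of the code is inherently sequential, the speedup on
# `processors` CPUs is bounded regardless of how fast the parallel part runs.
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Even with only 20% sequential code, 32 processors give less than 5x speedup.
for p in (4, 16, 32, 128):
    print(p, round(amdahl_speedup(0.20, p), 2))
```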

  4. Performance Assessment of OVERFLOW on Distributed Computing Environment

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Rizk, Yehia M.

    2000-01-01

    The aerodynamic computer code, OVERFLOW, with a multi-zone overset grid feature, has been parallelized to enhance its performance on distributed and shared memory paradigms. Practical application benchmarks have been set to assess the efficiency of the code's parallelism on high-performance architectures. The code's performance has also been examined in the context of the distributed computing paradigm on distant computer resources using the Information Power Grid (IPG) toolkit, Globus. Two parallel versions of the code, namely OVERFLOW-MPI and -MLP, have been developed around the natural coarse-grained parallelism inherent in a multi-zonal domain decomposition paradigm. The algorithm invokes a strategy that forms a number of groups, each consisting of a zone, a cluster of zones and/or a partition of a large zone. Each group can be thought of as a process with one or more threads assigned to it, and all groups run in parallel. The -MPI version of the code uses explicit message-passing based on the standard MPI library for sending and receiving interzonal boundary data across processors. The -MLP version employs no message-passing paradigm; the boundary data is transferred through shared memory. The -MPI code is suited for both distributed and shared memory architectures, while the -MLP code can only be used on shared memory platforms. The IPG applications are implemented by the -MPI code using the Globus toolkit. While a computational task is distributed across multiple computer resources, the parallelism can be explored on each resource alone. Performance studies were carried out with some practical aerodynamic problems with complex geometries, consisting of 2.5 up to 33 million grid points and a large number of zonal blocks. The computations were executed primarily on SGI Origin 2000 multiprocessors and on the Cray T3E. OVERFLOW's IPG applications are carried out on NASA homogeneous metacomputing machines located at three sites, Ames, Langley and Glenn. Plans

  5. Computational strategies for three-dimensional flow simulations on distributed computer systems

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-01-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.

  6. A Comprehensive Computer Package for Ambulatory Surgical Facilities

    PubMed Central

    Kessler, Robert R.

    1980-01-01

    Ambulatory surgical centers are a cost effective alternative to hospital surgery. Their increasing popularity has contributed to heavy case loads, an accumulation of vast amounts of medical and financial data and economic pressures to maintain a tight control over “cash flow”. Computerization is now a necessity to aid ambulatory surgical centers to maintain their competitive edge. An on-line system is especially necessary as it allows interactive scheduling of surgical cases, immediate access to financial data and rapid gathering of medical and statistical information. This paper describes the significant features of the computer package in use at the Salt Lake Surgical Center, which processes 500 cases per month.

  7. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling.
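
    A minimal illustration of the batch-processing pattern described above, using only Python's standard library rather than the Java Parallel Processing Framework used in the paper; run_realization is a hypothetical stand-in for generating one stochastic model and running the groundwater solver on it.

```python
# Illustrative sketch: run independent stochastic realizations in parallel.
from concurrent.futures import ProcessPoolExecutor
import random

def run_realization(seed: int) -> float:
    # Placeholder for generating one stochastic model and running the solver;
    # here we just return a random "capture-zone area" for the given seed.
    rng = random.Random(seed)
    return rng.uniform(0.5, 2.0)

if __name__ == "__main__":
    seeds = range(500)                       # 500 realizations, as in the paper
    with ProcessPoolExecutor(max_workers=8) as pool:
        areas = list(pool.map(run_realization, seeds))
    print(f"mean over {len(areas)} realizations: {sum(areas)/len(areas):.3f}")
```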

  8. Activities and operations of the Advanced Computing Research Facility, July-October 1986

    SciTech Connect

    Pieper, G.W.

    1986-01-01

    Research activities and operations of the Advanced Computing Research Facility (ACRF) at Argonne National Laboratory are discussed for the period from July 1986 through October 1986. The facility is currently supported by the Department of Energy, and is operated by the Mathematics and Computer Science Division at Argonne. Over the past four-month period, a new commercial multiprocessor, the Intel iPSC-VX/d4 hypercube, was installed. In addition, four other commercial multiprocessors continue to be available for research - an Encore Multimax, a Sequent Balance 21000, an Alliant FX/8, and an Intel iPSC/d5 - as well as a locally designed multiprocessor, the Lemur. These machines are being actively used by scientists at Argonne and throughout the nation in a wide variety of projects concerning computer systems with parallel and vector architectures. A variety of classes, workshops, and seminars have been sponsored to train researchers on computing techniques for the advanced computer systems at the Advanced Computing Research Facility. For example, courses were offered on writing programs for parallel computer systems, and the facility hosted the first annual Alliant users group meeting. A Sequent users group meeting and a two-day workshop on performance evaluation of parallel computers and programs are being organized.

  9. Common Accounting System for Monitoring the ATLAS Distributed Computing Resources

    NASA Astrophysics Data System (ADS)

    Karavakis, E.; Andreeva, J.; Campana, S.; Gayazov, S.; Jezequel, S.; Saiz, P.; Sargsyan, L.; Schovancova, J.; Ueda, I.; Atlas Collaboration

    2014-06-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources, either generic or ATLAS-specific. This set of tools provides high-quality, scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  10. Computed voltage distributions around solar electric propulsion spacecraft

    NASA Technical Reports Server (NTRS)

    Stevens, N. J.

    1979-01-01

    The NASA Charging Analyzer Program is used to conduct preliminary computations of the voltage distributions around such large spacecraft in geomagnetic substorm environments at geosynchronous altitudes. Both a standard operating voltage (+ or - 150 volts on solar arrays) and a direct-drive (+1200 volts on arrays) configuration are considered. Thruster-off simulations are computed for both operating voltage configurations, while simulated thruster-on conditions are evaluated only for the direct-drive configuration. These simulated thruster operations appear to alleviate surface charging.

  11. Computer software configuration management plan for 200 East/West Liquid Effluent Facilities

    SciTech Connect

    Graf, F.A. Jr.

    1995-02-27

    This computer software configuration management plan covers the control of the software for the monitor and control system that operates the Effluent Treatment Facility and its associated truck load-in station, as well as some key aspects of the Liquid Effluent Retention Facility, which stores condensate to be processed. Also controlled are the Treated Effluent Disposal System's pumping stations; the system also monitors waste generator flows in this system as well as in the Phase Two Effluent Collection System.

  12. Distributed storage and cloud computing: a test case

    NASA Astrophysics Data System (ADS)

    Piano, S.; Della Ricca, G.

    2014-06-01

    Since 2003 the computing farm hosted by the INFN Tier3 facility in Trieste has supported the activities of many scientific communities. Hundreds of jobs from 45 different VOs, including those of the LHC experiments, are processed simultaneously. Given that normally the requirements of the different computational communities are not synchronized, the probability that at any given time the resources owned by one of the participants are not fully utilized is quite high. A balanced compensation should in principle allocate the free resources to other users, but there are limits to this mechanism. In fact, the Trieste site may not hold the amount of data needed to attract enough analysis jobs, and even in that case there could be a lack of bandwidth for their access. The Trieste ALICE and CMS computing groups, in collaboration with other Italian groups, aim to overcome the limitations of existing solutions using two approaches: sharing the data among all the participants, taking full advantage of GARR-X wide area networks (10 GB/s), and integrating the resources dedicated to batch analysis with the ones reserved for dynamic interactive analysis, through modern solutions such as cloud computing.

  13. Computational Tools and Facilities for the Next-Generation Analysis and Design Environment

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)

    1997-01-01

    This document contains presentations from the joint UVA/NASA Workshop on Computational Tools and Facilities for the Next-Generation Analysis and Design Environment held at the Virginia Consortium of Engineering and Science Universities in Hampton, Virginia on September 17-18, 1996. The presentations focused on the computational tools and facilities for analysis and design of engineering systems, including, real-time simulations, immersive systems, collaborative engineering environment, Web-based tools and interactive media for technical training. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the level of maturity of a number of computational tools and facilities and their potential for application to the next-generation integrated design environment.

  14. Computer simulation on reconstruction of 3-D flame temperature distribution

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Yung, K. L.; Wu, Z.; Li, T.

    To measure non-symmetric, unsteady, three-dimensional temperature distributions in flames by simple, economical, fast and accurate means, and to apply a priori information to the measurement both sufficiently and efficiently, we conducted computer simulations. The simulation results showed that the finite series-expansion reconstruction method is more suitable for measuring temperature distributions in flames than the transform method widely used in medical scanning and nondestructive testing. By comparing the errors of simulations with different numbers of views, domain shapes, numbers of projections per view, angles of views, grid shapes, etc., we find that a circular domain, a triangular grid and a sufficient number of projections per view can improve the accuracy of the reconstructed 3-D temperature distribution with limited views. With six views, the errors caused by the reconstruction computation are reduced and are smaller than those caused by measurement. Therefore, a comparatively better means of measuring 3-D temperature distributions in flames with a limited number of projection views by emission tomography is achieved. Experimental results also showed that the method we used was appropriate for measurement of 3-D temperature distribution with a limited number of views [1].
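
    As an illustration, the algebraic reconstruction technique (ART) is one well-known member of the finite series-expansion family favoured above; a minimal sketch on a made-up 2x2 grid follows.

```python
# Minimal ART example: A is the projection matrix (one row per ray), b the
# measured projections, and x the discretised field on the grid.
import numpy as np

def art(A, b, n_iter=50, relax=0.5):
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):          # sweep over all rays
            if row_norms[i] == 0:
                continue
            residual = b[i] - A[i] @ x
            x += relax * residual / row_norms[i] * A[i]
    return x

# Tiny 2x2 grid, four rays (two row sums and two column sums).
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
true_field = np.array([1.0, 2.0, 3.0, 4.0])
x_rec = art(A, A @ true_field)
print(np.round(x_rec, 2))      # close to the true field
```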

  15. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  16. Parallel and distributed computation for fault-tolerant object recognition

    NASA Technical Reports Server (NTRS)

    Wechsler, Harry

    1988-01-01

    The distributed associative memory (DAM) model is suggested for distributed and fault-tolerant computation as it relates to object recognition tasks. The fault-tolerance is with respect to geometrical distortions (scale and rotation), noisy inputs, occlusion/overlap, and memory faults. An experimental system was developed for fault-tolerant structure recognition which shows the feasibility of such an approach. The approach is further extended to the problem of multisensory data integration and applied successfully to the recognition of colored polyhedral objects.

  17. A fission matrix based validation protocol for computed power distributions in the advanced test reactor

    SciTech Connect

    Nielsen, J. W.; Nigg, D. W.; LaPorta, A. W.

    2013-07-01

    The Idaho National Laboratory (INL) has been engaged in a significant multi-year effort to modernize the computational reactor physics tools and validation procedures used to support operations of the Advanced Test Reactor (ATR) and its companion critical facility (ATRC). Several new protocols for validation of computed neutron flux distributions and spectra, as well as for validation of computed fission power distributions, based on new experiments and well-recognized least-squares statistical analysis techniques, have been under development. In the case of power distributions, estimates of the a priori ATR-specific fuel element-to-element fission power correlation and covariance matrices are required for validation analysis. A practical method for generating these matrices using the element-to-element fission matrix is presented, along with a high-order scheme for estimating the underlying fission matrix itself. The proposed methodology is illustrated using the MCNP5 neutron transport code for the required neutronics calculations. The general approach is readily adaptable for implementation using any multidimensional stochastic or deterministic transport code that offers the required level of spatial, angular, and energy resolution in the computed solution for the neutron flux and fission source. (authors)
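
    For background, the basic fission-matrix relation that such methods build on can be illustrated in a few lines: if F[i, j] is the expected number of fissions produced in element i per fission in element j, the fundamental-mode element-wise fission source is the dominant eigenvector of F. The sketch below uses a made-up 3x3 matrix and is not the paper's correlation/covariance procedure.

```python
# Power iteration on a small, made-up fission matrix: the converged vector is
# the relative element-wise fission source, the eigenvalue approximates k-eff.
import numpy as np

F = np.array([[0.60, 0.20, 0.05],
              [0.25, 0.55, 0.20],
              [0.05, 0.20, 0.60]])

s = np.ones(F.shape[0])
for _ in range(200):
    s_new = F @ s
    k = s_new.sum() / s.sum()      # eigenvalue estimate
    s = s_new / s_new.sum()        # normalised fission source distribution

print("k estimate:", round(k, 4))
print("relative element powers:", np.round(s, 3))
```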

  18. Semiquantum key distribution with secure delegated quantum computation

    PubMed Central

    Li, Qin; Chan, Wai Hong; Zhang, Shengyu

    2016-01-01

    Semiquantum key distribution allows a quantum party to share a random key with a “classical” party who only can prepare and measure qubits in the computational basis or reorder some qubits when he has access to a quantum channel. In this work, we present a protocol where a secret key can be established between a quantum user and an almost classical user who only needs the quantum ability to access quantum channels, by securely delegating quantum computation to a quantum server. We show the proposed protocol is robust even when the delegated quantum server is a powerful adversary, and is experimentally feasible with current technology. As one party of our protocol is the most quantum-resource efficient, it can be more practical and significantly widen the applicability scope of quantum key distribution. PMID:26813384

  19. Semiquantum key distribution with secure delegated quantum computation.

    PubMed

    Li, Qin; Chan, Wai Hong; Zhang, Shengyu

    2016-01-01

    Semiquantum key distribution allows a quantum party to share a random key with a "classical" party who only can prepare and measure qubits in the computational basis or reorder some qubits when he has access to a quantum channel. In this work, we present a protocol where a secret key can be established between a quantum user and an almost classical user who only needs the quantum ability to access quantum channels, by securely delegating quantum computation to a quantum server. We show the proposed protocol is robust even when the delegated quantum server is a powerful adversary, and is experimentally feasible with current technology. As one party of our protocol is the most quantum-resource efficient, it can be more practical and significantly widen the applicability scope of quantum key distribution. PMID:26813384

  20. Semiquantum key distribution with secure delegated quantum computation

    NASA Astrophysics Data System (ADS)

    Li, Qin; Chan, Wai Hong; Zhang, Shengyu

    2016-01-01

    Semiquantum key distribution allows a quantum party to share a random key with a “classical” party who only can prepare and measure qubits in the computational basis or reorder some qubits when he has access to a quantum channel. In this work, we present a protocol where a secret key can be established between a quantum user and an almost classical user who only needs the quantum ability to access quantum channels, by securely delegating quantum computation to a quantum server. We show the proposed protocol is robust even when the delegated quantum server is a powerful adversary, and is experimentally feasible with current technology. As one party of our protocol is the most quantum-resource efficient, it can be more practical and significantly widen the applicability scope of quantum key distribution.

  1. Computer/information security design approaches for Complex 21/Reconfiguration facilities

    SciTech Connect

    Hunteman, W.J.; Zack, N.R.; Jaeger, C.D.

    1993-08-01

    Los Alamos National Laboratory and Sandia National Laboratories have been designated the technical lead laboratories to develop the design of the computer/information security, safeguards, and physical security systems for all of the DOE Complex 21/Reconfiguration facilities. All of the automated information processing systems and networks in these facilities will be required to implement the new DOE orders on computer and information security. The planned approach for a highly integrated information processing capability in each of the facilities will require careful consideration of the requirements in DOE Orders 5639.6 and 1360.2A. The various information protection requirements and user clearances within the facilities will also have a significant effect on the design of the systems and networks. Fulfilling the requirements for proper protection of the information and compliance with DOE orders will be possible because the computer and information security concerns are being incorporated in the early design activities. This paper will discuss the computer and information security issues addressed in the integrated design effort for the uranium/lithium, plutonium, and plutonium high explosive/assembly facilities.

  2. 41 CFR 101-26.503 - Multiple award schedule purchases made by GSA supply distribution facilities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 2 2010-07-01 2010-07-01 true Multiple award schedule... Multiple award schedule purchases made by GSA supply distribution facilities. GSA supply distribution... items. Stocking a variety of commercial, high-demand items purchased from FSS multiple award...

  3. LBNL Computational Research and Theory Facility Groundbreaking. February 1st, 2012

    ScienceCinema

    Yelick, Kathy

    2016-07-12

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  4. LBNL Computational Research & Theory Facility Groundbreaking - Full Press Conference. Feb 1st, 2012

    ScienceCinema

    Yelick, Kathy

    2016-07-12

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  5. LBNL Computational Research & Theory Facility Groundbreaking - Full Press Conference. Feb 1st, 2012

    SciTech Connect

    Yelick, Kathy

    2012-01-01

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  6. LBNL Computational Research and Theory Facility Groundbreaking. February 1st, 2012

    SciTech Connect

    Yelick, Kathy

    2012-01-01

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  7. Verification and translation of distributed computing system software design

    SciTech Connect

    Chen, J.N.

    1987-01-01

    A methodology for generating a distributed computing system application program from a design specification based on modified Petri nets is presented. There are four major stages in this methodology: (1) to build a structured graphics specification model, (2) to verify abstract data types and detect deadlock in the model, (3) to define communication among individual processes within the model, and (4) to translate the symbolic representation into a program in a specified high-level target language. In this dissertation, Ada is used as the specified high-level target language. The structured graphics promote intelligibility because hierarchical decomposition of functional modules is encouraged and the behavior of each process can be easily extracted from the net as a separate view of the system. The formal method described in this dissertation uses a symbolic representation of the design specification of distributed computing systems. This symbolic representation is then translated into an equivalent Ada program structure, especially with respect to the features of concurrency and synchronization. Artificial intelligence techniques are employed to verify properties and to detect deadlock in a distributed computing system environment. In the aspect of verification, the axioms of abstract data types are translated into PROLOG clauses and some queries are tested to prove the correctness of the abstract data types.

  8. A compositional reservoir simulator on distributed memory parallel computers

    SciTech Connect

    Rame, M.; Delshad, M.

    1995-12-31

    This paper presents the application of distributed memory parallel computers to field scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general purpose, highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed memory parallel machines (Intel iPSC/860 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes the porting to new parallel platforms straightforward. Results of the distributed memory computing performance of the parallel simulator are presented for field scale applications such as tracer flood and polymer flood. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented.
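
    A minimal sketch of the domain-decomposition idea described above: the grid is split among processors and each subdomain is padded with ghost cells so that stencils can be evaluated at subdomain edges after a neighbour exchange. Here the exchange is simulated in-process; a real simulator would use message passing between the processors.

```python
# 1-D domain decomposition with ghost cells (simulated, no message passing).
import numpy as np

grid = np.linspace(0.0, 1.0, 16)          # global 1-D field
nproc = 4
subdomains = np.array_split(grid, nproc)

def with_ghosts(rank):
    # Copy the neighbouring edge values (the "exchange"); copy own edge at the
    # physical boundaries as a simple boundary treatment.
    left = subdomains[rank - 1][-1] if rank > 0 else subdomains[rank][0]
    right = subdomains[rank + 1][0] if rank < nproc - 1 else subdomains[rank][-1]
    return np.concatenate(([left], subdomains[rank], [right]))

# Apply a 3-point averaging stencil on each subdomain independently.
local_results = [0.5 * (p[:-2] + p[2:]) for p in (with_ghosts(r) for r in range(nproc))]
assembled = np.concatenate(local_results)
print(assembled.shape)     # matches the global grid size: (16,)
```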

  9. Multi-VO support in IHEP's distributed computing environment

    NASA Astrophysics Data System (ADS)

    Yan, T.; Suo, B.; Zhao, X. H.; Zhang, X. M.; Ma, Z. T.; Yan, X. F.; Lin, T.; Deng, Z. Y.; Li, W. D.; Belov, S.; Pelevanyuk, I.; Zhemchugov, A.; Cai, H.

    2015-12-01

    Inspired by the success of BESDIRAC, the distributed computing environment based on DIRAC for the BESIII experiment, several other experiments operated by the Institute of High Energy Physics (IHEP), such as the Circular Electron Positron Collider (CEPC), the Jiangmen Underground Neutrino Observatory (JUNO), the Large High Altitude Air Shower Observatory (LHAASO) and the Hard X-ray Modulation Telescope (HXMT), among others, are willing to use DIRAC to integrate the geographically distributed computing resources available to their collaborations. In order to minimize manpower and hardware cost, we extended the BESDIRAC platform to support a multi-VO scenario, instead of setting up a self-contained distributed computing environment for each VO. This makes DIRAC a service for the community of those experiments. To support multi-VO operation, the system architecture of BESDIRAC is adjusted for scalability. The VOMS and DIRAC servers are reconfigured to manage users and groups belonging to several VOs. A lightweight storage resource manager, StoRM, is employed as the central SE to integrate local and grid data. A frontend system is designed for users' massive job splitting, submission and management, with plugins to support new VOs. A monitoring and accounting system is also considered to ease system administration and VO-related resource usage accounting.

  10. Distributing Data from Desktop to Hand-Held Computers

    NASA Technical Reports Server (NTRS)

    Elmore, Jason L.

    2005-01-01

    A system of server and client software formats and redistributes data from commercially available desktop computers to commercially available hand-held computers via both wired and wireless networks. This software is an inexpensive means of enabling engineers and technicians to gain access to current sensor data while working in locations in which such data would otherwise be inaccessible. The sensor data are first gathered by a data-acquisition server computer, then transmitted via a wired network to a data-distribution computer that executes the server portion of the present software. Data in all sensor channels -- both raw sensor outputs in millivolt units and results of conversion to engineering units -- are made available for distribution. Selected subsets of the data are transmitted to each hand-held computer via the wired and then a wireless network. The selection of the subsets and the choice of the sequences and formats for displaying the data are made by means of a user interface generated by the client portion of the software. The data displayed on the screens of hand-held units can be updated at rates from 1 to
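
    A small sketch of the server-side step described above: select the subset of channels chosen by one hand-held client and serialise an update message for transmission. The channel names, values and update rate are invented for illustration.

```python
# Illustrative server-side selection and serialisation of sensor channels.
import json, time, random

ALL_CHANNELS = {
    "tank_pressure": lambda: random.uniform(95.0, 105.0),     # engineering units
    "tank_pressure_mv": lambda: random.uniform(4.0, 5.0),     # raw millivolts
    "pump_speed": lambda: random.uniform(2900, 3100),
}

def snapshot(selected):
    """Build one update message containing only the client's chosen channels."""
    return json.dumps({name: round(ALL_CHANNELS[name](), 3) for name in selected})

client_selection = ["tank_pressure", "pump_speed"]
for _ in range(3):                      # the real loop runs continuously
    print(snapshot(client_selection))   # the real system sends this over the wireless link
    time.sleep(0.1)
```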

  11. Algorithm-dependent fault tolerance for distributed computing

    SciTech Connect

    P. D. Hough; M. e. Goldsby; E. J. Walsh

    2000-02-01

    Large-scale distributed systems assembled from commodity parts, like CPlant, have become common tools in the distributed computing world. Because of their size and diversity of parts, these systems are prone to failures. Applications that are being run on these systems have not been equipped to efficiently deal with failures, nor is there vendor support for fault tolerance. Thus, when a failure occurs, the application crashes. While most programmers make use of checkpoints to allow for restarting of their applications, this is cumbersome and incurs substantial overhead. In many cases, there are more efficient and more elegant ways in which to address failures. The goal of this project is to develop a software architecture for the detection of and recovery from faults in a cluster computing environment. The detection phase relies on the latest techniques developed in the fault tolerance community. Recovery is being addressed in an application-dependent manner, thus allowing the programmer to take advantage of algorithmic characteristics to reduce the overhead of fault tolerance. This architecture will allow large-scale applications to be more robust in high-performance computing environments that are comprised of clusters of commodity computers such as CPlant and SMP clusters.
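
    A toy illustration of algorithm-dependent recovery in the spirit described above: for a global reduction, only a failed worker's partial result needs to be recomputed, rather than restarting the whole job from a checkpoint. The failure here is simulated and all names are invented.

```python
# Simulated detection of failed workers and algorithm-aware recovery for a sum.
import random

def partial_sum(chunk):
    if random.random() < 0.3:              # simulated node failure
        raise RuntimeError("node failure")
    return sum(chunk)

data = list(range(1_000))
chunks = [data[i::8] for i in range(8)]    # work assigned to 8 "nodes"

total, failed = 0, []
for rank, chunk in enumerate(chunks):
    try:
        total += partial_sum(chunk)
    except RuntimeError:
        failed.append(chunk)               # detect the fault, remember the work

# Recovery: only the lost partial sums are recomputed (here, on a "survivor").
for chunk in failed:
    total += sum(chunk)

print(total == sum(data))                  # True
```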

  12. Lightweight distributed computing for intraoperative real-time image guidance

    NASA Astrophysics Data System (ADS)

    Suwelack, Stefan; Katic, Darko; Wagner, Simon; Spengler, Patrick; Bodenstedt, Sebastian; Röhl, Sebastian; Dillmann, Rüdiger; Speidel, Stefanie

    2012-02-01

    In order to provide real-time intraoperative guidance, computer assisted surgery (CAS) systems often rely on computationally expensive algorithms. The real-time constraint is especially challenging if several components such as intraoperative image processing, soft tissue registration or context aware visualization are combined in a single system. In this paper, we present a lightweight approach to distribute the workload over several workstations based on the OpenIGTLink protocol. We use XML-based message passing for remote procedure calls and native types for transferring data such as images, meshes or point coordinates. Two different, but typical scenarios are considered in order to evaluate the performance of the new system. First, we analyze a real-time soft tissue registration algorithm based on a finite element (FE) model. Here, we use the proposed approach to distribute the computational workload between a primary workstation that handles sensor data processing and visualization and a dedicated workstation that runs the real-time FE algorithm. We show that the additional overhead that is introduced by the technique is small compared to the total execution time. Furthermore, the approach is used to speed up a context aware augmented reality based navigation system for dental implant surgery. In this scenario, the additional delay for running the computationally expensive reasoning server on a separate workstation is less than a millisecond. The results show that the presented approach is a promising strategy to speed up real-time CAS systems.
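
    A small sketch of the XML-based remote procedure call messages described above; the element and attribute names are invented, and a real deployment would carry the message as an OpenIGTLink payload between workstations.

```python
# Build and dispatch a minimal XML remote-procedure-call message.
import xml.etree.ElementTree as ET

def make_rpc(method: str, **params) -> bytes:
    root = ET.Element("rpc", attrib={"method": method})
    for key, value in params.items():
        ET.SubElement(root, "param", attrib={"name": key}).text = str(value)
    return ET.tostring(root, encoding="utf-8")

def handle_rpc(payload: bytes) -> str:
    root = ET.fromstring(payload)
    args = {p.get("name"): p.text for p in root.findall("param")}
    return f"dispatch {root.get('method')} with {args}"

msg = make_rpc("updateRegistration", mesh_id="liver_fe_model", max_iterations=20)
print(handle_rpc(msg))
```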

  13. Computational Analyses in Support of Sub-scale Diffuser Testing for the A-3 Facility. Part 1; Steady Predictions

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel C.; Graham, Jason S.; Ahuja, Vineet; Hosangadi, Ashvin

    2010-01-01

    levels in CFD-based flowpath modeling of the facility. The analysis tools used here expand on the multi-element unstructured CFD which has been tailored and validated for impingement dynamics of dry plumes, complex valve/feed systems, and high pressure propellant delivery systems used in engine and component test stands at NASA SSC. The analyses performed in the evaluation of the sub-scale diffuser facility explored several important factors that influence modeling and understanding of facility operation, such as (a) the importance of modeling the facility with a Real Gas approximation, (b) approximating the cluster of steam ejector nozzles as a single annular nozzle, (c) the existence of mixed subsonic/supersonic flow downstream of the turning duct, and (d) the inadequacy of two-equation turbulence models in predicting the correct pressurization in the turning duct and expansion of the second stage steam ejectors. The procedure used for modeling the facility was as follows: (i) the engine, test cell and first stage ejectors were simulated with an axisymmetric approximation; (ii) the turning duct, second stage ejectors and the piping downstream of the second stage ejectors were analyzed with a three-dimensional simulation utilizing a half-plane symmetry approximation. The solution, i.e., the primitive variables such as pressure, velocity components, temperature and turbulence quantities, was passed from the first computational domain and specified as a supersonic boundary condition for the second simulation. (iii) The third domain comprised the exit diffuser and the region in the vicinity of the facility (primarily included to get the correct shock structure at the exit of the facility and the entrainment characteristics). The first set of simulations, comprising the engine, test cell and first stage ejectors, was carried out both as a turbulent real gas calculation as well as a turbulent perfect gas calculation. A comparison for the two cases (Real Turbulent and Perfect gas turbulent) of the Ma

  14. Distributed Computation Resources for Earth System Grid Federation (ESGF)

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Doutriaux, C.; Williams, D. N.

    2014-12-01

    The Intergovernmental Panel on Climate Change (IPCC), prompted by the United Nations General Assembly, has published a series of papers in their Fifth Assessment Report (AR5) on processes, impacts, and mitigations of climate change in 2013. The science used in these reports was generated by an international group of domain experts. They studied various scenarios of climate change through the use of highly complex computer models to simulate the Earth's climate over long periods of time. The resulting total data of approximately five petabytes are stored in a distributed data grid known as the Earth System Grid Federation (ESGF). Through the ESGF, consumers of the data can find and download data with limited capabilities for server-side processing. The Sixth Assessment Report (AR6) is already in the planning stages and is estimated to create as much as two orders of magnitude more data than the AR5 distributed archive. It is clear that data analysis capabilities currently in use will be inadequate to allow for the necessary science to be done with AR6 data—the data will just be too big. A major paradigm shift from downloading data to local systems to perform data analytics must evolve to moving the analysis routines to the data and performing these computations on distributed platforms. In preparation for this need, the ESGF has started a Compute Working Team (CWT) to create solutions that allow users to perform distributed, high-performance data analytics on the AR6 data. The team will be designing and developing a general Application Programming Interface (API) to enable highly parallel, server-side processing throughout the ESGF data grid. This API will be integrated with multiple analysis and visualization tools, such as the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT), netCDF Operator (NCO), and others. This presentation will provide an update on the ESGF CWT's overall approach toward enabling the necessary storage proximal computational

  15. Successful initiation of and management through a distributed computer upgrade

    SciTech Connect

    Barich, F.T.; Crawford, T.H.

    1995-12-31

    Processing capacity limitations, the lack of data analysis tools, obsolescence, and spare parts issues are forcing utilities to upgrade or replace their plant computer systems with newer, larger systems. As a result, the utility faces an increasing number of new technologies, such as fiber optics and communication standards (FDDI, ATM, etc.), graphical user interfaces using X-Windows, and distributed architectures that eliminate the host-based computer. Technologies such as these, if properly applied, can greatly enhance the capabilities and functions of the existing system. Besides this, the utility also faces functionality previously not available through the plant computer, such as integrated plant monitoring and digital controls, voice, imaging, etc. With computing technology vastly changing from traditional host systems, the utility confronts the question, "what are my needs (now and for the future), and what new system can meet those needs most effectively?". This paper describes the management process necessary to define the needs and then carry out a successful computer replacement project.

  16. Secure distributed genome analysis for GWAS and sequence comparison computation

    PubMed Central

    2015-01-01

    Background The rapid increase in the availability and volume of genomic data makes significant advances in biomedical research possible, but sharing of genomic data poses challenges due to the highly sensitive nature of such data. To address the challenges, a competition for secure distributed processing of genomic data was organized by the iDASH research center. Methods In this work we propose techniques for securing computation with real-life genomic data for minor allele frequency and chi-squared statistics computation, as well as distance computation between two genomic sequences, as specified by the iDASH competition tasks. We put forward novel optimizations, including a generalization of a version of mergesort, which might be of independent interest. Results We provide implementation results of our techniques based on secret sharing that demonstrate practicality of the suggested protocols and also report on performance improvements due to our optimization techniques. Conclusions This work describes our techniques, findings, and experimental results developed and obtained as part of iDASH 2015 research competition to secure real-life genomic computations and shows feasibility of securely computing with genomic data in practice. PMID:26733307
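
    The competition tasks themselves are not reproduced in this record; as a minimal, hypothetical illustration of the kind of secret-sharing arithmetic such protocols rest on (not the authors' optimized protocol), the sketch below splits per-site allele counts into additive shares modulo a prime so that three servers can jointly compute a minor allele frequency without any single server seeing the raw counts.

```python
"""Minimal additive secret-sharing sketch (illustrative only, not the iDASH protocol)."""
import random

PRIME = 2**61 - 1  # large prime modulus (illustrative choice)

def share(value, n_parties=3):
    """Split an integer into n additive shares modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine additive shares into the original value."""
    return sum(shares) % PRIME

# Hypothetical per-site minor allele counts and total allele counts.
minor_counts = [12, 40, 7]
total_counts = [200, 500, 100]

# Each site shares its counts; server i only ever sees the i-th shares.
minor_shares = [share(c) for c in minor_counts]
total_shares = [share(c) for c in total_counts]

# Each server sums the shares it holds (a purely local computation).
server_minor = [sum(s[i] for s in minor_shares) % PRIME for i in range(3)]
server_total = [sum(s[i] for s in total_shares) % PRIME for i in range(3)]

# Only the aggregate sums are reconstructed and revealed.
maf = reconstruct(server_minor) / reconstruct(server_total)
print(f"minor allele frequency = {maf:.4f}")  # 59 / 800 = 0.0737...
```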

  17. A digital computer propulsion control facility: Description of capabilities and summary of experimental program results

    NASA Technical Reports Server (NTRS)

    Zeller, J. R.; Arpasi, D. J.; Lehtinen, B.

    1976-01-01

    Flight weight digital computers are being used today to carry out many of the propulsion system control functions previously delegated exclusively to hydromechanical controllers. An operational digital computer facility for propulsion control mode studies has been used successfully in several experimental programs. This paper describes the system and some of the results concerned with engine control, inlet control, and inlet engine integrated control. Analytical designs for the digital propulsion control modes include both classical and modern/optimal techniques.

  18. The CT Scanner Facility at Stellenbosch University: An open access X-ray computed tomography laboratory

    NASA Astrophysics Data System (ADS)

    du Plessis, Anton; le Roux, Stephan Gerhard; Guelpa, Anina

    2016-10-01

    The Stellenbosch University CT Scanner Facility is an open access laboratory providing non-destructive X-ray computed tomography (CT) and high-performance image analysis services as part of the Central Analytical Facilities (CAF) of the university. Based in Stellenbosch, South Africa, this facility offers open access to the general user community, including local researchers, companies and also remote users (both local and international, via sample shipment and data transfer). The laboratory hosts two CT instruments: a micro-CT system and a nano-CT system. A workstation-based Image Analysis Centre is equipped with numerous computers with data analysis software packages, which are at the disposal of the facility users, along with expert supervision, if required. All research disciplines are accommodated at the X-ray CT laboratory, provided that non-destructive analysis will be beneficial. During its first four years, the facility has accommodated more than 400 unique users (33 in 2012; 86 in 2013; 154 in 2014; 140 in 2015; 75 in the first half of 2016), with diverse industrial and research applications using X-ray CT as a means. This paper summarises the laboratory's first four years by way of selected examples, both from published and unpublished projects. In the process a detailed description of the capabilities and facilities available to users is presented.

  19. MIP models for connected facility location: A theoretical and computational study☆

    PubMed Central

    Gollowitzer, Stefan; Ljubić, Ivana

    2011-01-01

    This article comprises the first theoretical and computational study on mixed integer programming (MIP) models for the connected facility location problem (ConFL). ConFL combines facility location and Steiner trees: given a set of customers, a set of potential facility locations and some inter-connection nodes, ConFL searches for the minimum-cost way of assigning each customer to exactly one open facility, and connecting the open facilities via a Steiner tree. The costs needed for building the Steiner tree, facility opening costs and the assignment costs need to be minimized. We model ConFL using seven compact and three mixed integer programming formulations of exponential size. We also show how to transform ConFL into the Steiner arborescence problem. A full hierarchy between the models is provided. For two exponential size models we develop a branch-and-cut algorithm. An extensive computational study is based on two benchmark sets of randomly generated instances with up to 1300 nodes and 115,000 edges. We empirically compare the presented models with respect to the quality of obtained bounds and the corresponding running time. We report optimal values for all but 16 instances for which the obtained gaps are below 0.6%. PMID:25009366
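
    For readers unfamiliar with ConFL, a bare-bones assignment-plus-tree statement of the objective described above can be written as follows. This is a generic textbook-style sketch under assumed notation, not one of the ten formulations studied in the article.

```latex
% Generic ConFL sketch (assumed notation, not one of the paper's models):
% y_i = 1 if facility i is opened, x_{ij} = 1 if customer j is assigned to facility i,
% z_e = 1 if edge e is used in the Steiner tree connecting the open facilities.
\begin{align*}
\min\ & \sum_{i\in F} f_i\, y_i \;+\; \sum_{i\in F}\sum_{j\in C} a_{ij}\, x_{ij} \;+\; \sum_{e\in E} c_e\, z_e\\
\text{s.t. } & \sum_{i\in F} x_{ij} = 1 \quad \forall j\in C \quad\text{(each customer assigned exactly once)}\\
 & x_{ij} \le y_i \quad \forall i\in F,\ j\in C \quad\text{(assignments only to open facilities)}\\
 & \{e\in E : z_e = 1\}\ \text{forms a Steiner tree spanning}\ \{i\in F : y_i = 1\},\\
 & x_{ij},\, y_i,\, z_e \in \{0,1\}.
\end{align*}
```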

  20. Computer Use and CAD in Assisting Schools in the Creation of Facilities.

    ERIC Educational Resources Information Center

    Beach, Robert H.; Essex, Nathan

    1987-01-01

    Computer-aided design (CAD) programs are powerful drafting tools, but are also able to assist with many other facility planning functions. Describes the hardware, software, and the learning process that led to understanding the CAD software at the University of Alabama. (MLF)

  1. 300 Area Treated Effluent Disposal Facility computer software release cover sheet and revision record

    SciTech Connect

    McCarthy, R.J.

    1994-11-28

    This supporting document contains the computer software release cover sheet and revision records for the 300 Area Treated Effluent Disposal Facility (TEDF). The previous revision was controlled by CH2M Hill which developed the software. A 7-page listing of the contents of directory C:\TEDF is contained in this report.

  2. A Distributed Computing Infrastructure for Computational Thermodynamic Calculations of Solid-Liquid Phase Equilibria

    NASA Astrophysics Data System (ADS)

    Ghiorso, M. S.; Kress, V. C.

    2004-12-01

    Software tools like MELTS (Ghiorso and Sack, 1995, CMP 119:197) and its derivatives (Ghiorso et al., 2002, G3 3:10.1029/2001GC000217) are sophisticated calculators used by geoscientists to quantify the chemistry of melt production, transport and storage. These tools utilize computational thermodynamics to evaluate the equilibrium state of the system under specified external conditions by minimizing a suitably constructed thermodynamic potential. Like any thermodynamically based tool, the principal advantage in employing these techniques to model igneous processes is the intrinsic ability to couple the chemistry and energetics of the evolution of the system in a self consistent and rigorous formalism. Access to MELTS is normally accomplished via a standalone X11-based executable or as a Java-based web applet. The latter is a dedicated client-server application rooted at the University of Chicago. Our on-going objective is the development of a distributed computing infrastructure to provide "MELTS-like" computations on demand to remote network users by utilizing a language independent client-server protocol based on CORBA. The advantages of this model are numerous. First, the burden of implementing and executing MELTS computations is centralized with a software implementation optimized to a compute cluster dedicated for that purpose. Improvements and updates to MELTS software are handled locally on the server side without intervention of the user and the server-model lessens the burden of supporting the computational code on a variety of hardware and OS platforms. Second, the client hardware platform does not incur the computational cost of performing a MELTS simulation and the remote user can focus on the task of incorporating results into their model. Third, the client user can write software in a computer language of their choosing and procedural calls to the MELTS library can be executed transparently over the network as if a local language-compatible library of

  3. Design and implementation of a support platform for distributed mobile computing

    NASA Astrophysics Data System (ADS)

    Schill, A.; Kummel, S.

    1995-09-01

    With the rapid development of mobile computer systems and mobile communication infrastructures, a broad field of distributed mobile computing is enabled. The paper first discusses these developments in closer detail and summarizes the resulting requirements concerning adequate software support. An application scenario of the service engineering area illustrates specific aspects including bandwidth and location management, dynamic configuration, resource heterogeneity, disconnection, and security. Based on these considerations, a generic software support platform for distributed mobile computing is derived. It addresses several of these aspects by providing application-independent and reusable support services. In particular, it offers a framework for organizing distributed mobile applications into manageable domains, it equips mobile stations with enhanced functionality for location, resource and bandwidth management, and it uses industry standard RPC communication facilities for enhanced portability. The design, implementation and use of the support platform is illustrated based on a specific part of the application, a mobile multimedia e-mail system. Experiences and implementation aspects in this context are particularly emphasized.

  4. Parallel matrix transpose algorithms on distributed memory concurrent computers

    SciTech Connect

    Choi, J.; Walker, D.W.; Dongarra, J.J.

    1993-10-01

    This paper describes parallel matrix transpose algorithms on distributed memory concurrent processors. It is assumed that the matrix is distributed over a P x Q processor template with a block scattered data distribution. P, Q, and the block size can be arbitrary, so the algorithms have wide applicability. The communication schemes of the algorithms are determined by the greatest common divisor (GCD) of P and Q. If P and Q are relatively prime, the matrix transpose algorithm involves complete exchange communication. If P and Q are not relatively prime, processors are divided into GCD groups and the communication operations are overlapped for different groups of processors. Processors transpose GCD wrapped diagonal blocks simultaneously, and the matrix can be transposed with LCM/GCD steps, where LCM is the least common multiple of P and Q. The algorithms make use of non-blocking, point-to-point communication between processors. The use of nonblocking communication allows a processor to overlap the messages that it sends to different processors, thereby avoiding unnecessary synchronization. Combined with the matrix multiplication routine, C = A·B, the algorithms are used to compute parallel multiplications of transposed matrices, C = A^T·B^T, in the PUMMA package. Details of the parallel implementation of the algorithms are given, and results are presented for runs on the Intel Touchstone Delta computer.
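
    As a small illustration of the communication structure described above (not the PUMMA implementation itself), the helper below reports, for a given P x Q template, whether the transpose reduces to a complete exchange and how many LCM/GCD communication steps are involved.

```python
"""Illustrative helper for the P x Q transpose communication pattern.

This only reports the group structure and step count described in the abstract;
it is not the PUMMA transpose algorithm itself.
"""
from math import gcd

def transpose_schedule(P, Q):
    g = gcd(P, Q)
    lcm = P * Q // g
    return {
        "gcd_groups": g,              # processors split into GCD groups
        "steps": lcm // g,            # matrix transposed in LCM/GCD steps
        "complete_exchange": g == 1,  # relatively prime -> complete exchange
    }

for P, Q in [(4, 6), (3, 5), (8, 8)]:
    print(P, Q, transpose_schedule(P, Q))
# (4, 6): 2 groups, 6 steps; (3, 5): complete exchange, 15 steps; (8, 8): 1 step
```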

  5. Facility management of computer-aided design, drafting/manufacturing systems (CADD/M)

    SciTech Connect

    Norton, F.J.

    1980-09-23

    Interactive Computer-Aided Design Drafting/Manufacturing systems have been installed in thousands of companies, applying CADD/M capabilities to many applications. This has been done with varying degrees of success even among companies with identical applications. Investigation of individual companies reveals a gap between the capabilities of CADD/M systems and the actual usage by industry of those capabilities. This company usage often determines the degree of success or failure of an interactive graphics facility and is largely controlled by management. The responsibilities of the interactive graphics facility management team are explained in detail. Proper management of a CADD/M facility is more critical to the success or failure of the facility than any other factor.

  6. The Gain of Resource Delegation in Distributed Computing Environments

    NASA Astrophysics Data System (ADS)

    Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander

    In this paper, we address job scheduling in Distributed Computing Infrastructures, that is, loosely coupled networks of autonomously acting High Performance Computing systems. In contrast to the common approach of mutual workload exchange, we consider the more intuitive operator's viewpoint of load-dependent resource reconfiguration. In case of a site's over-utilization, the scheduling system is able to lease resources from other sites to keep up service quality for its local user community. Conversely, the granting of idle resources can increase utilization in times of low local workload and thus ensure higher efficiency. The evaluation considers real workload data and is done with respect to common service quality indicators. For two simple resource exchange policies and three basic setups we show the possible gain of this approach and analyze the dynamics in workload-adaptive reconfiguration behavior.
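
    As a toy illustration of the load-dependent reconfiguration idea (the paper's actual exchange policies, workloads, and thresholds are not reproduced here), the sketch below decides whether a site should try to lease remote resources or grant idle ones, based on simple utilization thresholds; the threshold values are arbitrary assumptions.

```python
"""Toy load-dependent lease/grant policy (illustrative thresholds, not the paper's)."""

def reconfiguration_decision(queued_jobs, busy_nodes, total_nodes,
                             lease_threshold=0.9, grant_threshold=0.5):
    """Return 'lease', 'grant', or 'hold' for a single scheduling interval."""
    utilization = busy_nodes / total_nodes
    if utilization >= lease_threshold and queued_jobs > 0:
        return "lease"   # over-utilized: ask partner sites for resources
    if utilization <= grant_threshold and queued_jobs == 0:
        return "grant"   # under-utilized: offer idle nodes to partner sites
    return "hold"        # keep the current configuration

print(reconfiguration_decision(queued_jobs=25, busy_nodes=95, total_nodes=100))  # lease
print(reconfiguration_decision(queued_jobs=0,  busy_nodes=30, total_nodes=100))  # grant
```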

  7. Overset grid applications on distributed memory MIMD computers

    NASA Technical Reports Server (NTRS)

    Chawla, Kalpana; Weeratunga, Sisira

    1994-01-01

    Analysis of modern aerospace vehicles requires the computation of flowfields about complex three dimensional geometries composed of regions with varying spatial resolution requirements. Overset grid methods allow the use of proven structured grid flow solvers to address the twin issues of geometrical complexity and the resolution variation by decomposing the complex physical domain into a collection of overlapping subdomains. This flexibility is accompanied by the need for irregular intergrid boundary communication among the overlapping component grids. This study investigates a strategy for implementing such a static overset grid implicit flow solver on distributed memory, MIMD computers; i.e., the 128 node Intel iPSC/860 and the 208 node Intel Paragon. Performance data for two composite grid configurations characteristic of those encountered in present day aerodynamic analysis are also presented.

  8. Distributed Computing and MEMS Accelerometers: The Quake Catcher Network

    NASA Astrophysics Data System (ADS)

    Lawrence, J. F.; Cochran, E. S.; Christensen, C.; Jakka, R. S.

    2008-12-01

    Recent advances in distributed computing provide exciting opportunities for seismic data collection. We are in the early stages of implementing a high density, low cost strong-motion network for rapid response and early warning by placing accelerometers in schools, homes, offices, government buildings, fire houses and more. The Quake Catcher Network (QCN) employs existing networked laptops and desktops to form a dense, distributed computing seismic network. Costs for this network are minimal because the QCN uses 1) strong motion sensors (accelerometers) already internal to many laptops and 2) low-cost universal serial bus (USB) accelerometers for use with desktops. The Berkeley Open Infrastructure for Network Computing (BOINC!) provides a free, proven paradigm for involving the public in large-scale computational research projects. The QCN leverages public participation to fully implement the seismic network. As such engaging the public to participate in seismic data collection is not only an integral part of the project, but an added value to the QCN. The software provides the client-user with a screen-saver displaying seismic data recorded on their laptop or recently detected earthquakes. Furthermore, this project installs sensors in K-12 classrooms as an educational tool for teaching science. Through a variety of interactive experiments students can learn about earthquakes and the hazards earthquakes pose. In the first six months of limited release of the QCN software, we successfully received triggers and waveforms from laptops near the M 4.7 April 25, 2008 earthquake in Reno, Nevada and the M 5.4 July 29, 2008 earthquake in Chino, California (as well as a few 3.6 and higher events). This fall we continued to expand the network further by installing seismometers in K-12 schools, museums, and government buildings in the greater Los Angeles basin and the San Francisco Bay Area. By summer 2009 we expect to have 1000 USB sensors deployed in California, in addition

  9. KNET - DISTRIBUTED COMPUTING AND/OR DATA TRANSFER PROGRAM

    NASA Technical Reports Server (NTRS)

    Hui, J.

    1994-01-01

    KNET facilitates distributed computing between a UNIX compatible local host and a remote host which may or may not be UNIX compatible. It is capable of automatic remote login. That is, it performs on the user's behalf the chore of handling host selection, user name, and password to the designated host. Once the login has been successfully completed, the user may interactively communicate with the remote host. Data output from the remote host may be directed to the local screen, to a local file, and/or to a local process. Conversely, data input from the keyboard, a local file, or a local process may be directed to the remote host. KNET takes advantage of the multitasking and terminal mode control features of the UNIX operating system. A parent process is used as the upper layer for interfacing with the local user. A child process is used for a lower layer for interfacing with the remote host computer, and optionally one or more child processes can be used for the remote data output. Output may be directed to the screen and/or to the local processes under the control of a data pipe switch. In order for KNET to operate, the local and remote hosts must observe a common communications protocol. KNET is written in ANSI standard C-language for computers running UNIX. It has been successfully implemented on several Sun series computers and a DECstation 3100 and used to run programs remotely on VAX VMS and UNIX based computers. It requires 100K of RAM under SunOS and 120K of RAM under DEC RISC ULTRIX. An electronic copy of the documentation is provided on the distribution medium. The standard distribution medium for KNET is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. KNET was developed in 1991 and is a copyrighted work with all copyright vested in NASA. UNIX is a registered trademark of AT&T Bell Laboratories. Sun and SunOS are trademarks of Sun Microsystems, Inc. DECstation, VAX, VMS, and
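
    As a rough sketch of the layering described above (not KNET's actual source, which is distributed separately on tape or diskette), the code below uses a lower-layer thread that reads data from a remote socket and a "data pipe switch" that routes each line to the screen, a local file, or both, while the upper layer forwards keyboard input; the host, port, and switch settings are hypothetical.

```python
"""Sketch of a two-layer KNET-style client (hypothetical host/port, not KNET itself)."""
import socket
import threading

REMOTE = ("remote.example.org", 7000)                  # hypothetical remote host/port
PIPE_SWITCH = {"screen": True, "file": "session.log"}  # route output to screen and/or file

def lower_layer(sock, switch):
    """Lower layer: read lines from the remote host and fan them out."""
    log = open(switch["file"], "a") if switch.get("file") else None
    with sock.makefile("r") as remote:
        for line in remote:
            if switch.get("screen"):
                print(line, end="")
            if log:
                log.write(line)
    if log:
        log.close()

def upper_layer(sock):
    """Upper layer: forward local keyboard input to the remote host."""
    try:
        while True:
            sock.sendall((input() + "\n").encode())
    except EOFError:
        sock.shutdown(socket.SHUT_WR)

if __name__ == "__main__":
    s = socket.create_connection(REMOTE)
    threading.Thread(target=lower_layer, args=(s, PIPE_SWITCH), daemon=True).start()
    upper_layer(s)
```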

  10. Job monitoring on DIRAC for Belle II distributed computing

    NASA Astrophysics Data System (ADS)

    Kato, Yuji; Hayasaka, Kiyoshi; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo

    2015-12-01

    We developed a monitoring system for Belle II distributed computing, which consists of active and passive methods. In this paper we describe the passive monitoring system, where information stored in the DIRAC database is processed and visualized. We divide the DIRAC workload management flow into steps and store characteristic variables which indicate issues. These variables are chosen carefully based on our experiences, then visualized. As a result, we are able to effectively detect issues. Finally, we discuss the future development for automating log analysis, notification of issues, and disabling problematic sites.

  11. Repeat-until-success linear optics distributed quantum computing.

    PubMed

    Lim, Yuan Liang; Beige, Almut; Kwek, Leong Chuan

    2005-07-15

    We demonstrate the possibility of performing distributed quantum computing using only single-photon sources (atom-cavity-like systems), linear optics, and photon detectors. The qubits are encoded in stable ground states of the sources. To implement a universal two-qubit gate, two photons should be generated simultaneously and pass through a linear optics network, where a measurement is performed on them. Gate operations can be repeated until a success is heralded without destroying the qubits at any stage of the operation. In contrast with other schemes, this requires neither explicit qubit-qubit interactions, a priori entangled ancillas, nor the feeding of photons into photon sources.

  12. A distributed computing tool for generating neural simulation databases.

    PubMed

    Calin-Jageman, Robert J; Katz, Paul S

    2006-12-01

    After developing a model neuron or network, it is important to systematically explore its behavior across a wide range of parameter values or experimental conditions, or both. However, compiling a very large set of simulation runs is challenging because it typically requires both access to and expertise with high-performance computing facilities. To lower the barrier for large-scale model analysis, we have developed NeuronPM, a client/server application that creates a "screen-saver" cluster for running simulations in NEURON (Hines & Carnevale, 1997). NeuronPM provides a user-friendly way to use existing computing resources to catalog the performance of a neural simulation across a wide range of parameter values and experimental conditions. The NeuronPM client is a Windows-based screen saver, and the NeuronPM server can be hosted on any Apache/PHP/MySQL server. During idle time, the client retrieves model files and work assignments from the server, invokes NEURON to run the simulation, and returns results to the server. Administrative panels make it simple to upload model files, define the parameters and conditions to vary, and then monitor client status and work progress. NeuronPM is open-source freeware and is available for download at http://neuronpm.homeip.net . It is a useful entry-level tool for systematically analyzing complex neuron and network simulations.
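
    The server protocol is only summarized in the abstract, so the sketch below mirrors the general client loop it outlines: fetch a work assignment from an HTTP server, run the simulator, and post the result back. The URL, JSON fields, and endpoints are hypothetical placeholders, not NeuronPM's actual interface; the NEURON invocation is illustrative.

```python
"""Sketch of a NeuronPM-style polling client (endpoints and fields are hypothetical)."""
import json
import subprocess
import time
import urllib.request

SERVER = "http://cluster.example.org/neuronpm"   # hypothetical work server

def fetch_assignment():
    with urllib.request.urlopen(f"{SERVER}/next_job") as resp:
        return json.load(resp)           # e.g. {"id": 42, "hoc_file": "model.hoc"}

def run_simulation(job):
    # Placeholder: invoke the simulator on the downloaded model file.
    subprocess.run(["nrniv", job["hoc_file"]], check=True)
    return {"id": job["id"], "status": "done"}

def report(result):
    data = json.dumps(result).encode()
    req = urllib.request.Request(f"{SERVER}/results", data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

if __name__ == "__main__":
    while True:                           # loop during idle time
        try:
            report(run_simulation(fetch_assignment()))
        except Exception as exc:          # no work or network hiccup: back off and retry
            print("idle:", exc)
            time.sleep(60)
```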

  13. Feasibility Study for a Remote Terminal Central Computing Facility Serving School and College Institutions. Volume I, Functional Requirements.

    ERIC Educational Resources Information Center

    International Business Machines Corp., White Plains, NY.

    The economic and technical feasibility of providing a remote terminal central computing facility to serve a group of 25-75 secondary schools and colleges was investigated. The general functions of a central facility for an educational cluster were defined to include training in computer techniques, the solution of student development problems in…

  14. Performance Evaluation of Communication Software Systems for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod

    1996-01-01

    In recent years there has been an increasing interest in object-oriented distributed computing since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed and compared. These systems are: the BSD socket programming interface, IONA's Orbix, an implementation of the CORBA specification, and the PVM message passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.

  15. A uniform approach for programming distributed heterogeneous computing systems

    PubMed Central

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-01-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. PMID:25844015

  16. A Fruitful Collaboration between ESO and the Max Planck Computing and Data Facility

    NASA Astrophysics Data System (ADS)

    Fourniol, N.; Zampieri, S.; Panea, M.

    2016-06-01

    The ESO Science Archive Facility (SAF) contains all La Silla Paranal Observatory raw data, as well as the more recently introduced processed data created at ESO with state-of-the-art pipelines or returned by the astronomical community. The SAF has been established for over 20 years and its current holding exceeds 700 terabytes. An overview of the content of the SAF and the preservation of its content is provided. The latest development to ensure the preservation of the SAF data, the provision of an independent backup copy of the whole SAF at the Max Planck Computing and Data Facility in Garching, is described.

  17. Net-Faim: distributed computation of aerial images

    NASA Astrophysics Data System (ADS)

    Hollerbach, Uwe

    1998-06-01

    Simulation of aerial images is an important part of modern microchip manufacturing, but computation of the image of an entire mask is a challenging problem requiring a large amount of memory and CPU time. Fortunately, it is possible to decompose the large problem of computing the full image into many smaller, mostly independent, sub-problems. In this paper, one particular decomposition is described and implemented. The target platform is a heterogeneous group of networked workstations. The program, net-faim, was designed to be robust, to scale well with available resources, and to place modest demands on participating workstations. All of these design criteria have been realized. The overall performance of the distributed computation is linearly proportional to the sum of the performances of the individual processors, up to a rather high level of parallelism. Robustness is achieved by not relying on any one server to complete a given task; instead, if an idle server is available, the task is sent out to the idle server even if it has previously been sent to another server. The task is only retired when a server returns the completed answer. This 'paranoid' method of processing tasks has the pleasant side effect of doing automatic dynamic load balancing. The results of runs with several different configurations, both of participating workstations and of sub-domain sizes, are displayed.
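
    As an illustration of the 'paranoid' dispatch rule described above (duplicating a task to an idle server and retiring it on the first returned answer), the sketch below uses Python's concurrent.futures to submit the same tile computation twice and keep whichever copy finishes first; the tile function is a stand-in, not net-faim's aerial-image computation.

```python
"""Sketch of 'paranoid' duplicate dispatch: the first completed copy retires the task."""
import concurrent.futures as cf
import random
import time

def compute_tile(tile_id):
    """Stand-in for an aerial-image sub-domain computation."""
    time.sleep(random.uniform(0.1, 0.5))      # simulate fast and slow servers
    return tile_id, f"image data for tile {tile_id}"

def run_tile_redundantly(pool, tile_id, copies=2):
    """Submit the same tile to several workers; keep the first result returned."""
    futures = [pool.submit(compute_tile, tile_id) for _ in range(copies)]
    done, not_done = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
    for f in not_done:
        f.cancel()                             # duplicates are abandoned, not needed
    return next(iter(done)).result()

if __name__ == "__main__":
    with cf.ThreadPoolExecutor(max_workers=8) as pool:
        results = [run_tile_redundantly(pool, t) for t in range(4)]
    print(results)
```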

  18. GAiN: Distributed Array Computation with Python

    SciTech Connect

    Daily, Jeffrey A.

    2009-05-01

    Scientific computing makes use of very large, multidimensional numerical arrays - typically, gigabytes to terabytes in size - much larger than can fit on even the largest single compute node. Such arrays must be distributed across a "cluster" of nodes. Global Arrays is a cluster-based software system from Battelle Pacific Northwest National Laboratory that enables an efficient, portable, and parallel shared-memory programming interface to manipulate these arrays. Written in and for the C and FORTRAN programming languages, it takes advantage of high-performance cluster interconnections to allow any node in the cluster to access data on any other node very rapidly. The "numpy" module is the de facto standard for numerical calculation in the Python programming language, a language whose use is growing rapidly in the scientific and engineering communities. numpy provides a powerful N-dimensional array class as well as other scientific computing capabilities. However, like the majority of the core Python modules, numpy is inherently serial. Our system, GAiN (Global Arrays in NumPy), is a parallel extension to Python that accesses Global Arrays through numpy. This allows parallel processing and/or larger problem sizes to be harnessed almost transparently within new or existing numpy programs.

  19. Beyond input-output computings: error-driven emergence with parallel non-distributed slime mold computer.

    PubMed

    Aono, Masashi; Gunji, Yukio-Pegio

    2003-10-01

    The emergence derived from errors is of key importance for both novel computing and novel usage of the computer. In this paper, we propose an implementable experimental plan for biological computing so as to elicit the emergent properties of complex systems. An individual plasmodium of the true slime mold Physarum polycephalum acts as the slime mold computer. Modifying the Elementary Cellular Automaton so that it entails the global synchronization problem of parallel computing provides the NP-complete problem to be solved by the slime mold computer. The possibility of solving the problem without giving either all possible results or an explicit prescription for solution-seeking is discussed. In slime mold computing, the distributivity of the local computing logic can change dynamically, and its parallel non-distributed computing cannot be reduced to the spatial addition of multiple serial computations. The computing system, based on the exhaustive absence of a super-system, may produce something more than filling the vacancy.

  20. Increased number of Judo therapy facilities in Japan and changes in their geographical distribution

    PubMed Central

    2011-01-01

    Background Judo therapy is a well-established Japanese co-medical profession specializing in outpatient manual treatment of fractures and sprains. Recently, the number of judo therapists has been rapidly increasing as a result of the proliferation of judo therapy academies. This study examines whether such rapid increases have improved the geographical distribution of judo therapy facilities in Japan. Methods The number of judo therapy facilities and the population in each municipality were obtained from the Web yellow pages and from Japanese census data for 2004, 2006, and 2008, respectively. Lorenz curves and Gini indices were calculated to demonstrate distributions of judo therapy facilities per 100,000 people. A bootstrap method was used to identify the statistical significance of differences in Gini indices. Results In all municipalities, the mean numbers of judo therapy facilities per 100,000 people were 15.3 in 2004, 15.8 in 2006, and 17.6 in 2008. The Gini indices for judo therapy facilities nationally were 0.273 in 2004, 0.264 in 2006, and 0.264 in 2008. The numbers of judo therapy facilities increased significantly between 2006 and 2008 (p < 0.05) but the indices did not change significantly in the same period. The Gini indices for local towns and villages remained unchanged and were consistently higher (p < 0.05) than those in urban areas throughout the study periods. Conclusion Our results suggest that recent increases in the number of judo therapy facilities have not necessarily led to greater equality in their geographic distribution in terms of Gini indices. PMID:21352604
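
    For readers unfamiliar with the metric, the sketch below computes a Gini index from per-municipality facility rates in a standard way (mean absolute difference divided by twice the mean); the sample numbers are made up and are not the study's data.

```python
"""Gini index of facility rates (illustrative data, not the study's)."""

def gini(rates):
    """Gini index = mean absolute pairwise difference / (2 * mean)."""
    n = len(rates)
    mean = sum(rates) / n
    mad = sum(abs(a - b) for a in rates for b in rates) / (n * n)
    return mad / (2 * mean)

# Hypothetical judo-therapy facilities per 100,000 people in five municipalities.
rates = [5.0, 12.0, 15.0, 20.0, 38.0]
print(f"Gini index = {gini(rates):.3f}")
# 0.0 would mean a perfectly even distribution; values near 1 indicate concentration.
```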

  1. Academic Research Equipment in Computer Science, Central Computer Facilities and Engineering: 1989.

    ERIC Educational Resources Information Center

    Westat, Inc., Rockville, MD.

    This monograph is one in a series of analytical reports presenting findings from the National Science Foundation's 1989-90 National Survey of Academic Research Instruments and Instrumentation Needs. It describes recent national trends in academic research equipment and equipment needs in the fields of computer science and engineering. It also…

  2. A facile synthesis of Te nanoparticles with binary size distribution by green chemistry.

    PubMed

    He, Weidong; Krejci, Alex; Lin, Junhao; Osmulski, Max E; Dickerson, James H

    2011-04-01

    Our work reports a facile route to colloidal Te nanocrystals with binary uniform size distributions at room temperature. The binary-sized Te nanocrystals were well separated into two size regimes and assembled into films by electrophoretic deposition. The research provides a new platform for nanomaterials to be efficiently synthesized and manipulated. PMID:21431193

  3. Spatio-temporal distribution of stored-product insects around food processing and storage facilities

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Grain storage and processing facilities consist of a landscape of indoor and outdoor habitats that can potentially support stored-product insect pests, and understanding patterns of species diversity and spatial distribution in the landscape surrounding structures can provide insight into how the ou...

  5. Distributed computations in a dynamic, heterogeneous Grid environment

    NASA Astrophysics Data System (ADS)

    Dramlitsch, Thomas

    2003-06-01

    In order to face the rapidly increasing need for computational resources of various scientific and engineering applications one has to think of new ways to make more efficient use of the world's current computational resources. In this respect, the growing speed of wide area networks made a new kind of distributed computing possible: Metacomputing or (distributed) Grid computing. This is a rather new and uncharted field in computational science. The rapidly increasing speed of networks even outperforms the average increase of processor speed: processor speeds double on average every 18 months whereas network bandwidths double every 9 months. Due to this development of local and wide area networks Grid computing will certainly play a key role in the future of parallel computing. This type of distributed computing, however, differs from traditional parallel computing in many ways since it has to deal with many problems not occurring in classical parallel computing. Those problems include, for example, heterogeneity, authentication and slow networks, to mention only a few. Some of those problems, e.g. the allocation of distributed resources along with the provision of information about these resources to the application, have already been addressed by the Globus software. Unfortunately, as far as we know, hardly any application or middle-ware software takes advantage of this information, since most parallelizing algorithms for finite differencing codes are implicitly designed for single supercomputer or cluster execution. We show that although it is possible to apply classical parallelizing algorithms in a Grid environment, in most cases the observed efficiency of the executed code is very poor. In this work we are closing this gap. In our thesis, we will - show that an execution of classical parallel codes in Grid environments is possible but very slow - analyze this situation of bad performance, nail down bottlenecks in communication, remove unnecessary overhead and

  6. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    SciTech Connect

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  7. Computational investigations of low-emission burner facilities for char gas burning in a power boiler

    NASA Astrophysics Data System (ADS)

    Roslyakov, P. V.; Morozov, I. V.; Zaychenko, M. N.; Sidorkin, V. T.

    2016-04-01

    Various variants for the structure of low-emission burner facilities, which are meant for char gas burning in an operating TP-101 boiler of the Estonia power plant, are considered. The planned increase in the volume of shale reprocessing and, correspondingly, the rise in char gas volumes make their co-combustion necessary. In this connection, there was a need to develop a burner facility with a given capacity that yields effective char gas burning while fulfilling reliability and environmental requirements. For this purpose, the burner structure was based on staged fuel burning with gas recirculation. As a result of the preliminary analysis of possible structure variants, three types of previously proven burner facilities were chosen: a vortex burner with the supply of recirculation gases into the secondary air, a vortex burner with a baffle supply of recirculation gases between the flows of primary and secondary air, and a burner facility with a vortex pilot burner. Optimum structural characteristics and operating parameters were determined using numerical experiments. These experiments, performed with the ANSYS CFX computational fluid dynamics software, simulated the mixing, ignition, and burning of char gas. For every type of burner facility, the numerical experiments determined the structural and operating parameters that gave effective char gas burning and met the required environmental standard on nitrogen oxide emissions. The burner facility for char gas burning with a pilot diffusion burner in the central part was developed and designed in accordance with the computation results. Preliminary full-scale verification tests on the TP-101 boiler showed that the actual content of nitrogen oxides in the char gas burner flames did not exceed the specified concentration of 150 ppm (200 mg/m3).

  8. Opportunities for artificial intelligence application in computer- aided management of mixed waste incinerator facilities

    SciTech Connect

    Rivera, A.L.; Ferrada, J.J.; Singh, S.P.N.

    1992-05-01

    The Department of Energy/Oak Ridge Field Office (DOE/OR) operates a mixed waste incinerator facility at the Oak Ridge K-25 Site. It is designed for the thermal treatment of incinerable liquid, sludge, and solid waste regulated under the Toxic Substances Control Act (TSCA) and the Resource Conservation and Recovery Act (RCRA). This facility, known as the TSCA Incinerator, services seven DOE/OR installations. This incinerator was recently authorized for production operation in the United States for the processing of mixed (radioactively contaminated-chemically hazardous) wastes as regulated under TSCA and RCRA. Operation of the TSCA Incinerator is highly constrained as a result of the regulatory, institutional, technical, and resource availability requirements. These requirements impact the characteristics and disposition of incinerator residues, limit the quality of liquid and gaseous effluents, limit the characteristics and rates of waste feeds and operating conditions, and restrict the handling of the waste feed inventories. This incinerator facility presents an opportunity for applying computer technology as a technical resource for mixed waste incinerator operation to facilitate promoting and sustaining a continuous performance improvement process while demonstrating compliance. Demonstrated computer-aided management systems could be transferred to future mixed waste incinerator facilities.

  10. Power Hardware-in-the-Loop (PHIL) Testing Facility for Distributed Energy Storage (Poster)

    SciTech Connect

    Neubauer, J.; Lundstrom, B.; Simpson, M.; Pratt, A.

    2014-06-01

    The growing deployment of distributed, variable generation and evolving end-user load profiles presents a unique set of challenges to grid operators responsible for providing reliable and high quality electrical service. Mass deployment of distributed energy storage systems (DESS) has the potential to solve many of the associated integration issues while offering reliability and energy security benefits other solutions cannot. However, tools to develop, optimize, and validate DESS control strategies and hardware are in short supply. To fill this gap, NREL has constructed a power hardware-in-the-loop (PHIL) test facility that connects DESS, grid simulator, and load bank hardware to a distribution feeder simulation.

  11. 120. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    120. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "foundation & first floor plan" - structural, AS-BLT AW 35-46-04, sheet 65, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  12. 118. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    118. Back side technical facilities S.R. radar transmitter & computer building no. 102, "building sections - sheet I" - architectural, AS-BLT AW 35-46-04, sheet 13, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  13. 122. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    122. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "elevations & details" - structural, AS-BLT AW 35-46-04, sheet 73, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  14. 121. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    121. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "sections & elevations" - structural, AS-BLT AW 35-46-04, sheet 72, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  15. 117. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    117. Back side technical facilities S.R. radar transmitter & computer building no. 102, "building sections - sheet I" - architectural, AS-BLT AW 35-46-04, sheet 12, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  16. 119. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    119. Back side technical facilities S.R. radar transmitter & computer building no. 102, section I "tower plan, sections & details" - structural, AS-BLT AW 35-46-04, sheet 62, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  17. National Ignition Facility computational fluid dynamics modeling and light fixture case studies

    SciTech Connect

    Martin, R.; Bernardin, J.; Parietti, L.; Dennison, B.

    1998-02-01

    This report serves as a guide to the use of computational fluid dynamics (CFD) as a design tool for the National Ignition Facility (NIF) program Title I and Title II design phases at Lawrence Livermore National Laboratory. In particular, this report provides general guidelines on the technical approach to performing and interpreting any and all CFD calculations. In addition, a complete CFD analysis is presented to illustrate these guidelines on a NIF-related thermal problem.

  18. Fermilab Central Computing Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

    SciTech Connect

    Krstulovich, S.F.

    1986-11-12

    This report is developed as part of the Fermilab Central Computing Facility Project Title II Design Documentation Update under the provisions of DOE Document 6430.1, Chapter XIII-21, Section 14, paragraph a. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis and should be considered as a supplement to the Title I Design Report dated March 1986, wherein energy related issues are discussed pertaining to building envelope and orientation as well as electrical systems design.

  19. Enhancing the Reliability of Spectral Correlation Function with Distributed Computing

    NASA Astrophysics Data System (ADS)

    Alfaqawi, M. I.; Chebil, J.; Habaebi, M. H.; Ramli, N.; Mohamad, H.

    2013-12-01

    Various random time series used in signal processing systems are cyclostationary due to sinusoidal carriers, pulse trains, periodic motion, or other physical phenomena. The cyclostationarity of a signal can be analysed using the spectral correlation function (SCF). However, the SCF is computationally expensive due to its two-dimensional nature and the long observation time required. The SCF can be computed by various methods; two are used in practice: the FFT accumulation method (FAM) and the strip spectral correlation algorithm (SSCA). This paper shows the benefit, in terms of complexity and reliability, of distributing the workload of one processor over several cooperating processors. It is found that, as the reliability of the SCF increases, the number of cooperating processors required to achieve half of the maximum complexity is reduced.
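
    The FAM and SSCA implementations are not reproduced here; the sketch below only illustrates the distribution idea the abstract describes, splitting the set of cyclic frequencies across a pool of worker processes, each evaluating a simple (naive, non-FAM) cyclic-autocorrelation-based SCF slice on a toy signal.

```python
"""Distributing SCF slices over worker processes (naive estimator, not FAM/SSCA)."""
import numpy as np
from multiprocessing import Pool

def scf_slice(args):
    """Naive SCF estimate for one cyclic frequency alpha (cycles/sample)."""
    x, alpha = args
    n = np.arange(len(x))
    # Cyclic autocorrelation: correlate x with a frequency-shifted copy of itself.
    shifted = x * np.exp(-2j * np.pi * alpha * n)
    r = np.correlate(shifted, x, mode="full") / len(x)
    return alpha, np.fft.fft(r)              # SCF slice = FFT over the lag variable

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fc = 0.1                                 # normalized carrier frequency of the toy signal
    t = np.arange(4096)
    x = np.cos(2 * np.pi * fc * t) * rng.standard_normal(4096)   # cyclostationary toy signal
    alphas = np.linspace(0.0, 0.5, 64)       # cyclic frequencies to evaluate
    with Pool(processes=4) as pool:          # workload split across cooperating processes
        slices = dict(pool.map(scf_slice, [(x, a) for a in alphas]))
    print(len(slices), "SCF slices computed")
```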

  20. Distributed and multi-core computation of 2-loop integrals

    NASA Astrophysics Data System (ADS)

    de Doncker, E.; Yuasa, F.

    2014-06-01

    For an automatic computation of Feynman loop integrals in the physical region we rely on an extrapolation technique where the integrals of the sequence are obtained with iterated/repeated adaptive methods from the QUADPACK 1D quadrature package. The integration rule evaluations in the outer level, corresponding to independent inner integral approximations, are assigned to threads dynamically via the OpenMP runtime in the parallel implementation. Furthermore, multi-level (nested) parallelism enables an efficient utilization of hyperthreading or larger numbers of cores. For a class of loop integrals in the unphysical region, which do not suffer from singularities in the interior of the integration domain, we find that the distributed adaptive integration methods in the multivariate PARINT package are highly efficient and accurate. We apply these techniques without resorting to integral transformations and report on the capabilities of the algorithms and the parallel performance for a test set including various types of two-loop integrals.
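
    The extrapolation machinery and the actual loop integrands are beyond a short example; the sketch below only illustrates the parallel structure described above, assigning the independent inner integrals (one per outer integration point) to a pool of processes, with SciPy's adaptive 1-D quadrature standing in for the QUADPACK routines and a toy integrand in place of the Feynman-parameter integrands.

```python
"""Iterated 2-D quadrature with the inner integrals evaluated in parallel (toy problem)."""
import numpy as np
from multiprocessing import Pool
from scipy.integrate import quad

def integrand(y, x):
    """Toy smooth integrand on the unit square."""
    return 1.0 / (1.0 + x * y)

def inner_integral(x):
    """Inner adaptive integral over y for one outer evaluation point x."""
    value, _err = quad(integrand, 0.0, 1.0, args=(x,))
    return value

if __name__ == "__main__":
    # Outer rule: simple composite trapezoid over fixed points (illustrative only).
    xs = np.linspace(0.0, 1.0, 65)
    with Pool(processes=4) as pool:
        inner_values = pool.map(inner_integral, xs)   # independent inner integrals
    outer = np.trapz(inner_values, xs)
    print(f"estimate = {outer:.6f}")   # exact value is pi^2/12 ~ 0.822467
```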

  1. Evaluation of Secure Computation in a Distributed Healthcare Setting.

    PubMed

    Kimura, Eizen; Hamada, Koki; Kikuchi, Ryo; Chida, Koji; Okamoto, Kazuya; Manabe, Shirou; Kuroda, Tomohiko; Matsumura, Yasushi; Takeda, Toshihiro; Mihara, Naoki

    2016-01-01

    Issues related to ensuring patient privacy and data ownership in clinical repositories prevent the growth of translational research. Previous studies have used an aggregator agent to obscure clinical repositories from the data user, and to ensure the privacy of output using statistical disclosure control. However, there remain several issues that must be considered. One such issue is that a data breach may occur when multiple nodes conspire. Another is that the agent may eavesdrop on or leak a user's queries and their results. We have implemented a secure computing method so that the data used by each party can be kept confidential even if all of the other parties conspire to crack the data. We deployed our implementation at three geographically distributed nodes connected to a high-speed layer two network. The performance of our method, with respect to processing times, suggests suitability for practical use. PMID:27577361

  2. Research into display sharing techniques for distributed computing environments

    NASA Technical Reports Server (NTRS)

    Hugg, Steven B.; Fitzgerald, Paul F., Jr.; Rosson, Nina Y.; Johns, Stephen R.

    1990-01-01

    The X-based Display Sharing solution for distributed computing environments is described. The Display Sharing prototype includes the base functionality for telecast and display copy requirements. Since the prototype implementation is modular and the system design provided flexibility for the Mission Control Center Upgrade (MCCU) operational consideration, the prototype implementation can be the baseline for a production Display Sharing implementation. To facilitate the process, the following discussions are presented: theory of operation; system architecture; using the prototype; software description; research tools; prototype evaluation; and outstanding issues. The prototype is based on the concept of a dedicated central host performing the majority of the Display Sharing processing, allowing minimal impact on each individual workstation. Each workstation participating in Display Sharing hosts programs to facilitate the user's access to Display Sharing through the central host machine.

  4. Toward unification of taxonomy databases in a distributed computer environment

    SciTech Connect

    Kitakami, Hajime; Tateno, Yoshio; Gojobori, Takashi

    1994-12-31

    All the taxonomy databases constructed with the DNA databases of the international DNA data banks are powerful electronic dictionaries which aid in biological research by computer. The taxonomy databases are, however, not consistently unified in a relational format. If we can achieve consistent unification of the taxonomy databases, it will be useful in comparing many research results and in investigating future research directions from existing research results. In particular, it will be useful in comparing relationships between phylogenetic trees inferred from molecular data and those constructed from morphological data. The goal of the present study is to unify the existing taxonomy databases and eliminate inconsistencies (errors) that are present in them. Inconsistencies occur particularly in the restructuring of the existing taxonomy databases, since classification rules for constructing the taxonomy have rapidly changed with biological advancements. A repair system is needed to remove inconsistencies in each data bank and mismatches among data banks. This paper describes a new methodology for removing both inconsistencies and mismatches from the databases in a distributed computer environment. The methodology is implemented in a relational database management system, SYBASE.

  5. Distributed computer system enhances productivity for SRB joint optimization

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.

    1987-01-01

    Initial calculations for a redesign of the solid rocket booster joint that failed during the shuttle tragedy showed that the design carried a weight penalty. Optimization techniques were to be applied to determine whether there was any way to reduce the weight while keeping the joint opening closed and limiting the stresses. To allow engineers to examine as many alternatives as possible, a system was developed from existing software that coupled structural analysis with optimization and executed on a network of computer workstations. To improve turnaround, this system took advantage of the parallelism offered by the finite difference technique of computing gradients, allowing several workstations to contribute to the solution of the problem simultaneously. The resulting system reduced the time to complete one optimization cycle from two hours to one-half hour, with the potential of reducing it to 15 minutes. The current distributed system, which contains numerous extensions, requires one hour of turnaround per optimization cycle; the sequential system would take four hours.
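
    The speedup described above comes from the fact that finite-difference gradient components are mutually independent, so each perturbation can be evaluated on a different workstation. The following sketch shows that pattern with a process pool standing in for the workstation network; the objective function is a placeholder, not the SRB joint model.

```python
# A minimal sketch of the parallelism described above: forward-difference
# gradient components are independent, so each can be evaluated on a
# different worker (workstation). The objective below is a stand-in, not the
# SRB joint model.
from concurrent.futures import ProcessPoolExecutor
import math


def objective(x):
    # Placeholder structural response; the real system would call a
    # finite-element analysis here.
    return sum(xi**2 for xi in x) + math.sin(x[0])


def perturbed(x, i, h):
    y = list(x)
    y[i] += h
    return objective(y)


def fd_gradient(x, h=1e-6):
    f0 = objective(x)
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(perturbed, x, i, h) for i in range(len(x))]
        return [(fut.result() - f0) / h for fut in futures]


if __name__ == "__main__":
    print(fd_gradient([1.0, 2.0, 3.0]))
```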

  6. Dynamic resource allocation scheme for distributed heterogeneous computer systems

    NASA Technical Reports Server (NTRS)

    Liu, Howard T. (Inventor); Silvester, John A. (Inventor)

    1991-01-01

    This invention relates to resource allocation in computer systems and, more particularly, to a method and associated apparatus for shortening response time and improving the efficiency of a heterogeneous distributed networked computer system by reallocating jobs queued up for busy nodes to idle or less-busy nodes. In accordance with the algorithm (SIDA for short), load-sharing is initiated by the server device in a manner such that extra overhead is not imposed on the system during heavily-loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., above a high threshold limit) to lightly burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node which is burdened below a pre-established threshold level, or (2) a node has been idle for a period of time as established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.
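
    The core of the scheme is a threshold test made at the receiving node: pull a job only when the receiver is lightly loaded and the sender is above its high-water mark. A minimal sketch of that decision logic follows; the workload formula and threshold values are assumptions for illustration, not the patented algorithm's exact parameters.

```python
# A minimal sketch of the dual-mode, receiver-side decision described in the
# abstract: an idle or lightly loaded node asks a heavily loaded node for
# work. The workload indicator and thresholds are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Node:
    name: str
    queue_length: int
    service_rate: float  # jobs per second


def workload(node: Node) -> float:
    # Combination of local queue length and local service rate, as in the
    # abstract; the exact formula here is an assumption.
    return node.queue_length / node.service_rate


def should_pull_job(receiver: Node, sender: Node,
                    low_threshold: float = 1.0, high_threshold: float = 4.0) -> bool:
    """Receiver-initiated transfer: pull only when the receiver is below the
    low-water mark and the sender is above the high-water mark."""
    return workload(receiver) < low_threshold and workload(sender) > high_threshold


idle = Node("ws-idle", queue_length=0, service_rate=2.0)
busy = Node("ws-busy", queue_length=12, service_rate=2.0)
print(should_pull_job(idle, busy))  # True: a job would be transferred
```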

  7. Execution of the SimSET Monte Carlo PET/SPECT simulator in the condor distributed computing environment.

    PubMed

    Baum, Karl G; Helguera, María

    2007-11-01

    SimSET is a package for simulation of emission tomography data sets. Condor is a popular distributed computing environment. Simple C/C++ applications and shell scripts are presented which allow the execution of SimSET on the Condor environment. This is accomplished without any modification to SimSET by executing multiple instances and using its combinebin utility. This enables research facilities without dedicated parallel computing systems to utilize the idle cycles of desktop workstations to greatly reduce the run times of their SimSET simulations. The necessary steps to implement this approach in other environments are presented along with sample results.

  8. Distribution feeder loss computation by artificial neural network

    SciTech Connect

    Kau, S.W.; Cho, M.Y.

    1995-12-31

    This paper proposes an artificial neural network (ANN) based feeder loss calculation model for distribution system analysis. The functional-link network model is examined to form the artificial neural network architecture and to derive loss calculation models for feeders with different configurations. Such an artificial neural network is a feedforward network that uses the standard back-propagation algorithm to adjust the weights on the connection paths between processing elements (PEs). Feeder daily load curves for each season are derived from field test data. A three-phase load flow program is executed to create the training sets with exact loss calculation results. A sensitivity analysis is performed to identify the key factors (power factor, feeder loading, primary conductors, secondary conductors, and transformer capacity) used as the input-layer variables. Using the pattern recognition ability of the artificial neural network, this study develops seasonal and yearly loss calculation models for overhead and underground feeder configurations. Two practical feeders with both overhead and underground configurations in the Taiwan Power Company (TPC or Taipower) distribution system are selected for computer simulation to demonstrate the effectiveness and accuracy of the proposed models. Compared with models derived by the conventional regression technique, the results indicate that the proposed models provide a more efficient tool for district engineers performing feeder loss calculations.
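
    A functional-link network of the kind described expands the raw inputs with nonlinear terms and then trains a single weight layer. The sketch below shows that structure on synthetic data; the feature expansion, learning rate, and data are illustrative assumptions and do not reproduce the Taipower feeder models.

```python
# A minimal sketch of a functional-link style network for regressing feeder
# loss: the inputs are expanded with simple nonlinear terms and a single
# weight layer is fitted by gradient descent. Feature choices, learning rate,
# and the synthetic data are illustrative, not the TPC models.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training set: columns are power factor and per-unit loading;
# the target is a made-up "loss" value used only to exercise the code.
X = rng.uniform([0.8, 0.2], [1.0, 1.0], size=(200, 2))
y = 5.0 * X[:, 1] ** 2 / X[:, 0] + rng.normal(0, 0.05, 200)


def expand(X):
    pf, load = X[:, 0], X[:, 1]
    # Functional-link expansion: original inputs plus nonlinear enhancement terms.
    return np.column_stack([np.ones(len(X)), pf, load, load**2, pf * load,
                            np.sin(np.pi * load)])


Phi = expand(X)
w = np.zeros(Phi.shape[1])
lr = 0.05
for _ in range(5000):                      # delta-rule / gradient-descent updates
    err = Phi @ w - y
    w -= lr * Phi.T @ err / len(y)

print("training RMS error:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```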

  9. COMPUTER MODEL OF TEMPERATURE DISTRIBUTION IN OPTICALLY PUMPED LASER RODS

    NASA Technical Reports Server (NTRS)

    Farrukh, U. O.

    1994-01-01

    Managing the thermal energy that accumulates within a solid-state laser material under active pumping is of critical importance in the design of laser systems. Earlier models that calculated the temperature distribution in laser rods were single dimensional and assumed laser rods of infinite length. This program presents a new model which solves the temperature distribution problem for finite dimensional laser rods and calculates both the radial and axial components of temperature distribution in these rods. The modeled rod is either side-pumped or end-pumped by a continuous or a single pulse pump beam. (At the present time, the model cannot handle a multiple pulsed pump source.) The optical axis is assumed to be along the axis of the rod. The program also assumes that it is possible to cool different surfaces of the rod at different rates. The user defines the laser rod material characteristics, determines the types of cooling and pumping to be modeled, and selects the time frame desired via the input file. The program contains several self checking schemes to prevent overwriting memory blocks and to provide simple tracing of information in case of trouble. Output for the program consists of 1) an echo of the input file, 2) diffusion properties, radius and length, and time for each data block, 3) the radial increments from the center of the laser rod to the outer edge of the laser rod, and 4) the axial increments from the front of the laser rod to the other end of the rod. This program was written in Microsoft FORTRAN77 and implemented on a Tandon AT with a 287 math coprocessor. The program can also run on a VAX 750 mini-computer. It has a memory requirement of about 147 KB and was developed in 1989.
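
    The program described above solves a radial-and-axial heat conduction problem with pumped heating and surface cooling. As a rough illustration of the underlying numerics, the sketch below runs an explicit finite-difference diffusion update on a small 2-D grid; it is a simplified Cartesian-style analogue with invented parameters, not the FORTRAN77 model's cylindrical formulation.

```python
# A minimal sketch of explicit finite-difference heat diffusion on a 2-D
# (radial x axial) grid with a heated interior and cooled edges. This is a
# simplified Cartesian-like analogue for illustration only; the program
# described above uses a cylindrical formulation.
import numpy as np

nr, nz = 20, 40          # grid points across the rod radius and length
alpha = 1e-5             # thermal diffusivity, m^2/s (illustrative value)
dx, dt = 1e-3, 1e-2      # grid spacing (m) and time step (s)
assert alpha * dt / dx**2 <= 0.25  # explicit-scheme stability condition

T = np.full((nr, nz), 300.0)             # initial temperature, K
source = np.zeros_like(T)
source[:, nz // 3: 2 * nz // 3] = 50.0   # pump deposition region, K/s (illustrative)

for _ in range(2000):
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
    T = T + dt * (alpha * lap + source)
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 300.0   # cooled surfaces held at 300 K

print("peak temperature after 20 s:", T.max())
```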

  10. Fast layout processing methodologies for scalable distributed computing applications

    NASA Astrophysics Data System (ADS)

    Kang, Chang-woo; Shin, Jae-pil; Durvasula, Bhardwaj; Seo, Sang-won; Jung, Dae-hyun; Lee, Jong-bae; Park, Young-kwan

    2012-06-01

    As the feature size shrinks to sub-20 nm, more advanced OPC technologies such as ILT and the new lithographic resolution offered by EUV become the key solutions for device fabrication. These technologies lead to file size explosion, with GDSII and OASIS files of up to hundreds of gigabytes, mainly due to the addition of complicated scattering bars and flattening of the design to compensate for long-range effects. Splitting and merging layout files have traditionally been done sequentially in distributed computing layout applications. This portion becomes the bottleneck, causing poor scalability. According to Amdahl's law, minimizing the sequential portion is the key to obtaining the maximum speedup. In this paper, we present scalable layout dividing and merging methodologies: skeleton-file-based querying and direct OASIS file merging. These methods not only use a minimal memory footprint but also achieve remarkable speed improvement. The skeleton file concept is novel for a distributed application requiring geometrical processing, as it allows almost pseudo-random access into the input GDSII or OASIS file. Client machines can make use of this random access and perform fast query operations. The skeleton concept also works very well for flat input layouts, which is often the case for post-OPC data. Our OASIS file merging scheme is equivalent to a binary file concatenation scheme: it concatenates shape information in binary format, with only basic interpretation of the bits, at very low memory usage. We have observed that the skeleton file concept achieved a 13.5 times speed improvement and used only 3.78% of the memory on the master, compared with the conventional approach of converting into an internal format. The merging speed is also very fast, at 28 MB/s, which is 44.5 times faster than the conventional method. On top of the fast merging speed, the approach is very scalable since the merging time grows in a linear fashion
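
    The merging scheme is described as essentially a binary concatenation with minimal interpretation of the bits. The sketch below shows the streaming-concatenation idea in Python with a small, fixed-size buffer; real OASIS merging also has to reconcile headers, name tables, and the END record, which is deliberately left out here, and the file names are hypothetical.

```python
# A minimal sketch of the "binary concatenation" flavor of merging: payload
# records from several partial result files are streamed into one output with
# very little memory. Real OASIS merging must also reconcile headers, tables,
# and the END record, which is deliberately omitted in this illustration.
from pathlib import Path


def merge_binary(parts: list[Path], output: Path, chunk_size: int = 1 << 20) -> None:
    with output.open("wb") as out:
        for part in parts:
            with part.open("rb") as src:
                while chunk := src.read(chunk_size):   # stream, never load whole files
                    out.write(chunk)


# Usage (hypothetical file names):
# merge_binary([Path("tile_0.bin"), Path("tile_1.bin")], Path("merged.bin"))
```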

  11. Computational Analyses in Support of Sub-scale Diffuser Testing for the A-3 Facility. Part 1; Steady Predictions

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel C.; Graham, Jason S.; Ahuja, Vineet; Hosangadi, Ashvin

    2008-01-01

    levels in CFD based flowpath modeling of the facility. The analysis tools used here expand on the multi-element unstructured CFD which has been tailored and validated for impingement dynamics of dry plumes, complex valve/feed systems, and high pressure propellant delivery systems used in engine and component test stands at NASA SSC. The analyses performed in the evaluation of the sub-scale diffuser facility explored several important factors that influence modeling and understanding of facility operation, such as (a) the importance of modeling the facility with a real-gas approximation, (b) the approximation of the cluster of steam ejector nozzles as a single annular nozzle, (c) the existence of mixed subsonic/supersonic flow downstream of the turning duct, and (d) the inadequacy of two-equation turbulence models in predicting the correct pressurization in the turning duct and the expansion of the second-stage steam ejectors. The procedure used for modeling the facility was as follows: (i) the engine, test cell, and first-stage ejectors were simulated with an axisymmetric approximation; (ii) the turning duct, second-stage ejectors, and the piping downstream of the second-stage ejectors were analyzed with a three-dimensional simulation utilizing a half-plane symmetry approximation. The solution, i.e., primitive variables such as pressure, velocity components, temperature, and turbulence quantities, was passed from the first computational domain and specified as a supersonic boundary condition for the second simulation. (iii) The third domain comprised the exit diffuser and the region in the vicinity of the facility (primarily included to capture the correct shock structure at the facility exit and the entrainment characteristics). The first set of simulations, comprising the engine, test cell, and first-stage ejectors, was carried out both as a turbulent real-gas calculation and as a turbulent perfect-gas calculation. A comparison for the two cases (real gas turbulent and perfect gas turbulent) of the Ma

  12. An environmental testing facility for Space Station Freedom power management and distribution hardware

    NASA Technical Reports Server (NTRS)

    Jackola, Arthur S.; Hartjen, Gary L.

    1992-01-01

    The plans for a new test facility, including new environmental test systems presently under construction, and the major environmental Test Support Equipment (TSE) used therein are addressed. This all-new Rocketdyne facility will perform space simulation environmental tests on Power Management and Distribution (PMAD) hardware for Space Station Freedom (SSF) at the Engineering Model, Qualification Model, and Flight Model levels of fidelity. Testing will include Random Vibration in three axes, Thermal Vacuum, Thermal Cycling, and Thermal Burn-in, as well as numerous electrical functional tests. The facility is designed to support a relatively high throughput of hardware under test while maintaining the high standards required for a man-rated space program.

  13. Microwave circuit analysis and design by a massively distributed computing network

    NASA Astrophysics Data System (ADS)

    Vai, Mankuan; Prasad, Sheila

    1995-05-01

    The advances in microelectronic engineering have rendered massively distributed computing networks practical and affordable. This paper describes one application of this distributed computing paradigm to the analysis and design of microwave circuits. A distributed computing network, constructed in the form of a neural network, is developed to automate the operations typically performed on a normalized Smith chart. Examples showing the use of this computing network for impedance matching and stabilizing are provided.
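
    The operations such a network automates are the normalized-impedance manipulations usually read off a Smith chart. As a reference point, the short sketch below computes the chart's basic mapping from a load impedance to its reflection coefficient; the impedance values are illustrative.

```python
# A minimal sketch of the Smith-chart arithmetic that such a network
# automates: a load impedance is normalized and mapped to its reflection
# coefficient, the quantity plotted on the chart. Values are illustrative.
def reflection_coefficient(z_load: complex, z0: float = 50.0) -> complex:
    z = z_load / z0                    # normalized impedance
    return (z - 1) / (z + 1)           # bilinear map onto the Smith chart


gamma = reflection_coefficient(25 + 25j)
print(abs(gamma))                      # |Gamma| < 1 for a passive load
```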

  14. Distributed computing feasibility in a non-dedicated homogeneous distributed system

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Sun, Xian-He

    1993-01-01

    The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, it often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, the task ratio, which relates the parallel task demand to the mean service demand of non-parallel workstation processes, is introduced. It is proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.

  15. EBR-II Cover Gas Cleanup System upgrade distributed control and front end computer systems

    SciTech Connect

    Carlson, R.B.

    1992-05-01

    The Experimental Breeder Reactor II (EBR-II) Cover Gas Cleanup System (CGCS) control system was upgraded in 1991 to improve control and provide a graphical operator interface. The upgrade consisted of a main control computer, a distributed control computer, a front end input/output computer, a main graphics interface terminal, and a remote graphics interface terminal. This paper briefly describes the Cover Gas Cleanup System and the overall control system; gives reasons behind the computer system structure; and then gives a detailed description of the distributed control computer, the front end computer, and how these computers interact with the main control computer. The descriptions cover both hardware and software.

  17. Advances in the archiving and distribution facilities at the Space Telescope Science Institute

    NASA Astrophysics Data System (ADS)

    Hanisch, Robert J.; Postman, Marc; Pollizzi, Joseph; Richon, J.

    1998-07-01

    The Hubble Data Archive at the Space Telescope Science Institute contains over 4.3 TB of data, primarily from the Hubble Space Telescope but also from complementary space-based and ground-based facilities. We are in the process of upgrading and generalizing many of the HDA's component systems, developing tools to provide more integrated access to the HDA holdings, and working with other major data-providing organizations to implement global data location services for astronomy and other space science disciplines. This paper describes the key elements of our archiving and data distribution systems, including a planned transition to DVD media, data compression, data segregation, on-the-fly calibration, an engineering data warehouse, and distributed search and retrieval facilities.

  18. A facile synthesis of Te nanoparticles with binary size distribution by green chemistry

    NASA Astrophysics Data System (ADS)

    He, Weidong; Krejci, Alex; Lin, Junhao; Osmulski, Max E.; Dickerson, James H.

    2011-04-01

    Our work reports a facile route to colloidal Te nanocrystals with binary uniform size distributions at room temperature. The binary-sized Te nanocrystals were well separated into two size regimes and assembled into films by electrophoretic deposition. The research provides a new platform for nanomaterials to be efficiently synthesized and manipulated.

  19. Maintaining Traceability in an Evolving Distributed Computing Environment

    NASA Astrophysics Data System (ADS)

    Collier, I.; Wartel, R.

    2015-12-01

    The management of risk is fundamental to the operation of any distributed computing infrastructure. Identifying the cause of incidents is essential to prevent them from re-occurring. In addition, it is a goal to contain the impact of an incident while keeping services operational. For the response to incidents to be acceptable, it needs to be commensurate with the scale of the problem. The minimum level of traceability for distributed computing infrastructure usage is to be able to identify the source of all actions (executables, file transfers, pilot jobs, portal jobs, etc.) and the individual who initiated them. In addition, sufficiently fine-grained controls, such as blocking the originating user and monitoring to detect abnormal behaviour, are necessary for keeping services operational. It is essential to be able to understand the cause of an incident and to fix any problems before re-enabling access for the user. The aim is to be able to answer the basic questions of who, what, where, and when concerning any incident. This requires retaining all relevant information, including timestamps and the digital identity of the user, sufficient to identify, for each service instance, every security event, including at least the following: connect, authenticate, authorize (including identity changes), and disconnect. In traditional grid infrastructures (WLCG, EGI, OSG, etc.) best practices and procedures for gathering and maintaining the information required to maintain traceability are well established. In particular, sites collect and store the information required to ensure traceability of events at their sites. With the increased use of virtualisation and private and public clouds for HEP workloads, established procedures, which are unable to see 'inside' running virtual machines, no longer capture all the information required. Maintaining traceability will at least involve a shift of responsibility from sites to Virtual Organisations (VOs), bringing with it new requirements for their
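
    The traceability requirement above amounts to retaining, for every security event, a timestamp, the user's digital identity, the service instance, and the event type. The sketch below shows one way such a record could be structured and serialized for an audit log; the field names are illustrative assumptions, not a WLCG/EGI/OSG schema.

```python
# A minimal sketch of the kind of traceability record the text calls for:
# every security event is stored with a timestamp, the digital identity of
# the user, the service instance, and the event type. Field names are
# illustrative, not an actual grid-infrastructure schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from enum import Enum
import json


class SecurityEvent(str, Enum):
    CONNECT = "connect"
    AUTHENTICATE = "authenticate"
    AUTHORIZE = "authorize"          # including identity changes
    DISCONNECT = "disconnect"


@dataclass(frozen=True)
class TraceRecord:
    timestamp: str
    service_instance: str
    user_identity: str               # e.g. a certificate subject DN
    event: SecurityEvent
    detail: str = ""


record = TraceRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    service_instance="ce01.example.org",
    user_identity="/DC=org/DC=example/CN=Jane Doe",
    event=SecurityEvent.AUTHORIZE,
    detail="mapped to pool account pilot042",
)
print(json.dumps(asdict(record)))    # append to the site's retained audit log
```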

  20. High-performance computing, high-speed networks, and configurable computing environments: progress toward fully distributed computing.

    PubMed

    Johnston, W E; Jacobson, V L; Loken, S C; Robertson, D W; Tierney, B L

    1992-01-01

    The next several years will see the maturing of a collection of technologies that will enable fully and transparently distributed computing environments. Networks will be used to configure independent computing, storage, and I/O elements into "virtual systems" that are optimal for solving a particular problem. This environment will make the most powerful computing systems those that are logically assembled from network-based components and will also make those systems available to a widespread audience. Anticipating that the necessary technology and communications infrastructure will be available in the next 3 to 5 years, we are developing and demonstrating prototype applications that test and exercise the currently available elements of this configurable environment. The Lawrence Berkeley Laboratory (LBL) Information and Computing Sciences and Research Medicine Divisions have collaborated with the Pittsburgh Supercomputer Center to demonstrate one distributed application that illuminates the issues and potential of using networks to configure virtual systems. This application allows the interactive visualization of large three-dimensional (3D) scalar fields (voxel data sets) by using a network-based configuration of heterogeneous supercomputers and workstations. The specific test case is visualization of 3D magnetic resonance imaging (MRI) data. The virtual system architecture consists of a Connection Machine-2 (CM-2) that performs surface reconstruction from the voxel data, a Cray Y-MP that renders the resulting geometric data into an image, and a workstation that provides the display of the image and the user interface for specifying the parameters for the geometry generation and 3D viewing. These three elements are configured into a virtual system by using several different network technologies. This paper reviews the current status of the software, hardware, and communications technologies that are needed to enable this configurable environment. These

  1. A Distributed Simulation Facility to Support Human Factors Research in Advanced Air Transportation Technology

    NASA Technical Reports Server (NTRS)

    Amonlirdviman, Keith; Farley, Todd C.; Hansman, R. John, Jr.; Ladik, John F.; Sherer, Dana Z.

    1998-01-01

    A distributed real-time simulation of the civil air traffic environment developed to support human factors research in advanced air transportation technology is presented. The distributed environment is based on a custom simulation architecture designed for simplicity and flexibility in human experiments. Standard Internet protocols are used to create the distributed environment, linking an advanced cockpit simulator, an Air Traffic Control simulator, and a pseudo-aircraft control and simulation management station. The pseudo-aircraft control station also functions as a scenario design tool for coordinating human factors experiments. This station incorporates a pseudo-pilot interface designed to reduce workload for human operators piloting multiple aircraft simultaneously in real time. The application of this distributed simulation facility to a study of the effect of shared information (via air-ground datalink) on pilot/controller shared situation awareness and re-route negotiation is also presented.

  2. Systems analysis of the space shuttle. [communication systems, computer systems, and power distribution

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.; Oh, S. J.; Thau, F.

    1975-01-01

    Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.

  3. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    SciTech Connect

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat,; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014 representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  4. Parallelizing Sylvester-like operations on a distributed memory computer

    SciTech Connect

    Hu, D.Y.; Sorensen, D.C.

    1994-12-31

    Discretization of linear operators arising in applied mathematics often leads to matrices with the following structure: $M(x) = (D \otimes A + B \otimes I_n + V)x$, where $x \in \mathbb{R}^{mn}$, $B, D \in \mathbb{R}^{n \times n}$, $A \in \mathbb{R}^{m \times m}$, and $V \in \mathbb{R}^{mn \times mn}$; both $D$ and $V$ are diagonal. For notational convenience, the authors assume that both $A$ and $B$ are symmetric. All the results in this paper can easily be extended to the case of general $A$ and $B$. The linear operator on $\mathbb{R}^{mn}$ defined above can be viewed as a generalization of the Sylvester operator $S(x) = (I_m \otimes A + B \otimes I_n)x$. The authors therefore refer to it as a Sylvester-like operator, and the schemes discussed in this paper also apply to the Sylvester operator. In this paper, the authors present a SIMD scheme for parallelization of the Sylvester-like operator on a distributed memory computer. This scheme is designed to approach the best possible efficiency by avoiding unnecessary communication among processors.
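
    Operators with this Kronecker structure can be applied without ever assembling the full $mn \times mn$ matrix, which is what makes distributed-memory parallelization attractive. The sketch below uses the standard identity $(B \otimes A)\,\mathrm{vec}(X) = \mathrm{vec}(A X B^{\mathsf{T}})$ on a small example with illustrative, self-consistent dimensions; it demonstrates matrix-free application only and is not the authors' SIMD scheme.

```python
import numpy as np

# A minimal sketch (not the authors' scheme): applying a Kronecker-structured
# operator y = (B kron A + diag(v)) x without forming the full mn x mn
# matrix, using (B kron A) vec(X) = vec(A X B^T) with column-major vec.
def kron_apply(A, B, v, x):
    m, n = A.shape[0], B.shape[0]
    X = x.reshape((m, n), order="F")           # column-major "unvec"
    Y = A @ X @ B.T                            # (B kron A) x, in matrix form
    return Y.reshape(-1, order="F") + v * x    # add the diagonal term

rng = np.random.default_rng(0)
m, n = 4, 3
A, B = rng.standard_normal((m, m)), rng.standard_normal((n, n))
v, x = rng.standard_normal(m * n), rng.standard_normal(m * n)

# Check against the explicitly assembled operator.
M = np.kron(B, A) + np.diag(v)
assert np.allclose(kron_apply(A, B, v, x), M @ x)
```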

  5. Distributed Sensor Network With Collective Computation For Situational Awareness

    NASA Astrophysics Data System (ADS)

    Dreicer, Jared S.; Jorgensen, Anders M.; Dors, Eric E.

    2002-10-01

    Initiated under Laboratory Directed R&D funding we have engaged in empirical studies, theory development, and initial hardware development for a ground-based Distributed Sensor Network with Collective Computation (DSN-CC). A DSN-CC is a network that uses node-to-node communication and on-board processing to achieve gains in response time, power usage, communication bandwidth, detection resolution, and robustness. DSN-CCs are applicable to both military and civilian problems where massive amounts of data gathered over a large area must be processed to yield timely conclusions. We have built prototype hardware DSN-CC nodes. Each node has self-contained power and is 6"×10"×2". Each node contains a battery pack with power feed from a solar panel that forms the lid, a central processing board, a GPS card, and radio card. Further system properties will be discussed, as will scenarios in which the system might be used to counter Nuclear/Biological/Chemical (NBC) threats of unconventional warfare. Mid-year in FY02 this DSN-CC research project received funding from the Office of Nonproliferation Research and Engineering (NA-22), NNSA to support nuclear proliferation technology development.

  6. High threshold distributed quantum computing with three-qubit nodes

    NASA Astrophysics Data System (ADS)

    Li, Ying; Benjamin, Simon C.

    2012-09-01

    In the distributed quantum computing paradigm, well-controlled few-qubit ‘nodes’ are networked together by connections which are relatively noisy and failure prone. A practical scheme must offer high tolerance to errors while requiring only simple (i.e. few-qubit) nodes. Here we show that relatively modest, three-qubit nodes can support advanced purification techniques and so offer robust scalability: the infidelity in the entanglement channel may be permitted to approach 10% if the infidelity in local operations is of order 0.1%. Our tolerance of network noise is therefore an order of magnitude beyond prior schemes, and our architecture remains robust even in the presence of considerable decoherence rates (memory errors). We compare the performance with that of schemes involving nodes of lower and higher complexity. Ion traps, and NV-centres in diamond, are two highly relevant emerging technologies: they possess the requisite properties of good local control, rapid and reliable readout, and methods for entanglement-at-a-distance.

  7. Above the cloud computing orbital services distributed data model

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2014-05-01

    Technology miniaturization and system architecture advancements have created an opportunity to significantly lower the cost of many types of space missions by sharing capabilities between multiple spacecraft. Historically, most spacecraft have been atomic entities that (aside from their communications with and tasking by ground controllers) operate in isolation. Several notable examples exist; however, these are purpose-designed systems that collaborate to perform a single goal. The above the cloud computing (ATCC) concept aims to create ad-hoc collaboration between service provider and consumer craft. Consumer craft can procure processing, data transmission, storage, imaging, and other capabilities from provider craft. Because of onboard storage limitations, communications link capability limitations, and limited windows of communication, data relevant to or required for various operations may span multiple craft. This paper presents a model for the identification, storage, and accessing of these data. The model includes appropriate identification features for this highly distributed environment. It also deals with business model constraints such as data ownership, retention, and the rights of the storing craft to access, resell, transmit, or discard the data in its possession. The model ensures data integrity and confidentiality (to the extent applicable to a given data item), deals with the unique constraints of the orbital environment, and tags data with business model (contractual) obligation data.

  8. Reviews of computing technology: Fiber distributed data interface

    SciTech Connect

    Johnson, A.J.

    1991-12-01

    Fiber Distributed Data Interface, more commonly known as FDDI, is the name of the standard that describes a new local area network (LAN) technology for the 90's. This technology is based on fiber optic communications and, at a data transmission rate of 100 million bits per second (Mbps), provides a full order of magnitude improvement over previous LAN standards such as Ethernet and Token Ring. FDDI as a standard has been accepted by all major computer manufacturers and is a national standard as defined by the American National Standards Institute (ANSI). FDDI will become part of the US Government Open Systems Interconnection Profile (GOSIP) under Version 3 GOSIP and will become an international standard promoted by the International Standards Organization (ISO). It is important to note that there are no competing standards for high performance LANs, so FDDI acceptance is nearly universal. This technology report describes FDDI as a technology, looks at the applications of this technology, examines the current economics of using it, and describes activities and plans by the Information Resource Management (IRM) department to implement this technology at the Savannah River Site.

  10. An Evaluation of Biosurveillance Grid—Dynamic Algorithm Distribution Across Multiple Computer Nodes

    PubMed Central

    Tsai, Ming-Chi; Tsui, Fu-Chiang; Wagner, Michael M.

    2007-01-01

    Performing fast data analysis to detect disease outbreaks plays a critical role in real-time biosurveillance. In this paper, we described and evaluated an Algorithm Distribution Manager Service (ADMS) based on grid technologies, which dynamically partitions and distributes detection algorithms across multiple computers. We compared the execution time needed to perform the analysis on a single computer and on a grid network (3 computing nodes) with and without dynamic algorithm distribution. We found that algorithms with long runtimes completed approximately three times earlier in the distributed environment than on a single computer, while short-runtime algorithms performed worse in the distributed environment. A dynamic algorithm distribution approach also performed better than a static algorithm distribution approach. This pilot study shows great potential to reduce lengthy analysis times through dynamic algorithm partitioning and parallel processing, and provides the opportunity to distribute algorithms from a client to remote computers in a grid network. PMID:18693936
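
    Dynamic distribution, as evaluated above, boils down to assigning algorithms to nodes based on their estimated runtimes rather than a fixed mapping. The sketch below uses a greedy longest-processing-time heuristic as one plausible stand-in for such a partitioner; the algorithm names, runtimes, and node count are illustrative, not the ADMS implementation.

```python
# A minimal sketch of dynamic algorithm partitioning: detection algorithms
# with estimated runtimes are assigned greedily to the currently least-loaded
# compute node (longest-processing-time first). Runtimes and node count are
# illustrative, not the ADMS implementation.
import heapq


def distribute(algorithms: dict[str, float], nodes: int) -> dict[int, list[str]]:
    assignment = {n: [] for n in range(nodes)}
    heap = [(0.0, n) for n in range(nodes)]          # (accumulated load, node id)
    heapq.heapify(heap)
    for name, runtime in sorted(algorithms.items(), key=lambda kv: -kv[1]):
        load, node = heapq.heappop(heap)             # least-loaded node so far
        assignment[node].append(name)
        heapq.heappush(heap, (load + runtime, node))
    return assignment


estimated_runtimes = {"CUSUM": 120.0, "EWMA": 30.0, "SaTScan": 600.0, "EARS-C3": 45.0}
print(distribute(estimated_runtimes, nodes=3))
```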

  11. 26 CFR 1.1247-2 - Computation and distribution of taxable income.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 11 2010-04-01 2010-04-01 true Computation and distribution of taxable income....1247-2 Computation and distribution of taxable income. (a) In general. Taxable income of a foreign... such taxable year as a distribution made during such taxable year of such taxable income. The...

  12. Design & implementation of distributed spatial computing node based on WPS

    NASA Astrophysics Data System (ADS)

    Liu, Liping; Li, Guoqing; Xie, Jibo

    2014-03-01

    Currently, research on SIG (Spatial Information Grid) technology mostly emphasizes spatial data sharing in the grid environment, while the importance of spatial computing resources is ignored. In order to implement the sharing and cooperation of spatial computing resources in a grid environment, this paper systematically investigates the key technologies for constructing a Spatial Computing Node based on the WPS (Web Processing Service) specification of the OGC (Open Geospatial Consortium). A framework for the Spatial Computing Node is designed according to the features of spatial computing resources. Finally, a prototype of the Spatial Computing Node is implemented and the relevant verification work in this environment is completed.

  13. Burnup calculations for KIPT accelerator driven subcritical facility using Monte Carlo computer codes-MCB and MCNPX.

    SciTech Connect

    Gohar, Y.; Zhong, Z.; Talamo, A.; Nuclear Engineering Division

    2009-06-09

    Argonne National Laboratory (ANL) of the USA and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an electron accelerator driven subcritical (ADS) facility, using the KIPT electron accelerator. The neutron source of the subcritical assembly is generated from the interaction of a 100 kW electron beam with a natural uranium target. The electron beam has a uniform spatial distribution and an electron energy in the range of 100 to 200 MeV. The main functions of the subcritical assembly are the production of medical isotopes and the support of the Ukraine nuclear power industry. Neutron physics experiments and material structure analyses are planned using this facility. With the 100 kW electron beam power, the total thermal power of the facility is approximately 375 kW, including a fission power of approximately 260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the reactivity during operation, which reduces the neutron flux level and consequently the facility performance. To preserve the neutron flux level during operation, fuel assemblies should be added after long operating periods to compensate for the lost reactivity. This process requires accurate prediction of the fuel burnup, the decay behavior of the fission products, and the reactivity introduced by adding fresh fuel assemblies. The recent developments of Monte Carlo computer codes, the high speed capability of computer processors, and parallel computation techniques have made it possible to perform three-dimensional detailed burnup simulations. A full detailed three-dimensional geometrical model is used for the burnup simulations, with continuous energy nuclear data libraries for the transport calculations and 63-multigroup or one-group cross section libraries for the depletion calculations. The Monte Carlo computer codes MCNPX and MCB are utilized for this study. MCNPX transports the

  14. Managing to Change: The Wharton School's Distributed Staff Model for Computing Support.

    ERIC Educational Resources Information Center

    Eleey, Michael

    1993-01-01

    The University of Pennsylvania's Wharton School introduced a "distributed" organization for managing computing support services. The hybrid structure combined elements of centralized computing and departmental computing by placing computing personnel in the departments, under central management. The program covers a wide range of support services…

  15. Impact of Distributed Energy Resources on the Reliability of a Critical Telecommunications Facility

    SciTech Connect

    Robinson, D.; Atcitty, C.; Zuffranieri, J.; Arent, D.

    2006-03-01

    Telecommunications has been identified by the Department of Homeland Security as a critical infrastructure to the United States. Failures in the power systems supporting major telecommunications service nodes are a main contributor to major telecommunications outages, as documented by analyses of Federal Communications Commission (FCC) outage reports by the National Reliability Steering Committee (under auspices of the Alliance for Telecommunications Industry Solutions). There are two major issues that are having increasing impact on the sensitivity of the power distribution to telecommunication facilities: deregulation of the power industry, and changing weather patterns. A logical approach to improve the robustness of telecommunication facilities would be to increase the depth and breadth of technologies available to restore power in the face of power outages. Distributed energy resources such as fuel cells and gas turbines could provide one more onsite electric power source to provide backup power, if batteries and diesel generators fail. But does the diversity in power sources actually increase the reliability of offered power to the office equipment, or does the complexity of installing and managing the extended power system induce more potential faults and higher failure rates? This report analyzes a system involving a telecommunications facility consisting of two switch-bays and a satellite reception system.

  16. Asteroids@home-A BOINC distributed computing project for asteroid shape reconstruction

    NASA Astrophysics Data System (ADS)

    Ďurech, J.; Hanuš, J.; Vančo, R.

    2015-11-01

    We present the project Asteroids@home that uses distributed computing to solve the time-consuming inverse problem of shape reconstruction of asteroids. The project uses the Berkeley Open Infrastructure for Network Computing (BOINC) framework to distribute, collect, and validate small computational units that are solved independently at individual computers of volunteers connected to the project. Shapes, rotational periods, and orientations of the spin axes of asteroids are reconstructed from their disk-integrated photometry by the lightcurve inversion method.
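
    The BOINC model works because the period search decomposes into independent slices of the trial-period range, each small enough to run on a volunteer machine. The sketch below shows that kind of work-unit generation; the numbers and field names are illustrative and do not reflect the actual Asteroids@home configuration.

```python
# A minimal sketch of how a period-search problem can be cut into independent
# BOINC-style work units: each unit scans a small slice of the trial-period
# range and the best fits are merged afterwards. Numbers and field names are
# illustrative, not the Asteroids@home configuration.
from dataclasses import dataclass


@dataclass(frozen=True)
class WorkUnit:
    asteroid_id: int
    period_min_h: float   # lower bound of the trial rotation period, hours
    period_max_h: float   # upper bound of the trial rotation period, hours


def make_work_units(asteroid_id: int, p_lo: float, p_hi: float, n_units: int) -> list[WorkUnit]:
    step = (p_hi - p_lo) / n_units
    return [WorkUnit(asteroid_id, p_lo + i * step, p_lo + (i + 1) * step)
            for i in range(n_units)]


units = make_work_units(asteroid_id=433, p_lo=2.0, p_hi=100.0, n_units=49)
print(len(units), units[0])   # 49 independent slices sent to volunteer hosts
```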

  17. Computer simulation of an alternate-energy-based, high-density brooding facility

    SciTech Connect

    Simmons, J.D.

    1986-01-01

    A computer model was developed to simulate a poultry brooding facility characterized by high-density cage or floor brooding, environmental housing, ventilation heat recovery, solar energy collection, and biogas generation. Repeated simulations revealed the following: (1) Solar collection and ventilation heat recovery could reduce fossil fuel use by 12 and 91%, respectively. Combining solar collection and heat recovery may reduce fossil fuel use by only an additional 1.5%. (2) Methane generation can provide more energy on a yearly basis than is required for supplemental heat for brooding. Seasonal energy demands do not match supplies from methane generation and shortages may occur in winter as well as excesses in summer. A digester operated in the thermophilic temperature range produces more net energy than one operated in the mesophilic range. (3) Operating expenses for the simulated cage facility exceeded conventional brooding. (4) Relative humidity patterns of certain areas create the need for complex controls to properly maintain the internal environment. (5) Feed and fuel account for nearly 100% of the operating expenses of brooding. Controlling heat and ventilation with a microprocessor may be the only way to optimize the environment of a broiler brooding facility.

  18. Comparison of airborne and surface particulate size distributions in specific Hanford Nuclear Facilities

    SciTech Connect

    Ottley, D.B.

    1995-05-01

    Settled dust from nuclear operations may be contaminated with radionuclides and become resuspended and subsequently breathed. This is the predominant radionuclide inhalation hazard scenario in nuclear facilities that have been deactivated and no longer have liquid in their process systems that may become directly airborne in accident situations. Comparisons were made between the indoor ambient airborne particulate size distribution and that of resuspended dust that could become contaminated and subsequently airborne during decommissioning operations at selected nuclear facilities on the Hanford Site. Results indicate that only 5% of the particles, by count, above the breathing zone are greater than ten (10) µm in size and that the particulates that could be resuspended into the breathing zone had a mean aerodynamic equivalent diameter of four (4) µm or less.

  19. Access Control for Agent-based Computing: A Distributed Approach.

    ERIC Educational Resources Information Center

    Antonopoulos, Nick; Koukoumpetsos, Kyriakos; Shafarenko, Alex

    2001-01-01

    Discusses the mobile software agent paradigm that provides a foundation for the development of high performance distributed applications and presents a simple, distributed access control architecture based on the concept of distributed, active authorization entities (lock cells), any combination of which can be referenced by an agent to provide…

  20. Playable Serious Games for Studying and Programming Computational STEM and Informatics Applications of Distributed and Parallel Computer Architectures

    ERIC Educational Resources Information Center

    Amenyo, John-Thones

    2012-01-01

    Carefully engineered playable games can serve as vehicles for students and practitioners to learn and explore the programming of advanced computer architectures to execute applications, such as high performance computing (HPC) and complex, inter-networked, distributed systems. The article presents families of playable games that are grounded in…

  1. Remote access of the ILLIAC 4. [computer flow distribution simulations

    NASA Technical Reports Server (NTRS)

    Stevens, K. G., Jr.

    1975-01-01

    The ILLIAC-4 hardware is described. The Illiac system, the Advanced Research Projects Agency computer network, and IMLAC PDS-1 are included. The space shuttle flow simulation is demonstrated to show the feasibility of using an advanced computer from a remote location.

  2. Addressing capability computing challenges of high-resolution global climate modelling at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, Valentine; Norman, Matthew; Evans, Katherine; Taylor, Mark; Worley, Patrick; Hack, James; Mayer, Benjamin

    2014-05-01

    During 2013, high-resolution climate model simulations accounted for over 100 million "core hours" using Titan at the Oak Ridge Leadership Computing Facility (OLCF). The suite of climate modeling experiments, primarily using the Community Earth System Model (CESM) at nearly 0.25 degree horizontal resolution, generated over a petabyte of data and nearly 100,000 files, ranging in size from 20 MB to over 100 GB. Effective utilization of leadership class resources requires careful planning and preparation. The application software, such as CESM, needs to be ported, optimized, and benchmarked for the target platform in order to meet the computational readiness requirements. The model configuration needs to be "tuned and balanced" for the experiments. This can be a complicated and resource intensive process, especially for high-resolution configurations using complex physics. The volume of I/O also increases with resolution, and new strategies may be required to manage I/O, especially for large checkpoint and restart files that may require more frequent output for resiliency. It is also essential to monitor the application performance during the course of the simulation exercises. Finally, the large volume of data needs to be analyzed to derive the scientific results, and appropriate data and information delivered to the stakeholders. Titan is currently the largest supercomputer available for open science. The computational resources, in terms of "titan core hours", are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) and ASCR Leadership Computing Challenge (ALCC) programs, both sponsored by the U.S. Department of Energy (DOE) Office of Science. Titan is a Cray XK7 system capable of a theoretical peak performance of over 27 PFlop/s; it consists of 18,688 compute nodes, with an NVIDIA Kepler K20 GPU and a 16-core AMD Opteron CPU in every node, for a total of 299,008 Opteron cores and 18,688 GPUs offering a cumulative 560

  3. Achieving production-level use of HEP software at the Argonne Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Uram, T. D.; Childers, J. T.; LeCompte, T. J.; Papka, M. E.; Benjamin, D.

    2015-12-01

    HEP's demand for computing resources has grown beyond the capacity of the Grid, and these demands will accelerate with the higher energy and luminosity planned for Run II. Mira, the ten petaFLOPs supercomputer at the Argonne Leadership Computing Facility, is a potentially significant compute resource for HEP research. Through an award of fifty million hours on Mira, we have delivered millions of events to LHC experiments by establishing the means of marshaling jobs through serial stages on local clusters, and parallel stages on Mira. We are running several HEP applications, including Alpgen, Pythia, Sherpa, and Geant4. Event generators, such as Sherpa, typically have a split workload: a small scale integration phase, and a second, more scalable, event-generation phase. To accommodate this workload on Mira we have developed two Python-based Django applications, Balsam and ARGO. Balsam is a generalized scheduler interface which uses a plugin system for interacting with scheduler software such as HTCondor, Cobalt, and TORQUE. ARGO is a workflow manager that submits jobs to instances of Balsam. Through these mechanisms, the serial and parallel tasks within jobs are executed on the appropriate resources. This approach and its integration with the PanDA production system will be discussed.
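
    Balsam is described as a generalized scheduler interface with a plugin per batch system. The sketch below illustrates that plugin pattern in Python with a common abstract interface and two backend stubs; it is an illustration of the design idea only, not the actual Balsam or ARGO API, and the submission flags shown are assumptions.

```python
# A minimal sketch of the plugin idea described above: a common scheduler
# interface with backend-specific implementations that build the submission
# command. An illustration of the pattern, not the actual Balsam API; the
# flags shown are assumptions.
from abc import ABC, abstractmethod


class SchedulerPlugin(ABC):
    @abstractmethod
    def submit_command(self, script: str, nodes: int, walltime_min: int) -> list[str]:
        ...


class CobaltPlugin(SchedulerPlugin):
    def submit_command(self, script, nodes, walltime_min):
        # Flags shown are illustrative; consult the site documentation.
        return ["qsub", "-n", str(nodes), "-t", str(walltime_min), script]


class HTCondorPlugin(SchedulerPlugin):
    def submit_command(self, script, nodes, walltime_min):
        return ["condor_submit", script]


def submit(plugin: SchedulerPlugin, script: str, nodes: int, walltime_min: int) -> None:
    cmd = plugin.submit_command(script, nodes, walltime_min)
    print("would run:", " ".join(cmd))   # a real manager would call subprocess.run(cmd)


submit(CobaltPlugin(), "run_sherpa.sh", nodes=512, walltime_min=60)
```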

  4. Raman distributed temperature measurement at CERN high energy accelerator mixed field radiation test facility (CHARM)

    NASA Astrophysics Data System (ADS)

    Toccafondo, Iacopo; Nannipieri, Tiziano; Signorini, Alessandro; Guillermain, Elisa; Kuhnhenn, Jochen; Brugger, Markus; Di Pasquale, Fabrizio

    2015-09-01

    In this paper we present a validation of Raman distributed temperature sensing (RDTS) at the CERN high energy accelerator mixed field radiation test facility (CHARM), newly developed in order to qualify electronics for the challenging radiation environment of accelerators and connected high energy physics experiments. By investigating the effect of wavelength-dependent radiation induced absorption (RIA) on the Raman Stokes and anti-Stokes light components in radiation tolerant Ge-doped multi-mode (MM) graded-index optical fibers, we demonstrate that Raman DTS used in a loop configuration is robust to harsh environments in which the fiber is exposed to a mixed radiation field. The temperature profiles measured on commercial Ge-doped optical fibers are fully reliable and can therefore be used to correct the RIA temperature dependence in distributed radiation sensing systems based on P-doped optical fibers.

  5. Spatially Resolved Temperature and Water Vapor Concentration Distributions in Supersonic Combustion Facilities by TDLAT

    NASA Technical Reports Server (NTRS)

    Busa, K. M.; McDaniel J. C.; Diskin, G. S.; DePiro, M. J.; Capriotti, D. P.; Gaffney, R. L.

    2012-01-01

    Detailed knowledge of the internal structure of high-enthalpy flows can provide valuable insight into the performance of scramjet combustors. Tunable Diode Laser Absorption Spectroscopy (TDLAS) is often employed to measure temperature and species concentration. However, TDLAS is a path-integrated line-of-sight (LOS) measurement, and thus does not produce spatially resolved distributions. Tunable Diode Laser Absorption Tomography (TDLAT) is a non-intrusive measurement technique for determining two-dimensional spatially resolved distributions of temperature and species concentration in high enthalpy flows. TDLAT combines TDLAS with tomographic image reconstruction. More than 2500 separate line-of-sight TDLAS measurements are analyzed in order to produce highly resolved temperature and species concentration distributions. Measurements have been collected at the University of Virginia's Supersonic Combustion Facility (UVaSCF) as well as at the NASA Langley Direct-Connect Supersonic Combustion Test Facility (DCSCTF). Because of the UVaSCF's unique electrical heating and capability for vitiate addition, measurements collected at the UVaSCF are presented as a calibration of the technique. Measurements collected at the DCSCTF required significant modifications to system hardware and software designs due to its larger measurement area and shorter test duration. Tomographic temperature and water vapor concentration distributions are presented from experimentation on the UVaSCF operating at a high temperature non-reacting case with a water vitiation level of 12%. Initial LOS measurements from the NASA Langley DCSCTF operating at an equivalence ratio of 0.5 are also presented. Results show the capability of TDLAT to adapt to several experimental setups and test parameters.

  6. The Development of a Computer Assisted Distribution and Assignment (CADA) System for Navy Enlisted Personnel.

    ERIC Educational Resources Information Center

    Whitehead, Randall F.; And Others

    This report describes the development of a computerized system to assist Navy personnel managers in carrying out the functions associated with the distribution and assignment of enlisted personnel. This Computer Assisted Distribution and Assignment (CADA) System is aimed at the most efficient interaction between the computer and human manager to…

  7. High-performance, distributed computing software libraries and services

    2002-01-24

    The Globus toolkit provides basic Grid software infrastructure (i.e. middleware), to facilitate the development of applications which securely integrate geographically separated resources, including computers, storage systems, instruments, immersive environments, etc.

  8. Overview of the human brain as a distributed computing network

    SciTech Connect

    Gevins, A.S.

    1983-01-01

    The hierarchically organized human brain is viewed as a prime example of a massively parallel, adaptive information processing and process control system. A brief overview of the human brain is provided for computer architects, in hopes that the principles of massive parallelism, dense connectivity and self-organization of assemblies of processing elements will prove relevant to the design of fifth generation VLSI computing networks. 6 references.

  9. The FOSS GIS Workbench on the GFZ Load Sharing Facility compute cluster

    NASA Astrophysics Data System (ADS)

    Löwe, P.; Klump, J.; Thaler, J.

    2012-04-01

    Compute clusters can be used as GIS workbenches; their wealth of resources allows us to take on geocomputation tasks which exceed the limitations of smaller systems. Harnessing these capabilities requires a Geographic Information System (GIS) able to utilize the available cluster configuration/architecture and a sufficient degree of user friendliness to allow for wide application. In this paper we report on the first successful porting of GRASS GIS, the oldest and largest Free Open Source (FOSS) GIS project, onto a compute cluster using Platform Computing's Load Sharing Facility (LSF). In 2008, GRASS 6.3 was installed on the GFZ compute cluster, which at that time comprised 32 nodes. Interaction with the GIS was limited to the command line interface, which required further development to encapsulate the GRASS GIS business layer and facilitate its use by users not familiar with GRASS GIS. During the summer of 2011, multiple versions of GRASS GIS (6.4, 6.5, and 7.0) were installed on the upgraded GFZ compute cluster, now consisting of 234 nodes with 480 CPUs providing 3084 cores. The GFZ compute cluster currently offers 19 different processing queues with varying hardware capabilities and priorities, allowing for fine-grained scheduling and load balancing. After successful testing of core GIS functionalities, including the graphical user interface, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues. The mechanisms are based on earlier work by NETELER et al. (2008). A first application of the new GIS functionality was the generation of maps of simulated tsunamis in the Mediterranean Sea for the Tsunami Atlas of the FP-7 TRIDEC Project (www.tridec-online.eu). For this, up to 500 processing nodes were used in parallel. Further trials included the processing of geometrically complex problems requiring significant amounts of processing time. The GIS cluster successfully completed all these tasks, with processing times

  10. Availability and distribution of safe abortion services in rural areas: a facility assessment study in Madhya Pradesh, India

    PubMed Central

    Chaturvedi, Sarika; Ali, Sayyed; Randive, Bharat; Sabde, Yogesh; Diwan, Vishal; De Costa, Ayesha

    2015-01-01

    Background Unsafe abortion contributes to a significant portion of maternal mortality in India. Access to safe abortion care is known to reduce maternal mortality. Availability and distribution of abortion care facilities can influence women's access to these services, especially in rural areas. Objectives To assess the availability and distribution of abortion care at facilities providing childbirth care in three districts of Madhya Pradesh (MP) province of India. Design Three sociodemographically heterogeneous districts of MP were selected for this study. Facilities conducting at least 10 deliveries a month were surveyed to assess availability and provision of abortion services using UN signal functions for emergency obstetric care. A Geographical Information System was used for visualisation of the distribution of facilities. Results The three districts had 99 facilities that conducted >10 deliveries a month: 74 in the public and 25 in the private sector. Overall, 48% of facilities reported an ability to provide safe surgical abortion services. Of public centres, 32% reported this ability compared to 100% of private centres, while 18% of public centres and 77% of private centres had performed an abortion in the last 3 months. The availability of abortion services was higher at higher facility levels with better equipment and skilled personnel, in urban areas, and in private sector facilities. Conclusions Findings showed that the availability of safe abortion care is limited, especially in rural areas. More emphasis on providing safe abortion services, particularly at the primary care level, is important to more significantly dent maternal mortality in India. PMID:25797220

  11. Assessment of the Distribution of Toxic Release Inventory Facilities in Metropolitan Charleston: An Environmental Justice Case Study

    PubMed Central

    Fraser-Rahim, Herb; Williams, Edith; Zhang, Hongmei; Rice, LaShanta; Svendsen, Erik; Abara, Winston

    2012-01-01

    Objectives. We assessed spatial disparities in the distribution of Toxic Release Inventory (TRI) facilities in Charleston, SC. Methods. We used spatial methods and regression to assess burden disparities in the study area at the block and census-tract levels by race/ethnicity and socioeconomic status (SES). Results. Results revealed an inverse relationship between distance to TRI facilities and race/ethnicity and SES at the block and census-tract levels. Results of regression analyses showed a positive association between presence of TRI facilities and high percentage non-White and a negative association between number of TRI facilities and high SES. Conclusions. There are burden disparities in the distribution of TRI facilities in Charleston at the block and census-tract level by race/ethnicity and SES. Additional research is needed to understand cumulative risk in the region. PMID:22897529

  12. An inverse method for computation of structural stiffness distributions of aeroelastically optimized wings

    NASA Astrophysics Data System (ADS)

    Schuster, David M.

    1993-04-01

    An inverse method has been developed to compute the structural stiffness properties of wings given a specified wing loading and aeroelastic twist distribution. The method directly solves for the bending and torsional stiffness distribution of the wing using a modal representation of these properties. An aeroelastic design problem involving the use of a computational aerodynamics method to optimize the aeroelastic twist distribution of a fighter wing operating at maneuver flight conditions is used to demonstrate the application of the method. This exercise verifies the ability of the inverse scheme to accurately compute the structural stiffness distribution required to generate a specific aeroelastic twist under a specified aeroelastic load.
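
    The record's inverse scheme is not given in detail, so the following is only a minimal sketch of the underlying idea under assumed conditions: expand a torsional stiffness GJ(y) in a few smooth modes and solve a least-squares problem for the modal coefficients from a prescribed twist and torque distribution on a simple beam-torsion model. All profiles and mode shapes are illustrative, not the paper's aeroelastic model.

```python
"""Sketch: recover a torsional stiffness distribution GJ(y), expanded in a
few smooth modes, from a prescribed twist theta(y) and distributed torque
t(y), via least squares on the beam torsion equation
d/dy( GJ * dtheta/dy ) = -t(y)."""
import numpy as np

L, n = 10.0, 201                     # semi-span and grid size (assumed)
y = np.linspace(0.0, L, n)

theta = np.deg2rad(3.0) * (y / L) ** 2          # prescribed aeroelastic twist
torque = 500.0 * np.sqrt(1.0 - (y / L) ** 2)    # prescribed torque loading

modes = np.array([(1.0 - y / L) ** k for k in range(4)])  # stiffness modes phi_k(y)

dtheta = np.gradient(theta, y)
# Column k of A is d/dy( phi_k * dtheta/dy ), so A @ c = -torque is linear in c.
A = np.array([np.gradient(phi * dtheta, y) for phi in modes]).T
c, *_ = np.linalg.lstsq(A, -torque, rcond=None)

GJ = modes.T @ c                     # recovered stiffness distribution
print("modal coefficients:", c)
print("GJ at root / tip:", GJ[0], GJ[-1])
```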

  13. A design study for the upgraded ALICE O2 computing facility

    NASA Astrophysics Data System (ADS)

    Richter, Matthias

    2015-12-01

    An upgrade of the ALICE detector is currently prepared for the Run 3 period of the Large Hadron Collider (LHC) at CERN starting in 2020. The physics topics under study by ALICE during this period will require the inspection of all collisions at a rate of 50 kHz for minimum bias Pb-Pb and 200 kHz for pp and p-Pb collisions in order to extract physics signals embedded into a large background. The upgraded ALICE detector will produce more than 1 TByte/s of data. Both collision and data rate impose new challenges onto the detector readout and compute system. Some detectors will not use a triggered readout, which will require a continuous processing of the detector data. The challenging requirements will be met by a combined online and offline facility developed and managed by the ALICE O2 project. The combined facility will accommodate the necessary substantial increase of data taking rate. In this paper we present first results of a prototype with estimates for scalability and feasibility for a full scale system.

  14. Navier-Stokes Simulation of Airconditioning Facility of a Large Modern Computer Room

    NASA Technical Reports Server (NTRS)

    2005-01-01

    NASA recently assembled one of the world's fastest operational supercomputers to meet the agency's new high performance computing needs. This large-scale system, named Columbia, consists of 20 interconnected SGI Altix 512-processor systems, for a total of 10,240 Intel Itanium-2 processors. High-fidelity CFD simulations were performed for the NASA Advanced Supercomputing (NAS) computer room at Ames Research Center. The purpose of the simulations was to assess the adequacy of the existing air handling and conditioning system and make recommendations for changes in the design of the system if needed. The simulations were performed with NASA's OVERFLOW-2 CFD code, which utilizes overset structured grids. A new set of boundary conditions was developed and added to the flow solver for modeling the room's air-conditioning and proper cooling of the equipment. Boundary condition parameters for the flow solver are based on cooler CFM (flow rate) ratings and some reasonable assumptions of flow and heat transfer data for the floor and central processing units (CPU). The geometry modeling from blueprints and grid generation were handled by the NASA Ames software package Chimera Grid Tools (CGT). This geometric model was developed as a CGT-scripted template, which can be easily modified to accommodate any changes in shape and size of the room, locations and dimensions of the CPU racks, disk racks, coolers, power distribution units, and mass-storage system. The compute nodes are grouped in pairs of racks with an aisle in the middle. High-speed connection cables connect the racks with overhead cable trays. The cool air from the cooling units is pumped into the computer room from a sub-floor through perforated floor tiles. The CPU cooling fans draw cool air from the floor tiles, which run along the outside length of each rack, and eject warm air into the center aisle between the racks. This warm air is eventually drawn into the cooling units located near the walls of the room. One

  15. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.

    1983-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
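
    A present-day analogue of the routines listed above, sketched with scipy.stats rather than the USGS Fortran library itself (the calls below are standard SciPy, not the report's subroutines):

```python
"""Modern analogues (a sketch, not the USGS Fortran routines) of the
distributions listed in the record, using scipy.stats."""
import numpy as np
from scipy import stats

# Probability distributions: beta, chi-square, gamma, normal,
# Pearson Type III, Weibull, Student's t, noncentral t, F.
print(stats.beta(2, 5).cdf(0.3))          # beta CDF
print(stats.chi2(4).ppf(0.95))            # chi-square quantile
print(stats.gamma(a=2.0).mean())
print(stats.norm(0, 1).sf(1.96))          # Gaussian tail probability
print(stats.pearson3(skew=1.0).ppf(0.99)) # Pearson Type III quantile
print(stats.weibull_min(c=1.5).median())
print(stats.t(10).ppf(0.975))             # Student's t
print(stats.nct(10, 1.0).mean())          # noncentral t
print(stats.f(3, 20).ppf(0.95))           # Snedecor F

# Kolmogorov-Smirnov D (limiting distribution) and random numbers.
print(stats.kstwobign.sf(1.36))
rng = np.random.default_rng(1)
print(rng.normal(size=3), rng.uniform(size=3))
```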

  16. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.H.

    1980-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F tests. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)

  17. Parallel grid generation algorithm for distributed memory computers

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and the implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.
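
    The algebraic, homotopy-based idea can be illustrated with a small serial sketch: blend an inner body curve into an outer boundary with a one-parameter homotopy. The parallel hypercube decomposition and the orthogonality control of the record's scheme are not reproduced; the boundary shapes and grid sizes below are assumptions.

```python
"""A minimal serial sketch of algebraic grid generation: a linear
homotopic blend between an inner body curve and an outer boundary."""
import numpy as np

ni, nj = 65, 33                              # grid points (assumed sizes)
s = np.linspace(0.0, 2.0 * np.pi, ni)        # parameter along the boundaries

# Inner boundary: an ellipse standing in for a body section (illustrative).
inner = np.stack([2.0 * np.cos(s), 1.0 * np.sin(s)], axis=-1)
# Outer boundary: a far-field circle.
outer = np.stack([10.0 * np.cos(s), 10.0 * np.sin(s)], axis=-1)

# Homotopy parameter with clustering toward the body for resolution.
t = (np.linspace(0.0, 1.0, nj) ** 1.5)[:, None, None]
grid = (1.0 - t) * inner[None, :, :] + t * outer[None, :, :]   # shape (nj, ni, 2)

print("grid shape:", grid.shape)
print("first interior point off the body:", grid[1, 0])
```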

  18. An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions

    ERIC Educational Resources Information Center

    Radhakrishnan, R.; Choudhury, Askar

    2009-01-01

    Computing the mean and covariance matrix of some multivariate distributions, in particular, the multivariate normal distribution and the Wishart distribution, is considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…
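
    The transformation described above can be made concrete with a short Monte Carlo check: write X = mu + Lz with z a vector of independent standard normals and L a Cholesky factor of Sigma, and the mean and covariance of X follow. The numbers below are illustrative, not from the article.

```python
"""Sketch of the transformation idea: a correlated normal vector as
X = mu + L z, with z having independent standard normal components
(L from a Cholesky factorization), verified here by Monte Carlo."""
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.6, 0.3],
                  [0.6, 1.0, 0.2],
                  [0.3, 0.2, 0.5]])
L = np.linalg.cholesky(Sigma)

z = rng.standard_normal((200_000, 3))   # independent N(0,1) components
X = mu + z @ L.T                        # transformed, correlated samples

print("empirical mean      :", X.mean(axis=0))              # ~ mu
print("empirical covariance:\n", np.cov(X, rowvar=False))   # ~ Sigma
```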

  19. A study of standard building blocks for the design of fault-tolerant distributed computer systems

    NASA Technical Reports Server (NTRS)

    Rennels, D. A.; Avizienis, A.; Ercegovac, M.

    1978-01-01

    This paper presents the results of a study that has established a standard set of four semiconductor VLSI building-block circuits. These circuits can be assembled with off-the-shelf microprocessors and semiconductor memory modules into fault-tolerant distributed computer configurations. The resulting multi-computer architecture uses self-checking computer modules backed up by a limited number of spares. A redundant bus system is employed for communication between computer modules.

  20. Computing pressure distributions in wedges and pinch-outs

    SciTech Connect

    Chih-Cheng Chen; Raghaven, R.

    1995-12-31

    A solution for wedge-type systems in terms of the Laplace transformation is derived. Characteristics of responses are discussed and computational issues are addressed. The algorithm given here is a practical tool for analyzing flows in wedge-type systems and may be incorporated immediately into existing software packages. Existing solutions are a subset of the solution given here.
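
    Since the record's solution is posed in Laplace space, a numerical inversion step is the natural computational companion. The sketch below uses the generic Gaver-Stehfest algorithm with a simple test transform standing in for the wedge-flow solution F(s); it is not the paper's algorithm, only one common way such Laplace-domain solutions are evaluated.

```python
"""Gaver-Stehfest numerical inversion of a Laplace-domain solution.
F(s) below is a simple test transform; a wedge-flow solution in Laplace
space could be substituted for it."""
from math import factorial, log, exp

def stehfest_weights(N=12):
    """Stehfest coefficients V_i for an even number of terms N."""
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * factorial(2 * k) /
                  (factorial(N // 2 - k) * factorial(k) * factorial(k - 1) *
                   factorial(i - k) * factorial(2 * k - i)))
        V.append((-1) ** (N // 2 + i) * s)
    return V

def invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    ln2 = log(2.0)
    V = stehfest_weights(N)
    return (ln2 / t) * sum(V[i - 1] * F(i * ln2 / t) for i in range(1, N + 1))

# Test transform: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
for t in (0.5, 1.0, 2.0):
    print(t, invert(lambda s: 1.0 / (s + 1.0), t), exp(-t))
```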

  1. Distributed computing for autonomous on board planning and sequence validations

    NASA Technical Reports Server (NTRS)

    Ko, A. Y.; Alkalai, L.; Chau, S.; Cheung, K.; Tong, D.; Maldague, P. F.

    2002-01-01

    We propose a new conceptual approach to system-level autonomy that exploits in a synergistic way recent breakthroughs in three specific areas: automatic generation of embeddable planning and validation software, integration of telecommunications forecaster and planning tools, and fault-tolerant assignment of computing tasks to multiple processors.

  2. Polytopol computing for multi-core and distributed systems

    NASA Astrophysics Data System (ADS)

    Spaanenburg, Henk; Spaanenburg, Lambert; Ranefors, Johan

    2009-05-01

    Multi-core computing provides new challenges to software engineering. The paper addresses such issues in the general setting of polytopol computing, which takes multi-core problems in such widely differing areas as ambient intelligence sensor networks and cloud computing into account. It argues that the essence lies in a suitable allocation of free moving tasks. Where hardware is ubiquitous and pervasive, the network is virtualized into a connection of software snippets judiciously injected into the hardware so that a system function appears as one again. The concept of polytopol computing provides a further formalization in terms of the partitioning of labor between collector and sensor nodes. Collectors provide functions such as a knowledge integrator, awareness collector, situation displayer/reporter, communicator of clues and an inquiry-interface provider. Sensors provide functions such as anomaly detection (only communicating singularities, not continuous observation); they are generally powered or self-powered, amorphous (not on a grid) with generation-and-attrition, field re-programmable, and sensor plug-and-play-able. Together the collector and the sensor are part of the skeleton injector mechanism, added to every node, which gives the network the ability to organize itself into some of many topologies. Finally, we discuss a number of applications and indicate how a multi-core architecture supports the security aspects of the skeleton injector.

  3. First thoughts on KM3NeT on-shore data storage and distribution facilities

    NASA Astrophysics Data System (ADS)

    Stavrianakou, M.

    2009-04-01

    The KM3NeT project studies the design of an underwater neutrino telescope combined with a multidisciplinary underwater observatory in the Mediterranean. Data from the telescope will arrive on shore where they will be processed in real time at a data filter farm and subsequently stored and backed up at a central computing centre located on site. From there we propose a system whereby the data are distributed to participating institutes equipped with large computing centres for further processing, duplication and distribution to smaller centres. The data taking site hosts the central data management services, including the database servers, bookkeeping systems and file catalogue services, the data access and file transfer systems, data quality monitoring systems and transaction monitoring daemons and is equipped with fast network connection to all large computing sites. Data and service challenges in the course of the preparatory phase must be anticipated in order to test the hardware and software components in terms of robustness and performance, scalability as well as modularity and replaceability, given the rapid evolution of the market both in terms of CPU performance and storage capacity. The role of the GRID would also have to be evaluated and the appropriate implementation selected on time for an eventual test in the context of a data challenge before the start of data taking.

  4. Enabling Extreme Scale Earth Science Applications at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, V. G.; Mozdzynski, G.; Hamrud, M.; Deconinck, W.; Smith, L.; Hack, J.

    2014-12-01

    The Oak Ridge Leadership Computing Facility (OLCF), established at the Oak Ridge National Laboratory (ORNL) under the auspices of the U.S. Department of Energy (DOE), welcomes investigators from universities, government agencies, national laboratories and industry who are prepared to perform breakthrough research across a broad domain of scientific disciplines, including earth and space sciences. Titan, the OLCF flagship system, is currently listed as #2 in the Top500 list of supercomputers in the world, and the largest available for open science. The computational resources are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, sponsored by the U.S. DOE Office of Science. In 2014, over 2.25 billion core hours on Titan were awarded via INCITE projects, including 14% of the allocation toward earth sciences. The INCITE competition is also open to research scientists based outside the USA. In fact, international research projects account for 12% of the INCITE awards in 2014. The INCITE scientific review panel also includes 20% participation from international experts. Recent accomplishments in earth sciences at OLCF include the world's first continuous simulation of 21,000 years of earth's climate history (2009), and an unprecedented simulation of a magnitude 8 earthquake over 125 sq. miles. One of the ongoing international projects involves scaling the ECMWF Integrated Forecasting System (IFS) model to over 200K cores of Titan. ECMWF is a partner in the EU-funded Collaborative Research into Exascale Systemware, Tools and Applications (CRESTA) project. The significance of the research carried out within this project is the demonstration of techniques required to scale current generation Petascale capable simulation codes towards the performance levels required for running on future Exascale systems. One of the techniques pursued by ECMWF is to use Fortran2008 coarrays to overlap computations and communications and

  5. Experimental and computational studies of fatty acid distribution networks.

    PubMed

    Liu, Yong; Buendía-Rodríguez, Germán; Peñuelas-Rívas, Claudia Giovanna; Tan, Zhiliang; Rívas-Guevara, María; Tenorio-Borroto, Esvieta; Munteanu, Cristian R; Pazos, Alejandro; González-Díaz, Humberto

    2015-11-01

    Unbalanced uptake of Omega 6/Omega 3 (ω-6/ω-3) ratios could increase chronic disease occurrences, such as inflammation, atherosclerosis, or tumor proliferation, and methylation methods for measuring the ruminal microbiome fatty acid (FA) composition/distribution play a vital role in discovering the contribution of food components to ruminant products (e.g., meat and milk) when pursuing a healthy diet. Hansch's models based on Linear Free Energy Relationships (LFERs) using physicochemical parameters, such as partition coefficients, molar refractivity, and polarizability, as input variables (Vk) are advocated. In this work, a new combined experimental and theoretical strategy was proposed to study the effect of ω-6/ω-3 ratios, FA chemical structure, and other factors over FA distribution networks in the ruminal microbiome. In step 1, experiments were carried out to measure long chain fatty acid (LCFA) profiles in the rumen microbiome (bacterial and protozoan), and volatile fatty acids (VFAs) in fermentation media. In step 2, the proportions and physicochemical parameter values of LCFAs and VFAs were calculated under different boundary conditions (cj) like c1 = acid and/or base methylation treatments, c2 = with/without fermentation, c3 = FA distribution phase (media, bacterial, or protozoan microbiome), etc. In step 3, Perturbation Theory (PT) and LFER ideas were combined to develop a PT-LFER model of a FA distribution network using physicochemical parameters (V(k)), the corresponding Box-Jenkins (ΔV(kj)) and PT operators (ΔΔV(kj)) in statistical analysis. The best PT-LFER model found predicted the effects of perturbations over the FA distribution network with sensitivity, specificity, and accuracy > 80% for 407 655 cases in training + external validation series. In step 4, alternative PT-LFER and PT-NLFER models were tested for training Linear and Non-Linear Artificial Neural Networks (ANNs). PT-NLFER models based on ANNs presented better performance but are

  6. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    PubMed Central

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in processing potentials, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability, and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in the real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245
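
    As an illustration of the offloading trade-off the record describes, the toy sketch below offloads a component only when the estimated remote turnaround (upload, cloud execution, download) beats local execution. The parameters and the decision rule are illustrative assumptions, not the paper's framework.

```python
"""Toy sketch of an offloading decision for a mobile-cloud setting:
offload a component only when the estimated remote turnaround
(upload + cloud execution + download) beats local execution.
All parameters are illustrative, not measurements from the record."""
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    local_time_s: float      # estimated on-device execution time
    payload_mb: float        # state/data that must be shipped to the cloud
    speedup: float           # cloud vs. device execution speed ratio

def should_offload(c: Component, uplink_mbps=5.0, downlink_mbps=20.0,
                   result_mb=0.5):
    transfer = c.payload_mb * 8 / uplink_mbps + result_mb * 8 / downlink_mbps
    remote = transfer + c.local_time_s / c.speedup
    return remote < c.local_time_s, remote

if __name__ == "__main__":
    for comp in [Component("image_filter", 4.0, 2.0, 8.0),
                 Component("tiny_parser", 0.2, 1.0, 8.0)]:
        offload, remote = should_offload(comp)
        print(f"{comp.name}: offload={offload} "
              f"(local {comp.local_time_s:.2f}s vs remote {remote:.2f}s)")
```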

  7. A lightweight distributed framework for computational offloading in mobile cloud computing.

    PubMed

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in processing potentials, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability, and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in the real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.

  8. Automation of the CFD Process on Distributed Computing Systems

    NASA Technical Reports Server (NTRS)

    Tejnil, Ed; Gee, Ken; Rizk, Yehia M.

    2000-01-01

    A script system was developed to automate and streamline portions of the CFD process. The system was designed to facilitate the use of CFD flow solvers on supercomputer and workstation platforms within a parametric design event. Integrating solver pre- and postprocessing phases, the fully automated ADTT script system marshalled the required input data, submitted the jobs to available computational resources, and processed the resulting output data. A number of codes were incorporated into the script system, which itself was part of a larger integrated design environment software package. The IDE and scripts were used in a design event involving a wind tunnel test. This experience highlighted the need for efficient data and resource management in all parts of the CFD process. To facilitate the use of CFD methods to perform parametric design studies, the script system was developed using UNIX shell and Perl languages. The goal of the work was to minimize the user interaction required to generate the data necessary to fill a parametric design space. The scripts wrote out the required input files for the user-specified flow solver, transferred all necessary input files to the computational resource, submitted and tracked the jobs using the resource queuing structure, and retrieved and post-processed the resulting dataset. For computational resources that did not run queueing software, the script system established its own simple first-in-first-out queueing structure to manage the workload. A variety of flow solvers were incorporated in the script system, including INS2D, PMARC, TIGER and GASP. Adapting the script system to a new flow solver was made easier through the use of object-oriented programming methods. The script system was incorporated into an ADTT integrated design environment and evaluated as part of a wind tunnel experiment. The system successfully generated the data required to fill the desired parametric design space. This stressed the computational
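
    The record mentions a simple first-in-first-out queueing structure for hosts without queueing software; a minimal sketch of that idea (with placeholder commands, not the ADTT scripts) might look like this:

```python
"""Minimal first-in-first-out job runner of the kind the record
describes for hosts without queueing software: jobs start in
submission order and at most `slots` run concurrently."""
import collections
import subprocess
import time

def run_fifo(commands, slots=2):
    """Run shell commands in submission (FIFO) order, at most `slots` at once."""
    pending = collections.deque(commands)
    running = []
    while pending or running:
        while pending and len(running) < slots:          # fill free slots in order
            cmd = pending.popleft()
            print("starting:", cmd)
            running.append(subprocess.Popen(cmd, shell=True))
        running = [p for p in running if p.poll() is None]  # reap finished jobs
        time.sleep(0.2)

if __name__ == "__main__":
    run_fifo([f"sleep {n}" for n in (2, 1, 3, 1)], slots=2)
```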

  9. Application of a distributed network in computational fluid dynamic simulations

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.; Deshpande, Ashish

    1994-01-01

    A general-purpose 3-D, incompressible Navier-Stokes algorithm is implemented on a network of concurrently operating workstations using parallel virtual machine (PVM) and compared with its performance on a CRAY Y-MP and on an Intel iPSC/860. The problem is relatively computationally intensive, and has a communication structure based primarily on nearest-neighbor communication, making it ideally suited to message passing. Such problems are frequently encountered in computational fluid dynamics (CFD), and their solution is increasingly in demand. The communication structure is explicitly coded in the implementation to fully exploit the regularity in message passing in order to produce a near-optimal solution. Results are presented for various grid sizes using up to eight processors.
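
    The nearest-neighbor communication pattern can be sketched today with mpi4py in place of PVM (an assumption made purely for illustration): each rank owns a slab of the domain and swaps one halo row with each neighbor per iteration.

```python
"""Sketch of the nearest-neighbor pattern the record describes, expressed
with mpi4py rather than PVM (an assumption for illustration).

Run with e.g.:  mpirun -n 4 python halo_exchange.py
"""
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nx = 8                                         # local slab width (assumed)
slab = np.full((4, nx), float(rank))           # interior rows owned by this rank
halo_lo = np.full(nx, np.nan)                  # ghost row from the lower neighbor
halo_hi = np.full(nx, np.nan)                  # ghost row from the upper neighbor

lo = rank - 1 if rank > 0 else MPI.PROC_NULL
hi = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Send my top row up while receiving my lower ghost row, and vice versa.
comm.Sendrecv(slab[-1], dest=hi, sendtag=0, recvbuf=halo_lo, source=lo, recvtag=0)
comm.Sendrecv(slab[0],  dest=lo, sendtag=1, recvbuf=halo_hi, source=hi, recvtag=1)

print(f"rank {rank}: lower ghost {halo_lo[0]}, upper ghost {halo_hi[0]}")
```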

  10. Lilith: A scalable secure tool for massively parallel distributed computing

    SciTech Connect

    Armstrong, R.C.; Camp, L.J.; Evensky, D.A.; Gentile, A.C.

    1997-06-01

    Changes in high performance computing have necessitated the ability to utilize and interrogate potentially many thousands of processors. The ASCI (Advanced Strategic Computing Initiative) program conducted by the United States Department of Energy, for example, envisions thousands of distinct operating systems connected by low-latency gigabit-per-second networks. In addition, multiple systems of this kind will be linked via high-capacity networks with latencies as low as the speed of light will allow. Code that spans systems of this sort must be scalable; yet constructing such code, whether for applications, debugging, or maintenance, is an unsolved problem. Lilith is a research software platform that attempts to answer these questions with the aim of meeting these needs. Presently, Lilith exists as a test-bed, written in Java, for various spanning algorithms and security schemes. The test-bed software has, and enforces, hooks allowing implementation and testing of various security schemes.

  11. Partitioning problems in parallel, pipelined and distributed computing

    NASA Technical Reports Server (NTRS)

    Bokhari, S.

    1985-01-01

    The problem of optimally assigning the modules of a parallel program over the processors of a multiple computer system is addressed. A Sum-Bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple satellite system: partitioning multiple chain structured parallel programs, multiple arbitrarily structured serial programs and single tree structured parallel programs. In addition, the problems of partitioning chain structured parallel programs across chain connected systems and across shared memory (or shared bus) systems are also solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account and permit the efficient utilization of multiple computer architectures for a wide range of problems of practical interest.
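
    A much-simplified sketch of the chain-partitioning problem: split a chain of module weights into p contiguous blocks so that the heaviest block is as light as possible. Communication costs and the host/satellite structure handled by the record's Sum-Bottleneck path algorithm are deliberately omitted.

```python
"""Simplified chain partitioning: minimize the bottleneck (heaviest block)
over all splits of a module chain into p contiguous blocks."""
import itertools

def bottleneck_partition(weights, p):
    n = len(weights)
    prefix = [0] + list(itertools.accumulate(weights))
    block = lambda i, j: prefix[j] - prefix[i]        # sum of modules i..j-1

    # dp[k][j] = best bottleneck using k blocks for the first j modules
    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(p + 1)]
    cut = [[0] * (n + 1) for _ in range(p + 1)]
    dp[0][0] = 0.0
    for k in range(1, p + 1):
        for j in range(1, n + 1):
            for i in range(k - 1, j):                 # last block is modules i..j-1
                cand = max(dp[k - 1][i], block(i, j))
                if cand < dp[k][j]:
                    dp[k][j], cut[k][j] = cand, i
    # Recover the blocks by walking the stored cut points backwards.
    blocks, j = [], n
    for k in range(p, 0, -1):
        i = cut[k][j]
        blocks.append(weights[i:j])
        j = i
    return dp[p][n], blocks[::-1]

print(bottleneck_partition([4, 2, 7, 1, 3, 5, 2], p=3))
```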

  12. A support architecture for reliable distributed computing systems

    NASA Technical Reports Server (NTRS)

    Dasgupta, Partha; Leblanc, Richard J., Jr.

    1988-01-01

    The Clouds project is well underway to its goal of building a unified distributed operating system supporting the object model. The operating system design uses the object concept of structuring software at all levels of the system. The basic operating system was developed and work is under progress to build a usable system.

  13. Impact of distributed energy resources on the reliability of a critical telecommunications facility.

    SciTech Connect

    Robinson, David; Zuffranieri, Jason V.; Atcitty, Christopher B.; Arent, Douglas

    2006-03-01

    This report documents a probabilistic risk assessment of an existing power supply system at a large telecommunications office. The focus is on characterizing the increase in the reliability of power supply through the use of two alternative power configurations. Telecommunications has been identified by the Department of Homeland Security as a critical infrastructure to the United States. Failures in the power systems supporting major telecommunications service nodes are a main contributor to major telecommunications outages. A logical approach to improve the robustness of telecommunication facilities would be to increase the depth and breadth of technologies available to restore power in the face of power outages. Distributed energy resources such as fuel cells and gas turbines could provide one more onsite electric power source to provide backup power if batteries and diesel generators fail. The analysis is based on a hierarchical Bayesian approach and focuses on the failure probability associated with each of three possible facility configurations, along with assessment of the uncertainty or confidence level in the probability of failure. A risk-based characterization of the final best configuration is presented.
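
    As a highly simplified stand-in for the hierarchical Bayesian analysis, one can place an independent Beta-Binomial posterior on the failure probability of each configuration and report a credible interval. The configurations and counts below are invented for illustration, not data from the report.

```python
"""Simplified stand-in for a Bayesian reliability comparison: independent
Beta-Binomial posteriors on per-configuration failure probability, with
95% credible intervals expressing the confidence level."""
from scipy import stats

# (configuration, demands observed, failures observed) -- hypothetical data
observations = [("batteries + diesel", 200, 6),
                ("  + fuel cell",      200, 3),
                ("  + gas turbine",    200, 2)]

a0, b0 = 0.5, 0.5                       # Jeffreys prior on the failure probability
for name, n, k in observations:
    post = stats.beta(a0 + k, b0 + n - k)
    lo, hi = post.ppf([0.025, 0.975])
    print(f"{name:22s} P(fail) mean={post.mean():.4f}  95% CI=({lo:.4f}, {hi:.4f})")
```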

  14. Radon gas distribution in natural gas processing facilities and workplace air environment.

    PubMed

    Al-Masri, M S; Shwiekani, R

    2008-04-01

    Evaluation was made of the distribution of radon gas and radiation exposure rates in the four main natural gas treatment facilities in Syria. The results showed that radiation exposure rates at contact of all equipment were within the natural levels (0.09-0.1 microSvh(-1)) except for the reflux pumps, where a dose rate value of 3 microSvh(-1) was recorded. Radon concentrations in Syrian natural gas varied between 15.4 Bq m(-3) and 1141 Bq m(-3); natural gas associated with oil production was found to contain higher concentrations than the non-associated natural gas. In addition, radon concentrations were higher in the central processing facilities than at the wellheads; these high levels are due to pressurizing and concentrating processes that enhance radon gas and its decay products. Moreover, the lowest 222Rn concentration was in the natural gas fraction used for producing sulfur; a value of 80 Bq m(-3) was observed. On the other hand, maximum radon gas and its decay product concentrations in workplace air environments were found to be relatively high in the gas analysis laboratories; a value of 458 Bq m(-3) was observed. However, all reported levels in the workplaces in the four main stations were below the action level set by IAEA for chronic exposure situations involving radon, which is 1000 Bq m(-3). PMID:17905489

  15. Radon gas distribution in natural gas processing facilities and workplace air environment.

    PubMed

    Al-Masri, M S; Shwiekani, R

    2008-04-01

    Evaluation was made of the distribution of radon gas and radiation exposure rates in the four main natural gas treatment facilities in Syria. The results showed that radiation exposure rates at contact of all equipment were within the natural levels (0.09-0.1 microSvh(-1)) except for the reflux pumps, where a dose rate value of 3 microSvh(-1) was recorded. Radon concentrations in Syrian natural gas varied between 15.4 Bq m(-3) and 1141 Bq m(-3); natural gas associated with oil production was found to contain higher concentrations than the non-associated natural gas. In addition, radon concentrations were higher in the central processing facilities than at the wellheads; these high levels are due to pressurizing and concentrating processes that enhance radon gas and its decay products. Moreover, the lowest 222Rn concentration was in the natural gas fraction used for producing sulfur; a value of 80 Bq m(-3) was observed. On the other hand, maximum radon gas and its decay product concentrations in workplace air environments were found to be relatively high in the gas analysis laboratories; a value of 458 Bq m(-3) was observed. However, all reported levels in the workplaces in the four main stations were below the action level set by IAEA for chronic exposure situations involving radon, which is 1000 Bq m(-3).

  16. [Computer simulated images of radiopharmaceutical distributions in anthropomorphic phantoms]

    SciTech Connect

    Not Available

    1991-05-17

    We have constructed an anatomically correct human geometry, which can be used to store radioisotope concentrations in 51 various internal organs. Each organ is associated with an index number which references to its attenuating characteristics (composition and density). The initial development of Computer Simulated Images of Radiopharmaceuticals in Anthropomorphic Phantoms (CSIRDAP) over the first 3 years has been very successful. All components of the simulation have been coded, made operational and debugged.

  17. Gravitational field calculations on a dynamic lattice by distributed computing.

    NASA Astrophysics Data System (ADS)

    Mähönen, P.; Punkka, V.

    A new method of numerically calculating the time evolution of a gravitational field in general relativity is introduced. The vierbein (tetrad) formalism, a dynamic lattice, and massively parallelized computation are suggested, as they are expected to speed up the calculations considerably and facilitate the solution of problems previously considered too hard to solve, such as the time evolution of a system consisting of two or more black holes or the structure of wormholes.

  18. Gravitation Field Calculations on a Dynamic Lattice by Distributed Computing

    NASA Astrophysics Data System (ADS)

    Mähönen, Petri; Punkka, Veikko

    A new method of numerically calculating the time evolution of a gravitational field in General Relativity is introduced. The vierbein (tetrad) formalism, a dynamic lattice, and massively parallelized computation are suggested, as they are expected to speed up the calculations considerably and facilitate the solution of problems previously considered too hard to solve, such as the time evolution of a system consisting of two or more black holes or the structure of wormholes.

  19. Calculation of reflectance distribution using angular spectrum convolution in mesh-based computer generated hologram.

    PubMed

    Yeom, Han-Ju; Park, Jae-Hyeung

    2016-08-22

    We propose a method to obtain a computer-generated hologram that renders reflectance distributions of individual mesh surfaces of three-dimensional objects. Unlike previous methods, which find a phase distribution inside each mesh, the proposed method performs a convolution of the angular spectrum of the mesh to obtain the desired reflectance distribution. Manipulation in the angular spectrum domain enables its application to fully analytic mesh-based computer-generated holograms, removing the necessity for resampling of the spatial frequency grid. It is also computationally inexpensive, as the convolution can be performed efficiently using the Fourier transform. In this paper, we present the principle, error analysis, simulation, and experimental verification results of the proposed method.
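
    To make the angular-spectrum manipulation concrete, the sketch below performs standard FFT-based angular-spectrum propagation of a complex field. It is not the paper's mesh-reflectance convolution; the wavelength, sampling pitch, and test aperture are assumptions.

```python
"""Generic angular-spectrum propagation of a complex field with FFTs
(standard band-limited plane-wave decomposition)."""
import numpy as np

wavelength = 532e-9                  # illumination wavelength (assumed)
pitch = 8e-6                         # sampling pitch (assumed)
N, z = 512, 0.05                     # grid size and propagation distance

x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)
field = np.exp(-((X**2 + Y**2) / (200e-6) ** 2)).astype(complex)  # test aperture

fx = np.fft.fftfreq(N, d=pitch)
FX, FY = np.meshgrid(fx, fx)
kz_sq = (1.0 / wavelength) ** 2 - FX**2 - FY**2
mask = kz_sq > 0                                          # drop evanescent waves
H = np.where(mask, np.exp(2j * np.pi * z * np.sqrt(np.abs(kz_sq))), 0.0)

spectrum = np.fft.fft2(field)                             # angular spectrum
propagated = np.fft.ifft2(spectrum * H)                   # multiply = convolve in space
print("peak intensity before/after:", np.abs(field).max()**2,
      np.abs(propagated).max()**2)
```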

  20. Chemical fate and transport of atrazine in soil gravel materials at agrichemical distribution facilities

    USGS Publications Warehouse

    Roy, W.R.; Krapac, I.G.; Chou, S.-F.J.

    1999-01-01

    The gravel commonly used to cover parking lots and roadways at retail agrichemical facilities may contain relatively large concentrations of pesticides that resulted from past management problems. These pesticides may threaten groundwater quality. Previous studies, however, suggested that the pesticides had not moved from the gravel in several sample profiles. Excavations at a closed facility revealed tremendous variability in pesticide distribution within the site. Pesticides were present below the gravel in two profiles, but the mechanism(s) for their movement were not clear. The objectives of this study were to investigate how the physical and chemical properties of the gravel influence the environmental fate of atrazine. All of the gravel samples collected and characterized contained atrazine and sufficient organic C to adsorb significant amounts of atrazine, thus retarding its movement through the gravel. Laboratory column leaching experiments, however, suggested that much of the atrazine should leach from the gravel within a year or two. A field-scale test plot was constructed to study how atrazine moves through the gravel under controlled conditions. Atrazine was "spilled" in the test plot. Atrazine moved from the gravel both vertically and horizontally. It appears that formulated product spilled on gravel will leach. A single discrete spill can give rise to phantom spills whose occurrence and distribution is not related to any specific pesticide-management practice. The apparent lack of atrazine leaching from gravel appeared to be a transient phenomenon and/or the result of sampling limitations in previous studies. The contaminated gravel clearly poses a risk to groundwater quality.

  1. Scientific workflow and support for high resolution global climate modeling at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, V.; Mayer, B.; Wang, F.; Hack, J.; McKenna, D.; Hartman-Baker, R.

    2012-04-01

    The Oak Ridge Leadership Computing Facility (OLCF) facilitates the execution of computational experiments that require tens of millions of CPU hours (typically using thousands of processors simultaneously) while generating hundreds of terabytes of data. A set of ultra high resolution climate experiments in progress, using the Community Earth System Model (CESM), will produce over 35,000 files, ranging in sizes from 21 MB to 110 GB each. The execution of the experiments will require nearly 70 Million CPU hours on the Jaguar and Titan supercomputers at OLCF. The total volume of the output from these climate modeling experiments will be in excess of 300 TB. This model output must then be archived, analyzed, distributed to the project partners in a timely manner, and also made available more broadly. Meeting this challenge would require efficient movement of the data, staging the simulation output to a large and fast file system that provides high volume access to other computational systems used to analyze the data and synthesize results. This file system also needs to be accessible via high speed networks to an archival system that can provide long term reliable storage. Ideally this archival system is itself directly available to other systems that can be used to host services making the data and analysis available to the participants in the distributed research project and to the broader climate community. The various resources available at the OLCF now support this workflow. The available systems include the new Jaguar Cray XK6 2.63 petaflops (estimated) supercomputer, the 10 PB Spider center-wide parallel file system, the Lens/EVEREST analysis and visualization system, the HPSS archival storage system, the Earth System Grid (ESG), and the ORNL Climate Data Server (CDS). The ESG features federated services, search & discovery, extensive data handling capabilities, deep storage access, and Live Access Server (LAS) integration. The scientific workflow enabled on

  2. Using spatial principles to optimize distributed computing for enabling the physical science discoveries.

    PubMed

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-04-01

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century.

  3. Using spatial principles to optimize distributed computing for enabling the physical science discoveries

    PubMed Central

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-01-01

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779

  4. Privacy and security requirements of distributed computer based patient records.

    PubMed

    Moehr, J R

    1994-02-01

    Privacy and security issues increase in complexity as we move from the conventional patient record to the computer based patient record (CPR) supporting patient care and to cross-institutional networked CPRs. The privacy and security issues surrounding the CPR are outlined. Measures for privacy and security protection are summarized. It is suggested that we lack a key component of an information sharing culture. We need means for semantic indexing in the form of a metadata base at the level of the instantiation of a data base rather than at the level of its schemas.

  5. A techno-economic analysis of using mobile distributed pyrolysis facilities to deliver a forest residue resource.

    PubMed

    Brown, Duncan; Rowe, Andrew; Wild, Peter

    2013-12-01

    Distributed mobile conversion facilities using either fast pyrolysis or torrefaction processes can be used to convert forest residues to more energy-dense substances (bio-oil, bio-slurry or torrefied wood) that can be transported as feedstock for bio-fuel facilities. Results show that the levelised delivered cost of a forest residue resource using mobile facility networks can be lower than that of conventional woodchip delivery methods under appropriate conditions. Torrefied wood is the lowest-cost pathway for delivering a forest residue resource when using mobile facilities. Cost savings over woodchip delivery occur for annual forest residue harvests above 2.5 million m(3) or when transport distances greater than 300 km are required. Important parameters that influence levelised delivered costs are transport distances (forest residue spatial density), haul cost factors, and the initial moisture content of forest residues. Relocating mobile facilities can be optimised for lowest-cost delivery as transport distances of raw biomass are reduced.
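
    A toy levelised-delivered-cost comparison of the kind the record discusses, with every number a placeholder rather than a value from the study:

```python
"""Toy levelised-delivered-cost comparison: energy-densified product
(e.g. torrefied wood) versus raw wood chips over a given haul distance.
All numbers are placeholders, not values from the study."""
def delivered_cost(haul_km, process_cost_per_gj, energy_density_gj_per_t,
                   haul_cost_per_t_km=0.12):
    """$ per GJ delivered = processing + transport per unit of energy."""
    transport = haul_cost_per_t_km * haul_km / energy_density_gj_per_t
    return process_cost_per_gj + transport

for km in (100, 300, 600):
    chips = delivered_cost(km, process_cost_per_gj=1.0, energy_density_gj_per_t=10.0)
    torr = delivered_cost(km, process_cost_per_gj=3.0, energy_density_gj_per_t=20.0)
    print(f"{km:>4} km: chips ${chips:.2f}/GJ  torrefied ${torr:.2f}/GJ")
```

    With these illustrative inputs the denser product only wins at the longer haul distances, which mirrors the qualitative distance effect the record describes.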

  6. Sociospatial distribution of access to facilities for moderate and vigorous intensity physical activity in Scotland by different modes of transport

    PubMed Central

    2012-01-01

    Background People living in neighbourhoods of lower socioeconomic status have been shown to have higher rates of obesity and a lower likelihood of meeting physical activity recommendations than their more affluent counterparts. This study examines the sociospatial distribution of access to facilities for moderate or vigorous intensity physical activity in Scotland and whether such access differs by the mode of transport available and by Urban Rural Classification. Methods A database of all fixed physical activity facilities was obtained from the national agency for sport in Scotland. Facilities were categorised into light, moderate and vigorous intensity activity groupings before being mapped. Transport networks were created to assess the number of each type of facility accessible from the population weighted centroid of each small area in Scotland on foot, by bicycle, by car and by bus. Multilevel modelling was used to investigate the distribution of the number of accessible facilities by small area deprivation within urban, small town and rural areas separately, adjusting for population size and local authority. Results Prior to adjustment for Urban Rural Classification and local authority, the median number of accessible facilities for moderate or vigorous intensity activity increased with increasing deprivation from the most affluent or second most affluent quintile to the most deprived for all modes of transport. However, after adjustment, the modelling results suggest that those in more affluent areas have significantly higher access to moderate and vigorous intensity facilities by car than those living in more deprived areas. Conclusions The sociospatial distributions of access to facilities for both moderate intensity and vigorous intensity physical activity were similar. However, the results suggest that those living in the most affluent neighbourhoods have poorer access to facilities of either type that can be reached on foot, by bicycle or by bus than

  7. Computational investigation of the discharge coefficient of bellmouth flow meters in engine test facilities

    NASA Astrophysics Data System (ADS)

    Sebourn, Charles Lynn

    2002-11-01

    In this thesis computation of the discharge coefficient of bellmouth flow meters installed in engine test facilities is presented. The discharge coefficient is a critical parameter for accurately calculating flow rate in any flow meter which operates by means of creating a pressure differential. Engine airflow is a critical performance parameter and therefore, it is necessary for engine test facilities to accurately measure airflow. In this report the author investigates the use of computational fluid dynamics using finite difference methods to calculate the flow in bellmouth flow meters and hence the discharge coefficient at any measurement station desired. Experimental boundary layer and core flow data was used to verify the capability of the WIND code to calculate the discharge coefficient accurately. Good results were obtained for Reynolds numbers equal to or greater than about three million which is the primary range of interest. After verifying the WIND code performance, results were calculated for a range of Reynolds numbers and Mach numbers. Also the variation in discharge coefficient as a function of measurement location was examined. It is demonstrated that by picking the proper location for pressure measurement, sensitivity to measurement location can be minimized. Also of interest was the effect of bellmouth geometry. Calculations were performed to investigate the effect of duct to bellmouth diameter ratio and the eccentricity of the bellmouth contraction. In general the effects of the beta ratio were seen to be quite small. For the eccentricity, the variation in discharge coefficient was as high as several percent for axial locations less than half a diameter downstream from the throat. The second portion of the thesis examined the effect of a turbofan engine stationed just downstream of the bellmouth flow meter. The study approximated this effect by examining a single fan stage installed in the duct. This calculation was performed by making use of a
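
    The discharge coefficient enters the flow calculation as the ratio of actual to ideal mass flow; the short sketch below shows the ideal (isentropic) side of that calculation for air, with illustrative station pressures, area, and Cd value rather than data from the thesis.

```python
"""Sketch of how a bellmouth meter turns a pressure differential into mass
flow: ideal isentropic flow from measured total and static pressure,
scaled by a discharge coefficient Cd. Numbers are illustrative."""
import math

def ideal_mass_flow(p_total, p_static, T_total, area, gamma=1.4, R=287.05):
    """Ideal (isentropic) mass flow of air through the measurement plane."""
    M = math.sqrt(2.0 / (gamma - 1.0) *
                  ((p_total / p_static) ** ((gamma - 1.0) / gamma) - 1.0))
    T = T_total / (1.0 + 0.5 * (gamma - 1.0) * M * M)
    rho = p_static / (R * T)
    return rho * area * M * math.sqrt(gamma * R * T)

m_ideal = ideal_mass_flow(p_total=101325.0, p_static=95000.0,
                          T_total=288.15, area=0.8)       # 0.8 m^2 plane (assumed)
Cd = 0.995                                                # example discharge coefficient
print(f"ideal {m_ideal:.1f} kg/s -> metered {Cd * m_ideal:.1f} kg/s")
```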

  8. Applications Analysis: Principles and Examples from Various Distributed Computer Applications at Sandia National Laboratories New Mexico

    SciTech Connect

    Bateman, Dennis; Evans, David; Jensen, Dal; Nelson, Spencer

    1999-08-01

    As information systems have become distributed over many computers within the enterprise, managing those applications has become increasingly important. This is an emerging area of work, recognized as such by many large organizations as well as many start-up companies. In this report, we present a summary of the move to distributed applications, some of the problems that came along for the ride, and some specific examples of the tools and techniques we have used to analyze distributed applications and gain some insight into the mechanics and politics of distributed computing.

  9. Advanced Technology Airfoil Research, volume 1, part 1. [conference on development of computational codes and test facilities

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A comprehensive review of all NASA airfoil research, conducted both in-house and under grant and contract, as well as a broad spectrum of airfoil research outside of NASA is presented. Emphasis is placed on the development of computational aerodynamic codes for airfoil analysis and design, the development of experimental facilities and test techniques, and all types of airfoil applications.

  10. Feasibility Study for a Remote Terminal Central Computing Facility Serving School and College Institutions. Volume II, Preliminary Specifications.

    ERIC Educational Resources Information Center

    International Business Machines Corp., White Plains, NY.

    Preliminary specifications of major equipment and programming system characteristics for a remote terminal central computing facility serving 25-75 secondary schools are presented. Estimation techniques developed in a previous feasibility study were used to delineate workload demands for four model regions with different numbers of institutions…

  11. Impact of Nitrification on the Formation of N-Nitrosamines and Halogenated Disinfection Byproducts within Distribution System Storage Facilities.

    PubMed

    Zeng, Teng; Mitch, William A

    2016-03-15

    Distribution system storage facilities are a critical, yet often overlooked, component of the urban water infrastructure. This study showed elevated concentrations of N-nitrosodimethylamine (NDMA), total N-nitrosamines (TONO), regulated trihalomethanes (THMs) and haloacetic acids (HAAs), 1,1-dichloropropanone (1,1-DCP), trichloroacetaldehyde (TCAL), haloacetonitriles (HANs), and haloacetamides (HAMs) in waters with ongoing nitrification as compared to non-nitrifying waters in storage facilities within five different chloraminated drinking water distribution systems. The concentrations of NDMA, TONO, HANs, and HAMs in the nitrifying waters further increased upon application of simulated distribution system chloramination. The addition of a nitrifying biofilm sample collected from a nitrifying facility to its non-nitrifying influent water led to increases in N-nitrosamine and halogenated DBP formation, suggesting the release of precursors from nitrifying biofilms. Periodic treatment of two nitrifying facilities with breakpoint chlorination (BPC) temporarily suppressed nitrification and reduced precursor levels for N-nitrosamines, HANs, and HAMs, as reflected by lower concentrations of these DBPs measured after re-establishment of a chloramine residual within the facilities than prior to the BPC treatment. However, BPC promoted the formation of halogenated DBPs while a free chlorine residual was maintained. Strategies that minimize application of free chlorine while preventing nitrification are needed to control DBP precursor release in storage facilities.

  12. A new taxonomy for distributed computer systems based upon operating system structure

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.

    1985-01-01

    Characteristics of the resource structure found in the operating system are considered as a mechanism for classifying distributed computer systems. Since the operating system resources themselves are too diversified to provide a consistent classification, the structure upon which resources are built and shared is examined. The location and control character of this indivisibility provides the taxonomy for separating uniprocessors, computer networks, network computers (fully distributed processing systems or decentralized computers), and algorithm and/or data control multiprocessors. The taxonomy is important because it divides machines into a classification that is relevant or important to the client and not the hardware architect. It also defines the character of the kernel O/S structure needed for future computer systems. What constitutes an operating system for a fully distributed processor is discussed in detail.

  13. On the Relevancy of Efficient, Integrated Computer and Network Monitoring in HEP Distributed Online Environment

    NASA Astrophysics Data System (ADS)

    Carvalho, D.; Gavillet, Ph.; Delgado, V.; Albert, J. N.; Bellas, N.; Javello, J.; Miere, Y.; Ruffinoni, D.; Smith, G.

    Large scientific equipment is controlled by computer systems whose complexity is growing, driven on the one hand by the volume and variety of the information, its distributed nature, and the sophistication of its treatment, and on the other hand by the fast evolution of the computer and network market. Such systems are generically called Large-Scale Distributed Data Intensive Information Systems, or Distributed Computer Control Systems (DCCS) for those dealing more with real-time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as client-server applications. In this framework the monitoring of the computer nodes, the communications network, and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC, in view, it is proposed to integrate the various functions of DCCS monitoring into one general-purpose multi-layer system.

  14. Innovation of laboratory exercises in course Distributed systems and computer networks

    NASA Astrophysics Data System (ADS)

    Souček, Pavel; Slavata, Oldřich; Holub, Jan

    2013-09-01

    This paper is focused on innovation of laboratory exercises in course Distributed Systems and Computer Networks. These exercises were introduced in November of 2012 and replaced older exercises in order to reflect real life applications.

  15. Results of computer calculations for a simulated distribution of kidney cells

    NASA Technical Reports Server (NTRS)

    Micale, F. J.

    1985-01-01

    The results of computer calculations for a simulated distribution of kidney cells are given. The calculations were made for different values of electroosmotic flow, U sub o, and the ratio of sample diameter to channel diameter, R.

  16. DISTRIBUTION OF LEGIONELLA PNEUMOPHILA SEROGROUPS ISOLATED FROM WATER SYSTEMS OF PUBLIC FACILITIES IN BUSAN, SOUTH KOREA.

    PubMed

    Hwang, In-Yeong; Park, Eun-Hee; Park, Yon-Koung; Park, Sun-Hee; Sung, Gyung-Hye; Park, Hye-Young; Lee, Young-Choon

    2016-05-01

    Legionella pneumophila is the major cause of legionellosis worldwide. The distribution of L. pneumophila was investigated in water systems of public facilities in Busan, South Korea, during 2007 and 2013-2014. L. pneumophila was isolated from 8.3% of 3,055 samples; the highest isolation rate (49%) was from ships and the lowest (4%) from fountains. Serogroups of L. pneumophila isolated in 2007 were distributed among serogroups (sgs) 1-7 with the exception of sg 4, while the isolates from 2013 and 2014 spanned 11 sgs (1, 2, 3, 4, 5, 6, 7, 8, 12, 13, 15). L. pneumophila sg 1 predominated among isolates from fountains (75%), hotels (60%), buildings (44%), hospitals (38%), and public baths (37%), whereas sg 3 and sg 7 were the most prevalent from ships (46%) and factories (40%), respectively. The predominant serogroup among isolates from hot and cooling tower water was sg 1 (35% and 46%, respectively), while that from cold water was sg 3 (29%). These results should be useful for epidemiological surveys to identify sources of outbreaks of legionellosis in Busan, South Korea. PMID:27405130

  17. Sentinel-1 Data System at the Alaska Satellite Facility Distributed Active Archive Center

    NASA Astrophysics Data System (ADS)

    Wolf, V. G.

    2014-12-01

    The Alaska Satellite Facility Distributed Active Archive Center (ASF DAAC) has a long history of supporting international collaborations between NASA and foreign flight agencies to promote access to Synthetic Aperture Radar (SAR) data for US science research. Based on the agreement between the US and the EC, data from the Sentinel missions will be distributed by NASA through archives that mirror those established by ESA. The ASF DAAC is the designated archive and distributor for Sentinel-1 data. The data will be copied from the ESA archive to a rolling archive at the NASA Goddard center, and then pushed to a landing area at the ASF DAAC. The system at ASF DAAC will take the files as they arrive and put them through an ingest process. Ingest will populate the database with the information required to enable search and download of the data through Vertex, the ASF DAAC user interface. Metadata will be pushed to the NASA Common Metadata Repository, enabling data discovery through clients that utilize the repository. Visual metadata will be pushed to the NASA GIBS system for visualization through clients linked to that system. Data files will be archived in the DataDirect Networks (DDN) device that is the primary storage device for the ASF DAAC. A backup copy of the data will be placed in a second DDN device that serves as the disaster recovery solution for the ASF DAAC.

  18. Intercommunications in Real Time, Redundant, Distributed Computer System

    NASA Technical Reports Server (NTRS)

    Zanger, H.

    1980-01-01

    An investigation into the applicability of fiber optic communication techniques to real time avionic control systems, in particular the total automatic flight control system used for the VSTOL aircraft is presented. The system consists of spatially distributed microprocessors. The overall control function is partitioned to yield a unidirectional data flow between the processing elements (PE). System reliability is enhanced by the use of triple redundancy. Some general overall system specifications are listed here to provide the necessary background for the requirements of the communications system.

  19. Survivable algorithms and redundancy management in NASA's distributed computing systems

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw

    1992-01-01

    The design of survivable algorithms requires a solid foundation for executing them. While hardware techniques for fault-tolerant computing are relatively well understood, fault-tolerant operating systems, as well as fault-tolerant applications (survivable algorithms), are, by contrast, little understood, and much more work in this field is required. We outline some of our work that contributes to the foundation of ultrareliable operating systems and fault-tolerant algorithm design. We introduce our consensus-based framework for fault-tolerant system design. This is followed by a description of a hierarchical partitioning method for efficient consensus. A scheduler for redundancy management is introduced, and application-specific fault tolerance is described. We give an overview of our hybrid algorithm technique, which is an alternative to the formal approach given.

  20. Learning general phonological rules from distributional information: a computational model.

    PubMed

    Calamaro, Shira; Jarosz, Gaja

    2015-04-01

    Phonological rules create alternations in the phonetic realizations of related words. These rules must be learned by infants in order to identify the phonological inventory, the morphological structure, and the lexicon of a language. Recent work proposes a computational model for the learning of one kind of phonological alternation, allophony (Peperkamp, Le Calvez, Nadal, & Dupoux, 2006). This paper extends the model to account for learning of a broader set of phonological alternations and the formalization of these alternations as general rules. In Experiment 1, we apply the original model to new data in Dutch and demonstrate its limitations in learning nonallophonic rules. In Experiment 2, we extend the model to allow it to learn general rules for alternations that apply to a class of segments. In Experiment 3, the model is further extended to allow for generalization by context; we argue that this generalization must be constrained by linguistic principles.

  1. Computer-aided design drafting/manufacturing (CADD/M) facility preparation

    SciTech Connect

    Norton, F.J.

    1980-09-23

    Computer-Aided Design, Drafting and Manufacturing (CADD/M) equipment requires careful facilities preparation before installation takes place. This paper presents what a company should consider to ensure a proper installation. This includes consideration of working conditions. To get the most out of the system, the operators must be provided with a relaxed, comfortable environment, free from noise and other distractions. Such things as temperature requirements, lighting, power, security and fire protection are discussed. Also, future expansion needs are considered so that major construction will not be required for future years. Advanced planning in these areas is needed to ensure successful implementation of a CADD/M system. This will lead to considerable cost savings, and in the long run, improve the scheduling for an entire project, from initial design to final production. This careful preparation will minimize unplanned events and problem areas. These are ambitious goals but easily realized if a logical and rational plan is adopted in the same manner as that used in a typical development program.

  2. Collaborative Strategic Board Games as a Site for Distributed Computational Thinking

    ERIC Educational Resources Information Center

    Berland, Matthew; Lee, Victor R.

    2011-01-01

    This paper examines the idea that contemporary strategic board games represent an informal, interactional context in which complex computational thinking takes place. When games are collaborative--that is, a game requires that players work in joint pursuit of a shared goal--the computational thinking is easily observed as distributed across…

  3. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    ERIC Educational Resources Information Center

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  4. A Framework for a Computer System to Support Distributed Cooperative Learning

    ERIC Educational Resources Information Center

    Chiu, Chiung-Hui

    2004-01-01

    To develop a computer system to support cooperative learning among distributed students; developers should consider the foundations of cooperative learning. This article examines the basic elements that make cooperation work and proposes a framework for such computer supported cooperative learning (CSCL) systems. This framework is constituted of…

  5. Execution models for mapping programs onto distributed memory parallel computers

    NASA Technical Reports Server (NTRS)

    Sussman, Alan

    1992-01-01

    The problem of exploiting the parallelism available in a program to efficiently employ the resources of the target machine is addressed. The problem is discussed in the context of building a mapping compiler for a distributed memory parallel machine. The paper describes using execution models to drive the process of mapping a program in the most efficient way onto a particular machine. Through analysis of the execution models for several mapping techniques for one class of programs, we show that the selection of the best technique for a particular program instance can make a significant difference in performance. On the other hand, the results of benchmarks from an implementation of a mapping compiler show that our execution models are accurate enough to select the best mapping technique for a given program.

  6. A secure communications infrastructure for high-performance distributed computing

    SciTech Connect

    Foster, I.; Koenig, G.; Tuecke, S.

    1997-08-01

    Applications that use high-speed networks to connect geographically distributed supercomputers, databases, and scientific instruments may operate over open networks and access valuable resources. Hence, they can require mechanisms for ensuring integrity and confidentiality of communications and for authenticating both users and resources. Security solutions developed for traditional client-server applications do not provide direct support for the program structures, programming tools, and performance requirements encountered in these applications. The authors address these requirements via a security-enhanced version of the Nexus communication library, which they use to provide secure versions of parallel libraries and languages, including the Message Passing Interface. These tools permit a fine degree of control over what, where, and when security mechanisms are applied. In particular, a single application can mix secure and nonsecure communication, allowing the programmer to make fine-grained security/performance tradeoffs. The authors present performance results that quantify the performance of their infrastructure.

  7. Pit Distribution Design for Computer-Generated Waveguide Holography

    NASA Astrophysics Data System (ADS)

    Yagi, Shogo; Imai, Tadayuki; Ueno, Masahiro; Ohtani, Yoshimitsu; Endo, Masahiro; Kurokawa, Yoshiaki; Yoshikawa, Hiroshi; Watanabe, Toshifumi; Fukuda, Makoto

    2008-02-01

    Multilayered waveguide holography (MWH) is one of a number of page-oriented data multiplexing holographies that will be applied to optical data storage and three-dimensional (3D) moving images. While conventional volumetric holography using photopolymer or photorefractive materials requires page-by-page light exposure for recording, MWH media can be made by employing stamping and laminating technologies that are suitable for mass production. This makes devising an economical mastering technique for replicating holograms a key issue. In this paper, we discuss an approach to pit distribution design that enables us to replace expensive electron beam mastering with economical laser beam mastering. We propose an algorithm that avoids the overlapping of even comparatively large adjacent pits when we employ laser beam mastering. We also compensate for the angular dependence of the diffraction power, which strongly depends on pit shape, by introducing an enhancement profile so that a diffracted image has uniform intensity.

  8. Secure Large-Scale Airport Simulations Using Distributed Computational Resources

    NASA Technical Reports Server (NTRS)

    McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Dan (Technical Monitor)

    2001-01-01

    To fully conduct research that will support the far-term concepts, technologies and methods required to improve the safety of Air Transportation, a simulation environment of the requisite degree of fidelity must first be in place. The Virtual National Airspace Simulation (VNAS) will provide the underlying infrastructure necessary for such a simulation system. Aerospace-specific knowledge management services such as intelligent data-integration middleware will support the management of information associated with this complex and critically important operational environment. This simulation environment, in conjunction with a distributed network of supercomputers and high-speed network connections to aircraft and to Federal Aviation Administration (FAA), airline, and other data sources, will provide the capability to continuously monitor and measure operational performance against expected performance. The VNAS will also provide the tools to use this performance baseline to obtain a perspective of what is happening today and of the potential impact of proposed changes before they are introduced into the system.

  9. A European Federated Cloud: Innovative distributed computing solutions by EGI

    NASA Astrophysics Data System (ADS)

    Sipos, Gergely; Turilli, Matteo; Newhouse, Steven; Kacsuk, Peter

    2013-04-01

    The European Grid Infrastructure (EGI) is the result of pioneering work that has, over the last decade, built a collaborative production infrastructure of uniform services through the federation of national resource providers that supports multi-disciplinary science across Europe and around the world. This presentation will provide an overview of the recently established 'federated cloud computing services' that the National Grid Initiatives (NGIs), operators of EGI, offer to scientific communities. The presentation will explain the technical capabilities of the 'EGI Federated Cloud' and the processes whereby earth and space science researchers can engage with it. EGI's resource centres have been providing services for collaborative, compute- and data-intensive applications for over a decade. Besides the well-established 'grid services', several NGIs already offer privately run cloud services to their national researchers. Many of these researchers recently expressed the need to share these cloud capabilities within their international research collaborations - a model similar to the way the grid emerged through the federation of institutional batch computing and file storage servers. To facilitate the setup of a pan-European cloud service from the NGIs' resources, the EGI-InSPIRE project established a Federated Cloud Task Force in September 2011. The Task Force has a mandate to identify and test technologies for a multinational federated cloud that could be provisioned within EGI by the NGIs. A guiding principle for the EGI Federated Cloud is to remain technology neutral and flexible for both resource providers and users: • Resource providers are allowed to use any cloud hypervisor and management technology to join virtualised resources into the EGI Federated Cloud as long as the site is subscribed to the user-facing interfaces selected by the EGI community. • Users can integrate high level services - such as brokers, portals and customised Virtual Research

  10. A Computer Program for Estimating True-Score Distributions and Graduating Observed-Score Distributions

    ERIC Educational Resources Information Center

    Wingersky, Marilyn S.; and others

    1969-01-01

    One in a series of nine articles in a section entitled "Electronic Computer Program and Accounting Machine Procedures." Research supported in part by contract Nonr-2752(00) from the Office of Naval Research.

  11. Distributed design tools: Mapping targeted design tools onto a Web-based distributed architecture for high-performance computing

    SciTech Connect

    Holmes, V.P.; Linebarger, J.M.; Miller, D.J.; Poore, C.A.

    1999-11-30

    Design Tools use a Web-based Java interface to guide a product designer through the design-to-analysis cycle for a specific, well-constrained design problem. When these Design Tools are mapped onto a Web-based distributed architecture for high-performance computing, the result is a family of Distributed Design Tools (DDTs). The software components that enable this mapping consist of a Task Sequencer, a generic Script Execution Service, and the storage of both data and metadata in an active, object-oriented database called the Product Database Operator (PDO). The benefits of DDTs include improved security, reliability, scalability (in both problem size and computing hardware), robustness, and reusability. In addition, access to the PDO unlocks its wide range of services for distributed components, such as lookup and launch capability, persistent shared memory for communication between cooperating services, state management, event notification, and archival of design-to-analysis session data.

  12. Applications of computer algebra to distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Storch, Joel A.

    1993-01-01

    In the analysis of vibrations of continuous elastic systems, one often encounters complicated transcendental equations with roots directly related to the system's natural frequencies. Typically, these equations contain system parameters whose values must be specified before a numerical solution can be obtained. The present paper presents a method whereby the fundamental frequency can be obtained in analytical form to any desired degree of accuracy. The method is based upon truncation of rapidly converging series involving inverse powers of the system natural frequencies. A straightforward method to developing these series and summing them in closed form is presented. It is demonstrated how Computer Algebra can be exploited to perform the intricate analytical procedures which otherwise would render the technique difficult to apply in practice. We illustrate the method by developing two analytical approximations to the fundamental frequency of a vibrating cantilever carrying a rigid tip body. The results are compared to the numerical solution of the exact (transcendental) frequency equation over a range of system parameters.
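
    As a hedged illustration of the kind of inverse-power frequency series involved (the paper's specific series, closed-form sums, and tip-body application are not reproduced here), one can define sums S_p of inverse p-th powers of the natural frequencies; truncated ratios of successive sums then bracket and approximate the fundamental frequency, consistent with the "rapidly converging series" mentioned above:

        S_p \;=\; \sum_{n=1}^{\infty} \omega_n^{-p},
        \qquad
        S_p^{-1/p} \;\le\; \omega_1 \;\le\; \sqrt{\frac{S_p}{S_{p+2}}},
        \qquad
        \sqrt{\frac{S_p}{S_{p+2}}} \;\longrightarrow\; \omega_1 \quad (p \to \infty).

    The bounds tighten rapidly because the lowest frequency dominates both sums as p grows.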

  13. Automated CFD Parameter Studies on Distributed Parallel Computers

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Aftosmis, Michael; Pandya, Shishir; Tejnil, Edward; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)

    2002-01-01

    The objective of the current work is to build a prototype software system which will automate the process of running CFD jobs on Information Power Grid (IPG) resources. This system should remove the need for user monitoring and intervention of every single CFD job. It should enable the use of many different computers to populate a massive run matrix in the shortest time possible. Such a software system has been developed and is known as the AeroDB script system. The approach taken for the development of AeroDB was to build several discrete modules. These include a database, a job-launcher module, a run-manager module to monitor each individual job, and a web-based user portal for monitoring the progress of the parameter study. The details of the design of AeroDB are presented, followed by the results of a parameter study which was performed using AeroDB for the analysis of a reusable launch vehicle (RLV). The paper concludes with a section on the lessons learned in this effort and ideas for future work in this area.

  14. Fast distributed large-pixel-count hologram computation using a GPU cluster.

    PubMed

    Pan, Yuechao; Xu, Xuewu; Liang, Xinan

    2013-09-10

    Large-pixel-count holograms are an essential part of large-size holographic three-dimensional (3D) displays, but the generation of such holograms is computationally demanding. In order to address this issue, we have built a graphics processing unit (GPU) cluster with 32.5 Tflop/s computing power and implemented distributed hologram computation on it with speed improvement techniques, such as shared memory on GPU, GPU-level adaptive load balancing, and node-level load distribution. Using these speed improvement techniques on the GPU cluster, we have achieved a 71.4 times computation speed increase for 186M-pixel holograms. Furthermore, we have used the approaches of diffraction limits and subdivision of holograms to overcome the GPU memory limit in computing large-pixel-count holograms. 745M-pixel and 1.80G-pixel holograms were computed in 343 and 3326 s, respectively, for more than 2 million object points with RGB colors. Color 3D objects with 1.02M points were successfully reconstructed from a 186M-pixel hologram computed in 8.82 s with all the above three speed improvement techniques. It is shown that distributed hologram computation using a GPU cluster is a promising approach to increasing the computation speed of large-pixel-count holograms for large-size holographic display.
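
    A minimal single-machine sketch of the node-level load distribution idea, assuming a plain partition of hologram rows across worker processes and a real-valued point-source summation; the GPU kernels, adaptive load balancing, and diffraction-limit subdivision described in the abstract are not reproduced, and all names and parameter values below are illustrative only.

        import numpy as np
        from multiprocessing import Pool

        WAVELENGTH = 0.5e-6               # assumed wavelength (m)
        PITCH = 1.0e-6                    # assumed hologram pixel pitch (m)
        K = 2.0 * np.pi / WAVELENGTH

        def hologram_strip(args):
            """Compute one horizontal strip of the hologram by point-source summation."""
            row_start, row_end, width, points = args
            ys = (np.arange(row_start, row_end) * PITCH)[:, None]
            xs = (np.arange(width) * PITCH)[None, :]
            strip = np.zeros((row_end - row_start, width))
            for px, py, pz, amp in points:
                r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)
                strip += amp * np.cos(K * r)          # real-valued fringe contribution
            return row_start, strip

        def compute_hologram(height, width, points, workers=4):
            """Split rows across workers ("nodes") and stitch the strips back together."""
            bounds = np.linspace(0, height, workers + 1, dtype=int)
            tasks = [(bounds[i], bounds[i + 1], width, points) for i in range(workers)]
            holo = np.zeros((height, width))
            with Pool(workers) as pool:
                for row_start, strip in pool.map(hologram_strip, tasks):
                    holo[row_start:row_start + strip.shape[0]] = strip
            return holo

        if __name__ == "__main__":
            pts = [(0.0, 0.0, 0.1, 1.0), (50e-6, -30e-6, 0.12, 0.8)]   # toy object points
            print(compute_hologram(512, 512, pts).shape)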

  15. Computer program calculates and plots surface area and pore size distribution data

    NASA Technical Reports Server (NTRS)

    Halpert, G.

    1968-01-01

    Computer program calculates surface area and pore size distribution of powders, metals, ceramics, and catalysts, and prints and plots the desired data directly. Surface area calculations are based on the gas adsorption technique of Brunauer, Emmett, and Teller, and pore size distribution calculations are based on the gas adsorption technique of Pierce.
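
    Since the abstract names the Brunauer-Emmett-Teller (BET) gas adsorption technique, a small sketch of that calculation may help: fit the linearized BET isotherm over the usual 0.05-0.30 relative-pressure range and convert the monolayer volume to a specific surface area. The toy isotherm, fitting range, and nitrogen cross-sectional area below are standard textbook assumptions, not values taken from the NASA program.

        import numpy as np

        N_A = 6.022e23         # Avogadro's number (1/mol)
        SIGMA_N2 = 0.162e-18   # cross-sectional area of an adsorbed N2 molecule (m^2)
        V_MOLAR = 22414.0      # molar volume of an ideal gas at STP (cm^3/mol)

        def bet_surface_area(p_rel, v_ads, sample_mass_g):
            """Linearized BET fit: 1/[v((p0/p)-1)] = (c-1)/(vm*c) * (p/p0) + 1/(vm*c)."""
            mask = (p_rel >= 0.05) & (p_rel <= 0.30)
            x = p_rel[mask]
            y = 1.0 / (v_ads[mask] * (1.0 / x - 1.0))
            slope, intercept = np.polyfit(x, y, 1)
            v_m = 1.0 / (slope + intercept)            # monolayer volume (cm^3 STP)
            area_m2 = v_m * N_A * SIGMA_N2 / V_MOLAR   # total surface area (m^2)
            return area_m2 / sample_mass_g             # specific surface area (m^2/g)

        # toy isotherm: relative pressures and adsorbed volumes (cm^3 STP)
        p = np.array([0.06, 0.10, 0.15, 0.20, 0.25, 0.30])
        v = np.array([10.1, 11.0, 11.9, 12.8, 13.7, 14.8])
        print(round(bet_surface_area(p, v, sample_mass_g=1.0), 1), "m^2/g")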

  16. A Hybrid Computer Simulation to Generate the DNA Distribution of a Cell Population.

    ERIC Educational Resources Information Center

    Griebling, John L.; Adams, William S.

    1981-01-01

    Described is a method of simulating the formation of a DNA distribution, in which statistical results and experimentally measured parameters from DNA distribution and percent-labeled mitosis studies are combined. An EAI-680 and DECSystem-10 hybrid computer configuration is used. (Author/CS)

  17. Rocket Engine Turbine Blade Surface Pressure Distributions Experiment and Computations

    NASA Technical Reports Server (NTRS)

    Hudson, Susan T.; Zoladz, Thomas F.; Dorney, Daniel J.; Turner, James (Technical Monitor)

    2002-01-01

    Understanding the unsteady aspects of turbine rotor flow fields is critical to successful future turbine designs. A technology program was conducted at NASA's Marshall Space Flight Center to increase the understanding of unsteady environments for rocket engine turbines. The experimental program involved instrumenting turbine rotor blades with miniature surface mounted high frequency response pressure transducers. The turbine model was then tested to measure the unsteady pressures on the rotor blades. The data obtained from the experimental program is unique in two respects. First, much more unsteady data was obtained (several minutes per set point) than has been possible in the past. Also, an extensive steady performance database existed for the turbine model. This allowed an evaluation of the effect of the on-blade instrumentation on the turbine's performance. A three-dimensional unsteady Navier-Stokes analysis was also used to blindly predict the unsteady flow field in the turbine at the design operating conditions and at +15 degrees relative incidence to the first-stage rotor. The predicted time-averaged and unsteady pressure distributions show good agreement with the experimental data. This unique data set, the lessons learned for acquiring this type of data, and the improvements made to the data analysis and prediction tools are contributing significantly to current Space Launch Initiative turbine airflow test and blade surface pressure prediction efforts.

  18. Private Data Analytics on Biomedical Sensing Data via Distributed Computation.

    PubMed

    Gong, Yanmin; Fang, Yuguang; Guo, Yuanxiong

    2016-01-01

    Advances in biomedical sensors and mobile communication technologies have fostered the rapid growth of mobile health (mHealth) applications in the past years. Users generate a high volume of biomedical data during health monitoring, which can be used by the mHealth server for training predictive models for disease diagnosis and treatment. However, the biomedical sensing data raise serious privacy concerns because they reveal sensitive information such as health status and lifestyles of the sensed subjects. This paper proposes and experimentally studies a scheme that keeps the training samples private while enabling accurate construction of predictive models. We specifically consider logistic regression models which are widely used for predicting dichotomous outcomes in healthcare, and decompose the logistic regression problem into small subproblems over two types of distributed sensing data, i.e., horizontally partitioned data and vertically partitioned data. The subproblems are solved using individual private data, and thus mHealth users can keep their private data locally and only upload (encrypted) intermediate results to the mHealth server for model training. Experimental results based on real datasets show that our scheme is highly efficient and scalable to a large number of mHealth users.
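
    A minimal sketch of the horizontally partitioned case only, using plain gradient aggregation so the server never sees raw samples; the paper's actual decomposition and the encryption of intermediate results are not reproduced, and the dataset shapes and sizes are made up for the example.

        import numpy as np

        def local_gradient(w, X, y):
            """Gradient of the logistic loss computed on one user's local data."""
            p = 1.0 / (1.0 + np.exp(-X @ w))
            return X.T @ (p - y)

        def train_horizontal(parties, dim, lr=1.0, iters=300):
            """Server loop: sum the per-party gradients and update the shared weights."""
            w = np.zeros(dim)
            n_total = sum(len(y) for _, y in parties)
            for _ in range(iters):
                grad = sum(local_gradient(w, X, y) for X, y in parties) / n_total
                w -= lr * grad
            return w

        rng = np.random.default_rng(0)
        true_w = np.array([1.5, -2.0])

        def make_party(n):
            X = rng.normal(size=(n, 2))
            y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)
            return X, y

        parties = [make_party(300) for _ in range(3)]    # three users' local datasets
        print(train_horizontal(parties, dim=2))          # roughly recovers true_w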

  19. Lower bounds on parallel, distributed, and automata computations

    SciTech Connect

    Gereb-Graus, M.

    1989-01-01

    In this thesis the author presents a collection of lower bound results from several areas of computer science. Conventional wisdom states that lower bounds are much more difficult to prove than upper bounds: to obtain an upper bound one has to demonstrate just one scheme with the appropriate complexity, while to prove a lower bound one has to deal with all possible schemes. The difficulty of lower bounds is further demonstrated by the fact that whenever there is a very large gap between the lower and the upper bound for some problem, the conjectured truth is usually the known upper bound. His first two results are impossibility results for finite state automata. A hierarchy of complexity classes on tree languages (analogous to the polynomial hierarchy) accepted by alternating finite state machines is introduced. It turns out that the alternating class is equal to the well-known tree language class accepted by tree automata. By separating the deterministic and the nondeterministic classes of this hierarchy, he gives a negative answer to the folklore question of whether the expressive power of tree automata is the same as that of the finite state automaton that can walk on the edges of the tree (bug automaton). He proves that a three-head one-way DFA cannot perform string matching, that is, no three-head one-way DFA accepts the language L = {x#y | x is a substring of y, where x, y ∈ {0,1}*}. He also proves that in a one-round fair coin flipping (or voting) scheme with n participants, there is at least one participant who has a chance to decide the outcome with probability at least 3/n − o(1/n).

  20. Analytical formulae for computing dominance from species-abundance distributions.

    PubMed

    Fung, Tak; Villain, Laura; Chisholm, Ryan A

    2015-12-01

    The evenness of an ecological community affects ecosystem structure, functioning and stability, and has implications for biodiversity conservation. In uneven communities, most species are rare while a few dominant species drive ecosystem-level properties. In even communities, dominance is lower, with possibly many species playing key ecological roles. The dominance aspect of evenness can be measured as a decreasing function of the proportion of species required to make up a fixed fraction (e.g., half) of individuals in a community. Here we sought general rules about dominance in ecological communities by linking dominance mathematically to the parameters of common theoretical species-abundance distributions (SADs). We found that if a community's SAD was log-series or lognormal, then dominance was almost inevitably high, with fewer than 40% of species required to account for 90% of all individuals. Dominance for communities with an exponential SAD was lower but still typically high, with fewer than 40% of species required to account for 70% of all individuals. In contrast, communities with a gamma SAD only exhibited high dominance when the average species abundance was below a threshold of approximately 100. Furthermore, we showed that exact values of dominance were highly scale-dependent, exhibiting non-linear trends with changing average species abundance. We also applied our formulae to SADs derived from a mechanistic community model to demonstrate how dominance can increase with environmental variance. Overall, our study provides a rigorous basis for theoretical explorations of the dynamics of dominance in ecological communities, and how this affects ecosystem functioning and stability. PMID:26409166
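
    A small sketch of the dominance measure described above, computed directly from a vector of species abundances (the paper's analytical formulae for specific SAD families are not reproduced): the proportion of species needed to account for a fixed fraction of all individuals.

        import numpy as np

        def dominance_proportion(abundances, fraction=0.9):
            """Proportion of species required to account for `fraction` of individuals."""
            n = np.sort(np.asarray(abundances, dtype=float))[::-1]   # most abundant first
            cum = np.cumsum(n) / n.sum()
            species_needed = int(np.searchsorted(cum, fraction) + 1)
            return species_needed / len(n)

        # toy community: a few dominant species and many rare ones
        community = [500, 200, 100, 50, 20] + [1] * 95
        print(dominance_proportion(community, 0.9))   # small value => high dominance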

  1. A strategy for reducing turnaround time in design optimization using a distributed computer system

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

    There is a need to explore methods for reducing lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.

  2. Postbuckling and large-deflection nonlinear analyses on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Watson, Brian C.; Noor, Ahmed K.

    1995-01-01

    A computational strategy is presented for postbuckling and nonlinear static analyses of large complex structures on distributed-memory parallel computers. The strategy is designed for message-passing parallel computer systems. The key elements of the proposed strategy are: (1) a multiple-parameter reduced basis technique; (2) a nested dissection (or multilevel substructuring) ordering scheme; (3) parallel assembly of global matrices; and (4) a parallel sparse equation solver. The effectiveness of the strategy is assessed by performing thermomechanical postbuckling analyses of stiffened composite panels with cutouts, and nonlinear large-deflection analyses of High Speed Civil Transport models on three distributed-memory computers. The numerical studies presented demonstrate the advantages of nested dissection-based solvers over traditional skyline-based solvers on distributed-memory machines.

  3. A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biwas, Rupak; Kwak, Dochan (Technical Monitor)

    2001-01-01

    NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise, in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication, for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions that are required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.

  4. Evaluation of Near Field Atmospheric Dispersion Around Nuclear Facilities Using a Lorentzian Distribution Methodology

    SciTech Connect

    Hawkley, Gavin

    2014-01-01

    Atmospheric dispersion modeling within the near field of a nuclear facility typically applies a building wake correction to the Gaussian plume model, whereby a point source is modeled as a plane source. The plane source results in greater near field dilution and reduces the far field effluent concentration. However, the correction does not account for the concentration profile within the near field. Receptors of interest, such as the maximally exposed individual, may exist within the near field and thus the realm of building wake effects. Furthermore, release parameters and displacement characteristics may be unknown, particularly during upset conditions. Therefore, emphasis is placed upon the need to analyze and estimate an enveloping concentration profile within the near field of a release. This investigation included the analysis of 64 air samples collected over 128 wk. Variables of importance were then derived from the measurement data, and a methodology was developed that allowed for the estimation of Lorentzian-based dispersion coefficients along the lateral axis of the near field recirculation cavity; the development of recirculation cavity boundaries; and conservative evaluation of the associated concentration profile. The results evaluated the effectiveness of the Lorentzian distribution methodology for estimating near field releases and emphasized the need to place air-monitoring stations appropriately for complete concentration characterization. Additionally, the importance of the sampling period and operational conditions were discussed to balance operational feedback and the reporting of public dose.

  5. Advanced Distributed Measurements and Data Processing at the Vibro-Acoustic Test Facility, GRC Space Power Facility, Sandusky, Ohio - an Architecture and an Example

    NASA Technical Reports Server (NTRS)

    Hill, Gerald M.; Evans, Richard K.

    2009-01-01

    A large-scale, distributed, high-speed data acquisition system (HSDAS) is currently being installed at the Space Power Facility (SPF) at NASA Glenn Research Center's Plum Brook Station in Sandusky, OH. This installation is being done as part of a facility construction project to add Vibro-acoustic Test Capabilities (VTC) to the current thermal-vacuum testing capability of SPF in support of the Orion Project's requirement for Space Environments Testing (SET). The HSDAS architecture is a modular design, which utilizes fully remotely managed components, enabling the system to support multiple test locations with a wide range of measurement types and a very large system channel count. The architecture of the system is presented along with details on system scalability and measurement verification. In addition, the ability of the system to automate many of its processes, such as measurement verification and measurement system analysis, is also discussed.

  6. Real-time computer data system for the 40- by 80-foot wind tunnel facility at Ames Research Center

    NASA Technical Reports Server (NTRS)

    Cambra, J. M.; Tolari, G. P.

    1975-01-01

    The background material and operational concepts of a computer-based system for an operating wind tunnel are described. An on-line real-time computer system was installed in a wind tunnel facility to gather static and dynamic data. The computer system monitored aerodynamic forces and moments of periodic and quasi-periodic functions, and displayed and plotted computed results in real time. The total system is comprised of several off-the-shelf, interconnected subsystems that are linked to a large data processing center. The system includes a central processor unit with 32,000 24-bit words of core memory, a number of standard peripherals, and several special processors; namely, a dynamic analysis subsystem, a 256-channel PCM-data subsystem and ground station, a 60-channel high-speed data acquisition subsystem, a communication link, and static force and pressure subsystems. The role of the test engineer as a vital link in the system is also described.

  7. Potential applications of artificial intelligence in computer-based management systems for mixed waste incinerator facility operation

    SciTech Connect

    Rivera, A.L.; Singh, S.P.N.; Ferrada, J.J.

    1991-01-01

    The Department of Energy/Oak Ridge Field Office (DOE/OR) operates a mixed waste incinerator facility at the Oak Ridge K-25 Site, designed for the thermal treatment of incinerable liquid, sludge, and solid waste regulated under the Toxic Substances Control Act (TSCA) and the Resource Conservation and Recovery Act (RCRA). Operation of the TSCA Incinerator is highly constrained as a result of the regulatory, institutional, technical, and resource availability requirements. This presents an opportunity for applying computer technology as a technical resource for mixed waste incinerator operation to facilitate promoting and sustaining a continuous performance improvement process while demonstrating compliance. This paper describes mixed waste incinerator facility performance-oriented tasks that could be assisted by Artificial Intelligence (AI) and the requirements for AI tools that would implement these algorithms in a computer-based system. 4 figs., 1 tab.

  8. Distributed computing as a virtual supercomputer: Tools to run and manage large-scale BOINC simulations

    NASA Astrophysics Data System (ADS)

    Giorgino, Toni; Harvey, M. J.; de Fabritiis, Gianni

    2010-08-01

    Distributed computing (DC) projects tackle large computational problems by exploiting the donated processing power of thousands of volunteered computers, connected through the Internet. To efficiently employ the computational resources of one of the world's largest DC efforts, GPUGRID, the project scientists require tools that handle hundreds of thousands of tasks which run asynchronously and generate gigabytes of data every day. We describe RBoinc, an interface that allows computational scientists to embed the DC methodology into the daily work-flow of high-throughput experiments. By extending the Berkeley Open Infrastructure for Network Computing (BOINC), the leading open-source middleware for current DC projects, with mechanisms to submit and manage large-scale distributed computations from individual workstations, RBoinc turns distributed grids into cost-effective virtual resources that can be employed by researchers in work-flows similar to conventional supercomputers. The GPUGRID project is currently using RBoinc for all of its in silico experiments based on molecular dynamics methods, including the determination of binding free energies and free energy profiles in all-atom models of biomolecules.

  9. A computational method for planning complex compound distributions under container, liquid handler, and assay constraints.

    PubMed

    Russo, Mark F; Wild, Daniel; Hoffman, Steve; Paulson, James; Neil, William; Nirschl, David S

    2013-10-01

    A systematic method for assembling and solving complex compound distribution problems is presented in detail. The method is based on a model problem that enumerates the mathematical equations and constraints describing a source container, liquid handler, and three types of destination containers involved in a set of compound distributions. One source container and one liquid handler are permitted in any given problem formulation, although any number of compound distributions may be specified. The relative importance of all distributions is expressed by assigning weights, which are factored into the final mathematical problem specification. A computer program was created that automatically assembles and solves a complete compound distribution problem given the parameters that describe the source container, liquid handler, and any number and type of compound distributions. Business rules are accommodated by adjusting weighting factors assigned to each distribution. An example problem, presented and explored in detail, demonstrates complex and nonintuitive solution behavior.

  10. Simulation concept of NICA-MPD-SPD Tier0-Tier1 computing facilities

    NASA Astrophysics Data System (ADS)

    Korenkov, V. V.; Nechaevskiy, A. V.; Ososkov, G. A.; Pryahina, D. I.; Trofomov, V. V.; Uzhinskiy, A. V.

    2016-09-01

    The simulation concept for grid-cloud services of contemporary HENP experiments of the Big Data scale was formulated through practical use of the simulation system developed at LIT JINR, Dubna. This system is intended to improve the efficiency of the design and development of a wide class of grid-cloud structures by using the work quality indicators of some real system to design and predict its evolution. For these purposes the simulation program is combined with a real monitoring system of the grid-cloud service through a special database (DB). The DB accomplishes acquisition and analysis of monitoring data to carry out dynamical corrections of the simulation. Such an approach allows us to construct a general model pattern which should not depend on a specific simulated object, while the parameters describing this object can be used as input to run the pattern. The simulation of some processes of the NICA-MPD-SPD Tier0-Tier1 distributed computing is considered as an example application of our approach.

  11. Computer code for the calculation of the temperature distribution of cooled turbine blades

    NASA Astrophysics Data System (ADS)

    Tietz, Thomas A.; Koschel, Wolfgang W.

    A generalized computer code for the calculation of the temperature distribution in a cooled turbine blade is presented. Using an iterative procedure, this program especially allows the coupling of the aerothermodynamic values of the internal flow with the corresponding temperature distribution of the blade material. The temperature distribution of the turbine blade is calculated using a fully three-dimensional finite element computer code, so that the radial heat flux is taken into account. This code was extended to 4-node tetrahedral elements enabling an adaptive grid generation. To facilitate the mesh generation of the usually complex blade geometries, a computer program was developed, which performs the grid generation of blades having basically arbitrary shape on the basis of two-dimensional cuts. The performance of the code is demonstrated with reference to a typical cooling configuration of a modern turbine blade.

  12. Distributed Computation of the knn Graph for Large High-Dimensional Point Sets

    PubMed Central

    Plaku, Erion; Kavraki, Lydia E.

    2009-01-01

    High-dimensional problems arising from robot motion planning, biology, data mining, and geographic information systems often require the computation of k nearest neighbor (knn) graphs. The knn graph of a data set is obtained by connecting each point to its k closest points. As the research in the above-mentioned fields progressively addresses problems of unprecedented complexity, the demand for computing knn graphs based on arbitrary distance metrics and large high-dimensional data sets increases, exceeding resources available to a single machine. In this work we efficiently distribute the computation of knn graphs for clusters of processors with message passing. Extensions to our distributed framework include the computation of graphs based on other proximity queries, such as approximate knn or range queries. Our experiments show nearly linear speedup with over one hundred processors and indicate that similar speedup can be obtained with several hundred processors. PMID:19847318
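
    A hedged single-machine sketch of the partitioning idea: each worker owns a disjoint chunk of the points and computes the k nearest neighbors of its chunk against the full set. The message-passing implementation, arbitrary distance metrics, and approximate/range-query extensions described in the abstract are not reproduced; Euclidean distance and the sizes below are illustrative assumptions.

        import numpy as np
        from multiprocessing import Pool

        def knn_for_chunk(args):
            """k nearest neighbors (Euclidean) for one chunk of query points."""
            chunk, data, k = args
            d = np.linalg.norm(chunk[:, None, :] - data[None, :, :], axis=2)
            # each query point also appears in `data`; take k+1 and drop the self-match
            return np.argsort(d, axis=1)[:, 1:k + 1]

        def distributed_knn_graph(points, k=3, workers=4):
            chunks = np.array_split(points, workers)
            with Pool(workers) as pool:
                parts = pool.map(knn_for_chunk, [(c, points, k) for c in chunks])
            return np.vstack(parts)       # row i lists the k neighbor indices of point i

        if __name__ == "__main__":
            pts = np.random.default_rng(1).random((200, 8))   # 200 points in 8 dimensions
            print(distributed_knn_graph(pts, k=3).shape)      # (200, 3)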

  13. School Facilities Funding and Capital-Outlay Distribution in the States

    ERIC Educational Resources Information Center

    Duncombe, William; Wang, Wen

    2009-01-01

    Traditionally, financing the construction of school facilities has been a local responsibility. In the past several decades, states have increased their support for school facilities. Using data collected from various sources, this study first classifies the design of capital aid programs in all 50 states into various categories based on the scope…

  14. Application of the TEMPEST computer code for simulating hydrogen distribution in model containment structures. [PWR; BWR

    SciTech Connect

    Trent, D.S.; Eyler, L.L.

    1982-09-01

    In this study several aspects of simulating hydrogen distribution in geometric configurations relevant to reactor containment structures were investigated using the TEMPEST computer code. Of particular interest was the performance of the TEMPEST turbulence model in a density-stratified environment. Computed results illustrated that the TEMPEST numerical procedures predicted the measured phenomena with good accuracy under a variety of conditions and that the turbulence model used is a viable approach in complex turbulent flow simulation.

  15. Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility

    NASA Astrophysics Data System (ADS)

    Donvito, Giacinto; Salomoni, Davide; Italiano, Alessandro

    2014-06-01

    In this work the testing activities that were carried out to verify whether the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center are shown. SLURM (Simple Linux Utility for Resource Management) is an open-source batch system developed mainly by the Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing was focused both on verifying the functionalities of the batch system and on the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and Alice, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM in order to enable several scheduling functionalities such as Hierarchical FairShare, Quality of Service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job age scheduling, job size scheduling, and scheduling of consumable resources. We then show how different job typologies, like serial, MPI, multi-thread, whole-node and interactive jobs, can be managed. Tests on the use of ACLs on queues or, in general, on other resources are then described. A particular SLURM feature we also verified is event triggers, useful for configuring specific actions on each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance since a mandatory requirement in our scenarios is to have a working farm cluster even in case of hardware failure of the server(s) hosting the batch system. Among our requirements there is also the possibility to deal with pre-execution and post

  16. MultiPhyl: a high-throughput phylogenomics webserver using distributed computing.

    PubMed

    Keane, Thomas M; Naughton, Thomas J; McInerney, James O

    2007-07-01

    With the number of fully sequenced genomes increasing steadily, there is greater interest in performing large-scale phylogenomic analyses from large numbers of individual gene families. Maximum likelihood (ML) has been shown repeatedly to be one of the most accurate methods for phylogenetic construction. Recently, there have been a number of algorithmic improvements in maximum-likelihood-based tree search methods. However, it can still take a long time to analyse the evolutionary history of many gene families using a single computer. Distributed computing refers to a method of combining the computing power of multiple computers in order to perform some larger overall calculation. In this article, we present the first high-throughput implementation of a distributed phylogenetics platform, MultiPhyl, capable of using the idle computational resources of many heterogeneous non-dedicated machines to form a phylogenetics supercomputer. MultiPhyl allows a user to upload hundreds or thousands of amino acid or nucleotide alignments simultaneously and perform computationally intensive tasks such as model selection, tree searching and bootstrapping of each of the alignments using many desktop machines. The program implements a set of 88 amino acid models and 56 nucleotide maximum likelihood models and a variety of statistical methods for choosing between alternative models. A MultiPhyl webserver is available for public use at: http://www.cs.nuim.ie/distributed/multiphyl.php.

  17. Computation of wall temperature and heat flux distributions of the film cooled walls

    NASA Astrophysics Data System (ADS)

    Ko, S.-Y.

    A computational algorithm and a computer program have been developed for determining the wall temperature distribution of a film-cooled gas turbine flame tube. In the computer program, the Newton-Raphson iteration method is used for the solution of the heat balance equation; a graphical method has also been proposed for the same purpose. Results indicate that a 1% reduction in the turbulent mixing coefficient of the combustion chamber would reduce the wall temperature by about 20 C, which would substantially increase the service life of turbine components.
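
    A minimal sketch of the Newton-Raphson step for a wall heat balance; the particular balance chosen here (hot-gas convection, coolant-side convection, and surface radiation) and all coefficient values are illustrative assumptions, not the model used in the program above.

        def wall_temperature(T_gas=2000.0, T_cool=600.0, h_gas=1200.0, h_cool=900.0,
                             emissivity=0.8, sigma=5.67e-8, tol=1e-6, max_iter=50):
            """Solve f(Tw) = h_gas*(T_gas-Tw) - h_cool*(Tw-T_cool) - eps*sigma*Tw**4 = 0."""
            Tw = 0.5 * (T_gas + T_cool)                 # initial guess (K)
            for _ in range(max_iter):
                f = (h_gas * (T_gas - Tw) - h_cool * (Tw - T_cool)
                     - emissivity * sigma * Tw ** 4)
                df = -h_gas - h_cool - 4.0 * emissivity * sigma * Tw ** 3
                step = f / df
                Tw -= step
                if abs(step) < tol:
                    break
            return Tw

        print(round(wall_temperature(), 1), "K")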

  18. Computation of pair distribution functions and three-dimensional densities with a reduced variance principle

    NASA Astrophysics Data System (ADS)

    Borgis, Daniel; Assaraf, Roland; Rotenberg, Benjamin; Vuilleumier, Rodolphe

    2013-12-01

    No fancy statistical objects here, we go back to the computation of one of the most basic and fundamental quantities in the statistical mechanics of fluids, namely the pair distribution functions. Those functions are usually computed in molecular simulations by using histogram techniques. We show here that they can be estimated using a global information on the instantaneous forces acting on the particles, and that this leads to a reduced variance compared to the standard histogram estimators. The technique is extended successfully to the computation of three-dimensional solvent densities around tagged molecular solutes, quantities that are noisy and very long to converge, using histograms.
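
    For reference, a minimal sketch of the standard histogram estimator for g(r) that the paper's force-based estimator improves upon (the reduced-variance estimator itself is not reproduced); a cubic periodic box and the toy configuration below are assumptions.

        import numpy as np

        def pair_distribution(positions, box, n_bins=100, r_max=None):
            """Histogram estimate of g(r) for N particles in a cubic periodic box."""
            n = len(positions)
            r_max = r_max or box / 2.0
            edges = np.linspace(0.0, r_max, n_bins + 1)
            counts = np.zeros(n_bins)
            for i in range(n - 1):
                d = positions[i + 1:] - positions[i]
                d -= box * np.round(d / box)                   # minimum-image convention
                r = np.linalg.norm(d, axis=1)
                counts += np.histogram(r[r < r_max], bins=edges)[0]
            shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
            ideal = shell_vol * n * (n - 1) / 2.0 / box ** 3   # ideal-gas pair counts
            r_mid = 0.5 * (edges[1:] + edges[:-1])
            return r_mid, counts / ideal

        pos = np.random.default_rng(2).random((500, 3)) * 10.0
        r, g = pair_distribution(pos, box=10.0)
        print(g[40:45])   # close to 1 for an uncorrelated (ideal-gas-like) configuration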

  19. Distributed Computing Architecture for Image-Based Wavefront Sensing and 2 D FFTs

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey S.; Dean, Bruce H.; Haghani, Shadan

    2006-01-01

    Image-based wavefront sensing (WFS) provides significant advantages over interferometric wavefront sensors, such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore specialized high-performance computing architectures are required in applications utilizing it. The development and testing of these high-performance computing architectures are essential to such missions as the James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and Spherical Primary Optical Telescope (SPOT). These specialized computing architectures require numerous two-dimensional Fourier transforms, which necessitate an all-to-all communication when applied on a distributed computational architecture. Several solutions for distributed computing are presented with an emphasis on a 64-node cluster of DSPs, multiple DSP FPGAs, and an application of low-diameter graph theory. Timing results and performance analysis will be presented. The solutions offered could be applied to other all-to-all communication and scientifically computationally complex problems.
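
    A compact sketch of why the two-dimensional FFT forces an all-to-all exchange on a distributed architecture: transform the rows each node owns, transpose (the all-to-all step, since every node needs a piece of every other node's rows), then transform the former columns. Single-process numpy stands in for the cluster here; nothing below is taken from the mission or cluster software itself.

        import numpy as np

        def fft2_by_transpose(image):
            """Row FFTs, transpose, row FFTs again == full 2D FFT.

            On a distributed machine each node owns a block of rows; the transpose
            is realized as an all-to-all exchange of sub-blocks between nodes.
            """
            step1 = np.fft.fft(image, axis=1)    # each node transforms its own rows
            step2 = step1.T                      # all-to-all communication phase
            step3 = np.fft.fft(step2, axis=1)    # transform the former columns
            return step3.T

        img = np.random.default_rng(3).random((64, 64))
        print(np.allclose(fft2_by_transpose(img), np.fft.fft2(img)))   # True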

  20. Planning for distributed workflows: constraint-based coscheduling of computational jobs and data placement in distributed environments

    NASA Astrophysics Data System (ADS)

    Makatun, Dzmitry; Lauret, Jérôme; Rudová, Hana; Šumbera, Michal

    2015-05-01

    When running data-intensive applications on distributed computational resources, long I/O overheads may be observed as remotely stored data is accessed. Latency and bandwidth can become the major limiting factors for overall computation performance, reducing the CPU/wall-time ratio through excessive I/O wait. Building on our previous research, we propose a constraint-programming-based planner that schedules computational jobs and data placements (transfers) in a distributed environment in order to optimize resource utilization and reduce the overall processing completion time. The optimization is achieved by ensuring that none of the resources (network links, data storage and CPUs) are oversaturated at any moment and that either (a) the data is pre-placed at the site where the job runs or (b) the job is scheduled where the data is already present. Such an approach eliminates the idle CPU cycles that occur when a job waits for I/O from a remote site, and it would have wide application in the community. Our planner was evaluated in simulation using data extracted from log files of the batch and data management systems of the STAR experiment. The results of the evaluation and the estimated performance improvements are discussed in this paper.
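
    The core placement rule of such a planner can be caricatured by a greedy sketch: run a job where its data already resides if CPUs are free, otherwise plan a transfer to the least-loaded site. The real planner is a constraint program that also bounds link and storage saturation; the site and job names below are hypothetical.

        # Greedy stand-in for the planner's core placement rule: prefer scheduling a job
        # where its input data already resides, and fall back to transferring the data
        # to the least-loaded site. The actual planner in the paper is a constraint
        # program that also bounds network-link saturation; sites and jobs below are
        # hypothetical.
        from dataclasses import dataclass, field

        @dataclass
        class Site:
            name: str
            free_cpus: int
            datasets: set = field(default_factory=set)

        def place_jobs(jobs, sites):
            """jobs: list of (job_id, dataset) tuples. Returns job -> (site, transfer?)."""
            plan = {}
            for job_id, dataset in jobs:
                # (a) data already present and a CPU is free -> no transfer needed
                local = [s for s in sites if dataset in s.datasets and s.free_cpus > 0]
                if local:
                    chosen, transfer = max(local, key=lambda s: s.free_cpus), False
                else:
                    # (b) otherwise pre-place the data at the least-loaded site
                    chosen, transfer = max(sites, key=lambda s: s.free_cpus), True
                    chosen.datasets.add(dataset)
                chosen.free_cpus -= 1
                plan[job_id] = (chosen.name, transfer)
            return plan

        if __name__ == "__main__":
            sites = [Site("BNL", 3, {"run42"}), Site("Prague", 2, set())]
            jobs = [("j1", "run42"), ("j2", "run42"), ("j3", "run7")]
            print(place_jobs(jobs, sites))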

  1. Analysis and synthesis of distributed-lumped-active networks by digital computer

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The use of digital computational techniques in the analysis and synthesis of DLA (distributed lumped active) networks is considered. This class of networks consists of three distinct types of elements, namely, distributed elements (modeled by partial differential equations), lumped elements (modeled by algebraic relations and ordinary differential equations), and active elements (modeled by algebraic relations). Such a characterization is applicable to a broad class of circuits, especially including those usually referred to as linear integrated circuits, since the fabrication techniques for such circuits readily produce elements which may be modeled as distributed, as well as the more conventional lumped and active ones.

  2. DISTRIBUTION COEFFICIENTS (KD) GENERATED FROM A CORE SAMPLE COLLECTED FROM THE SALTSTONE DISPOSAL FACILITY

    SciTech Connect

    Almond, P.; Kaplan, D.

    2011-04-25

    Core samples originating from Vault 4, Cell E of the Saltstone Disposal Facility (SDF) were collected in September of 2008 (Hansen and Crawford 2009, Smith 2008) and sent to SRNL to measure chemical and physical properties of the material including visual uniformity, mineralogy, microstructure, density, porosity, distribution coefficients (Kd), and chemical composition. Some data from these experiments have been reported (Cozzi and Duncan 2010). In this study, leaching experiments were conducted with a single core sample under conditions that are representative of saltstone performance. In separate experiments, reducing and oxidizing environments were targeted to obtain solubility and Kd values from the measurable species identified in the solid and aqueous leachate. This study was designed to provide insight into how readily species immobilized in saltstone will leach from the saltstone under oxidizing conditions simulating the edge of a saltstone monolith and under reducing conditions, targeting conditions within the saltstone monolith. Core samples were taken from saltstone poured in December of 2007 giving a cure time of nine months in the cell and a total of thirty months before leaching experiments began in June 2010. The saltstone from Vault 4, Cell E is comprised of blast furnace slag, class F fly ash, portland cement, and Deliquification, Dissolution, and Adjustment (DDA) Batch 2 salt solution. The salt solution was previously analyzed from a sample of Tank 50 salt solution and characterized in the 4QCY07 Waste Acceptance Criteria (WAC) report (Zeigler and Bibler 2009). Subsequent to Tank 50 analysis, additional solution was added to the tank solution from the Effluent Treatment Project as well as from inleakage from Tank 50 pump bearings (Cozzi and Duncan 2010). Core samples were taken from three locations and at three depths at each location using a two-inch diameter concrete coring bit (1-1, 1-2, 1-3; 2-1, 2-2, 2-3; 3-1, 3-2, 3-3) (Hansen and

  3. Certain irregularities in the use of computer facilities at Sandia Laboratory

    SciTech Connect

    Not Available

    1980-10-22

    This report concerns irregularities in the use of computer systems at Sandia Laboratories (Sandia) in Albuquerque, New Mexico. Our interest in this subject was triggered when we learned late last year that the Federal Bureau of Investigation (FBI) was planning to undertake an investigation into possible misuse of the computer systems at Sandia. That investigation, which was carried out with the assistance of our staff, disclosed that an employee of Sandia was apparently using the Sandia computer system to assist in running a bookmaking operation for local gamblers. As a result of that investigation, we decided to conduct a separate review of Sandia's computer systems to determine the extent of computer misuse at Sandia. We found that over 200 employees of Sandia had stored games, personal items, classified material, and otherwise sensitive material on their computer files.

  4. Computer Graphics for Use in the Classroom to Illustrate Basic Concepts and Spatial Distributions.

    ERIC Educational Resources Information Center

    Smith, Alan D.

    The computer packages PLOTALL, SYMAP, SURFACE II, QUSMO, QUSMO2, QUCRS, and QUTAB are commercially available plotting programs that provide aids for visualizing spatially distributed data and concepts. The incremental drum and line printer plots communicate often vast and difficult-to-interpret tabular data with or without geographic coordinates.…

  5. Sensitivity analysis for large-deflection and postbuckling responses on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Watson, Brian C.; Noor, Ahmed K.

    1995-01-01

    A computational strategy is presented for calculating sensitivity coefficients for the nonlinear large-deflection and postbuckling responses of laminated composite structures on distributed-memory parallel computers. The strategy is applicable to any message-passing distributed computational environment. The key elements of the proposed strategy are: (1) a multiple-parameter reduced basis technique; (2) a parallel sparse equation solver based on a nested dissection (or multilevel substructuring) node ordering scheme; and (3) a multilevel parallel procedure for evaluating hierarchical sensitivity coefficients. The hierarchical sensitivity coefficients measure the sensitivity of the composite structure response to variations in three sets of interrelated parameters; namely, laminate, layer and micromechanical (fiber, matrix, and interface/interphase) parameters. The effectiveness of the strategy is assessed by performing hierarchical sensitivity analysis for the large-deflection and postbuckling responses of stiffened composite panels with cutouts on three distributed-memory computers. The panels are subjected to combined mechanical and thermal loads. The numerical studies presented demonstrate the advantages of the reduced basis technique for hierarchical sensitivity analysis on distributed-memory machines.

  6. A Survey of Knowledge Management Skills Acquisition in an Online Team-Based Distributed Computing Course

    ERIC Educational Resources Information Center

    Thomas, Jennifer D. E.

    2007-01-01

    This paper investigates students' perceptions of their acquisition of knowledge management skills, namely thinking and team-building skills, resulting from the integration of various resources and technologies into an entirely team-based, online upper level distributed computing (DC) information systems (IS) course. Results seem to indicate that…

  7. Variable-Length Message Transmission for Distributed Loop Computer Networks (Part I).

    ERIC Educational Resources Information Center

    Reames, C. C.; Liu, M. T.

    An introduction to the problems of variable-length message transmission in distributed loop computer networks, with a summary of previous accomplishments in the area, begins this technically-oriented document. An improved technique, overcoming some of the inadequacies in presently used techniques, is proposed together with a conceptual model of…

  8. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

    Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole-system aircraft simulation and whole-system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  9. Methods and apparatuses for information analysis on shared and distributed computing systems

    DOEpatents

    Bohn, Shawn J [Richland, WA; Krishnan, Manoj Kumar [Richland, WA; Cowley, Wendy E [Richland, WA; Nieplocha, Jarek [Richland, WA

    2011-02-22

    Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
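
    The local-to-global pattern described in the abstract can be sketched with Python's multiprocessing module; the documents are toy strings and the "major term set" is reduced to the most common terms, which is an assumption for illustration rather than the patented method.

        # Minimal sketch of the local-to-global term-statistics pattern described in the
        # abstract: each worker counts terms in its own distinct set of documents, and
        # the local counts are then merged into a global set. Documents are toy strings.
        from collections import Counter
        from multiprocessing import Pool

        def local_term_statistics(documents):
            """Term frequencies for one process's distinct set of documents."""
            counts = Counter()
            for doc in documents:
                counts.update(doc.lower().split())
            return counts

        def global_term_statistics(document_sets, processes=4):
            with Pool(processes) as pool:
                local_sets = pool.map(local_term_statistics, document_sets)
            global_counts = Counter()
            for local in local_sets:      # contribute local statistics to the global set
                global_counts.update(local)
            return global_counts

        if __name__ == "__main__":
            sets = [["the cat sat", "the dog ran"], ["a cat ran", "the cat slept"]]
            stats = global_term_statistics(sets, processes=2)
            print(stats.most_common(3))   # a crude stand-in for a "major term set"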

  10. Computer simulation of random variables and vectors with arbitrary probability distribution laws

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.

    1981-01-01

    Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n-dimensional random variables if their joint probability distribution is known.
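
    In one dimension the construction reduces to the familiar inverse-CDF (inverse transform) method, sketched below for an exponential distribution; the multivariate case applies the same idea recursively to conditional distributions.

        # Sketch of the construction in the one-dimensional case: a uniform variate U on
        # (0,1) pushed through the inverse CDF F^{-1} has distribution F. The paper's
        # n-dimensional result applies the same idea recursively to conditional CDFs.
        import numpy as np

        def sample_from_cdf(inverse_cdf, n_samples, rng):
            u = rng.uniform(0.0, 1.0, size=n_samples)   # independent U(0,1) variates
            return inverse_cdf(u)

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            lam = 1.5
            # Exponential(lam): F(x) = 1 - exp(-lam*x), so F^{-1}(u) = -ln(1-u)/lam.
            x = sample_from_cdf(lambda u: -np.log1p(-u) / lam, 100_000, rng)
            print(f"sample mean {x.mean():.3f} vs theoretical {1.0/lam:.3f}")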

  11. Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.

    2000-01-01

    The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many subdomains called blocks, and to solve the governing equations over these blocks. The dynamic load balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computation. In environments with computers of different architecture, operating system, CPU speed, memory size, load, and network speed, balancing the loads and managing the communication between processors become crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computing environment during execution time. More recently, these tools were extended to a second operating system, Windows NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running a NASA-based code, ADPAC, to demonstrate the developed tools for dynamic load balancing.
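
    One common heuristic for the block-distribution problem described above is a largest-first greedy assignment weighted by processor speed; the sketch below uses made-up block costs and speeds and is not the authors' tool.

        # Hedged sketch of one common heuristic for the block-distribution problem the
        # abstract describes: assign the most expensive blocks first, each to the
        # processor that would finish its current load soonest given its relative
        # speed. Block costs and processor speeds below are invented.
        def balance_blocks(block_costs, processor_speeds):
            finish_time = [0.0] * len(processor_speeds)
            assignment = {}
            # Largest blocks first (classic LPT heuristic).
            for block, cost in sorted(block_costs.items(), key=lambda kv: -kv[1]):
                proc = min(range(len(processor_speeds)),
                           key=lambda p: finish_time[p] + cost / processor_speeds[p])
                finish_time[proc] += cost / processor_speeds[proc]
                assignment[block] = proc
            return assignment, finish_time

        if __name__ == "__main__":
            blocks = {f"block{i}": c for i, c in enumerate([9.0, 7.5, 6.0, 4.0, 3.5, 2.0])}
            speeds = [1.0, 1.0, 2.5]      # heterogeneous CPUs: the last one is fastest
            plan, loads = balance_blocks(blocks, speeds)
            print(plan)
            print("per-processor finish times:", [round(t, 2) for t in loads])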

  12. Effect of computed horizontal diffusion coefficients on two-dimensional N2O model distributions

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Guthrie, Paul D.; Schoeberl, Mark R.; Newman, Paul A.

    1988-01-01

    The effects of horizontal diffusion coefficients K(yy) and K(yz), computed directly from the residual circulation, on the N2O distribution in a photochemical model were investigated, using a modified version of the two-dimensional model of Guthrie et al. (1984). The residual circulation was computed using the NMC's temperature data and the heating rates reported by Rosenfield et al. (1987). As compared with the effect of the residual circulation alone, the use of horizontal diffusion coefficients produced substantial changes in the N2O distribution and increased the N2O's lifetime values by a few percent. It is suggested that trace gases, such as CH4, CFCl3, CF2Cl2, CH3Cl, and CCl4, which impact the NO(x), HO(x), and Cl(x) radical distributions and therefore ozone, will be influenced in a similar manner by the addition of more realistic diffusion fields.

  13. Development and Demonstration of a Computational Tool for the Analysis of Particle Vitiation Effects in Hypersonic Propulsion Test Facilities

    NASA Technical Reports Server (NTRS)

    Perkins, Hugh Douglas

    2010-01-01

    In order to improve the understanding of particle vitiation effects in hypersonic propulsion test facilities, a quasi-one dimensional numerical tool was developed to efficiently model reacting particle-gas flows over a wide range of conditions. Features of this code include gas-phase finite-rate kinetics, a global porous-particle combustion model, mass, momentum and energy interactions between phases, and subsonic and supersonic particle drag and heat transfer models. The basic capabilities of this tool were validated against available data or other validated codes. To demonstrate the capabilities of the code a series of computations were performed for a model hypersonic propulsion test facility and scramjet. Parameters studied were simulated flight Mach number, particle size, particle mass fraction and particle material.

  14. Development of Parallel Computing Framework to Enhance Radiation Transport Code Capabilities for Rare Isotope Beam Facility Design

    SciTech Connect

    Kostin, Mikhail; Mokhov, Nikolai; Niita, Koji

    2013-09-25

    A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran 77, Fortran 90 or C. The module is largely independent of the radiation transport codes it is used with, and is connected to the codes by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It is possible to use it with other codes such as PHITS, FLUKA and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known problems with scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and on networks of workstations, where interference from other users is possible.
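
    The checkpoint facility can be pictured with a generic checkpoint/restart sketch; the file name, state layout, and the stand-in "history" loop below are assumptions for illustration, not the framework's actual interface.

        # Generic checkpoint/restart sketch in the spirit of the framework's checkpoint
        # facility: periodically persist the accumulated tally and the event counter,
        # and resume from the file if it exists. The file name and state layout are
        # illustrative, not the framework's actual format.
        import os
        import pickle
        import random

        CHECKPOINT = "transport_checkpoint.pkl"

        def load_checkpoint():
            if os.path.exists(CHECKPOINT):
                with open(CHECKPOINT, "rb") as f:
                    return pickle.load(f)
            return {"next_event": 0, "tally": 0.0}

        def save_checkpoint(state):
            with open(CHECKPOINT + ".tmp", "wb") as f:
                pickle.dump(state, f)
            os.replace(CHECKPOINT + ".tmp", CHECKPOINT)   # atomic rename on POSIX

        def run(total_events=1_000_000, checkpoint_every=100_000):
            state = load_checkpoint()
            # Re-seed from the event counter so each run segment is reproducible.
            rng = random.Random(state["next_event"])
            for event in range(state["next_event"], total_events):
                state["tally"] += rng.random()            # stand-in for one history
                state["next_event"] = event + 1
                if state["next_event"] % checkpoint_every == 0:
                    save_checkpoint(state)
            save_checkpoint(state)
            return state["tally"] / total_events

        if __name__ == "__main__":
            print("mean score:", run())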

  15. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    NASA Astrophysics Data System (ADS)

    Campana, S.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  16. An incentive for coordination in a decentralised service chain with a Weibull lifetime distributed facility

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Fang; Yang, Gino K.; Yang, Chyn-Yng; Chu, Tu-Bin

    2013-10-01

    This article deals with a decentralised service chain consisting of a service provider and a facility owner. The revenue allocation and service price are, respectively, determined by the service provider and the facility owner in a non-cooperative manner. To model this decentralised operation, a Stackelberg game between the two parties is formulated. In the mathematical framework, the service system is assumed to be driven by Poisson customer arrivals and exponential service times. The most common log-linear service demand and Weibull facility lifetime are also adopted. Under these analytical conditions, the decentralised decisions in this game are investigated and then a unique optimal equilibrium is derived. Finally, a coordination mechanism is proposed to improve the efficiency of this decentralised system.

  17. Environmental justice in a French industrial region: are polluting industrial facilities equally distributed?

    PubMed

    Viel, Jean-François; Hägi, Mathieu; Upegui, Erika; Laurian, Lucie

    2011-01-01

    Recent studies have suggested that minority or deprived groups are subject to the additional burden of a polluted living environment. Our goal is to determine whether such environmental inequalities occur in France's leading industrial region, using detailed socio-economic data and advanced Bayesian methods. Associations between proximity to hazardous facilities (i.e., within a 2 km radius) and the socio-economic characteristics of populations are analyzed at fine geographical scales. Noxious facilities are disproportionately located in communities with higher proportions of foreign-born residents after controlling for deprivation (Townsend score), population density and rural/urban status. High deprivation also appears as a predictive factor, although less strongly and less consistently.

  18. CBRAIN: a web-based, distributed computing platform for collaborative neuroimaging research

    PubMed Central

    Sherif, Tarek; Rioux, Pierre; Rousseau, Marc-Etienne; Kassis, Nicolas; Beck, Natacha; Adalat, Reza; Das, Samir; Glatard, Tristan; Evans, Alan C.

    2014-01-01

    The Canadian Brain Imaging Research Platform (CBRAIN) is a web-based collaborative research platform developed in response to the challenges raised by data-heavy, compute-intensive neuroimaging research. CBRAIN offers transparent access to remote data sources, distributed computing sites, and an array of processing and visualization tools within a controlled, secure environment. Its web interface is accessible through any modern browser and uses graphical interface idioms to reduce the technical expertise required to perform large-scale computational analyses. CBRAIN's flexible meta-scheduling has allowed the incorporation of a wide range of heterogeneous computing sites, currently including nine national research High Performance Computing (HPC) centers in Canada, one in Korea, one in Germany, and several local research servers. CBRAIN leverages remote computing cycles and facilitates resource-interoperability in a transparent manner for the end-user. Compared with typical grid solutions available, our architecture was designed to be easily extendable and deployed on existing remote computing sites with no tool modification, administrative intervention, or special software/hardware configuration. As of October 2013, CBRAIN serves over 200 users spread across 53 cities in 17 countries. The platform is built as a generic framework that can accept data and analysis tools from any discipline. However, its current focus is primarily on neuroimaging research and studies of neurological diseases such as Autism, Parkinson's and Alzheimer's diseases, Multiple Sclerosis as well as on normal brain structure and development. This technical report presents the CBRAIN Platform, its current deployment and usage, and future directions. PMID:24904400

  20. Microdot - A Four-Bit Microcontroller Designed for Distributed Low-End Computing in Satellites

    NASA Astrophysics Data System (ADS)

    2002-03-01

    Many satellites are an integrated collection of sensors and actuators that require dedicated real-time control. For single processor systems, additional sensors require an increase in computing power and speed to provide the multi-tasking capability needed to service each sensor. Faster processors cost more and consume more power, which taxes a satellite's power resources and may lead to shorter satellite lifetimes. An alternative design approach is a distributed network of small and low power microcontrollers designed for space that handle the computing requirements of each individual sensor and actuator. The design of microdot, a four-bit microcontroller for distributed low-end computing, is presented. The design is based on previous research completed at the Space Electronics Branch, Air Force Research Laboratory (AFRL/VSSE) at Kirtland AFB, NM, and the Air Force Institute of Technology at Wright-Patterson AFB, OH. The Microdot has 29 instructions and a 1K x 4 instruction memory. The distributed computing architecture is based on the Philips Semiconductor I2C Serial Bus Protocol. A prototype was implemented and tested using an Altera Field Programmable Gate Array (FPGA). The prototype was operable to 9.1 MHz. The design was targeted for fabrication in a radiation-hardened-by-design gate-array cell library for the TSMC 0.35 micrometer CMOS process.

  1. A distributed, dynamic, parallel computational model: the role of noise in velocity storage

    PubMed Central

    Merfeld, Daniel M.

    2012-01-01

    Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic “real-time” calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique–“particle filtering”–that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal by comparing the estimate of our particle filter with the estimate of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibuloocular response and perception. PMID:22514288
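
    The propagate/weight/resample loop at the heart of a bootstrap particle filter can be sketched as follows for a one-dimensional velocity estimate; the dynamics, noise levels, and decay constant are placeholders, far simpler than the velocity-storage model in the paper.

        # Minimal bootstrap particle filter for a one-dimensional angular-velocity
        # estimate from noisy measurements. The leaky dynamics and noise levels are
        # placeholders; the sketch only illustrates the propagate/weight/resample loop.
        import numpy as np

        def particle_filter(measurements, n_particles=2000, decay=0.99,
                            process_std=0.05, meas_std=0.5, rng=None):
            if rng is None:
                rng = np.random.default_rng(0)
            particles = rng.normal(0.0, 5.0, n_particles)   # broad prior over velocity
            estimates = []
            for z in measurements:
                # Propagate each particle through the leaky dynamics plus process noise.
                particles = decay * particles + rng.normal(0.0, process_std, n_particles)
                # Weight particles by the likelihood of the noisy measurement.
                weights = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
                weights /= weights.sum()
                estimates.append(np.sum(weights * particles))
                # Resample to concentrate particles where the weights are large.
                idx = rng.choice(n_particles, size=n_particles, p=weights)
                particles = particles[idx]
            return np.array(estimates)

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            true_velocity = 5.0 * 0.99 ** np.arange(200)          # decaying rotation signal
            noisy = true_velocity + rng.normal(0.0, 0.5, 200)     # "afferent" noise
            est = particle_filter(noisy, rng=rng)
            print("final estimate vs truth:", round(est[-1], 3), round(true_velocity[-1], 3))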

  2. Slug flow experiments and computer simulation of slug length distribution in vertical pipes

    SciTech Connect

    Hasanein, H.A.; Tudose, G.T.; Wong, S.; Malik, M.; Esaki, S.; Kawaji, M.

    1996-12-31

    Experiments on air-kerosene developing slug flow in a vertical pipe were conducted, and the slug length distribution in vertical pipes was also simulated using a computer code. The aim of the experiments was to investigate the effect of flow parameters on the bubble rise velocity. The rise velocity of the trailing bubble was found to increase with its size, while the size of the leading bubble was found to affect its wake length and, therefore, the point at which the trailing bubble starts to accelerate. The bubble rise velocity data were correlated in terms of the separation distance between the leading and the trailing bubbles. A computer model was developed to simulate the slug length distribution in vertical pipes. The model was based on the bubble overtaking mechanism, in which trailing bubbles catch up with and merge with the leading bubbles. The computer model was able to predict reasonably accurately the liquid slug length distributions at the outlet, given specified inlet liquid slug length distributions.

  3. A strategy for improved computational efficiency of the method of anchored distributions

    NASA Astrophysics Data System (ADS)

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

    This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.

  4. A configurable distributed high-performance computing framework for satellite's TDI-CCD imaging simulation

    NASA Astrophysics Data System (ADS)

    Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang

    2010-11-01

    This paper presents a configurable distributed high-performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to accommodate multiple algorithms, helping to decrease simulation time at low cost. Imaging simulation for a satellite-mounted TDI-CCD comprises four processes: 1) degradation from the atmosphere, 2) degradation from the optical system, 3) degradation and re-sampling in the TDI-CCD electronics, and 4) data integration. Processes 1) to 3) use a variety of data-intensive algorithms such as FFT, convolution and Lagrange interpolation, which require substantial CPU power. Even with an Intel Xeon X5550 processor, the conventional serial method takes more than 30 hours for a simulation whose result image is 1500 x 1462 pixels. A literature survey found no mature distributed HPC framework in this field. We therefore developed a distributed computing framework for TDI-CCD imaging simulation based on WCF [1]; it uses a client/server (C/S) architecture and harvests idle CPU resources on the LAN. The server pushes the tasks of processes 1) to 3) to this free computing capacity, yielding HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, the framework reduced simulation time by about 74%, and adding more asymmetric nodes to the network reduced the time further. In conclusion, this framework can provide essentially unbounded computing capacity, provided that the network and the task-management server can keep up, and it offers a new HPC solution for TDI-CCD imaging simulation and similar applications.

  5. Lilith: A software framework for the rapid development of scalable tools for distributed computing

    SciTech Connect

    Gentile, A.C.; Evensky, D.A.; Armstrong, R.C.

    1997-12-31

    Lilith is a general-purpose tool that provides highly scalable, easy distribution of user code across a heterogeneous computing platform. By handling the details of code distribution and communication, such a framework allows for the rapid development of tools for the use and management of large distributed systems. This speed-up in development not only enables the easy creation of tools as needed but also facilitates the ultimate development of more refined, hard-coded tools. Lilith is written in Java, providing platform independence and further facilitating rapid tool development through object reuse and ease of development. The authors present the user-involved objects in the Lilith Distributed Object System and the Lilith User API. They present an example of tool development, illustrating the user calls, and present results demonstrating Lilith's scalability.

  6. Computation of the temperature distribution in cooled radial inflow turbine guide vanes

    NASA Technical Reports Server (NTRS)

    Tabakoff, W.; Hosny, W.; Hamed, A.

    1977-01-01

    A two-dimensional finite-difference numerical technique is presented for determining the temperature distribution in an internally cooled blade of radial turbine guide vanes. Simple convection cooling is assumed inside the guide vane. Such an arrangement results in relatively small cooling effectiveness at the leading edge and at the trailing edge. Heat transfer augmentation in these critical areas may be achieved by using impingement jets and film cooling. A computer program is written in Fortran IV for the IBM 370/165 computer.
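
    The finite-difference idea can be illustrated with a Jacobi iteration of the steady heat equation on a rectangular grid; the geometry and boundary temperatures below are placeholders and do not represent the guide-vane problem solved in the paper.

        # Minimal 2-D finite-difference sketch: Jacobi iteration of the steady heat
        # equation on a rectangular grid with fixed edge temperatures. The geometry and
        # boundary values are placeholders, far simpler than the internally cooled
        # guide-vane problem treated in the paper.
        import numpy as np

        def steady_temperature(nx=60, ny=40, t_hot=1500.0, t_cool=600.0,
                               tol=1e-3, max_iter=20000):
            T = np.full((ny, nx), t_cool)
            T[0, :] = t_hot          # hot gas side
            T[-1, :] = t_cool        # coolant side
            T[:, 0] = T[:, -1] = t_cool
            for _ in range(max_iter):
                T_new = T.copy()
                # Five-point Laplacian average on interior nodes.
                T_new[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                                            T[1:-1, :-2] + T[1:-1, 2:])
                if np.max(np.abs(T_new - T)) < tol:
                    return T_new
                T = T_new
            return T

        if __name__ == "__main__":
            T = steady_temperature()
            print("mid-wall temperature:",
                  round(T[T.shape[0] // 2, T.shape[1] // 2], 1), "K")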

  7. Distributed-Memory Computing With the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA)

    NASA Technical Reports Server (NTRS)

    Riley, Christopher J.; Cheatwood, F. McNeil

    1997-01-01

    The Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA), a Navier-Stokes solver, has been modified for use in a parallel, distributed-memory environment using the Message-Passing Interface (MPI) standard. A standard domain decomposition strategy is used in which the computational domain is divided into subdomains with each subdomain assigned to a processor. Performance is examined on dedicated parallel machines and a network of desktop workstations. The effect of domain decomposition and frequency of boundary updates on performance and convergence is also examined for several realistic configurations and conditions typical of large-scale computational fluid dynamic analysis.

  8. The maintenance, distribution and development of biomedical computer software: an exercise in software engineering.

    PubMed

    Boston, R C; Granek, H; Sutton, N; Weber, K; Greif, P; Zech, L

    1986-06-01

    The growing reliance of biomedical investigators on computer software in almost all facets of their work places considerable emphasis on the need for integrated management of that software. In order to efficiently develop, distribute, and maintain the software, tools are required which not only automate these tasks but also, wherever possible, 'semi-intelligently' alert their users to irregular situations. We describe an assortment of such tools routinely used in the management of the SAAM/CONSAM biokinetic software and illustrate their application. Furthermore, using these techniques we present some comparative performance results for numerical integrators and for computer processors.

  9. An optimized pre-moderator improves uniformity of activation rate distribution in an ORNL phantom-IVNAA facility

    NASA Astrophysics Data System (ADS)

    Ebrahimi-Khankook, Atiyeh; Rafat-Motavalli, Laleh; Miri-Hakimabad, Seyyed Hashem

    2013-04-01

    Uniformity of the activation rate distribution through the human body is extremely important for in vivo analysis of body elements by the neutron activation method. Achieving uniformity can be difficult because of the non-homogeneous body shape and composition. The pre-moderator is one of the most essential parts of the irradiation facility for providing a uniform distribution over the sample. The aim of the present study was the design of an optimal pre-moderator, in terms of shape and material, that compensates for the detrimental effects of body shape and allows satisfactory uniformity of the activation rate in the sample. Our final calculations indicated that using two slabs of paraffin with a thickness of 1.8 cm as a pre-moderator, in the presence of a reflector/moderator, achieves the most uniform distribution of activation rate in the body.

  10. Effects of wind-energy facilities on breeding grassland bird distributions.

    PubMed

    Shaffer, Jill A; Buhl, Deborah A

    2016-02-01

    The contribution of renewable energy to meet worldwide demand continues to grow. Wind energy is one of the fastest growing renewable sectors, but new wind facilities are often placed in prime wildlife habitat. Long-term studies that incorporate a rigorous statistical design to evaluate the effects of wind facilities on wildlife are rare. We conducted a before-after-control-impact (BACI) assessment to determine if wind facilities placed in native mixed-grass prairies displaced breeding grassland birds. During 2003-2012, we monitored changes in bird density in 3 study areas in North Dakota and South Dakota (U.S.A.). We examined whether displacement or attraction occurred 1 year after construction (immediate effect) and the average displacement or attraction 2-5 years after construction (delayed effect). We tested for these effects overall and within distance bands of 100, 200, 300, and >300 m from turbines. We observed displacement for 7 of 9 species. One species was unaffected by wind facilities and one species exhibited attraction. Displacement and attraction generally occurred within 100 m and often extended up to 300 m. In a few instances, displacement extended beyond 300 m. Displacement and attraction occurred 1 year after construction and persisted at least 5 years. Our research provides a framework for applying a BACI design to displacement studies and highlights the erroneous conclusions that can be made without the benefit of adopting such a design. More broadly, species-specific behaviors can be used to inform management decisions about turbine placement and the potential impact to individual species. Additionally, the avoidance distance metrics we estimated can facilitate future development of models evaluating impacts of wind facilities under differing land-use scenarios.

  12. Laser performance operations model (LPOM): The computational system that automates the setup and performance analysis of the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Shaw, Michael; House, Ronald

    2015-02-01

    The National Ignition Facility (NIF) is a stadium-sized facility containing a 192-beam, 1.8 MJ, 500-TW, 351-nm laser system together with a 10-m diameter target chamber with room for many target diagnostics. NIF is the world's largest laser experimental system, providing a national center to study inertial confinement fusion and the physics of matter at extreme energy densities and pressures. A computational system, the Laser Performance Operations Model (LPOM), has been developed that automates the laser setup process and accurately predicts laser energetics. LPOM uses diagnostic feedback from previous NIF shots to maintain accurate energetics models (gains and losses), as well as links to operational databases to provide 'as currently installed' optical layouts for each of the 192 NIF beamlines. LPOM deploys a fully integrated laser physics model, the Virtual Beamline (VBL), in its predictive calculations in order to meet the accuracy requirements of NIF experiments and to provide the ability to determine the damage risk to optical elements throughout the laser chain. LPOM determines the settings of the injection laser system required to achieve the desired laser output, provides equipment protection, and determines the diagnostic setup. Additionally, LPOM provides real-time post-shot data analysis and reporting for each NIF shot. The LPOM computational system is designed as a multi-host computational cluster (with 200 compute nodes, providing the capability to run full NIF simulations fully in parallel) to meet the demands of both the control systems within a shot cycle and the NIF user community outside of a shot cycle.

  13. Lilith: A Java framework for the development of scalable tools for high performance distributed computing platforms

    SciTech Connect

    Evensky, D.A.; Gentile, A.C.; Armstrong, R.C.

    1998-03-19

    Increasingly, high performance computing involves the use of very large heterogeneous clusters of machines. The use and maintenance of such clusters require communicating between the machines in a time-efficient and secure manner. Lilith is a general-purpose tool that provides highly scalable, secure, and easy distribution of user code across a heterogeneous computing platform. By handling the details of code distribution and communication, such a framework allows for the rapid development of tools for the use and management of large distributed systems. Lilith is written in Java, taking advantage of Java's ability to load and distribute code dynamically, its platform independence, its thread support, and its provision of graphical components that make the resulting tools easy to use. The authors describe the use of Lilith in a tool developed for the maintenance of the large distributed cluster at their institution and present details of the Lilith architecture and user API for the general development of scalable tools.

  14. SU-E-T-531: Performance Evaluation of Multithreaded Geant4 for Proton Therapy Dose Calculations in a High Performance Computing Facility

    SciTech Connect

    Shin, J; Coss, D; McMurry, J; Farr, J; Faddegon, B

    2014-06-01

    Purpose: To evaluate the efficiency of multithreaded Geant4 (Geant4-MT, version 10.0) for proton Monte Carlo dose calculations using a high performance computing facility. Methods: Geant4-MT was used to calculate 3D dose distributions in 1×1×1 mm3 voxels in a water phantom and a patient's head with a 150 MeV proton beam covering approximately 5×5 cm2 in the water phantom. Three timestamps were measured on the fly to separately analyze the time required for initialization (which cannot be parallelized), the processing time of individual threads, and the completion time. Scalability of the averaged processing time per thread was calculated as a function of thread number (1, 100, 150, and 200) for both 1 M and 50 M histories. The total memory usage was recorded. Results: Simulations with 50 M histories were fastest with 100 threads, taking approximately 1.3 hours and 6 hours for the water phantom and the CT data, respectively, with better than 1.0% statistical uncertainty. The calculations show 1/N scalability in the event loops for both cases. The gains from parallel calculation started to decrease at 150 threads. The memory usage increases linearly with the number of threads. No critical failures were observed during the simulations. Conclusion: Multithreading in Geant4-MT decreased simulation time for proton dose distribution calculations by factors of 64 and 54 at a near-optimal 100 threads for the water phantom and the patient data, respectively. Further simulations will be done to determine the efficiency at the optimal thread number. Considering the trend of computer architecture development, utilizing Geant4-MT for radiotherapy simulations is an excellent cost-effective alternative to a distributed batch-queuing system. However, because the scalability depends highly on simulation details, i.e., the ratio of the processing time of one event to the waiting time to access the shared event queue, a performance evaluation as described here is recommended.
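
    The scalability bookkeeping reported above (speedup, efficiency, and where the gains flatten out) can be reproduced from wall-time measurements with a short script; the timing numbers below are invented for illustration, not the paper's measurements.

        # Sketch of the kind of scalability bookkeeping described in the abstract:
        # compute speedup and parallel efficiency from measured wall times, and estimate
        # the non-parallelizable fraction with Amdahl's law. Timing values are invented.
        def speedup_table(wall_times):
            """wall_times: {threads: seconds}; returns (threads, speedup, efficiency) rows."""
            t1 = wall_times[1]
            return [(n, t1 / t, (t1 / t) / n) for n, t in sorted(wall_times.items())]

        def amdahl_serial_fraction(n_threads, speedup):
            """Solve S = 1 / (f + (1 - f)/N) for the serial fraction f."""
            return (n_threads / speedup - 1.0) / (n_threads - 1.0)

        if __name__ == "__main__":
            times = {1: 60000.0, 100: 900.0, 150: 800.0, 200: 820.0}   # hypothetical seconds
            for n, s, e in speedup_table(times):
                print(f"{n:4d} threads: speedup {s:6.1f}, efficiency {e:5.2f}")
            print("Amdahl serial fraction at 100 threads:",
                  round(amdahl_serial_fraction(100, times[1] / times[100]), 4))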

  15. Issues in ATM Support of High-Performance, Geographically Distributed Computing

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.; Dowd, Patrick W.; Srinidhi, Saragur M.; Blade, Eric D.G

    1995-01-01

    This report experimentally assesses the effect of the underlying network in a cluster-based computing environment. The assessment is quantified by application-level benchmarking, process-level communication, and network file input/output. Two testbeds were considered, one small cluster of Sun workstations and another large cluster composed of 32 high-end IBM RS/6000 platforms. The clusters had Ethernet, fiber distributed data interface (FDDI), Fibre Channel, and asynchronous transfer mode (ATM) network interface cards installed, providing the same processors and operating system for the entire suite of experiments. The primary goal of this report is to assess the suitability of an ATM-based, local-area network to support interprocess communication and remote file input/output systems for distributed computing.

  16. Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle

    1999-01-01

    This paper describes an experiment in which a large-scale scientific application developed for tightly coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.

  17. PUMMA: Parallel Universal Matrix Multiplication Algorithms on distributed memory concurrent computers

    SciTech Connect

    Choi, Jaeyoung; Walker, D.W.; Dongarra, J.J. |

    1993-08-01

    This paper describes the Parallel Universal Matrix Multiplication Algorithms (PUMMA) on distributed memory concurrent computers. The PUMMA package includes not only the non-transposed matrix multiplication routine C = A·B, but also the transposed multiplication routines C = A^T·B, C = A·B^T, and C = A^T·B^T, for a block scattered data distribution. The routines perform efficiently for a wide range of processor configurations and block sizes. Together, the PUMMA routines provide the same functionality as the Level 3 BLAS routine xGEMM. Details of the parallel implementation of the routines are given, and results are presented for runs on the Intel Touchstone Delta computer.
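
    The four operation variants map onto op(A)·op(B) with op either the identity or a transpose, as in xGEMM. A single-process blocked sketch is shown below; PUMMA's block-scattered distribution and interprocessor communication are not reproduced.

        # Sketch of the four operation variants provided by PUMMA, reduced to a simple
        # blocked matrix multiply on a single process. The actual package distributes
        # the blocks across processors in a block-scattered layout; none of that
        # communication machinery is reproduced here.
        import numpy as np

        def blocked_matmul(a, b, block=32):
            m, k = a.shape
            k2, n = b.shape
            assert k == k2
            c = np.zeros((m, n))
            for i in range(0, m, block):
                for j in range(0, n, block):
                    for p in range(0, k, block):
                        c[i:i+block, j:j+block] += (a[i:i+block, p:p+block]
                                                    @ b[p:p+block, j:j+block])
            return c

        def pumma_like(a, b, trans_a=False, trans_b=False, block=32):
            """C = op(A) @ op(B) with op in {identity, transpose}, as in xGEMM."""
            return blocked_matmul(a.T if trans_a else a, b.T if trans_b else b, block)

        if __name__ == "__main__":
            rng = np.random.default_rng(4)
            A, B = rng.standard_normal((96, 64)), rng.standard_normal((96, 64))
            assert np.allclose(pumma_like(A, B, trans_a=True), A.T @ B)
            assert np.allclose(pumma_like(A, B, trans_b=True), A @ B.T)
            print("blocked results match dense reference")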

  18. Detailed computational procedure for design of cascade blades with prescribed velocity distributions in compressible potential flows

    NASA Technical Reports Server (NTRS)

    Costello, George R; Cummings, Robert L; Sinnette, John T , Jr

    1952-01-01

    A detailed step-by-step computational outline is presented for the design of two-dimensional cascade blades having a prescribed velocity distribution on the blade in a potential flow of the usual compressible fluid. The outline is based on the assumption that the magnitude of the velocity in the flow of the usual compressible nonviscous fluid is proportional to the magnitude of the velocity in the flow of a compressible nonviscous fluid with linear pressure-volume relation.

  19. The design of a standard message passing interface for distributed memory concurrent computers

    SciTech Connect

    Walker, D.W.

    1993-10-01

    This paper presents an overview of MPI, a proposed standard message passing interface for MIMD distributed memory concurrent computers. The design of MPI has been a collective effort involving researchers in the United States and Europe from many organizations and institutions. MPI includes point-to-point and collective communication routines, as well as support for process groups, communication contexts, and application topologies. While making use of new ideas where appropriate, the MPI standard is based largely on current practice.
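
    Below is a minimal example of the point-to-point and collective routines the standard defines, written here with the mpi4py Python bindings purely for brevity (MPI itself specifies C and Fortran interfaces).

        # Minimal illustration of MPI point-to-point and collective communication using
        # the mpi4py bindings. Run with, e.g.:
        #   mpiexec -n 4 python mpi_sketch.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        size = comm.Get_size()

        # Point-to-point: each rank sends a greeting to rank 0.
        if rank != 0:
            comm.send(f"hello from rank {rank}", dest=0, tag=11)
        else:
            for source in range(1, size):
                print(comm.recv(source=source, tag=11))

        # Collective: sum a per-rank value across the communicator.
        total = comm.reduce(rank * rank, op=MPI.SUM, root=0)
        if rank == 0:
            print("sum of squared ranks:", total)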

  20. Vibration suppression with approximate finite dimensional compensators for distributed systems: Computational methods and experimental results

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Smith, Ralph C.; Wang, Yun

    1994-01-01

    Based on a distributed parameter model for vibrations, an approximate finite dimensional dynamic compensator is designed to suppress vibrations (multiple modes with a broad band of frequencies) of a circular plate with Kelvin-Voigt damping and clamped boundary conditions. The control is realized via piezoceramic patches bonded to the plate and is calculated from information available from several pointwise observed state variables. Examples from computational studies as well as use in laboratory experiments are presented to demonstrate the effectiveness of this design.

  1. Sources and Distribution of Polychlorinated Terphenyls at a Major US Aeronautics Research Facility.

    PubMed

    HALE; ENOS; GALLAGHER

    1998-11-01

    High concentrations of an unusual, complex mixture of chlorinated compounds were discovered in sediments and oysters near a federal aeronautics facility during implementation of a pollutant screening protocol. The mixture was identified as Aroclor 5432, a polychlorinated terphenyl (PCT) formulation, produced in the US until 1972. PCTs, particularly low chlorinated mixtures, have rarely been reported in the environment, despite significant manufacture and usage. Releases were traced to two outfalls. Creek sediments downstream of one contained concentrations as high as 200,000 µg/kg (dry weight basis); those in indigenous oysters reached 35,000 µg/kg, indicating significant bioavailability and bioaccumulation potential. Subsequent work showed that PCTs were widely disseminated in marsh grass, crabs, and fish. PCTs, PCBs, and mercury were also detected in storm drain lines entering these outfalls. The lines received input from both storm water and research buildings. Historical hydraulic fluid leaks and in-service compressor fluids in some buildings contained PCTs and PCBs. Contaminated materials on-site were removed to minimize pollutant spread. Aroclor 5432 usage, most likely as compressor/hydraulic fluid additives, probably ended about ten years prior to its on-site detection. In terms of biological effects, intraperitoneal injection of fish with Aroclor 5432 induced cytochrome P-4501A (CYP1A) and ethoxyresorufin O-deethylase (EROD) activity to a similar degree as PCB Aroclor 1254 and to a greater extent than PCT Aroclor 5460. The presence of high concentrations of PCTs contributed to the facility being included on the National Priorities List. It subsequently became the first US federal facility to sign a Federal Facility Agreement, identifying cleanup responsibilities, prior to formal listing. KEY WORDS: Polychlorinated terphenyls; Aroclor; Contaminated sediments; Hydraulic fluid; Enzyme induction; Polychlorinated biphenyls

  2. The influence of facility agriculture production on phthalate esters distribution in black soils of northeast China.

    PubMed

    Zhang, Ying; Wang, Pengjie; Wang, Lei; Sun, Guoqiang; Zhao, Jiaying; Zhang, Hui; Du, Na

    2015-02-15

    The current study investigates the occurrence of 15 phthalate esters (PAEs) in surface soils (27 samples) collected from 9 different facility agriculture sites in the black soil region of northeast China during the process of agricultural production (spanning three seasons: spring, summer and autumn). Concentrations of the 15 PAEs varied significantly from spring to autumn, ranging from 1.37 to 4.90 mg/kg-dw, with a median value of 2.83 mg/kg-dw. The highest concentration of the 15 PAEs (4.90 mg/kg-dw) was found in summer, when mulching film was used in the greenhouses. An increase in environmental temperature was probably a major reason for PAE transfer from the mulching film into the soil, coupled with the increased usage of chemical fertilizers in greenhouses. Results showed that, of the 15 PAEs, di(2-ethylhexyl) phthalate (DEHP), di-n-butyl phthalate (DBP), diethyl phthalate (DEP) and dimethyl phthalate (DMP) were most abundant, with mean values of 1.12 ± 0.22, 0.46 ± 0.05, 0.36 ± 0.04, and 0.17 ± 0.01 mg/kg-dw, respectively; their average contributions in spring, summer, and autumn ranged between 64.08 and 90.51% of the 15 PAEs. The results of Principal Component Analysis (PCA) indicated that the concentrations of these four main PAEs differed significantly among the facility agriculture sites investigated during the process of agricultural production. In comparison with previous domestic and foreign studies, the black soils under facility agriculture in northeast China are more polluted than non-facility agricultural soils.

  3. Distributed, layered and reliable computing nets to represent neuronal receptive fields.

    PubMed

    Moreno-Diaz, Arminda; de Blasio, Gabriel; Moreno-Diaz, Roberto

    2014-04-01

    Receptive fields of retinal and other sensory neurons show a large variety of linear and nonlinear spatiotemporal responses to local stimuli. In visual neurons, these responses present either asymmetric sensitive zones or center-surround organization. In most cases, the nature of the responses suggests that a kind of distributed computation takes place prior to the integration by the final cell, which is evidently supported by the anatomy. We describe new kinds of discrete and continuous filters to model the computations taking place in the receptive fields of retinal cells. To show their performance in the analysis of different non-trivial neuron-like structures, we use a computer tool specifically programmed by the authors for that purpose. The tool is also extended to study the effect of lesions on the overall performance of our model nets.
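
    As a point of reference for the kind of spatial weighting functions being modeled, a classic center-surround receptive field can be written as a difference of Gaussians (DoG). The sketch below shows only that standard construction as an illustrative baseline; it is not the new class of distributed filters proposed by the authors, and all parameter values are arbitrary.

    import numpy as np

    # Classic difference-of-Gaussians (DoG) center-surround receptive field,
    # shown only as a familiar baseline for receptive-field weighting functions;
    # it is not the distributed filter construction proposed in the paper.

    def dog_kernel(size=41, sigma_center=2.0, sigma_surround=6.0, k_surround=0.9):
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        r2 = xx**2 + yy**2
        center = np.exp(-r2 / (2 * sigma_center**2)) / (2 * np.pi * sigma_center**2)
        surround = np.exp(-r2 / (2 * sigma_surround**2)) / (2 * np.pi * sigma_surround**2)
        return center - k_surround * surround

    def respond(stimulus, kernel):
        """Linear stage of the receptive field: inner product of stimulus and kernel."""
        return float((stimulus * kernel).sum())

    kernel = dog_kernel()
    spot = np.zeros_like(kernel)
    spot[18:23, 18:23] = 1.0                 # small bright spot over the center
    full = np.ones_like(kernel)              # full-field illumination
    print("response to centered spot:", round(respond(spot, kernel), 4))
    print("response to full field   :", round(respond(full, kernel), 4))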

  4. Redundancy management for efficient fault recovery in NASA's distributed computing system

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw; Pandya, Mihir; Yau, Kitty

    1991-01-01

    The management of redundancy in computer systems was studied, and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management through efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize resources for embedding computational task graphs in the system architecture and for reconfiguring these tasks after a failure has occurred. Computational structures represented by paths and complete binary trees were considered, and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of the Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.

  5. A Role for Synaptic Input Distribution in a Dendritic Computation of Motion Direction in the Retina.

    PubMed

    Vlasits, Anna L; Morrie, Ryan D; Tran-Van-Minh, Alexandra; Bleckert, Adam; Gainer, Christian F; DiGregorio, David A; Feller, Marla B

    2016-03-16

    The starburst amacrine cell in the mouse retina presents an opportunity to examine the precise role of sensory input location on neuronal computations. Using visual receptive field mapping, glutamate uncaging, two-photon Ca(2+) imaging, and genetic labeling of putative synapses, we identify a unique arrangement of excitatory inputs and neurotransmitter release sites on starburst amacrine cell dendrites: the excitatory input distribution is skewed away from the release sites. By comparing computational simulations with Ca(2+) transients recorded near release sites, we show that this anatomical arrangement of inputs and outputs supports a dendritic mechanism for computing motion direction. Direction-selective Ca(2+) transients persist in the presence of a GABA-A receptor antagonist, though the directional tuning is reduced. These results indicate a synergistic interaction between dendritic and circuit mechanisms for generating direction selectivity in the starburst amacrine cell. PMID:26985724

  6. System design and algorithmic development for computational steering in distributed environments

    SciTech Connect

    Wu, Qishi; Zhu, Mengxia; Gu, Yi; Rao, Nageswara S

    2010-03-01

    Supporting visualization pipelines over wide-area networks is critical to enabling large-scale scientific applications that require visual feedback to interactively steer online computations. We propose a remote computational steering system that employs analytical models to estimate the cost of computing and communication components and optimizes the overall system performance in distributed environments with heterogeneous resources. We formulate and categorize the visualization pipeline configuration problems for maximum frame rate into three classes according to the constraints on node reuse or resource sharing, namely no, contiguous, and arbitrary reuse. We prove all three problems to be NP-complete and present heuristic approaches based on a dynamic programming strategy. The superior performance of the proposed solution is demonstrated with extensive simulation results in comparison with existing algorithms and is further evidenced by experimental results collected on a prototype implementation deployed over the Internet.
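
    For the contiguous-reuse case, the flavor of the underlying optimization can be conveyed with a classic linear-partition dynamic program: split a chain of pipeline modules into contiguous groups, one group per node, so that the slowest group (the frame-rate bottleneck) is as fast as possible. The sketch below is a simplified illustration that assumes identical node speeds and ignores network transfer costs; it is not the authors' algorithm or cost model.

    # Illustrative sketch: map a linear visualization pipeline onto k nodes so that
    # the modules assigned to a node are contiguous and the slowest stage is
    # minimized (the bottleneck stage bounds the achievable frame rate).
    # Simplifying assumptions (not from the paper): identical node speeds, no link costs.

    def partition_pipeline(costs, k):
        """Return (bottleneck_time, groups) for the best contiguous partition of
        `costs` (per-module compute times) into k groups."""
        n = len(costs)
        prefix = [0.0] * (n + 1)
        for i, c in enumerate(costs):
            prefix[i + 1] = prefix[i] + c

        INF = float("inf")
        # dp[j][i] = minimal possible bottleneck when the first i modules use j nodes
        dp = [[INF] * (n + 1) for _ in range(k + 1)]
        cut = [[0] * (n + 1) for _ in range(k + 1)]
        dp[0][0] = 0.0
        for j in range(1, k + 1):
            for i in range(1, n + 1):
                for split in range(j - 1, i):      # last group = modules split..i-1
                    group = prefix[i] - prefix[split]
                    candidate = max(dp[j - 1][split], group)
                    if candidate < dp[j][i]:
                        dp[j][i] = candidate
                        cut[j][i] = split

        # Recover the grouping.
        groups, i = [], n
        for j in range(k, 0, -1):
            s = cut[j][i]
            groups.append(list(range(s, i)))
            i = s
        groups.reverse()
        return dp[k][n], [g for g in groups if g]

    if __name__ == "__main__":
        stage_costs = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]   # hypothetical per-module times
        bottleneck, mapping = partition_pipeline(stage_costs, k=3)
        print("bottleneck stage time:", bottleneck)     # frame rate ~ 1 / bottleneck
        print("module groups per node:", mapping)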

  8. Improving the Analysis, Storage and Sharing of Neuroimaging Data using Relational Databases and Distributed Computing

    PubMed Central

    Hasson, Uri; Skipper, Jeremy I.; Wilde, Michael J.; Nusbaum, Howard C.; Small, Steven L.

    2007-01-01

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data. PMID:17964812
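
    To illustrate what it means for queries to be an integral part of the analysis, the sketch below stores hypothetical per-trial ROI summaries in a relational table and answers an analysis question with a single SQL query. The schema and data are invented for the example, and SQLite stands in for the production database server described in the paper.

    import sqlite3

    # Toy illustration of query-driven analysis: per-trial ROI summaries live in a
    # relational table, and an analysis question ("mean activation per condition
    # and region") is answered with one SQL query. Schema and data are invented;
    # SQLite stands in for a production database server.

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE roi_timeseries (
                        subject TEXT, condition TEXT, region TEXT,
                        trial INTEGER, mean_signal REAL)""")

    rows = [
        ("s01", "speech", "STG", 1, 1.8), ("s01", "speech", "STG", 2, 2.1),
        ("s01", "rest",   "STG", 1, 0.3), ("s02", "speech", "STG", 1, 1.5),
        ("s02", "rest",   "STG", 1, 0.4), ("s02", "speech", "IFG", 1, 1.1),
    ]
    conn.executemany("INSERT INTO roi_timeseries VALUES (?, ?, ?, ?, ?)", rows)

    query = """SELECT condition, region,
                      AVG(mean_signal) AS mean_activation,
                      COUNT(*)         AS n_trials
               FROM roi_timeseries
               GROUP BY condition, region
               ORDER BY condition, region"""
    for condition, region, mean_activation, n in conn.execute(query):
        print(f"{condition:7s} {region:4s} mean={mean_activation:.2f} (n={n})")
    conn.close()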

  9. A scalable parallel black oil simulator on distributed memory parallel computers

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Liu, Hui; Chen, Zhangxin

    2015-11-01

    This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.

  10. Population-based learning of load balancing policies for a distributed computer system

    NASA Technical Reports Server (NTRS)

    Mehra, Pankaj; Wah, Benjamin W.

    1993-01-01

    Effective load-balancing policies use dynamic resource information to schedule tasks in a distributed computer system. We present a novel method for automatically learning such policies. At each site in our system, we use a comparator neural network to predict the relative speedup of an incoming task using only the resource-utilization patterns obtained prior to the task's arrival. Outputs of these comparator networks are broadcast periodically over the distributed system, and the resource schedulers at each site use these values to determine the best site for executing an incoming task. The delays incurred in propagating workload information and tasks from one site to another, as well as the dynamic and unpredictable nature of workloads in multiprogrammed multiprocessors, may cause the workload pattern at the time of execution to differ from patterns prevailing at the times of load-index computation and decision making. Our load-balancing policy accommodates this uncertainty by using certain tunable parameters. We present a population-based machine-learning algorithm that adjusts these parameters in order to achieve high average speedups with respect to local execution. Our results show that our load-balancing policy, when combined with the comparator neural network for workload characterization, is effective in exploiting idle resources in a distributed computer system.
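
    The decision step can be illustrated with a small sketch in which each site periodically broadcasts a load index (here a placeholder linear function of recent resource utilization standing in for the comparator network's output) and the scheduler sends an incoming task to the site advertising the best predicted speedup, subject to a tunable margin. The index function, the margin, and the simulated readings are assumptions for illustration only.

    import random
    import time

    # Illustrative sketch of index-based site selection (not the paper's comparator
    # network): each site advertises a predicted relative speedup for new tasks,
    # derived here from a placeholder linear model over recent utilization.

    def load_index(cpu_util, run_queue_len, free_mem_frac):
        """Hypothetical stand-in for the comparator network's predicted speedup."""
        return (1.0 - cpu_util) * 2.0 + free_mem_frac - 0.3 * run_queue_len

    class Site:
        def __init__(self, name):
            self.name = name
            self.advertised = 0.0
            self.last_update = 0.0

        def broadcast(self):
            # In a real system these would be measured; here we simulate readings.
            self.advertised = load_index(random.random(), random.randint(0, 4),
                                         random.random())
            self.last_update = time.time()

    def schedule(task, sites, local, min_gain=0.2):
        """Pick the site with the best advertised index; keep the task local unless
        the best remote index beats the local one by a tunable margin (min_gain),
        which absorbs the staleness of broadcast information."""
        best = max(sites, key=lambda s: s.advertised)
        if best is not local and best.advertised > local.advertised + min_gain:
            return best
        return local

    if __name__ == "__main__":
        sites = [Site(n) for n in ("a", "b", "c", "d")]
        for s in sites:
            s.broadcast()
        chosen = schedule("task-42", sites, local=sites[0])
        print("execute on:", chosen.name)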

  11. A visualization tool for parallel and distributed computing using the Lilith framework

    SciTech Connect

    Gentile, A.C.; Evensky, D.A.; Wyckoff, P.

    1998-05-01

    The authors present Lilith Lights, a visualization tool for monitoring and debugging codes run in a parallel and distributed computing environment. The tool can be used both for debugging parallel codes and for resource management of clusters. It was developed under Lilith, a framework for creating scalable software tools for distributed computing. The use of Lilith provides scalable, non-invasive debugging, as opposed to other commonly used software debugging and visualization tools. Furthermore, by implementing the visualization tool in software rather than in hardware (as available on some MPPs), Lilith Lights is easily transferable to other machines and well adapted for use on distributed clusters of machines. The information it provides in a clustered environment can further be used for resource management of the cluster. In this paper, the authors introduce Lilith Lights, discuss its use on the Computational Plant cluster at Sandia National Laboratories, show its design and development under the Lilith framework, and present metrics for resource use and performance.

  12. The Use of Public Computing Facilities by Library Patrons: Demography, Motivations, and Barriers

    ERIC Educational Resources Information Center

    DeMaagd, Kurt; Chew, Han Ei; Huang, Guanxiong; Khan, M. Laeeq; Sreenivasan, Akshaya; LaRose, Robert

    2013-01-01

    Public libraries play an important part in the development of a community. Today, they are seen as more than store houses of books; they are also responsible for the dissemination of online, and offline information. Public access computers are becoming increasingly popular as more and more people understand the need for internet access. Using a…

  13. Computer Education in Schools: The Distribution Model and the Integration Model in the Federal Republic of Germany.

    ERIC Educational Resources Information Center

    Frey, Karl

    This paper discusses two conflicting opinions on the role of computer education within the West German school curriculum, i.e., the opinion of the majority of the education ministers and administrators, who wish to see computer use distributed over as many school subjects as possible, and a minority of specialists in computer education who prefer…

  14. Advancing a distributed multi-scale computing framework for large-scale high-throughput discovery in materials science.

    PubMed

    Knap, J; Spear, C E; Borodin, O; Leiter, K W

    2015-10-30

    We describe the development of a large-scale high-throughput application for discovery in materials science. Our point of departure is a computational framework for distributed multi-scale computation. We augment the original framework with a specialized module whose role is to route evaluation requests needed by the high-throughput application to a collection of available computational resources. We evaluate the feasibility and performance of the resulting high-throughput computational framework by carrying out a high-throughput study of battery solvents. Our results indicate that distributed multi-scale computing, by virtue of its adaptive nature, is particularly well-suited for building high-throughput applications.
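
    The routing module's job, dispatching many independent evaluation requests to whatever computational resources are currently free, can be sketched as a plain work queue. The sketch below uses Python's multiprocessing pool as a stand-in for the framework's collection of registered resources; the request payloads and the evaluation function are placeholders, not the authors' implementation.

    from multiprocessing import Pool
    import math

    # Illustrative sketch of high-throughput request routing: a pool of workers
    # (standing in for the framework's registered computational resources) pulls
    # evaluation requests from a shared queue as soon as a worker becomes free.

    def evaluate(candidate):
        """Placeholder for a single multi-scale evaluation (e.g. one solvent)."""
        x = candidate["parameter"]
        return candidate["name"], math.exp(-x) * math.sin(10 * x)

    if __name__ == "__main__":
        requests = [{"name": f"solvent-{i}", "parameter": i / 50.0} for i in range(200)]
        with Pool(processes=8) as pool:
            # imap_unordered returns results as workers finish, which keeps all
            # resources busy regardless of per-request runtime variation.
            for name, score in pool.imap_unordered(evaluate, requests):
                print(name, round(score, 4))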

  15. Contaminant distributions at typical U.S. uranium milling facilities and their effect on remedial action decisions

    SciTech Connect

    Hamp, S.; Jackson, T.J.; Dotson, P.W.

    1995-03-01

    Past operations at uranium processing sites throughout the US have resulted in local contamination of soils and ground water by radionuclides, toxic metals, or both. Understanding the origin of contamination and how the constituents are distributed is a basic element for planning remedial action decisions. This report describes the radiological and nonradiological species found in ground water at a typical US uranium milling facility. The report will provide the audience with an understanding of the vast spectrum of contaminants that must be controlled in planning solutions to the long-term management of these waste materials.

  16. Node Resource Manager: A Distributed Computing Software Framework Used for Solving Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.

    2011-12-01

    With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Unit) and GPUs (Graphics Processing Unit) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively-parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware including faster CPU's, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic

  17. Impact of uniform electrode current distribution on ETF. [Engineering Test Facility MHD generator

    NASA Technical Reports Server (NTRS)

    Bents, D. J.

    1982-01-01

    A basic reason for the complexity and sheer volume of electrode consolidation hardware in the MHD ETF Powertrain system is the channel electrode current distribution, which is non-uniform. If the channel design is altered to provide uniform electrode current distribution, the amount of hardware required decreases considerably, but at the possible expense of degraded channel performance. This paper explains the design impacts on the ETF electrode consolidation network associated with uniform channel electrode current distribution, and presents the alternate consolidation designs which occur. They are compared to the baseline (non-uniform current) design with respect to performance, and hardware requirements. A rational basis is presented for comparing the requirements for the different designs and the savings that result from uniform current distribution. Performance and cost impacts upon the combined cycle plant are discussed.

  18. Lilith: A software framework for the rapid development of scalable tools for distributed computing

    SciTech Connect

    Gentile, A.C.; Evensky, D.A.; Armstrong, R.C.

    1998-03-01

    Lilith is a general purpose framework, written in Java, that provides highly scalable distribution of user code across a heterogeneous computing platform. By creating suitable user code, the Lilith framework can be used for tool development. The scalable performance provided by Lilith is crucial to the development of effective tools for large distributed systems. Furthermore, since Lilith handles the details of code distribution and communication, the user code need focus primarily on the tool functionality, thus greatly decreasing the time required for tool development. In this paper, the authors concentrate on the use of the Lilith framework to develop scalable tools. They review the functionality of Lilith and introduce a typical tool capitalizing on the features of the framework. They present new Objects directly involved with tool creation, explain details of development, and illustrate with an example. They also present timing results demonstrating scalability.

  19. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    SciTech Connect

    Gallarno, George; Rogers, James H; Maxwell, Don E

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  20. Computer simulated building energy consumption for verification of energy conservation measures in network facilities

    NASA Technical Reports Server (NTRS)

    Plankey, B.

    1981-01-01

    A computer program called ECPVER (Energy Consumption Program - Verification) was developed to simulate all energy loads for any number of buildings. The program computes simulated daily, monthly, and yearly energy consumption which can be compared with actual meter readings for the same time period. Such comparison can lead to validation of the model under a variety of conditions, which allows it to be used to predict future energy saving due to energy conservation measures. Predicted energy saving can then be compared with actual saving to verify the effectiveness of those energy conservation changes. This verification procedure is planned to be an important advancement in the Deep Space Network Energy Project, which seeks to reduce energy cost and consumption at all DSN Deep Space Stations.

  1. Linking and Combining Distributed Operations Facilities using NASA's "GMSEC" Systems Architectures

    NASA Technical Reports Server (NTRS)

    Smith, Danford; Grubb, Thomas; Esper, Jaime

    2008-01-01

    NASA's Goddard Mission Services Evolution Center (GMSEC) ground system architecture has been in development since late 2001, has successfully supported eight orbiting satellites and is being applied to many of NASA's future missions. GMSEC can be considered an event-driven service-oriented architecture built around a publish/subscribe message bus middleware. This paper briefly discusses the GMSEC technical approaches which have led to significant cost savings and risk reduction for NASA missions operated at the Goddard Space Flight Center (GSFC). The paper then focuses on the development and operational impacts of extending the architecture across multiple mission operations facilities.

  2. Distribution Coefficients (Kd Values) for Waste Resins Generated from the K and L Disassembly Basin Facilities

    SciTech Connect

    Kaplan, D.I.

    2002-12-02

    The objective of this study was to measure 14C, 129I, and 99Tc Kd values of spent resin generated from the K and L Disassembly Basin Facilities. The scope of the work was to conduct Kd measurements of resins combined in the ratio in which they are disposed, 42:58 cation:anion. Because it was not known how these spent resins would be buried, it was necessary to measure the Kd values in a manner that simulated both trench and vault disposal. This was accomplished by using an acid-rain simulant (a standard U.S. Environmental Protection Agency protocol) and a cement leachate simulant.
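
    For readers unfamiliar with the quantity being measured, a batch-sorption distribution coefficient is conventionally computed as Kd = (C0 - Ceq) * V / (Ceq * m): the activity sorbed per gram of resin divided by the activity remaining per milliliter of simulant. The helper below is a generic illustration with made-up numbers, not data or procedures from this report.

    def batch_kd(c_initial, c_equilibrium, volume_ml, mass_g):
        """Generic batch-sorption distribution coefficient (mL/g): activity sorbed
        per gram of solid divided by activity per mL of solution."""
        sorbed_per_gram = (c_initial - c_equilibrium) * volume_ml / mass_g
        return sorbed_per_gram / c_equilibrium

    # Hypothetical example: 100 mL of simulant at 1000 Bq/mL equilibrated with 5 g
    # of mixed resin, leaving 40 Bq/mL in solution.
    print(round(batch_kd(1000.0, 40.0, 100.0, 5.0), 1), "mL/g")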

  3. Determining collagen distribution in articular cartilage using contrast-enhanced micro-computed tomography

    PubMed Central

    Nieminen, H.J.; Ylitalo, T.; Karhula, S.; Suuronen, J.-P.; Kauppinen, S.; Serimaa, R.; Hæggström, E.; Pritzker, K.P.H.; Valkealahti, M.; Lehenkari, P.; Finnilä, M.; Saarakkala, S.

    2015-01-01

    Objective: Collagen distribution within articular cartilage (AC) is typically evaluated from histological sections, e.g., using collagen staining and light microscopy (LM). Unfortunately, all techniques based on histological sections are time-consuming, destructive, and, without extraordinary effort, limited to two dimensions. This study investigates whether phosphotungstic acid (PTA) and phosphomolybdic acid (PMA), two collagen-specific markers and X-ray absorbers, could (1) produce contrast for AC X-ray imaging or (2) be used to detect collagen distribution within AC. Method: We labeled equine AC samples with PTA or PMA and imaged them with micro-computed tomography (micro-CT) at pre-defined time points (0, 18, 36, 54, 72, 90, 180, and 270 h) during staining. The micro-CT image intensity was compared with collagen distributions obtained with a reference technique, i.e., Fourier-transform infrared imaging (FTIRI). The labeling time and contrast agent producing the highest association (Pearson correlation, Bland–Altman analysis) between the FTIRI collagen distribution and the micro-CT-determined PTA distribution were selected for human AC. Results: Both PTA and PMA labeling permitted visualization of AC features using micro-CT in non-calcified cartilage. After labeling the samples for 36 h in PTA, the spatial distribution of X-ray attenuation correlated highly with the collagen distribution determined by FTIRI in both equine (mean ± S.D. of the Pearson correlation coefficients, r = 0.96 ± 0.03, n = 12) and human AC (r = 0.82 ± 0.15, n = 4). Conclusions: PTA-induced X-ray attenuation is a potential marker for non-destructive detection of AC collagen distributions in 3D. This approach opens new possibilities in the development of non-destructive 3D histopathological techniques for characterization of OA. PMID:26003951
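
    The agreement analysis used to select the labeling protocol, a Pearson correlation plus Bland-Altman bias and limits of agreement between the micro-CT attenuation profile and the FTIRI collagen profile, can be reproduced generically as shown below. The depth profiles here are synthetic placeholders, not the study's data.

    import numpy as np
    from scipy.stats import pearsonr

    # Illustrative agreement analysis between two depth profiles (synthetic data):
    # Pearson correlation plus Bland-Altman bias and 95% limits of agreement.

    rng = np.random.default_rng(0)
    depth = np.linspace(0.0, 1.0, 100)                    # normalized cartilage depth
    ftiri = 0.4 + 0.5 * depth**2                          # "reference" collagen profile
    micro_ct = ftiri + rng.normal(0.0, 0.03, depth.size)  # "test" attenuation profile

    r, p = pearsonr(ftiri, micro_ct)

    diff = micro_ct - ftiri
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)                         # 95% limits of agreement

    print(f"Pearson r = {r:.3f} (p = {p:.2g})")
    print(f"Bland-Altman bias = {bias:.4f}, limits = [{bias - loa:.4f}, {bias + loa:.4f}]")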

  4. Distributed Feedback Fiber Laser The Heart of the National Ignition Facility

    SciTech Connect

    Browning, D F; Erbert, G V

    2003-12-01

    The National Ignition Facility (NIF) is a world-class laser fusion machine that is currently under construction at Lawrence Livermore National Laboratory (LLNL). The 192 laser beams that converge on the target at the output of the NIF laser system originate from a low power fiber laser in the Master Oscillator Room (MOR). The MOR is responsible for generating the single pulse that seeds the entire NIF laser system. This single pulse is phase-modulated to add bandwidth, and then amplified and split into 48 separate beam lines all in single-mode polarizing fiber. Before leaving the MOR, each of the 48 output beams are temporally sculpted into high contrast shapes using Arbitrary Waveform Generators. The 48 output beams of the MOR are amplified in the Preamplifier Modules (PAMs), split and amplified again to generate 192 laser beams. The 192 laser beams are frequency converted to the third harmonic and then focused at the center of a 10-meter diameter target chamber. The MOR is an all fiber-based system utilizing highly reliable Telecom-Industry type hardware. The nearly 2,000,000 joules of energy at the output of the NIF laser system starts from a single fiber oscillator that fits in the palm of your hand. This paper describes the design and performance of the laser source that provides the precision light to the National Ignition Facility. Shown below is a simplified diagram illustrating the MOR's basic functions.

  5. Determination and distribution of diesel components in igneous rock surrounding underground diesel storage facilities in Sweden.

    PubMed

    Loren, A; Hallbeck, L; Pedersen, K; Abrahamsson, K

    2001-01-15

    In Sweden, a preliminary investigation of the contamination situation of igneous rock surrounding underground storage facilities of diesel showed that the situation was severe. The diesel was believed to have penetrated into the rock as far as 50 m from the walls of the vaults. Consequently, the risk for contamination of groundwater and recipients could not be neglected. To be able to assess the fate of diesel components in rock, both a suitable drilling method and a method for the determination of a wide range of diesel components were needed. The analytical method presented made it possible to quantify a number of hydrocarbons in rock samples collected with triple-tube core drilling. The samples were dissolved in hydrofluoric acid (HF) with hexane in Teflon centrifuge tubes. After digestion of the rock, extraction of the analytes with hexane was performed. Determination of the individual hydrocarbons present was done with gas chromatography-mass spectrometry (GC-MS). The method was used to study the environmental impact of the underground storage of diesel. The drilling method enabled sampling without contamination risks. Our data show that the major transport of diesel components in rock occurs through fracture systems and that diffusion of diesel through the rock is of minor importance. The results have drastically changed the view of the contamination situation of diesel in the vicinity of storage facilities in hard rock in Sweden.

  6. Measurement and simulation of lineal energy distribution at the CERN high energy facility with a tissue equivalent proportional counter.

    PubMed

    Rollet, S; Autischer, M; Beck, P; Latocha, M

    2007-01-01

    The response of a tissue equivalent proportional counter (TEPC) in a mixed radiation field with a neutron energy distribution similar to the radiation field at commercial flight altitudes has been studied. The measurements were done at the CERN-EU High-Energy Reference Field (CERF) facility, where a well-characterised radiation field is available for intercomparison. The TEPC instrument used by ARC Seibersdorf Research is filled with pure propane gas at low pressure and can be used to determine the lineal energy distribution of the energy deposition in a mass of gas equivalent to a 2 microm diameter volume of unit density tissue, of similar size to the nuclei of biological cells. The linearity of the detector response was checked in terms of both dose and dose rate, and the effect of dead time was corrected. The influence of the detector exposure location and orientation in the radiation field on the dose distribution was also studied as a function of the total dose. The microdosimetric distribution of the absorbed dose as a function of lineal energy was obtained and compared with the same distribution simulated with the FLUKA Monte Carlo transport code. The dose equivalent was calculated by folding this distribution with the quality factor as a function of linear energy transfer. The comparison between the measured and simulated distributions shows that they are in good agreement. As a result of this study the detector is well characterised; thanks also to the numerical simulations, the instrument response is well understood, and it is currently being used on board aircraft to evaluate the dose to aircrew caused by cosmic radiation.
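
    The final folding step, computing dose equivalent by weighting the measured absorbed-dose distribution with a quality factor, reduces to a weighted sum over lineal-energy bins, H ≈ Σ Q(y_i) d(y_i) Δy_i. The sketch below applies the ICRP 60 Q(L) function with the common simplification of treating lineal energy as LET; both that simplification and the example spectrum are illustrative and are not taken from the paper.

    import numpy as np

    # Illustrative sketch: dose equivalent from a microdosimetric spectrum,
    # H ~ sum_i Q(y_i) * d(y_i) * dy_i, with the common simplification of treating
    # lineal energy y as LET so that the ICRP 60 Q(L) function can be applied.

    def q_icrp60(L):
        """ICRP 60 quality factor as a function of LET (keV/µm)."""
        L = np.asarray(L, dtype=float)
        return np.where(L < 10.0, 1.0,
               np.where(L <= 100.0, 0.32 * L - 2.2, 300.0 / np.sqrt(L)))

    # Hypothetical measured absorbed-dose distribution d(y) on a log-spaced y grid,
    # normalized so that sum(d * dy) equals the total absorbed dose in Gy.
    y = np.logspace(-1, 3, 200)                  # lineal energy, keV/µm
    dy = np.gradient(y)
    d_y = np.exp(-(np.log(y) - np.log(5.0))**2)  # arbitrary lognormal-like shape
    d_y *= 1.0e-6 / np.sum(d_y * dy)             # scale to 1 µGy total absorbed dose

    absorbed_dose = np.sum(d_y * dy)                   # Gy
    dose_equivalent = np.sum(q_icrp60(y) * d_y * dy)   # Sv

    print(f"D = {absorbed_dose:.3e} Gy, H = {dose_equivalent:.3e} Sv, "
          f"mean quality factor = {dose_equivalent / absorbed_dose:.2f}")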

  7. Comparative phyloinformatics of virus genes at micro and macro levels in a distributed computing environment

    PubMed Central

    Singh, Dadabhai T; Trehan, Rahul; Schmidt, Bertil; Bretschneider, Timo

    2008-01-01

    Background Preparedness for a possible global pandemic caused by viruses such as the highly pathogenic influenza A subtype H5N1 has become a global priority. In particular, it is critical to monitor the appearance of any new emerging subtypes. Comparative phyloinformatics can be used to monitor, analyze, and possibly predict the evolution of viruses. However, in order to utilize the full functionality of available analysis packages for large-scale phyloinformatics studies, a team of computer scientists, biostatisticians and virologists is needed – a requirement which cannot be fulfilled in many cases. Furthermore, the time complexities of many algorithms involved leads to prohibitive runtimes on sequential computer platforms. This has so far hindered the use of comparative phyloinformatics as a commonly applied tool in this area. Results In this paper the graphical-oriented workflow design system called Quascade and its efficient usage for comparative phyloinformatics are presented. In particular, we focus on how this task can be effectively performed in a distributed computing environment. As a proof of concept, the designed workflows are used for the phylogenetic analysis of neuraminidase of H5N1 isolates (micro level) and influenza viruses (macro level). The results of this paper are hence twofold. Firstly, this paper demonstrates the usefulness of a graphical user interface system to design and execute complex distributed workflows for large-scale phyloinformatics studies of virus genes. Secondly, the analysis of neuraminidase on different levels of complexity provides valuable insights of this virus's tendency for geographical based clustering in the phylogenetic tree and also shows the importance of glycan sites in its molecular evolution. Conclusion The current study demonstrates the efficiency and utility of workflow systems providing a biologist friendly approach to complex biological dataset analysis using high performance computing. In particular, the

  8. Can nursing facility staff with minimal education be successfully trained with computer-based training?

    PubMed

    Walker, Bonnie L; Harrington, Susan S

    2004-05-01

    This study compares the effects of computer-based and instructor-led training on long-term care staff with a high school education or less on fire safety knowledge, attitudes, and practices. Findings show that both methods of instruction were effective in increasing staff tests scores from pre- to posttest. Scores of both groups were lower at follow-up three months later but continued to be higher than at pretest. Staff with a high school education increased scores more than those without a high school diploma.

  9. Peach: a simple Perl-based system for distributed computation and its application to cryo-EM data processing.

    PubMed

    Leong, Peter A; Heymann, J Bernard; Jensen, Grant J

    2005-04-01

    A simple distributed processing system named "Peach" was developed to meet the rising computational demands of modern structural biology (and other) laboratories without additional expense by using existing hardware resources more efficiently. A central server distributes jobs to idle workstations in such a way that each computer is used maximally, but without disturbing intermittent interactive users. As compared to other distributed systems, Peach is simple, easy to install, easy to administer, easy to use, scalable, and robust. While it was designed to queue and distribute large numbers of small tasks to participating computers, it can also be used to send single jobs automatically to the fastest currently available computer and/or survey the activity of an entire laboratory's computers. Tests of robustness and scalability are reported, as are three specific electron cryomicroscopy applications where Peach enabled projects that would not otherwise have been feasible without an expensive, dedicated cluster.
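
    The central-server idea, handing jobs only to machines that are currently idle so that interactive users are not disturbed, can be sketched with a shared queue and polling workers. The idle-detection probe and threshold below are placeholders, and the sketch is a toy illustration rather than Peach's actual scheduling policy.

    import queue
    import random
    import threading
    import time

    # Toy sketch of a central job queue feeding workstation threads that accept
    # work only while their (simulated) interactive load is low. The load probe
    # and the idle threshold are placeholders, not Peach's actual policy.

    jobs = queue.Queue()
    for i in range(40):
        jobs.put(f"task-{i}")

    results = []
    results_lock = threading.Lock()

    def local_load():
        """Placeholder for a real idle probe (CPU load, console activity, ...)."""
        return random.random()

    def worker(name, idle_threshold=0.7):
        while True:
            if local_load() > idle_threshold:   # interactive user busy: back off
                time.sleep(0.05)
                continue
            try:
                job = jobs.get_nowait()
            except queue.Empty:
                return                          # queue drained: workstation done
            time.sleep(0.01)                    # pretend to compute
            with results_lock:
                results.append((name, job))

    threads = [threading.Thread(target=worker, args=(f"ws{i}",)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(len(results), "jobs completed by",
          len({name for name, _ in results}), "workstations")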

  10. Enabling 3D-Liver Perfusion Mapping from MR-DCE Imaging Using Distributed Computing.

    PubMed

    Leporq, Benjamin; Camarasu-Pop, Sorina; Davila-Serrano, Eduardo E; Pilleul, Frank; Beuf, Olivier

    2013-01-01

    An MR acquisition protocol and a processing method using distributed computing on the European Grid Infrastructure (EGI) to allow 3D liver perfusion parametric mapping after Magnetic Resonance Dynamic Contrast Enhanced (MR-DCE) imaging are presented. Seven subjects (one healthy control and six patients with chronic liver diseases) were prospectively enrolled after liver biopsy. MR dynamic acquisition was performed continuously in free-breathing for two minutes after simultaneous intravascular contrast agent (MS-325 blood pool agent) injection. The hepatic capillary system was modeled by a three-parameter one-compartment pharmacokinetic model. The processing step was parallelized and executed on the EGI. It was modeled and implemented as a grid workflow using the Gwendia language and the MOTEUR workflow engine. Results showed good reproducibility in repeated processing on the grid. The results obtained from the grid were well correlated with the ROI-based reference method run locally on a personal computer. The speed-up ranged from 71 to 242, with an average value of 126. In conclusion, distributed computing applied to perfusion mapping brings significant speed-up to the quantification step for further clinical studies in a research context. Accuracy would be improved with the higher image SNR accessible on the latest 3T MR systems available today.
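
    To make the per-voxel quantification step concrete, the sketch below fits a generic three-parameter one-compartment model, C_t(t) = k1 * integral of C_in(tau) * exp(-k2 * (t - tau)) dtau with an arrival delay t0, to a single synthetic tissue curve by least squares. The input function, the exact parameterization, and the data are assumptions chosen for illustration and do not reproduce the paper's model or results.

    import numpy as np
    from scipy.optimize import curve_fit

    # Illustrative per-voxel fit of a generic three-parameter one-compartment model
    # C_t(t) = k1 * integral_0^t C_in(tau) * exp(-k2 * (t - tau)) dtau, delayed by t0.
    # The input function, parameterization, and "measured" curve are assumptions.

    t = np.arange(0.0, 120.0, 1.5)                      # seconds, ~2 min dynamic scan

    def input_function(t):
        """Hypothetical vascular input curve (gamma-variate shape)."""
        return np.where(t > 0, (t / 8.0) ** 2 * np.exp(-t / 8.0), 0.0)

    def one_compartment(t, k1, k2, t0):
        c_in = input_function(t - t0)
        dt = t[1] - t[0]                                # assumes a uniform time grid
        kernel = np.exp(-k2 * t)
        # Discrete convolution approximates the convolution integral.
        return k1 * np.convolve(c_in, kernel)[: t.size] * dt

    # Synthetic "measured" tissue curve with noise, then a least-squares fit.
    rng = np.random.default_rng(1)
    truth = (0.05, 0.02, 6.0)
    measured = one_compartment(t, *truth) + rng.normal(0.0, 0.002, t.size)

    popt, _ = curve_fit(one_compartment, t, measured, p0=(0.01, 0.01, 1.0),
                        bounds=([0.0, 0.0, 0.0], [1.0, 1.0, 30.0]))
    print("fitted (k1, k2, t0):", np.round(popt, 4))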

  11. Fast computation of statistical uncertainty for spatiotemporal distributions estimated directly from dynamic cone beam SPECT projections

    SciTech Connect

    Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.

    2001-04-09

    The estimation of time-activity curves and kinetic model parameters directly from projection data is potentially useful for clinical dynamic single photon emission computed tomography (SPECT) studies, particularly in those clinics that have only single-detector systems and thus are not able to perform rapid tomographic acquisitions. Because the radiopharmaceutical distribution changes while the SPECT gantry rotates, projections at different angles come from different tracer distributions. A dynamic image sequence reconstructed from the inconsistent projections acquired by a slowly rotating gantry can contain artifacts that lead to biases in kinetic parameters estimated from time-activity curves generated by overlaying regions of interest on the images. If cone beam collimators are used and the focal point of the collimators always remains in a particular transaxial plane, additional artifacts can arise in other planes reconstructed using insufficient projection samples [1]. If the projection samples truncate the patient's body, this can result in additional image artifacts. To overcome these sources of bias in conventional image-based dynamic data analysis, we and others have been investigating the estimation of time-activity curves and kinetic model parameters directly from dynamic SPECT projection data by modeling the spatial and temporal distribution of the radiopharmaceutical throughout the projected field of view [2-8]. In our previous work we developed a computationally efficient method for fully four-dimensional (4-D) direct estimation of spatiotemporal distributions from dynamic SPECT projection data [5], which extended Formiconi's least squares algorithm for reconstructing temporally static distributions [9]. In addition, we studied the biases that result from modeling various orders of temporal continuity and using various time samplings [5]. In the present work, we address computational issues associated with evaluating the statistical uncertainty of

  12. Computer simulation of PPF distribution under blue and red LED light source for plant growth.

    PubMed

    Takita, S; Okamoto, K; Yanagi, T

    1996-12-01

    The superimposed luminescence spectra of a blue light-emitting diode (LED) and a red LED correspond well to the light absorption spectrum of chlorophyll. If these two kinds of LED are used as a light source, various plant cultivation experiments are possible. Cultivation experiments using such light sources are becoming increasingly active, and in such experiments it is very important to know the distribution of the photosynthetic photon flux (PPF), which exerts an important influence on photosynthesis. Therefore, we have developed a computer simulation system which can visualize the PPF distribution under a light source equipped with blue and red LEDs. In this system, an LED is assumed to be a point light source, and only the photons emitted directly from the LED are considered. The simulation system can display a perspective view of the PPF distribution, transverse and longitudinal sections of the distribution, and a contour map of the distribution. Moreover, a contour map of the ratio of the PPF emitted by the blue LEDs to that emitted by the blue and red LEDs together can be displayed. Because the representation uses colored lines according to the magnitude of the PPF, a user can readily understand and evaluate the state of the PPF.
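
    Under the point-source assumption used by the simulation, the PPF at a point on the growing surface is a sum of inverse-square contributions from the individual LEDs, weighted by the emission and incidence angles. The sketch below is a minimal version of that calculation with a Lambertian-style angular term; the panel geometry, fluxes, and normalization are made up and do not come from the paper.

    import numpy as np

    # Minimal PPF-distribution sketch under the point-source assumption:
    # PPF(x, y) = sum over LEDs of F * cos^m(theta) * cos(theta_incidence) / r^2,
    # with a Lambertian-style emission exponent m. Normalization constants are
    # omitted and fluxes are in arbitrary units; the geometry is made up.

    def ppf_map(led_xy, flux, height_m, grid, m=1.0):
        """Relative PPF on a horizontal plane `height_m` below the LED panel."""
        xs, ys = np.meshgrid(grid, grid)
        total = np.zeros_like(xs)
        for (lx, ly), f in zip(led_xy, flux):
            dx, dy = xs - lx, ys - ly
            r2 = dx**2 + dy**2 + height_m**2
            cos_theta = height_m / np.sqrt(r2)    # emission angle == incidence angle
            total += f * cos_theta**m * cos_theta / r2
        return total

    if __name__ == "__main__":
        grid = np.linspace(-0.3, 0.3, 61)         # 60 cm x 60 cm tray, 1 cm steps
        blue = [(-0.1, 0.0), (0.1, 0.0)]          # hypothetical LED positions (m)
        red = [(0.0, -0.1), (0.0, 0.1)]
        ppf_blue = ppf_map(blue, [0.5, 0.5], height_m=0.2, grid=grid)
        ppf_red = ppf_map(red, [1.0, 1.0], height_m=0.2, grid=grid)
        ratio = ppf_blue / (ppf_blue + ppf_red)   # blue fraction, as in the contour map
        print("peak total PPF (arbitrary units):", round(float((ppf_blue + ppf_red).max()), 2))
        print("blue fraction at tray center:", round(float(ratio[30, 30]), 3))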

  13. The impact of CFD on development test facilities - A National Research Council projection. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Korkegi, R. H.

    1983-01-01

    The results of a National Research Council study on the effect that advances in computational fluid dynamics (CFD) will have on conventional aeronautical ground testing are reported. Current CFD capabilities include the depiction of linearized inviscid flows and a boundary layer, initial use of Euler coordinates using supercomputers to automatically generate a grid, research and development on the Reynolds-averaged Navier-Stokes (N-S) equations, and preliminary research on solutions to the full N-S equations. Improvement in the range of CFD usage is dependent on the development of more powerful supercomputers, exceeding even the projected abilities of the NASA Numerical Aerodynamic Simulator (1 BFLOP/sec). Full representation of the Reynolds-averaged N-S equations will require over one million grid points, a computing level predicted to be available in 15 yr. Present capabilities allow identification of data anomalies, confirmation of data accuracy, and assessment of the adequacy of model design in wind tunnel trials. Wall effects and the Reynolds number in any flight regime can be accounted for during simulation. CFD can actually be more accurate than instrumented tests, since all points in a flow can be modeled with CFD, while they cannot all be monitored with instrumentation in a wind tunnel.

  14. A distributed computing system for magnetic resonance imaging: Java-based processing and binding of XML.

    PubMed

    de Beer, R; Graveron-Demilly, D; Nastase, S; van Ormondt, D

    2004-03-01

    Recently we have developed a Java-based heterogeneous distributed computing system for the field of magnetic resonance imaging (MRI). It is a software system for embedding the various image reconstruction algorithms that we have created for handling MRI data sets with sparse sampling distributions. Since these data sets may result from multi-dimensional MRI measurements our system has to control the storage and manipulation of large amounts of data. In this paper we describe how we have employed the extensible markup language (XML) to realize this data handling in a highly structured way. To that end we have used Java packages, recently released by Sun Microsystems, to process XML documents and to compile pieces of XML code into Java classes. We have effectuated a flexible storage and manipulation approach for all kinds of data within the MRI system, such as data describing and containing multi-dimensional MRI measurements, data configuring image reconstruction methods and data representing and visualizing the various services of the system. We have found that the object-oriented approach, possible with the Java programming environment, combined with the XML technology is a convenient way of describing and handling various data streams in heterogeneous distributed computing systems.

  15. CLINICAL SURFACES - Activity-Based Computing for Distributed Multi-Display Environments in Hospitals

    NASA Astrophysics Data System (ADS)

    Bardram, Jakob E.; Bunde-Pedersen, Jonathan; Doryab, Afsaneh; Sørensen, Steffen

    A multi-display environment (MDE) is made up of co-located and networked personal and public devices that form an integrated workspace enabling co-located group work. Traditionally, however, MDEs have mainly been designed to support a single “smart room” and have had little sense of the tasks and activities that the MDE is being used for. This paper presents a novel approach to supporting activity-based computing in distributed MDEs, where displays are physically distributed across a large building. CLINICAL SURFACES was designed for clinical work in hospitals and enables context-sensitive retrieval and browsing of patient data on public displays. We present the design and implementation of CLINICAL SURFACES and report from an evaluation of the system at a large hospital. The evaluation shows that using distributed public displays to support activity-based computing inside a hospital is very useful for clinical work, and that the apparent tension between maintaining the privacy of medical data and showing it in a public display environment can be mitigated by the use of CLINICAL SURFACES.

  16. IEEE 1547 National Standard for Interconnecting Distributed Generation: How Could It Help My Facility? Preprint

    SciTech Connect

    Basso, T.; Friedman, N. R.

    2003-11-01

    This article summarizes the purpose, development, and impact of the Institute of Electrical and Electronics Engineers 1547 Standard for Interconnecting Distributed Resources With Electric Power Systems. Also included is a short explanation of supporting standards IEEE P1547.1, P1547.2, and P1547.3.

  17. Large-Scale Merging of Histograms using Distributed In-Memory Computing

    NASA Astrophysics Data System (ADS)

    Blomer, Jakob; Ganis, Gerardo

    2015-12-01

    Most high-energy physics analysis jobs are embarrassingly parallel except for the final merging of the output objects, which are typically histograms. Currently, the merging of output histograms scales badly. The running time for distributed merging depends not only on the overall number of bins but also on the number of partial histogram output files. That means that, while the time to analyze data decreases linearly with the number of worker nodes, the time to merge the histograms in fact increases with the number of worker nodes. On the grid, merging jobs that take a few hours are not unusual. In order to improve the situation, we present a distributed and decentralized merging algorithm whose running time is independent of the number of worker nodes. We exploit the full bisection bandwidth of local networks and we keep all intermediate results in memory. We present benchmarks from an implementation using the Parallel ROOT Facility (PROOF) and RAMCloud, a distributed key-value store that keeps all data in DRAM.
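
    The core idea, a decentralized merge whose depth grows logarithmically rather than linearly with the number of workers, can be shown with a simple pairwise tree reduction over in-memory histograms. In the sketch below, NumPy bin-count arrays stand in for ROOT histograms and a thread pool stands in for the distributed workers and in-memory store used in the paper.

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    # Illustrative pairwise (tree) reduction of partial histograms: merge depth is
    # O(log N_workers) instead of the O(N_workers) of merging into a single target.
    # NumPy bin-count arrays stand in for ROOT histograms; a thread pool stands in
    # for the distributed workers / in-memory store used in the paper.

    def merge_pair(pair):
        a, b = pair
        return a + b                       # histogram merge == elementwise bin sum

    def tree_merge(histograms, pool):
        level = list(histograms)
        while len(level) > 1:
            if len(level) % 2 == 1:        # carry an odd histogram to the next level
                carry, level = [level[-1]], level[:-1]
            else:
                carry = []
            pairs = zip(level[0::2], level[1::2])
            level = list(pool.map(merge_pair, pairs)) + carry
        return level[0]

    if __name__ == "__main__":
        rng = np.random.default_rng(7)
        partials = [np.histogram(rng.normal(size=10_000), bins=100, range=(-5, 5))[0]
                    for _ in range(64)]    # 64 "worker" outputs
        with ThreadPoolExecutor(max_workers=8) as pool:
            merged = tree_merge(partials, pool)
        assert merged.sum() == sum(h.sum() for h in partials)
        print("total entries:", int(merged.sum()))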

  18. Astropulse: A search for microsecond transient radio signals using distributed computing

    NASA Astrophysics Data System (ADS)

    von Korff, Joshua Solomon

    I performed a transient, microsecond timescale radio sky survey, called "Astropulse," using the Arecibo telescope in Puerto Rico. Astropulse searches for brief (0.4 μs to 204.8 μs), wideband (relative to its 2.5 MHz bandwidth) radio pulses centered at 1,420 MHz, a range that includes the hyperfine hydrogen line. Astropulse is a commensal survey, obtaining its data by sharing telescope time with other surveys, such as PALFA. I scanned the sky visible to Arecibo, between declinations of -1.33 and 38.03 degrees, with varying dwell times depending on the requirements of our partner surveys. I analyzed 1,540 hours of data in each of 7 beams of the ALFA receiver, with 2 polarizations per beam, for a total of 21,600 hours of data. The data were 1-bit complex sampled at the Nyquist limit of 0.4 μs per sample. Examination of timescales less than 12.8 μs would have been impossible if not for my use of coherent dedispersion, a technique that has frequently been used for targeted observations, but has never before been associated with a radio sky survey. I performed nonlinear coherent dedispersion, reversing the broadening effects on signals caused by their passage through the interstellar medium (ISM). Coherent dedispersion requires intensive computations, and needs far more processing power than the more usual incoherent dedispersion. This processing power was provided by BOINC, the Berkeley Open Infrastructure for Network Computing. BOINC is a distributed computing system, which allowed me to utilize hundreds of thousands of volunteers' computers to perform the necessary calculations for coherent dedispersion. Each volunteer's computer requires about a week to process a single 8 MB "workunit," corresponding to 13 s of data from a single beam and polarization. In all, Astropulse analyzed over 48 TB of data. I did not aim to detect any particular astrophysical source, intending rather to perform a survey of the transient radio sky. Astrophysical events that might produce

  19. Simple and effective calculations about spectral power distributions of outdoor light sources for computer vision.

    PubMed

    Tian, Jiandong; Duan, Zhigang; Ren, Weihong; Han, Zhi; Tang, Yandong

    2016-04-01

    The spectral power distributions (SPD) of outdoor light sources are not constant over time and atmospheric conditions, which causes the appearance variation of a scene and common natural illumination phenomena, such as twilight, shadow, and haze/fog. Calculating the SPD of outdoor light sources at different times (or zenith angles) and under different atmospheric conditions is of interest to physically-based vision. In this paper, for computer vision and its applications, we propose a feasible, simple, and effective SPD calculating method based on analyzing the transmittance functions of absorption and scattering along the path of solar radiation through the atmosphere in the visible spectrum. Compared with previous SPD calculation methods, our model has fewer parameters and is accurate enough to be directly applied in computer vision. It can be applied in computer vision tasks including spectral inverse calculation, lighting conversion, and shadowed image processing. The experimental results of the applications demonstrate that our calculation methods have practical value in computer vision. It establishes a bridge between image and physical environmental information, e.g., time, location, and weather conditions. PMID:27137018
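
    The general structure of such a calculation is a Beer-Lambert attenuation of the extraterrestrial spectrum by wavelength-dependent optical depths scaled by the airmass, E(λ, z) ≈ E0(λ) exp(-(τ_Rayleigh + τ_aerosol) m(z)) with m(z) ≈ 1/cos(z). The sketch below uses crude placeholder terms (a toy E0 and common textbook approximations for the optical depths) and is not the transmittance model proposed in the paper.

    import numpy as np

    # Generic Beer-Lambert sketch of a direct-beam spectral power distribution:
    # E(lambda, z) = E0(lambda) * exp(-(tau_R + tau_a) * m(z)), m(z) ~ 1/cos(z).
    # The extraterrestrial spectrum and optical-depth terms are crude placeholders,
    # not the transmittance model of the paper.

    wavelength = np.linspace(0.40, 0.70, 61)            # micrometres, visible band

    def extraterrestrial(lam):
        return 1.8 - 1.2 * (lam - 0.55) ** 2            # toy E0(lambda), W m-2 um-1

    def optical_depth(lam, aerosol_beta=0.1):
        tau_rayleigh = 0.0088 * lam ** -4.05            # common Rayleigh approximation
        tau_aerosol = aerosol_beta * lam ** -1.3        # Angstrom turbidity form
        return tau_rayleigh + tau_aerosol

    def direct_spd(zenith_deg, aerosol_beta=0.1):
        m = 1.0 / np.cos(np.radians(zenith_deg))        # plane-parallel airmass
        return extraterrestrial(wavelength) * np.exp(-optical_depth(wavelength, aerosol_beta) * m)

    for z in (10.0, 45.0, 75.0):
        spd = direct_spd(z)
        blue_red = spd[wavelength <= 0.50].mean() / spd[wavelength >= 0.60].mean()
        print(f"zenith {z:4.1f} deg: mean SPD {spd.mean():.2f}, blue/red ratio {blue_red:.2f}")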

  20. A distributed computing environment with support for constraint-based task scheduling and scientific experimentation

    SciTech Connect

    Ahrens, J.P.; Shapiro, L.G.; Tanimoto, S.L.

    1997-04-01

    This paper describes a computing environment which supports computer-based scientific research work. Key features include support for automatic distributed scheduling and execution and computer-based scientific experimentation. A new flexible and extensible scheduling technique that is responsive to a user's scheduling constraints, such as the ordering of program results and the specification of task assignments and processor utilization levels, is presented. An easy-to-use constraint language for specifying scheduling constraints, based on the relational database query language SQL, is described along with a search-based algorithm for fulfilling these constraints. A set of performance studies shows that the environment can schedule and execute program graphs on a network of workstations as the user requests. A method for automatically generating computer-based scientific experiments is described. Experiments provide a concise method of specifying a large collection of parameterized program executions. The environment achieved significant speedups when executing experiments; for a large collection of scientific experiments an average speedup of 3.4 on an average of 5.5 scheduled processors was obtained.

  1. Computational Methods for Decentralized Two-Level 0-1 Programming Problems through Distributed Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Niwa, Keiichi; Hayashida, Tomohiro; Sakawa, Masatoshi; Yang, Yishen

    2010-10-01

    We consider two-level programming problems in which there is one decision maker (the leader) at the upper level and two or more decision makers (the followers) at the lower level, and the decision variables of the leader and the followers are 0-1 variables. We assume that there is coordination among the followers, while between the leader and the group of all the followers there is no motivation to cooperate with each other, and fuzzy goals for the objective functions of the leader and the followers are introduced so as to take the fuzziness of their judgments into consideration. The leader maximizes the degree of satisfaction (the value of the membership function), and the followers act in concert in order to maximize the minimum among their degrees of satisfaction. We propose a modified computational method that addresses problems with the existing genetic-algorithm-based computational method for obtaining the Stackelberg solution. Specifically, a distributed genetic algorithm is introduced for the upper-level genetic algorithm, which handles the decision variables of the leader, in order to shorten the computational time of the existing method. Parallelization of the lower-level genetic algorithm is also performed along with parallelization of the upper-level genetic algorithm. In order to demonstrate the effectiveness of the proposed computational method, numerical experiments are carried out.

  2. Simple and effective calculations about spectral power distributions of outdoor light sources for computer vision.

    PubMed

    Tian, Jiandong; Duan, Zhigang; Ren, Weihong; Han, Zhi; Tang, Yandong

    2016-04-01

    The spectral power distributions (SPD) of outdoor light sources are not constant over time and atmospheric conditions, which causes the appearance variation of a scene and common natural illumination phenomena, such as twilight, shadow, and haze/fog. Calculating the SPD of outdoor light sources at different times (or zenith angles) and under different atmospheric conditions is of interest to physically-based vision. In this paper, for computer vision and its applications, we propose a feasible, simple, and effective SPD calculating method based on analyzing the transmittance functions of absorption and scattering along the path of solar radiation through the atmosphere in the visible spectrum. Compared with previous SPD calculation methods, our model has fewer parameters and is accurate enough to be directly applied in computer vision. It can be applied in computer vision tasks including spectral inverse calculation, lighting conversion, and shadowed image processing. The experimental results of the applications demonstrate that our calculation methods have practical value in computer vision. It establishes a bridge between image and physical environmental information, e.g., time, location, and weather conditions.

  3. Impact of Load Balancing on Unstructured Adaptive Grid Computations for Distributed-Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Simon, Horst D.; Sohn, Andrew

    1996-01-01

    The computational requirements for an adaptive solution of unsteady problems change as the simulation progresses. This causes workload imbalance among processors on a parallel machine which, in turn, requires significant data movement at runtime. We present a new dynamic load-balancing framework, called JOVE, that balances the workload across all processors with a global view. Whenever the computational mesh is adapted, JOVE is activated to eliminate the load imbalance. JOVE has been implemented on an IBM SP2 distributed-memory machine in MPI for portability. Experimental results for two model meshes demonstrate that mesh adaption with load balancing gives more than a sixfold improvement over one without load balancing. We also show that JOVE gives a 24-fold speedup on 64 processors compared to sequential execution.

  4. Models the Electromagnetic Response of a 3D Distribution using MP COMPUTERS

    1999-05-01

    EM3D models the electromagnetic response of a 3D distribution of conductivity, dielectric permittivity and magnetic permeability within the earth for geophysical applications using massively parallel computers. The simulations are carried out in the frequency domain for either electric or magnetic sources for either scattered or total field formulations of Maxwell's equations. The solution is based on the method of finite differences and includes absorbing boundary conditions so that responses can be modeled up into the radar range where wave propagation is dominant. Recent upgrades in the software include the incorporation of finite size sources, in addition to dipolar source fields, and a low induction number preconditioner that can significantly reduce computational run times. A graphical user interface (GUI) is bundled with the software so that complicated 3D models can be easily constructed and simulated with the software. The GUI also allows for plotting of the output.

  5. A new parallel-vector finite element analysis software on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Qin, Jiangning; Nguyen, Duc T.

    1993-01-01

    A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed-memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. Block-skyline storage scheme along with vector-unrolling techniques are used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.

  6. Computer-mediated distributed learning. An innovative program design in midwifery education.

    PubMed

    Treistman, J; Watson, D; Fullerton, J

    1996-01-01

    The State University of New York, Stony Brook (SUNY-SB) Pathways to Midwifery Program offers a distributed learning curriculum that is unique among American nurse-midwifery education programs. The Pathways to Midwifery Program provides asynchronous, computer-mediated instruction. Community-based faculty coordinate, supervise, and evaluate the clinical education of students. The SUNY-SB model offers an opportunity to increase dramatically the number of students who can receive the curriculum. It also provides distinct advantages in maintaining a curriculum database that reflects rapidly changing clinical science and that takes advantage of vast educational resources available through related computer networks. By creating a classroom without walls, the program is cost-effective.

  7. High-Performance Computation of Distributed-Memory Parallel 3D Voronoi and Delaunay Tessellation

    SciTech Connect

    Peterka, Tom; Morozov, Dmitriy; Phillips, Carolyn

    2014-11-14

    Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets: N-body simulations, molecular dynamics codes, and LIDAR point clouds are just a few examples. Such computational geometry methods are common in data analysis and visualization; but as the scale of simulations and observations surpasses billions of particles, the existing serial and shared-memory algorithms no longer suffice. A distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this paper is a new parallel Delaunay and Voronoi tessellation algorithm that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include periodic and wall boundary conditions, comparison of our method using two popular serial libraries, and application to numerous science datasets.
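
    As a point of reference for the core local computation, the sketch below builds a 3D Delaunay and Voronoi tessellation for one hypothetical subdomain with scipy.spatial. It is a serial illustration only; the paper's contribution, deciding which neighbor points to exchange so that cells near subdomain boundaries become correct, is not shown.

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

# Points for one hypothetical subdomain of the spatial decomposition.
rng = np.random.default_rng(42)
points = rng.random((2000, 3))

# Core local step; a distributed algorithm would additionally exchange ghost
# points with neighboring subdomains before these cells are globally correct.
tri = Delaunay(points)
vor = Voronoi(points)

print("tetrahedra:", tri.simplices.shape[0])
print("Voronoi vertices:", vor.vertices.shape[0])
```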

  8. Metaheuristic based scheduling meta-tasks in distributed heterogeneous computing systems.

    PubMed

    Izakian, Hesam; Abraham, Ajith; Snášel, Václav

    2009-01-01

    Scheduling is a key problem in exploiting the large computing capacity of distributed heterogeneous computing systems, and it is NP-complete. In this paper, we present a metaheuristic technique, namely the Particle Swarm Optimization (PSO) algorithm, for this problem. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly through the problem search space to find optimal or near-optimal solutions. The scheduler aims at minimizing the makespan, which is the time at which the latest task finishes. Experimental studies show that the proposed method is more efficient than previously reported PSO and GA approaches for this problem.
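
    The abstract describes the encoding only loosely, so the sketch below shows one common way to apply PSO to this problem: each particle is a real-valued vector of length n_tasks, decoded to a machine index per task, and fitness is the makespan computed from a random expected-time-to-compute (ETC) matrix. The parameter values are ordinary PSO defaults, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tasks, n_machines = 50, 8
etc = rng.uniform(5, 50, size=(n_tasks, n_machines))   # hypothetical ETC matrix

def decode(position):
    """Map a continuous particle position to a machine index per task."""
    return np.clip(position, 0, n_machines - 1e-9).astype(int)

def makespan(position):
    assign = decode(position)
    loads = np.zeros(n_machines)
    for task, machine in enumerate(assign):
        loads[machine] += etc[task, machine]
    return loads.max()

# Standard PSO loop with personal and global bests
n_particles, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
x = rng.uniform(0, n_machines, size=(n_particles, n_tasks))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([makespan(p) for p in x])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0, n_machines)
    vals = np.array([makespan(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best makespan found:", round(pbest_val.min(), 2))
```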

  9. Run-time statistical estimation of task execution times for heterogeneous distributed computing

    SciTech Connect

    Iverson, M.A.; Oezguener, F.; Follen, G.J.

    1996-12-31

    In this paper, an efficient, run-time, statistical scheme for estimating the execution time of a task is presented, in order to facilitate run-time matching and scheduling in a distributed heterogeneous computing environment. This scheme is based upon a nonparametric regression technique, where the execution time estimate for a task is computed from past observations. Furthermore, this technique is able to compensate for different parameters upon which the execution time depends, and does not require any knowledge of the architecture of the target machine. It is also able to make accurate predictions when erroneous data is present in the set of observations, and has been experimentally shown to produce estimates with very low error even with few past values from which to calculate a new estimate.
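
    The abstract does not give the exact estimator, so the sketch below uses Nadaraya-Watson kernel regression, one standard nonparametric regression technique, to predict a task's execution time from a single descriptive parameter (here, input size) using past observations. The data and bandwidth are hypothetical.

```python
import numpy as np

def kernel_estimate(x_query, x_obs, y_obs, bandwidth):
    """Nadaraya-Watson estimate of execution time at x_query from past runs."""
    weights = np.exp(-0.5 * ((x_query - x_obs) / bandwidth) ** 2)  # Gaussian kernel
    return np.sum(weights * y_obs) / np.sum(weights)

# Hypothetical history of (input size, measured execution time in seconds);
# the last observation is deliberately erroneous.
sizes = np.array([100, 200, 400, 800, 1600, 3200, 800])
times = np.array([0.9, 1.8, 3.7, 7.5, 15.2, 30.8, 40.0])

for query in (600, 1200, 2400):
    est = kernel_estimate(query, sizes, times, bandwidth=300.0)
    print(f"predicted time for size {query}: {est:.1f} s")
```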

  10. Dynamic Load Balancing for Adaptive Computations on Distributed-Memory Machines

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Dynamic load balancing is central to adaptive mesh-based computations on large-scale parallel computers. The principal investigator has investigated various issues in the dynamic load balancing problem under NASA JOVE and JAG grants. The major accomplishments of the project are two graph partitioning algorithms and a load balancing framework. The S-HARP dynamic graph partitioner is the fastest of the known dynamic graph partitioners to date. It can partition a graph of over 100,000 vertices in 0.25 seconds on a 64-processor Cray T3E distributed-memory multiprocessor while maintaining scalability of over 16-fold speedup. Other known and widely used dynamic graph partitioners take a second or two while achieving only a few-fold speedup on 64 processors. These results have been published in journals and peer-reviewed flagship conferences.

  11. System analysis for the Huntsville Operation Support Center distributed computer system

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.

    1986-01-01

    A simulation model of the NASA Huntsville Operational Support Center (HOSC) was developed. This simulation model emulates the HYPERchannel Local Area Network (LAN) that ties together the various computers of HOSC. The HOSC system is a large installation of mainframe computers such as the Perkin Elmer 3200 series and the DEC VAX series. A series of six simulation exercises of the HOSC model is described using data sets provided by NASA. An analytical analysis of the Ethernet LAN and the video terminal (VT) distribution system is presented. An interface analysis of the smart-terminal network model, which allows the data flow requirements imposed by VTs on the Ethernet LAN to be estimated, is also presented.

  12. Enhancing the transfer of computer-assisted training proficiency in geographically distributed teams.

    PubMed

    Kirkman, Bradley L; Rosen, Benson; Tesluk, Paul E; Gibson, Cristina B

    2006-05-01

    The authors examined factors that determine whether knowledge gained from computer-assisted (i.e., technology-based) team training in a geographically distributed team (GDT) context transfers to organizational results. They examined the moderating effects of team trust, technology support, and leader experience on the relation between teams' average individual training proficiency on a computer-assisted (i.e., CD-ROM-based) training program and team performance as assessed by team customer satisfaction ratings. Using data collected from 40 GDTs in a high-technology company, the authors found that the relation between teams' average training proficiency and team performance was complex and moderated by several factors. In particular, teams' average training proficiency had a positive association with customer satisfaction when GDTs were higher, rather than lower, in both trust and technology support and when team leaders had longer, rather than shorter, levels of tenure with their specific team. PMID:16737366

  13. System Analysis for the Huntsville Operation Support Center, Distributed Computer System

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Massey, D.

    1985-01-01

    HOSC, as a distributed computing system, is responsible for data acquisition and analysis during Space Shuttle operations. HOSC also provides computing services for Marshall Space Flight Center's nonmission activities. As mission and nonmission activities change, so do the support functions of HOSC, demonstrating the need for some method of simulating activity at HOSC in various configurations. The simulation developed in this work primarily models the HYPERchannel network. The model simulates the activity of a steady-state network, reporting statistics such as transmitted bits, collision statistics, frame sequences transmitted, and average message delay. These statistics are used to evaluate performance indicators such as throughput, utilization, and delay. Thus the overall performance of the network is evaluated, and possible overload conditions are predicted.

  14. Profiling of energy deposition fields in a modular HTHR with annular core: Computational/experimental studies at the ASTRA critical facility

    SciTech Connect

    Boyarinov, V. F.; Garin, V. P.; Glushkov, E. S.; Zimin, A. A.; Kompaniets, G. V.; Nevinitsa, V. A.; Polyakov, D. N.; Ponomarev, A. S.; Ponomarev-Stepnoi, N. N.; Smirnov, O. N.; Fomichenko, P. A.; Chunyaev, E. I.; Marova, E. V.; Sukharev, Yu. P.

    2010-12-15

    The paper presents the results obtained from the computational/experimental studies of the spatial distribution of the {sup 235}U fission reaction rate in a critical assembly with an annular core and poison profiling elements inserted into the inner graphite reflector. The computational analysis was carried out with the codes intended for design computation of an HTHR-type reactor.

  15. An advanced combustion research facility for validating computational fluid dynamics codes

    NASA Astrophysics Data System (ADS)

    Bullard, J. B.; Hurley, C. D.; Eccles, N. C.

    1991-12-01

    The Sector Combustion Rig (SCR), built to obtain experimental data which could be used to verify computational fluid dynamic programs and to investigate the formation and consumption of combustion products through a combustor, is described. This rig was designed to accommodate sectors of full size engine combustion chambers and to test them at real or simulated engine operating conditions. Changes made to improve the operating, measurement, and data handling capabilities of the rig as a result of experience from several years of operations are described together with some of the features which contribute to the uniqueness of the SCR. The SCR gas analysis system and instrumentation are described. Extracts from some results obtained during a recent program of tests on a Rolls-Royce RB211 combustor are given.

  16. FPGA-based distributed computing microarchitecture for complex physical dynamics investigation.

    PubMed

    Borgese, Gianluca; Pace, Calogero; Pantano, Pietro; Bilotta, Eleonora

    2013-09-01

    In this paper, we present a distributed computing system, called DCMARK, aimed at solving partial differential equations at the basis of many investigation fields, such as solid state physics, nuclear physics, and plasma physics. This distributed architecture is based on the cellular neural network paradigm, which allows us to divide the differential equation system solving into many parallel integration operations to be executed by a custom multiprocessor system. We push the number of processors to the limit of one processor for each equation. In order to test the present idea, we choose to implement DCMARK on a single FPGA, designing the single processor so as to minimize its hardware requirements and to obtain a large number of easily interconnected processors. This approach is particularly suited to studying the properties of 1-, 2- and 3-D locally interconnected dynamical systems. In order to test the computing platform, we implement a 200-cell Korteweg-de Vries (KdV) equation solver and perform a comparison between simulations conducted on a high performance PC and on our system. Since our distributed architecture takes a constant computing time to solve the equation system, independently of the number of dynamical elements (cells) of the CNN array, it reduces the processing time more than other similar systems in the literature. To ensure a high level of reconfigurability, we design a compact system on programmable chip managed by a softcore processor, which controls the fast data/control communication between our system and a PC host. An intuitive graphical user interface allows us to change the calculation parameters and plot the results. PMID:24808576
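
    As a software point of comparison for the 200-cell KdV benchmark mentioned above, the sketch below solves the same equation, u_t + 6 u u_x + u_xxx = 0, on a 200-point periodic grid, with each grid point playing the role of one locally coupled cell. The domain, grid size, and soliton parameters are illustrative choices, not those used by the authors.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, L = 200, 50.0                       # 200 cells on a periodic domain of length L
dx = L / N
x = np.arange(N) * dx

def kdv_rhs(t, u):
    """du/dt for each cell from central differences of its nearest neighbours."""
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxxx = (np.roll(u, -2) - 2 * np.roll(u, -1)
            + 2 * np.roll(u, 1) - np.roll(u, 2)) / (2 * dx**3)
    return -6.0 * u * ux - uxxx

# Single-soliton initial condition u(x,0) = (c/2) sech^2(sqrt(c)/2 * (x - x0))
c, x0 = 1.0, 15.0
u0 = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - x0)) ** 2

sol = solve_ivp(kdv_rhs, (0.0, 5.0), u0, method="RK45", rtol=1e-6, atol=1e-9)
u_end = sol.y[:, -1]
print("peak moved from x =", x[u0.argmax()], "to x =", x[u_end.argmax()])
```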

  18. Running WRF on various distributed computing infrastructures through a standard-based Science Gateway

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Bruno, Riccardo; La Rocca, Giuseppe; Markussen Lunde, Torleif; Pehrson, Bjorn

    2014-05-01

    The Weather Research and Forecasting (WRF) modelling system is a widely used meso-scale numerical weather prediction system designed to serve both atmospheric research and operational forecasting needs. WRF has a large worldwide community counting more than 20,000 users in 130 countries, and it has been specifically designed to be a state-of-the-art atmospheric simulation system that is portable and runs efficiently on available parallel computing platforms. Although WRF can be executed in many different environments, ranging from a single core in a stand-alone machine up to the most sophisticated HPC platforms, there are no solutions yet that match the e-Science paradigm, where software, data and users are "linked" together by the network as components of distributed computing infrastructures. The topmost component of the typical e-Science model consists of Science Gateways, defined as community-developed sets of tools, applications, and data collections that normally are integrated via a portal to provide access to a distributed infrastructure. One of the many available Science Gateway solutions is the Catania Science Gateway Framework (CSGF - www.catania-science-gateways.it), whose most descriptive keywords are standard adoption and interoperability. The support of standards such as SAGA and SAML allows any CSGF user to seamlessly access and use both Grid and Cloud-based resources. In this work we present the CSGF and how it has been used in the context of the eI4Africa project (www.ei4africa.eu) to implement the Africa Grid Science Gateway (http://sgw.africa-grid.org), which allows users to execute WRF simulations on various kinds of distributed computing infrastructures at the same time, including the EGI Federated Cloud.

  19. Implementing a Physician's Workstation using client/server technology and the distributed computing environment.

    PubMed Central

    Pham, T. Q.; Young, C. Y.; Tang, P. C.; Suermondt, H. J.; Annevelink, J.

    1994-01-01

    PWS is a physician's workstation research prototype developed to explore the use of information management tools by physicians in the context of patient care. The original prototype was implemented in a client/server architecture using a broadcast message server. As we expanded the scope of the prototyping activities, we identified the limitations of the broadcast message server in the areas of scalability, security, and interoperability. To address these issues, we reimplemented PWS using the Open Software Foundation's Distributed Computing Environment (DCE). We describe the rationale for using DCE, the migration process, and the benefits achieved. Future work and recommendations are discussed. PMID:7950003

  20. Numerical ray-tracing approach with laser intensity distribution for LIDAR signal power function computation

    NASA Astrophysics Data System (ADS)

    Shi, Guangyuan; Li, Song; Huang, Ke; Li, Zile; Zheng, Guoxing

    2016-10-01

    We have developed a new numerical ray-tracing approach for LIDAR signal power function computation, in which the light round-trip propagation is analyzed by geometrical optics and a simple experiment is employed to acquire the laser intensity distribution. It is relatively more accurate and flexible than previous methods. We emphatically discuss the relationship between the inclined angle and the dynamic range of detector output signal in biaxial LIDAR system. Results indicate that an appropriate negative angle can compress the signal dynamic range. This technique has been successfully proved by comparison with real measurements.

  1. Computing an NPMLE for a mixing distribution in two closed heterogeneous population size models.

    PubMed

    Mao, Chang Xuan

    2008-12-01

    Binomial and geometric mixtures can be used to model data gathered in capture-recapture surveys of animal populations, removal surveys of harvest populations, registrations of disease populations, ecological species census, and so on. To compute a nonparametric maximum likelihood estimator for the mixing distribution of heterogeneous capture probabilities, we consider a conditional approach and use a reliable and fast integrative procedure which combines the EM algorithm to increase the likelihood and the vertex-exchange method to update the number of support points. A convergent Newtonian algorithm is used in the M-step of the EM algorithm. PMID:18821726
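
    The abstract names two ingredients: an EM step for the mixing weights and a vertex-exchange step for the support. The sketch below shows only the EM half, approximating the NPMLE of a binomial mixing distribution by a fixed fine grid of support points; the conditional (zero-truncated) likelihood and the vertex-exchange update used in the paper are omitted, and the data are simulated.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(3)
T = 10                                            # capture occasions
true_p = rng.beta(2, 5, size=500)                 # heterogeneous capture probabilities
counts = rng.binomial(T, true_p)                  # observed capture counts

# Fixed grid of support points; EM updates only the mixing weights.
grid = np.linspace(0.01, 0.99, 50)
weights = np.full(grid.size, 1.0 / grid.size)
like = binom.pmf(counts[:, None], T, grid[None, :])   # n_obs x n_support

for _ in range(500):
    resp = weights * like                         # E-step: unnormalised responsibilities
    resp /= resp.sum(axis=1, keepdims=True)
    weights = resp.mean(axis=0)                   # M-step for the mixing weights

print("estimated mean capture probability:", round((weights * grid).sum(), 3))
print("true mean capture probability:     ", round(true_p.mean(), 3))
```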

  4. pyCTQW: A continuous-time quantum walk simulator on distributed memory computers

    NASA Astrophysics Data System (ADS)

    Izaac, Josh A.; Wang, Jingbo B.

    2015-01-01

    In the general field of quantum information and computation, quantum walks are playing an increasingly important role in constructing physical models and quantum algorithms. We have recently developed a distributed memory software package pyCTQW, with an object-oriented Python interface, that allows efficient simulation of large multi-particle CTQW (continuous-time quantum walk)-based systems. In this paper, we present an introduction to the Python and Fortran interfaces of pyCTQW, discuss various numerical methods of calculating the matrix exponential, and demonstrate the performance behavior of pyCTQW on a distributed memory cluster. In particular, the Chebyshev and Krylov-subspace methods for calculating the quantum walk propagation are provided, as well as methods for visualization and data analysis.
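
    pyCTQW's own API is not reproduced here; as a minimal reference point for what a CTQW propagation computes, the sketch below evolves a single walker on a small cycle graph with a dense matrix exponential, psi(t) = exp(-iAt) psi(0), which is the operation that the package's Chebyshev and Krylov-subspace routines approximate at scale on distributed memory.

```python
import numpy as np
from scipy.linalg import expm

# Adjacency matrix of a cycle graph on n vertices (the walk Hamiltonian).
n = 12
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0                                  # walker starts at vertex 0

t = 3.0
psi_t = expm(-1j * A * t) @ psi0               # continuous-time quantum walk propagation
prob = np.abs(psi_t) ** 2

print("total probability:", round(prob.sum(), 6))   # stays 1 (unitary evolution)
print("most likely vertex at t=3:", prob.argmax())
```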

  5. Estimation of lifetime distributions on 1550-nm DFB laser diodes using Monte-Carlo statistic computations

    NASA Astrophysics Data System (ADS)

    Deshayes, Yannick; Verdier, Frederic; Bechou, Laurent; Tregon, Bernard; Danto, Yves; Laffitte, Dominique; Goudard, Jean Luc

    2004-09-01

    High performance and high reliability are two of the most important goals driving the penetration of optical transmission into telecommunication systems ranging from 880 nm to 1550 nm. Lifetime prediction, defined as the time at which a parameter reaches its maximum acceptable shift, remains the main result in terms of reliability estimation for a technology. For optoelectronic emissive components, selection tests and life testing are specifically used for reliability evaluation according to Telcordia GR-468 CORE requirements. This approach is based on extrapolation of degradation laws, based on physics of failure and electrical or optical parameters, allowing both a strong reduction in test time and long-term reliability prediction. Unfortunately, in the case of a mature technology, it becomes increasingly complex to calculate average lifetimes and failure rates (FITs) from ageing tests, in particular because of extremely low failure rates. For present laser diode technologies, times to failure tend to be on the order of 10^6 hours under typical conditions (Popt = 10 mW and T = 80 °C). These ageing tests must be performed on more than 100 components aged for 10,000 hours under a mix of temperature and drive-current conditions, leading to acceleration factors above 300-400. Such tests are costly and time consuming, and they cannot give a complete distribution of times to failure. A new approach consists in using statistical computations to extrapolate lifetime distributions and failure rates under operating conditions from the physical parameters of experimental degradation laws. In this paper, distributed feedback single-mode laser diodes (DFB-LD) used in 1550 nm telecommunication networks working at a 2.5 Gbit/s transfer rate are studied. Electrical and optical parameters have been measured before and after ageing tests performed at constant current, according to Telcordia GR-468 requirements. Cumulative failure rates and lifetime distributions are computed using statistical calculations and
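
    The statistical idea described above, propagating fitted degradation-law parameters through Monte Carlo draws to obtain a lifetime distribution and failure rates, can be sketched generically. All numerical values below (the linear drift law, the lognormal parameter spread, the failure criterion, and the activation energy) are placeholders, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(7)
n_devices = 100_000

# Hypothetical degradation law: a monitored parameter drifts linearly and the
# device fails when the drift reaches 20%. Drift rates (%/1000 h) vary
# lognormally from device to device, mimicking fitted ageing-test parameters.
drift_rate = rng.lognormal(mean=np.log(0.05), sigma=0.6, size=n_devices)
ttf_stress = 20.0 / drift_rate * 1000.0          # hours to failure at stress conditions

# Arrhenius acceleration from the stress temperature (80 C) to use conditions (40 C).
k_b, ea = 8.617e-5, 0.7                          # Boltzmann (eV/K), placeholder Ea (eV)
accel = np.exp(ea / k_b * (1 / (40 + 273.15) - 1 / (80 + 273.15)))
ttf_use = ttf_stress * accel

print(f"acceleration factor: {accel:.1f}")
print(f"median lifetime at use conditions: {np.median(ttf_use):.2e} h")
print(f"fraction failing within 1e6 h: {(ttf_use < 1e6).mean():.4%}")
```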

  6. Temperature Distribution Within a Defect-Free Silicon Carbide Diode Predicted by a Computational Model

    NASA Technical Reports Server (NTRS)

    Kuczmarski, Maria A.; Neudeck, Philip G.

    2000-01-01

    Most solid-state electronic devices (diodes, transistors, and integrated circuits) are based on silicon. Although this material works well for many applications, its properties limit its ability to function under extreme high-temperature or high-power operating conditions. Silicon carbide (SiC), with its desirable physical properties, could someday replace silicon for these types of applications. A major roadblock to realizing this potential is the quality of SiC material that can currently be produced. Semiconductors require very uniform, high-quality material, and commercially available SiC tends to suffer from defects in the crystalline structure that have largely been eliminated in silicon. In some power circuits, these defects can focus energy into an extremely small area, leading to overheating that can damage the device. In an effort to better understand the way that these defects affect the electrical performance and reliability of an SiC device in a power circuit, the NASA Glenn Research Center at Lewis Field began an in-house three-dimensional computational modeling effort. The goal is to predict the temperature distributions within a SiC diode structure subjected to the various transient overvoltage breakdown stresses that occur in power management circuits. A commercial computational fluid dynamics computer program (FLUENT; Fluent, Inc., Lebanon, New Hampshire) was used to build a model of a defect-free SiC diode and generate a computational mesh. A typical breakdown power density was applied over 0.5 msec in a heated layer at the junction between the p-type SiC and n-type SiC, and the temperature distribution throughout the diode was then calculated. The peak temperature extracted from the computational model agreed well (within 6 percent) with previous first-order calculations of the maximum expected temperature at the end of the breakdown pulse. This level of agreement is excellent for a model of this type and indicates that three

  7. Distribution of brominated flame retardants in different dust fractions in air from an electronics recycling facility.

    PubMed

    Julander, Anneli; Westberg, Håkan; Engwall, Magnus; van Bavel, Bert

    2005-11-01

    Twelve air samples were collected from an electronics recycling facility in Sweden, representing three different dust fractions: respirable, total, and inhalable dust. Four samples were collected from each fraction. The highest concentration of polybrominated diphenyl ether (PBDE) #209 (ten bromine atoms) was found in the samples from the inhalable dust (ID) fraction, which was 10 times higher than for the "total dust" (TD) fraction. The concentration ranges were 157.6-208.6, 13.9-16.7, and 2.8-3.3 ng/m3 for the inhalable, total, and respirable fractions, respectively. The second most abundant PBDE congener was PBDE #183 (seven bromine atoms), followed by 1,2-bis(2,4,6-tribromophenoxy)ethane (BTBPE), the second most abundant substance in all samples. In addition, decabromodiphenyl ethane (DeBDethane) was tentatively identified in five of the samples. Because of the large differences in air concentrations between the three fractions (ID, TD, and RD), it is suggested that the inhalable fraction, rather than the "total dust" fraction, should be used to assess air concentrations, in particular for the larger and more highly brominated flame retardants (BFRs).

  8. Evaluation of Optimal Distribution of Wind Power Facilities in Iowa for 2015

    SciTech Connect

    Factor, T.; Milligan, M.

    1999-08-05

    By the end of June 1999, about 250 megawatts of wind generation will have been dedicated in the state of Iowa. This represents the beginning of what is likely to be significant wind capacity development during the next 20 years in the state, as a result of possible public and governmental mandates and consumers' desire for sustainable sources of energy. As the utility industry in the United States moves towards a new structure, renewable energy sources continue to be an important part of new resource development. In this paper, we consider the predicted trends in load growth in Iowa. After accounting for the retirement of nuclear and older fossil fuel facilities over the next 15 years, we estimate Iowa's potential renewable generating capacity through the year 2015 and anticipate the contribution of wind energy to Iowa's portfolio. The Iowa Wind Energy Institute (IWEI) has been monitoring the wind resource in Iowa since June 1994 to obtain wind speed averages at 10, 33 and 50 meters above ground at fourteen geographically dispersed potential wind farm sites. Winds in the Midwest are primarily generated by fronts moving through the region. The Northwest Buffalo Ridge area of Iowa typically has wind speed averages of 7-8 m/s. Central Iowa may have typical winds slightly below this mean value. However, as a front passes through the state, there will be times when a wind farm in Central Iowa will produce more energy than one on Buffalo Ridge.

  9. A study of coupled rotor-fuselage vibration with higher harmonic control using a symbolic computing facility

    NASA Technical Reports Server (NTRS)

    Papavassiliou, I.; Venkatesan, C.; Friedmann, P. P.

    1990-01-01

    A fundamental study of vibration prediction and vibration reduction in helicopters using active controls was performed. The nonlinear equations of motion for a coupled rotor/flexible-fuselage system have been derived using computer algebra on a special purpose symbolic computing facility. The details of the derivation using the MACSYMA program are described. The trim state and vibratory response of the helicopter are obtained in a single pass by applying the harmonic balance technique and simultaneously satisfying the trim and the vibratory response of the helicopter for all rotor and fuselage degrees of freedom. The influence of fuselage flexibility on the vibratory response is studied. It is shown that conventional single-frequency higher harmonic control (HHC) is capable of reducing either the hub loads or the fuselage vibrations, but not both simultaneously. It is demonstrated that for simultaneous reduction of hub shears and fuselage vibrations a new scheme, called multiple higher harmonic control (MHHC), is required. The fundamental aspects of this scheme and its uniqueness are described in detail, providing new insight into vibration reduction in helicopters using HHC.

  10. The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2002-01-01

    With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies which indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technology infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will come in the form of services that can be integrated with the user's work environment and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources, such as computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.

  11. Distribution of radionuclides in an iron calibration standard for a free release measurement facility.

    PubMed

    Hult, Mikael; Stroh, Heiko; Marissens, Gerd; Tzika, Faidra; Lutter, Guillaume; Šurán, Jiri; Kovar, Petr; Skala, Lukas; Sud, Jaromír

    2016-03-01

    A Europallet-sized calibration standard, composed of 12 grey cast iron tubes contaminated with (60)Co and (110m)Ag and with a mass of 246 kg, was developed. As the tubes were produced through centrifugal casting, it was of particular concern to study the distribution of radionuclides in the radial direction of the tubes. This was done by removing 72 small samples (swarf) of ~0.3 g each from both the inside and outside of the tubes. All of the samples were measured in the underground laboratory HADES. PMID:26597655

  13. Facile synthesis of concentrated gold nanoparticles with low size-distribution in water: temperature and pH controls

    PubMed Central

    2011-01-01

    The citrate reduction method for the synthesis of gold nanoparticles (GNPs) has known advantages but usually provides products with a low nanoparticle concentration, which limits its application. Herein, we report a facile method to synthesize GNPs from concentrated chloroauric acid (2.5 mM) by adding sodium hydroxide and controlling the temperature. It was found that adding a proper amount of sodium hydroxide can produce uniform, concentrated GNPs with a low size distribution; otherwise, broadly distributed nanoparticles or unstable colloids were obtained. A low reaction temperature helps control the nanoparticle formation rate, and uniform GNPs can be obtained in the presence of optimized NaOH concentrations. The pH values of the obtained uniform GNPs were found to be very near neutral, and the influence of pH on the particle size distribution may reveal different formation mechanisms of GNPs under high- or low-pH conditions. Moreover, this modified synthesis method can save more than 90% of the energy in the heating step. Such an environmentally friendly synthesis method for gold nanoparticles may have great potential in large-scale manufacturing for commercial and industrial demand. PMID:21733153

  14. Classroom Note: Computational and Conceptual Understanding of the Connections among Standard Deviations, Z-Scores, and Normal Distributions

    ERIC Educational Resources Information Center

    Reyes, Melissa Lopez

    2003-01-01

    A structure for learning the connections among standard deviations, z-scores, and normal distributions is presented. The components of this structure are classified into intuitive or previously learned conceptual knowledge, computational knowledge, and formalized conceptual knowledge. (Contains 1 figure.)
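
    As a concrete instance of the connection the note teaches, the snippet below standardizes a raw score and reads the corresponding proportion under the normal curve; the numbers are arbitrary examples.

```python
from scipy.stats import norm

mean, sd, raw = 500.0, 100.0, 640.0
z = (raw - mean) / sd                    # standard deviations above the mean
proportion_below = norm.cdf(z)           # area under the standard normal curve

print(f"z-score: {z:.2f}")
print(f"proportion of scores below {raw:.0f}: {proportion_below:.3f}")
```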

  15. Concentration, distribution, and human health risk assessment of endosulfan from a manufacturing facility in Huai'an, China.

    PubMed

    Wang, De-Gao; Alaee, Mehran; Guo, Ming-Xing; Pei, Wei; Wu, Qian

    2014-09-01

    Endosulfan concentrations and distribution in air, soil, sediment, and foodstuffs in the area surrounding a production facility in Huai'an, China were investigated because of the threats endosulfan poses to the environment and human health. Air concentrations of endosulfan I, endosulfan II, and endosulfan sulfate measured in this study were several orders of magnitude higher than those reported previously for this region. Surface soil concentration ranges of endosulfan I, endosulfan II, and endosulfan sulfate were greater than those in sediment. Endosulfan II was the greatest contributor to total endosulfan concentrations in both surface sediment and soil, followed by endosulfan sulfate and endosulfan I. However, a different concentration profile was observed in fish and crop samples, with endosulfan sulfate having the highest concentration, followed by endosulfan I and endosulfan II. The concentration of Σendosulfans (endosulfans I and II) in soil decreased rapidly with increasing distance from the plant, by a factor of 10 within 45 km. Trace amounts of Σendosulfans were observed in deep soil layers, which implies that these compounds are transported through the leaching of pore water in soil. This demonstrates that emissions from the manufacturing facility can lead to ground water contamination in the area near the plant. A screening-level human health risk assessment of Σendosulfans based on the worst-case scenario was performed for people living in the vicinity of the manufacturing facility. The hazard indices were at least 2 orders of magnitude below 1, indicating that no adverse health effects are likely to occur at current exposure levels and that the risk to human health is generally acceptable.
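
    The screening-level calculation behind a hazard index is simple to show in outline: an average daily dose for each exposure route divided by a reference dose, summed over routes. The sketch below uses the standard EPA-style chronic intake formula with entirely hypothetical exposure parameters; the reference dose value is an assumption for illustration and the numbers do not reproduce the study's results.

```python
# Screening-level hazard quotient / hazard index from chronic average daily dose.
# All exposure parameters below are hypothetical placeholders.

RFD = 0.006          # assumed oral reference dose, mg/kg-day (illustrative value)

def average_daily_dose(conc_mg_per_kg, intake_kg_per_day, ef_days, ed_years,
                       body_weight_kg=60.0, at_days=None):
    """Chronic average daily dose: C * IR * EF * ED / (BW * AT)."""
    at_days = at_days or ed_years * 365.0
    return (conc_mg_per_kg * intake_kg_per_day * ef_days * ed_years
            / (body_weight_kg * at_days))

# Two hypothetical exposure routes near the facility: fish and grain consumption.
routes = {
    "fish":  average_daily_dose(0.010, 0.05, 350, 30),
    "grain": average_daily_dose(0.002, 0.25, 350, 30),
}
hazard_quotients = {route: add / RFD for route, add in routes.items()}
hazard_index = sum(hazard_quotients.values())

for route, hq in hazard_quotients.items():
    print(f"HQ[{route}] = {hq:.3g}")
print(f"hazard index = {hazard_index:.3g}  (<1 means no adverse effects expected)")
```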

  16. Distribution and identification of culturable airborne microorganisms in a Swiss milk processing facility.

    PubMed

    Brandl, Helmut; Fricker-Feer, Claudia; Ziegler, Dominik; Mandal, Jyotshna; Stephan, Roger; Lehner, Angelika

    2014-01-01

    Airborne communities (mainly bacteria) were sampled and characterized (concentration levels and diversity) at 1 outdoor and 6 indoor sites within a Swiss dairy production facility. Air samples were collected on 2 sampling dates in different seasons, one in February and one in July 2012 using impaction bioaerosol samplers. After cultivation, isolates were identified by mass spectrometry (matrix-assisted laser desorption/ionization-time-of-flight) and molecular (sequencing of 16S rRNA and rpoB genes) methods. In general, total airborne particle loads and total bacterial counts were higher in winter than in summer, but remained constant within each indoor sampling site at both sampling times (February and July). Bacterial numbers were generally very low (<100 cfu/m(3) of air) during the different steps of milk powder production. Elevated bacterial concentrations (with mean values of 391 ± 142 and 179 ± 33 cfu/m(3) of air during winter and summer sampling, respectively; n=15) occurred mainly in the "logistics area," where products in closed tins are packed in secondary packaging material and prepared for shipping. However, total bacterial counts at the outdoor site varied, with a 5- to 6-fold higher concentration observed in winter compared with summer. Twenty-five gram-positive and gram-negative genera were identified as part of the airborne microflora, with Bacillus and Staphylococcus being the most frequent genera identified. Overall, the culturable microflora community showed a composition typical and representative for the specific location. Bacterial counts were highly correlated with total airborne particles in the size range 1 to 5 µm, indicating that a simple surveillance system based upon counting of airborne particles could be implemented. The data generated in this study could be used to evaluate the effectiveness of the dairy plant's sanitation program and to identify potential sources of airborne contamination, resulting in increased food safety.

  17. Using Java for distributed computing in the Gaia satellite data processing

    NASA Astrophysics Data System (ADS)

    O'Mullane, William; Luri, Xavier; Parsons, Paul; Lammers, Uwe; Hoar, John; Hernandez, Jose

    2011-10-01

    In recent years Java has matured into a stable, easy-to-use language with the flexibility of an interpreter (for reflection etc.) but the performance and type checking of a compiled language. When we started using Java for astronomical applications around 1999, they were the first of their kind in astronomy. Now a great deal of astronomy software is written in Java, as are many business applications. We discuss the current environment and trends concerning the language and present an actual example of scientific use of Java for high-performance distributed computing: ESA's mission Gaia. The Gaia scanning satellite will perform a galactic census of about 1,000 million objects in our galaxy. The Gaia community has chosen to write its processing software in Java. We explore the manifold reasons for choosing Java for this large science collaboration. Gaia processing is numerically complex but highly distributable, some parts being embarrassingly parallel. We describe the Gaia processing architecture and its realisation in Java. We delve into the astrometric solution, which is the most advanced and most complex part of the processing. The Gaia simulator is also written in Java and is the most mature code in the system. It has been running successfully since about 2005 on the supercomputer "Marenostrum" in Barcelona. We relate experiences of using Java on a large shared machine. Finally we discuss Java, including some of its problems, for scientific computing.

  18. DONBOL: A computer program for predicting axisymmetric nozzle afterbody pressure distributions and drag at subsonic speeds

    NASA Technical Reports Server (NTRS)

    Putnam, L. E.

    1979-01-01

    A Neumann solution for the inviscid external flow was coupled with a modified Reshotko-Tucker integral boundary-layer technique, the control-volume method of Presz for calculating flow in the separated region, and an inviscid one-dimensional solution for the jet exhaust flow in order to predict axisymmetric nozzle afterbody pressure distributions and drag. The viscous and inviscid flows are solved iteratively until convergence is obtained. A computer program implementing this procedure, called DONBOL, was written. A description of the program and a guide to its use are given. Comparisons of the predictions of this method with experiments show that the method accurately predicts the pressure distributions of boattail afterbodies that have the jet exhaust flow simulated by solid bodies. For nozzle configurations that have the jet exhaust simulated by high-pressure air, the present method significantly underpredicts the magnitude of nozzle pressure drag. This deficiency results because the method neglects the effects of jet plume entrainment. The method is limited to subsonic free-stream Mach numbers below that for which the flow over the body of revolution becomes sonic.

  19. From Distributional Semantics to Conceptual Spaces: A Novel Computational Method for Concept Creation

    NASA Astrophysics Data System (ADS)

    McGregor, Stephen; Agres, Kat; Purver, Matthew; Wiggins, Geraint A.

    2015-12-01

    We investigate the relationship between lexical spaces and contextually-defined conceptual spaces, offering applications to creative concept discovery. We define a computational method for discovering members of concepts based on semantic spaces: starting with a standard distributional model derived from corpus co-occurrence statistics, we dynamically select characteristic dimensions associated with seed terms, and thus a subspace of terms defining the related concept. This approach performs as well as, and in some cases better than, leading distributional semantic models on a WordNet-based concept discovery task, while also providing a model of concepts as convex regions within a space with interpretable dimensions. In particular, it performs well on more specific, contextualized concepts; to investigate this we therefore move beyond WordNet to a set of human empirical studies, in which we compare output against human responses on a membership task for novel concepts. Finally, a separate panel of judges rate both model output and human responses, showing similar ratings in many cases, and some commonalities and divergences which reveal interesting issues for computational concept discovery.
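
    The dimension-selection step described above can be illustrated on a toy co-occurrence matrix: pick the context dimensions most strongly associated with the seed terms, then rank all vocabulary terms by similarity to the seed centroid within that subspace. The matrix, terms, and selection rule here are invented for illustration and are far simpler than those used in the paper.

```python
import numpy as np

# Toy term-by-context co-occurrence matrix (rows: terms, columns: context features).
terms = ["dog", "cat", "wolf", "car", "truck", "banana"]
contexts = ["fur", "bark", "wheel", "engine", "fruit", "wild"]
M = np.array([
    [8, 9, 0, 0, 0, 2],    # dog
    [9, 1, 0, 0, 0, 3],    # cat
    [7, 6, 0, 0, 0, 9],    # wolf
    [0, 0, 9, 8, 0, 0],    # car
    [0, 0, 8, 9, 0, 0],    # truck
    [0, 0, 0, 0, 9, 1],    # banana
], dtype=float)

def concept_members(seed_terms, k_dims=3, top_n=3):
    seeds = [terms.index(t) for t in seed_terms]
    # Select the k context dimensions most characteristic of the seed terms.
    dims = np.argsort(M[seeds].mean(axis=0))[::-1][:k_dims]
    sub = M[:, dims]
    centroid = sub[seeds].mean(axis=0)
    # Cosine similarity of every term to the seed centroid within the subspace.
    sims = sub @ centroid / (np.linalg.norm(sub, axis=1) * np.linalg.norm(centroid) + 1e-12)
    ranked = np.argsort(sims)[::-1]
    return [(terms[i], round(float(sims[i]), 3)) for i in ranked[:top_n]]

print("concept seeded by ['dog', 'wolf']:", concept_members(["dog", "wolf"]))
```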

  20. Computationally driven deletion of broadly distributed T cell epitopes in a biotherapeutic candidate

    PubMed Central

    Salvat, Regina S.; Parker, Andrew S; Guilliams, Andrew; Choi, Yoonjoo

    2014-01-01

    Biotherapeutics are subject to immune surveillance within the body, and anti-biotherapeutic immune responses can compromise drug efficacy and patient safety. Initial development of targeted antidrug immune memory is coordinated by T cell recognition of immunogenic subsequences, termed “T cell epitopes.” Biotherapeutics may therefore be deimmunized by mutating key residues within cognate epitopes, but there exist complex trade-offs between immunogenicity, mutational load, and protein structure–function. Here, a protein deimmunization algorithm has been applied to P99 beta-lactamase, a component of antibody-directed enzyme prodrug therapies. The algorithm, integer programming for immunogenic proteins, seamlessly integrates computational prediction of T cell epitopes with both 1- and 2-body sequence potentials that assess protein tolerance to epitope-deleting mutations. Compared to previously deimmunized P99 variants, which bore only one or two mutations, the enzymes designed here contain 4–5 widely distributed substitutions. As a result, they exhibit broad reductions in major histocompatibility complex recognition. Despite their high mutational loads and markedly reduced immunoreactivity, all eight engineered variants possessed wild-type or better catalytic activity. Thus, the protein design algorithm is able to disrupt broadly distributed epitopes while maintaining protein function. As a result, this computational tool may prove useful in expanding the repertoire of next-generation biotherapeutics. PMID:24880662
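
    The optimization described above couples an epitope predictor with sequence potentials under a mutation budget. As a heavily simplified stand-in for that integer program, the brute-force sketch below searches small mutation sets that minimize a toy epitope score while keeping a toy stability penalty within bounds; the scoring functions, sequence, candidate mutations, and thresholds are all invented for illustration.

```python
from itertools import combinations

# Toy protein fragment and candidate point mutations (position, new residue).
sequence = list("ILKEPVHGVYYDPSKDLIAE")
candidates = [(2, "A"), (5, "G"), (9, "S"), (13, "N"), (16, "A")]

def epitope_score(seq):
    """Toy immunogenicity score: count hydrophobic-rich 9-mers (stand-in predictor)."""
    hydrophobic = set("ILVFMYW")
    windows = ["".join(seq[i:i + 9]) for i in range(len(seq) - 8)]
    return sum(1 for w in windows if sum(aa in hydrophobic for aa in w) >= 4)

def stability_penalty(mutations):
    """Toy 1-body penalty: each mutation costs 1; adjacent mutated positions cost extra."""
    positions = sorted(pos for pos, _ in mutations)
    adjacent = sum(1 for a, b in zip(positions, positions[1:]) if b - a == 1)
    return len(mutations) + 2 * adjacent

best = (epitope_score(sequence), 0, ())            # (score, penalty, mutation set)
for k in range(1, 4):                              # mutation budget of at most 3
    for combo in combinations(candidates, k):
        seq = sequence[:]
        for pos, aa in combo:
            seq[pos] = aa
        penalty = stability_penalty(combo)
        if penalty <= 4:                           # toy tolerance threshold
            best = min(best, (epitope_score(seq), penalty, combo))

score, penalty, combo = best
print("wild-type epitope score:", epitope_score(sequence))
print("best variant:", combo, "score:", score, "penalty:", penalty)
```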