Science.gov

Sample records for facility distributed computer

  1. DNET: A communications facility for distributed heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Tole, John; Nagappan, S.; Clayton, J.; Ruotolo, P.; Williamson, C.; Solow, H.

    1989-01-01

    This document describes DNET, a heterogeneous data communications networking facility. DNET allows programs operating on hosts on dissimilar networks to communicate with one another without concern for computer hardware, network protocol, or operating system differences. The overall DNET network is defined as the collection of host machines/networks on which the DNET software is operating. Each underlying network is considered a DNET 'domain'. Data communications service is provided between any two processes on any two hosts on any of the networks (domains) that may be reached via DNET. DNET provides protocol transparent, reliable, streaming data transmission between hosts (restricted, initially to DECnet and TCP/IP networks). DNET also provides variable length datagram service with optional return receipts.

  2. Improving CMS data transfers among its distributed computing facilities

    NASA Astrophysics Data System (ADS)

    Flix, J.; Magini, N.; Sartirana, A.

    2011-12-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on their usage, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels in a reliable and robust way.

  3. Lustre Distributed Name Space (DNE) Evaluation at the Oak Ridge Leadership Computing Facility (OLCF)

    SciTech Connect

    Simmons, James S.; Leverman, Dustin B.; Hanley, Jesse A.; Oral, Sarp

    2016-08-22

    This document describes the Lustre Distributed Name Space (DNE) evaluation carried out at the Oak Ridge Leadership Computing Facility (OLCF) between 2014 and 2015. DNE is a development project funded by OpenSFS to improve Lustre metadata performance and scalability. The development effort has been split into two parts: the first phase (DNE P1) provides support for remote directories over remote Lustre Metadata Server (MDS) nodes and Metadata Target (MDT) devices, while the second phase (DNE P2) addresses split directories over multiple remote MDS nodes and MDT devices. The OLCF has been actively evaluating the performance, reliability, and functionality of both DNE phases. For these tests, an internal OLCF testbed was used. Results are promising, and the OLCF is planning a full DNE deployment on production systems in the mid-2016 timeframe.

  4. The Overview of the National Ignition Facility Distributed Computer Control System

    NASA Astrophysics Data System (ADS)

    Lagin, Lawrence

    The Integrated Computer Control System (ICCS) for the National Ignition Facility (NIF) is a layered architecture of 300 front-end processors (FEP) coordinated by supervisor subsystems including automatic beam alignment and wavefront control, laser and target diagnostics, pulse power, and shot control timed to 30 ps. FEP computers incorporate either VxWorks on PowerPC or Solaris on UltraSPARC processors that interface to over 45,000 control points attached to VME-bus or PCI-bus crates, respectively. Typical devices are stepping motors, transient digitizers, calorimeters, and photodiodes. A further front-end segment comprises an additional 14,000 control points for industrial controls, including vacuum, argon, synthetic air, and safety interlocks, implemented with Allen-Bradley programmable logic controllers (PLCs). The computer network is augmented by asynchronous transfer mode (ATM) links that deliver video streams from 500 sensor cameras monitoring the 192 laser beams to operator workstations. Software is based on an object-oriented framework using CORBA distribution that incorporates services for archiving, machine configuration, graphical user interface, monitoring, event logging, scripting, alert management, and access control. Software coding, using a mixed-language environment of Ada95 and Java, is one-third complete at over 300 thousand source lines. Control system installation is currently under way for the first 8 beams, with project completion scheduled for 2008.

  5. Distributed Computing.

    ERIC Educational Resources Information Center

    Ryland, Jane N.

    1988-01-01

    The microcomputer revolution, in which small and large computers have gained tremendously in capability, has created a distributed computing environment. This circumstance presents administrators with the opportunities and the dilemmas of choosing appropriate computing resources for each situation. (Author/MSE)

  7. Distributed computer control system in the Nova Laser Fusion Test Facility

    SciTech Connect

    Not Available

    1985-09-01

    The EE Technical Review has two purposes - to inform readers of various activities within the Electronics Engineering Department and to promote the exchange of ideas. The articles, by design, are brief summaries of EE work. The articles included in this report are as follows: Overview - Nova Control System; Centralized Computer-Based Controls for the Nova Laser Facility; Nova Pulse-Power Control System; Nova Laser Alignment Control System; Nova Beam Diagnostic System; Nova Target-Diagnostics Control System; and Nova Shot Scheduler. The 7 papers are individually abstracted.

  8. Redirecting Under-Utilised Computer Laboratories into Cluster Computing Facilities

    ERIC Educational Resources Information Center

    Atkinson, John S.; Spenneman, Dirk H. R.; Cornforth, David

    2005-01-01

    Purpose: To provide administrators at an Australian university with data on the feasibility of redirecting under-utilised computer laboratory facilities into a distributed high performance computing facility. Design/methodology/approach: The individual log-in records for each computer located in the computer laboratories at the university were…

  10. A computational test facility for distributed analysis of gravitational wave signals

    NASA Astrophysics Data System (ADS)

    Amico, P.; Bosi, L.; Cattuto, C.; Gammaitoni, L.; Punturo, M.; Travasso, F.; Vocca, H.

    2004-03-01

    In the gravitational wave detector Virgo, the in-time detection of a gravitational wave signal from a coalescing binary stellar system is an intensive computational task. A parallel computing scheme using the message passing interface (MPI) is described. Performance results on a small-scale cluster are reported.
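
    The MPI scheme above lends itself to splitting a template bank across processes. As a hedged, minimal sketch only (not the Virgo pipeline; the sample rate, the sinusoidal "templates", and the toy matched-filter statistic are invented for illustration), the following mpi4py program broadcasts a common data segment, assigns each rank a slice of the bank, and gathers the best match:

      # Minimal sketch, not the Virgo code: split a toy template bank over MPI
      # ranks with mpi4py; data, templates and the match statistic are invented.
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      fs = 4096                             # assumed sample rate (Hz)
      t = np.arange(0, 4, 1.0 / fs)         # 4 s of synthetic strain data
      if rank == 0:
          rng = np.random.default_rng(0)
          data = np.sin(2 * np.pi * 120.0 * t) + 0.5 * rng.standard_normal(t.size)
      else:
          data = None
      data = comm.bcast(data, root=0)       # every rank filters the same segment

      bank = np.linspace(50.0, 300.0, 256)  # toy "template bank" of frequencies
      my_templates = bank[rank::size]       # each rank takes a slice of the bank

      def match(freq):
          # Toy matched-filter statistic: normalized correlation with a template.
          template = np.sin(2 * np.pi * freq * t)
          return float(abs(np.dot(data, template)) / np.linalg.norm(template))

      local_best = max(((match(f), f) for f in my_templates), default=(-1.0, -1.0))
      all_best = comm.gather(local_best, root=0)
      if rank == 0:
          stat, freq = max(all_best)
          print(f"best statistic {stat:.2f} at template frequency {freq:.1f} Hz")

    Run under mpirun with however many ranks the cluster offers; adding ranks simply shrinks each process's share of the bank.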

  11. Chapter on Distributed Computing

    DTIC Science & Technology

    1989-02-01

    MIT Laboratory for Computer Science, MIT/LCS/TM-384. Chapter on Distributed Computing. Leslie Lamport, Nancy... Keywords: distributed computing, distributed systems models, distributed algorithms, message-passing, shared variables.

  12. Physics Division computer facilities

    SciTech Connect

    Cyborski, D.R.; Teh, K.M.

    1995-08-01

    The Physics Division maintains several computer systems for data analysis, general-purpose computing, and word processing. While the VMS VAX clusters are still used, this past year saw a greater shift to the Unix Cluster with the addition of more RISC-based Unix workstations. The main Divisional VAX cluster which consists of two VAX 3300s configured as a dual-host system serves as boot nodes and disk servers to seven other satellite nodes consisting of two VAXstation 3200s, three VAXstation 3100 machines, a VAX-11/750, and a MicroVAX II. There are three 6250/1600 bpi 9-track tape drives, six 8-mm tapes and about 9.1 GB of disk storage served to the cluster by the various satellites. Also, two of the satellites (the MicroVAX and VAX-11/750) have DAPHNE front-end interfaces for data acquisition. Since the tape drives are accessible cluster-wide via a software package, they are, in addition to replay, used for tape-to-tape copies. There is however, a satellite node outfitted with two 8 mm drives available for this purpose. Although not part of the main cluster, a DEC 3000 Alpha machine obtained for data acquisition is also available for data replay. In one case, users reported a performance increase by a factor of 10 when using this machine.

  13. AMRITA -- A computational facility

    SciTech Connect

    Shepherd, J.E.; Quirk, J.J.

    1998-02-23

    Amrita is a software system for automating numerical investigations. The system is driven using its own powerful scripting language, Amrita, which facilitates both the composition and archiving of complete numerical investigations, as distinct from isolated computations. Once archived, an Amrita investigation can later be reproduced by any interested party, and not just the original investigator, for no cost other than the raw CPU time needed to parse the archived script. In fact, this entire lecture can be reconstructed in such a fashion. To do this, the script: constructs a number of shock-capturing schemes; runs a series of test problems; generates the plots shown; outputs the LaTeX to typeset the notes; performs a myriad of behind-the-scenes tasks to glue everything together. Thus Amrita has all the characteristics of an operating system and should not be mistaken for a common-or-garden code.

  14. Program Facilitates Distributed Computing

    NASA Technical Reports Server (NTRS)

    Hui, Joseph

    1993-01-01

    KNET computer program facilitates distribution of computing between UNIX-compatible local host computer and remote host computer, which may or may not be UNIX-compatible. Capable of automatic remote log-in. User communicates interactively with remote host computer. Data output from remote host computer directed to local screen, to local file, and/or to local process. Conversely, data input from keyboard, local file, or local process directed to remote host computer. Written in ANSI standard C language.

  15. Distributed computing in bioinformatics.

    PubMed

    Jain, Eric

    2002-01-01

    This paper provides an overview of methods and current applications of distributed computing in bioinformatics. Distributed computing is a strategy of dividing a large workload among multiple computers to reduce processing time, or to make use of resources such as programs and databases that are not available on all computers. Participating computers may be connected either through a local high-speed network or through the Internet.

  16. Parallel and Distributed Computing.

    DTIC Science & Technology

    1986-12-12

    program was devoted to parallel and distributed computing. Support for this part of the program was obtained from the present Army contract and a...Umesh Vazirani. A workshop on parallel and distributed computing was held from May 19 to May 23, 1986, and drew 141 participants. Keywords: Mathematical programming; Protocols; Randomized algorithms. (Author)

  17. Distributed Computing and Collaboration Framework (DCCF)

    DTIC Science & Technology

    2002-09-01

    The Distributed Computing and Collaboration Framework has been developed by the Space and Naval Warfare Systems Center, San Diego (a Naval research and development facility), under the sponsorship of the Office of Naval

  18. Quantum computing Hyper Terahertz Facility opens

    NASA Astrophysics Data System (ADS)

    Singh Chadha, Kulvinder

    2016-01-01

    A new facility has opened at the University of Surrey to use terahertz radiation for quantum computing. The Hyper Terahertz Facility (HTF) is a joint collaboration between the University of Surrey and the National Physical Laboratory (NPL).

  19. Coping with distributed computing

    SciTech Connect

    Cormell, L.

    1992-09-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent; he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by providing some examples of the approaches taken at various HEP institutions. In addition, a brief review of commercial directions or products for distributed computing and management will be given.

  20. 2015 Annual Report - Argonne Leadership Computing Facility

    SciTech Connect

    Collins, James R.; Papka, Michael E.; Cerny, Beth A.; Coffey, Richard M.

    2015-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  1. 2014 Annual Report - Argonne Leadership Computing Facility

    SciTech Connect

    Collins, James R.; Papka, Michael E.; Cerny, Beth A.; Coffey, Richard M.

    2014-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  2. Building a Computable Facility Model

    DTIC Science & Technology

    2002-10-01

    Keywords: Building Composer; facility design; facility management; Fort Future; decision support tools; installation design; integrated software; simulation ... modeling.

  3. The Survivable Distributed Computing Environment

    DTIC Science & Technology

    1994-06-01

    an architecture for a survivable Distributed Computing Environment (SDCE). In essence, the SDCE will be a base upon which survivable distributed...and/or ISIS distributed Computing Environments to provide many of the SDCE requirements.

  4. Central Computational Facility CCF communications subsystem options

    NASA Technical Reports Server (NTRS)

    Hennigan, K. B.

    1979-01-01

    A MITRE study which investigated the communication options available to support both the remaining Central Computational Facility (CCF) computer systems and the proposed U1108 replacements is presented. The facilities utilized to link the remote user terminals with the CCF were analyzed and guidelines to provide more efficient communications were established.

  5. Simplified Distributed Computing

    NASA Astrophysics Data System (ADS)

    Li, G. G.

    2006-05-01

    Distributed computing ranges from high-performance parallel computing and grid computing to environments where idle CPU cycles and storage space of numerous networked systems are harnessed to work together through the Internet. In this work we focus on building an easy and affordable solution for computationally intensive problems in scientific applications based on existing technology and hardware resources. This system consists of a series of controllers. When a job request is detected by a monitor or initialized by an end user, the job manager launches the specific job handler for this job. The job handler pre-processes the job, partitions the job into relatively independent tasks, and distributes the tasks into the processing queue. The task handler picks up the related tasks, processes the tasks, and puts the results back into the processing queue. The job handler also monitors and examines the tasks and the results, and assembles the task results into the overall solution for the job request when all tasks are finished for each job. A resource manager configures and monitors all participating nodes. A distributed agent is deployed on all participating nodes to manage the software download and report the status. The processing queue is the key to the success of this distributed system. We use BEA's WebLogic JMS queue in our implementation. It guarantees message delivery and has message priority and re-try features so that tasks never get lost. The entire system is built on J2EE technology and it can be deployed on heterogeneous platforms. It can handle algorithms and applications developed in any languages on any platforms. J2EE adaptors are provided to manage and communicate the existing applications to the system so that the applications and algorithms running on Unix, Linux and Windows can all work together. This system is easy and fast to develop based on the industry's well-adopted technology. It is highly scalable and heterogeneous. It is
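
    The job-handler/task-handler/queue pattern described above can be sketched without any J2EE machinery. The following is an illustrative single-machine stand-in (the real system used BEA WebLogic JMS; the multiprocessing queues, the sum-of-squares job, and all names below are assumptions made only to show the shape of the pattern):

      # Illustrative sketch of the queue-centred job/task pattern, using Python's
      # multiprocessing queues on one machine. The original system used BEA
      # WebLogic JMS and J2EE; everything here is an assumption for illustration.
      import multiprocessing as mp

      def task_handler(task_q, result_q):
          """Pick up tasks from the processing queue, compute, push results back."""
          for task_id, lo, hi in iter(task_q.get, None):   # None is the stop signal
              result_q.put((task_id, sum(i * i for i in range(lo, hi))))

      def job_handler(job_range, n_tasks=8, n_workers=4):
          """Partition a job into independent tasks, distribute them, assemble results."""
          task_q, result_q = mp.Queue(), mp.Queue()
          workers = [mp.Process(target=task_handler, args=(task_q, result_q))
                     for _ in range(n_workers)]
          for w in workers:
              w.start()

          lo, hi = job_range
          step = (hi - lo) // n_tasks
          bounds = [lo + i * step for i in range(n_tasks)] + [hi]
          for i in range(n_tasks):                         # distribute the tasks
              task_q.put((i, bounds[i], bounds[i + 1]))
          for _ in workers:                                # one stop signal per worker
              task_q.put(None)

          results = dict(result_q.get() for _ in range(n_tasks))
          for w in workers:
              w.join()
          return sum(results.values())                     # assemble overall solution

      if __name__ == "__main__":
          print(job_handler((0, 1_000_000)))               # sum of squares below 1e6

    A durable message broker supplies what these in-memory queues lack: guaranteed delivery, priorities, and retries across machines.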

  6. Computational analysis of experimental results on spatial distributions of fission reaction rates in the annular core of a modular HTGR, obtained at the ASTRA critical facility

    SciTech Connect

    Boyarinov, V. F.; Glushkov, E. S.; Fomichenko, P. A.; Kompaniets, G. V.; Krutov, A. M.; Marova, E. V.; Nevinitsa, V. A.; Polyakov, D. N.; Smirnov, O. N.; Sukharev, Y. P.; Zimin, A. A.

    2006-07-01

    The paper presents computational analysis of some experimental results on spatial distribution of ²³⁵U fission reaction rates in a critical assembly with the annular core and different configurations of safety rods, placed into the inner reflector made of graphite. The presented computational analysis of experimental data was performed with the set of codes used in HTGR design calculations. (authors)

  7. 2016 Annual Report - Argonne Leadership Computing Facility

    SciTech Connect

    Collins, Jim; Papka, Michael E.; Cerny, Beth A.; Coffey, Richard M.

    2016-01-01

    The Argonne Leadership Computing Facility (ALCF) helps researchers solve some of the world’s largest and most complex problems, while also advancing the nation’s efforts to develop future exascale computing systems. This report presents some of the ALCF’s notable achievements in key strategic areas over the past year.

  8. The Fermilab central computing facility architectural model

    NASA Astrophysics Data System (ADS)

    Nicholls, J.

    1989-12-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM computing engine, ACP farms, and (primary) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab.

  9. The Laboratory for Oceans Computing Facility

    NASA Technical Reports Server (NTRS)

    Kao, R.

    1988-01-01

    The first VAX computer in the Laboratory for Oceans Computing Facility (LOCF) was installed and the facility was largely expanded. The growth is not only in hardware and software, but also in the number of users and in supporting research and development projects. The LOCF serves as a general purpose computing facility for: ocean color research projects, sea ice research projects, processing of the Nimbus-7 Coastal Zone Color Scanner data set, real time ingest and analysis of TIROS-N satellite data, study of the Synthetic Aperture Radar data, study of LANDSAT data, and many others. The physical space and the electrical power layout of the computing room were modified to accommodate all the equipment. The LOCF has several image processing stations which include two International Imaging Systems (IIS) model 75 processors and one Adage processor. The facility has the capability of ingesting the TIROS-N HRPT satellite data on a real time basis. More than 30 software packages were installed on the systems. System software packages, network software, FORTRAN and C compilers, database management software, image processing software, graphics, mathematics and statistics packages, TAE, Catalog Manager, GEMPAK, LAS and many other software developed on the LOCF computers such as SEAPAK have greatly advanced the capability of the LOCF.

  10. Distributed Computing Framework for Synthetic Radar Application

    NASA Technical Reports Server (NTRS)

    Gurrola, Eric M.; Rosen, Paul A.; Aivazis, Michael

    2006-01-01

    We are developing an extensible software framework, in response to Air Force and NASA needs for distributed computing facilities for a variety of radar applications. The objective of this work is to develop a Python-based software framework, that is, the framework elements of the middleware that allow developers to control processing flow on a grid in a distributed computing environment. Framework architectures to date allow developers to connect processing functions together as interchangeable objects, thereby allowing a data flow graph to be devised for a specific problem to be solved. The Pyre framework, developed at the California Institute of Technology (Caltech), and now being used as the basis for next-generation radar processing at JPL, is a Python-based software framework. We have extended the Pyre framework to include new facilities to deploy processing components as services, including components that monitor and assess the state of the distributed network for eventual real-time control of grid resources.
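
    To make the "interchangeable processing components wired into a data-flow graph" idea concrete, here is a hedged toy sketch in plain Python. It is not the Pyre API; the Component and Pipeline classes and the two stage names are hypothetical stand-ins for the framework elements described above:

      # Hedged illustration only: interchangeable processing stages composed into
      # a linear data-flow pipeline. Not the Pyre API; all names are hypothetical.
      from typing import Callable, Iterable, List

      class Component:
          """A processing stage with a uniform interface, so stages are swappable."""
          def __init__(self, name: str, func: Callable):
              self.name, self.func = name, func
          def process(self, data):
              return self.func(data)

      class Pipeline:
          """A linear data-flow graph: the output of one component feeds the next."""
          def __init__(self, components: Iterable[Component]):
              self.components: List[Component] = list(components)
          def run(self, data):
              for c in self.components:
                  data = c.process(data)
              return data

      # Example: a toy "raw echoes -> range compression -> multilook" chain.
      pipeline = Pipeline([
          Component("range_compress", lambda d: [x * 2.0 for x in d]),
          Component("multilook",      lambda d: sum(d) / len(d)),
      ])
      print(pipeline.run([1.0, 2.0, 3.0]))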

  11. Distributed Real-Time Computing with Harness

    SciTech Connect

    Di Saverio, Emanuele; Cesati, Marco; Di Biagio, Christian; Pennella, Guido; Engelmann, Christian

    2007-01-01

    Modern parallel and distributed computing solutions are often built on a "middleware" software layer providing a higher and common level of service between computational nodes. Harness is an adaptable, plugin-based middleware framework for parallel and distributed computing. This paper reports recent research and development results of using Harness for real-time distributed computing applications in the context of an industrial environment with the need to perform several safety-critical tasks. The presented work exploits the modular architecture of Harness in conjunction with a lightweight threaded implementation to resolve several real-time issues by adding three new Harness plug-ins to provide a prioritized lightweight execution environment, low latency communication facilities, and local timestamped event logging.
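
    Two of the plug-in facilities named above, prioritized execution and local timestamped event logging, can be illustrated with a small stand-alone sketch. This is not the Harness plug-in interface; the PriorityExecutor class and its methods are assumptions made purely for illustration:

      # Illustration only (not the Harness plug-in API): prioritized execution
      # plus a local timestamped event log, the two facilities described above.
      import heapq, time

      class PriorityExecutor:
          """Run submitted callables in priority order (lower number runs first)."""
          def __init__(self):
              self._heap, self._seq = [], 0
              self.events = []                       # local timestamped event log

          def log(self, message):
              self.events.append((time.monotonic_ns(), message))

          def submit(self, priority, func, *args):
              heapq.heappush(self._heap, (priority, self._seq, func, args))
              self._seq += 1                         # tie-breaker keeps submission order
              self.log(f"submitted priority={priority} task={func.__name__}")

          def run_all(self):
              while self._heap:
                  priority, _, func, args = heapq.heappop(self._heap)
                  self.log(f"start priority={priority} task={func.__name__}")
                  func(*args)
                  self.log(f"done priority={priority} task={func.__name__}")

      ex = PriorityExecutor()
      ex.submit(5, print, "low-priority task")
      ex.submit(1, print, "high-priority task")      # runs first despite later submission
      ex.run_all()
      for ts, msg in ex.events:
          print(ts, msg)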

  12. Knowledge and Distributed computation

    DTIC Science & Technology

    1990-05-01

    convincing evidence that reasoning in terms of knowledge can lead to ... about distributed computation, and we extend the standard...can be made precise in the context of computer science. In this thesis, we provide convincing evidence that reasoning in terms of knowledge can lead ...against different adversaries. We show how different adversaries lead to different definitions of probabilistic knowledge, and given a particular adversary

  13. Heterogeneous Distributed Computing for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy S.

    1998-01-01

    The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions to, efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM[1] system and, subsequent to detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has also recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.

  14. A large-scale computer facility for computational aerodynamics

    SciTech Connect

    Bailey, F.R.; Balhaus, W.F.

    1985-02-01

    The combination of computer system technology and numerical modeling have advanced to the point that computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. To provide for further advances in modeling of aerodynamic flow fields, NASA has initiated at the Ames Research Center the Numerical Aerodynamic Simulation (NAS) Program. The objective of the Program is to develop a leading-edge, large-scale computer facility, and make it available to NASA, DoD, other Government agencies, industry and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. The Program will establish an initial operational capability in 1986 and systematically enhance that capability by incorporating evolving improvements in state-of-the-art computer system technologies as required to maintain a leadership role. This paper briefly reviews the present and future requirements for computational aerodynamics and discusses the Numerical Aerodynamic Simulation Program objectives, computational goals, and implementation plans.

  15. Computational Science at the Argonne Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Romero, Nichols

    2014-03-01

    The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.

  16. Distributed GPU Computing in GIScience

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.

    2013-12-01

    Geoscientists strive to discover potential principles and patterns hidden inside ever-growing Big Data for scientific discoveries. To better achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira et al., 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges caused by the increasing amount of datasets from different domains, such as social media, earth observation, and environmental sensing (Li et al., 2013). Meanwhile, CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, with GPU-based technology matured in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to the traditional microprocessor, the modern GPU, as a compelling alternative microprocessor, has outstanding parallel processing capability with cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphical rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) for each single computer, computing resources of both GPU-based and CPU-based can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build up a virtual super computer to support CPU-based and GPU-based computing in a distributed computing environment; 3) GPUs, as a specific graphic targeted device, are used to greatly improve the rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies Reference: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. Visualization and Computer Graphics, IEEE

  17. BESIII production with distributed computing

    NASA Astrophysics Data System (ADS)

    Zhang, X. M.; Yan, T.; Zhao, X. H.; Ma, Z. T.; Yan, X. F.; Lin, T.; Deng, Z. Y.; Li, W. D.; Belov, S.; Pelevanyuk, I.; Zhemchugov, A.; Cai, H.

    2015-12-01

    Distributed computing is necessary nowadays for high energy physics experiments to organize heterogeneous computing resources all over the world to process enormous amounts of data. The BESIII experiment in China, has established its own distributed computing system, based on DIRAC, as a supplement to local clusters, collecting cluster, grid, desktop and cloud resources from collaborating member institutes around the world. The system consists of workload management and data management to deal with the BESIII Monte Carlo production workflow in a distributed environment. A dataset-based data transfer system has been developed to support data movements among sites. File and metadata management tools and a job submission frontend have been developed to provide a virtual layer for BESIII physicists to use distributed resources. Moreover, the paper shows the experience to cope with lack of grid experience and low manpower among the BESIII community.

  18. Oak Ridge Leadership Computing Facility Position Paper

    SciTech Connect

    Oral, H Sarp; Hill, Jason J; Thach, Kevin G; Podhorszki, Norbert; Klasky, Scott A; Rogers, James H; Shipman, Galen M

    2011-01-01

    This paper discusses the business, administration, reliability, and usability aspects of storage systems at the Oak Ridge Leadership Computing Facility (OLCF). The OLCF has developed key competencies in architecting and administration of large-scale Lustre deployments as well as HPSS archival systems. Additionally as these systems are architected, deployed, and expanded over time reliability and availability factors are a primary driver. This paper focuses on the implementation of the Spider parallel Lustre file system as well as the implementation of the HPSS archive at the OLCF.

  19. Sputnik: ad hoc distributed computation.

    PubMed

    Völkel, Gunnar; Lausser, Ludwig; Schmid, Florian; Kraus, Johann M; Kestler, Hans A

    2015-04-15

    In bioinformatic applications, computationally demanding algorithms are often parallelized to speed up computation. Nevertheless, setting up computational environments for distributed computation is often tedious. The aim of this project was a lightweight, ad hoc setup and fault-tolerant computation requiring only a Java runtime and no administrator rights, while utilizing all CPU cores most effectively. The Sputnik framework provides ad hoc distributed computation on the Java Virtual Machine which uses all supplied CPU cores fully. It provides a graphical user interface for deployment setup and a web user interface displaying the current status of computation jobs. Neither a permanent setup nor administrator privileges are required. We demonstrate the utility of our approach on feature selection of microarray data. The Sputnik framework is available on GitHub at http://github.com/sysbio-bioinf/sputnik under the Eclipse Public License. Contact: hkestler@fli-leibniz.de or hans.kestler@uni-ulm.de. Supplementary data are available at Bioinformatics online.

  20. Distributed computing at the SSCL

    SciTech Connect

    Cormell, L.; White, R.

    1993-05-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent; he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory. In addition, a brief review of the future directions of commercial products for distributed computing and management will be given.

  1. A distributed data base management facility for the CAD/CAM environment

    NASA Technical Reports Server (NTRS)

    Balza, R. M.; Beaudet, R. W.; Johnson, H. R.

    1984-01-01

    Current/PAD research in the area of distributed data base management considers facilities for supporting CAD/CAM data management in a heterogeneous network of computers encompassing multiple data base managers supporting a variety of data models. These facilities include coordinated execution of multiple DBMSs to provide for administration of and access to data distributed across them.

  3. Computing spatial information from Fourier coefficient distributions.

    PubMed

    Heinz, William F; Werbin, Jeffrey L; Lattman, Eaton; Hoh, Jan H

    2011-05-01

    The spatial relationships between molecules can be quantified in terms of information. In the case of membranes, the spatial organization of molecules in a bilayer is closely related to biophysically and biologically important properties. Here, we present an approach to computing spatial information based on Fourier coefficient distributions. The Fourier transform (FT) of an image contains a complete description of the image, and the values of the FT coefficients are uniquely associated with that image. For an image where the distribution of pixels is uncorrelated, the FT coefficients are normally distributed and uncorrelated. Further, the probability distribution for the FT coefficients of such an image can readily be obtained by Parseval's theorem. We take advantage of these properties to compute the spatial information in an image by determining the probability of each coefficient (both real and imaginary parts) in the FT, then using the Shannon formalism to calculate information. By using the probability distribution obtained from Parseval's theorem, an effective distance from the uncorrelated or most uncertain case is obtained. The resulting quantity is an information computed in k-space (kSI). This approach provides a robust, facile and highly flexible framework for quantifying spatial information in images and other types of data (of arbitrary dimensions). The kSI metric is tested on a 2D Ising model, frequently used as a model for lipid bilayer; and the temperature-dependent phase transition is accurately determined from the spatial information in configurations of the system.
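
    A hedged reading of that recipe in code: FFT the image, let Parseval's theorem fix the Gaussian null distribution expected for uncorrelated pixels, and measure how far the observed distribution of real and imaginary coefficients departs from that null in bits. The binning, the KL-style distance, and the normalization below are our assumptions for illustration; the paper's exact definition of kSI may differ:

      # Hedged sketch of the k-space information idea: the binning and KL-style
      # score are our assumptions, not necessarily the paper's exact definition.
      import numpy as np
      from math import erf, sqrt

      def k_space_information(image: np.ndarray, n_bins: int = 64) -> float:
          x = image - image.mean()
          F = np.fft.fft2(x)
          # Real and imaginary parts of all coefficients, excluding the DC term.
          coeffs = np.concatenate([F.real.ravel()[1:], F.imag.ravel()[1:]])

          # Parseval: sum|F|^2 = N * sum|x|^2, so for uncorrelated pixels each
          # real/imaginary part is approximately Normal(0, N * var(x) / 2).
          N = x.size
          sigma2 = N * x.var() / 2.0

          def cdf(v):
              return 0.5 * (1.0 + erf(v / sqrt(2.0 * sigma2)))

          # Observed (binned) distribution of coefficients.
          edges = np.linspace(coeffs.min(), coeffs.max(), n_bins + 1)
          p_obs, _ = np.histogram(coeffs, bins=edges)
          p_obs = p_obs / p_obs.sum()

          # Null (uncorrelated-image) probability of the same bins.
          p_null = np.array([cdf(edges[i + 1]) - cdf(edges[i]) for i in range(n_bins)])
          p_null = np.maximum(p_null, 1e-12)       # floor avoids log(0) in far tails
          p_null /= p_null.sum()

          mask = p_obs > 0
          # KL divergence in bits: near zero for an uncorrelated image, larger
          # when the coefficient distribution departs from the Gaussian null.
          return float(np.sum(p_obs[mask] * np.log2(p_obs[mask] / p_null[mask])))

      rng = np.random.default_rng(0)
      print(k_space_information(rng.standard_normal((64, 64))))    # small: uncorrelated
      print(k_space_information(np.add.outer(np.sin(np.arange(64) / 3.0), np.zeros(64))))  # structured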

  4. Particle Size Distribution in Aluminum Manufacturing Facilities

    PubMed Central

    Liu, Sa; Noth, Elizabeth M.; Dixon-Ernst, Christine; Eisen, Ellen A.; Cullen, Mark R.; Hammond, S. Katharine

    2015-01-01

    As part of exposure assessment for an ongoing epidemiologic study of heart disease and fine particle exposures in aluminum industry, area particle samples were collected in production facilities to assess instrument reliability and particle size distribution at different process areas. Personal modular impactors (PMI) and Minimicro-orifice uniform deposition impactors (MiniMOUDI) were used. The coefficient of variation (CV) of co-located samples was used to evaluate the reproducibility of the samplers. PM2.5 measured by PMI was compared to PM2.5 calculated from MiniMOUDI data. Mass median aerodynamic diameter (MMAD) and concentrations of sub-micrometer (PM1.0) and quasi-ultrafine (PM0.56) particles were evaluated to characterize particle size distribution. Most of CVs were less than 30%. The slope of the linear regression of PMI_PM2.5 versus MiniMOUDI_PM2.5 was 1.03 mg/m3 per mg/m3 (± 0.05), with correlation coefficient of 0.97 (± 0.01). Particle size distribution varied substantively in smelters, whereas it was less variable in fabrication units with significantly smaller MMADs (arithmetic mean of MMADs: 2.59 μm in smelters vs. 1.31 μm in fabrication units, p = 0.001). Although the total particle concentration was more than two times higher in the smelters than in the fabrication units, the fraction of PM10 which was PM1.0 or PM0.56 was significantly lower in the smelters than in the fabrication units (p < 0.001). Consequently, the concentrations of sub-micrometer and quasi-ultrafine particles were similar in these two types of facilities. It would appear, studies evaluating ultrafine particle exposure in aluminum industry should focus on not only the smelters, but also the fabrication facilities. PMID:26478760

  5. Particle Size Distribution in Aluminum Manufacturing Facilities.

    PubMed

    Liu, Sa; Noth, Elizabeth M; Dixon-Ernst, Christine; Eisen, Ellen A; Cullen, Mark R; Hammond, S Katharine

    2014-10-01

    As part of exposure assessment for an ongoing epidemiologic study of heart disease and fine particle exposures in aluminum industry, area particle samples were collected in production facilities to assess instrument reliability and particle size distribution at different process areas. Personal modular impactors (PMI) and Minimicro-orifice uniform deposition impactors (MiniMOUDI) were used. The coefficient of variation (CV) of co-located samples was used to evaluate the reproducibility of the samplers. PM2.5 measured by PMI was compared to PM2.5 calculated from MiniMOUDI data. Mass median aerodynamic diameter (MMAD) and concentrations of sub-micrometer (PM1.0) and quasi-ultrafine (PM0.56) particles were evaluated to characterize particle size distribution. Most of CVs were less than 30%. The slope of the linear regression of PMI_PM2.5 versus MiniMOUDI_PM2.5 was 1.03 mg/m(3) per mg/m(3) (± 0.05), with correlation coefficient of 0.97 (± 0.01). Particle size distribution varied substantively in smelters, whereas it was less variable in fabrication units with significantly smaller MMADs (arithmetic mean of MMADs: 2.59 μm in smelters vs. 1.31 μm in fabrication units, p = 0.001). Although the total particle concentration was more than two times higher in the smelters than in the fabrication units, the fraction of PM10 which was PM1.0 or PM0.56 was significantly lower in the smelters than in the fabrication units (p < 0.001). Consequently, the concentrations of sub-micrometer and quasi-ultrafine particles were similar in these two types of facilities. It would appear, studies evaluating ultrafine particle exposure in aluminum industry should focus on not only the smelters, but also the fabrication facilities.
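
    The two reproducibility checks described in these records, the coefficient of variation of co-located duplicates and the regression of PMI PM2.5 on MiniMOUDI PM2.5, reduce to a few lines of arithmetic. The sketch below uses made-up numbers solely to show the calculation; it does not reproduce the study's data:

      # Hedged numeric sketch of the reproducibility checks described above.
      # All measurement values are invented for illustration only.
      import numpy as np

      # Co-located duplicate pairs (mg/m^3), hypothetical.
      dup_a = np.array([0.52, 1.10, 0.75, 2.30])
      dup_b = np.array([0.57, 1.02, 0.81, 2.10])
      pair_means = (dup_a + dup_b) / 2
      pair_sds = np.abs(dup_a - dup_b) / np.sqrt(2)       # SD of a two-value pair
      cv_percent = 100 * pair_sds / pair_means
      print("CV per pair (%):", np.round(cv_percent, 1))  # reproducible if mostly < 30%

      # Regression of PMI_PM2.5 against MiniMOUDI_PM2.5 (hypothetical values).
      minimoudi = np.array([0.40, 0.90, 1.50, 2.20, 3.10])
      pmi = np.array([0.45, 0.92, 1.58, 2.25, 3.20])
      slope, intercept = np.polyfit(minimoudi, pmi, 1)
      r = np.corrcoef(minimoudi, pmi)[0, 1]
      print(f"slope={slope:.2f}, intercept={intercept:.2f}, r={r:.2f}")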

  6. Parallel distributed computing using Python

    NASA Astrophysics Data System (ADS)

    Dalcin, Lisandro D.; Paz, Rodrigo R.; Kler, Pablo A.; Cosimo, Alejandro

    2011-09-01

    This work presents two software components aimed to relieve the costs of accessing high-performance parallel computing resources within a Python programming environment: MPI for Python and PETSc for Python. MPI for Python is a general-purpose Python package that provides bindings for the Message Passing Interface (MPI) standard using any back-end MPI implementation. Its facilities allow parallel Python programs to easily exploit multiple processors using the message passing paradigm. PETSc for Python provides access to the Portable, Extensible Toolkit for Scientific Computation (PETSc) libraries. Its facilities allow sequential and parallel Python applications to exploit state of the art algorithms and data structures readily available in PETSc for the solution of large-scale problems in science and engineering. MPI for Python and PETSc for Python are fully integrated to PETSc-FEM, an MPI and PETSc based parallel, multiphysics, finite elements code developed at CIMEC laboratory. This software infrastructure supports research activities related to simulation of fluid flows with applications ranging from the design of microfluidic devices for biochemical analysis to modeling of large-scale stream/aquifer interactions.
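
    As a minimal usage sketch of the message-passing facilities described above (generic mpi4py calls, not code from the cited paper), the following shows a point-to-point send/recv of a Python object and a collective reduction of partial results:

      # Minimal mpi4py usage sketch: point-to-point messaging and a collective
      # reduction. Generic example, not taken from the cited paper.
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      size = comm.Get_size()

      # Point-to-point: rank 0 sends a Python dict to rank 1 (if it exists).
      if size > 1:
          if rank == 0:
              comm.send({"step": 1, "payload": [1.0, 2.0, 3.0]}, dest=1, tag=11)
          elif rank == 1:
              msg = comm.recv(source=0, tag=11)
              print("rank 1 received:", msg)

      # Collective: each rank computes a partial dot product; allreduce sums them.
      n = 1_000_000
      chunk = np.arange(rank, n, size, dtype=np.float64)  # disjoint slices of 0..n-1
      partial = float(np.dot(chunk, chunk))
      total = comm.allreduce(partial, op=MPI.SUM)
      if rank == 0:
          print("sum of squares 0..n-1 =", total)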

  7. Towards an Infrastructure for MLS Distributed Computing

    DTIC Science & Technology

    1998-01-01

    Distributed computing owes its success to the development of infrastructure, middleware, and standards (e.g., CORBA) by the computing industry. This...Government must protect national security information against unauthorized information flow. To support MLS distributed computing, an MLS infrastructure...protection of classified information and use both the emerging distributed computing and commercial security infrastructures. The resulting infrastructure

  8. Computational study of radiation doses at UNLV accelerator facility

    NASA Astrophysics Data System (ADS)

    Hodges, Matthew; Barzilov, Alexander; Chen, Yi-Tung; Lowe, Daniel

    2017-09-01

    A Varian K15 electron linear accelerator (linac) has been considered for installation at University of Nevada, Las Vegas (UNLV). Before experiments can be performed, it is necessary to evaluate the photon and neutron spectra as generated by the linac, as well as the resulting dose rates within the accelerator facility. A computational study using MCNPX was performed to characterize the source terms for the bremsstrahlung converter. The 15 MeV electron beam available in the linac is above the photoneutron threshold energy for several materials in the linac assembly, and as a result, neutrons must be accounted for. The angular and energy distributions for bremsstrahlung flux generated by the interaction of the 15 MeV electron beam with the linac target were determined. This source term was used in conjunction with the K15 collimators to determine the dose rates within the facility.

  9. Distributed Computing in Universities and Colleges.

    ERIC Educational Resources Information Center

    Sircar, Sumit

    1979-01-01

    Analyzes the implications of distributed computing in institutions of higher education. Discusses (1) the extent to which the quality of computing might be enhanced by adopting a distributed computing approach, (2) variations in distributed systems design and the cost of adoption, and (3) administration of distributed systems. (Author/CMV)

  10. Apollo experience report: Real-time auxiliary computing facility development

    NASA Technical Reports Server (NTRS)

    Allday, C. E.

    1972-01-01

    The Apollo real time auxiliary computing function and facility were an extension of the facility used during the Gemini Program. The facility was expanded to include support of all areas of flight control, and computer programs were developed for mission and mission-simulation support. The scope of the function was expanded to include prime mission support functions in addition to engineering evaluations, and the facility became a mandatory mission support facility. The facility functioned as a full scale mission support activity until after the first manned lunar landing mission. After the Apollo 11 mission, the function and facility gradually reverted to a nonmandatory, offline, on-call operation because the real time program flexibility was increased and verified sufficiently to eliminate the need for redundant computations. The evaluation of the facility and function and recommendations for future programs are discussed in this report.

  11. National Ignition Facility integrated computer control system

    NASA Astrophysics Data System (ADS)

    Van Arsdall, Paul J.; Bettenhausen, R. C.; Holloway, Frederick W.; Saroyan, R. A.; Woodruff, J. P.

    1999-07-01

    The NIF design team is developing the Integrated Computer Control System (ICCS), which is based on an object-oriented software framework applicable to event-driven control systems. The framework provides an open, extensible architecture that is sufficiently abstract to construct future mission-critical control systems. The ICCS will become operational when the first 8 out of 192 beams are activated in mid 2000. The ICCS consists of 300 front-end processors attached to 60,000 control points coordinated by a supervisory system. Computers running either Solaris or VxWorks are networked over a hybrid configuration of switched fast Ethernet and asynchronous transfer mode (ATM). ATM carries digital motion video from sensors to operator consoles. Supervisory software is constructed by extending the reusable framework components for each specific application. The framework incorporates services for database persistence, system configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. More than twenty collaborating software applications are derived from the common framework. The framework is interoperable among different kinds of computers and functions as a plug-in software bus by leveraging a common object request brokering architecture (CORBA). CORBA transparently distributes the software objects across the network. Because of the pivotal role played, CORBA was tested to ensure adequate performance.

  12. National Ignition Facility integrated computer control system

    SciTech Connect

    Van Arsdall, P.J., LLNL

    1998-06-01

    The NIF design team is developing the Integrated Computer Control System (ICCS), which is based on an object-oriented software framework applicable to event-driven control systems. The framework provides an open, extensible architecture that is sufficiently abstract to construct future mission-critical control systems. The ICCS will become operational when the first 8 out of 192 beams are activated in mid 2000. The ICCS consists of 300 front-end processors attached to 60,000 control points coordinated by a supervisory system. Computers running either Solaris or VxWorks are networked over a hybrid configuration of switched fast Ethernet and asynchronous transfer mode (ATM). ATM carries digital motion video from sensors to operator consoles. Supervisory software is constructed by extending the reusable framework components for each specific application. The framework incorporates services for database persistence, system configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. More than twenty collaborating software applications are derived from the common framework. The framework is interoperable among different kinds of computers and functions as a plug-in software bus by leveraging a common object request brokering architecture (CORBA). CORBA transparently distributes the software objects across the network. Because of the pivotal role played, CORBA was tested to ensure adequate performance.

  13. Overlapping clusters for distributed computation.

    SciTech Connect

    Mirrokni, Vahab; Andersen, Reid; Gleich, David F.

    2010-11-01

    Scalable, distributed algorithms must address communication problems. We investigate overlapping clusters, or vertex partitions that intersect, for graph computations. This setup stores more of the graph than required but then affords the ease of implementation of vertex partitioned algorithms. Our hope is that this technique allows us to reduce communication in a computation on a distributed graph. The motivation above draws on recent work in communication avoiding algorithms. Mohiyuddin et al. (SC09) design a matrix-powers kernel that gives rise to an overlapping partition. Fritzsche et al. (CSC2009) develop an overlapping clustering for a Schwarz method. Both techniques extend an initial partitioning with overlap. Our procedure generates overlap directly. Indeed, Schwarz methods are commonly used to capitalize on overlap. Elsewhere, overlapping communities (Ahn et al, Nature 2009; Mishra et al. WAW2007) are now a popular model of structure in social networks. These have long been studied in statistics (Cole and Wishart, CompJ 1970). We present two types of results: (i) an estimated swapping probability ρ∞; and (ii) the communication volume of a parallel PageRank solution (link-following α = 0.85) using an additive Schwarz method. The volume ratio is the amount of extra storage for the overlap (2 means we store the graph twice). Below, as the ratio increases, the swapping probability and PageRank communication volume decrease.
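
    For reference, the computation being distributed is ordinary PageRank with link-following probability α = 0.85. The sketch below is a plain single-machine power iteration on a toy graph, not the overlapping-cluster additive Schwarz solver of the abstract:

      # Baseline sketch: power-iteration PageRank with alpha = 0.85. This is the
      # computation referenced above, not the overlapping-cluster Schwarz solver.
      import numpy as np

      def pagerank(adj: np.ndarray, alpha: float = 0.85, tol: float = 1e-10) -> np.ndarray:
          n = adj.shape[0]
          out_deg = adj.sum(axis=1)
          # Column-stochastic transition matrix; dangling nodes jump uniformly.
          P = np.where(out_deg[:, None] > 0,
                       adj / np.maximum(out_deg[:, None], 1), 1.0 / n).T
          x = np.full(n, 1.0 / n)
          while True:
              x_new = alpha * P @ x + (1.0 - alpha) / n    # follow links or teleport
              if np.abs(x_new - x).sum() < tol:
                  return x_new
              x = x_new

      # Tiny 4-node example graph (rows are out-links).
      A = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 0],
                    [1, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
      print(pagerank(A))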

  14. Computer Profile of School Facilities Energy Consumption.

    ERIC Educational Resources Information Center

    Oswalt, Felix E.

    This document outlines a computerized management tool designed to enable building managers to identify energy consumption as related to types and uses of school facilities for the purpose of evaluating and managing the operation, maintenance, modification, and planning of new facilities. Specifically, it is expected that the statistics generated…

  15. Cooperative Fault Tolerant Distributed Computing

    SciTech Connect

    Fagg, Graham E.

    2006-03-15

    HARNESS was proposed as a system that combined the best of emerging technologies found in current distributed computing research and commercial products into a very flexible, dynamically adaptable framework that could be used by applications to allow them to evolve and better handle their execution environment. The HARNESS system was designed using the considerable experience from previous projects such as PVM, MPI, IceT and Cumulvs. As such, the system was designed to avoid the common problems found with using these current systems, providing no single point of failure and the ability to survive machine, node and software failures. Additional features included improved inter-component connectivity, with full support for dynamic downloading of additional components at run-time, thus reducing the stress on application developers to build in all the libraries they need in advance.

  16. Pair distribution function computed tomography.

    PubMed

    Jacques, Simon D M; Di Michiel, Marco; Kimber, Simon A J; Yang, Xiaohao; Cernik, Robert J; Beale, Andrew M; Billinge, Simon J L

    2013-01-01

    An emerging theme of modern composites and devices is the coupling of nanostructural properties of materials with their targeted arrangement at the microscale. Of the imaging techniques developed that provide insight into such designer materials and devices, those based on diffraction are particularly useful. However, to date, these have been heavily restrictive, providing information only on materials that exhibit high crystallographic ordering. Here we describe a method that uses a combination of X-ray atomic pair distribution function analysis and computed tomography to overcome this limitation. It allows the structure of nanocrystalline and amorphous materials to be identified, quantified and mapped. We demonstrate the method with a phantom object and subsequently apply it to resolving, in situ, the physicochemical states of a heterogeneous catalyst system. The method may have potential impact across a range of disciplines from materials science, biomaterials, geology, environmental science, palaeontology and cultural heritage to health.

  17. A comparison of queueing, cluster and distributed computing systems

    NASA Technical Reports Server (NTRS)

    Kaplan, Joseph A.; Nelson, Michael L.

    1993-01-01

    Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost-effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster. Cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing Systems (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.

  18. Performance of the ISIS Distributed Computing Toolkit

    DTIC Science & Technology

    1994-06-22

    Performance of the ISIS Distributed Computing Toolkit. Kenneth P. Birman...isis.com. Please cite as Technical Report TR-94-1432, Dept. of Computer Science, Cornell University. Keywords: distributed computing, performance, process groups, atomic broadcast, causal and total message ordering, cbcast, abcast, multiple process groups

  19. Deadlock Detection in Distributed Computing Systems.

    DTIC Science & Technology

    1982-06-01

    With the advent of distributed computing systems, the problem of deadlock, which has been essentially solved for centralized computing systems, has reappeared. Existing centralized deadlock detection techniques are either too expensive or they do not work correctly in distributed computing systems. Additionally, although fault-tolerance is usually listed as an advantage of distributed computing systems, little has been done to analyze it.

  20. The Feasibility of Replacing Existing Central Computers with A Single Computer Facility.

    ERIC Educational Resources Information Center

    Richey, R. Wayne

    1979-01-01

    The feasibility of replacing existing central processing units with a single computer facility is discussed. An analysis of the computing facilities for the Iowa state universities is presented and supports the retention of decentralized facilities. Efficiency, costs, and operating considerations are examined. (SF)

  1. VLSI Design, Parallel Computation and Distributed Computing

    DTIC Science & Technology

    1991-09-30

    Significant progress has been made on the development of efficient sorting circuits, network management protocols for high-speed networks, and distributed graph algorithms.

  2. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    SciTech Connect

    Baker, Ann E; Barker, Ashley D; Bland, Arthur S Buddy; Boudwin, Kathlyn J.; Hack, James J; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; Wells, Jack C; White, Julia C; Hudson, Douglas L

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these, the 300 that are consistent with the guidance provided are reported in this review. Scientific achievements by OLCF users span all length scales, from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to perform billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation billions of gallons of

  3. On Deadlock Detection in Distributed Computing Systems.

    DTIC Science & Technology

    1983-04-01

    With the advent of distributed computing systems, the problem of deadlock, which has been essentially solved for centralized computing systems, has reappeared. Existing centralized deadlock detection techniques are either too expensive or they do not work correctly in distributed computing systems.

  4. A Different Look at Secure Distributed Computation

    DTIC Science & Technology

    1997-06-01

    Still, the worst-case view dominates the secure computing literature in general and the secure distributed computing literature in particular. The model suggested here represents distributed computing as two or more interwoven networks of competing nodes.

  5. Computer Graphics Simulations of Sampling Distributions.

    ERIC Educational Resources Information Center

    Gordon, Florence S.; Gordon, Sheldon P.

    1989-01-01

    Describes the use of computer graphics simulations to enhance student understanding of sampling distributions that arise in introductory statistics. Highlights include the distribution of sample proportions, the distribution of the difference of sample means, the distribution of the difference of sample proportions, and the distribution of sample…

  6. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    SciTech Connect

    Barker, Ashley D.; Bernholdt, David E.; Bland, Arthur S.; Gary, Jeff D.; Hack, James J.; McNally, Stephen T.; Rogers, James H.; Smith, Brian E.; Straatsma, T. P.; Sukumar, Sreenivas Rangan; Thach, Kevin G.; Tichenor, Suzy; Vazhkudai, Sudharshan S.; Wells, Jack C.

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern

  7. Spatial Distribution Characteristics of Healthcare Facilities in Nanjing: Network Point Pattern Analysis and Correlation Analysis.

    PubMed

    Ni, Jianhua; Qian, Tianlu; Xi, Changbai; Rui, Yikang; Wang, Jiechen

    2016-08-18

    The spatial distribution of urban service facilities is largely constrained by the road network. In this study, network point pattern analysis and correlation analysis were used to analyze the relationship between road network and healthcare facility distribution. The weighted network kernel density estimation method proposed in this study identifies significant differences between the outside and inside areas of the Ming city wall. The results of network K-function analysis show that private hospitals are more evenly distributed than public hospitals, and pharmacy stores tend to cluster around hospitals along the road network. After computing the correlation analysis between different categorized hospitals and street centrality, we find that the distribution of these hospitals correlates highly with the street centralities, and that the correlations are higher with private and small hospitals than with public and large hospitals. The comprehensive analysis results could help examine the reasonability of existing urban healthcare facility distribution and optimize the location of new healthcare facilities.
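
    As a rough planar analogue of the network-constrained workflow above (the study itself estimates densities along the road network), the sketch below computes a kernel density for facility coordinates and correlates per-zone facility counts with a street-centrality value; all data are synthetic and the planar simplification is an assumption of the sketch.

        # Planar simplification of the workflow: kernel density of facility points plus a
        # correlation between facility counts and street centrality per zone (synthetic data).
        import numpy as np
        from scipy.stats import gaussian_kde, pearsonr

        rng = np.random.default_rng(0)
        facilities = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(200, 2))   # fake hospital x, y
        density = gaussian_kde(facilities.T)                                 # planar KDE
        print("density at origin:", density([[0.0], [0.0]])[0])

        # Hypothetical per-zone summaries: facility count vs. mean street centrality.
        zone_counts = rng.poisson(lam=5, size=50)
        zone_centrality = 0.1 * zone_counts + rng.normal(scale=0.3, size=50)
        r_value, p_value = pearsonr(zone_counts, zone_centrality)
        print(f"correlation r={r_value:.2f}, p={p_value:.3f}")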

  8. Spatial Distribution Characteristics of Healthcare Facilities in Nanjing: Network Point Pattern Analysis and Correlation Analysis

    PubMed Central

    Ni, Jianhua; Qian, Tianlu; Xi, Changbai; Rui, Yikang; Wang, Jiechen

    2016-01-01

    The spatial distribution of urban service facilities is largely constrained by the road network. In this study, network point pattern analysis and correlation analysis were used to analyze the relationship between road network and healthcare facility distribution. The weighted network kernel density estimation method proposed in this study identifies significant differences between the outside and inside areas of the Ming city wall. The results of network K-function analysis show that private hospitals are more evenly distributed than public hospitals, and pharmacy stores tend to cluster around hospitals along the road network. After computing the correlation analysis between different categorized hospitals and street centrality, we find that the distribution of these hospitals correlates highly with the street centralities, and that the correlations are higher with private and small hospitals than with public and large hospitals. The comprehensive analysis results could help examine the reasonability of existing urban healthcare facility distribution and optimize the location of new healthcare facilities. PMID:27548197

  9. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    SciTech Connect

    Bland, Arthur S Buddy; Hack, James J; Baker, Ann E; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; White, Julia C

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next

  10. National remote computational flight research facility

    NASA Technical Reports Server (NTRS)

    Rediess, Herman A.

    1989-01-01

    The extension of the NASA Ames-Dryden remotely augmented vehicle (RAV) facility to accommodate flight testing of a hypersonic aircraft utilizing the continental United States as a test range is investigated. The development and demonstration of an automated flight test management system (ATMS) that uses expert system technology for flight test planning, scheduling, and execution is documented.

  11. Solving the Quadratic Capacitated Facilities Location Problem by Computer.

    ERIC Educational Resources Information Center

    Cote, Leon C.; Smith, Wayland P.

    Several computer programs were developed to solve various versions of the quadratic capacitated facilities location problem. Matrices, which represent various business costs, are defined for the factors of sites, facilities, customers, commodities, and production units. The objective of the program is to find an optimization matrix for the lowest…

  12. Parallel and Distributed Computing Combinatorial Algorithms

    DTIC Science & Technology

    1993-10-01

    Research addressed several problems involving parallel and distributed computing and combinatorial optimization, including network decomposition; the results are reported in numerous papers, among them work presented at the Eleventh Annual ACM Symposium on Principles of Distributed Computing, August 1992.

  13. Modular Programming Techniques for Distributed Computing Tasks

    DTIC Science & Technology

    2004-08-01

    Modular Programming Techniques for Distributed Computing Tasks. Anthony Cowley, Hwa-Chow Hsu, Camillo J. Taylor, GRASP Laboratory. Keywords: sensor network, distributed computing, software design. From the introduction: as efforts to field sensor networks, or teams of mobile robots, become more ...

  14. Distributed Computing Environment for Mine Warfare Command

    DTIC Science & Technology

    1993-06-01

    Thesis, Naval Postgraduate School, Monterey, California, 1993. Title: Distributed Computing Environment for Mine Warfare Command. Topics covered include distributed computing, standards for open systems, the OSI model, and the DOD model.

  15. Distribution of Corbicula fluminea at nuclear facilities

    SciTech Connect

    Counts, C.L. III

    1985-11-01

    A review of the zoogeographic records for the exotic Asian clam, Corbicula fluminea (Muller, 1774), reveals its presence in 27 states where nuclear powered electric generating plants are either operating or under construction. Nineteen plant sites reported infestation of varying severity in facilities, or source water bodies immediately adjacent to the facility, by C. fluminea. Thirteen plant sites are located within the zoogeographic limits of C. fluminea but have a low risk of infestation due to either salt water cooling systems or locations a great distance from known populations. Eighteen plant sites are located wholly outside of the known zoogeographic range of C. fluminea. Thirty plant sites are located in close proximity to known populations of C. fluminea and therefore should maintain surveillance of the source water body and within plant water systems for possible infestations by these bivalves. 27 figs.

  16. Review of Test Facilities for Distributed Energy Resources

    SciTech Connect

    AKHIL,ABBAS ALI; MARNAY,CHRIS; KIPMAN,TIMOTHY

    2003-05-01

    Since initiating research on integration of distributed energy resources (DER) in 1999, the Consortium for Electric Reliability Technology Solutions (CERTS) has been actively assessing and reviewing existing DER test facilities for possible demonstrations of advanced DER system integration concepts. This report is a compendium of information collected by the CERTS team on DER test facilities during this period.

  17. Distributed Sensor Systems and Electromechanical Analog Facility

    DTIC Science & Technology

    1980-01-01

    Excerpts: Experiment 3 objective: to study the effect of two simple controllers (PID among them) in controlling a two-phase servo motor system. The laboratory equipment includes floppy disk controllers, A-to-D and D-to-A converter controllers, and a computer-controlled train system; data are transmitted to the data register by the computer.

  18. Status of the National Ignition Facility Integrated Computer Control System

    SciTech Connect

    Lagin, L; Bryant, R; Carey, R; Casavant, D; Edwards, O; Ferguson, W; Krammen, J; Larson, D; Lee, A; Ludwigsen, P; Miller, M; Moses, E; Nyholm, R; Reed, R; Shelton, R; Van Arsdall, P J; Wuest, C

    2003-10-13

    The National Ignition Facility (NIF), currently under construction at the Lawrence Livermore National Laboratory, is a stadium-sized facility containing a 192-beam, 1.8-Megajoule, 500-Terawatt, ultraviolet laser system together with a 10-meter diameter target chamber with room for nearly 100 experimental diagnostics. When completed, NIF will be the world's largest and most energetic laser experimental system, providing an international center to study inertial confinement fusion and the physics of matter at extreme energy densities and pressures. NIF's 192 energetic laser beams will compress fusion targets to conditions required for thermonuclear burn, liberating more energy than required to initiate the fusion reactions. Laser hardware is modularized into line replaceable units such as deformable mirrors, amplifiers, and multi-function sensor packages that are operated by the Integrated Computer Control System (ICCS). ICCS is a layered architecture of 300 front-end processors attached to nearly 60,000 control points and coordinated by supervisor subsystems in the main control room. The functional subsystems--beam control including automatic beam alignment and wavefront correction, laser pulse generation and pre-amplification, diagnostics, pulse power, and timing--implement automated shot control, archive data, and support the actions of fourteen operators at graphic consoles. Object-oriented software development uses a mixed language environment of Ada (for functional controls) and Java (for user interface and database backend). The ICCS distributed software framework uses CORBA to communicate between languages and processors. ICCS software is approximately 3/4 complete with over 750 thousand source lines of code having undergone off-line verification tests and deployed to the facility. NIF has entered the first phases of its laser commissioning program. NIF has now demonstrated the highest energy 1{omega}, 2{omega}, and 3{omega} beamlines in the world. NIF

  19. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, work stations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there was a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the workload on the central resources increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, work stations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  20. Distributed Computing at Belle II

    NASA Astrophysics Data System (ADS)

    Bansal, Vikas; Belle II Collaboration

    2016-03-01

    The Belle II experiment at the SuperKEKB collider in Tsukuba, Japan, will start physics data taking in 2018 and will accumulate 50 ab-1 of e+e- collision data, about 50 times larger than the data set of the earlier Belle experiment. The computing requirements of Belle II are comparable to those of a RUN I high-pT LHC experiment. Computing will make full use of high speed networking and of the Computing Grids in North America, Asia and Europe. Results of an initial MC simulation campaign with 5 ab-1 equivalent luminosity will be described.

  1. Configuration and Management of a Cluster Computing Facility in Undergraduate Student Computer Laboratories

    ERIC Educational Resources Information Center

    Cornforth, David; Atkinson, John; Spennemann, Dirk H. R.

    2006-01-01

    Purpose: Many researchers require access to computer facilities beyond those offered by desktop workstations. Traditionally, these are offered either through partnerships, to share the cost of supercomputing facilities, or through purpose-built cluster facilities. However, funds are not always available to satisfy either of these options, and…

  2. Configuration and Management of a Cluster Computing Facility in Undergraduate Student Computer Laboratories

    ERIC Educational Resources Information Center

    Cornforth, David; Atkinson, John; Spennemann, Dirk H. R.

    2006-01-01

    Purpose: Many researchers require access to computer facilities beyond those offered by desktop workstations. Traditionally, these are offered either through partnerships, to share the cost of supercomputing facilities, or through purpose-built cluster facilities. However, funds are not always available to satisfy either of these options, and…

  3. Biomedical computing facility interface design plan

    NASA Technical Reports Server (NTRS)

    Puckett, R. D.

    1971-01-01

    The results of a design study performed to establish overall system interface requirements for the Biomedical Laboratories Division's Sigma-3 computer system are presented. Emphasis has been placed upon the definition of an overall implementation plan and associated schedule to meet both near-term and long-range requirements within the constraints of available resources.

  4. Distributed computing and nuclear reactor analysis

    SciTech Connect

    Brown, F.B.; Derstine, K.L.; Blomquist, R.N.

    1994-03-01

    Large-scale scientific and engineering calculations for nuclear reactor analysis can now be carried out effectively in a distributed computing environment, at costs far lower than for traditional mainframes. The distributed computing environment must include support for traditional system services, such as a queuing system for batch work, reliable filesystem backups, and parallel processing capabilities for large jobs. All ANL computer codes for reactor analysis have been adapted successfully to a distributed system based on workstations and X-terminals. Distributed parallel processing has been demonstrated to be effective for long-running Monte Carlo calculations.
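
    The long-running Monte Carlo workloads mentioned above distribute naturally: independent particle histories run on separate workers with distinct random streams and the tallies are combined at the end. The generic Python sketch below illustrates that pattern with a toy estimator; it is not one of the ANL reactor codes.

        # Generic sketch of distributing independent Monte Carlo histories across worker
        # processes and combining the tallies; the estimator is a toy, not a reactor code.
        import math
        import random
        from multiprocessing import Pool

        def run_histories(args):
            seed, n = args
            rng = random.Random(seed)        # distinct random stream per worker
            return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))

        if __name__ == "__main__":
            n_workers, per_worker = 4, 250_000
            with Pool(n_workers) as pool:
                tallies = pool.map(run_histories, [(s, per_worker) for s in range(n_workers)])
            estimate = 4.0 * sum(tallies) / (n_workers * per_worker)
            print(f"combined estimate: {estimate:.4f} (pi = {math.pi:.4f})")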

  5. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    SciTech Connect

    Baker, Ann E; Bland, Arthur S Buddy; Hack, James J; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; Wells, Jack C; White, Julia C

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and where

  6. Survey of computer codes applicable to waste facility performance evaluations

    SciTech Connect

    Alsharif, M.; Pung, D.L.; Rivera, A.L.; Dole, L.R.

    1988-01-01

    This study is an effort to review existing information that is useful to develop an integrated model for predicting the performance of a radioactive waste facility. A summary description of 162 computer codes is given. The identified computer programs address the performance of waste packages, waste transport and equilibrium geochemistry, hydrological processes in unsaturated and saturated zones, and general waste facility performance assessment. Some programs also deal with thermal analysis, structural analysis, and special purposes. A number of these computer programs are being used by the US Department of Energy, the US Nuclear Regulatory Commission, and their contractors to analyze various aspects of waste package performance. Fifty-five of these codes were identified as being potentially useful on the analysis of low-level radioactive waste facilities located above the water table. The code summaries include authors, identification data, model types, and pertinent references. 14 refs., 5 tabs.

  7. Next Generation Distributed Computing for Cancer Research

    PubMed Central

    Agarwal, Pankaj; Owzar, Kouros

    2014-01-01

    Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing. PMID:25983539
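
    Hadoop's MapReduce model can be driven from Python through Hadoop Streaming, which pipes records through a mapper and a reducer on standard input and output. The toy job below counts aligned reads per chromosome; the tab-separated input layout (read_id, chromosome, position) is an assumption made for illustration.

        # Toy Hadoop Streaming job: count aligned reads per chromosome.
        # Run the script as "script.py map" for the mapper and "script.py reduce" for the reducer.
        import sys

        def mapper():
            for line in sys.stdin:
                fields = line.rstrip("\n").split("\t")
                if len(fields) >= 2:
                    print(f"{fields[1]}\t1")            # key = chromosome, value = 1

        def reducer():
            current, total = None, 0
            for line in sys.stdin:
                key, value = line.rstrip("\n").split("\t")
                if key != current:
                    if current is not None:
                        print(f"{current}\t{total}")
                    current, total = key, 0
                total += int(value)
            if current is not None:
                print(f"{current}\t{total}")

        if __name__ == "__main__":
            mapper() if sys.argv[1:] == ["map"] else reducer()

    On a cluster the same script would be passed twice to the hadoop-streaming jar via its -mapper and -reducer options; locally the pipeline can be approximated with a shell sort between the two stages.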

  8. Next generation distributed computing for cancer research.

    PubMed

    Agarwal, Pankaj; Owzar, Kouros

    2014-01-01

    Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing.

  9. Task allocation in a distributed computing system

    NASA Technical Reports Server (NTRS)

    Seward, Walter D.

    1987-01-01

    A conceptual framework is examined for task allocation in distributed systems. Application and computing system parameters critical to task allocation decision processes are discussed. Task allocation techniques are addressed which focus on achieving a balance in the load distribution among the system's processors. Equalization of computing load among the processing elements is the goal. Examples of system performance are presented for specific applications. Both static and dynamic allocation of tasks are considered and system performance is evaluated using different task allocation methodologies.
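
    A common static baseline for the load-balancing goal described above is the greedy longest-processing-time heuristic: tasks are assigned largest-first to the currently least-loaded processor. A short sketch, with task costs as arbitrary example numbers, follows.

        # Greedy longest-processing-time (LPT) allocation: assign each task, largest first,
        # to the processor with the smallest current load.
        import heapq

        def lpt_allocate(task_costs, n_processors):
            heap = [(0.0, p) for p in range(n_processors)]      # (current load, processor id)
            heapq.heapify(heap)
            assignment = {p: [] for p in range(n_processors)}
            for cost in sorted(task_costs, reverse=True):
                load, p = heapq.heappop(heap)
                assignment[p].append(cost)
                heapq.heappush(heap, (load + cost, p))
            return assignment

        if __name__ == "__main__":
            print(lpt_allocate([7, 5, 4, 3, 3, 2, 2, 1], n_processors=3))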

  10. Decentralized Resource Management in Distributed Computer Systems.

    DTIC Science & Technology

    1982-02-01

    The Archons project is performing research in the science and engineering of what we term distributed computers. The report's contents include a classification of synchronization techniques (access synchronization, coordinating synchronization, and meta-synchronization), access synchronization techniques in shared-memory computer systems, and concepts and issues in distributed systems.

  11. Evaluation of distributed computing tools

    SciTech Connect

    Stanberry, L.

    1992-10-28

    The original goal stated in the collaboration agreement, from LCC's perspective, was "to show that networking tools available in UNICOS perform well enough to meet the requirements of LCC customers." This translated into evaluating how easy it was to port ELROS over CRI's ISO 2.0, which itself is a port of ISODE to the Cray. In addition, we tested the interoperability of ELROS and ISO 2.0 programs running on the Cray and communicating with each other, and with servers or clients running on other machines. To achieve these goals from LCC's side, we ported ELROS to the Cray and also obtained and installed a copy of the ISO 2.0 distribution from CRI. CRI's goal for the collaboration was to evaluate the usability of ELROS. In particular, we were interested in their potential feedback on the use of ELROS in implementing ISO protocols--whether ELROS would be easier to use and perform better than other tools that form part of the standard ISODE system. To help achieve these goals for CRI, we provided them with a distribution tar file containing the ELROS system, once we had completed our port of ELROS to the Cray.

  12. Evaluation of distributed computing tools

    SciTech Connect

    Stanberry, L.

    1992-10-28

    The original goal stated in the collaboration agreement, from LCC's perspective, was "to show that networking tools available in UNICOS perform well enough to meet the requirements of LCC customers." This translated into evaluating how easy it was to port ELROS over CRI's ISO 2.0, which itself is a port of ISODE to the Cray. In addition, we tested the interoperability of ELROS and ISO 2.0 programs running on the Cray and communicating with each other, and with servers or clients running on other machines. To achieve these goals from LCC's side, we ported ELROS to the Cray and also obtained and installed a copy of the ISO 2.0 distribution from CRI. CRI's goal for the collaboration was to evaluate the usability of ELROS. In particular, we were interested in their potential feedback on the use of ELROS in implementing ISO protocols--whether ELROS would be easier to use and perform better than other tools that form part of the standard ISODE system. To help achieve these goals for CRI, we provided them with a distribution tar file containing the ELROS system, once we had completed our port of ELROS to the Cray.

  13. Distributed computing testbed for a remote experimental environment

    SciTech Connect

    Butner, D.N.; Casper, T.A.; Howard, B.C.; Henline, P.A.; Davis, S.L.; Barnes, D.; Greenwood, D.E.

    1995-09-18

    Collaboration is increasing as physics research becomes concentrated on a few large, expensive facilities, particularly in magnetic fusion energy research, with national and international participation. These facilities are designed for steady state operation and interactive, real-time experimentation. We are developing tools to provide for the establishment of geographically distant centers for interactive operations; such centers would allow scientists to participate in experiments from their home institutions. A testbed is being developed for a Remote Experimental Environment (REE), a "Collaboratory." The testbed will be used to evaluate the ability of a remotely located group of scientists to conduct research on the DIII-D Tokamak at General Atomics. The REE will serve as a testing environment for advanced control and collaboration concepts applicable to future experiments. Process-to-process communications over high speed wide area networks provide real-time synchronization and exchange of data among multiple computer networks, while the ability to conduct research is enhanced by adding audio/video communication capabilities. The Open Software Foundation's Distributed Computing Environment is being used to test concepts in distributed control, security, naming, remote procedure calls and distributed file access using the Distributed File Services. We are exploring the technology and sociology of remotely participating in the operation of a large scale experimental facility.

  14. Expanding the Scope of High-Performance Computing Facilities

    SciTech Connect

    Uram, Thomas D.; Papka, Michael E.

    2016-05-01

    The high-performance computing centers of the future will expand their roles as service providers, and as the machines scale up, so should the sizes of the communities they serve. National facilities must cultivate their users as much as they focus on operating machines reliably. The authors present five interrelated topic areas that are essential to expanding the value provided to those performing computational science.

  15. A Software Rejuvenation Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chau, Savio

    2009-01-01

    A performability-oriented conceptual framework for software rejuvenation has been constructed as a means of increasing levels of reliability and performance in distributed stateful computing. As used here, performability-oriented signifies that the construction of the framework is guided by the concept of analyzing the ability of a given computing system to deliver services with gracefully degradable performance. The framework is especially intended to support applications that involve stateful replicas of server computers.
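
    Software rejuvenation in this sense is usually realized by proactively restarting a server replica on a schedule while its state survives elsewhere, so service degrades gracefully rather than failing. The supervisor sketch below illustrates only that general idea, with a hypothetical checkpoint file standing in for replicated state; it is not the framework itself.

        # Sketch of proactive rejuvenation: periodically restart a worker process,
        # restoring its state from a checkpoint so the service continues.
        import json
        import os
        import time
        from multiprocessing import Process

        CHECKPOINT = "state.json"            # hypothetical stand-in for replicated state

        def worker():
            state = {"count": 0}
            if os.path.exists(CHECKPOINT):
                try:
                    with open(CHECKPOINT) as f:
                        state = json.load(f)
                except ValueError:           # tolerate a checkpoint cut off mid-write
                    pass
            while True:
                state["count"] += 1          # stand-in for real service work
                with open(CHECKPOINT, "w") as f:
                    json.dump(state, f)
                time.sleep(0.1)

        if __name__ == "__main__":
            for epoch in range(3):           # three rejuvenation cycles
                p = Process(target=worker)
                p.start()
                time.sleep(2)                # rejuvenation interval
                p.terminate()                # proactive restart
                p.join()
                print(f"epoch {epoch}: worker rejuvenated")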

  16. GRIMD: distributed computing for chemists and biologists

    PubMed Central

    Piotto, Stefano; Biasi, Luigi Di; Concilio, Simona; Castiglione, Aniello; Cattaneo, Giuseppe

    2014-01-01

    Motivation: Biologists and chemists are facing problems of high computational complexity that require the use of several computers organized in clusters or in specialized grids. Examples of such problems can be found in molecular dynamics (MD), in silico screening, and genome analysis. Grid computing and cloud computing are becoming prevalent mainly because of their competitive performance/cost ratio. Regrettably, the diffusion of grid computing is strongly limited by two main factors: it is confined to scientists with a strong computer science background, and the analysis of the large amount of data produced can be cumbersome. We have developed a package named GRIMD to provide an easy and flexible implementation of distributed computing for the bioinformatics community. GRIMD is very easy to install and maintain, and it does not require any specific computer science skill. Moreover, it permits preliminary analysis on the distributed machines to reduce the amount of data to transfer. GRIMD is very flexible because it shields the typical computational biologist from the need to write specific code for tasks such as molecular dynamics or docking calculations. Furthermore, it permits efficient use of GPU cards whenever possible. GRIMD calculations scale almost linearly and therefore permit efficient exploitation of each machine in the network. Here, we provide a few examples of grid computing in computational biology (MD and docking) and bioinformatics (proteome analysis). Availability: GRIMD is available for free for noncommercial research at www.yadamp.unisa.it/grimd. Supplementary information: www.yadamp.unisa.it/grimd/howto.aspx. PMID:24516326

  17. Object-oriented Tools for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1993-01-01

    Distributed computing systems are proliferating, owing to the availability of powerful, affordable microcomputers and inexpensive communication networks. A critical problem in developing such systems is getting application programs to interact with one another across a computer network. Remote interprogram connectivity is particularly challenging across heterogeneous environments, where applications run on different kinds of computers and operating systems. NetWorks! (trademark) is an innovative software product that provides an object-oriented messaging solution to these problems. This paper describes the design and functionality of NetWorks! and illustrates how it is being used to build complex distributed applications for NASA and in the commercial sector.

  18. Distribution analysis of airborne nicotine concentrations in hospitality facilities.

    PubMed

    Schorp, Matthias K; Leyden, Donald E

    2002-02-01

    A number of publications report statistical summaries for environmental tobacco smoke (ETS) concentrations. Despite compelling evidence for the data not being normally distributed, these publications typically report the arithmetic mean and standard deviation of the data, thereby losing important information related to the distribution of values contained in the original data. We were interested in the frequency distributions of reported nicotine concentrations in hospitality environments and subjected available data to distribution analyses. The distribution of experimental indoor airborne nicotine concentration data taken from hospitality facilities worldwide was fit to lognormal, Weibull, exponential, Pearson (Type V), logistic, and loglogistic distribution models. Comparison of goodness of fit (GOF) parameters and indications from the literature verified the selection of a lognormal distribution as the overall best model. When individual data were not reported in the literature, statistical summaries of results were used to model sets of lognormally distributed data that are intended to mimic the original data distribution. Grouping the data into various categories led to 31 frequency distributions that were further interpreted. The median values in nonsmoking environments are about half of the median values in smoking sections. When different continents are compared, Asian, European, and North American median values in restaurants are about a factor of three below levels encountered in other hospitality facilities. On a comparison of nicotine concentrations in North American smoking sections and nonsmoking sections, median values are about one-third of the European levels. The results obtained may be used to address issues related to exposure to ETS in the hospitality sector.
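
    The model-fitting step described above, fitting several candidate distributions and comparing goodness of fit, can be reproduced in outline with scipy; the concentrations below are synthetic stand-ins for the study's measurements, and the Kolmogorov-Smirnov statistic is used as a simple GOF measure.

        # Fit candidate distributions to concentration data and compare goodness of fit
        # with the Kolmogorov-Smirnov statistic (synthetic data, not the study's).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        conc = rng.lognormal(mean=0.5, sigma=1.0, size=300)     # fake concentration values

        candidates = {"lognormal": stats.lognorm,
                      "Weibull": stats.weibull_min,
                      "exponential": stats.expon}
        for name, dist in candidates.items():
            params = dist.fit(conc, floc=0)                     # fix the location at zero
            ks_stat, p_val = stats.kstest(conc, dist.name, args=params)
            print(f"{name:12s} KS={ks_stat:.3f} p={p_val:.3f}")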

  19. The impact of distributed computing on education

    NASA Technical Reports Server (NTRS)

    Utku, S.; Lestingi, J.; Salama, M.

    1982-01-01

    In this paper, developments in digital computer technology since the early Fifties are reviewed briefly, and the parallelism which exists between these developments and developments in analysis and design procedures of structural engineering is identified. The recent trends in digital computer technology are examined in order to establish the fact that distributed processing is now an accepted philosophy for further developments. The impact of this on the analysis and design practices of structural engineering is assessed by first examining these practices from a data processing standpoint to identify the key operations and data bases, and then fitting them to the characteristics of distributed processing. The merits and drawbacks of the present philosophy in educating structural engineers are discussed and projections are made for the industry-academia relations in the distributed processing environment of structural analysis and design. An ongoing experiment of distributed computing in a university environment is described.

  20. The impact of distributed computing on education

    NASA Technical Reports Server (NTRS)

    Utku, S.; Lestingi, J.; Salama, M.

    1982-01-01

    In this paper, developments in digital computer technology since the early Fifties are reviewed briefly, and the parallelism which exists between these developments and developments in analysis and design procedures of structural engineering is identified. The recent trends in digital computer technology are examined in order to establish the fact that distributed processing is now an accepted philosophy for further developments. The impact of this on the analysis and design practices of structural engineering is assessed by first examining these practices from a data processing standpoint to identify the key operations and data bases, and then fitting them to the characteristics of distributed processing. The merits and drawbacks of the present philosophy in educating structural engineers are discussed and projections are made for the industry-academia relations in the distributed processing environment of structural analysis and design. An ongoing experiment of distributed computing in a university environment is described.

  1. Concept of SPDS integrated into Distributed Computer System (DCS)

    SciTech Connect

    Anikanov, S. S.

    2006-07-01

    Implementation of the Safety Parameter Display System (SPDS) during NPP modernization activities or for a new plant imposes certain requirements on the system design. In many cases, such SPDS functionality is integrated into the non-safety part of the Distributed Computer System (DCS), and the SPDS becomes organically embedded in the major I and C hardware and application software. However, from the licensing perspective, the SPDS shall be designed as a functional entity which satisfies industry standards and as such imposes requirements on the other plant MMI systems. 'Other MMI systems' that are used to support the operating staff during normal, abnormal and emergency plant conditions include Main Control Room Workstations, the Shared Wall Panel Display (WPD), and other information systems. The SPDS resources used to address the system requirements also include the Emergency Response Facilities (TSC, Emergency on-site Facilities, and Emergency off-site Facilities). (authors)

  2. Computation and Analysis of the Global Distribution of the Radioxenon Isotope 133Xe based on Emissions from Nuclear Power Plants and Radioisotope Production Facilities and its Relevance for the Verification of the Nuclear-Test-Ban Treaty

    NASA Astrophysics Data System (ADS)

    Wotawa, Gerhard; Becker, Andreas; Kalinowski, Martin; Saey, Paul; Tuma, Matthias; Zähringer, Matthias

    2010-05-01

    Monitoring of radioactive noble gases, in particular xenon isotopes, is a crucial element of the verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). The capability of the noble gas network, which is currently under construction, to detect signals from a nuclear explosion critically depends on the background created by other sources. Therefore, the global distribution of these isotopes based on emissions and transport patterns needs to be understood. A significant xenon background exists in the reactor regions of North America, Europe and Asia. An emission inventory of the four relevant xenon isotopes has recently been created, which specifies source terms for each power plant. As the major emitters of xenon isotopes worldwide, a few medical radioisotope production facilities have been recently identified, in particular the facilities in Chalk River (Canada), Fleurus (Belgium), Pelindaba (South Africa) and Petten (Netherlands). Emissions from these sites are expected to exceed those of the other sources by orders of magnitude. In this study, emphasis is put on 133Xe, which is the most prevalent xenon isotope. First, based on the emissions known, the resulting 133Xe concentration levels at all noble gas stations of the final CTBT verification network were calculated and found to be consistent with observations. Second, it turned out that emissions from the radioisotope facilities can explain a number of observed peaks, meaning that atmospheric transport modelling is an important tool for the categorization of measurements. Third, it became evident that Nuclear Power Plant emissions are more difficult to treat in the models, since their temporal variation is high and not generally reported. Fourth, there are indications that the assumed annual emissions may be underestimated by factors of two to ten, while the general emission patterns seem to be well understood. Finally, it became evident that 133Xe sources mainly influence the sensitivity of the
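
    One self-contained piece of such an analysis is the radioactive decay correction applied to 133Xe during atmospheric transport. Assuming a half-life of roughly 5.25 days, the sketch below scales a release by its decay over the travel time; the source term and travel times are illustrative numbers only, not values from the study.

        # Decay correction for Xe-133 during atmospheric transport.
        # Half-life of ~5.25 days is assumed; release and travel times are illustrative.
        import math

        HALF_LIFE_DAYS = 5.25
        DECAY_CONST = math.log(2) / (HALF_LIFE_DAYS * 86400.0)   # per second

        def decayed_activity(release_bq, travel_time_s):
            return release_bq * math.exp(-DECAY_CONST * travel_time_s)

        if __name__ == "__main__":
            release = 1.0e13                                     # Bq, illustrative
            for days in (1, 3, 7):
                remaining = decayed_activity(release, days * 86400)
                print(f"after {days} d: {remaining:.2e} Bq ({remaining / release:.1%} remaining)")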

  3. GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.

    2016-12-01

    Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. Data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges for data storage, computing and analysis technologies. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark - a geospatial distributed computing platform for processing large-scale vector, raster and stream data. GISpark is built on the latest virtualized computing infrastructures and distributed computing architecture. OpenStack and Docker are used to build a multi-user cloud computing hosting infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides a spatiotemporal computational model and advanced geospatial visualization tools that apply to other domains with a spatial dimension. We
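
    The Spark-based processing that GISpark builds on can be illustrated with a plain PySpark job that bins point records into a regular grid and counts points per cell; the input path, record layout and cell size are assumptions of the sketch, and GISpark's own APIs are not shown.

        # Plain PySpark sketch: bin (lon, lat) records into 0.01-degree grid cells and count them.
        from pyspark import SparkContext

        CELL = 0.01                       # grid cell size in degrees (assumption)

        def to_cell(line):
            lon, lat = map(float, line.split(",")[:2])
            return (int(lon / CELL), int(lat / CELL)), 1

        if __name__ == "__main__":
            sc = SparkContext(appName="point-grid-counts")
            counts = (sc.textFile("hdfs:///data/points.csv")    # path is illustrative
                        .map(to_cell)
                        .reduceByKey(lambda a, b: a + b))
            for cell, n in counts.take(10):
                print(cell, n)
            sc.stop()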

  4. Cryogenic distribution for the Facility for Rare Isotope Beams

    SciTech Connect

    S. Jones, Dana Arenius, Adam Fila, P. Geutschow, Helmut Laumer, Matt Johnson, Cory S. Waltz, J. G. Weisend II

    2012-06-01

    The Facility for Rare Isotope Beams (FRIB) is a new National User Facility for nuclear science funded by the Department of Energy Office of Science and operated by Michigan State University. The FRIB accelerator linac consists of superconducting radio-frequency (SCRF) cavities operating at 2 K and SC magnets operating at 4.5 K all cooled by a large scale cryogenic refrigeration system. A major subsystem of the cryogenic system will be the distribution system whose primary components will include a distribution box, the transfer lines and the interconnect valve boxes at each cryogenic device. An overview of the conceptual design of the distribution system including engineering details, capabilities and schedule is described.

  5. Pattern recognition and massively distributed computing.

    PubMed

    Davies, E Keith; Glick, Meir; Harrison, Karl N; Richards, W Graham

    2002-12-01

    A feature of Peter Kollman's research was his exploitation of the latest computational techniques to devise novel applications of the free energy perturbation method. He would certainly have seized upon the opportunities offered by massively distributed computing. Here we describe the use of over a million personal computers to perform virtual screening of 3.5 billion druglike molecules against protein targets by pharmacophore pattern matching, together with other applications of pattern recognition such as docking ligands without any a priori knowledge about the binding site location.
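
    Massively distributed screening of this kind reduces to partitioning a huge molecule library into independent work units that each participating machine scores against the query pattern. The coordinator sketch below uses a hypothetical score_molecule() placeholder for the real pharmacophore matching and a local process pool in place of a million volunteer PCs.

        # Sketch of work-unit-style virtual screening: split a molecule library into chunks,
        # score each chunk independently, keep the best hits. score_molecule() is a placeholder.
        import hashlib
        from concurrent.futures import ProcessPoolExecutor

        def score_molecule(smiles):
            # Deterministic dummy "score"; a real screen would do 3D pharmacophore matching.
            return int(hashlib.md5(smiles.encode()).hexdigest(), 16) % 1000 / 1000.0

        def screen_chunk(chunk):
            return [(s, score_molecule(s)) for s in chunk]

        if __name__ == "__main__":
            library = [f"C{'C' * (i % 10)}O" for i in range(10_000)]        # fake SMILES strings
            chunks = [library[i:i + 1000] for i in range(0, len(library), 1000)]
            with ProcessPoolExecutor() as pool:
                results = [hit for part in pool.map(screen_chunk, chunks) for hit in part]
            print(sorted(results, key=lambda x: x[1], reverse=True)[:5])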

  6. Sandia Laboratories hybrid computer and motion simulator facilities

    SciTech Connect

    Curry, W. H.; French, R. E.

    1980-05-01

    Hybrid computer and motion simulator facilities at Sandia National Laboratories include an AD/FIVE-AD10-PDP11/60, an AD/FIVE-PDP11/45, an EAI7800-EAI640, an EAI580/TR48-Nova 800, and two Carco S-45OR-3/R-493A three-axis motion simulators. An EAI680 is used in the analog mode only. This report describes the current equipment.

  7. Icing simulation: A survey of computer models and experimental facilities

    NASA Technical Reports Server (NTRS)

    Potapczuk, M. G.; Reinmann, J. J.

    1991-01-01

    A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include a computer code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focussed on upgrading simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced airfoil aerodynamics is examined.

  8. Molecular Science Computing Facility Scientific Challenges: Linking Across Scales

    SciTech Connect

    De Jong, Wibe A.; Windus, Theresa L.

    2005-07-01

    The purpose of this document is to define the evolving science drivers for performing environmental molecular research at the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and to provide guidance associated with the next-generation high-performance computing center that must be developed at EMSL's Molecular Science Computing Facility (MSCF) in order to address this critical research. The MSCF is the pre-eminent computing facility, supported by the U.S. Department of Energy's (DOE's) Office of Biological and Environmental Research (BER), tailored to provide the fastest time-to-solution for current computational challenges in chemistry and biology, as well as providing the means for broad research in the molecular and environmental sciences. The MSCF provides integral resources and expertise to emerging EMSL Scientific Grand Challenges and Collaborative Access Teams that are designed to leverage the multiple integrated research capabilities of EMSL, thereby creating a synergy between computation and experiment to address environmental molecular science challenges critical to DOE and the nation.

  9. Great Expectations: Distributed Financial Computing at Cornell.

    ERIC Educational Resources Information Center

    Schulden, Louise; Sidle, Clint

    1988-01-01

    The Cornell University Distributed Accounting (CUDA) system is an attempt to provide departments with a software tool for better managing their finances, creating microcomputer standards, creating a vehicle for better administrative microcomputer support, and ensuring that local systems are consistent with central computer systems. (Author/MLW)

  10. Great Expectations: Distributed Financial Computing at Cornell.

    ERIC Educational Resources Information Center

    Schulden, Louise; Sidle, Clint

    1988-01-01

    The Cornell University Distributed Accounting (CUDA) system is an attempt to provide departments with a software tool for better managing their finances, creating microcomputer standards, creating a vehicle for better administrative microcomputer support, and ensuring that local systems are consistent with central computer systems. (Author/MLW)

  11. Distributed Computing: Options in the Eighties.

    ERIC Educational Resources Information Center

    Klingenstein, Kenneth; Devine, Gary D.

    1985-01-01

    University administrative data processing is moving toward a more distributed environment. An architecture must be established that incorporates central sites, campus centers, and end users in a networked pool of computer systems, with applications located at appropriate nodes in the network. (Author/MLW)

  12. Data Integration in Computer Distributed Systems

    NASA Astrophysics Data System (ADS)

    Kwiecień, Błażej

    In this article the author analyzes the problem of data integration in distributed computer systems. The exchange of information between the different levels of an integrated enterprise-process pyramid is fundamental to efficient enterprise operation. Communication and data exchange between levels are not always uniform, owing to the need for different network protocols, communication media, system response times, etc.

  13. Computer Systems for Distributed and Distance Learning.

    ERIC Educational Resources Information Center

    Anderson, M.; Jackson, David

    2000-01-01

    Discussion of network-based learning focuses on a survey of computer systems for distributed and distance learning. Both Web-based systems and non-Web-based systems are reviewed in order to highlight some of the major trends of past projects and to suggest ways in which progress may be made in the future. (Contains 92 references.) (Author/LRW)

  14. Research computing in a distributed cloud environment

    NASA Astrophysics Data System (ADS)

    Fransham, K.; Agarwal, A.; Armstrong, P.; Bishop, A.; Charbonneau, A.; Desmarais, R.; Hill, N.; Gable, I.; Gaudet, S.; Goliath, S.; Impey, R.; Leavett-Brown, C.; Ouellete, J.; Paterson, M.; Pritchet, C.; Penfold-Brown, D.; Podaima, W.; Schade, D.; Sobie, R. J.

    2010-11-01

    The recent increase in availability of Infrastructure-as-a-Service (IaaS) computing clouds provides a new way for researchers to run complex scientific applications. However, using cloud resources for a large number of research jobs requires significant effort and expertise. Furthermore, running jobs on many different clouds presents even more difficulty. In order to make it easy for researchers to deploy scientific applications across many cloud resources, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. In response to a user's job submission to a batch system, the Cloud Scheduler manages the distribution and deployment of user-customized virtual machines across multiple clouds. We describe the motivation for and implementation of a distributed cloud using the Cloud Scheduler that is spread across both commercial and dedicated private sites, and present some early results of scientific data analysis using the system.
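
    The Cloud Scheduler itself is an existing tool; the sketch below is only a minimal illustration of the control loop the abstract describes, in which a resource manager watches a batch queue and boots user-customized VMs on whichever cloud has capacity. The batch_queue and cloud objects and their methods (get_idle_jobs, running_vms, has_image, boot_vm) are hypothetical stand-ins, not the real Cloud Scheduler API.

        import time

        def schedule(batch_queue, clouds, max_vms_per_cloud=10):
            """Minimal control loop: boot one VM per idle job, spread across clouds."""
            while True:
                idle_jobs = batch_queue.get_idle_jobs()   # hypothetical: jobs waiting for a VM
                for job in idle_jobs:
                    # pick the first cloud that still has room and hosts the job's VM image
                    for cloud in clouds:
                        if cloud.running_vms() < max_vms_per_cloud and cloud.has_image(job.vm_image):
                            cloud.boot_vm(image=job.vm_image, flavor=job.flavor)  # hypothetical call
                            break
                time.sleep(60)  # poll the batch queue once a minute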

  15. Computationally intensive econometrics using a distributed matrix-programming language.

    PubMed

    Doornik, Jurgen A; Hendry, David F; Shephard, Neil

    2002-06-15

    This paper reviews the need for powerful computing facilities in econometrics, focusing on concrete problems which arise in financial economics and in macroeconomics. We argue that the profession is being held back by the lack of easy-to-use generic software which is able to exploit the availability of cheap clusters of distributed computers. Our response is to extend, in a number of directions, the well-known matrix-programming interpreted language Ox developed by the first author. We note three possible levels of extensions: (i) Ox with parallelization explicit in the Ox code; (ii) Ox with a parallelized run-time library; and (iii) Ox with a parallelized interpreter. This paper studies and implements the first case, emphasizing the need for deterministic computing in science. We give examples in the context of financial economics and time-series modelling.
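
    The paper's extensions are to the Ox language itself; as a generic Python illustration of level (i), parallelism made explicit in user code with deterministic results, each task below derives a reproducible random stream from its own index, so the Monte Carlo estimate is identical no matter how the tasks are spread over workers.

        from multiprocessing import Pool
        import numpy as np

        def mc_block(args):
            """One explicitly parallel Monte Carlo block with a deterministic, task-indexed seed."""
            task_id, n = args
            rng = np.random.default_rng([12345, task_id])  # reproducible stream per task
            x = rng.standard_normal(n)
            return np.mean(x**2)                           # block estimate of E[X^2] = 1

        if __name__ == "__main__":
            tasks = [(i, 100_000) for i in range(8)]
            with Pool(processes=4) as pool:
                estimates = pool.map(mc_block, tasks)
            print(sum(estimates) / len(estimates))  # same value for any worker count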

  16. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
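
    As a rough illustration of the provisioning step described above, the snippet below uses the openstacksdk Python client to boot an instance from a base image. The cloud name, image, flavor and network identifiers are placeholders, and the Puppet contextualization and Torque integration described in the paper are not shown.

        import openstack

        # Credentials are read from clouds.yaml or OS_* environment variables.
        conn = openstack.connect(cloud="nectar")  # placeholder cloud name

        server = conn.compute.create_server(
            name="worker-node-001",
            image_id="BASE_SL_IMAGE_ID",       # placeholder: base Scientific Linux image
            flavor_id="M1_LARGE_FLAVOR_ID",    # placeholder flavor
            networks=[{"uuid": "TENANT_NETWORK_ID"}],
        )
        server = conn.compute.wait_for_server(server)
        print("Booted", server.name, "at", server.access_ipv4)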

  17. GEANT4 distributed computing for compact clusters

    NASA Astrophysics Data System (ADS)

    Harrawood, Brian P.; Agasthya, Greeshma A.; Lakshmanan, Manu N.; Raterman, Gretchen; Kapadia, Anuj J.

    2014-11-01

    A new technique for distribution of GEANT4 processes is introduced to simplify running a simulation in a parallel environment such as a tightly coupled computer cluster. Using a new C++ class derived from the GEANT4 toolkit, multiple runs forming a single simulation are managed across a local network of computers with a simple inter-node communication protocol. The class is integrated with the GEANT4 toolkit and is designed to scale from a single symmetric multiprocessing (SMP) machine to compact clusters ranging in size from tens to thousands of nodes. User designed 'work tickets' are distributed to clients using a client-server work flow model to specify the parameters for each individual run of the simulation. The new g4DistributedRunManager class was developed and well tested in the course of our Neutron Stimulated Emission Computed Tomography (NSECT) experiments. It will be useful for anyone running GEANT4 for large discrete data sets such as covering a range of angles in computed tomography, calculating dose delivery with multiple fractions or simply speeding the through-put of a single model.
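
    The g4DistributedRunManager class is C++ and part of the authors' toolkit extension; the client-server 'work ticket' pattern it implements can, however, be sketched generically. Below, a queue of tickets (here, projection angles for a CT-style scan) is consumed by worker processes standing in for networked client nodes; the ticket fields are illustrative.

        from multiprocessing import Process, Queue

        def worker(tickets, results):
            """Client: repeatedly fetch a work ticket and run one 'simulation' with it."""
            while True:
                ticket = tickets.get()
                if ticket is None:          # sentinel: no more work for this client
                    break
                angle = ticket["angle_deg"]
                results.put((angle, f"ran simulation at {angle} deg"))  # stand-in for a GEANT4 run

        if __name__ == "__main__":
            tickets, results = Queue(), Queue()
            for angle in range(0, 180, 15):             # one ticket per projection angle
                tickets.put({"angle_deg": angle})
            n_workers = 4
            for _ in range(n_workers):
                tickets.put(None)                       # one sentinel per worker
            procs = [Process(target=worker, args=(tickets, results)) for _ in range(n_workers)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            while not results.empty():
                print(results.get())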

  18. Distributed Data Mining using a Public Resource Computing Framework

    NASA Astrophysics Data System (ADS)

    Cesario, Eugenio; de Caria, Nicola; Mastroianni, Carlo; Talia, Domenico

    The public resource computing paradigm is often used as a successful and low-cost mechanism for the management of several classes of scientific and commercial applications that require the execution of a large number of independent tasks. Public computing frameworks, also known as “Desktop Grids”, exploit the computational power and storage facilities of private computers, or “workers”. Despite the inherent decentralized nature of the applications to which they are devoted, these systems often adopt a centralized mechanism for the assignment of jobs and distribution of input data, as is the case for BOINC, the most popular framework in this realm. We present a decentralized framework that aims at increasing the flexibility and robustness of public computing applications, thanks to two basic features: (i) the adoption of a P2P protocol for dynamically matching the job specifications with the worker characteristics, without relying on centralized resources; (ii) the use of distributed cache servers for an efficient dissemination and reutilization of data files. This framework is exploitable for a wide set of applications. In this work, we describe how a Java prototype of the framework was used to tackle the problem of mining frequent itemsets from a transactional dataset, and show some preliminary yet interesting performance results that demonstrate the efficiency improvements that can derive from the presented architecture.
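
    The core of feature (i) is a predicate that matches a job's requirements against a worker's advertised characteristics without a central server. The function below is a minimal, generic sketch of such a predicate; the attribute names are illustrative and not those of the authors' Java prototype.

        def matches(job_spec, worker_ad):
            """Return True if a worker's advertised resources satisfy a job's requirements.
            Attribute names are illustrative only."""
            return (worker_ad["free_memory_mb"] >= job_spec["min_memory_mb"]
                    and worker_ad["free_disk_mb"] >= job_spec["min_disk_mb"]
                    and job_spec["required_os"] in worker_ad["operating_systems"])

        # Example: a data-mining job asking for 2 GB RAM and 1 GB scratch space on Linux
        job = {"min_memory_mb": 2048, "min_disk_mb": 1024, "required_os": "linux"}
        worker = {"free_memory_mb": 4096, "free_disk_mb": 8192, "operating_systems": ["linux", "windows"]}
        print(matches(job, worker))  # True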

  19. Open Source Live Distributions for Computer Forensics

    NASA Astrophysics Data System (ADS)

    Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele

    Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment) that contains a collection of tools wrapped up into a user friendly environment. The CAINE forensic framework introduces novel important features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that drives digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.

  20. BES-III distributed computing status

    NASA Astrophysics Data System (ADS)

    Belov, S. D.; Deng, Z. Y.; Korenkov, V. V.; Li, W. D.; Lin, T.; Ma, Z. T.; Nicholson, C.; Pelevanyuk, I. S.; Suo, B.; Trofimov, V. V.; Tsaregorodtsev, A. U.; Uzhinskiy, A. V.; Yan, T.; Yan, X. F.; Zhang, X. M.; Zhemchugov, A. S.

    2016-09-01

    The BES-III experiment at the Institute of High Energy Physics (Beijing, China) is aimed at precision measurements of e+e- annihilation in the energy range from 2.0 to 4.6 GeV. The world's largest samples of J/psi and psi' events and unique samples of XYZ data have already been collected. The expected increase of the data volume in the coming years required a significant evolution of the computing model, namely a shift from centralized data processing to a distributed one. This report summarizes the current design of the BES-III distributed computing system, some of the key decisions, and the experience gained during 2 years of operations.

  1. Subtlenoise: sonification of distributed computing operations

    NASA Astrophysics Data System (ADS)

    Love, P. A.

    2015-12-01

    The operation of distributed computing systems requires comprehensive monitoring to ensure reliability and robustness. There are two components found in most monitoring systems: one being visually rich time-series graphs and another being notification systems for alerting operators under certain pre-defined conditions. In this paper the sonification of monitoring messages is explored using an architecture that fits easily within existing infrastructures based on mature opensource technologies such as ZeroMQ, Logstash, and Supercollider (a synth engine). Message attributes are mapped onto audio attributes based on broad classification of the message (continuous or discrete metrics) but keeping the audio stream subtle in nature. The benefits of audio rendering are described in the context of distributed computing operations and may provide a less intrusive way to understand the operational health of these systems.
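
    A minimal sketch of the message-to-audio mapping stage, assuming monitoring events arrive as JSON over a ZeroMQ subscription (consistent with the architecture above): continuous metrics are scaled to an amplitude-like value, discrete events select a nominal pitch, and the result would then be forwarded to a synth engine such as SuperCollider. The port, field names and mappings are assumptions.

        import zmq

        PITCH_BY_EVENT = {"job_failed": 220.0, "transfer_done": 440.0, "site_blacklisted": 110.0}

        def to_audio(msg):
            """Map one monitoring message onto subtle audio parameters."""
            if msg.get("type") == "metric":                      # continuous metric -> amplitude
                level = min(float(msg["value"]) / float(msg.get("max", 100.0)), 1.0)
                return {"amp": 0.1 + 0.2 * level, "freq": 330.0}
            return {"amp": 0.15, "freq": PITCH_BY_EVENT.get(msg.get("event"), 330.0)}

        ctx = zmq.Context()
        sub = ctx.socket(zmq.SUB)
        sub.connect("tcp://localhost:5556")         # assumed address of the message broker
        sub.setsockopt_string(zmq.SUBSCRIBE, "")    # subscribe to everything

        while True:
            params = to_audio(sub.recv_json())
            print(params)                           # here: forward to the synth engine instead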

  2. Low Power Computing in Distributed Systems

    DTIC Science & Technology

    2006-04-01

    IEEE Communications Magazine, Volume 40, Issue 8, pp. 102-114, Aug. 2002. [3] E. R. Post and M. Orth, “Smart Fabric, or Wearable Computing,” Proc...www.cse.psu.edu/~mdl/software.htm [20] http://carlsberg.mit.edu/JouleTrack/ [21] M. Srivastava, A. Chandrakasan, R. Brodersen, “Predictive system shutdown...Dynamic Load Balancing in Distributed Systems,” IEEE International Conference on Systems, Man and Cybernetics, pp. 3795-3799, 1995. [27] A. Talukder

  3. Uninstrumented assembly airflow testing in the Annular Flow Distribution facility

    SciTech Connect

    Kielpinski, A.L.

    1992-02-01

    During the Emergency Cooling System phase of a postulated large-break loss of coolant accident (ECS-LOCA), air enters the primary loop and is pumped down the reactor assemblies. One of the experiments performed to support the analysis of this accident was the Annular Flow Distribution (AFD) experiment, conducted in a facility built for this purpose at Babcock and Wilcox Alliance Research Center in Alliance, Ohio. As part of this experiment, a large body of airflow data were acquired in a prototypical mockup of the Mark 22 reactor assembly. This assembly was known as the AFD (or the I-AFD here) reference assembly. The I-AFD assembly was fully prototypical, having been manufactured in SRS's production fabrication facility. Similar Mark 22 mockup assemblies were tested in several test facilities in the SRS Heat Transfer Laboratory (HTL). Discrepancies were found. The present report documents further work done to address the discrepancy in airflow measurements between the AFD facility and HTL facilities. The primary purpose of this report is to disseminate the data from the U-AFD test, and to compare these test results to the I-AFD data and the U-AT data. A summary table of the test data and the B&W data transmittal letter are included as an attachment to this report. The full data transmittal volume from B&W (including time plots of the various instruments) is included as an appendix to this report. These data are further analyzed by comparing them to two other HTL tests, namely, SPRIHTE 1 and the Single Assembly Test Stand (SATS).

  4. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial is intended as a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly increasing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will utilize the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon Machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  6. Computer Controlled Automatic Test Facility For Fiber Optic Transmission Systems

    NASA Astrophysics Data System (ADS)

    Goddard, G. W.; Jemczyk, I. D.; Mondor, D. R.

    1983-03-01

    A computer controlled automated test facility has been developed by Bell-Northern Research for the laboratory evaluation of fiber-optic digital transmission equipment over a range of environmental, electrical and optical stress conditions. The system, named Fiber Optic System Test (FROST), is currently used to verify the design integrity and performance of short wavelength (850 nm) fiber-optic transmission equipment operating at the DS-2 (6.312 Mb/s) and DS-3 (44.736 Mb/s) rates in the digital hierarchy. It can also test equipment operating at the DS-1 (1.544 Mb/s) rate. This paper presents the basic system design, describes the implementation and outlines the capabilities of the system. The automated test system has provided data on the equipment being tested which supplemented and expanded information obtained from system trials carried out under field conditions. It permits the rapid verification of improvements in equipment design and enables tests to be carried out on several systems simultaneously, which would be time consuming and expensive if undertaken using manual control. The effectiveness of the test program using the FROST facility has led to the system being enhanced to accommodate long wavelength fiber-optic digital transmission systems. It also has potential applications as a Computer Aided Manufacturing tool.

  7. Distributed Computing for the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Chudoba, J.

    2015-12-01

    The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been successfully used. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years, VO auger has been among the top ten EGI users in terms of total computing time used. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with small and less reliable sites used for the bulk production. The new system also has the possibility to use available cloud resources. The Dirac File Catalog replaced LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we compare the old and the new production systems and report on the experience of migrating to the new system.

  8. The Argonne Leadership Computing Facility 2010 annual report.

    SciTech Connect

    Drugan, C.

    2011-05-09

    Researchers found more ways than ever to conduct transformative science at the Argonne Leadership Computing Facility (ALCF) in 2010. Both familiar initiatives and innovative new programs at the ALCF are now serving a growing, global user community with a wide range of computing needs. The Department of Energy's (DOE) INCITE Program remained vital in providing scientists with major allocations of leadership-class computing resources at the ALCF. For calendar year 2011, 35 projects were awarded 732 million supercomputer processor-hours for computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. Argonne also continued to provide Director's Discretionary allocations - 'start up' awards - for potential future INCITE projects. And DOE's new ASCR Leadership Computing (ALCC) Program allocated resources to 10 ALCF projects, with an emphasis on high-risk, high-payoff simulations directly related to the Department's energy mission, national emergencies, or for broadening the research community capable of using leadership computing resources. While delivering more science today, we've also been laying a solid foundation for high performance computing in the future. After a successful DOE Lehman review, a contract was signed to deliver Mira, the next-generation Blue Gene/Q system, to the ALCF in 2012. The ALCF is working with the 16 projects that were selected for the Early Science Program (ESP) to enable them to be productive as soon as Mira is operational. Preproduction access to Mira will enable ESP projects to adapt their codes to its architecture and collaborate with ALCF staff in shaking down the new system. We expect the 10-petaflops system to stoke economic growth and improve U.S. competitiveness in key areas such as advancing clean energy and addressing global climate change. Ultimately, we envision Mira as a stepping-stone to exascale-class computers that will be faster than petascale

  9. Pseudo-interactive monitoring in distributed computing

    SciTech Connect

    Sfiligoi, I.; Bradley, D.; Livny, M.; /Wisconsin U., Madison

    2009-05-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.

  10. Pseudo-interactive monitoring in distributed computing

    NASA Astrophysics Data System (ADS)

    Sfiligoi, I.; Bradley, D.; Livny, M.

    2010-04-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.
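
    The concept can be illustrated outside Condor with a small wrapper that a batch system could execute in a running job's sandbox on request: it accepts only whitelisted, read-only commands and returns their output. This is a generic sketch of pseudo-interactive monitoring, not the Condor mechanism described in the paper.

        import shlex
        import subprocess

        # Read-only commands only; interactive tools like top must be run in batch mode (top -b -n 1).
        ALLOWED = {"ls", "cat", "ps", "top", "lsof", "netstat"}

        def run_monitoring_command(command_line, workdir):
            """Run one whitelisted command in the job's working directory and capture its output."""
            argv = shlex.split(command_line)
            if not argv or argv[0] not in ALLOWED:
                return "refused: command is not whitelisted\n"
            result = subprocess.run(argv, cwd=workdir, capture_output=True, text=True, timeout=30)
            return result.stdout + result.stderr

        # Example: peek at the job's log files without interrupting it
        print(run_monitoring_command("ls -l", "/tmp"))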

  11. 78 FR 18353 - Guidance for Industry: Blood Establishment Computer System Validation in the User's Facility...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ... HUMAN SERVICES Food and Drug Administration Guidance for Industry: Blood Establishment Computer System... ``Guidance for Industry: Blood Establishment Computer System Validation in the User's Facility'' dated April... establishment computer system validation program, consistent with recognized principles of software validation...

  12. Airport Simulations Using Distributed Computational Resources

    NASA Technical Reports Server (NTRS)

    McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Daniel (Technical Monitor)

    2002-01-01

    The Virtual National Airspace Simulation (VNAS) will improve the safety of Air Transportation. In 2001, using simulation and information management software running over a distributed network of super-computers, researchers at NASA Ames, Glenn, and Langley Research Centers developed a working prototype of a virtual airspace. This VNAS prototype modeled daily operations of the Atlanta airport by integrating measured operational data and simulation data on up to 2,000 flights a day. The concepts and architecture developed by NASA for this prototype are integral to the National Airspace Simulation to support the development of strategies improving aviation safety, identifying precursors to component failure.

  13. A Hundred Impossibility Proofs for Distributed Computing

    DTIC Science & Technology

    1989-08-01

    distributed computing . In this category, I include not just results that say that a particular task cannot be accomplished, but also lower bound results, which say that a task cannot be accomplished within a certain bound on cost. I started out with a simple plan for preparing this talk: I would spend a couple of weeks reading all the impossibility proofs in our fields, and would categorize them according to the ideas used. Then I would make wise and general observations, and try to predict where the future of this area is headed. That turned out to be a bit too ambitious;

  14. Interoperable PKI Data Distribution in Computational Grids

    SciTech Connect

    Pala, Massimiliano; Cholia, Shreyas; Rea, Scott A.; Smith, Sean W.

    2008-07-25

    One of the most successful working examples of virtual organizations, computational grids need authentication mechanisms that inter-operate across domain boundaries. Public Key Infrastructures (PKIs) provide sufficient flexibility to allow resource managers to securely grant access to their systems in such distributed environments. However, as PKIs grow and services are added to enhance both security and usability, users and applications must struggle to discover available resources, particularly when the Certification Authority (CA) is alien to the relying party. This article presents how to overcome these limitations of the current grid authentication model by integrating the PKI Resource Query Protocol (PRQP) into the Grid Security Infrastructure (GSI).

  15. Adaptive file allocation in distributed computer systems

    NASA Astrophysics Data System (ADS)

    Mahmood, A.; Khan, H. U.; Fatmi, H. A.

    1994-12-01

    An algorithm to dynamically reallocate the database files in a computer network is presented. The proposed algorithm uses the best fit approach to allocate and delete beneficial file copies. A key problem of economical estimation of future access and update pattern is discussed and an algorithm based on the Gabor-Kolmogorov learning process is presented to estimate the access and the update patterns. A distributed candidate selection algorithm is presented to reduce the number of files and nodes in reallocation. The simulation results are presented to demonstrate both accuracy and efficiency of the proposed algorithms.
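
    A greatly simplified version of the best-fit allocation idea: a copy of a file is kept at a node only when the estimated benefit of serving local reads outweighs the estimated cost of propagating updates to that copy. The cost constants are placeholders, and the estimation of access and update rates (handled in the paper by a Gabor-Kolmogorov learning process) is assumed to have been done already.

        READ_REMOTE_COST = 5.0   # assumed cost of one remote read (arbitrary units)
        UPDATE_COPY_COST = 2.0   # assumed cost of propagating one update to one extra copy

        def beneficial_copies(read_rate, update_rate):
            """Return the set of nodes where holding a file copy reduces total estimated cost.

            read_rate[n] : estimated local reads per hour issued by node n
            update_rate  : estimated updates per hour to the file (from all nodes)
            """
            copies = set()
            for node, reads in read_rate.items():
                saved = reads * READ_REMOTE_COST        # remote reads avoided by a local copy
                added = update_rate * UPDATE_COPY_COST  # extra update traffic for that copy
                if saved > added:
                    copies.add(node)
            return copies or {max(read_rate, key=read_rate.get)}  # always keep at least one copy

        print(beneficial_copies({"A": 10, "B": 1, "C": 4}, update_rate=8))  # {'A', 'C'}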

  16. A Generalized Management Information System for Computer Facilities at Educational Institutions.

    ERIC Educational Resources Information Center

    Bowman, Patrick Awalt

    The problem of managing computer facilities at educational institutions is examined. User categories are defined, and the interrelations between user requirements and the goals/objectives of the facility are discussed. The factors that influence computer facility operations are also enumerated. In addition, management information…

  17. Computer model to simulate testing at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.

    1995-01-01

    A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.

  18. Shipboard Application of a Ring Structured Distributed Computing System.

    DTIC Science & Technology

    Considerable research is currently going on into the application of distributed computing systems. They appear particularly suitable for the...structured distributed computing system might be adapted to function in this environment. Included in this consideration are the feasibility of

  19. Development of a Defence Distributed Computing Environment (DCE) Database Demonstrator,

    DTIC Science & Technology

    1995-11-01

    This report discusses the development of a Defence Distributed Computing Environment (DCE) database demonstrator program. The Demonstrator program...showcases the interoperability, portability, survivability and security features of Open Software Foundation’s Distributed Computing Environment.

  20. Models and Measurements of Parallelism for a Distributed Computer System.

    DTIC Science & Technology

    1982-01-01

    that parallel execution of the processes comprising an application program will defray the overhead costs of distributed computing. This...of Different Approaches to Distributed Computing," Proceedings of the 1st International Conference on Distributed Computer Systems, Huntsville, AL...Oct. 1-5, 1979), pp. 222-232. [20] Liskov, B., "Primitives for Distributed Computing," Proceedings of the 7th Symposium on Operating System

  1. Testing the CDF distributed computing framework

    SciTech Connect

    Bartsch, Valeria; Baranovski, Andrew; Belforte, Stefano; Burgon-Lyon, Morag; Garzoglio, Gabriele; Herber, Randolph; Illingworth, Robert; Kennedy, Rob; Kerzel, Ulrich; Kreymer, Art; Leslie, Matt; Loebel-Carpenter, Lauri; Lueking, Lee; Lyon, Adam; Merritt, Wyatt; Ratnikov, Fedor; Sill, Alan; St. Denis, Richard; Stonjek, Stefan; Terekhov, Igor; Trumbo, Julie; /Fermilab /Oxford U. /INFN, Trieste /Glasgow U. /Karlsruhe U. /Rutgers U., Piscataway /Texas Tech.

    2004-12-01

    A major source of CPU power for CDF (Collider Detector at Fermilab) is the CAF (Central Analysis Farm) [1] at Fermilab. The CAF is a farm of computers running Linux with access to the CDF data handling system and databases to allow CDF collaborators to run batch analysis jobs. Besides providing CPU power, it has a good monitoring tool. The CAF software is a wrapper around a batch system, either fbsng [3] or condor, to submit jobs in a uniform way. This makes submission to the CAF clusters inside and outside Fermilab possible from many computers with Kerberos authentication. It is mainly used to access datasets which comprise a large number of files and analyze the data. Up to now the DCache system has been used to access the files. In autumn 2004 some of the important datasets will only be readable with the help of the data handling system SAM (Sequential Access to data via Metadata) [2]. This will be done in order to switch to using only one data handling system at Fermilab and on the remote sites. SAM has been used in run II to store, manage, deliver and track the processing of all data. It is designed to copy data to remote sites with remote analysis in mind. To prove CAF and SAM could provide the required CPU power and Data Handling, stress tests of the combined system were carried out. A second goal of CDF is to distribute computing. In 2005, 50% of the computing is to be located outside of Fermilab. For this purpose CDF will use the DCAF (Decentralized CDF Analysis Farms) in combination with SAM. To achieve user friendliness the SAM station environment has to be common to all stations and adaptations to the environment have to be made.

  2. An Applet-based Anonymous Distributed Computing System.

    ERIC Educational Resources Information Center

    Finkel, David; Wills, Craig E.; Ciaraldi, Michael J.; Amorin, Kevin; Covati, Adam; Lee, Michael

    2001-01-01

    Defines anonymous distributed computing systems and focuses on the specifics of a Java, applet-based approach for large-scale, anonymous, distributed computing on the Internet. Explains the possibility of a large number of computers participating in a single computation and describes a test of the functionality of the system. (Author/LRW)

  3. Large Distributed Data Acquisition System at the Z Facility

    SciTech Connect

    Mills, Jerry A.; Potter, James E.

    1999-06-15

    Experiments at the Z machine generate over four hundred channels of waveform data on each accelerator shot. Most experiments require timing accuracy to better than one nanosecond between multiple distributed recording locations throughout the facility. Experimental diagnostics and high speed data recording equipment are typically located within a few meters of the 200 to 300 terawatt X-ray source produced during Z-pinch experiments. This paper will discuss techniques used to resolve the timing of the several hundred data channels acquired on each shot event and system features which allow viewing of waveforms within a few minutes after a shot. Methods for acquiring high bandwidth signals in a severe noise environment will also be discussed.

  4. Equilibrium distribution from distributed computing (simulations of protein folding).

    PubMed

    Scalco, Riccardo; Caflisch, Amedeo

    2011-05-19

    Multiple independent molecular dynamics (MD) simulations are often carried out starting from a single protein structure or a set of conformations that do not correspond to a thermodynamic ensemble. Therefore, a significant statistical bias is usually present in the Markov state model generated by simply combining the whole MD sampling into a network whose nodes and links are clusters of snapshots and transitions between them, respectively. Here, we introduce a depth-first search algorithm to extract from the whole conformation space network the largest ergodic component, i.e., the subset of nodes of the network whose transition matrix corresponds to an ergodic Markov chain. For multiple short MD simulations of a globular protein (as in distributed computing), the steady state, i.e., stationary distribution determined using the largest ergodic component, yields more accurate free energy profiles and mean first passage times than the original network or the ergodic network obtained by imposing detailed balance by means of symmetrization of the transition counts.
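
    A sketch of the central step under common Markov-state-model assumptions: pool the transition counts from all runs, keep the largest strongly connected (ergodic) component of the transition graph, and estimate the stationary distribution from the row-normalized counts restricted to that component. This uses networkx and numpy and is not the authors' code.

        import networkx as nx
        import numpy as np

        def stationary_from_counts(counts):
            """counts: dict mapping (i, j) cluster pairs to observed transition counts."""
            G = nx.DiGraph()
            G.add_edges_from(counts.keys())
            ergodic = max(nx.strongly_connected_components(G), key=len)  # largest ergodic component
            nodes = sorted(ergodic)
            index = {n: k for k, n in enumerate(nodes)}
            C = np.zeros((len(nodes), len(nodes)))
            for (i, j), c in counts.items():
                if i in index and j in index:
                    C[index[i], index[j]] = c
            T = C / C.sum(axis=1, keepdims=True)             # row-stochastic transition matrix
            vals, vecs = np.linalg.eig(T.T)
            pi = np.real(vecs[:, np.argmax(np.real(vals))])  # left eigenvector for eigenvalue 1
            pi = np.abs(pi) / np.abs(pi).sum()
            return dict(zip(nodes, pi))

        # Toy example: cluster 2 is outside the largest ergodic component and is discarded
        print(stationary_from_counts({(0, 1): 9, (1, 0): 3, (1, 1): 6, (2, 0): 5}))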

  5. Absolute nonlocality via distributed computing without communication

    NASA Astrophysics Data System (ADS)

    Czekaj, Ł.; Pawłowski, M.; Vértesi, T.; Grudka, A.; Horodecki, M.; Horodecki, R.

    2015-09-01

    Understanding the role that quantum entanglement plays as a resource in various information processing tasks is one of the crucial goals of quantum information theory. Here we propose an alternative perspective for studying quantum entanglement: distributed computation of functions without communication between nodes. To formalize this approach, we propose identity games. Surprisingly, despite no signaling, we obtain that nonlocal quantum strategies beat classical ones in terms of winning probability for identity games originating from certain bipartite and multipartite functions. Moreover we show that, for a majority of functions, access to general nonsignaling resources boosts success probability two times in comparison to classical ones for a number of large enough outputs. Because there are no constraints on the inputs and no processing of the outputs in the identity games, they detect very strong types of correlations: absolute nonlocality.

  6. LHCbDirac: distributed computing in LHCb

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Charpentier, P.; Graciani, R.; Tsaregorodtsev, A.; Closier, J.; Mathe, Z.; Ubeda, M.; Zhelezov, A.; Lanciotti, E.; Romanovskiy, V.; Ciba, K. D.; Casajus, A.; Roiser, S.; Sapunov, M.; Remenska, D.; Bernardoff, V.; Santana, R.; Nandakumar, R.

    2012-12-01

    We present LHCbDirac, an extension of the DIRAC community Grid solution that handles LHCb specificities. The DIRAC software has been developed for many years within LHCb only. Nowadays it is a generic software, used by many scientific communities worldwide. Each community wanting to take advantage of DIRAC has to develop an extension, containing all the necessary code for handling their specific cases. LHCbDirac is an actively developed extension, implementing the LHCb computing model and workflows handling all the distributed computing activities of LHCb. Such activities include real data processing (reconstruction, stripping and streaming), Monte-Carlo simulation and data replication. Other activities are groups and user analysis, data management, resources management and monitoring, data provenance, accounting for user and production jobs. LHCbDirac also provides extensions of the DIRAC interfaces, including a secure web client, python APIs and CLIs. Before putting in production a new release, a number of certification tests are run in a dedicated setup. This contribution highlights the versatility of the system, also presenting the experience with real data processing, data and resources management, monitoring for activities and resources.

  7. Computation-distributed probability hypothesis density filter

    NASA Astrophysics Data System (ADS)

    Wang, Junjie; Zhao, Lingling; Su, Xiaohong; Shi, Chunmei; Ma, JiQuan

    2016-12-01

    Particle probability hypothesis density filtering has become a promising approach for multi-target tracking due to its capability of handling an unknown and time-varying number of targets in a nonlinear, non-Gaussian system. However, its computational complexity increases linearly with the number of obtained observations and the number of particles, which can be very time consuming, particularly when numerous targets and clutter exist in the surveillance region. To address this issue, we present a distributed computation particle probability hypothesis density (PHD) filter for target tracking. It runs several local decomposed particle PHD filters in parallel on separate processing elements. Each processing element takes responsibility for a portion of the particles but all measurements and provides local estimates. A central unit controls particle exchange among the processing elements and specifies a fusion rule to match and fuse the estimates from different local filters. The proposed framework is suitable for parallel implementation. Simulations verify that the proposed method can significantly accelerate computation while maintaining accuracy comparable to the standard particle PHD filter.
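
    The decomposition described above (each processing element holds a share of the particles but sees all measurements, and a central unit fuses the local results) is sketched below for a plain particle-weight update; the full PHD recursion and the paper's fusion rule are omitted, and the Gaussian likelihood is a placeholder.

        from concurrent.futures import ProcessPoolExecutor
        import numpy as np

        def local_update(args):
            """One processing element: weight its own particles against ALL measurements."""
            particles, measurements = args
            weights = np.ones(len(particles))
            for z in measurements:                               # every element sees every measurement
                weights *= np.exp(-0.5 * (particles - z) ** 2)   # placeholder Gaussian likelihood
            return particles, weights

        def fuse(partial_results):
            """Central unit: concatenate local results and renormalize the weights."""
            parts = np.concatenate([p for p, _ in partial_results])
            weights = np.concatenate([w for _, w in partial_results])
            return parts, weights / weights.sum()

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            particles = rng.normal(0.0, 2.0, size=4000)
            measurements = np.array([0.5, 0.7])
            chunks = np.array_split(particles, 4)        # one share of particles per element
            with ProcessPoolExecutor(max_workers=4) as ex:
                results = list(ex.map(local_update, [(c, measurements) for c in chunks]))
            est_particles, est_weights = fuse(results)
            print("weighted state estimate:", float(est_particles @ est_weights))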

  8. Automating usability of ATLAS Distributed Computing resources

    NASA Astrophysics Data System (ADS)

    Tupputi, S. A.; Di Girolamo, A.; Kouba, T.; Schovancová, J.; Atlas Collaboration

    2014-06-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of ATLAS computing sites storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes history of storage monitoring tests outcome. SAAB accomplishes both the tasks of providing global monitoring as well as automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage areas monitoring and central management at all levels. Such review has involved the reordering and optimization of SAM tests deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows monitoring the storage resources status with fine time-granularity and automatic actions to be taken in foreseen cases, like automatic outage handling and notifications to sites. Hence, the human actions are restricted to reporting and following up problems, where and when needed. In this work we show SAAB working principles and features. We present also the decrease of human interactions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
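
    The abstract does not spell out SAAB's inference algorithm; as an illustration of the general approach, the sketch below blacklists a storage area when the failure fraction over a sliding window of recent monitoring tests is too high and whitelists it again once tests recover. The window size and thresholds are assumptions.

        from collections import deque

        class StorageAreaStatus:
            """Track recent test outcomes for one storage area and derive a blacklist decision."""

            def __init__(self, window=20, blacklist_fraction=0.5, whitelist_fraction=0.1):
                self.history = deque(maxlen=window)          # True = test passed, False = failed
                self.blacklist_fraction = blacklist_fraction
                self.whitelist_fraction = whitelist_fraction
                self.blacklisted = False

            def record(self, passed):
                self.history.append(passed)
                failed_fraction = 1.0 - sum(self.history) / len(self.history)
                if not self.blacklisted and failed_fraction >= self.blacklist_fraction:
                    self.blacklisted = True                  # too many recent failures
                elif self.blacklisted and failed_fraction <= self.whitelist_fraction:
                    self.blacklisted = False                 # tests have recovered
                return self.blacklisted

        status = StorageAreaStatus()
        for outcome in [True] * 5 + [False] * 10 + [True] * 20:
            status.record(outcome)
        print("blacklisted now?", status.blacklisted)  # False: the site recovered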

  9. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    SciTech Connect

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C.

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computation science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to implement those algorithms

  10. Space power distribution system technology. Volume 3: Test facility design

    NASA Technical Reports Server (NTRS)

    Decker, D. K.; Cannady, M. D.; Cassinelli, J. E.; Farber, B. F.; Lurie, C.; Fleck, G. W.; Lepisto, J. W.; Messner, A.; Ritterman, P. F.

    1983-01-01

    The AMPS test facility is a major tool in the attainment of more economical space power. The ultimate goals of the test facility, its primary functional requirements and conceptual design, and the major equipment it contains are discussed.

  11. 41 CFR 101-26.503 - Multiple award schedule purchases made by GSA supply distribution facilities.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... purchases made by GSA supply distribution facilities. 101-26.503 Section 101-26.503 Public Contracts and... SUPPLY AND PROCUREMENT 26-PROCUREMENT SOURCES AND PROGRAM 26.5-GSA Procurement Programs § 101-26.503 Multiple award schedule purchases made by GSA supply distribution facilities. GSA supply distribution...

  12. 41 CFR 101-26.503 - Multiple award schedule purchases made by GSA supply distribution facilities.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... purchases made by GSA supply distribution facilities. 101-26.503 Section 101-26.503 Public Contracts and... SUPPLY AND PROCUREMENT 26-PROCUREMENT SOURCES AND PROGRAM 26.5-GSA Procurement Programs § 101-26.503 Multiple award schedule purchases made by GSA supply distribution facilities. GSA supply distribution...

  13. A distributed data acquisition system for aeronautics test facilities

    NASA Technical Reports Server (NTRS)

    Fronek, Dennis L.; Setter, Robert N.; Blumenthal, Philip Z.; Smalley, Robert R.

    1987-01-01

    The NASA Lewis Research Center is in the process of installing a new data acquisition and display system. This new system will provide small and medium sized aeronautics test facilities with a state-of-the-art real-time data acquisition and display system. The new data system will provide for the acquisition of signals from a variety of instrumentation sources. They include analog measurements of temperatures, pressures, and other steady state voltage inputs; frequency inputs to measure speed and flow; discrete I/O for significant events, and modular instrument systems such as multiplexed pressure modules or electronic instrumentation with a IEEE 488 interface. The data system is designed to acquire data, convert it to engineering units, compute test dependent performance calculations, limit check selected channels or calculations, and display the information in alphanumeric or graphical form with a cycle time of one second for the alphanumeric data. This paper describes the system configuration, its salient features, and the expected impact on testing.
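
    The processing chain described above (acquire, convert to engineering units, compute test-dependent quantities, limit-check and display once per second) maps naturally onto a simple loop. The sketch below is generic: the read_raw_channels function, calibration coefficients and limits are hypothetical, not the Lewis system's actual interfaces.

        import time

        CAL = {"T1": (0.1, -50.0), "P1": (0.02, 0.0)}     # hypothetical gain/offset per channel
        LIMITS = {"T1": (0.0, 600.0), "P1": (0.0, 30.0)}  # engineering-unit limits

        def read_raw_channels():
            """Placeholder for the analog, frequency and discrete front ends."""
            return {"T1": 5230, "P1": 812}

        while True:
            raw = read_raw_channels()
            eng = {ch: gain * raw[ch] + offset for ch, (gain, offset) in CAL.items()}  # counts -> units
            eng["flow_ratio"] = eng["P1"] / max(eng["T1"], 1e-6)       # a test-dependent calculation
            alarms = [ch for ch, (lo, hi) in LIMITS.items() if not lo <= eng[ch] <= hi]
            print(eng, "ALARM:" if alarms else "", alarms or "")
            time.sleep(1.0)                                            # one-second display cycle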

  14. Distributed Design and Analysis of Computer Experiments

    SciTech Connect

    Doak, Justin

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria. Or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation of an
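
    DDACE itself is a C++ library; as a rough Python analogue of the sampling step just described, the snippet below draws a Latin hypercube sample over a temperature variable and two material variables and scales it to their suspected ranges, producing input points that would be fed to an application code. The variable names and ranges are illustrative.

        from scipy.stats import qmc

        # Suspected ranges for one temperature and two material variables (illustrative values)
        lower = [300.0, 0.1, 1.0e9]     # temperature [K], porosity [-], modulus [Pa]
        upper = [900.0, 0.4, 5.0e9]

        sampler = qmc.LatinHypercube(d=3, seed=42)   # one dimension per uncertain input
        unit_sample = sampler.random(n=16)           # 16 design points in [0, 1)^3
        design = qmc.scale(unit_sample, lower, upper)

        for point in design:
            print(point)   # each row is one input vector for a simulation run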

  15. Microscale air quality impacts of distributed power generation facilities.

    PubMed

    Olaguer, Eduardo P; Knipping, Eladio; Shaw, Stephanie; Ravindran, Satish

    2016-08-01

    The electric system is experiencing rapid growth in the adoption of a mix of distributed renewable and fossil fuel sources, along with increasing amounts of off-grid generation. New operational regimes may have unforeseen consequences for air quality. A three-dimensional microscale chemical transport model (CTM) driven by an urban wind model was used to assess gaseous air pollutant and particulate matter (PM) impacts within ~10 km of fossil-fueled distributed power generation (DG) facilities during the early afternoon of a typical summer day in Houston, TX. Three types of DG scenarios were considered in the presence of motor vehicle emissions and a realistic urban canopy: (1) a 25-MW natural gas turbine operating at steady state in either simple cycle or combined heating and power (CHP) mode; (2) a 25-MW simple cycle gas turbine undergoing a cold startup with either moderate or enhanced formaldehyde emissions; and (3) a data center generating 10 MW of emergency power with either diesel or natural gas-fired backup generators (BUGs) without pollution controls. Simulations of criteria pollutants (NO2, CO, O3, PM) and the toxic pollutant, formaldehyde (HCHO), were conducted assuming a 2-hr operational time period. In all cases, NOx titration dominated ozone production near the source. The turbine scenarios did not result in ambient concentration enhancements significantly exceeding 1 ppbv for gaseous pollutants or over 1 µg/m3 for PM after 2 hr of emission, assuming realistic plume rise. In the case of the data center with diesel BUGs, ambient NO2 concentrations were enhanced by 10-50 ppbv within 2 km downwind of the source, while maximum PM impacts in the immediate vicinity of the data center were less than 5 µg/m3. Plausible scenarios of distributed fossil generation consistent with the electricity grid's transformation to a more flexible and modernized system suggest that a substantial amount of deployment would be required to significantly affect air quality on

  16. An Evaluation of Spatial Distribution of Public Parking Facilities in Huizhou Downtown

    NASA Astrophysics Data System (ADS)

    Chen, Jiasheng; Bai, Yang; Chen, Ying

    2016-11-01

    A survey and evaluation of existing public parking facilities were carried out, which has important practical significance for resolving conflicts between the demand for and supply of parking facilities. Taking Huizhou downtown as the study area, we surveyed parking facilities mainly by daily observation and recording. Parking facility supply, characteristics, and demand were analysed by calculating parking utilization and turnover rates. Based on GIS, the distance-based and time-based accessibility of parking facilities was analysed to evaluate their spatial distribution. The results indicated large spatial differences in the supply and characteristics of public parking facilities in Huizhou downtown, together with a large parking demand. Furthermore, the spatial distribution of parking facilities in the Huizhou downtown area was imbalanced. Our study suggested that parking supply and demand were imbalanced and poorly matched, that the planning of parking facilities was inadequate and that the management system was incomplete.

  17. Improvement of the Computing - Related Procurement Process at a Government Research Facility

    SciTech Connect

    Gittins, C.

    2000-04-03

    The purpose of the project was to develop, implement, and market value-added services through the Computing Resource Center in an effort to streamline computing-related procurement processes across the Lawrence Livermore National Laboratory (LLNL). The power of the project lay in focusing attention on, and demonstrating the value of, centralizing the delivery of computer-related products and services to the institution. The project required a plan and marketing strategy that would drive attention to the facility's value-added offerings and services. A significant outcome of the project has been the change in the CRC internal organization. The realignment of internal policies and practices, together with additions to its product and service offerings, has brought an increased focus to the facility. This movement from a small, fractious organization into one that is still small yet well organized and focused on its mission and goals has been a significant transition. Indicative of this turnaround was the sharing of information. One-on-one and small group meetings, together with statistics showing work activity, were invaluable in gaining support for more equitable workload distribution, and the removal of blame and finger pointing. Sharing monthly reports on sales and operating costs also had a positive impact.

  18. The ICAAP Project, Part Three: OSF Distributed Computing Environment.

    ERIC Educational Resources Information Center

    Cantor, Scott

    1997-01-01

    DCE (Distributed Computing Environment) is a collection of services, tools, and libraries for building the infrastructure necessary for distributed computing within an enterprise. This article discusses the Open Software Foundation (OSF); the components of DCE, including the Directory and Security Services, the Distributed Time Service, and the…

  19. UTILIZATION OF COMPUTER FACILITIES IN THE MATHEMATICS AND BUSINESS CURRICULUM IN A LARGE SUBURBAN HIGH SCHOOL.

    ERIC Educational Resources Information Center

    RENO, MARTIN; AND OTHERS

    A study was undertaken to explore in a qualitative way the possible utilization of computer and data processing methods in high school education. Objectives were--(1) to establish a working relationship with a computer facility so that able students and their teachers would have access to the facilities, (2) to develop a unit for the utilization…

  20. Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning

    ERIC Educational Resources Information Center

    Siu, Kin Wai Michael; Lam, Mei Seung

    2012-01-01

    Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…

  2. Facility optimization to improve activation rate distributions during IVNAA.

    PubMed

    Ebrahimi Khankook, Atiyeh; Rafat Motavalli, Laleh; Miri Hakimabad, Hashem

    2013-05-01

    Currently, determination of body composition is the most useful method for distinguishing between certain diseases. The prompt-gamma in vivo neutron activation analysis (IVNAA) facility for non-destructive elemental analysis of the human body is the gold standard method for this type of analysis. In order to obtain accurate measurements using the IVNAA system, the activation probability in the body must be uniform. This can be difficult to achieve, as body shape and body composition affect the rate of activation. The aim of this study was to determine the optimum pre-moderator, in terms of material for attaining uniform activation probability with a CV value of about 10% and changing the collimator role to increase activation rate within the body. Such uniformity was obtained with a high thickness of paraffin pre-moderator, however, because of increasing secondary photon flux received by the detectors it was not an appropriate choice. Our final calculations indicated that using two paraffin slabs with a thickness of 3 cm as a pre-moderator, in the presence of 2 cm Bi on the collimator, achieves a satisfactory distribution of activation rate in the body.

  3. Facility optimization to improve activation rate distributions during IVNAA

    PubMed Central

    Ebrahimi Khankook, Atiyeh; Rafat Motavalli, Laleh; Miri Hakimabad, Hashem

    2013-01-01

    Currently, determination of body composition is the most useful method for distinguishing between certain diseases. The prompt-gamma in vivo neutron activation analysis (IVNAA) facility for non-destructive elemental analysis of the human body is the gold standard method for this type of analysis. In order to obtain accurate measurements using the IVNAA system, the activation probability in the body must be uniform. This can be difficult to achieve, as body shape and body composition affect the rate of activation. The aim of this study was to determine the optimum pre-moderator material for attaining uniform activation probability (a CV value of about 10%) and to change the collimator role so as to increase the activation rate within the body. Such uniformity was obtained with a thick paraffin pre-moderator; however, because of the increased secondary photon flux received by the detectors, it was not an appropriate choice. Our final calculations indicated that using two paraffin slabs with a thickness of 3 cm as a pre-moderator, in the presence of 2 cm Bi on the collimator, achieves a satisfactory distribution of activation rate in the body. PMID:23386375

  4. Concept for a distributed processor computer

    NASA Technical Reports Server (NTRS)

    Bogue, P. N.; Burnett, G. J.; Koczela, L. J.

    1970-01-01

    Future generation computer utilizes cell of single metal oxide semiconductor wafer containing general purpose processor section and small memory of approximately 512 words of 16 bits each. Cells are organized into groups and groups interconnected to form computer.

  5. Hybrid computer technique yields random signal probability distributions

    NASA Technical Reports Server (NTRS)

    Cameron, W. D.

    1965-01-01

    Hybrid computer determines the probability distributions of instantaneous and peak amplitudes of random signals. This combined digital and analog computer system reduces the errors and delays of manual data analysis.

  6. Development of Distributed Computing Systems Software Design Methodologies.

    DTIC Science & Technology

    1982-11-05

    Final report on the development of distributed computing systems software design methodologies, Department of Electrical Engineering, Northwestern University, Evanston, IL; author Stephen S. Yau. Only garbled cover-page fragments of the original record are legible.

  7. Distributing an executable job load file to compute nodes in a parallel computer

    DOEpatents

    Gooding, Thomas M.

    2016-09-13

    Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.
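
    The patent abstract above is essentially an algorithm description. As a rough illustration only, the following Python sketch (all names such as Node, build_class_route, and broadcast_load_file are hypothetical, not taken from the patent) builds a class route by having each node report upward whether it or any descendant participates in the job, then "broadcasts" the load file along that route.

      # Hypothetical sketch of the class-route idea described above: a node is on
      # the route if it participates in the job or if any of its descendants does.
      from dataclasses import dataclass, field
      from typing import List


      @dataclass
      class Node:
          name: str
          participates: bool = False
          children: List["Node"] = field(default_factory=list)


      def build_class_route(node: Node, route: List[str]) -> bool:
          """Return True if this subtree contains a participating node.

          Nodes whose subtree participates are appended to `route`; this models
          each node reporting its uplink to its parent.
          """
          subtree_participates = node.participates
          for child in node.children:
              if build_class_route(child, route):
                  subtree_participates = True
          if subtree_participates:
              route.append(node.name)
          return subtree_participates


      def broadcast_load_file(route: List[str], load_file: bytes) -> None:
          # A real machine would push the executable image down the tree;
          # here we simply record which nodes would receive it.
          for name in route:
              print(f"sending {len(load_file)} bytes to {name}")


      if __name__ == "__main__":
          leaf_a = Node("leaf-a", participates=True)
          leaf_b = Node("leaf-b")
          root = Node("root", children=[Node("mid", children=[leaf_a, leaf_b])])
          route: List[str] = []
          build_class_route(root, route)
          broadcast_load_file(route, b"\x7fELF...")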

  8. Distributing an executable job load file to compute nodes in a parallel computer

    DOEpatents

    Gooding, Thomas M.

    2016-08-09

    Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.

  9. Distributed computing environments for future space control systems

    NASA Technical Reports Server (NTRS)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.

  10. 32 CFR 766.8 - Procedure for review, approval, execution and distribution of aviation facility licenses.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... distribution of aviation facility licenses. 766.8 Section 766.8 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY MISCELLANEOUS RULES USE OF DEPARTMENT OF THE NAVY AVIATION FACILITIES BY CIVIL AIRCRAFT § 766.8 Procedure for review, approval, execution and distribution of aviation...

  11. 32 CFR 766.8 - Procedure for review, approval, execution and distribution of aviation facility licenses.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... distribution of aviation facility licenses. 766.8 Section 766.8 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY MISCELLANEOUS RULES USE OF DEPARTMENT OF THE NAVY AVIATION FACILITIES BY CIVIL AIRCRAFT § 766.8 Procedure for review, approval, execution and distribution of aviation...

  12. 32 CFR 766.8 - Procedure for review, approval, execution and distribution of aviation facility licenses.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... distribution of aviation facility licenses. 766.8 Section 766.8 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY MISCELLANEOUS RULES USE OF DEPARTMENT OF THE NAVY AVIATION FACILITIES BY CIVIL AIRCRAFT § 766.8 Procedure for review, approval, execution and distribution of aviation...

  13. 32 CFR 766.8 - Procedure for review, approval, execution and distribution of aviation facility licenses.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... distribution of aviation facility licenses. 766.8 Section 766.8 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY MISCELLANEOUS RULES USE OF DEPARTMENT OF THE NAVY AVIATION FACILITIES BY CIVIL AIRCRAFT § 766.8 Procedure for review, approval, execution and distribution of aviation...

  14. 32 CFR 766.8 - Procedure for review, approval, execution and distribution of aviation facility licenses.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... distribution of aviation facility licenses. 766.8 Section 766.8 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY MISCELLANEOUS RULES USE OF DEPARTMENT OF THE NAVY AVIATION FACILITIES BY CIVIL AIRCRAFT § 766.8 Procedure for review, approval, execution and distribution of aviation...

  15. Optimization of an interactive distributive computer network

    NASA Technical Reports Server (NTRS)

    Frederick, V.

    1985-01-01

    The activities under a cooperative agreement for the development of a computer network are briefly summarized. Research activities covered are: computer operating systems optimization and integration; software development and implementation of the IRIS (Infrared Imaging of Shuttle) Experiment; and software design, development, and implementation of the APS (Aerosol Particle System) Experiment.

  16. Equation solvers for distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    A large number of scientific and engineering problems require the rapid solution of large systems of simultaneous equations. The performance of parallel computers in this area now dwarfs traditional vector computers by nearly an order of magnitude. This talk describes the major issues involved in parallel equation solvers with particular emphasis on the Intel Paragon, IBM SP-1 and SP-2 processors.

  17. National facility for advanced computational science: A sustainable path to scientific discovery

    SciTech Connect

    Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

    2004-04-02

    Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

  18. Distributed Computing: Considerations for Its Use within Educational Environments.

    ERIC Educational Resources Information Center

    Pratt, S. J.

    1985-01-01

    Emphasizing more effective use of existing equipment, this article highlights distributed computing design considerations applicable to educational environments; identifies potential roles of networking in the provision of adequate teaching aids; presents a networking model; and describes the development of a distributed computing configuration at…

  19. Distributed Computing Environment: An Architecture For Supporting Change?

    DTIC Science & Technology

    1995-11-01

    Distributed Computing Environment (DCE) has been in development for about five years but has only been widely used in the last two years. It consists...these services form an architecture for distributed computing that enables users to carry out the new, cheaper operations they require with the

  20. Distributed-Computer System Optimizes SRB Joints

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.

    1991-01-01

    Initial calculations of redesign of joint on solid rocket booster (SRB) that failed during Space Shuttle tragedy showed redesign increased weight. Optimization techniques applied to determine whether weight could be reduced while keeping joint closed and limiting stresses. Analysis system developed by use of existing software coupling structural analysis with optimization computations. Software designed executable on network of computer workstations. Took advantage of parallelism offered by finite-difference technique of computing gradients to enable several workstations to contribute simultaneously to solution of problem. Key features, effective use of redundancies in hardware and flexible software, enabling optimization to proceed with minimal delay and decreased overall time to completion.

  1. A Weibull distribution accrual failure detector for cloud computing

    PubMed Central

    Wu, Zhibo; Wu, Jin; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are used to build high availability distributed systems as the fundamental component. To meet the requirement of a complicated large-scale distributed system, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on Weibull Distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing. PMID:28278229

  2. A Weibull distribution accrual failure detector for cloud computing.

    PubMed

    Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are used to build high availability distributed systems as the fundamental component. To meet the requirement of a complicated large-scale distributed system, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on Weibull Distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing.
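
    These two records describe an accrual-style detector built on a Weibull model of heartbeat inter-arrival times. The sketch below is a minimal illustration of that idea, assuming SciPy is available; it is not the authors' implementation, and the function names are invented for the example.

      # Minimal sketch of an accrual failure detector using a Weibull model for
      # heartbeat inter-arrival times. Suspicion grows as the elapsed time since
      # the last heartbeat moves into the tail of the fitted distribution.
      import numpy as np
      from scipy import stats


      def fit_weibull(intervals: np.ndarray):
          """Fit a two-parameter Weibull (location fixed at 0) to the samples."""
          shape, loc, scale = stats.weibull_min.fit(intervals, floc=0.0)
          return shape, scale


      def suspicion(elapsed: float, shape: float, scale: float) -> float:
          """Phi-style suspicion: -log10 of the probability that a heartbeat
          would still be outstanding after `elapsed` seconds."""
          survival = stats.weibull_min.sf(elapsed, shape, loc=0.0, scale=scale)
          return -np.log10(max(survival, 1e-300))


      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          observed = rng.weibull(1.5, size=200) * 0.1   # ~100 ms heartbeats
          shape, scale = fit_weibull(observed)
          for elapsed in (0.05, 0.2, 0.5):
              print(f"{elapsed:.2f}s since last heartbeat -> "
                    f"suspicion {suspicion(elapsed, shape, scale):.2f}")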

  3. Configuring computation tree topologies on a distributed computing system

    SciTech Connect

    Woei Lin; Chuan-lin Wu

    1983-01-01

    The authors describe an approach to connecting hardware resources for high-performance computation. Two basic algorithms are designed for configuring binary tree topologies. The configuring command can be issued from any processing node. The algorithms can select proper nodes for connection while maintaining good utilization of processing nodes. 7 references.

  4. Computation of the sampling distribution of coherence estimate.

    PubMed

    Nadarajah, Saralees; Kotz, Samuel

    2006-12-01

    The recent paper published by Miranda de Sa (2004) derived, for the first time, the sampling distribution of coherence estimated between two signals. The paper also considered computational issues of the sampling distribution, using an approximate method. In this short note, we provided several 1-line programs for the exact computation of various measures of the sampling distribution. The advantages of using these programs are discussed.

  5. An optimization model for energy generation and distribution in a dynamic facility

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.

    1981-01-01

    An analytical model is described using linear programming for the optimum generation and distribution of energy demands among competing energy resources and different economic criteria. The model, which will be used as a general engineering tool in the analysis of the Deep Space Network ground facility, considers several essential decisions for better design and operation. The decisions sought for the particular energy application include: the optimum time to build an assembly of elements, inclusion of a storage medium of some type, and the size or capacity of the elements that will minimize the total life-cycle cost over a given number of years. The model, which is structured in multiple time divisions, employs the decomposition principle for large-size matrices, the branch-and-bound method in mixed-integer programming, and the revised simplex technique for efficient and economic computer use.

  6. Inspection of composites using a computer-based real-time radiographic facility

    NASA Technical Reports Server (NTRS)

    Roberts, E., Jr.; Vary, A.

    1976-01-01

    A radiographic inspection facility was developed at the NASA Lewis Research Center. The facility uses a digital computer to provide enhanced images in near real-time. Some capabilities of the facility are demonstrated in the inspection of a fan frame ring for an experimental aircraft gas turbine. The ring was fabricated from a carbon-fiber-reinforced epoxy composite material. Inspection procedures were evaluated, and comparisons were made with an ultrasonic C-scan and conventional film X-ray.

  7. Data distribution in the NBS Automated Manufacturing Research Facility

    NASA Technical Reports Server (NTRS)

    Mitchell, M. J.; Barkmeyer, E. J.

    1984-01-01

    The Automated Manufacturing Research Facility (AMRF) at the National Bureau of Standards was constructed as a testbed for research in the automation of small batch manufacturing, in particular for systems producing machined parts in lots of 1000 or less. Potential standard interfaces between existing and future components of small batch manufacturing systems are identified, with the aim of delivering proven measurement techniques and standard reference materials for factory floor metrology in an automated environment to industry. Commercially available products are used to construct the facility to expedite transfer of research results into the private sector.

  8. [Use of personal computers in forensic medicine facilities].

    PubMed

    Vorel, F

    1995-08-01

    The authors present a brief account of possibilities to use computers, type PC, in departments of forensic medicine and discuss basic technical and programme equipment. In the author's opinion the basic reason for using computers is to create an extensive database of post-mortem findings which would make it possible to process them on a large scale and use them for research and prevention. Introduction of computers depends on the management of the department and it is necessary to persuade workers-future users of computers-of the advantages associated with their use.

  9. Distributing Computer Resources in Education and Training.

    ERIC Educational Resources Information Center

    Bell, Wynne

    1982-01-01

    The future direction of computers in educational settings is the topic of speculation. It is noted that resources in education are so meagre that only practical ventures can be considered. Suggestions are made for stretching available resources and maximizing the benefits to be gained through the new technology. (MP)

  11. Distributed metadata in a high performance computing environment

    DOEpatents

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
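
    The claim language above is dense; as a loose analogy only (not the patented method), the sketch below hashes a block identifier to one of several burst buffers and looks up the block's metadata in that buffer's local key-value store. All class and function names are hypothetical.

      # Loose analogy to the metadata lookup described above: block identifiers
      # are hashed to one of several burst-buffer nodes, each of which holds a
      # local key-value store for the metadata of the blocks it owns.
      import hashlib
      from typing import Dict, List


      class BurstBuffer:
          def __init__(self, name: str):
              self.name = name
              self.kv_store: Dict[str, dict] = {}   # key -> metadata record

          def put_metadata(self, key: str, meta: dict) -> None:
              self.kv_store[key] = meta

          def get_metadata(self, key: str) -> dict:
              return self.kv_store[key]


      def owner(buffers: List[BurstBuffer], block_id: str) -> BurstBuffer:
          """Pick the burst buffer responsible for a block by hashing its id."""
          digest = hashlib.sha1(block_id.encode()).hexdigest()
          return buffers[int(digest, 16) % len(buffers)]


      if __name__ == "__main__":
          buffers = [BurstBuffer(f"bb{i}") for i in range(4)]
          owner(buffers, "block-42").put_metadata(
              "block-42", {"size": 4096, "offset": 0, "checksum": "deadbeef"})
          target = owner(buffers, "block-42")       # same hash -> same buffer
          print(target.name, target.get_metadata("block-42"))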

  12. Language Facilities for Programming User-Computer Dialogues.

    ERIC Educational Resources Information Center

    Lafuente, J. M.; Gries, D.

    1978-01-01

    Proposes extensions to PASCAL that provide for programing man-computer dialogues. An interactive dialogue application program is viewed as a sequence of frames and separate computational steps. PASCAL extensions allow the description of the items of information in each frame and the inclusion of behavior rules specifying the interactive dialogue.…

  13. Effects of wind-energy facilities on grassland bird distributions

    USGS Publications Warehouse

    Shaffer, Jill A.; Buhl, Deb

    2016-01-01

    The contribution of renewable energy to meet worldwide demand continues to grow. Wind energy is one of the fastest growing renewable sectors, but new wind facilities are often placed in prime wildlife habitat. Long-term studies that incorporate a rigorous statistical design to evaluate the effects of wind facilities on wildlife are rare. We conducted a before-after-control-impact (BACI) assessment to determine if wind facilities placed in native mixed-grass prairies displaced breeding grassland birds. During 2003–2012, we monitored changes in bird density in 3 study areas in North Dakota and South Dakota (U.S.A.). We examined whether displacement or attraction occurred 1 year after construction (immediate effect) and the average displacement or attraction 2–5 years after construction (delayed effect). We tested for these effects overall and within distance bands of 100, 200, 300, and >300 m from turbines. We observed displacement for 7 of 9 species. One species was unaffected by wind facilities and one species exhibited attraction. Displacement and attraction generally occurred within 100 m and often extended up to 300 m. In a few instances, displacement extended beyond 300 m. Displacement and attraction occurred 1 year after construction and persisted at least 5 years. Our research provides a framework for applying a BACI design to displacement studies and highlights the erroneous conclusions that can be made without the benefit of adopting such a design. More broadly, species-specific behaviors can be used to inform management decisions about turbine placement and the potential impact to individual species. Additionally, the avoidance distance metrics we estimated can facilitate future development of models evaluating impacts of wind facilities under differing land-use scenarios.

  14. A distributed computing approach to mission operations support. [for spacecraft

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1975-01-01

    Computing mission operation support includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability, and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.

  15. SETI@home, BOINC, and Volunteer Distributed Computing

    NASA Astrophysics Data System (ADS)

    Korpela, Eric J.

    2012-05-01

    Volunteer computing, also known as public-resource computing, is a form of distributed computing that relies on members of the public donating the processing power, Internet connection, and storage capabilities of their home computers. Projects that utilize this mode of distributed computation can potentially access millions of Internet-attached central processing units (CPUs) that provide PFLOPS (thousands of trillions of floating-point operations per second) of processing power. In addition, these projects can access the talents of the volunteers themselves. Projects span a wide variety of domains including astronomy, biochemistry, climatology, physics, and mathematics. This review provides an introduction to volunteer computing and some of the difficulties involved in its implementation. I describe the dominant infrastructure for volunteer computing in some depth and provide descriptions of a small number of projects as an illustration of the variety of projects that can be undertaken.

  16. Status Of The National Ignition Campaign And National Ignition Facility Integrated Computer Control System

    SciTech Connect

    Lagin, L; Brunton, G; Carey, R; Demaret, R; Fisher, J; Fishler, B; Ludwigsen, P; Marshall, C; Reed, R; Shelton, R; Townsend, S

    2011-03-18

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a stadium-sized facility that contains a 192-beam, 1.8-Megajoule, 500-Terawatt, ultraviolet laser system together with a 10-meter diameter target chamber with room for multiple experimental diagnostics. NIF is the world's largest and most energetic laser experimental system, providing a scientific center to study inertial confinement fusion (ICF) and matter at extreme energy densities and pressures. NIF's laser beams are designed to compress fusion targets to conditions required for thermonuclear burn. NIF is operated by the Integrated Computer Control System (ICCS) in an object-oriented, CORBA-based system distributed among over 1800 frontend processors, embedded controllers and supervisory servers. In the fall of 2010, a set of experiments began with deuterium and tritium filled targets as part of the National Ignition Campaign (NIC). At present, all 192 laser beams routinely fire to target chamber center to conduct fusion and high energy density experiments. During the past year, the control system was expanded to include automation of the cryogenic target system, and over 20 diagnostic systems were deployed and utilized to support fusion experiments. This talk discusses the current status of the NIC and the plan for controls and information systems to support these experiments on the path to ignition.

  17. The Design Methodology of Distributed Computer Systems.

    DTIC Science & Technology

    1980-12-01

    This remedies most of the drawbacks of the centralized approach. However, due to the inherent communication delay, the chosen control node may get an... An alternative approach is the Bayesian approach advocated by Littlewood (LIT 79(B)), in which a prior distribution is postulated for each of 1, 2, ..., j... Chapter 2 describes a top-down development approach; the development process is divided into four successive phases: (1) requirements, and...

  18. Simulation model of load balancing in distributed computing systems

    NASA Astrophysics Data System (ADS)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high-speed data transfer over the network and the widespread use of software for design and pre-production in mechanical engineering have led to the fact that at the present time large industrial enterprises and small engineering companies implement complex computer systems for efficient solution of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key models of research, but the system-wide problems of efficient distribution (balancing) of the computational load and accommodation of input, intermediate and output databases are no less important. The main tasks of this balancing system are load and condition monitoring of the compute nodes, and the selection of a node for the transfer of the user's request in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing optimal schedules in a distributed system whose infrastructure changes dynamically is an important task.
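
    To make the node-selection step described above concrete, here is a small generic sketch of least-loaded dispatch; it is an illustration under assumed names (Dispatcher, dispatch), not the simulation model from the paper.

      # Generic least-loaded dispatch, illustrating the balancing step described
      # above: monitor node load, then send each new request to the lightest node.
      import heapq
      from typing import List, Tuple


      class Dispatcher:
          def __init__(self, nodes: List[str]):
              # heap of (current_load, node_name)
              self.heap: List[Tuple[int, str]] = [(0, n) for n in nodes]
              heapq.heapify(self.heap)

          def dispatch(self, request_cost: int) -> str:
              load, node = heapq.heappop(self.heap)
              heapq.heappush(self.heap, (load + request_cost, node))
              return node


      if __name__ == "__main__":
          d = Dispatcher(["node-1", "node-2", "node-3"])
          for cost in [5, 3, 7, 2, 4]:
              print(f"request(cost={cost}) -> {d.dispatch(cost)}")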

  19. Moments of inclination error distribution computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.

  20. A Computability Theory for Distributed Systems.

    DTIC Science & Technology

    1986-03-13

    Only fragments of the original record are legible. They discuss two elementary properties of [p] (an equivalence relation over system computations; for z a prefix of V, there is an event on p...), note that the two conditions in the last sentence of the theorem are not exclusive, and cite references including Basic Tense Logic (in D. Gabbay and F. Guenthner, eds., Handbook of...), 10th POPL (1983) 141-154, and Manna, Z., Pnueli, A., Verification of Con...

  1. EFFECTS OF MIXING AND AGING ON WATER QUALITY IN DISTRIBUTION SYSTEM STORAGE FACILITIES

    EPA Science Inventory

    Aging of water in distribution system storage facilities can lead to deterioration of the water quality due to loss of disinfectant residual and bacterial regrowth. Facilities should be operated to insure that the age of the water is not excessive taking into account the quality...

  2. A distributed computing model for telemetry data processing

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  3. A distributed computing model for telemetry data processing

    NASA Astrophysics Data System (ADS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  4. 2-out-of-3 selecting facility in a 3-computer system

    SciTech Connect

    Uebel, H.

    1986-10-07

    This patent describes a 2-out-of-3 selecting facility for a 3-computer system in which all computers process the same information in parallel, and in which a result is delivered for further processing only if at least two of the computers have arrived at this result. The facility comprises an output port (A1...A3) in each of the computers and a comparison data input port (E1...E3) in each of the computers, which is connected to the output ports of the other two computers for the transfer of the results produced by those computers. It also comprises comparison circuitry in each of the computers for comparing the result produced by that computer with the results produced by the neighboring computers and providing a corresponding comparison indication to a majority voting circuit (MS), which is connected to receive the comparison indications from all computers. The computers have outlet ports and the facility has two separate data output channels.

  5. Gasoline Distribution Facilities (Bulk Gasoline Terminals and Pipeline Breakout Stations) Air Toxics Rule Fact Sheets

    EPA Pesticide Factsheets

    This page contains a November 1994 fact sheet for the final NESHAP for Gasoline Distribution Facilities. This page also contains a December fact sheet with information regarding the final amendments to the 2003 final rule for the NESHAP.

  6. OSF-distributed computing environment for multimedia telemedicine services in global PACS

    NASA Astrophysics Data System (ADS)

    Martinez, Ralph; Alsafadi, Yasser H.; Kim, Jinman

    1995-05-01

    In this paper, we present our approach to developing a global picture archiving and communication system (PACS) remote consultation and diagnosis (RCD) application using the Open Software Foundation (OSF) Distributed Computing Environment (DCE) services and toolkits. The current RCD system uses programming services similar to those offered by OSF DCE: the Cell Directory Service, the Distributed Time Service, the Security Service, the RPC Facility, and the Threads Facility. In this research we have formally applied OSF DCE services to the Global PACS RCD software. The use of OSF DCE services for Global PACS enables us to develop a robust distributed structure and new user services which feature reliability and scalability for Global PACS environments.

  7. Have computers, will travel: providing on-site library instruction in rural health facilities using a portable computer lab.

    PubMed

    Neilson, Christine J

    2010-01-01

    The Saskatchewan Health Information Resources Partnership (SHIRP) provides library instruction to Saskatchewan's health care practitioners and students on placement in health care facilities as part of its mission to provide province-wide access to evidence-based health library resources. A portable computer lab was assembled in 2007 to provide hands-on training in rural health facilities that do not have computer labs of their own. Aside from some minor inconveniences, the introduction and operation of the portable lab has gone smoothly. The lab has been well received by SHIRP patrons and continues to be an essential part of SHIRP outreach.

  8. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  9. Actors: A Model of Concurrent Computation in Distributed Systems.

    DTIC Science & Technology

    1985-06-01

    AD-A157 917. Actors: A Model of Concurrent Computation in Distributed Systems. Gul A. Agha, MIT Artificial Intelligence Laboratory, Technical Report 844. This document has been approved for public release and sale.

  10. Nonlinear Fluid Computations in a Distributed Environment

    NASA Technical Reports Server (NTRS)

    Atwood, Christopher A.; Smith, Merritt H.

    1995-01-01

    The performance of a loosely- and a tightly-coupled workstation cluster is compared against a conventional vector supercomputer for the solution of the Reynolds-averaged Navier-Stokes equations. The application geometries include a transonic airfoil, a tiltrotor wing/fuselage, and a wing/body/empennage/nacelle transport. Decomposition is of the manager-worker type, with solution of one grid zone per worker process coupled using the PVM message passing library. Task allocation is determined by grid size and processor speed, subject to available memory penalties. Each fluid zone is computed using an implicit diagonal scheme in an overset mesh framework, while relative body motion is accomplished using an additional worker process to re-establish grid communication.
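
    The allocation rule mentioned above (task size matched to processor speed, subject to memory limits) can be sketched with a simple greedy heuristic; the code below is an illustration under assumed data structures, not the scheme used in the paper.

      # Greedy sketch of assigning grid zones to workers in proportion to worker
      # speed, skipping workers whose remaining memory cannot hold the zone.
      from typing import Dict, List, Tuple


      def assign_zones(zones: List[Tuple[str, int]],
                       workers: Dict[str, Dict[str, float]]) -> Dict[str, List[str]]:
          """zones: (name, size); workers: name -> {'speed': ..., 'memory': ...}."""
          assignment = {w: [] for w in workers}
          est_time = {w: 0.0 for w in workers}          # estimated finish time
          mem_left = {w: p["memory"] for w, p in workers.items()}

          for name, size in sorted(zones, key=lambda z: -z[1]):   # largest first
              candidates = [w for w in workers if mem_left[w] >= size]
              best = min(candidates,
                         key=lambda w: est_time[w] + size / workers[w]["speed"])
              assignment[best].append(name)
              est_time[best] += size / workers[best]["speed"]
              mem_left[best] -= size
          return assignment


      if __name__ == "__main__":
          zones = [("wing", 80), ("fuselage", 120), ("nacelle", 40), ("tail", 30)]
          workers = {"ws1": {"speed": 1.0, "memory": 150},
                     "ws2": {"speed": 2.0, "memory": 200}}
          print(assign_zones(zones, workers))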

  11. Distributed sensor networks with collective computation

    SciTech Connect

    Lanman, D. R.

    2001-01-01

    Simulations of a network of N sensors have been performed. The simulation space contains a number of sound sources and a large number of sensors. Each sensor is equipped with an omni-directional microphone and is capable of measuring only the time of arrival of a signal. Sensors are able to wirelessly transmit and receive packets of information, and have some computing power. The sensors were programmed to merge all information (received packets as well as local measurements) into a 'world view' for that node. This world view is then transmitted. In this way, information can slowly diffuse across the network. One node was monitored in the network as a proxy for when information had diffused across the network. Simulations demonstrated that the energy expended per sensor per time step was approximately independent of N.
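
    A toy simulation of the "world view" diffusion described above is sketched below; it is a generic gossip model under simplifying assumptions (random pairwise exchanges, lossless links), not the authors' simulator.

      # Toy gossip simulation of the "world view" merging described above: each
      # node holds a set of known measurements and unions it with a neighbour's
      # set each round, so information diffuses across the network.
      import random


      def gossip_rounds(n_nodes: int = 20, seed: int = 1) -> int:
          random.seed(seed)
          # node i initially knows only its own measurement
          views = [{i} for i in range(n_nodes)]
          full = set(range(n_nodes))
          rounds = 0
          while views[0] != full:                # monitor node 0 as a proxy
              rounds += 1
              for i in range(n_nodes):
                  j = random.randrange(n_nodes)  # random exchange partner
                  merged = views[i] | views[j]
                  views[i] = views[j] = merged
          return rounds


      if __name__ == "__main__":
          print("rounds until node 0 has the full world view:", gossip_rounds())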

  12. Clock distribution system for digital computers

    DOEpatents

    Wyman, Robert H.; Loomis, Jr., Herschel H.

    1981-01-01

    Apparatus for eliminating, in each clock distribution amplifier of a clock distribution system, sequential pulse catch-up error due to one pulse "overtaking" a prior clock pulse. The apparatus includes timing means to produce a periodic electromagnetic signal with a fundamental frequency having a fundamental frequency component V'_01(t); an array of N signal characteristic detector means, with detector means No. 1 receiving the timing means signal and producing a change-of-state signal V_1(t) in response to receipt of a signal above a predetermined threshold; N substantially identical filter means, one filter means being operatively associated with each detector means, for receiving the change-of-state signal V_n(t) and producing a modified change-of-state signal V'_n(t) (n = 1, ..., N) having a fundamental frequency component that is substantially proportional to V'_01(t - theta_n(t)), with a cumulative phase shift theta_n(t) having a time derivative that may be made uniformly and arbitrarily small; and with the detector means n+1 (1 <= n

  13. Operational facility-integrated computer system for safeguards

    SciTech Connect

    Armento, W.J.; Brooksbank, R.E.; Krichinsky, A.M.

    1980-01-01

    A computer system for safeguards in an active, remotely operated, nuclear fuel processing pilot plant has been developed. This system maintains (1) comprehensive records of special nuclear materials, (2) automatically updated book inventory files, (3) material transfer catalogs, (4) timely inventory estimations, (5) sample transactions, (6) automatic, on-line volume balances and alarming, and (7) terminal access and applications software monitoring and logging. Future development will include near-real-time SNM mass balancing as both a static, in-tank summation and a dynamic, in-line determination. It is planned to incorporate aspects of site security and physical protection into the computer monitoring.

  14. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them to run large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computer resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational units. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.
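
    As a generic illustration of the queue management mentioned above (the actual platform is browser/JavaScript based), the sketch below uses a relational table as a work queue from which volunteer nodes claim tasks; the table and function names are hypothetical.

      # Minimal relational work queue, illustrating how volunteer nodes might
      # claim simulation tasks; a generic sketch, not the platform in the record.
      import sqlite3


      def setup(conn: sqlite3.Connection) -> None:
          conn.execute("""CREATE TABLE IF NOT EXISTS tasks (
                              id INTEGER PRIMARY KEY,
                              payload TEXT NOT NULL,
                              status TEXT NOT NULL DEFAULT 'pending',
                              worker TEXT)""")
          conn.executemany("INSERT INTO tasks (payload) VALUES (?)",
                           [(f"subcatchment-{i}",) for i in range(5)])
          conn.commit()


      def claim_task(conn: sqlite3.Connection, worker: str):
          """Claim one pending task for this worker inside a transaction."""
          with conn:
              row = conn.execute(
                  "SELECT id, payload FROM tasks WHERE status='pending' LIMIT 1"
              ).fetchone()
              if row is None:
                  return None
              conn.execute("UPDATE tasks SET status='running', worker=? WHERE id=?",
                           (worker, row[0]))
          return row


      if __name__ == "__main__":
          conn = sqlite3.connect(":memory:")
          setup(conn)
          print(claim_task(conn, "browser-volunteer-1"))
          print(claim_task(conn, "browser-volunteer-2"))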

  15. Computer Power: Part 1: Distribution of Power (and Communications).

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1988-01-01

    Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)

  16. Developing a Distributed Computing Architecture at Arizona State University.

    ERIC Educational Resources Information Center

    Armann, Neil; And Others

    1994-01-01

    Development of Arizona State University's computing architecture, designed to ensure that all new distributed computing pieces will work together, is described. Aspects discussed include the business rationale, the general architectural approach, characteristics and objectives of the architecture, specific services, and impact on the university…

  17. The Penalty of Context-Switch Time in Distributed Computing

    DTIC Science & Technology

    1988-05-13

    Context-switch time is a significant cost in distributed computing, affecting throughput and response time. We report statistics gathered for a large network of Sun 2's, Sun 3's and DEC VAX computers.

  19. Beta distributions: A computer program for probabilities and fractile points

    NASA Technical Reports Server (NTRS)

    Brownlow, J. D.; Swaroop, R.

    1979-01-01

    A beta distribution is specified by range parameters a < b and two shape parameters alpha > 0 and beta > 0. The computer program presented calculates any desired probability and/or fractile point for specified values of a, b, alpha, and beta. This program additionally computes gamma function values for integer and noninteger arguments.
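
    For readers who want the same quantities today, a short SciPy-based equivalent (not the original FORTRAN program) is sketched below; the four-parameter beta on [a, b] is obtained from the standard beta via loc and scale.

      # Probabilities and fractile points of a beta distribution on [a, b],
      # analogous to what the FORTRAN program described above computes.
      from math import gamma
      from scipy.stats import beta


      a, b = 2.0, 10.0          # range parameters (a < b)
      alpha, bta = 2.5, 4.0     # shape parameters (> 0)

      dist = beta(alpha, bta, loc=a, scale=b - a)

      print("P(X <= 5)       =", dist.cdf(5.0))       # probability
      print("95th percentile =", dist.ppf(0.95))      # fractile point
      print("Gamma(3.7)      =", gamma(3.7))          # gamma function value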

  1. Pladipus Enables Universal Distributed Computing in Proteomics Bioinformatics.

    PubMed

    Verheggen, Kenneth; Maddelein, Davy; Hulstaert, Niels; Martens, Lennart; Barsnes, Harald; Vaudel, Marc

    2016-03-04

    The use of proteomics bioinformatics substantially contributes to an improved understanding of proteomes, but this novel and in-depth knowledge comes at the cost of increased computational complexity. Parallelization across multiple computers, a strategy termed distributed computing, can be used to handle this increased complexity; however, setting up and maintaining a distributed computing infrastructure requires resources and skills that are not readily available to most research groups. Here we propose a free and open-source framework named Pladipus that greatly facilitates the establishment of distributed computing networks for proteomics bioinformatics tools. Pladipus is straightforward to install and operate thanks to its user-friendly graphical interface, allowing complex bioinformatics tasks to be run easily on a network instead of a single computer. As a result, any researcher can benefit from the increased computational efficiency provided by distributed computing, hence empowering them to tackle more complex bioinformatics challenges. Notably, it enables any research group to perform large-scale reprocessing of publicly available proteomics data, thus supporting the scientific community in mining these data for novel discoveries.

  2. Toward Distributed Service Discovery in Pervasive Computing Environments

    DTIC Science & Technology

    2006-02-01

    Only page-footer and reference fragments of the record are legible; they identify the publication venue as IEEE Transactions on Mobile Computing, vol. 5, no. 2, February 2006 (Chakraborty et al.).

  3. Survey of Computer Facilities in Minnesota and North Dakota.

    ERIC Educational Resources Information Center

    MacGregor, Donald

    In order to attain a better understanding of the data processing manpower needs of business and industry, a survey instrument was designed and mailed to 570 known and possible computer installations in the Minnesota/North Dakota area. The survey was conducted during the spring of 1975, and concentrated on the kinds of equipment and computer…

  4. NNS computing facility manual P-17 Neutron and Nuclear Science

    SciTech Connect

    Hoeberling, M.; Nelson, R.O.

    1993-11-01

    This document describes basic policies and provides information and examples on using the computing resources provided by P-17, the Neutron and Nuclear Science (NNS) group. Information on user accounts, getting help, network access, electronic mail, disk drives, tape drives, printers, batch processing software, XSYS hints, PC networking hints, and Mac networking hints is given.

  5. Computer Programs for Predicting Private Development of Student Housing Facilities.

    ERIC Educational Resources Information Center

    Graaskamp, James A.

    Low-cost as well as timely statistics are required for university policy planning decisions regarding student housing. Since a data bank already existed at the University of Wisconsin, a study of student housing needs could readily be undertaken by means of a computer. The study defines the status of the existing supply and demand in student…

  6. Influence of computational fluid dynamics on experimental aerospace facilities: A fifteen year projection

    NASA Technical Reports Server (NTRS)

    1983-01-01

    An assessment was made of the impact of developments in computational fluid dynamics (CFD) on the traditional role of aerospace ground test facilities over the next fifteen years. With the improvements in CFD and the more powerful scientific computers projected over this period, it is expected that the flow over a complete aircraft will be computable at a unit cost three orders of magnitude lower than is presently possible. Over the same period, improvements in ground test facilities will progress through the application of computational techniques, including CFD, to data acquisition, facility operational efficiency, and simulation of the flight envelope; however, no dramatic change in unit cost is expected, as greater efficiency will be countered by higher energy and labor costs.

  7. Computation, measurement and mitigation of neutral-to-earth potentials on electrical distribution systems

    SciTech Connect

    Dick, W.K.; Winter, D.F.

    1987-04-01

    This paper presents computer-generated profiles of primary-neutral-to-earth potentials of electrical distribution systems which incorporate a variety of techniques used to mitigate neutral-to-earth potential ("stray voltage") at dairy farm facilities. Techniques available to the power supplier and power user include an Electronic Grounding System which provides voltage reduction factors of as much as 200 to 1. A new method of measuring these voltages using a computer data acquisition system which monitors every cycle of the power-frequency voltages on eight totally independent channels for extended periods is described.

  8. Distriblets: Java-Based Distributed Computing on the Web.

    ERIC Educational Resources Information Center

    Finkel, David; Wills, Craig E.; Brennan, Brian; Brennan, Chris

    1999-01-01

    Describes a system for using the World Wide Web to distribute computational tasks to multiple hosts on the Web that is written in Java programming language. Describes the programs written to carry out the load distribution, the structure of a "distriblet" class, and experiences in using this system. (Author/LRW)

  10. Perspectives on distributed computing : thirty people, four user types, and the distributed computing user experience.

    SciTech Connect

    Childers, L.; Liming, L.; Foster, I.; Mathematics and Computer Science; Univ. of Chicago

    2008-10-15

    This report summarizes the methodology and results of a user perspectives study conducted by the Community Driven Improvement of Globus Software (CDIGS) project. The purpose of the study was to document the work-related goals and challenges facing today's scientific technology users, to record their perspectives on Globus software and the distributed-computing ecosystem, and to provide recommendations to the Globus community based on the observations. Globus is a set of open source software components intended to provide a framework for collaborative computational science activities. Rather than attempting to characterize all users or potential users of Globus software, our strategy has been to speak in detail with a small group of individuals in the scientific community whose work appears to be the kind that could benefit from Globus software, learn as much as possible about their work goals and the challenges they face, and describe what we found. The result is a set of statements about specific individuals' experiences. We do not claim that these are representative of a potential user community, but we do claim to have found commonalities and differences among the interviewees that may be reflected in the user community as a whole. We present these as a series of hypotheses that can be tested by subsequent studies, and we offer recommendations to Globus developers based on the assumption that these hypotheses are representative. Specifically, we conducted interviews with thirty technology users in the scientific community. We included both people who have used Globus software and those who have not. We made a point of including individuals who represent a variety of roles in scientific projects, for example, scientists, software developers, engineers, and infrastructure providers. The following material is included in this report: (1) A summary of the reported work-related goals, significant issues, and points of satisfaction with the use of Globus software; (2

  11. Arcade: A Web-Java Based Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.

  12. One approach for evaluating the Distributed Computing Design System (DCDS)

    NASA Technical Reports Server (NTRS)

    Ellis, J. T.

    1985-01-01

    The Distributed Computer Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.

  13. Parallel and Distributed Computational Fluid Dynamics: Experimental Results and Challenges

    NASA Technical Reports Server (NTRS)

    Djomehri, Mohammad Jahed; Biswas, R.; VanderWijngaart, R.; Yarrow, M.

    2000-01-01

    This paper describes several results of parallel and distributed computing using a large-scale production flow solver program. A coarse-grained parallelization based on clustering of discretization grids combined with partitioning of large grids for load balancing is presented. An assessment is given of its performance on distributed and distributed-shared memory platforms using large-scale scientific problems. An experiment with this solver, adapted to a Wide Area Network execution environment, is presented. We also give a comparative performance assessment of computation and communication times on both tightly and loosely coupled machines.

  14. Protocols for configuring computation loops on a distributed multiprocessor system

    SciTech Connect

    Woei Lin; Chuan-lin Wu

    1983-01-01

    Protocols for configuring computation loops in a multiprocessing system are examined. Processing nodes are connected by a reconfigurable communication subnet using a multistage interconnection network. Configuration protocols are presented in terms of distributed algorithms such that processing nodes are configured in loop topologies. The configurability of loop topologies is first investigated. It is verified that the communication subnet can emulate loop distributed systems. It is also proven that multiple loops of various lengths can be configured in the distributed network. The technique demonstrated for configuring loop topologies can be used to configure other computation topologies. 6 references.

  15. Experiment Dashboard for Monitoring of the LHC Distributed Computing Systems

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Devesas Campos, M.; Tarragon Cros, J.; Gaidioz, B.; Karavakis, E.; Kokoszkiewicz, L.; Lanciotti, E.; Maier, G.; Ollivier, W.; Nowotka, M.; Rocha, R.; Sadykov, T.; Saiz, P.; Sargsyan, L.; Sidorova, I.; Tuckett, D.

    2011-12-01

    LHC experiments are currently taking collision data. A distributed computing model chosen by the four main LHC experiments allows physicists to benefit from resources spread all over the world. The distributed model and the scale of LHC computing activities increase the level of complexity of middleware, and also the chance of failures or inefficiencies in involved components. In order to ensure the required performance and functionality of the LHC computing system, monitoring the status of the distributed sites and services as well as monitoring LHC computing activities are among the key factors. Over the past few years, the Experiment Dashboard team has been working on a number of applications that facilitate the monitoring of different activities, including following up jobs and transfers as well as site and service availabilities. This presentation describes Experiment Dashboard applications used by the LHC experiments and experience gained during the first months of data taking.

  16. Exact Score Distribution Computation for Similarity Searches in Ontologies

    NASA Astrophysics Data System (ADS)

    Schulz, Marcel H.; Köhler, Sebastian; Bauer, Sebastian; Vingron, Martin; Robinson, Peter N.

    Semantic similarity searches in ontologies are an important component of many bioinformatic algorithms, e.g., protein function prediction with the Gene Ontology. In this paper we consider the exact computation of score distributions for similarity searches in ontologies, and introduce a simple null hypothesis which can be used to compute a P-value for the statistical significance of similarity scores. We concentrate on measures based on Resnik’s definition of ontological similarity. A new algorithm is proposed that collapses subgraphs of the ontology graph and thereby allows fast score distribution computation. The new algorithm is several orders of magnitude faster than the naive approach, as we demonstrate by computing score distributions for similarity searches in the Human Phenotype Ontology.
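
    As an illustration of the kind of computation the abstract describes, the following minimal Python sketch enumerates an exact null distribution for a best-match Resnik-style similarity score and derives a P-value from it. The toy ontology, the mica_ic table, and the uniform-annotation null are illustrative assumptions only; this is not the subgraph-collapsing algorithm proposed in the paper.

      from collections import Counter

      # Hypothetical toy data: information content of the most informative
      # common ancestor (MICA) for each unordered pair of terms.
      mica_ic = {
          ("t1", "t1"): 2.0, ("t1", "t2"): 1.0, ("t1", "t3"): 0.0,
          ("t2", "t2"): 1.5, ("t2", "t3"): 0.0, ("t3", "t3"): 0.5,
      }

      def resnik(a, b):
          # Symmetric lookup of the Resnik similarity (IC of the MICA).
          return mica_ic.get((a, b), mica_ic.get((b, a)))

      terms = ["t1", "t2", "t3"]          # all ontology terms
      query = ["t1", "t2"]                # query term set

      def best_match(annotation):
          # Best-match score of one annotated term against the query set.
          return max(resnik(annotation, q) for q in query)

      # Exact null distribution: the annotation is drawn uniformly from all terms.
      null = Counter(best_match(t) for t in terms)
      total = sum(null.values())

      def p_value(observed):
          # P(score >= observed) under the null hypothesis.
          return sum(n for s, n in null.items() if s >= observed) / total

      print(dict(null), p_value(1.5))

    For real ontologies the naive enumeration above becomes infeasible, which is exactly the motivation for the graph-collapsing algorithm described in the abstract.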

  17. Issues and recommendations associated with distributed computation and data management systems for the space sciences

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The primary purpose of the report is to explore management approaches and technology developments for computation and data management systems designed to meet future needs in the space sciences. The report builds on work presented in previous solar-terrestrial and planetary reports, broadening the outlook to all of the space sciences and considering policy aspects related to coordination between data centers, missions, and ongoing research activities, because it is perceived that the rapid growth of data and the wide geographic distribution of relevant facilities will present especially troublesome problems for data archiving, distribution, and analysis.

  18. COMPUTATIONAL SIMULATION OF REFRIGERATION PROCESS FOR BEPC II SUPERCONDUCTING FACILITIES.

    SciTech Connect

    WANG, L.; JIA, L.X.; DU, H.P.; YANG, G.D.

    2003-09-22

    The main challenge in building the cryogenic system for the Beijing Electron-Positron Collider Upgrade is to accommodate the strong differences among three types of superconducting devices with regard to their structure, location, and cryogenic operating requirements. Three kinds of cooling methods are applied in the overall cryogenic system: saturated liquid helium cooling for the SRF cavities, single-phase helium cooling for the SCQ magnets, and two-phase helium cooling for the SSM solenoid. The optimization of the BEPCII cryogenic system was carried out using a large-scale computational simulation package. This paper presents thermal parameters and numerical analyses for the BEPCII cryogenic system.

  19. Online measurement of dose and dose distribution at bremsstrahlung facilities

    NASA Astrophysics Data System (ADS)

    Auslender, V. L.; Bryazgin, A. A.; Bukin, A. D.; Voronin, L. A.; Lukin, A. N.; Sidorov, A. V.

    2004-09-01

    A real-time measurement system for the spatial dose distribution has been developed and implemented for monitoring the bremsstrahlung flux generated on an X-ray target by a 5 MeV, 50 kW electron accelerator. The sensors of the system are semiconductor diodes. The beam target and the electron accelerator (ILU-10) are briefly described. Experience with using the system in experimental and start-up procedures is also included.

  20. Peta-scale QMC simulations on DOE leadership computing facilities

    NASA Astrophysics Data System (ADS)

    Kim, Jeongnim; Ab Initio Network Collaboration

    2014-03-01

    Continuum quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles. Even with numerous innovations in methods, algorithms and codes, QMC simulations of realistic problems of thousands of electrons or more are demanding, requiring millions of core hours to achieve the target chemical accuracy. The multiple forms of parallelism afforded by QMC algorithms and their high compute-to-communication ratio make them ideal candidates for acceleration in the multi/many-core paradigm. We have ported and tuned QMCPACK to recently deployed DOE deca-petaflop systems, Titan (Cray XK7 CPU/GPGPU) and Mira (IBM Blue Gene/Q). The efficiency gains through improved algorithms and architecture-specific tuning and, most importantly, the vast increase in computing power have opened up opportunities to apply QMC at unprecedented scales, accuracy and time-to-solution. We present large-scale QMC simulations to study energetics of layered materials where vdW interactions play critical roles. Collaboration supported through the Predictive Theory and Modeling for Materials and Chemical Science program by the Basic Energy Science, Department of Energy.

  1. Computer software configuration management plan for 200 East/West Liquid Effluent Facilities

    SciTech Connect

    Graf, F.A. Jr.

    1995-02-27

    This computer software configuration management plan covers control of the software for the monitor and control system that operates the Effluent Treatment Facility and its associated truck load-in station, as well as some key aspects of the Liquid Effluent Retention Facility, which stores condensate to be processed. The plan also covers the Treated Effluent Disposal System's pumping stations and the monitoring of waste generator flows in that system and in the Phase Two Effluent Collection System.

  2. 41 CFR 101-26.503 - Multiple award schedule purchases made by GSA supply distribution facilities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 2 2010-07-01 2010-07-01 true Multiple award schedule... Multiple award schedule purchases made by GSA supply distribution facilities. GSA supply distribution... items. Stocking a variety of commercial, high-demand items purchased from FSS multiple award...

  3. Computer usage among nurses in rural health-care facilities in South Africa: obstacles and challenges.

    PubMed

    Asah, Flora

    2013-04-01

    This study discusses factors inhibiting computer usage for work-related tasks among computer-literate professional nurses within rural healthcare facilities in South Africa. In the past two decades computer literacy courses have not been part of the nursing curricula. Computer courses are offered by the State Information Technology Agency. Despite this, there seems to be limited use of computers by professional nurses in the rural context. Focus group interviews were held with 40 professional nurses from three government hospitals in northern KwaZulu-Natal. Contributing factors were found to be lack of information technology infrastructure, restricted access to computers, and deficits in technical and nursing management support. The physical location of computers within the health-care facilities and lack of relevant software emerged as specific obstacles to usage. Provision of continuous and active support from nursing management could positively influence computer usage among professional nurses. A closer integration of information technology and computer literacy skills into existing nursing curricula would foster a positive attitude towards computer usage through early exposure. Responses indicated that a change of mindset may be needed on the part of nursing management so that they begin to actively promote ready access to computers as a means of creating greater professionalism and collegiality. © 2011 Blackwell Publishing Ltd.

  4. Bringing the CMS distributed computing system into scalable operations

    NASA Astrophysics Data System (ADS)

    Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.

    2010-04-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.

  5. Activities and operations of the Advanced Computing Research Facility, July-October 1986

    SciTech Connect

    Pieper, G.W.

    1986-01-01

    Research activities and operations of the Advanced Computing Research Facility (ACRF) at Argonne National Laboratory are discussed for the period from July 1986 through October 1986. The facility is currently supported by the Department of Energy, and is operated by the Mathematics and Computer Science Division at Argonne. Over the past four-month period, a new commercial multiprocessor, the Intel iPSC-VX/d4 hypercube, was installed. In addition, four other commercial multiprocessors continue to be available for research - an Encore Multimax, a Sequent Balance 21000, an Alliant FX/8, and an Intel iPSC/d5 - as well as a locally designed multiprocessor, the Lemur. These machines are being actively used by scientists at Argonne and throughout the nation in a wide variety of projects concerning computer systems with parallel and vector architectures. A variety of classes, workshops, and seminars have been sponsored to train researchers on computing techniques for the advanced computer systems at the Advanced Computing Research Facility. For example, courses were offered on writing programs for parallel computer systems, and the facility hosted the first annual Alliant users group meeting. A Sequent users group meeting and a two-day workshop on performance evaluation of parallel computers and programs are being organized.

  6. Computational Tools and Facilities for the Next-Generation Analysis and Design Environment

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)

    1997-01-01

    This document contains presentations from the joint UVA/NASA Workshop on Computational Tools and Facilities for the Next-Generation Analysis and Design Environment held at the Virginia Consortium of Engineering and Science Universities in Hampton, Virginia on September 17-18, 1996. The presentations focused on the computational tools and facilities for analysis and design of engineering systems, including, real-time simulations, immersive systems, collaborative engineering environment, Web-based tools and interactive media for technical training. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the level of maturity of a number of computational tools and facilities and their potential for application to the next-generation integrated design environment.

  7. Computer control and data acquisition system for the R. F. Test Facility

    SciTech Connect

    Stewart, K.A.; Burris, R.D.; Mankin, J.B.; Thompson, D.H.

    1986-01-01

    The Radio Frequency Test Facility (RFTF) at Oak Ridge National Laboratory, used to test and evaluate high-power ion cyclotron resonance heating (ICRH) systems and components, is monitored and controlled by a multicomponent computer system. This data acquisition and control system consists of three major hardware elements: (1) an Allen-Bradley PLC-3 programmable controller; (2) a VAX 11/780 computer; and (3) a CAMAC serial highway interface. Operating in LOCAL as well as REMOTE mode, the programmable logic controller (PLC) performs all the control functions of the test facility. The VAX computer acts as the operator's interface to the test facility by providing color mimic panel displays and allowing input via a trackball device. The VAX also provides archiving of trend data acquired by the PLC. Communications between the PLC and the VAX are via the CAMAC serial highway. Details of the hardware, software, and the operation of the system are presented in this paper.

  8. Status of the National Ignition Facility Integrated Computer Control System (ICCS) on the Path to Ignition

    SciTech Connect

    Lagin, L J; Bettenhauasen, R C; Bowers, G A; Carey, R W; Edwards, O D; Estes, C M; Demaret, R D; Ferguson, S W; . Fisher, J M; Ho, J C; Ludwigsen, A P; Mathisen, D G; Marshall, C D; Matone, J M; McGuigan, D L; Sanchez, R J; Shelton, R T; Stout, E A; Tekle, E; Townsend, S L; Van Arsdall, P J; Wilson, E F

    2007-09-11

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a stadium-sized facility under construction that will contain a 192-beam, 1.8-Megajoule, 500-Terawatt, ultraviolet laser system together with a 10-meter diameter target chamber with room for multiple experimental diagnostics. NIF is the world's largest and most energetic laser experimental system, providing a scientific center to study inertial confinement fusion (ICF) and matter at extreme energy densities and pressures. NIF's laser beams are designed to compress fusion targets to conditions required for thermonuclear burn, liberating more energy than required to initiate the fusion reactions. NIF is comprised of 24 independent bundles of 8 beams each using laser hardware that is modularized into more than 6,000 line replaceable units such as optical assemblies, laser amplifiers, and multifunction sensor packages containing 60,000 control and diagnostic points. NIF is operated by the large-scale Integrated Computer Control System (ICCS) in an architecture partitioned by bundle and distributed among over 800 front-end processors and 50 supervisory servers. NIF's automated control subsystems are built from a common object-oriented software framework based on CORBA distribution that deploys the software across the computer network and achieves interoperation between different languages and target architectures. A shot automation framework has been deployed during the past year to orchestrate and automate shots performed at the NIF using the ICCS. In December 2006, a full cluster of 48 beams of NIF was fired simultaneously, demonstrating that the independent bundle control system will scale to full scale of 192 beams. At present, 72 beams have been commissioned and have demonstrated 1.4-Megajoule capability of infrared light. During the next two years, the control system will be expanded to include automation of target area systems including final optics, target positioners and

  9. Computer/information security design approaches for Complex 21/Reconfiguration facilities

    SciTech Connect

    Hunteman, W.J.; Zack, N.R.; Jaeger, C.D.

    1993-08-01

    Los Alamos National Laboratory and Sandia National Laboratories have been designated the technical lead laboratories to develop the design of the computer/information security, safeguards, and physical security systems for all of the DOE Complex 21/Reconfiguration facilities. All of the automated information processing systems and networks in these facilities will be required to implement the new DOE orders on computer and information security. The planned approach for a highly integrated information processing capability in each of the facilities will require careful consideration of the requirements in DOE Orders 5639.6 and 1360.2A. The various information protection requirements and user clearances within the facilities will also have a significant effect on the design of the systems and networks. Fulfilling the requirements for proper protection of the information and compliance with DOE orders will be possible because the computer and information security concerns are being incorporated in the early design activities. This paper discusses the computer and information security issues addressed in the integrated design effort for the uranium/lithium, plutonium, and plutonium high explosive/assembly facilities.

  10. Computational Analyses in Support of Sub-scale Diffuser Testing for the A-3 Facility. Part 1; Steady Predictions

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel C.; Graham, Jason S.; Ahuja, Vineet; Hosangadi, Ashvin

    2008-01-01

    levels in CFD based flowpath modeling of the facility. The analyses tools used here expand on the multi-element unstructured CFD which has been tailored and validated for impingement dynamics of dry plumes, complex valve/feed systems, and high pressure propellant delivery systems used in engine and component test stands at NASA SSC. The analyses performed in the evaluation of the sub-scale diffuser facility explored several important factors that influence modeling and understanding of facility operation such as (a) importance of modeling the facility with Real Gas approximation, (b) approximating the cluster of steam ejector nozzles as a single annular nozzle, (c) existence of mixed subsonic/supersonic flow downstream of the turning duct, and (d) inadequacy of two-equation turbulence models in predicting the correct pressurization in the turning duct and expansion of the second stage steam ejectors. The procedure used for modeling the facility was as follows: (i) The engine, test cell and first stage ejectors were simulated with an axisymmetric approximation (ii) the turning duct, second stage ejectors and the piping downstream of the second stage ejectors were analyzed with a three-dimensional simulation utilizing a half-plane symmetry approximation. The solution i.e. primitive variables such as pressure, velocity components, temperature and turbulence quantities were passed from the first computational domain and specified as a supersonic boundary condition for the second simulation. (iii) The third domain comprised of the exit diffuser and the region in the vicinity of the facility (primary included to get the correct shock structure at the exit of the facility and entrainment characteristics). The first set of simulations comprising the engine, test cell and first stage ejectors was carried out both as a turbulent real gas calculation as well as a turbulent perfect gas calculation. A comparison for the two cases (Real Turbulent and Perfect gas turbulent) of the Ma

  11. Computational Analyses in Support of Sub-scale Diffuser Testing for the A-3 Facility. Part 1; Steady Predictions

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel C.; Graham, Jason S.; Ahuja, Vineet; Hosangadi, Ashvin

    2010-01-01

    levels in CFD based flowpath modeling of the facility. The analyses tools used here expand on the multi-element unstructured CFD which has been tailored and validated for impingement dynamics of dry plumes, complex valve/feed systems, and high pressure propellant delivery systems used in engine and component test stands at NASA SSC. The analyses performed in the evaluation of the sub-scale diffuser facility explored several important factors that influence modeling and understanding of facility operation such as (a) importance of modeling the facility with Real Gas approximation, (b) approximating the cluster of steam ejector nozzles as a single annular nozzle, (c) existence of mixed subsonic/supersonic flow downstream of the turning duct, and (d) inadequacy of two-equation turbulence models in predicting the correct pressurization in the turning duct and expansion of the second stage steam ejectors. The procedure used for modeling the facility was as follows: (i) The engine, test cell and first stage ejectors were simulated with an axisymmetric approximation (ii) the turning duct, second stage ejectors and the piping downstream of the second stage ejectors were analyzed with a three-dimensional simulation utilizing a half-plane symmetry approximation. The solution i.e. primitive variables such as pressure, velocity components, temperature and turbulence quantities were passed from the first computational domain and specified as a supersonic boundary condition for the second simulation. (iii) The third domain comprised of the exit diffuser and the region in the vicinity of the facility (primary included to get the correct shock structure at the exit of the facility and entrainment characteristics). The first set of simulations comprising the engine, test cell and first stage ejectors was carried out both as a turbulent real gas calculation as well as a turbulent perfect gas calculation. A comparison for the two cases (Real Turbulent and Perfect gas turbulent) of the Ma

  12. LBNL Computational Research & Theory Facility Groundbreaking - Full Press Conference. Feb 1st, 2012

    ScienceCinema

    Yelick, Kathy

    2016-07-12

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  13. LBNL Computational Research & Theory Facility Groundbreaking - Full Press Conference. Feb 1st, 2012

    SciTech Connect

    Yelick, Kathy

    2012-01-01

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  14. LBNL Computational Research and Theory Facility Groundbreaking. February 1st, 2012

    SciTech Connect

    Yelick, Kathy

    2012-01-01

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  15. LBNL Computational Research and Theory Facility Groundbreaking. February 1st, 2012

    ScienceCinema

    Yelick, Kathy

    2016-07-12

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  16. New security infrastructure model for distributed computing systems

    NASA Astrophysics Data System (ADS)

    Dubenskaya, J.; Kryukov, A.; Demichev, A.; Prikhodko, N.

    2016-02-01

    In this paper we propose a new approach to setting up a user-friendly yet secure authentication and authorization procedure in a distributed computing system. The security concept of most heterogeneous distributed computing systems is based on a public key infrastructure together with proxy certificates, which are used for rights delegation. In practice, the contradiction between the limited lifetime of proxy certificates and the unpredictable duration of request processing is a significant issue for the end users of the system. We propose to use hashes with unlimited lifetime, individual for each request, instead of proxy certificates. Our approach avoids the use of proxy certificates altogether, so the security infrastructure of the distributed computing system becomes easier to develop, support and use.
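
    A minimal sketch of the per-request token idea described above, assuming a shared secret between the user's client and the authorization service; the function names and the HMAC-SHA-256 construction are illustrative assumptions, not the authors' protocol.

      import hmac
      import hashlib
      import secrets

      SECRET = secrets.token_bytes(32)   # assumed shared between client and auth service

      def issue_token(request_id: str) -> str:
          # Hash that is individual for each request and carries no expiry time.
          return hmac.new(SECRET, request_id.encode(), hashlib.sha256).hexdigest()

      def verify_token(request_id: str, token: str) -> bool:
          # Constant-time comparison against a freshly computed hash.
          expected = hmac.new(SECRET, request_id.encode(), hashlib.sha256).hexdigest()
          return hmac.compare_digest(expected, token)

      req = "job-42:user-alice:2016-02-01T12:00:00Z"   # hypothetical request identifier
      tok = issue_token(req)
      print(verify_token(req, tok))   # True, regardless of how long processing takes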

  17. Nonlinear structural analysis on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Watson, Brian C.; Noor, Ahmed K.

    1995-01-01

    A computational strategy is presented for the nonlinear static and postbuckling analyses of large complex structures on massively parallel computers. The strategy is designed for distributed-memory, message-passing parallel computer systems. The key elements of the proposed strategy are: (1) a multiple-parameter reduced basis technique; (2) a nested dissection (or multilevel substructuring) ordering scheme; (3) parallel assembly of global matrices; and (4) a parallel sparse equation solver. The effectiveness of the strategy is assessed by applying it to thermo-mechanical postbuckling analyses of stiffened composite panels with cutouts, and nonlinear large-deflection analyses of HSCT models on Intel Paragon XP/S computers. The numerical studies presented demonstrate the advantages of nested dissection-based solvers over traditional skyline-based solvers on distributed memory machines.

  18. Computation of glint, glare, and solar irradiance distribution

    DOEpatents

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2015-08-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.

  19. Computation of glint, glare, and solar irradiance distribution

    DOEpatents

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2017-08-01

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.
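
    One plausible reading of the pixel-intensity approach is sketched below under strong simplifying assumptions: the Sun image, with a known direct normal irradiance, calibrates counts per W/m², and the receiver pixels are then scaled by the same factor. The array names, the DNI value, and the calibration itself are illustrative assumptions, not the patented method.

      import numpy as np

      # Hypothetical camera data (intensity counts).
      sun_pixels = np.random.uniform(200.0, 255.0, size=(50, 50))
      receiver_pixels = np.random.uniform(0.0, 255.0, size=(480, 640))

      DNI = 1000.0                                       # assumed direct normal irradiance, W/m^2
      counts_to_irradiance = DNI / sun_pixels.mean()     # toy calibration factor

      irradiance_map = receiver_pixels * counts_to_irradiance   # W/m^2 per pixel (toy)
      print(irradiance_map.max(), irradiance_map.mean())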

  20. Distributed computer taxonomy based on O/S structure

    NASA Technical Reports Server (NTRS)

    Foudriat, Edwin C.

    1985-01-01

    The taxonomy considers the resource structure at the operating system level. It compares a communication based taxonomy with the new taxonomy to illustrate how the latter does a better job when related to the client's view of the distributed computer. The results illustrate the fundamental features and what is required to construct fully distributed processing systems. The problem of using network computers on the space station is addressed. A detailed discussion of the taxonomy is not given here. Information is given in the form of charts and diagrams that were used to illustrate a talk.

  1. First Experiences with LHC Grid Computing and Distributed Analysis

    SciTech Connect

    Fisk, Ian

    2010-12-01

    This presentation reviewed the experiences of the LHC experiments with grid computing, with a focus on distributed analysis. After many years of development, preparation, exercises, and validation, the LHC (Large Hadron Collider) experiments are in operation. The computing infrastructure has been heavily utilized in the first six months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes in employing the infrastructure for distributed analysis. Finally, the expected evolution and future plans are outlined.

  2. Distributed MRI reconstruction using Gadgetron-based cloud computing.

    PubMed

    Xue, Hui; Inati, Souheil; Sørensen, Thomas Sangild; Kellman, Peter; Hansen, Michael S

    2015-03-01

    To expand the open source Gadgetron reconstruction framework to support distributed computing and to demonstrate that a multinode version of the Gadgetron can be used to provide nonlinear reconstruction with clinically acceptable latency. The Gadgetron framework was extended with new software components that enable an arbitrary number of Gadgetron instances to collaborate on a reconstruction task. This cloud-enabled version of the Gadgetron was deployed on three different distributed computing platforms ranging from a heterogeneous collection of commodity computers to the commercial Amazon Elastic Compute Cloud. The Gadgetron cloud was used to provide nonlinear, compressed sensing reconstruction on a clinical scanner with low reconstruction latency (e.g., cardiac and neuroimaging applications). The proposed setup was able to handle acquisition and l1-SPIRiT reconstruction of nine high temporal resolution real-time cardiac short axis cine acquisitions, covering the ventricles for functional evaluation, in under 1 min. A three-dimensional high-resolution brain acquisition with 1 mm³ isotropic pixel size was acquired and reconstructed with nonlinear reconstruction in less than 5 min. A distributed-computing-enabled Gadgetron provides a scalable way to improve reconstruction performance using commodity cluster computing. Nonlinear, compressed sensing reconstruction can be deployed clinically with low image reconstruction latency. © 2014 Wiley Periodicals, Inc.
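
    The scatter/gather pattern implied by "an arbitrary number of Gadgetron instances collaborating on a reconstruction task" can be pictured with a toy Python sketch: slices of a k-space volume are farmed out to worker processes and the reconstructed images gathered back. This is a generic sketch, not the Gadgetron API; a plain inverse FFT stands in for the nonlinear reconstruction.

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def reconstruct_slice(kspace_slice):
          # Stand-in for the real nonlinear reconstruction of one slice.
          return np.abs(np.fft.ifft2(kspace_slice))

      def main():
          # Hypothetical k-space volume: 16 slices of 128 x 128 complex samples.
          kspace = np.random.randn(16, 128, 128) + 1j * np.random.randn(16, 128, 128)
          with ProcessPoolExecutor(max_workers=4) as pool:
              images = list(pool.map(reconstruct_slice, kspace))   # scatter/gather
          print(len(images), images[0].shape)

      if __name__ == "__main__":
          main()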

  3. A directory service for configuring high-performance distributed computations

    SciTech Connect

    Fitzgerald, S.; Kesselman, C.; Foster, I.

    1997-08-01

    High-performance execution in distributed computing environments often requires careful selection and configuration not only of computers, networks, and other resources but also of the protocols and algorithms used by applications. Selection and configuration in turn require access to accurate, up-to-date information on the structure and state of available resources. Unfortunately, no standard mechanism exists for organizing or accessing such information. Consequently, different tools and applications adopt ad hoc mechanisms, or they compromise their portability and performance by using default configurations. We propose a Metacomputing Directory Service that provides efficient and scalable access to diverse, dynamic, and distributed information about resource structure and state. We define an extensible data model to represent required information and present a scalable, high-performance, distributed implementation. The data representation and application programming interface are adopted from the Lightweight Directory Access Protocol; the data model and implementation are new. We use the Globus distributed computing toolkit to illustrate how this directory service enables the development of more flexible and efficient distributed computing services and applications.
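
    Since the abstract states that the data representation and API are adopted from LDAP, a directory query of the kind such a service supports can be sketched with the Python ldap3 package; the server URL, search base, object class, and attribute names below are hypothetical placeholders, not the actual MDS schema.

      from ldap3 import Server, Connection, ALL

      # Hypothetical directory endpoint and schema names.
      server = Server("ldap://mds.example.org", get_info=ALL)
      conn = Connection(server, auto_bind=True)

      # Ask for compute resources along with their CPU count and free memory.
      conn.search(
          search_base="o=grid",
          search_filter="(objectClass=computeResource)",
          attributes=["hostName", "cpuCount", "freeMemory"],
      )
      for entry in conn.entries:
          print(entry.hostName, entry.cpuCount, entry.freeMemory)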

  4. Distributed computing system with dual independent communications paths between computers and employing split tokens

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)

    1990-01-01

    This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic balance of the loads. The system comprises a plurality of computers, each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces, providing each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces, providing each computer with the ability to establish a communications link with another of the computers bypassing the remainder of the computers. Each computer is controlled by a resident copy of a common operating system. Communications between respective computers are by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers, wherein the location of the second portion is part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of the functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message, whereby collisions between messages are detected and avoided.
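
    The split-token structure described in the claim, a moving first portion passed from computer to computer that records where the resident second portion lives, can be pictured with a small, purely illustrative Python data model; the field names are assumptions, not the patent's terminology.

      from dataclasses import dataclass
      from typing import Any, Dict

      # Resident second portions, indexed by (host, slot): data kept in one host's memory.
      resident_store: Dict[tuple, Any] = {("host-B", 7): {"payload": [1, 2, 3]}}

      @dataclass
      class MovingToken:
          function: str          # function the receiving computer should execute
          resident_host: str     # host holding the resident second portion
          resident_slot: int     # location of the second portion, carried in the first portion

      def execute(token: MovingToken):
          # The receiving computer uses the location carried by the moving portion
          # to fetch the resident data over the mesh network (simulated by a dict here).
          data = resident_store[(token.resident_host, token.resident_slot)]
          print(f"running {token.function} on {data['payload']}")

      execute(MovingToken(function="sum", resident_host="host-B", resident_slot=7))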

  5. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    SciTech Connect

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.

    2015-05-22

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10²) sites, O(10⁵) cores, O(10⁸) jobs per year, O(10³) users, and ATLAS data volume is O(10¹⁷) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to setup and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as

  6. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.

    2015-05-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10²) sites, O(10⁵) cores, O(10⁸) jobs per year, O(10³) users, and ATLAS data volume is O(10¹⁷) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data’ (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to setup and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. We

  7. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE PAGES

    Klimentov, A.; Buncic, P.; De, K.; ...

    2015-05-22

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10²) sites, O(10⁵) cores, O(10⁸) jobs per year, O(10³) users, and ATLAS data volume is O(10¹⁷) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to setup and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system

  8. A digital computer propulsion control facility: Description of capabilities and summary of experimental program results

    NASA Technical Reports Server (NTRS)

    Zeller, J. R.; Arpasi, D. J.; Lehtinen, B.

    1976-01-01

    Flight weight digital computers are being used today to carry out many of the propulsion system control functions previously delegated exclusively to hydromechanical controllers. An operational digital computer facility for propulsion control mode studies has been used successfully in several experimental programs. This paper describes the system and some of the results concerned with engine control, inlet control, and inlet engine integrated control. Analytical designs for the digital propulsion control modes include both classical and modern/optimal techniques.

  9. Evolution of facility layout requirements and CAD (computer-aided design) system development

    SciTech Connect

    Jones, M.

    1990-06-01

    The overall configuration of the Superconducting Super Collider (SSC) including the infrastructure and land boundary requirements were developed using a computer-aided design (CAD) system. The evolution of the facility layout requirements and the use of the CAD system are discussed. The emphasis has been on minimizing the amount of input required and maximizing the speed by which the output may be obtained. The computer system used to store the data is also described.

  10. MIP models for connected facility location: A theoretical and computational study☆

    PubMed Central

    Gollowitzer, Stefan; Ljubić, Ivana

    2011-01-01

    This article comprises the first theoretical and computational study on mixed integer programming (MIP) models for the connected facility location problem (ConFL). ConFL combines facility location and Steiner trees: given a set of customers, a set of potential facility locations and some inter-connection nodes, ConFL searches for the minimum-cost way of assigning each customer to exactly one open facility, and connecting the open facilities via a Steiner tree. The costs needed for building the Steiner tree, facility opening costs and the assignment costs need to be minimized. We model ConFL using seven compact and three mixed integer programming formulations of exponential size. We also show how to transform ConFL into the Steiner arborescence problem. A full hierarchy between the models is provided. For two exponential size models we develop a branch-and-cut algorithm. An extensive computational study is based on two benchmark sets of randomly generated instances with up to 1300 nodes and 115,000 edges. We empirically compare the presented models with respect to the quality of obtained bounds and the corresponding running time. We report optimal values for all but 16 instances for which the obtained gaps are below 0.6%. PMID:25009366

  11. MIP models for connected facility location: A theoretical and computational study.

    PubMed

    Gollowitzer, Stefan; Ljubić, Ivana

    2011-02-01

    This article comprises the first theoretical and computational study on mixed integer programming (MIP) models for the connected facility location problem (ConFL). ConFL combines facility location and Steiner trees: given a set of customers, a set of potential facility locations and some inter-connection nodes, ConFL searches for the minimum-cost way of assigning each customer to exactly one open facility, and connecting the open facilities via a Steiner tree. The costs needed for building the Steiner tree, facility opening costs and the assignment costs need to be minimized. We model ConFL using seven compact and three mixed integer programming formulations of exponential size. We also show how to transform ConFL into the Steiner arborescence problem. A full hierarchy between the models is provided. For two exponential size models we develop a branch-and-cut algorithm. An extensive computational study is based on two benchmark sets of randomly generated instances with up to 1300 nodes and 115,000 edges. We empirically compare the presented models with respect to the quality of obtained bounds and the corresponding running time. We report optimal values for all but 16 instances for which the obtained gaps are below 0.6%.
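
    For orientation, one generic single-commodity-flow formulation of ConFL can be sketched as follows, written on a bidirected graph with arc set A, a given root node r outside the facility set F, customer set C, opening costs f_i, assignment costs a_ij, arc costs c_a, binary variables z_i (open facility), x_ij (assignment), y_a (tree arc), and flows g_a. This is a textbook-style compact model given for illustration; it is not claimed to coincide with any of the seven compact formulations studied in the article.

      \min \sum_{a \in A} c_a\, y_a + \sum_{i \in F} f_i\, z_i + \sum_{i \in F}\sum_{j \in C} a_{ij}\, x_{ij}
      \text{s.t.}\quad \sum_{i \in F} x_{ij} = 1 \qquad \forall j \in C
      \qquad\quad x_{ij} \le z_i \qquad \forall i \in F,\ j \in C
      \qquad\quad \sum_{a \in \delta^-(v)} g_a - \sum_{a \in \delta^+(v)} g_a =
                  \begin{cases} z_v & v \in F \\ -\sum_{i \in F} z_i & v = r \\ 0 & \text{otherwise} \end{cases}
      \qquad\quad 0 \le g_a \le |F|\, y_a \qquad \forall a \in A, \qquad x, y, z \in \{0,1\}.

    Here the root supplies one unit of flow per open facility and each open facility absorbs one unit, so any arc carrying flow must be bought (y_a = 1), which forces the bought arcs to connect the root to all open facilities; minimizing arc costs then yields a Steiner tree over the root and the open facilities.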

  12. The CT Scanner Facility at Stellenbosch University: An open access X-ray computed tomography laboratory

    NASA Astrophysics Data System (ADS)

    du Plessis, Anton; le Roux, Stephan Gerhard; Guelpa, Anina

    2016-10-01

    The Stellenbosch University CT Scanner Facility is an open access laboratory providing non-destructive X-ray computed tomography (CT) and high-performance image analysis services as part of the Central Analytical Facilities (CAF) of the university. Based in Stellenbosch, South Africa, this facility offers open access to the general user community, including local researchers, companies and also remote users (both local and international, via sample shipment and data transfer). The laboratory hosts two CT instruments: a micro-CT system and a nano-CT system. A workstation-based Image Analysis Centre is equipped with numerous computers with data analysis software packages, which are at the disposal of the facility users, along with expert supervision, if required. All research disciplines are accommodated at the X-ray CT laboratory, provided that non-destructive analysis will be beneficial. During its first four years, the facility has accommodated more than 400 unique users (33 in 2012; 86 in 2013; 154 in 2014; 140 in 2015; 75 in the first half of 2016), with diverse industrial and research applications using X-ray CT as a means. This paper summarises the laboratory's first four years by way of selected examples, both from published and unpublished projects. In the process, a detailed description of the capabilities and facilities available to users is presented.

  13. The future of PanDA in ATLAS distributed computing

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.

  14. Integration of Cloud resources in the LHCb Distributed Computing

    NASA Astrophysics Data System (ADS)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated into LHCb Distributed Computing. LHCb uses its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack); it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  15. The National Ignition Facility: Status of the Integrated Computer Control System

    SciTech Connect

    Van Arsdall, P J; Bryant, R; Carey, R; Casavant, D; Demaret, R; Edwards, O; Ferguson, W; Krammen, J; Lagin, L; Larson, D; Lee, A; Ludwigsen, P; Miller, M; Moses, E; Nyholm, R; Reed, R; Shelton, R; Wuest, C

    2003-10-13

    The National Ignition Facility (NIF), currently under construction at the Lawrence Livermore National Laboratory, is a stadium-sized facility containing a 192-beam, 1.8-Megajoule, 500-Terawatt, ultraviolet laser system together with a 10-meter diameter target chamber with room for nearly 100 experimental diagnostics. When completed, NIF will be the world's largest and most energetic laser experimental system, providing an international center to study inertial confinement fusion and the physics of matter at extreme energy densities and pressures. NIF's 192 energetic laser beams will compress fusion targets to conditions required for thermonuclear burn, liberating more energy than required to initiate the fusion reactions. Laser hardware is modularized into line replaceable units such as deformable mirrors, amplifiers, and multi-function sensor packages that are operated by the Integrated Computer Control System (ICCS). ICCS is a layered architecture of 300 front-end processors attached to nearly 60,000 control points and coordinated by supervisor subsystems in the main control room. The functional subsystems--beam control including automatic beam alignment and wavefront correction, laser pulse generation and pre-amplification, diagnostics, pulse power, and timing--implement automated shot control, archive data, and support the actions of fourteen operators at graphic consoles. Object-oriented software development uses a mixed language environment of Ada (for functional controls) and Java (for user interface and database backend). The ICCS distributed software framework uses CORBA to communicate between languages and processors. ICCS software is approximately three quarters complete, with over 750 thousand source lines of code that have undergone off-line verification tests and been deployed to the facility. NIF has entered the first phases of its laser commissioning program. NIF's highest 3ω single laser beam performance is 10.4 kJ, equivalent to 2 MJ for a fully

  16. 300 Area Treated Effluent Disposal Facility computer software release cover sheet and revision record

    SciTech Connect

    McCarthy, R.J.

    1994-11-28

    This supporting document contains the computer software release cover sheet and revision records for the 300 Area Treated Effluent Disposal Facility (TEDF). The previous revision was controlled by CH2M Hill which developed the software. A 7-page listing of the contents of directory C:{backslash}TEDF is contained in this report.

  17. An approach to experimental evaluation of real-time fault-tolerant distributed computing schemes

    NASA Technical Reports Server (NTRS)

    Kim, K. H.

    1989-01-01

    A testbed-based approach to the evaluation of fault-tolerant distributed computing schemes is discussed. The approach is based on experimental incorporation of system structuring and design techniques into real-time distributed-computing testbeds centered around tightly coupled microcomputer networks. The effectiveness of this approach has been experimentally confirmed. Primary advantages of this approach include the accuracy of the timing and logical-complexity data and the degree of assurance of the practical effectiveness of the scheme evaluated. Various design issues encountered in the course of establishing the network testbed facilities are discussed, along with their augmentation to support some experiments. The shortcomings of the testbeds are also discussed together with the desired extensions of the testbeds.

  18. Distributed Computing with Centralized Support Works at Brigham Young.

    ERIC Educational Resources Information Center

    McDonald, Kelly; Stone, Brad

    1992-01-01

    Brigham Young University (Utah) has addressed the need for maintenance and support of distributed computing systems on campus by implementing a program patterned after a national business franchise, providing the support and training of a centralized administration but allowing each unit to operate much as an independent small business.…

  19. CMS Monte Carlo production operations in a distributed computing environment

    SciTech Connect

    Mohapatra, A.; Lazaridis, C.; Hernandez, J.M.; Caballero, J.; Hof, C.; Kalinin, S.; Flossdorf, A.; Abbrescia, M.; De Filippis, N.; Donvito, G.; Maggi, G.; /Bari U. /INFN, Bari /INFN, Pisa /Vrije U., Brussels /Brussels U. /Imperial Coll., London /CERN /Princeton U. /Fermilab

    2008-01-01

    Monte Carlo production for the CMS experiment is carried out in a distributed computing environment; the goal of producing 30M simulated events per month in the first half of 2007 has been reached. A brief overview of the production operations and statistics is presented.

  20. Chandrasekhar equations and computational algorithms for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Ito, K.; Powers, R. K.

    1984-01-01

    The Chandrasekhar equations arising in optimal control problems for linear distributed parameter systems are considered. The equations are derived via approximation theory. This approach is used to obtain existence, uniqueness, and strong differentiability of the solutions and provides the basis for a convergent computation scheme for approximating feedback gain operators. A numerical example is presented to illustrate these ideas.

  1. Accelerating Computation of DNA Sequence Alignment in Distributed Environment

    NASA Astrophysics Data System (ADS)

    Guo, Tao; Li, Guiyang; Deaton, Russel

    Sequence similarity and alignment are among the most important operations in computational biology. However, analyzing large sets of DNA sequences is impractical on a regular PC. Using multiple threads with the JavaParty mechanism, this project successfully extended the capabilities of regular Java to a distributed environment for the simulation of DNA computation. With the aid of JavaParty and a multi-threaded design, the results of this study demonstrated that the modified Java program could perform parallel computing without using RMI or socket communication. In this paper, an efficient method for modeling and comparing DNA sequences with dynamic programming and JavaParty is first proposed. Additionally, results of this method in a distributed environment are discussed.
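
    The dynamic-programming kernel that such a distributed alignment parallelises over many sequence pairs can be summarised in a few lines of Python (the paper itself works in Java with JavaParty; the scoring values below are arbitrary illustrative choices):

        # Minimal Needleman-Wunsch global alignment score: the dynamic-programming
        # kernel that a distributed driver would evaluate for many sequence pairs.
        def nw_score(a: str, b: str, match: int = 1, mismatch: int = -1, gap: int = -2) -> int:
            rows, cols = len(a) + 1, len(b) + 1
            dp = [[0] * cols for _ in range(rows)]
            for i in range(1, rows):
                dp[i][0] = i * gap
            for j in range(1, cols):
                dp[0][j] = j * gap
            for i in range(1, rows):
                for j in range(1, cols):
                    diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
            return dp[-1][-1]


        if __name__ == "__main__":
            # A distributed driver would farm such pairwise scores out to worker nodes.
            print(nw_score("GATTACA", "GCATGCU"))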

  2. A fault detection service for wide area distributed computations.

    SciTech Connect

    Stelling, P.

    1998-06-09

    The potential for faults in distributed computing systems is a significant complicating factor for application developers. While a variety of techniques exist for detecting and correcting faults, the implementation of these techniques in a particular context can be difficult. Hence, we propose a fault detection service designed to be incorporated, in a modular fashion, into distributed computing systems, tools, or applications. This service uses well-known techniques based on unreliable fault detectors to detect and report component failure, while allowing the user to trade off timeliness of reporting against false-positive rates. We describe the architecture of this service, report on experimental results that quantify its cost and accuracy, and describe its use in two applications: monitoring the status of system components of the GUSTO computational grid testbed, and as part of the NetSolve network-enabled numerical solver.
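
    The timeliness versus false-positive trade-off described above can be made concrete with a minimal heartbeat-based detector; this is a simplified sketch of the general technique, not the architecture of the proposed service:

        # Simplified unreliable failure detector: a component is suspected when its
        # last heartbeat is older than `timeout`. A short timeout reports failures
        # quickly but raises the false-positive rate; a long timeout does the reverse.
        import time


        class HeartbeatDetector:
            def __init__(self, timeout: float) -> None:
                self.timeout = timeout
                self.last_seen: dict[str, float] = {}

            def heartbeat(self, component: str) -> None:
                self.last_seen[component] = time.monotonic()

            def suspected(self) -> list[str]:
                now = time.monotonic()
                return [c for c, t in self.last_seen.items() if now - t > self.timeout]


        if __name__ == "__main__":
            fd = HeartbeatDetector(timeout=0.2)
            fd.heartbeat("worker-1")
            fd.heartbeat("worker-2")
            time.sleep(0.3)
            fd.heartbeat("worker-2")   # worker-1 stays silent
            print(fd.suspected())      # ['worker-1']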

  3. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time, T_par, of the application depends on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
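
    The limit that such sequential segments place on speedup is Amdahl's law; a one-function check in Python (values illustrative):

        # Amdahl's law: with sequential fraction s, the speedup on n processors
        # is bounded by 1 / (s + (1 - s) / n), however fast the parallel part runs.
        def amdahl_speedup(sequential_fraction: float, n_procs: int) -> float:
            s = sequential_fraction
            return 1.0 / (s + (1.0 - s) / n_procs)


        if __name__ == "__main__":
            # Even on 32 processors, a 40% sequential portion caps speedup near 2.4x.
            print(round(amdahl_speedup(0.4, 32), 2))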

  4. Performance Assessment of OVERFLOW on Distributed Computing Environment

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Rizk, Yehia M.

    2000-01-01

    The aerodynamic computer code, OVERFLOW, with a multi-zone overset grid feature, has been parallelized to enhance its performance on distributed and shared memory paradigms. Practical application benchmarks have been set to assess the efficiency of the code's parallelism on high-performance architectures. The code's performance has also been evaluated in the context of the distributed computing paradigm on distant computer resources using the Information Power Grid (IPG) toolkit, Globus. Two parallel versions of the code, namely OVERFLOW-MPI and -MLP, have been developed around the natural coarse-grained parallelism inherent in a multi-zonal domain decomposition paradigm. The algorithm invokes a strategy that forms a number of groups, each consisting of a zone, a cluster of zones and/or a partition of a large zone. Each group can be thought of as a process with one or more threads assigned to it; all groups run in parallel. The -MPI version of the code uses explicit message-passing based on the standard MPI library for sending and receiving interzonal boundary data across processors. The -MLP version employs no message-passing paradigm; the boundary data is transferred through shared memory. The -MPI code is suited for both distributed and shared memory architectures, while the -MLP code can only be used on shared memory platforms. The IPG applications are implemented by the -MPI code using the Globus toolkit. While a computational task is distributed across multiple computer resources, the parallelism can be exploited on each resource alone. Performance studies were carried out with practical aerodynamic problems with complex geometries, consisting of 2.5 to 33 million grid points and a large number of zonal blocks. The computations were executed primarily on SGI Origin 2000 multiprocessors and on the Cray T3E. OVERFLOW's IPG applications are carried out on NASA homogeneous metacomputing machines located at three sites: Ames, Langley, and Glenn. Plans

  6. Computational strategies for three-dimensional flow simulations on distributed computer systems

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-01-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
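
    Of the communication strategies listed above, the manager-worker pattern is the simplest to sketch; the following Python fragment illustrates the idea with a task queue of zone groups (the actual solver used PVM/MPI, so this is only a schematic):

        # Schematic manager-worker strategy: the manager puts zone-group tasks on a
        # queue and workers pull them as they become free (simple task-queue balancing).
        from multiprocessing import Process, Queue


        def worker(tasks: Queue, results: Queue) -> None:
            while True:
                zone = tasks.get()
                if zone is None:                 # sentinel: no more work
                    break
                results.put((zone, f"zone {zone} solved"))


        if __name__ == "__main__":
            tasks, results = Queue(), Queue()
            n_workers, n_zones = 4, 10
            procs = [Process(target=worker, args=(tasks, results)) for _ in range(n_workers)]
            for p in procs:
                p.start()
            for zone in range(n_zones):          # enqueue the zone groups
                tasks.put(zone)
            for _ in procs:                      # one sentinel per worker
                tasks.put(None)
            for _ in range(n_zones):             # drain results before joining
                print(results.get())
            for p in procs:
                p.join()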

  7. Computational strategies for three-dimensional flow simulations on distributed computer systems

    NASA Astrophysics Data System (ADS)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-08-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.

  8. Computation of distribution of minimum resolution for log-normal distribution of chromatographic peak heights.

    PubMed

    Davis, Joe M

    2011-10-28

    General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.
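
    The intermediate quantity in the derivation, the distribution of the ratio of two peak heights, is easy to examine by Monte Carlo; a brief Python sketch with purely illustrative log-normal parameters (not those of the cited mixtures):

        # Monte Carlo sketch: ratio of the smaller to the larger of two independent
        # log-normal peak heights, the intermediate distribution referred to above.
        import random


        def height_ratio_samples(n: int, mu: float = 0.0, sigma: float = 1.0) -> list[float]:
            samples = []
            for _ in range(n):
                h1 = random.lognormvariate(mu, sigma)
                h2 = random.lognormvariate(mu, sigma)
                samples.append(min(h1, h2) / max(h1, h2))
            return samples


        if __name__ == "__main__":
            ratios = sorted(height_ratio_samples(100_000))
            print("median smaller/larger height ratio:", round(ratios[len(ratios) // 2], 3))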

  9. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.

  10. A Reliable Distributed Computing System Architecture for Planetary Rover

    NASA Astrophysics Data System (ADS)

    Jingping, C.; Yunde, J.

    The computing system is one of the most important parts of a planetary rover: it is crucial to the rover's functional capability and survival probability. When the planetary rover executes tasks, it needs to react to events in time and to tolerate faults caused by the environment or by itself. To meet these requirements, the planetary rover computing system architecture should be reactive, highly reliable, adaptable, consistent and extendible. This paper introduces a reliable distributed computing system architecture for a planetary rover. This architecture integrates new ideas and technologies of hardware architecture, software architecture, network architecture, fault-tolerance technology and intelligent control system architecture. The architecture defines three dimensions of fault containment regions: the channel dimension, the lane dimension and the integrity dimension. The whole computing system has three channels. The channels provide the main fault containment regions for the system hardware and are the ultimate line of defense against a single physical fault. The lanes are the secondary fault containment regions for physical faults; they can be used to improve the capability for fault diagnosis within a channel, can improve the coverage with respect to design faults through hardware and software diversity, can serve as backups for one another to improve availability, and can increase computing capability. The integrity dimension provides the fault containment region for software design. Its purpose

  11. A facile synthesis of Te nanoparticles with binary size distribution by green chemistry.

    PubMed

    He, Weidong; Krejci, Alex; Lin, Junhao; Osmulski, Max E; Dickerson, James H

    2011-04-01

    Our work reports a facile route to colloidal Te nanocrystals with binary uniform size distributions at room temperature. The binary-sized Te nanocrystals were well separated into two size regimes and assembled into films by electrophoretic deposition. The research provides a new platform for nanomaterials to be efficiently synthesized and manipulated.

  12. Spatio-temporal distribution of stored-product insects around food processing and storage facilities

    USDA-ARS?s Scientific Manuscript database

    Grain storage and processing facilities consist of a landscape of indoor and outdoor habitats that can potentially support stored-product insect pests, and understanding patterns of species diversity and spatial distribution in the landscape surrounding structures can provide insight into how the ou...

  13. A fission matrix based validation protocol for computed power distributions in the advanced test reactor

    SciTech Connect

    Nielsen, J. W.; Nigg, D. W.; LaPorta, A. W.

    2013-07-01

    The Idaho National Laboratory (INL) has been engaged in a significant multi year effort to modernize the computational reactor physics tools and validation procedures used to support operations of the Advanced Test Reactor (ATR) and its companion critical facility (ATRC). Several new protocols for validation of computed neutron flux distributions and spectra as well as for validation of computed fission power distributions, based on new experiments and well-recognized least-squares statistical analysis techniques, have been under development. In the case of power distributions, estimates of the a priori ATR-specific fuel element-to-element fission power correlation and covariance matrices are required for validation analysis. A practical method for generating these matrices using the element-to-element fission matrix is presented, along with a high-order scheme for estimating the underlying fission matrix itself. The proposed methodology is illustrated using the MCNP5 neutron transport code for the required neutronics calculations. The general approach is readily adaptable for implementation using any multidimensional stochastic or deterministic transport code that offers the required level of spatial, angular, and energy resolution in the computed solution for the neutron flux and fission source. (authors)

  14. Feasibility Study for a Remote Terminal Central Computing Facility Serving School and College Institutions. Volume I, Functional Requirements.

    ERIC Educational Resources Information Center

    International Business Machines Corp., White Plains, NY.

    The economic and technical feasibility of providing a remote terminal central computing facility to serve a group of 25-75 secondary schools and colleges was investigated. The general functions of a central facility for an educational cluster were defined to include training in computer techniques, the solution of student development problems in…

  15. Improving flow distribution in influent channels using computational fluid dynamics.

    PubMed

    Park, No-Suk; Yoon, Sukmin; Jeong, Woochang; Lee, Seungjae

    2016-10-01

    Although the flow distribution in an influent channel where the inflow is split into each treatment process in a wastewater treatment plant greatly affects the efficiency of the process, and a weir is the typical structure for the flow distribution, to the authors' knowledge, there is a paucity of research on the flow distribution in an open channel with a weir. In this study, the influent channel of a real-scale wastewater treatment plant was used, installing a suppressed rectangular weir that has a horizontal crest to cross the full channel width. The flow distribution in the influent channel was analyzed using a validated computational fluid dynamics model to investigate (1) the comparison of single-phase and two-phase simulation, (2) the improved procedure of the prototype channel, and (3) the effect of the inflow rate on flow distribution. The results show that two-phase simulation is more reliable due to the description of the free-surface fluctuations. It should first be considered for improving flow distribution to prevent a short-circuit flow, and the difference in the kinetic energy with the inflow rate makes flow distribution trends different. The authors believe that this case study is helpful for improving flow distribution in an influent channel.

  16. File and metadata management for BESIII distributed computing

    NASA Astrophysics Data System (ADS)

    Nicholson, C.; Lin, L.; Deng, Z. Y.; Li, W. D.; Zhang, X. M.; Zheng, Y. H.

    2012-12-01

    The BESIII experiment at the Institute of High Energy Physics (IHEP), Beijing, uses the high-luminosity BEPCII e+e- collider to study physics in the tau-charm energy region around 3.7 GeV; BEPCII has produced the world's largest samples of J/ψ and ψ' events to date. An order of magnitude increase in the data sample size over the 2011-2012 data-taking period demanded a move from a very centralized to a distributed computing environment, as well as the development of an efficient file and metadata management system. While BESIII is on a smaller scale than some other HEP experiments, this poses particular challenges for its distributed computing and data management system. These constraints include limited resources and manpower, and low quality of network connections to IHEP. Drawing on the rich experience of the HEP community, a system has been developed which meets these constraints. The design and development of the BESIII distributed data management system, including its integration with other BESIII distributed computing components, such as job management, are presented here.

  17. Distributed storage and cloud computing: a test case

    NASA Astrophysics Data System (ADS)

    Piano, S.; Della Ricca, G.

    2014-06-01

    Since 2003 the computing farm hosted by the INFN Tier3 facility in Trieste has supported the activities of many scientific communities. Hundreds of jobs from 45 different VOs, including those of the LHC experiments, are processed simultaneously. Given that normally the requirements of the different computational communities are not synchronized, the probability that at any given time the resources owned by one of the participants are not fully utilized is quite high. A balanced compensation should in principle allocate the free resources to other users, but there are limits to this mechanism. In fact, the Trieste site may not hold the amount of data needed to attract enough analysis jobs, and even in that case there could be a lack of bandwidth for their access. The Trieste ALICE and CMS computing groups, in collaboration with other Italian groups, aim to overcome the limitations of existing solutions using two approaches: sharing the data among all the participants, taking full advantage of GARR-X wide area networks (10 Gb/s), and integrating the resources dedicated to batch analysis with the ones reserved for dynamic interactive analysis, through modern solutions such as cloud computing.

  18. Distributed Computer Networks in Support of Complex Group Practices

    PubMed Central

    Wess, Bernard P.

    1978-01-01

    The economics of medical computer networks are presented in context with the patient care and administrative goals of medical networks. Design alternatives and network topologies are discussed with an emphasis on medical network design requirements in distributed data base design, telecommunications, satellite systems, and software engineering. The success of the medical computer networking technology is predicated on the ability of medical and data processing professionals to design comprehensive, efficient, and virtually impenetrable security systems to protect data bases, network access and services, and patient confidentiality.

  19. EST analysis pipeline: use of distributed computing resources.

    PubMed

    González, Francisco Javier; Vizcaíno, Juan Antonio

    2011-01-01

    This chapter describes how a pipeline for the analysis of expressed sequence tag (EST) data can be implemented, based on our previous experience generating ESTs from Trichoderma spp. We focus on key steps in the workflow, such as the processing of raw data from the sequencers, the clustering of ESTs, and the functional annotation of the sequences using BLAST, InterProScan, and BLAST2GO. Some of the steps require the use of intensive computing power. Since these resources are not available for small research groups or institutes without bioinformatics support, an alternative will be described: the use of distributed computing resources (local grids and Amazon EC2).

  20. ISIS: A System for Fault-Tolerant Distributed Computing

    DTIC Science & Technology

    1986-04-01

    ISIS: A System for Fault-Tolerant Distributed Computing. Kenneth P. Birman, TR 86-744, April 1986, Department of Computer Science, Cornell University, Ithaca, New York. Resilient objects: ISIS extends a conventional operating system by introducing a new programming abstraction, the resilient object.

  1. Common Accounting System for Monitoring the ATLAS Distributed Computing Resources

    NASA Astrophysics Data System (ADS)

    Karavakis, E.; Andreeva, J.; Campana, S.; Gayazov, S.; Jezequel, S.; Saiz, P.; Sargsyan, L.; Schovancova, J.; Ueda, I.; Atlas Collaboration

    2014-06-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources; either generic or ATLAS specific. This set of tools provides quality and scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  2. Computed voltage distributions around solar electric propulsion spacecraft

    NASA Technical Reports Server (NTRS)

    Stevens, N. J.

    1979-01-01

    The NASA Charging Analyzer Program is used to conduct preliminary computations of the voltage distributions around such large spacecraft in geomagnetic substorm environments at geosynchronous altitudes. Both standard operating voltage (+ or - 150 volts on the solar arrays) and direct-drive (+1200 volts on the arrays) configurations are considered. Thruster-off simulations are computed for both operating voltage configurations, while simulated thruster-on conditions are evaluated only for the direct-drive configuration. These simulated thruster operations appear to alleviate surface charging.

  3. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    SciTech Connect

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  4. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems which are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  5. Radar data processing using a distributed computational system

    NASA Astrophysics Data System (ADS)

    Mota, Gilberto F.

    1992-06-01

    This research specifies and validates a new concurrent decomposition scheme, called Confined Space Search Decomposition (CSSD), to exploit parallelism of Radar Data Processing algorithms using a Distributed Computational System. To formalize the specification, we propose and apply an object-oriented methodology called Decomposition Cost Evaluation Model (DCEM). To reduce the penalties of load imbalance, we propose a distributed dynamic load balance heuristic called Object Reincarnation (OR). To validate the research, we first compare our decomposition with an identified alternative using the proposed DCEM model and then develop a theoretical prediction of selected parameters. We also develop a simulation to check the Object Reincarnation Concept.

  6. Parallel and distributed computation for fault-tolerant object recognition

    NASA Technical Reports Server (NTRS)

    Wechsler, Harry

    1988-01-01

    The distributed associative memory (DAM) model is suggested for distributed and fault-tolerant computation as it relates to object recognition tasks. The fault-tolerance is with respect to geometrical distortions (scale and rotation), noisy inputs, occlusion/overlap, and memory faults. An experimental system was developed for fault-tolerant structure recognition which shows the feasibility of such an approach. The approach is further extended to the problem of multisensory data integration and applied successfully to the recognition of colored polyhedral objects.

  7. Integrating Xgrid into the HENP distributed computing model

    NASA Astrophysics Data System (ADS)

    Hajdu, L.; Kocoloski, A.; Lauret, J.; Miller, M.

    2008-07-01

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix-based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users with a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortless for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.

  8. A biological solution to a fundamental distributed computing problem.

    PubMed

    Afek, Yehuda; Alon, Noga; Barad, Omer; Hornstein, Eran; Barkai, Naama; Bar-Joseph, Ziv

    2011-01-14

    Computational and biological systems are often distributed so that processors (cells) jointly solve a task, without any of them receiving all inputs or observing all outputs. Maximal independent set (MIS) selection is a fundamental distributed computing procedure that seeks to elect a set of local leaders in a network. A variant of this problem is solved during the development of the fly's nervous system, when sensory organ precursor (SOP) cells are chosen. By studying SOP selection, we derived a fast algorithm for MIS selection that combines two attractive features. First, processors do not need to know their degree; second, it has an optimal message complexity while only using one-bit messages. Our findings suggest that simple and efficient algorithms can be developed on the basis of biologically derived insights.
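
    For orientation, a much-simplified round-based simulation of randomized MIS selection, in the spirit of Luby-style algorithms rather than the exact SOP-derived protocol of the paper, can be written in a few lines of Python:

        # Simplified round-based randomized MIS selection on an undirected graph.
        # Each round, active nodes volunteer with probability p; a volunteer with no
        # volunteering neighbour joins the MIS and silences its neighbours.
        import random


        def randomized_mis(adj: dict[int, set[int]], p: float = 0.3, seed: int = 1) -> set[int]:
            rng = random.Random(seed)
            active = set(adj)
            mis: set[int] = set()
            while active:
                volunteers = {v for v in active if rng.random() < p}
                for v in volunteers:
                    if not (volunteers & adj[v]):    # no competing neighbour this round
                        mis.add(v)
                        active.discard(v)
                        active -= adj[v]
            return mis


        if __name__ == "__main__":
            graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
            print(randomized_mis(graph))             # prints one maximal independent set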

  9. Semiquantum key distribution with secure delegated quantum computation

    PubMed Central

    Li, Qin; Chan, Wai Hong; Zhang, Shengyu

    2016-01-01

    Semiquantum key distribution allows a quantum party to share a random key with a “classical” party who only can prepare and measure qubits in the computational basis or reorder some qubits when he has access to a quantum channel. In this work, we present a protocol where a secret key can be established between a quantum user and an almost classical user who only needs the quantum ability to access quantum channels, by securely delegating quantum computation to a quantum server. We show the proposed protocol is robust even when the delegated quantum server is a powerful adversary, and is experimentally feasible with current technology. As one party of our protocol is the most quantum-resource efficient, it can be more practical and significantly widen the applicability scope of quantum key distribution. PMID:26813384

  10. Computer-aided coordination and overcurrent protection for distribution systems

    SciTech Connect

    Tolbert, L.M.

    1995-03-01

    Overcurrent protection and coordination studies for electrical distribution systems have become much easier to perform with the emergence of several commercially available software programs that run on a personal computer. These programs have built-in libraries of protective device time-current curves, damage curves for cable and transformers, and motor starting curves, thereby facilitating the design of a selectively coordinated protection system which is also well-protected. Additionally, design time when utilizing computers is far less than the previous method of tracing manufacturers' curves on transparent paper. Basic protection and coordination principles are presented in this paper along with several helpful suggestions for designing electrical protection systems. A step-by-step methodology is presented to illustrate the design concepts when using software for selecting and coordinating the protective devices in distribution systems.

  11. Semiquantum key distribution with secure delegated quantum computation.

    PubMed

    Li, Qin; Chan, Wai Hong; Zhang, Shengyu

    2016-01-27

    Semiquantum key distribution allows a quantum party to share a random key with a "classical" party who only can prepare and measure qubits in the computational basis or reorder some qubits when he has access to a quantum channel. In this work, we present a protocol where a secret key can be established between a quantum user and an almost classical user who only needs the quantum ability to access quantum channels, by securely delegating quantum computation to a quantum server. We show the proposed protocol is robust even when the delegated quantum server is a powerful adversary, and is experimentally feasible with current technology. As one party of our protocol is the most quantum-resource efficient, it can be more practical and significantly widen the applicability scope of quantum key distribution.

  12. Optimal eigenvalue computation on distributed-memory MIMD multiprocessors

    SciTech Connect

    Crivelli, S.; Jessup, E. R.

    1992-10-01

    Simon proves that bisection is not the optimal method for computing an eigenvalue on a single vector processor. In this paper, we show that his analysis does not extend in a straightforward way to the computation of an eigenvalue on a distributed-memory MIMD multiprocessor. In particular, we show how the optimal number of sections (and processors) to use for multisection depends on variables such as the matrix size and certain parameters inherent to the machine. We also show that parallel multisection outperforms the variant of parallel bisection proposed by Swarztrauber for this problem on a distributed-memory MIMD multiprocessor. We present the results of experiments on the 64-processor Intel iPSC/2 hypercube and the 512-processor Intel Touchstone Delta mesh multiprocessor.
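
    The serial kernel that both bisection and multisection evaluate at trial shifts is the Sturm-sequence count of eigenvalues of a symmetric tridiagonal matrix below a given value; a minimal Python illustration (not the code analysed in the paper):

        # Count the eigenvalues of a symmetric tridiagonal matrix (diagonal d,
        # off-diagonals e) that lie below x, using the Sturm-sequence pivot count.
        def sturm_count(d, e, x):
            count, q = 0, 1.0
            for i in range(len(d)):
                off = e[i - 1] ** 2 if i > 0 else 0.0
                q = (d[i] - x) - off / (q if q != 0.0 else 1e-30)
                if q < 0.0:
                    count += 1
            return count


        def kth_eigenvalue(d, e, k, lo, hi, tol=1e-10):
            """Bisection for the k-th smallest eigenvalue, assuming it lies in [lo, hi]."""
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if sturm_count(d, e, mid) >= k:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)


        if __name__ == "__main__":
            # 3x3 tridiagonal matrix with diagonal 2 and off-diagonals -1:
            # eigenvalues are 2 - sqrt(2), 2, and 2 + sqrt(2).
            d, e = [2.0, 2.0, 2.0], [-1.0, -1.0]
            print(round(kth_eigenvalue(d, e, k=1, lo=0.0, hi=4.0), 6))   # ~0.585786

    Multisection simply evaluates this count at many interior shifts in parallel, narrowing several subintervals per sweep instead of one.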

  13. Power Hardware-in-the-Loop (PHIL) Testing Facility for Distributed Energy Storage (Poster)

    SciTech Connect

    Neubauer, J.; Lundstrom, B.; Simpson, M.; Pratt, A.

    2014-06-01

    The growing deployment of distributed, variable generation and evolving end-user load profiles presents a unique set of challenges to grid operators responsible for providing reliable and high quality electrical service. Mass deployment of distributed energy storage systems (DESS) has the potential to solve many of the associated integration issues while offering reliability and energy security benefits other solutions cannot. However, tools to develop, optimize, and validate DESS control strategies and hardware are in short supply. To fill this gap, NREL has constructed a power hardware-in-the-loop (PHIL) test facility that connects DESS, grid simulator, and load bank hardware to a distribution feeder simulation.

  14. Distributed Cognition (DCOG): Foundations for a Computational Associative Memory Model

    DTIC Science & Technology

    2006-08-01

    Distributed Cognition (DCOG): Foundations for a Computational Associative Memory Model. Robert G. Eggleston, AFRL-HE-WP-TR-2006-0160. Recoverable fragment of the report text: "This isolates the skateboard as the one that doesn't belong. Certain automatic, attention-shifting mechanisms will be required in our model."

  15. Liquid rocket performance computer model with distributed energy release

    NASA Technical Reports Server (NTRS)

    Combs, L. P.

    1972-01-01

    Development of a computer program for analyzing the effects of bipropellant spray combustion processes on liquid rocket performance is described and discussed. The distributed energy release (DER) computer program was designed to become part of the JANNAF liquid rocket performance evaluation methodology and to account for performance losses associated with the propellant combustion processes, e.g., incomplete spray gasification, imperfect mixing between sprays and their reacting vapors, residual mixture ratio striations in the flow, and two-phase flow effects. The DER computer program begins by initializing the combustion field at the injection end of a conventional liquid rocket engine, based on injector and chamber design detail, and on propellant and combustion gas properties. It analyzes bipropellant combustion, proceeding stepwise down the chamber from those initial conditions through the nozzle throat.

  16. Information modification and particle collisions in distributed computation.

    PubMed

    Lizier, Joseph T; Prokopenko, Mikhail; Zomaya, Albert Y

    2010-09-01

    Distributed computation can be described in terms of the fundamental operations of information storage, transfer, and modification. To describe the dynamics of information in computation, we need to quantify these operations on a local scale in space and time. In this paper we extend previous work regarding the local quantification of information storage and transfer, to explore how information modification can be quantified at each spatiotemporal point in a system. We introduce the separable information, a measure which locally identifies information modification events where separate inspection of the sources to a computation is misleading about its outcome. We apply this measure to cellular automata, where it is shown to be the first direct quantitative measure to provide evidence for the long-held conjecture that collisions between emergent particles therein are the dominant information modification events.

  17. Ensuring data consistency over CMS distributed computing system

    SciTech Connect

    Rossman, Paul; /Fermilab

    2009-05-01

    CMS utilizes a distributed infrastructure of computing centers to custodially store data, to provide organized processing resources, and to provide analysis computing resources for users. Integrated over the whole system, even in the first year of data taking, the available disk storage approaches 10 petabytes of space. Maintaining consistency between the data bookkeeping, the data transfer system, and physical storage is an interesting technical and operations challenge. In this paper we will discuss the CMS effort to ensure that data is consistently available at all computing centers. We will discuss the technical tools that monitor the consistency of the catalogs and the physical storage as well as the operations model used to find and solve inconsistencies.

  18. Multi-VO support in IHEP's distributed computing environment

    NASA Astrophysics Data System (ADS)

    Yan, T.; Suo, B.; Zhao, X. H.; Zhang, X. M.; Ma, Z. T.; Yan, X. F.; Lin, T.; Deng, Z. Y.; Li, W. D.; Belov, S.; Pelevanyuk, I.; Zhemchugov, A.; Cai, H.

    2015-12-01

    Inspired by the success of BESDIRAC, the distributed computing environment based on DIRAC for the BESIII experiment, several other experiments operated by the Institute of High Energy Physics (IHEP), such as the Circular Electron Positron Collider (CEPC), the Jiangmen Underground Neutrino Observatory (JUNO), the Large High Altitude Air Shower Observatory (LHAASO) and the Hard X-ray Modulation Telescope (HXMT), are willing to use DIRAC to integrate the geographically distributed computing resources made available by their collaborations. In order to minimize manpower and hardware costs, we extended the BESDIRAC platform to support a multi-VO scenario, instead of setting up a self-contained distributed computing environment for each VO. This provides DIRAC as a service for the community of those experiments. To support multi-VO operation, the system architecture of BESDIRAC is adjusted for scalability. The VOMS and DIRAC servers are reconfigured to manage users and groups belonging to several VOs. A lightweight storage resource manager, StoRM, is employed as the central SE to integrate local and grid data. A frontend system is designed for users' massive job splitting, submission and management, with plugins to support new VOs. A monitoring and accounting system is also considered to ease system administration and VO-related resource usage accounting.

  19. Computational investigations of low-emission burner facilities for char gas burning in a power boiler

    NASA Astrophysics Data System (ADS)

    Roslyakov, P. V.; Morozov, I. V.; Zaychenko, M. N.; Sidorkin, V. T.

    2016-04-01

    Various design variants of low-emission burner facilities intended for char gas burning in an operating TP-101 boiler of the Estonia power plant are considered. The planned increase in shale reprocessing volumes and the corresponding rise in char gas volumes make its co-combustion necessary. A burner facility of the required capacity therefore had to be developed that provides effective char gas burning while meeting reliability and environmental requirements. For this purpose, the burner design was based on staged fuel combustion with flue gas recirculation. As a result of a preliminary analysis of possible design variants, three types of burner facilities with proven operating experience were chosen: a vortex burner with the supply of recirculation gases into the secondary air, a vortex burner with a baffled supply of recirculation gases between the primary and secondary air flows, and a burner facility with a vortex pilot burner. Optimum structural characteristics and operating parameters were determined using numerical experiments. These experiments, carried out with the ANSYS CFX computational fluid dynamics software, simulated the mixing, ignition, and burning of char gas. For every type of burner facility, the numerical experiments determined the structural and operating parameters that give effective char gas burning and meet the required environmental standard on nitrogen oxide emissions. The burner facility for char gas burning with a pilot diffusion burner in its central part was developed on the basis of the computation results. Preliminary verification field tests on the TP-101 boiler showed that the actual content of nitrogen oxides in char gas burner flames did not exceed the claimed concentration of 150 ppm (200 mg/m3).

  20. Algorithm-dependent fault tolerance for distributed computing

    SciTech Connect

    P. D. Hough; M. e. Goldsby; E. J. Walsh

    2000-02-01

    Large-scale distributed systems assembled from commodity parts, like CPlant, have become common tools in the distributed computing world. Because of their size and diversity of parts, these systems are prone to failures. Applications that are being run on these systems have not been equipped to efficiently deal with failures, nor is there vendor support for fault tolerance. Thus, when a failure occurs, the application crashes. While most programmers make use of checkpoints to allow for restarting of their applications, this is cumbersome and incurs substantial overhead. In many cases, there are more efficient and more elegant ways in which to address failures. The goal of this project is to develop a software architecture for the detection of and recovery from faults in a cluster computing environment. The detection phase relies on the latest techniques developed in the fault tolerance community. Recovery is being addressed in an application-dependent manner, thus allowing the programmer to take advantage of algorithmic characteristics to reduce the overhead of fault tolerance. This architecture will allow large-scale applications to be more robust in high-performance computing environments that are comprised of clusters of commodity computers such as CPlant and SMP clusters.

  1. Distributing Data from Desktop to Hand-Held Computers

    NASA Technical Reports Server (NTRS)

    Elmore, Jason L.

    2005-01-01

    A system of server and client software formats and redistributes data from commercially available desktop to commercially available hand-held computers via both wired and wireless networks. This software is an inexpensive means of enabling engineers and technicians to gain access to current sensor data while working in locations in which such data would otherwise be inaccessible. The sensor data are first gathered by a data-acquisition server computer, then transmitted via a wired network to a data-distribution computer that executes the server portion of the present software. Data in all sensor channels -- both raw sensor outputs in millivolt units and results of conversion to engineering units -- are made available for distribution. Selected subsets of the data are transmitted to each hand-held computer via the wired and then a wireless network. The selection of the subsets and the choice of the sequences and formats for displaying the data is made by means of a user interface generated by the client portion of the software. The data displayed on the screens of hand-held units can be updated at rates from 1 to

  2. Lightweight distributed computing for intraoperative real-time image guidance

    NASA Astrophysics Data System (ADS)

    Suwelack, Stefan; Katic, Darko; Wagner, Simon; Spengler, Patrick; Bodenstedt, Sebastian; Röhl, Sebastian; Dillmann, Rüdiger; Speidel, Stefanie

    2012-02-01

    In order to provide real-time intraoperative guidance, computer assisted surgery (CAS) systems often rely on computationally expensive algorithms. The real-time constraint is especially challenging if several components such as intraoperative image processing, soft tissue registration or context aware visualization are combined in a single system. In this paper, we present a lightweight approach to distribute the workload over several workstations based on the OpenIGTLink protocol. We use XML-based message passing for remote procedure calls and native types for transferring data such as images, meshes or point coordinates. Two different, but typical scenarios are considered in order to evaluate the performance of the new system. First, we analyze a real-time soft tissue registration algorithm based on a finite element (FE) model. Here, we use the proposed approach to distribute the computational workload between a primary workstation that handles sensor data processing and visualization and a dedicated workstation that runs the real-time FE algorithm. We show that the additional overhead that is introduced by the technique is small compared to the total execution time. Furthermore, the approach is used to speed up a context aware augmented reality based navigation system for dental implant surgery. In this scenario, the additional delay for running the computationally expensive reasoning server on a separate workstation is less than a millisecond. The results show that the presented approach is a promising strategy to speed up real-time CAS systems.
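
    The division of labour described here, a visualisation workstation invoking an expensive routine on a dedicated compute workstation through XML-based remote procedure calls, can be mimicked with Python's standard xmlrpc modules; the function name, host and port below are placeholders, not the OpenIGTLink-based interface of the paper:

        # One-file sketch: a "compute node" serves an expensive routine over XML-RPC
        # and a "visualisation node" calls it remotely. Names and port are placeholders.
        import threading
        from xmlrpc.client import ServerProxy
        from xmlrpc.server import SimpleXMLRPCServer


        def deform_mesh(displacements):
            # Stand-in for the expensive finite-element update running on the remote node.
            return [0.5 * d for d in displacements]


        server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
        server.register_function(deform_mesh, "deform_mesh")
        threading.Thread(target=server.serve_forever, daemon=True).start()

        client = ServerProxy("http://127.0.0.1:8000")
        print(client.deform_mesh([0.1, 0.2, 0.3]))   # [0.05, 0.1, 0.15]
        server.shutdown()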

  3. Simulation of emission tomography using grid middleware for distributed computing.

    PubMed

    Thomason, M G; Longton, R F; Gregor, J; Smith, G T; Hutson, R K

    2004-09-01

    SimSET is Monte Carlo simulation software for emission tomography. This paper describes a simple but effective scheme for parallel execution of SimSET using NetSolve, a client-server system for distributed computation. NetSolve (version 1.4.1) is "grid middleware" which enables a user (the client) to run specific computations remotely and simultaneously on a grid of networked computers (the servers). Since the servers do not have to be identical machines, computation may take place in a heterogeneous environment. To take advantage of diversity in machines and their workloads, a client-side scheduler was implemented for the Monte Carlo simulation. The scheduler partitions the total decay events by taking into account the inherent compute-speeds and recent average workloads, i.e., the scheduler assigns more decay events to processors expected to give faster service and fewer decay events to those expected to give slower service. When compute-speeds and sustained workloads are taken into account, the speed-up is essentially linear in the number of equivalent "maximum-service" processors. One modification in the SimSET code (version 2.6.2.3) was made to ensure that the total number of decay events specified by the user is maintained in the distributed simulation. No other modifications in the standard SimSET code were made. Each processor runs complete SimSET code for its assignment of decay events, independently of others running simultaneously. Empirical results are reported for simulation of a clinical-quality lung perfusion study.
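
    The weighting idea behind such a client-side scheduler, assigning more decay events to faster and less-loaded servers while preserving the user-specified total, amounts to a proportional split; a small illustrative sketch (the weighting formula is an assumption, not taken from the paper):

        # Proportional split of N decay events across heterogeneous servers, weighting
        # each by compute speed divided by its recent average load (illustrative only).
        def partition_events(total_events, servers):
            weights = {name: speed / max(load, 1e-9) for name, (speed, load) in servers.items()}
            total_w = sum(weights.values())
            shares = {name: int(total_events * w / total_w) for name, w in weights.items()}
            # Give any events lost to integer rounding to the highest-weight server,
            # so the user-specified total number of decay events is preserved.
            shares[max(weights, key=weights.get)] += total_events - sum(shares.values())
            return shares


        if __name__ == "__main__":
            servers = {"fast-idle": (2.0, 0.2), "fast-busy": (2.0, 1.0), "slow-idle": (1.0, 0.2)}
            print(partition_events(1_000_000, servers))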

  4. Opportunities for artificial intelligence application in computer- aided management of mixed waste incinerator facilities

    SciTech Connect

    Rivera, A.L.; Ferrada, J.J.; Singh, S.P.N.

    1992-01-01

    The Department of Energy/Oak Ridge Field Office (DOE/OR) operates a mixed waste incinerator facility at the Oak Ridge K-25 Site. It is designed for the thermal treatment of incinerable liquid, sludge, and solid waste regulated under the Toxic Substances Control Act (TSCA) and the Resource Conservation and Recovery Act (RCRA). This facility, known as the TSCA Incinerator, services seven DOE/OR installations. This incinerator was recently authorized for production operation in the United States for the processing of mixed (radioactively contaminated-chemically hazardous) wastes as regulated under TSCA and RCRA. Operation of the TSCA Incinerator is highly constrained as a result of the regulatory, institutional, technical, and resource availability requirements. These requirements impact the characteristics and disposition of incinerator residues, limit the quality of liquid and gaseous effluents, limit the characteristics and rates of waste feeds and operating conditions, and restrict the handling of the waste feed inventories. This incinerator facility presents an opportunity for applying computer technology as a technical resource for mixed waste incinerator operation to facilitate promoting and sustaining a continuous performance improvement process while demonstrating compliance. Demonstrated computer-aided management systems could be transferred to future mixed waste incinerator facilities.

  6. Report of the Ad-Hoc Combustion Research Facility Committee on Computational Resources for Combustion Research

    SciTech Connect

    McLean, W.J.

    1983-08-01

    This report was prepared by the Combustion Research Facility Ad Hoc Committee on Computational Resources for Combustion Research. The committee was asked by Peter L. Mattern to determine CRF computer needs for 1983 to 1988, including consideration of CRF support of computing needs of DOE-sponsored combustion research programs outside Sandia. In brief we find that advancing the understanding of the chemical and physical processes in combustion will require a rapidly increasing use of fast supercomputers with large memories. The acquisition and unclassified availability of such a machine at Sandia National Laboratories, Livermore, will be an important step in maintaining our international leadership in computational aspects of combustion research. Such a machine would also enable us to interact with other DOE sponsored researchers by providing them with access to a supercomputer and to our extensive combustion related software.

  7. 120. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    120. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "foundation & first floor plan" - structural, AS-BLT AW 35-46-04, sheet 65, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  8. 122. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    122. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "elevations & details" - structural, AS-BLT AW 35-46-04, sheet 73, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  9. 118. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    118. Back side technical facilities S.R. radar transmitter & computer building no. 102, "building sections - sheet I" - architectural, AS-BLT AW 35-46-04, sheet 13, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  10. 121. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    121. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "sections & elevations" - structural, AS-BLT AW 35-46-04, sheet 72, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  11. 119. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    119. Back side technical facilities S.R. radar transmitter & computer building no. 102, section I "tower plan, sections & details" - structural, AS-BLT AW 35-46-04, sheet 62, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  12. 117. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    117. Back side technical facilities S.R. radar transmitter & computer building no. 102, "building sections - sheet I" - architectural, AS-BLT AW 35-46-04, sheet 12, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  13. National Ignition Facility computational fluid dynamics modeling and light fixture case studies

    SciTech Connect

    Martin, R.; Bernardin, J.; Parietti, L.; Dennison, B.

    1998-02-01

    This report serves as a guide to the use of computational fluid dynamics (CFD) as a design tool for the National Ignition Facility (NIF) program Title I and Title II design phases at Lawrence Livermore National Laboratory. In particular, this report provides general guidelines on the technical approach to performing and interpreting any and all CFD calculations. In addition, a complete CFD analysis is presented to illustrate these guidelines on a NIF-related thermal problem.

  14. Fermilab Central Computing Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

    SciTech Connect

    Krstulovich, S.F.

    1986-11-12

    This report is developed as part of the Fermilab Central Computing Facility Project Title II Design Documentation Update under the provisions of DOE Document 6430.1, Chapter XIII-21, Section 14, paragraph a. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis and should be considered a supplement to the Title I Design Report dated March 1986, wherein energy-related issues are discussed pertaining to building envelope and orientation as well as electrical systems design.

  15. Multi-threaded, discrete event simulation of distributed computing systems

    NASA Astrophysics Data System (ADS)

    Legrand, Iosif; MONARC Collaboration

    2001-10-01

    The LHC experiments have envisaged computing systems of unprecedented complexity, for which it is necessary to provide a realistic description and modeling of data access patterns, and of many jobs running concurrently on large scale distributed systems and exchanging very large amounts of data. A process oriented approach for discrete event simulation is well suited to describe various activities running concurrently, as well as the stochastic arrival patterns specific to this type of simulation. Threaded objects or "Active Objects" can provide a natural way to map the specific behaviour of distributed data processing into the simulation program. The simulation tool developed within MONARC is based on Java (TM) technology which provides adequate tools for developing a flexible and distributed process oriented simulation. Proper graphics tools, and ways to analyze data interactively, are essential in any simulation project. The design elements, status and features of the MONARC simulation tool are presented. The program allows realistic modeling of complex data access patterns by multiple concurrent users in large scale computing systems in a wide range of possible architectures, from centralized to highly distributed. Comparison between queuing theory and realistic client-server measurements is also presented.
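
    The following sketch is not the MONARC tool (which is Java based); it is a minimal Python event-queue illustration of the process-oriented discrete event idea: jobs arrive stochastically, compete for a shared resource, and the simulation advances by processing time-ordered events rather than fixed time steps. The arrival and service rates are arbitrary.

    ```python
    # Minimal discrete event simulation: stochastic arrivals queue for a single
    # shared resource; the event heap orders the simulation in time.
    import heapq, random

    def simulate(n_jobs, arrival_rate, service_rate, seed=1):
        random.seed(seed)
        events = []                          # (time, kind, job_id) ordered by time
        t = 0.0
        for job in range(n_jobs):
            t += random.expovariate(arrival_rate)
            heapq.heappush(events, (t, "arrive", job))
        server_free_at = 0.0
        completions = {}
        while events:
            now, kind, job = heapq.heappop(events)
            if kind == "arrive":
                start = max(now, server_free_at)          # FIFO single server
                service = random.expovariate(service_rate)
                server_free_at = start + service
                heapq.heappush(events, (server_free_at, "done", job))
            else:
                completions[job] = now
        return completions

    if __name__ == "__main__":
        done = simulate(n_jobs=5, arrival_rate=1.0, service_rate=1.5)
        for job, t in sorted(done.items()):
            print(f"job {job} finished at t={t:.2f}")
    ```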

  16. Using mobile distributed pyrolysis facilities to deliver a forest residue resource for bio-fuel production

    NASA Astrophysics Data System (ADS)

    Brown, Duncan

    Distributed mobile conversion facilities using either fast pyrolysis or torrefaction processes can be used to convert forest residues to more energy dense substances (bio-oil, bio-slurry or torrefied wood) that can be transported as feedstock for bio-fuel facilities. All feedstocks are suited for gasification, which produces syngas that can be used to synthesise petrol or diesel via Fischer-Tropsch reactions, or produce hydrogen via water gas shift reactions. Alternatively, the bio-oil product of fast pyrolysis may be upgraded to produce petrol and diesel, or can undergo steam reformation to produce hydrogen. Implementing a network of mobile facilities reduces the energy content of forest residues delivered to a bio-fuel facility, as mobile facilities use a fraction of the biomass energy content to meet thermal or electrical demands. The total energy delivered by bio-oil, bio-slurry and torrefied wood is 45%, 65% and 87% of the initial forest residue energy content, respectively. However, implementing mobile facilities is economically feasible when large transport distances are required. For an annual harvest of 1.717 million m3 (equivalent to 2000 ODTPD), transport costs are reduced to less than 40% of the total levelised delivered feedstock cost when mobile facilities are implemented; transport costs account for up to 80% of feedstock costs for conventional woodchip delivery. Torrefaction provides the lowest cost pathway of delivering a forest residue resource when using mobile facilities. Cost savings occur against woodchip delivery for annual forest residue harvests above 2.25 million m3 or when transport distances greater than 250 km are required. Important parameters that influence levelised delivered costs of feedstock are transport distances (forest residue spatial density), haul cost factors, thermal and electrical demands of mobile facilities, and initial moisture content of forest residues. Relocating mobile facilities can be optimised for lowest cost
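
    A small worked example of the delivered-energy figures is given below, using only the fractions quoted in the abstract (45%, 65% and 87% for bio-oil, bio-slurry and torrefied wood); the residue energy density is an assumed, illustrative value and is not taken from the study.

    ```python
    # Worked example of the delivered-energy fractions quoted in the abstract.
    # The assumed residue energy density is illustrative only.

    DELIVERED_FRACTION = {"bio-oil": 0.45, "bio-slurry": 0.65, "torrefied wood": 0.87}
    RESIDUE_ENERGY_GJ_PER_M3 = 7.0   # assumption for illustration, not from the study

    def delivered_energy(harvest_m3):
        total_in = harvest_m3 * RESIDUE_ENERGY_GJ_PER_M3
        return {product: total_in * frac for product, frac in DELIVERED_FRACTION.items()}

    if __name__ == "__main__":
        for product, gj in delivered_energy(1_717_000).items():   # 1.717 million m3 harvest
            print(f"{product}: {gj / 1e6:.2f} PJ delivered")
    ```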

  17. Distributed Computation Resources for Earth System Grid Federation (ESGF)

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Doutriaux, C.; Williams, D. N.

    2014-12-01

    The Intergovernmental Panel on Climate Change (IPCC), prompted by the United Nations General Assembly, has published a series of papers in their Fifth Assessment Report (AR5) on processes, impacts, and mitigations of climate change in 2013. The science used in these reports was generated by an international group of domain experts. They studied various scenarios of climate change through the use of highly complex computer models to simulate the Earth's climate over long periods of time. The resulting total data of approximately five petabytes are stored in a distributed data grid known as the Earth System Grid Federation (ESGF). Through the ESGF, consumers of the data can find and download data with limited capabilities for server-side processing. The Sixth Assessment Report (AR6) is already in the planning stages and is estimated to create as much as two orders of magnitude more data than the AR5 distributed archive. It is clear that data analysis capabilities currently in use will be inadequate to allow for the necessary science to be done with AR6 data—the data will just be too big. A major paradigm shift is required: rather than downloading data to local systems for analysis, the analysis routines must be moved to the data and the computations performed on distributed platforms. In preparation for this need, the ESGF has started a Compute Working Team (CWT) to create solutions that allow users to perform distributed, high-performance data analytics on the AR6 data. The team will be designing and developing a general Application Programming Interface (API) to enable highly parallel, server-side processing throughout the ESGF data grid. This API will be integrated with multiple analysis and visualization tools, such as the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT), netCDF Operator (NCO), and others. This presentation will provide an update on the ESGF CWT's overall approach toward enabling the necessary storage proximal computational

  18. [The Computer Competency of Nurses in Long-Term Care Facilities and Related Factors].

    PubMed

    Chang, Ya-Ping; Kuo, Huai-Ting; Li, I-Chuan

    2016-12-01

    It is important for nurses who work in long-term care facilities (LTCFs) to have an adequate level of computer competency due to the multidisciplinary and comprehensive nature of long-term care services. Thus, it is important to understand the current computer competency of nursing staff in LTCFs and the factors that relate to this competency. To explore the computer competency of LTCF nurses and to identify the demographic and computer-usage characteristics that relate significantly to computer competency in the LTCF environment. A cross-sectional research design and a self-report questionnaire were used to collect data from 185 nurses working at LTCFs in Taipei. The results found that the variables of the frequency of computer use (β = .33), age (β = -.30), type(s) of the software used at work (β = .28), hours of on-the-job training (β = -.14), prior work experience at other LTCFs (β = -.14), and Internet use at home (β = .12) explain 58.0% of the variance in the computer competency of participants. The results of the present study suggest that the following measures may help increase the computer competency of LTCF nurses. (1) Nurses should be encouraged to use electronic nursing records rather than handwritten records. (2) On-the-job training programs should emphasize participant competency in the Excel software package in order to maintain efficient, good-quality LTC services after implementation of the LTC insurance policy.

  19. Secure distributed genome analysis for GWAS and sequence comparison computation

    PubMed Central

    2015-01-01

    Background The rapid increase in the availability and volume of genomic data makes significant advances in biomedical research possible, but sharing of genomic data poses challenges due to the highly sensitive nature of such data. To address the challenges, a competition for secure distributed processing of genomic data was organized by the iDASH research center. Methods In this work we propose techniques for securing computation with real-life genomic data for minor allele frequency and chi-squared statistics computation, as well as distance computation between two genomic sequences, as specified by the iDASH competition tasks. We put forward novel optimizations, including a generalization of a version of mergesort, which might be of independent interest. Results We provide implementation results of our techniques based on secret sharing that demonstrate practicality of the suggested protocols and also report on performance improvements due to our optimization techniques. Conclusions This work describes our techniques, findings, and experimental results developed and obtained as part of the iDASH 2015 research competition to secure real-life genomic computations and shows the feasibility of securely computing with genomic data in practice. PMID:26733307
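
    The flavour of the secret-sharing approach can be conveyed with a toy sketch. The snippet below is not the iDASH protocol: it only shows additive secret sharing of per-site allele counts over a prime field, so that compute servers learn nothing but the reconstructed aggregate; the real protocols (and the chi-squared statistics) are considerably more involved, and the counts and party layout here are invented.

    ```python
    # Toy additive secret sharing: each site splits its private count into random
    # shares, servers add shares locally, and only the aggregate is reconstructed.
    import random

    PRIME = 2_147_483_647  # field modulus for the toy example

    def share(value, n_parties):
        shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % PRIME)
        return shares

    def reconstruct(shares):
        return sum(shares) % PRIME

    if __name__ == "__main__":
        private_counts = [12, 7, 30]          # minor-allele counts held by 3 sites
        n = 3
        # each site sends one share to each compute server
        per_server = [share(count, n) for count in private_counts]
        server_sums = [sum(column) % PRIME for column in zip(*per_server)]
        print("aggregate minor-allele count:", reconstruct(server_sums))
    ```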

  20. An environmental testing facility for Space Station Freedom power management and distribution hardware

    NASA Technical Reports Server (NTRS)

    Jackola, Arthur S.; Hartjen, Gary L.

    1992-01-01

    The plans for a new test facility, including new environmental test systems, which are presently under construction, and the major environmental Test Support Equipment (TSE) used therein are addressed. This all-new Rocketdyne facility will perform space simulation environmental tests on Power Management and Distribution (PMAD) hardware for Space Station Freedom (SSF) at the Engineering Model, Qualification Model, and Flight Model levels of fidelity. Testing will include Random Vibration in three axes, Thermal Vacuum, Thermal Cycling, and Thermal Burn-in, as well as numerous electrical functional tests. The facility is designed to support a relatively high throughput of hardware under test, while maintaining the high standards required for a man-rated space program.

  2. Health workers' knowledge of and attitudes towards computer applications in rural African health facilities.

    PubMed

    Sukums, Felix; Mensah, Nathan; Mpembeni, Rose; Kaltschmidt, Jens; Haefeli, Walter E; Blank, Antje

    2014-01-01

    The QUALMAT (Quality of Maternal and Prenatal Care: Bridging the Know-do Gap) project has introduced an electronic clinical decision support system (CDSS) for pre-natal and maternal care services in rural primary health facilities in Burkina Faso, Ghana, and Tanzania. To report an assessment of health providers' computer knowledge, experience, and attitudes prior to the implementation of the QUALMAT electronic CDSS. A cross-sectional study was conducted with providers in 24 QUALMAT project sites. Information was collected using structured questionnaires. Chi-squared tests and one-way ANOVA describe the association between computer knowledge, attitudes, and other factors. Semi-structured interviews and focus groups were conducted to gain further insights. A total of 108 providers responded; 63% were from Tanzania and 37% from Ghana. The mean age was 37.6 years, and 79% were female. Only 40% had ever used computers, and 29% had prior computer training. About 80% were computer illiterate or beginners. Educational level, age, and years of work experience were significantly associated with computer knowledge (p<0.01). Most (95.3%) had positive attitudes towards computers - average score (±SD) of 37.2 (±4.9). Females had significantly lower scores than males. Interviews and group discussions showed that although most were lacking computer knowledge and experience, they were optimistic about overcoming challenges associated with the introduction of computers in their workplace. Given the low levels of computer knowledge among rural health workers in Africa, it is important to provide adequate training and support to ensure the successful uptake of electronic CDSSs in these settings. The positive attitudes towards computers found in this study underscore that rural care providers, too, are ready to use such technology.

  3. Health workers’ knowledge of and attitudes towards computer applications in rural African health facilities

    PubMed Central

    Sukums, Felix; Mensah, Nathan; Mpembeni, Rose; Kaltschmidt, Jens; Haefeli, Walter E.; Blank, Antje

    2014-01-01

    Background The QUALMAT (Quality of Maternal and Prenatal Care: Bridging the Know-do Gap) project has introduced an electronic clinical decision support system (CDSS) for pre-natal and maternal care services in rural primary health facilities in Burkina Faso, Ghana, and Tanzania. Objective To report an assessment of health providers’ computer knowledge, experience, and attitudes prior to the implementation of the QUALMAT electronic CDSS. Design A cross-sectional study was conducted with providers in 24 QUALMAT project sites. Information was collected using structured questionnaires. Chi-squared tests and one-way ANOVA describe the association between computer knowledge, attitudes, and other factors. Semi-structured interviews and focus groups were conducted to gain further insights. Results A total of 108 providers responded; 63% were from Tanzania and 37% from Ghana. The mean age was 37.6 years, and 79% were female. Only 40% had ever used computers, and 29% had prior computer training. About 80% were computer illiterate or beginners. Educational level, age, and years of work experience were significantly associated with computer knowledge (p<0.01). Most (95.3%) had positive attitudes towards computers – average score (±SD) of 37.2 (±4.9). Females had significantly lower scores than males. Interviews and group discussions showed that although most were lacking computer knowledge and experience, they were optimistic about overcoming challenges associated with the introduction of computers in their workplace. Conclusions Given the low levels of computer knowledge among rural health workers in Africa, it is important to provide adequate training and support to ensure the successful uptake of electronic CDSSs in these settings. The positive attitudes towards computers found in this study underscore that rural care providers, too, are ready to use such technology. PMID:25361721

  5. Heat Tracing Percolation in Managed Aquifer Recharge Facilities using Fiber Optic Distributed Temperature Sensing

    NASA Astrophysics Data System (ADS)

    Becker, M.; Ellis, W.; Bauer, B.; Hutchinson, A.

    2013-12-01

    Percolation rates in Managed Aquifer Recharge (MAR) facilities, such as recharge basins and stream channels, can vary widely through both time and space. Natural variations in sediment hydraulic conductivity can create 'dead zones' in which percolation rates are negligible. Clogging is a constant problem, leading to decay in facility percolation rates. Measuring percolation rate variations is therefore important for management, maintenance, and remediation of surface MAR facilities. We have used Fiber Optic Distributed Temperature Sensing (FODTS) to monitor percolation in two very different recharge facilities. The first is a small (2 ha) nearly round recharge basin of homogeneous sediment type in which water balance can be closely monitored. The second is a long narrow river channel separated from an active river by a levee. The alluvial sediment in the river channel varies widely in texture and water balance is difficult to monitor independently. Both facilities were monitored by trenching in fiber optic cable and measuring the propagation rate of the diurnal temperature oscillations carried downward with infiltrating water. In this way, heat was used as a tracer of percolation rates along the section defined by the trenched cable (400 and 1600 m, respectively). We were able to confirm the FODTS measurements of percolation in the recharge basin and demonstrate the technique's wide applicability in the river channel. Results from the measurements have been used to understand the hydraulic behavior of percolation in the facilities and to make management decisions regarding facility operations and the potential need for additional surface sediment remediation. [Figure caption: estimation of specific discharge (m/day) through the basin using the wavelet method, with basin stage.]
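
    A sketch of the underlying heat-tracing calculation, on synthetic data, is shown below. The wavelet analysis used in the study is replaced by a simple cross-correlation, and the sampling interval, sensor spacing, and temperature series are assumptions: the diurnal wave recorded at depth lags the shallow record, and the lag divided into the depth separation gives an apparent downward propagation velocity related to the percolation rate.

    ```python
    # Synthetic illustration of heat tracing: estimate the time lag of the
    # diurnal temperature wave between two depths via cross-correlation.
    import numpy as np

    def lag_hours(shallow, deep, dt_hours):
        shallow = shallow - shallow.mean()
        deep = deep - deep.mean()
        corr = np.correlate(deep, shallow, mode="full")
        lag = corr.argmax() - (len(shallow) - 1)      # lag of deep relative to shallow
        return lag * dt_hours

    if __name__ == "__main__":
        dt = 0.25                                     # 15-minute samples (assumed)
        t = np.arange(0, 72, dt)                      # three days of record
        true_lag = 6.0                                # hours, built into the synthetic series
        shallow = 20 + 3.0 * np.sin(2 * np.pi * t / 24)
        deep = 20 + 1.5 * np.sin(2 * np.pi * (t - true_lag) / 24)
        lag = lag_hours(shallow, deep, dt)
        depth_separation_m = 1.0                      # assumed sensor spacing
        print(f"estimated lag: {lag:.2f} h, velocity ~ {depth_separation_m / lag:.3f} m/h")
    ```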

  6. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    NASA Astrophysics Data System (ADS)

    Schovancová, J.; Campana, S.; Di Girolamo, A.; Jézéquel, S.; Ueda, I.; Wenaus, T.; Atlas Collaboration

    2014-06-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during the LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or service expert; ATLAS national contacts and sites, for the real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During the LHC Run I a significant development effort has been invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements, and the re-usability of the visualization bits across the different tools. A rich family of filtering and searching options, enhancing the available user interfaces, comes naturally with the data and visualization layer separation. With a variety of reliable monitoring data accessible through standardized interfaces, the possibility of automating actions under well defined conditions correlating multiple data sources has become feasible. In this contribution we also discuss the automated exclusion of degraded resources and their automated recovery in various activities.

  7. Probability distributions of molecular observables computed from Markov models.

    PubMed

    Noé, Frank

    2008-06-28

    Molecular dynamics (MD) simulations can be used to estimate transition rates between conformational substates of the simulated molecule. Such an estimation is associated with statistical uncertainty, which depends on the number of observed transitions. In turn, it induces uncertainties in any property computed from the simulation, such as free energy differences or the time scales involved in the system's kinetics. Assessing these uncertainties is essential for testing the reliability of a given observation and also to plan further simulations in such a way that the most serious uncertainties will be reduced with minimal effort. Here, a rigorous statistical method is proposed to approximate the complete statistical distribution of any observable of an MD simulation provided that one can identify conformational substates such that the transition process between them may be modeled with a memoryless jump process, i.e., Markov or Master equation dynamics. The method is based on sampling the statistical distribution of Markov transition matrices that is induced by the observed transition events. It allows physically meaningful constraints to be included, such as sampling only matrices that fulfill detailed balance, or matrices that produce a predefined equilibrium distribution of states. The method is illustrated on μs MD simulations of a hexapeptide for which the distributions and uncertainties of the free energy differences between conformations, the transition matrix elements, and the transition matrix eigenvalues are estimated. It is found that both constraints, detailed balance and predefined equilibrium distribution, can significantly reduce the uncertainty of some observables.
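
    A simplified sketch of the sampling idea follows. Given an observed transition-count matrix, row-stochastic transition matrices are drawn from the row-wise Dirichlet posterior and each draw is propagated into an observable (here the slowest implied timescale). The detailed-balance and fixed-equilibrium constraints discussed in the paper are not enforced in this toy version, and the count matrix is invented.

    ```python
    # Sample transition matrices from the posterior induced by observed counts
    # and propagate each sample into an observable (slowest implied timescale).
    import numpy as np

    def sample_timescales(counts, lag_time=1.0, n_samples=1000, rng=None):
        rng = np.random.default_rng(rng)
        timescales = []
        for _ in range(n_samples):
            T = np.vstack([rng.dirichlet(row + 1) for row in counts])  # +1: uniform prior
            eigvals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
            mu2 = eigvals[1]                      # second-largest eigenvalue magnitude
            timescales.append(-lag_time / np.log(mu2))
        return np.array(timescales)

    if __name__ == "__main__":
        observed = np.array([[90, 10, 0], [5, 80, 15], [0, 20, 80]])  # invented counts
        ts = sample_timescales(observed, lag_time=1.0, n_samples=2000, rng=42)
        print(f"slowest implied timescale: {ts.mean():.2f} +/- {ts.std():.2f} (sampled)")
    ```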

  8. Advances in the archiving and distribution facilities at the Space Telescope Science Institute

    NASA Astrophysics Data System (ADS)

    Hanisch, Robert J.; Postman, Marc; Pollizzi, Joseph; Richon, J.

    1998-07-01

    The Hubble Data Archive at the Space Telescope Science Institute contains over 4.3 TB of data, primarily from the Hubble Space Telescope, but also from complementary space-based and ground-based facilities. We are in the process of upgrading and generalizing many of the HDA's component systems, developing tools to provide more integrated access to the HDA holdings, and working with other major data providing organizations to implement global data location services for astronomy and other space science disciplines. This paper describes the key elements of our archiving and data distribution systems, including a planned transition to DVD media, data compression, data segregation, on-the-fly calibration, an engineering data warehouse, and distributed search and retrieval facilities.

  9. A Distributed Computing Infrastructure for Computational Thermodynamic Calculations of Solid-Liquid Phase Equilibria

    NASA Astrophysics Data System (ADS)

    Ghiorso, M. S.; Kress, V. C.

    2004-12-01

    Software tools like MELTS (Ghiorso and Sack, 1995, CMP 119:197) and its derivatives (Ghiorso et al., 2002, G3 3:10.1029/2001GC000217) are sophisticated calculators used by geoscientists to quantify the chemistry of melt production, transport and storage. These tools utilize computational thermodynamics to evaluate the equilibrium state of the system under specified external conditions by minimizing a suitably constructed thermodynamic potential. As with any thermodynamically based tool, the principal advantage in employing these techniques to model igneous processes is the intrinsic ability to couple the chemistry and energetics of the evolution of the system in a self consistent and rigorous formalism. Access to MELTS is normally accomplished via a standalone X11-based executable or as a Java-based web applet. The latter is a dedicated client-server application rooted at the University of Chicago. Our ongoing objective is the development of a distributed computing infrastructure to provide "MELTS-like" computations on demand to remote network users by utilizing a language independent client-server protocol based on CORBA. The advantages of this model are numerous. First, the burden of implementing and executing MELTS computations is centralized with a software implementation optimized to a compute cluster dedicated for that purpose. Improvements and updates to MELTS software are handled locally on the server side without intervention of the user, and the server model lessens the burden of supporting the computational code on a variety of hardware and OS platforms. Second, the client hardware platform does not incur the computational cost of performing a MELTS simulation and the remote user can focus on the task of incorporating results into their model. Third, the client user can write software in a computer language of their choosing and procedural calls to the MELTS library can be executed transparently over the network as if a local language-compatible library of

  10. Parallel matrix transpose algorithms on distributed memory concurrent computers

    SciTech Connect

    Choi, J.; Walker, D.W.; Dongarra, J.J.

    1993-10-01

    This paper describes parallel matrix transpose algorithms on distributed memory concurrent processors. It is assumed that the matrix is distributed over a P x Q processor template with a block scattered data distribution. P, Q, and the block size can be arbitrary, so the algorithms have wide applicability. The communication schemes of the algorithms are determined by the greatest common divisor (GCD) of P and Q. If P and Q are relatively prime, the matrix transpose algorithm involves complete exchange communication. If P and Q are not relatively prime, processors are divided into GCD groups and the communication operations are overlapped for different groups of processors. Processors transpose GCD wrapped diagonal blocks simultaneously, and the matrix can be transposed with LCM/GCD steps, where LCM is the least common multiple of P and Q. The algorithms make use of non-blocking, point-to-point communication between processors. The use of nonblocking communication allows a processor to overlap the messages that it sends to different processors, thereby avoiding unnecessary synchronization. Combined with the matrix multiplication routine, C = A·B, the algorithms are used to compute parallel multiplications of transposed matrices, C = A^T·B^T, in the PUMMA package. Details of the parallel implementation of the algorithms are given, and results are presented for runs on the Intel Touchstone Delta computer.
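
    The counting argument above can be captured in a few lines: for a P x Q processor template the transpose takes LCM(P, Q)/GCD(P, Q) steps and the processors fall into GCD(P, Q) groups whose exchanges can be overlapped. The helper below only reproduces this bookkeeping, not the actual message passing of the PUMMA routines.

    ```python
    # Bookkeeping for the transpose communication structure on a P x Q template.
    from math import gcd

    def transpose_schedule(P, Q):
        g = gcd(P, Q)
        lcm = P * Q // g
        return {
            "groups": g,                    # GCD(P, Q) overlappable processor groups
            "steps": lcm // g,              # LCM(P, Q)/GCD(P, Q) transpose steps
            "complete_exchange": g == 1,    # P and Q relatively prime
        }

    if __name__ == "__main__":
        for P, Q in [(4, 6), (5, 7), (8, 8)]:
            print((P, Q), transpose_schedule(P, Q))
    ```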

  11. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key Words: Autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis

  12. High performance computational chemistry: Towards fully distributed parallel algorithms

    SciTech Connect

    Guest, M.F.; Apra, E.; Bernholdt, D.E.

    1994-07-01

    An account is given of work in progress within the High Performance Computational Chemistry Group (HPCC) at the Pacific Northwest Laboratory (PNL) to develop molecular modeling software applications for massively parallel processors (MPPs). A discussion of the issues in developing scalable parallel algorithms is presented, with a particular focus on the distribution, as opposed to the replication, of key data structures. Replication of large data structures limits the maximum calculation size by imposing a low ratio of processors to memory. Only applications that distribute both data and computation across processors are truly scalable. The use of shared data structures, which may be independently accessed by each process even in a distributed-memory environment, greatly simplifies development and provides a significant performance enhancement. In describing tools to support this programming paradigm, an outline is given of the implementation and performance of a highly efficient and scalable algorithm to perform quadratically convergent, self-consistent field calculations on molecular systems. A brief account is given of the development of corresponding MPP capabilities in the areas of periodic Hartree Fock, Moeller-Plesset perturbation theory (MP2), density functional theory, and molecular dynamics. Performance figures are presented using both the Intel Touchstone Delta and Kendall Square Research KSR-2 supercomputers.

  13. Facilities

    NASA Technical Reports Server (NTRS)

    1999-01-01

    An expansion of medical data collection facilities was necessary to implement the Extended Duration Orbiter Medical Project (EDOMP). The primary objective of the EDOMP was to ensure the capability of crew members to reenter the Earth's atmosphere, land, and egress safely following a 16-day flight. Therefore, access to crew members as soon as possible after landing was crucial for most data collection activities. Also, with the advent of EDOMP, the quantity of investigations increased such that the landing day maximum data collection time increased accordingly from two hours to four hours. The preflight and postflight testing facilities at the Johnson Space Center (JSC) required only some additional testing equipment and minor modifications to the existing laboratories in order to fulfill EDOMP requirements. Necessary modifications at the landing sites were much more extensive.

  14. A distributed computing tool for generating neural simulation databases.

    PubMed

    Calin-Jageman, Robert J; Katz, Paul S

    2006-12-01

    After developing a model neuron or network, it is important to systematically explore its behavior across a wide range of parameter values or experimental conditions, or both. However, compiling a very large set of simulation runs is challenging because it typically requires both access to and expertise with high-performance computing facilities. To lower the barrier for large-scale model analysis, we have developed NeuronPM, a client/server application that creates a "screen-saver" cluster for running simulations in NEURON (Hines & Carnevale, 1997). NeuronPM provides a user-friendly way to use existing computing resources to catalog the performance of a neural simulation across a wide range of parameter values and experimental conditions. The NeuronPM client is a Windows-based screen saver, and the NeuronPM server can be hosted on any Apache/PHP/MySQL server. During idle time, the client retrieves model files and work assignments from the server, invokes NEURON to run the simulation, and returns results to the server. Administrative panels make it simple to upload model files, define the parameters and conditions to vary, and then monitor client status and work progress. NeuronPM is open-source freeware and is available for download at http://neuronpm.homeip.net . It is a useful entry-level tool for systematically analyzing complex neuron and network simulations.
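
    The client work cycle described above can be sketched as follows; the Apache/PHP/MySQL server and the NEURON invocation are replaced here by in-memory stand-ins so the sketch runs on its own, and the parameter names are invented. The real client fetches model files and work assignments over HTTP during idle time and uploads results back to the server.

    ```python
    # Schematic of a screen-saver style worker loop: fetch an assignment, run the
    # simulation, return the result, repeat until no work remains.
    import time

    def fetch_assignment(pending):
        """Stand-in for an HTTP request to the work server."""
        return pending.pop(0) if pending else None

    def run_simulation(params):
        """Stand-in for invoking NEURON on the downloaded model files."""
        time.sleep(0.01)                       # pretend to compute
        return {"params": params, "spike_count": sum(params.values())}

    def worker_loop(pending, results):
        while True:
            job = fetch_assignment(pending)
            if job is None:
                break                          # real client would sleep and poll again
            results.append(run_simulation(job))

    if __name__ == "__main__":
        pending = [{"gNa": g, "gK": k} for g in (1, 2) for k in (3, 4)]  # invented parameter grid
        results = []
        worker_loop(pending, results)
        print(f"completed {len(results)} parameter sets")
    ```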

  15. Overset grid applications on distributed memory MIMD computers

    NASA Technical Reports Server (NTRS)

    Chawla, Kalpana; Weeratunga, Sisira

    1994-01-01

    Analysis of modern aerospace vehicles requires the computation of flowfields about complex three dimensional geometries composed of regions with varying spatial resolution requirements. Overset grid methods allow the use of proven structured grid flow solvers to address the twin issues of geometrical complexity and the resolution variation by decomposing the complex physical domain into a collection of overlapping subdomains. This flexibility is accompanied by the need for irregular intergrid boundary communication among the overlapping component grids. This study investigates a strategy for implementing such a static overset grid implicit flow solver on distributed memory, MIMD computers; i.e., the 128 node Intel iPSC/860 and the 208 node Intel Paragon. Performance data for two composite grid configurations characteristic of those encountered in present day aerodynamic analysis are also presented.

  16. The Gain of Resource Delegation in Distributed Computing Environments

    NASA Astrophysics Data System (ADS)

    Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander

    In this paper, we address job scheduling in Distributed Computing Infrastructures, that is, a loosely coupled network of autonomous acting High Performance Computing systems. In contrast to the common approach of mutual workload exchange, we consider the more intuitive operator's viewpoint of load-dependent resource reconfiguration. In case of a site's over-utilization, the scheduling system is able to lease resources from other sites to keep up service quality for its local user community. Conversely, the granting of idle resources can increase utilization in times of low local workload and thus ensure higher efficiency. The evaluation considers real workload data and is done with respect to common service quality indicators. For two simple resource exchange policies and three basic setups we show the possible gain of this approach and analyze the dynamics in workload-adaptive reconfiguration behavior.
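
    A minimal sketch of a load-dependent reconfiguration policy of the kind evaluated in the paper is shown below; the thresholds, chunk size, site names, and node counts are illustrative and are not the policies or workloads used in the study.

    ```python
    # Threshold-based resource delegation: over-utilized sites lease nodes that
    # under-utilized sites grant, in fixed-size chunks.

    def reconfigure(sites, high=0.9, low=0.3, chunk=16):
        """sites: {name: {"nodes": int, "used": int}}; returns list of transfers."""
        transfers = []
        overloaded = [s for s, d in sites.items() if d["used"] / d["nodes"] > high]
        underused = [s for s, d in sites.items() if d["used"] / d["nodes"] < low]
        for needy in overloaded:
            for donor in underused:
                idle = sites[donor]["nodes"] - sites[donor]["used"]
                if idle >= chunk:
                    sites[donor]["nodes"] -= chunk      # donor grants resources
                    sites[needy]["nodes"] += chunk      # needy site leases them
                    transfers.append((donor, needy, chunk))
                    break
        return transfers

    if __name__ == "__main__":
        grid = {"siteA": {"nodes": 128, "used": 125},
                "siteB": {"nodes": 256, "used": 40}}
        print(reconfigure(grid), grid)
    ```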

  17. A Distributed Simulation Facility to Support Human Factors Research in Advanced Air Transportation Technology

    NASA Technical Reports Server (NTRS)

    Amonlirdviman, Keith; Farley, Todd C.; Hansman, R. John, Jr.; Ladik, John F.; Sherer, Dana Z.

    1998-01-01

    A distributed real-time simulation of the civil air traffic environment developed to support human factors research in advanced air transportation technology is presented. The distributed environment is based on a custom simulation architecture designed for simplicity and flexibility in human experiments. Standard Internet protocols are used to create the distributed environment, linking an advanced cockpit simulator, an Air Traffic Control simulator, and a pseudo-aircraft control and simulation management station. The pseudo-aircraft control station also functions as a scenario design tool for coordinating human factors experiments. This station incorporates a pseudo-pilot interface designed to reduce workload for human operators piloting multiple aircraft simultaneously in real time. The application of this distributed simulation facility to support a study of the effect of shared information (via air-ground datalink) on pilot/controller shared situation awareness and re-route negotiation is also presented.

  18. Accommodating Heterogeneity in a Debugger for Distributed Computations

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Cheng, Doreen; Cooper, D. M. (Technical Monitor)

    1994-01-01

    In an ongoing project at NASA Ames Research Center, we are building a debugger for distributed computations running on a heterogeneous set of machines. Historically, such debuggers have been built as front-ends to existing source-level debuggers on the target platforms. In effect, these back-end debuggers are providing a collection of debugger services to a client. The major drawback is that because of inconsistencies among the back-end debuggers, the front-end must use a different protocol when talking to each back-end debugger. This can make the front-end quite complex. We have avoided this complexity problem by defining the client-server debugger protocol. While it does require vendors to adapt their existing debugger code to meet the protocol, vendors are generally interested in doing so because the approach has several advantages. In addition to solving the heterogeneous platform debugging problem, it will be possible to write interesting debugger user interfaces that can be easily ported across a variety of machines. This will likely encourage investment in application-domain specific debuggers. In fact, the user interface of our debugger will be geared to scientists developing computational fluid dynamics codes. This paper describes some of the problems encountered in developing a portable debugger for heterogeneous, distributed computing and how the architecture of our debugger avoids them. It then provides a detailed description of the debugger client-server protocol. Some of the more interesting attributes of the protocol are: (1) It is object-oriented; (2) It uses callback functions to capture the asynchronous nature of debugging in a procedural fashion; (3) It contains abstractions, such as in-line instrumentation, for the debugging of computationally intensive programs; (4) For remote debugging, it has operations that enable the implementor to optimize message passing traffic between client and server. The soundness of the protocol is being tested through
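
    An illustrative sketch (not the actual protocol) of the first two attributes listed above follows: an object-oriented request interface and client-registered callbacks that deliver asynchronous debugger events in a procedural style. The class, method, and event names are invented.

    ```python
    # Object-oriented debugger interface with callback delivery of asynchronous events.

    class DebuggerServer:
        """Stand-in for one back-end debugger speaking a common client-server protocol."""
        def __init__(self):
            self._callbacks = []
        def on_stop(self, callback):            # client registers a callback
            self._callbacks.append(callback)
        def set_breakpoint(self, file, line):
            print(f"breakpoint set at {file}:{line}")
        def resume(self):
            # In a real system the stop event arrives asynchronously from the
            # target process; here it is fired immediately for demonstration.
            for cb in self._callbacks:
                cb({"reason": "breakpoint", "file": "solver.f", "line": 42})

    if __name__ == "__main__":
        server = DebuggerServer()
        server.on_stop(lambda event: print("client notified:", event))
        server.set_breakpoint("solver.f", 42)
        server.resume()
    ```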

  20. Job monitoring on DIRAC for Belle II distributed computing

    NASA Astrophysics Data System (ADS)

    Kato, Yuji; Hayasaka, Kiyoshi; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo

    2015-12-01

    We developed a monitoring system for Belle II distributed computing, which consists of active and passive methods. In this paper we describe the passive monitoring system, where information stored in the DIRAC database is processed and visualized. We divide the DIRAC workload management flow into steps and store characteristic variables which indicate issues. These variables are chosen carefully based on our experiences, then visualized. As a result, we are able to effectively detect issues. Finally, we discuss future development toward automating log analysis, notification of issues, and the disabling of problematic sites.

  1. KNET - DISTRIBUTED COMPUTING AND/OR DATA TRANSFER PROGRAM

    NASA Technical Reports Server (NTRS)

    Hui, J.

    1994-01-01

    KNET facilitates distributed computing between a UNIX compatible local host and a remote host which may or may not be UNIX compatible. It is capable of automatic remote login. That is, it performs on the user's behalf the chore of handling host selection, user name, and password to the designated host. Once the login has been successfully completed, the user may interactively communicate with the remote host. Data output from the remote host may be directed to the local screen, to a local file, and/or to a local process. Conversely, data input from the keyboard, a local file, or a local process may be directed to the remote host. KNET takes advantage of the multitasking and terminal mode control features of the UNIX operating system. A parent process is used as the upper layer for interfacing with the local user. A child process is used for a lower layer for interfacing with the remote host computer, and optionally one or more child processes can be used for the remote data output. Output may be directed to the screen and/or to the local processes under the control of a data pipe switch. In order for KNET to operate, the local and remote hosts must observe a common communications protocol. KNET is written in ANSI standard C-language for computers running UNIX. It has been successfully implemented on several Sun series computers and a DECstation 3100 and used to run programs remotely on VAX VMS and UNIX based computers. It requires 100K of RAM under SunOS and 120K of RAM under DEC RISC ULTRIX. An electronic copy of the documentation is provided on the distribution medium. The standard distribution medium for KNET is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. KNET was developed in 1991 and is a copyrighted work with all copyright vested in NASA. UNIX is a registered trademark of AT&T Bell Laboratories. Sun and SunOS are trademarks of Sun Microsystems, Inc. DECstation, VAX, VMS, and
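
    The parent/child split described above can be sketched on a UNIX system as follows; the child stands in for the process that talks to the remote host and writes its output into a pipe, while the parent plays the role of the data pipe switch, directing that output to the screen, to a file, or both. This is purely illustrative and is not the KNET source.

    ```python
    # UNIX-flavoured sketch: child process produces "remote" output on a pipe;
    # parent routes the data to the screen and/or a local file.
    import os, sys

    def main(to_screen=True, to_file="remote_output.log"):
        read_fd, write_fd = os.pipe()
        if os.fork() == 0:                      # child: pretend remote session output
            os.close(read_fd)
            os.write(write_fd, b"login ok\nremote job finished\n")
            os.close(write_fd)
            os._exit(0)
        os.close(write_fd)                      # parent: route data per the switch
        data = b""
        while chunk := os.read(read_fd, 4096):
            data += chunk
        os.close(read_fd)
        if to_screen:
            sys.stdout.write(data.decode())
        if to_file:
            with open(to_file, "wb") as fh:
                fh.write(data)
        os.wait()

    if __name__ == "__main__":
        main()
    ```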

  3. Performance Evaluation of Three Distributed Computing Environments for Scientific Applications

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod; Weeratunga, Sisira; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    We present performance results for three distributed computing environments using the three simulated CFD applications in the NAS Parallel Benchmark suite. These environments are the DCF cluster, the LACE cluster, and an Intel iPSC/860 machine. The DCF is a prototypic cluster of loosely coupled SGI R3000 machines connected by Ethernet. The LACE cluster is a tightly coupled cluster of 32 IBM RS6000/560 machines connected by Ethernet as well as by either FDDI or an IBM Allnode switch. Results of several parallel algorithms for the three simulated applications are presented and analyzed based on the interplay between the communication requirements of an algorithm and the characteristics of the communication network of a distributed system.

  4. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    SciTech Connect

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat,; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA, at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  5. Computer software design description for the Treated Effluent Disposal Facility (TEDF), Project L-045H, Operator Training Station (OTS)

    SciTech Connect

    Carter, R.L. Jr.

    1994-11-07

    The Treated Effluent Disposal Facility (TEDF) Operator Training Station (OTS) is a computer-based training tool designed to aid plant operations and engineering staff in familiarizing themselves with the TEDF Central Control System (CCS).

  6. Where and Why Students Choose to Use Computer Facilities: A Collaborative Study at an Australian and United Kingdom University

    ERIC Educational Resources Information Center

    Burke, Liz; Beranek, Lea; Walton, Graham; Stubbings, Ruth

    2008-01-01

    The authors describe a collaborative study at two universities, one in Australia and the other in the UK. The main objectives of the study were to gain an understanding of the factors that influence a student's choice of location when using computing facilities, what applications they use, and how adequate various services and facilities provided…

  8. Performance Evaluation of Communication Software Systems for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod

    1996-01-01

    In recent years there has been an increasing interest in object-oriented distributed computing since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed and compared. These systems are: BSD socket programming interface, IONA's Orbix, an implementation of the CORBA specification, and the PVM message passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.
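
    As a minimal sketch of the kind of round-trip ("ping-pong") measurement used to compare a low-level interface such as BSD sockets against higher-level middleware, the Python example below times echo exchanges between two hosts. The port, message size, and iteration count are illustrative assumptions, not the paper's benchmark code.

```python
# Minimal round-trip ("ping-pong") latency sketch over BSD sockets.
# Run with --server on one host, then point the client at it.
# Port, message size, and iteration count are illustrative only.
import argparse
import socket
import time

PORT, MSG_SIZE, ITERATIONS = 5000, 1024, 1000

def recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while True:
                try:
                    conn.sendall(recv_exact(conn, MSG_SIZE))  # echo back
                except ConnectionError:
                    break

def client(host):
    with socket.create_connection((host, PORT)) as sock:
        payload = b"x" * MSG_SIZE
        start = time.perf_counter()
        for _ in range(ITERATIONS):
            sock.sendall(payload)
            recv_exact(sock, MSG_SIZE)
        elapsed = time.perf_counter() - start
        print(f"mean round-trip time: {1e6 * elapsed / ITERATIONS:.1f} us")

if __name__ == "__main__":
    p = argparse.ArgumentParser()
    p.add_argument("--server", action="store_true")
    p.add_argument("--host", default="localhost")
    args = p.parse_args()
    server() if args.server else client(args.host)
```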

  9. A uniform approach for programming distributed heterogeneous computing systems

    PubMed Central

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-01-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. PMID:25844015

  10. Performance evaluation of communication software systems for distributed computing

    NASA Astrophysics Data System (ADS)

    Fatoohi, R. A.

    1997-09-01

    In recent years there has been an increasing interest in object-oriented distributed computing since it is better equipped to deal with complex systems while providing extensibility, maintainability and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI and ATM. The performance results for three communication software systems are presented, analysed and compared. These systems are: BSD socket programming interface, IONA's Orbix, an implementation of the CORBA specification and the PVM message passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.

  11. Computational exploration of mobile ion distributions around RNA duplex

    PubMed Central

    Kirmizialtin, Serdal; Elber, Ron

    2010-01-01

    Atomically detailed distributions of ions around an A-form RNA are computed. Different mixtures of monovalent and divalent ions are considered explicitly. Studies of tightly bound and of diffusive (but bound) ions around a 25-base-pair RNA are conducted in explicit solvent. Replica exchange simulations provide detailed equilibrium distributions with moderate computing resources (20 nanoseconds of simulation using 64 replicas). The simulations show distinct behavior of singly and doubly charged cations. Binding of the Mg2+ ion includes tight binding to specific sites while Na+ binds only diffusively. The tight binding of Mg2+ is with a solvation shell while Na+ can bind directly to RNA. Negative mobile ions can be found near the RNA but must be assisted by proximate and mobile cations. At distances larger than 16 Å from the RNA center, a model of the RNA as a charged rod in a continuum of ionic solution provides a quantitative description of the ion density (the same as in the atomically detailed simulation). At shorter distances, the structure of the RNA (and ions) has a significant impact on the pair correlation functions. Predicted binding sites of Mg2+ at the RNA surface are in accord with structures from crystallography. Electric field relaxation is investigated. The relaxation due to solution rearrangements is completed in tens of picoseconds, while the contribution of RNA tumbling continues to a few nanoseconds. PMID:20518549
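
    A minimal sketch of how a radial ion density profile of the kind discussed above can be accumulated from simulation snapshots follows; the coordinate array and bin settings are placeholders, not the authors' analysis pipeline.

```python
# Radial ion number density around a molecular center from MD snapshots.
# `ion_xyz_frames` is a hypothetical (n_frames, n_ions, 3) coordinate array
# already centered on the RNA; units are angstroms.
import numpy as np

def radial_density(ion_xyz_frames, r_max=30.0, n_bins=60):
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for frame in ion_xyz_frames:
        r = np.linalg.norm(frame, axis=1)        # distance from RNA center
        hist, _ = np.histogram(r, bins=edges)
        counts += hist
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    density = counts / (len(ion_xyz_frames) * shell_vol)  # ions per A^3
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, density

# Example with random placeholder coordinates standing in for trajectory data:
rng = np.random.default_rng(0)
frames = rng.uniform(-30, 30, size=(10, 200, 3))
r, rho = radial_density(frames)
```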

  12. A uniform approach for programming distributed heterogeneous computing systems.

    PubMed

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-12-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.

  13. GAiN: Distributed Array Computation with Python

    SciTech Connect

    Daily, Jeffrey A.

    2009-05-01

    Scientific computing makes use of very large, multidimensional numerical arrays - typically, gigabytes to terabytes in size - much larger than can fit on even the largest single compute node. Such arrays must be distributed across a "cluster" of nodes. Global Arrays is a cluster-based software system from Battelle Pacific Northwest National Laboratory that enables an efficient, portable, and parallel shared-memory programming interface to manipulate these arrays. Written in and for the C and FORTRAN programming languages, it takes advantage of high-performance cluster interconnections to allow any node in the cluster to access data on any other node very rapidly. The "numpy" module is the de facto standard for numerical calculation in the Python programming language, a language whose use is growing rapidly in the scientific and engineering communities. numpy provides a powerful N-dimensional array class as well as other scientific computing capabilities. However, like the majority of the core Python modules, numpy is inherently serial. Our system, GAiN (Global Arrays in NumPy), is a parallel extension to Python that accesses Global Arrays through numpy. This allows parallel processing and/or larger problem sizes to be harnessed almost transparently within new or existing numpy programs.
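
    The sketch below spells out, with plain numpy as a stand-in, the block decomposition that a distributed-array layer such as Global Arrays/GAiN performs transparently behind the numpy interface; no GAiN-specific calls are shown.

```python
# Conceptual sketch of block-distributing a large array across P "nodes".
# A system like Global Arrays / GAiN does this transparently; here the
# decomposition is written out by hand with plain numpy as a stand-in.
import numpy as np

def block_ranges(n_rows, n_procs):
    """Split row indices [0, n_rows) into n_procs contiguous blocks."""
    bounds = np.linspace(0, n_rows, n_procs + 1, dtype=int)
    return list(zip(bounds[:-1], bounds[1:]))

global_array = np.random.rand(10_000, 100)        # stand-in for a huge array
partial_sums = []
for lo, hi in block_ranges(global_array.shape[0], n_procs=4):
    local_block = global_array[lo:hi]             # each node would own this block
    partial_sums.append(local_block.sum(axis=0))  # purely local computation

# A final reduction combines the per-node partial results.
total = np.sum(partial_sums, axis=0)
assert np.allclose(total, global_array.sum(axis=0))
```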

  14. CORBA-Based Distributed Software Framework for the NIF Integrated Computer Control System

    SciTech Connect

    Stout, E A; Carey, R W; Estes, C M; Fisher, J M; Lagin, L J; Mathisen, D G; Reynolds, C A; Sanchez, R J

    2007-11-20

    The National Ignition Facility (NIF), currently under construction at the Lawrence Livermore National Laboratory, is a stadium-sized facility containing a 192-beam, 1.8 Megajoule, 500-Terawatt, ultra-violet laser system together with a 10-meter diameter target chamber with room for nearly 100 experimental diagnostics. The NIF is operated by the Integrated Computer Control System (ICCS) which is a scalable, framework-based control system distributed over 800 computers throughout the NIF. The framework provides templates and services at multiple levels of abstraction for the construction of software applications that communicate via CORBA (Common Object Request Broker Architecture). Object-oriented software design patterns are implemented as templates and extended by application software. Developers extend the framework base classes to model the numerous physical control points and implement specializations of common application behaviors. An estimated 140 thousand software objects, each individually addressable through CORBA, will be active at full scale. Many of these objects have persistent configuration information stored in a database. The configuration data is used to initialize the objects at system start-up. Centralized server programs that implement events, alerts, reservations, data archival, name service, data access, and process management provide common system wide services. At the highest level, a model-driven, distributed shot automation system provides a flexible and scalable framework for automatic sequencing of work-flow for control and monitoring of NIF shots. The shot model, in conjunction with data defining the parameters and goals of an experiment, describes the steps to be performed by each subsystem in order to prepare for and fire a NIF shot. Status and usage of this distributed framework are described.

  15. Beyond input-output computings: error-driven emergence with parallel non-distributed slime mold computer.

    PubMed

    Aono, Masashi; Gunji, Yukio-Pegio

    2003-10-01

    The emergence derived from errors is of key importance for both novel computing and novel usage of the computer. In this paper, we propose an implementable experimental plan for biological computing so as to elicit the emergent property of complex systems. An individual plasmodium of the true slime mold Physarum polycephalum acts in the slime mold computer. Modifying the Elementary Cellular Automaton so that it entails the global synchronization problem of parallel computing provides the NP-complete problem solved by the slime mold computer. The possibility of solving the problem by giving neither all possible results nor an explicit prescription of solution-seeking is discussed. In slime mold computing, the distributivity in the local computing logic can change dynamically, and its parallel non-distributed computing cannot be reduced to the spatial addition of multiple serial computings. The computing system based on exhaustive absence of the super-system may produce something more than filling the vacancy.

  16. Impact of Distributed Energy Resources on the Reliability of a Critical Telecommunications Facility

    SciTech Connect

    Robinson, D.; Atcitty, C.; Zuffranieri, J.; Arent, D.

    2006-03-01

    Telecommunications has been identified by the Department of Homeland Security as a critical infrastructure to the United States. Failures in the power systems supporting major telecommunications service nodes are a main contributor to major telecommunications outages, as documented by analyses of Federal Communications Commission (FCC) outage reports by the National Reliability Steering Committee (under auspices of the Alliance for Telecommunications Industry Solutions). There are two major issues that are having increasing impact on the sensitivity of the power distribution to telecommunication facilities: deregulation of the power industry, and changing weather patterns. A logical approach to improve the robustness of telecommunication facilities would be to increase the depth and breadth of technologies available to restore power in the face of power outages. Distributed energy resources such as fuel cells and gas turbines could provide one more onsite electric power source to provide backup power, if batteries and diesel generators fail. But does the diversity in power sources actually increase the reliability of offered power to the office equipment, or does the complexity of installing and managing the extended power system induce more potential faults and higher failure rates? This report analyzes a system involving a telecommunications facility consisting of two switch-bays and a satellite reception system.
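
    A minimal sketch of the availability arithmetic behind this question: with independent backup sources, power is lost only if every source fails, but each added source also introduces switching and management failure modes. All probabilities below are hypothetical.

```python
# Probability that a facility loses power during a grid outage, assuming
# backup sources fail independently. All numbers are hypothetical.
from math import prod

def p_loss(p_fail_sources, p_integration_fault=0.0):
    """Power is lost if every source fails OR the shared switching/
    management layer faults (a crude stand-in for added complexity)."""
    p_all_fail = prod(p_fail_sources)
    return 1.0 - (1.0 - p_all_fail) * (1.0 - p_integration_fault)

baseline = p_loss([0.05, 0.10])              # batteries + diesel
extended = p_loss([0.05, 0.10, 0.08], 0.002) # + fuel cell, plus switching risk
print(f"baseline loss prob: {baseline:.4f}, extended: {extended:.4f}")
```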

  17. Astronaut Thomas Jones anchored to bunk facility while working on computer

    NASA Image and Video Library

    1994-04-14

    STS059-10-011 (9-20 April 1994) --- Astronaut Thomas D. Jones appears to have climbed out of bed right into his work in this onboard 35mm frame. Actually, Jones had anchored himself in the bunk facility while working on one of the onboard computers which transferred data to the ground via modem. The mission specialist was joined in space by five other NASA astronauts for a week and a half of support to the Space Radar Laboratory (SRL-1)/STS-59 mission.

  18. Computer Modeling of Crystallization and Crystal Size distributions

    NASA Astrophysics Data System (ADS)

    Amenta, R. V.

    2002-05-01

    The crystal size distribution of an igneous rock has been shown to be related to the crystallization kinetics. In order to better understand crystallization processes, the nucleation and growth of crystals in a closed system is modeled computationally and graphically. Units of volume analogous to unit cells are systematically attached to stationary crystal nuclei. The number of volume units attached to each crystal per growth stage is proportional to the crystal size, ensuring that crystal dimensional growth rates are constant regardless of their size. The number of new crystal nuclei per total system volume that form in each growth stage increases exponentially. Cumulative crystal size distributions (CCSD) are determined for various stages of crystallization (30 percent, 60 pct, etc.) from a database generated by the computer model, and each distribution is fit to an exponential function of the same form. Simulation results show that CCSD functions appear to fit the data reasonably well (R-square) with the greatest misfit at 100 pct crystallization. The crystal size distribution at each pct crystallization can be obtained from the derivative of the respective CCSD function. The log form of each crystal size distribution (CSD) is a linear function with negative slope. Results show that the slopes of the CSD functions at pcts crystallization up to 90 pct are parallel, but the slope at 100 pct crystallization differs from the others although still in approximate alignment. We suggest that real crystallization of igneous rocks may show this pattern. In the early stages of crystallization, crystals are far apart and CSD's are ideal as predicted by theory based on growth of crystals in a brine. At advanced stages of crystallization, growth collision boundaries develop between crystals. As contiguity increases, crystals become blocked and inactive because they can no longer grow. As crystallization approaches 100 pct, a significant number of inactive crystals exist, resulting in
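
    A minimal sketch of extracting the log-linear CSD slope described above: the binned population-density data below are synthetic placeholders for the model's output database.

```python
# Fit the log-linear crystal size distribution n(L) = n0 * exp(-L / L0):
# ln n(L) is linear in L with slope -1/L0. Data below are synthetic.
import numpy as np

L = np.linspace(0.1, 2.0, 20)                 # crystal size bins (arbitrary units)
n0_true, L0_true = 1.0e4, 0.4
rng = np.random.default_rng(1)
n = n0_true * np.exp(-L / L0_true) * rng.lognormal(0.0, 0.05, L.size)

slope, intercept = np.polyfit(L, np.log(n), 1)  # straight-line fit in log space
print(f"fitted slope  = {slope:.3f}  (expected {-1 / L0_true:.3f})")
print(f"fitted ln(n0) = {intercept:.3f} (expected {np.log(n0_true):.3f})")
```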

  19. Distributed computations in a dynamic, heterogeneous Grid environment

    NASA Astrophysics Data System (ADS)

    Dramlitsch, Thomas

    2003-06-01

    In order to face the rapidly increasing need for computational resources of various scientific and engineering applications, one has to think of new ways to make more efficient use of the world's current computational resources. In this respect, the growing speed of wide area networks made a new kind of distributed computing possible: Metacomputing or (distributed) Grid computing. This is a rather new and uncharted field in computational science. The rapidly increasing speed of networks even outperforms the average increase of processor speed: Processor speeds double on average every 18 months whereas network bandwidths double every 9 months. Due to this development of local and wide area networks, Grid computing will certainly play a key role in the future of parallel computing. This type of distributed computing, however, differs from traditional parallel computing in many ways since it has to deal with many problems not occurring in classical parallel computing. Those problems include, for example, heterogeneity, authentication, and slow networks, to mention only a few. Some of those problems, e.g. the allocation of distributed resources along with the provision of information about these resources to the application, have already been addressed by the Globus software. Unfortunately, as far as we know, hardly any application or middle-ware software takes advantage of this information, since most parallelizing algorithms for finite differencing codes are implicitly designed for single supercomputer or cluster execution. We show that although it is possible to apply classical parallelizing algorithms in a Grid environment, in most cases the observed efficiency of the executed code is very poor. In this work we are closing this gap. In our thesis, we will - show that an execution of classical parallel codes in Grid environments is possible but very slow - analyze this situation of bad performance, nail down bottlenecks in communication, remove unnecessary overhead and

  20. Classification of bacterial contamination using image processing and distributed computing.

    PubMed

    Ahmed, W M; Bayraktar, B; Bhunia, A; Hirleman, E D; Robinson, J P; Rajwa, B

    2013-01-01

    Disease outbreaks due to contaminated food are a major concern not only for the food-processing industry but also for the public at large. Techniques for automated detection and classification of microorganisms can be a great help in preventing outbreaks and maintaining the safety of the nation's food supply. Identification and classification of foodborne pathogens using colony scatter patterns is a promising new label-free technique that utilizes image-analysis and machine-learning tools. However, the feature-extraction tools employed for this approach are computationally complex, and choosing the right combination of scatter-related features requires extensive testing with different feature combinations. In the presented work we used computer clusters to speed up the feature-extraction process, which enables us to analyze the contribution of different scatter-based features to the overall classification accuracy. A set of 1000 scatter patterns representing ten different bacterial strains was used. Zernike and Chebyshev moments as well as Haralick texture features were computed from the available light-scatter patterns. The most promising features were first selected using Fisher's discriminant analysis, and subsequently a support-vector-machine (SVM) classifier with a linear kernel was used. With extensive testing we were able to identify a small subset of features that produced the desired results in terms of classification accuracy and execution speed. The use of distributed computing for scatter-pattern analysis, feature extraction, and selection provides a feasible mechanism for large-scale deployment of a light scatter-based approach to bacterial classification.
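
    The sketch below illustrates the select-then-classify stage described above with scikit-learn, using an ANOVA F-test as a Fisher-style ranking followed by a linear-kernel SVM; the feature matrix is random placeholder data rather than actual scatter-pattern moments.

```python
# Feature ranking followed by a linear-kernel SVM, mirroring the
# select-then-classify stage described above. X is placeholder data
# standing in for Zernike/Chebyshev/Haralick features of scatter patterns.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 200))     # 1000 patterns x 200 candidate features
y = rng.integers(0, 10, size=1000)   # 10 bacterial classes (synthetic labels)

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=30),    # keep the 30 most discriminative features
    SVC(kernel="linear"),
)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```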

  1. Impact of Distributed Energy Resources on the Reliability of Critical Telecommunications Facilities: Preprint

    SciTech Connect

    Robinson, D. G.; Arent, D. J.; Johnson, L.

    2006-06-01

    This paper documents a probabilistic risk assessment of existing and alternative power supply systems at a large telecommunications office. The analysis characterizes the increase in the reliability of power supply through the use of two alternative power configurations. Failures in the power systems supporting major telecommunications service nodes are a main contributor to significant telecommunications outages. A logical approach to improving the robustness of telecommunication facilities is to increase the depth and breadth of technologies available to restore power during power outages. Distributed energy resources such as fuel cells and gas turbines could provide additional on-site electric power sources for backup power, if batteries and diesel generators fail. The analysis is based on a hierarchical Bayesian approach and focuses on the failure probability associated with each of three possible facility configurations, along with assessment of the uncertainty or confidence level in the probability of failure. A risk-based characterization of the final best configuration is presented.
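
    As a much-simplified, non-hierarchical stand-in for the Bayesian analysis described above, the sketch below shows how a failure-on-demand probability and its uncertainty band follow from a Beta-Binomial update; the counts and prior are hypothetical.

```python
# Simplified (non-hierarchical) Bayesian update for a failure-on-demand
# probability: Beta prior + Binomial likelihood -> Beta posterior.
# Counts and prior parameters are hypothetical.
from scipy.stats import beta

a0, b0 = 0.5, 0.5           # Jeffreys prior
failures, demands = 3, 400  # observed backup-power failures per demands

a_post, b_post = a0 + failures, b0 + (demands - failures)
posterior = beta(a_post, b_post)

print(f"posterior mean failure probability: {posterior.mean():.4f}")
lo, hi = posterior.ppf([0.05, 0.95])
print(f"90% credible interval: ({lo:.4f}, {hi:.4f})")
```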

  2. Burnup calculations for KIPT accelerator driven subcritical facility using Monte Carlo computer codes-MCB and MCNPX.

    SciTech Connect

    Gohar, Y.; Zhong, Z.; Talamo, A.; Nuclear Engineering Division

    2009-06-09

    Argonne National Laboratory (ANL) of USA and Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an electron accelerator driven subcritical (ADS) facility, using the KIPT electron accelerator. The neutron source of the subcritical assembly is generated from the interaction of a 100 kW electron beam with a natural uranium target. The electron beam has a uniform spatial distribution and electron energy in the range of 100 to 200 MeV. The main functions of the subcritical assembly are the production of medical isotopes and the support of the Ukraine nuclear power industry. Neutron physics experiments and material structure analyses are planned using this facility. With the 100 kW electron beam power, the total thermal power of the facility is approximately 375 kW including the fission power of approximately 260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the reactivity during the operation, which reduces the neutron flux level and consequently the facility performance. To preserve the neutron flux level during the operation, fuel assemblies should be added after long operating periods to compensate for the lost reactivity. This process requires accurate prediction of the fuel burnup, the decay behavior of the fission products, and the introduced reactivity from adding fresh fuel assemblies. The recent developments of the Monte Carlo computer codes, the high speed capability of the computer processors, and the parallel computation techniques made it possible to perform three-dimensional detailed burnup simulations. A full detailed three-dimensional geometrical model is used for the burnup simulations with continuous energy nuclear data libraries for the transport calculations and 63-multigroup or one-group cross section libraries for the depletion calculations. The Monte Carlo computer codes MCNPX and MCB are utilized for this study. MCNPX transports the

  3. Execution of the SimSET Monte Carlo PET/SPECT simulator in the condor distributed computing environment.

    PubMed

    Baum, Karl G; Helguera, María

    2007-11-01

    SimSET is a package for simulation of emission tomography data sets. Condor is a popular distributed computing environment. Simple C/C++ applications and shell scripts are presented which allow the execution of SimSET on the Condor environment. This is accomplished without any modification to SimSET by executing multiple instances and using its combinebin utility. This enables research facilities without dedicated parallel computing systems to utilize the idle cycles of desktop workstations to greatly reduce the run times of their SimSET simulations. The necessary steps to implement this approach in other environments are presented along with sample results.

  4. Computer-based data acquisition system in the Large Coil Test Facility

    SciTech Connect

    Gould, S.S.; Layman, L.R.; Million, D.L.

    1983-01-01

    The utilization of computers for data acquisition and control is of paramount importance on large-scale fusion experiments because they feature the ability to acquire data from a large number of sensors at various sample rates and provide for flexible data interpretation, presentation, reduction, and analysis. In the Large Coil Test Facility (LCTF) a Digital Equipment Corporation (DEC) PDP-11/60 host computer with the DEC RSX-11M operating system coordinates the activities of five DEC LSI-11/23 front-end processors (FEPs) via direct memory access (DMA) communication links. This provides host control of scheduled data acquisition and FEP event-triggered data collection tasks. Four of the five FEPs have no operating system.

  5. Environmental audits of exploration and production facilities using pen-based computers and data management systems

    SciTech Connect

    Molloy, K.P.; Shilland, P.J.

    1997-12-31

    Rapid collection and management of quality field data are the foundation for any environmental investigation. Recent developments in pen-computing technologies have made it possible to record field data in electronic format using field-ruggedized pen-based computers without the need for paper forms or log books. For this project, a field data entry application was constructed in Microsoft Access, a relational database management software system. The data files are collected directly into the data management systems in the field. The database management system allows for further characterization or reporting activities. Efficient collection, storage, and management improves the efficiency and overall quality of the characterization and presentation of the project data. Furthermore, the data can also be electronically transferred to other analytical software packages without the need to manually re-enter data. The electronic files can be used from the initial investigation phase through entire facilities management programs or internal client programs.

  6. Research into display sharing techniques for distributed computing environments

    NASA Technical Reports Server (NTRS)

    Hugg, Steven B.; Fitzgerald, Paul F., Jr.; Rosson, Nina Y.; Johns, Stephen R.

    1990-01-01

    The X-based Display Sharing solution for distributed computing environments is described. The Display Sharing prototype includes the base functionality for telecast and display copy requirements. Since the prototype implementation is modular and the system design provided flexibility for Mission Control Center Upgrade (MCCU) operational considerations, the prototype implementation can be the baseline for a production Display Sharing implementation. To facilitate the process, the following discussions are presented: Theory of operation; System architecture; Using the prototype; Software description; Research tools; Prototype evaluation; and Outstanding issues. The prototype is based on the concept of a dedicated central host performing the majority of the Display Sharing processing, allowing minimal impact on each individual workstation. Each workstation participating in Display Sharing hosts programs that facilitate the user's access to Display Sharing from that host machine.

  7. Evaluation of Secure Computation in a Distributed Healthcare Setting.

    PubMed

    Kimura, Eizen; Hamada, Koki; Kikuchi, Ryo; Chida, Koji; Okamoto, Kazuya; Manabe, Shirou; Kuroda, Tomohiko; Matsumura, Yasushi; Takeda, Toshihiro; Mihara, Naoki

    2016-01-01

    Issues related to ensuring patient privacy and data ownership in clinical repositories prevent the growth of translational research. Previous studies have used an aggregator agent to obscure clinical repositories from the data user, and to ensure the privacy of output using statistical disclosure control. However, there remain several issues that must be considered. One such issue is that a data breach may occur when multiple nodes conspire. Another is that the agent may eavesdrop on or leak a user's queries and their results. We have implemented a secure computing method so that the data used by each party can be kept confidential even if all of the other parties conspire to crack the data. We deployed our implementation at three geographically distributed nodes connected to a high-speed layer two network. The performance of our method, with respect to processing times, suggests suitability for practical use.
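
    The paper's protocol is not reproduced here; as a conceptual sketch, the additive secret sharing below shows how several parties can jointly compute an aggregate while no proper subset of the share holders can reconstruct any individual input. The counts and modulus are illustrative.

```python
# Conceptual sketch of additive secret sharing: each hospital splits its
# private count into random shares; servers add shares locally and only the
# combined total is ever reconstructed. Not the protocol used in the paper.
import secrets

MOD = 2**61 - 1  # arithmetic is done modulo a large prime

def share(value, n_parties=3):
    """Split `value` into n additive shares that sum to it mod MOD."""
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

# Three hospitals each share their private patient count with three servers.
counts = [120, 75, 310]
server_shares = [share(c) for c in counts]

# Each server sums the shares it holds (one from every hospital) locally.
local_sums = [sum(col) % MOD for col in zip(*server_shares)]

# Only the aggregate is revealed when the servers combine their local sums.
assert reconstruct(local_sums) == sum(counts)
print("joint total:", reconstruct(local_sums))
```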

  8. Distributed data access in the LAMPF (Los Alamos Meson Physics Facility) control system

    SciTech Connect

    Schaller, S.C.; Bjorklund, E.A.

    1987-01-01

    We have extended the Los Alamos Meson Physics Facility (LAMPF) control system software to allow uniform access to data and controls throughout the control system network. Two aspects of this work are discussed here. Of primary interest is the use of standard interfaces and standard messages to allow uniform and easily expandable inter-node communication. A locally designed remote procedure call protocol will be described. Of further interest is the use of distributed databases to allow maximal hardware independence in the controls software. Application programs use local partial copies of the global device description database to resolve symbolic device names.

  9. Simulation Study of Gyrotron Traveling Wave Amplifier with Distributed-Loss in Facilities for Aquaculture

    NASA Astrophysics Data System (ADS)

    Hua, Xufeng; Chen, Chengxun; Xu, Dawei; Xing, Kezhi

    In this paper, we introduce a W-band gyro-traveling-wave tube (gyro-TWT), intended for use in aquaculture facilities, with a design aimed at suppressing spurious oscillations. Spurious oscillations prevent high-power broadband millimeter-wave amplifiers from achieving high gain and high power. To suppress them, we study the interaction circuit, including the input coupler and the output section, and design a gyro-TWT with distributed loss based on a MIG. Simulation results show that the design achieves high-power and broadband operation.

  10. Advanced distribution, switching, and conversion technology for fluids/combustion facility electric power control

    NASA Astrophysics Data System (ADS)

    Poljak, Mark D.; Soltis, James V.; Fox, David A.

    1997-01-01

    The Electrical Power Control Unit (EPCU) under development for use in the Fluids/Combustion Facility (FCF) on International Space Station (ISS) is the precursor of modular power distribution and conversion concepts for future high power and small spacecraft applications. The EPCU is built from modular, current limiting Flexible Remote Power Controllers (FRPCs) and paralleled power converters packaged into a common orbital replacement unit. Multiple EPCUs are combined at the next higher level of integration to form the three-rack FCF Electrical Power System (EPS). This modular building block approach allows for the quick development of expandable power systems tailored to customer needs.

  11. Comparison of TCP automatic tuning techniques for distributed computing

    SciTech Connect

    Weigle, E. H.; Feng, W. C.

    2002-01-01

    Rather than painful, manual, static, per-connection optimization of TCP buffer sizes simply to achieve acceptable performance for distributed applications, many researchers have proposed techniques to perform this tuning automatically. This paper first discusses the relative merits of the various approaches in theory, and then provides substantial experimental data concerning two competing implementations - the buffer autotuning already present in Linux 2.4.x and 'Dynamic Right-Sizing.' This paper reveals heretofore unknown aspects of the problem and current solutions, provides insight into the proper approach for various circumstances, and points toward ways to further improve performance. TCP, for good or ill, is the only protocol widely available for reliable end-to-end congestion-controlled network communication, and thus it is the one used for almost all distributed computing. Unfortunately, TCP was not designed with high-performance computing in mind - its original design decisions focused on long-term fairness first, with performance a distant second. Thus users must often perform tortuous manual optimizations simply to achieve acceptable behavior. The most important and often most difficult task is determining and setting appropriate buffer sizes. Because of this, at least six ways of automatically setting these sizes have been proposed. In this paper, we compare and contrast these tuning methods. First we explain each method, followed by an in-depth discussion of their features. Next we discuss the experiments to fully characterize two particularly interesting methods (Linux 2.4 autotuning and Dynamic Right-Sizing). We conclude with results and possible improvements.
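
    A minimal sketch of the manual, static tuning that autotuning and Dynamic Right-Sizing aim to replace: size the socket buffers to the path's bandwidth-delay product. The link speed and round-trip time are hypothetical.

```python
# Manual, static TCP buffer tuning: size the socket buffers to (at least)
# the path's bandwidth-delay product. Link speed and RTT are hypothetical;
# autotuning (e.g., Linux 2.4+) or Dynamic Right-Sizing adjusts this at runtime.
import socket

bandwidth_bps = 1_000_000_000                # 1 Gb/s path
rtt_s = 0.05                                 # 50 ms round-trip time
bdp_bytes = int(bandwidth_bps / 8 * rtt_s)   # ~6.25 MB in flight

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)

# The kernel may clamp or double the request; report what was granted.
print("SO_SNDBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print("SO_RCVBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```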

  12. Stand alone computer system to aid the development of Mirror Fusion Test Facility rf heating systems

    SciTech Connect

    Thomas, R.A.

    1983-12-01

    The Mirror Fusion Test Facility (MFTF-B) control system architecture requires the Supervisory Control and Diagnostic System (SCDS) to communicate with a LSI-11 Local Control Computer (LCC) that in turn communicates via a fiber optic link to CAMAC based control hardware located near the machine. In many cases, the control hardware is very complex and requires a sizable development effort prior to being integrated into the overall MFTF-B system. One such effort was the development of the Electron Cyclotron Resonance Heating (ECRH) system. It became clear that a stand alone computer system was needed to simulate the functions of SCDS. This paper describes the hardware and software necessary to implement the SCDS Simulation Computer (SSC). It consists of a Digital Equipment Corporation (DEC) LSI-11 computer and a Winchester/Floppy disk operating under the DEC RT-11 operating system. All application software for MFTF-B is programmed in PASCAL, which allowed us to adapt procedures originally written for SCDS to the SSC. This nearly identical software interface means that software written during the equipment development will be useful to the SCDS programmers in the integration phase.

  13. Enhanced Computational Infrastructure for Data Analysis at the DIII-D National Fusion Facility

    SciTech Connect

    Schissel, D.P.; Peng, Q.; Schachter, J.; Terpstra, T.B.; Casper, T.A.; Freeman, J.; Jong, R.; Keith, K.M.; Meyer, W.H.; Parker, C.T.

    1999-08-01

    Recently a number of enhancements to the computer hardware infrastructure have been implemented at the DIII-D National Fusion Facility. Utilizing these improvements to the hardware infrastructure, software enhancements are focusing on streamlined analysis, automation, and graphical user interface (GUI) systems to enlarge the user base. The adoption of the load balancing software package LSF Suite by Platform Computing has dramatically increased the availability of CPU cycles and the efficiency of their use. Streamlined analysis has been aided by the adoption of the MDSplus system to provide a unified interface to analyzed DIII-D data. The majority of MDSplus data is made available between pulses, giving the researcher critical information before setting up the next pulse. Work on data viewing and analysis tools focuses on efficient GUI design with object-oriented programming (OOP) for maximum code flexibility. Work to enhance the computational infrastructure at DIII-D has included a significant effort to aid the remote collaborator since the DIII-D National Team consists of scientists from 9 national laboratories, 19 foreign laboratories, 16 universities, and 5 industrial partnerships. As a result of this work, DIII-D data is available on a 24 x 7 basis from a set of viewing and analysis tools that can be run either on the collaborators' or DIII-D's computer systems. Additionally, a Web based data and code documentation system has been created to aid the novice and expert user alike.

  14. Dynamic resource allocation scheme for distributed heterogeneous computer systems

    NASA Technical Reports Server (NTRS)

    Liu, Howard T. (Inventor); Silvester, John A. (Inventor)

    1991-01-01

    This invention relates to resource allocation in computer systems, and more particularly, to a method and associated apparatus for shortening response time and improving efficiency of a heterogeneous distributed networked computer system by reallocating the jobs queued up for busy nodes to idle, or less-busy nodes. In accordance with the algorithm (SIDA for short), the load-sharing is initiated by the server device in a manner such that extra overhead is not imposed on the system during heavily-loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., over a high threshold limit) to lightly burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node which is burdened below a pre-established threshold level, or (2) a node is idle for a period of time as established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.
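
    A hedged sketch of the transfer decision outlined above follows: a node volunteers to receive work when a job completes while its workload indicator is below a threshold, or when its wakeup timer fires while it is idle. The indicator formula and threshold are assumptions, not the patented details.

```python
# Sketch of a server-initiated transfer decision in the spirit of the
# abstract above. The workload indicator and thresholds are assumptions,
# not the patented formula.
from dataclasses import dataclass

@dataclass
class Node:
    queue_length: int
    service_rate: float       # this node's service rate
    mean_service_rate: float  # network-wide mean service rate

    def workload_indicator(self):
        # Combine local queue length with the local service-rate ratio.
        return self.queue_length * (self.mean_service_rate / self.service_rate)

LOW_THRESHOLD = 2.0

def should_request_work(node, job_just_finished, wakeup_timer_fired):
    """Receiving node volunteers for a transfer under either trigger."""
    below = node.workload_indicator() < LOW_THRESHOLD
    idle = node.queue_length == 0
    return (job_just_finished and below) or (wakeup_timer_fired and idle)

n = Node(queue_length=1, service_rate=1.2, mean_service_rate=1.0)
print(should_request_work(n, job_just_finished=True, wakeup_timer_fired=False))
```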

  15. Toward unification of taxonomy databases in a distributed computer environment

    SciTech Connect

    Kitakami, Hajime; Tateno, Yoshio; Gojobori, Takashi

    1994-12-31

    All the taxonomy databases constructed with the DNA databases of the international DNA data banks are powerful electronic dictionaries which aid in biological research by computer. The taxonomy databases are, however, not consistently unified in a relational format. If we can achieve consistent unification of the taxonomy databases, it will be useful in comparing many research results, and investigating future research directions from existent research results. In particular, it will be useful in comparing relationships between phylogenetic trees inferred from molecular data and those constructed from morphological data. The goal of the present study is to unify the existent taxonomy databases and eliminate inconsistencies (errors) that are present in them. Inconsistencies occur particularly in the restructuring of the existent taxonomy databases, since classification rules for constructing the taxonomy have rapidly changed with biological advancements. A repair system is needed to remove inconsistencies in each data bank and mismatches among data banks. This paper describes a new methodology for removing both inconsistencies and mismatches from the databases in a distributed computer environment. The methodology is implemented in a relational database management system, SYBASE.

  16. Distributed computer system enhances productivity for SRB joint optimization

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.

    1987-01-01

    Initial calculations of a redesign of the solid rocket booster joint that failed during the shuttle tragedy showed that the design had a weight penalty associated with it. Optimization techniques were to be applied to determine if there was any way to reduce the weight while keeping the joint opening closed and limiting the stresses. To allow engineers to examine as many alternatives as possible, a system was developed consisting of existing software that coupled structural analysis with optimization which would execute on a network of computer workstations. To increase turnaround, this system took advantage of the parallelism offered by the finite difference technique of computing gradients to allow several workstations to contribute to the solution of the problem simultaneously. The resulting system reduced the amount of time to complete one optimization cycle from two hours to one-half hour with a potential of reducing it to 15 minutes. The current distributed system, which contains numerous extensions, requires one hour turnaround per optimization cycle. This would take four hours for the sequential system.
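
    The parallelism exploited here comes from the fact that finite-difference gradient components are independent analyses; the sketch below farms them out to separate worker processes, with a toy objective standing in for the structural analysis code.

```python
# Finite-difference gradient components are independent analyses, so they
# can run concurrently on separate workstations/processes. The objective
# below is a toy stand-in for the structural analysis code.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def objective(x):
    return float(np.sum(x**2) + np.prod(np.cos(x)))

def fd_component(args):
    x, i, h, f0 = args
    xp = x.copy()
    xp[i] += h
    return (objective(xp) - f0) / h     # forward difference for component i

def parallel_gradient(x, h=1e-6, workers=4):
    f0 = objective(x)
    tasks = [(x, i, h, f0) for i in range(x.size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return np.array(list(pool.map(fd_component, tasks)))

if __name__ == "__main__":
    x0 = np.array([0.5, -1.0, 2.0, 0.1])
    print(parallel_gradient(x0))
```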

  17. Metabolic flux distributions: genetic information, computational predictions, and experimental validation.

    PubMed

    Blank, Lars M; Kuepfer, Lars

    2010-05-01

    Flux distributions in intracellular metabolic networks are of immense interest to fundamental and applied research, since they are quantitative descriptors of the phenotype and the operational mode of metabolism in the face of external growth conditions. In particular, fluxes are of relevance because they do not belong to the cellular inventory (e.g., transcriptome, proteome, metabolome), but are rather quantitative moieties, which link the phenotype of a cell to the specific metabolic mode of operation. A frequent application of measuring and redirecting intracellular fluxes is strain engineering, which ultimately aims at shifting metabolic activity toward a desired product to achieve a high yield and/or rate. In this article, we first review the assessment of intracellular flux distributions by either qualitative or rather quantitative computational methods and also discuss methods for experimental measurements. The tools at hand will then be exemplified on strain engineering projects from the literature. Finally, the achievements are discussed in the context of future developments in Metabolic Engineering and Synthetic Biology.
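
    As a minimal sketch of the quantitative, constraint-based flux prediction mentioned above (flux balance analysis), the toy linear program below maximizes a product flux subject to steady-state mass balances; the network and bounds are hypothetical.

```python
# Toy flux balance analysis: maximize the "product" flux v3 subject to
# steady-state mass balance S @ v = 0 and uptake/capacity bounds.
# Hypothetical network: uptake v1: -> A, conversion v2: A -> B, export v3: B ->
import numpy as np
from scipy.optimize import linprog

S = np.array([
    [1, -1,  0],   # metabolite A: produced by v1, consumed by v2
    [0,  1, -1],   # metabolite B: produced by v2, consumed by v3
])
c = [0, 0, -1]                             # linprog minimizes, so maximize v3 via -v3
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 flux units

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal flux distribution v =", res.x)   # expected [10, 10, 10]
```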

  18. Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid

    NASA Astrophysics Data System (ADS)

    Andrade, P.; Babik, M.; Bhatt, K.; Chand, P.; Collados, D.; Duggal, V.; Fuente, P.; Hayashi, S.; Imamagic, E.; Joshi, P.; Kalmady, R.; Karnani, U.; Kumar, V.; Lapka, W.; Quick, R.; Tarragon, J.; Teige, S.; Triantafyllidis, C.

    2012-12-01

    The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO[1] managers, service managers, management), from different middleware providers (ARC[2], dCache[3], gLite[4], UNICORE[5] and VDT[6]), consortiums (WLCG[7], EMI[11], EGI[15], OSG[13]), and operational teams (GOC[16], OMB[8], OTAG[9], CSIRT[10]). The seamless harmonization of these distributed actors is in daily use for monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to the visualization on the MyWLCG[27] portal where it is exposed to other clients. This monitoring workflow profits from the interoperability established between the SAM[19] and RSV[20] frameworks. We show how these two distributed structures are capable of uniting technologies and hiding the complexity around them, making them easy to be used by the community. Finally, the different supported deployment strategies, tailored not only for monitoring the entire infrastructure but also for monitoring sites and virtual organizations, are presented and the associated operational benefits highlighted.

  19. A study of residence time distribution using radiotracer technique in the large scale plant facility

    NASA Astrophysics Data System (ADS)

    Wetchagarun, S.; Tippayakul, C.; Petchrak, A.; Sukrod, K.; Khoonkamjorn, P.

    2017-06-01

    As the demand for troubleshooting of large industrial plants increases, radiotracer techniques, which have the capability to provide fast, online and effective detection of plant problems, have been continually developed. One of the good potential applications of radiotracers for troubleshooting in a process plant is the analysis of Residence Time Distribution (RTD). In this paper, a study of RTD in a large scale plant facility using the radiotracer technique is presented. The objective of this work is to gain experience in RTD analysis using the radiotracer technique in a “larger than laboratory” scale plant setup comparable to a real industrial application. The experiment was carried out at the sedimentation tank in the water treatment facility of Thailand Institute of Nuclear Technology (Public Organization). Br-82 was selected for use in this work due to its chemical properties, its suitable half-life and its on-site availability. NH4Br in the form of aqueous solution was injected into the system as the radiotracer. Six NaI detectors were placed along the pipelines and at the tank in order to determine the RTD of the system. The RTD and the Mean Residence Time (MRT) of the tank were analysed and calculated from the measured data. The experience and knowledge attained from this study are important for extending this technique to industrial facilities in the future.
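
    A minimal sketch of the mean-residence-time calculation applied to a measured tracer response curve, MRT = integral(t*C(t) dt) / integral(C(t) dt), is shown below; the detector trace is synthetic.

```python
# Mean residence time from a tracer response curve:
#   MRT = integral(t * C(t) dt) / integral(C(t) dt)
# The detector trace below is synthetic, standing in for NaI count rates.
import numpy as np

t = np.linspace(0, 600, 1201)            # time, seconds
C = (t / 120.0) * np.exp(-t / 120.0)     # synthetic tracer response

E = C / np.trapz(C, t)                   # normalized residence time distribution E(t)
mrt = np.trapz(t * E, t)
print(f"mean residence time: {mrt:.1f} s")
```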

  20. COMPUTER MODEL OF TEMPERATURE DISTRIBUTION IN OPTICALLY PUMPED LASER RODS

    NASA Technical Reports Server (NTRS)

    Farrukh, U. O.

    1994-01-01

    Managing the thermal energy that accumulates within a solid-state laser material under active pumping is of critical importance in the design of laser systems. Earlier models that calculated the temperature distribution in laser rods were single dimensional and assumed laser rods of infinite length. This program presents a new model which solves the temperature distribution problem for finite dimensional laser rods and calculates both the radial and axial components of temperature distribution in these rods. The modeled rod is either side-pumped or end-pumped by a continuous or a single pulse pump beam. (At the present time, the model cannot handle a multiple pulsed pump source.) The optical axis is assumed to be along the axis of the rod. The program also assumes that it is possible to cool different surfaces of the rod at different rates. The user defines the laser rod material characteristics, determines the types of cooling and pumping to be modeled, and selects the time frame desired via the input file. The program contains several self checking schemes to prevent overwriting memory blocks and to provide simple tracing of information in case of trouble. Output for the program consists of 1) an echo of the input file, 2) diffusion properties, radius and length, and time for each data block, 3) the radial increments from the center of the laser rod to the outer edge of the laser rod, and 4) the axial increments from the front of the laser rod to the other end of the rod. This program was written in Microsoft FORTRAN77 and implemented on a Tandon AT with a 287 math coprocessor. The program can also run on a VAX 750 mini-computer. It has a memory requirement of about 147 KB and was developed in 1989.

  1. Raman distributed temperature measurement at CERN high energy accelerator mixed field radiation test facility (CHARM)

    NASA Astrophysics Data System (ADS)

    Toccafondo, Iacopo; Nannipieri, Tiziano; Signorini, Alessandro; Guillermain, Elisa; Kuhnhenn, Jochen; Brugger, Markus; Di Pasquale, Fabrizio

    2015-09-01

    In this paper we present a validation of distributed Raman temperature sensing (RDTS) at the CERN high energy accelerator mixed field radiation test facility (CHARM), newly developed in order to qualify electronics for the challenging radiation environment of accelerators and connected high energy physics experiments. By investigating the effect of wavelength dependent radiation induced absorption (RIA) on the Raman Stokes and anti-Stokes light components in radiation tolerant Ge-doped multi-mode (MM) graded-index optical fibers, we demonstrate that Raman DTS used in a loop configuration is robust to harsh environments in which the fiber is exposed to a mixed radiation field. The temperature profiles measured on commercial Ge-doped optical fibers are fully reliable and can therefore be used to correct the RIA temperature dependence in distributed radiation sensing systems based on P-doped optical fibers.
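
    The temperature dependence exploited by Raman DTS, and distorted by wavelength-dependent RIA, is the anti-Stokes/Stokes intensity ratio; the sketch below inverts that ratio for temperature using nominal assumed values for the pump wavelength and silica Raman shift, and omits the loop-based loss correction described above.

```python
# Temperature from the anti-Stokes/Stokes intensity ratio used in Raman DTS:
#   R(T) = (nu_as / nu_s)**4 * exp(-h * dnu / (kB * T))
# Pump wavelength and Raman shift below are nominal assumptions; practical
# systems also correct for differential fiber loss (RIA), e.g. by
# interrogating the fiber in a loop as described above.
import numpy as np

h, kB, c = 6.626e-34, 1.381e-23, 3.0e8
lambda_pump = 1064e-9    # pump wavelength (m), nominal
dnu = 13.2e12            # Raman shift of silica (~13.2 THz), nominal

nu0 = c / lambda_pump
nu_s, nu_as = nu0 - dnu, nu0 + dnu

def ratio(T):
    return (nu_as / nu_s) ** 4 * np.exp(-h * dnu / (kB * T))

def temperature(R):
    return h * dnu / (kB * (4 * np.log(nu_as / nu_s) - np.log(R)))

R_meas = ratio(300.0)    # pretend measurement taken at 300 K
print(f"recovered temperature: {temperature(R_meas):.1f} K")
```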

  2. GASFLOW: A computational model to analyze accidents in nuclear containment and facility buildings

    SciTech Connect

    Travis, J.R. ); Nichols, B.D.; Wilson, T.L.; Lam, K.L.; Spore, J.W.; Niederauer, G.F. )

    1993-01-01

    GASFLOW is a finite-volume computer code that solves the time-dependent, compressible Navier-Stokes equations for multiple gas species. The fluid-dynamics algorithm is coupled to the chemical kinetics of combusting liquids or gases to simulate diffusion or propagating flames in complex geometries of nuclear containment or confinement and facilities' buildings. Fluid turbulence is calculated to enhance the transport and mixing of gases in rooms and volumes that may be connected by a ventilation system. The ventilation system may consist of extensive ductwork, filters, dampers or valves, and fans. Condensation and heat transfer to walls, floors, ceilings, and internal structures are calculated to model the appropriate energy sinks. Solid and liquid aerosol behavior is simulated to give the time and space inventory of radionuclides. The solution procedure of the governing equations is a modified Los Alamos ICE'd-ALE methodology. Complex facilities can be represented by separate computational domains (multiblocks) that communicate through overlapping boundary conditions. The ventilation system is superimposed throughout the multiblock mesh. Gas mixtures and aerosols are transported through the free three-dimensional volumes and the restricted one-dimensional ventilation components as the accident and fluid flow fields evolve. Combustion may occur if sufficient fuel and reactant or oxidizer are present and have an ignition source. Pressure and thermal loads on the building, structural components, and safety-related equipment can be determined for specific accident scenarios. GASFLOW calculations have been compared with large oil-pool fire tests in the 1986 HDR containment test T52.14, which is a 3000-kW fire experiment. The computed results are in good agreement with the observed data.

  3. Spatially Resolved Temperature and Water Vapor Concentration Distributions in Supersonic Combustion Facilities by TDLAT

    NASA Technical Reports Server (NTRS)

    Busa, K. M.; McDaniel J. C.; Diskin, G. S.; DePiro, M. J.; Capriotti, D. P.; Gaffney, R. L.

    2012-01-01

    Detailed knowledge of the internal structure of high-enthalpy flows can provide valuable insight to the performance of scramjet combustors. Tunable Diode Laser Absorption Spectroscopy (TDLAS) is often employed to measure temperature and species concentration. However, TDLAS is a path-integrated line-of-sight (LOS) measurement, and thus does not produce spatially resolved distributions. Tunable Diode Laser Absorption Tomography (TDLAT) is a non-intrusive measurement technique for determining two-dimensional spatially resolved distributions of temperature and species concentration in high enthalpy flows. TDLAT combines TDLAS with tomographic image reconstruction. More than 2500 separate line-of-sight TDLAS measurements are analyzed in order to produce highly resolved temperature and species concentration distributions. Measurements have been collected at the University of Virginia's Supersonic Combustion Facility (UVaSCF) as well as at the NASA Langley Direct-Connect Supersonic Combustion Test Facility (DCSCTF). Due to the UVaSCF's unique electrical heating and ability for vitiate addition, measurements collected at the UVaSCF are presented as a calibration of the technique. Measurements collected at the DCSCTF required significant modifications to system hardware and software designs due to its larger measurement area and shorter test duration. Tomographic temperature and water vapor concentration distributions are presented from experimentation on the UVaSCF operating at a high temperature non-reacting case for water vitiation level of 12%. Initial LOS measurements from the NASA Langley DCSCTF operating at an equivalence ratio of 0.5 are also presented. Results show the capability of TDLAT to adapt to several experimental setups and test parameters.

  4. Availability and distribution of safe abortion services in rural areas: a facility assessment study in Madhya Pradesh, India

    PubMed Central

    Chaturvedi, Sarika; Ali, Sayyed; Randive, Bharat; Sabde, Yogesh; Diwan, Vishal; De Costa, Ayesha

    2015-01-01

    Background: Unsafe abortion contributes to a significant portion of maternal mortality in India. Access to safe abortion care is known to reduce maternal mortality. Availability and distribution of abortion care facilities can influence women's access to these services, especially in rural areas. Objectives: To assess the availability and distribution of abortion care at facilities providing childbirth care in three districts of Madhya Pradesh (MP) province of India. Design: Three socio-demographically heterogeneous districts of MP were selected for this study. Facilities conducting at least 10 deliveries a month were surveyed to assess availability and provision of abortion services using UN signal functions for emergency obstetric care. A Geographical Information System was used for visualisation of the distribution of facilities. Results: The three districts had 99 facilities that conducted >10 deliveries a month: 74 in the public and 25 in the private sector. Overall, 48% of facilities reported an ability to provide safe surgical abortion service. Of public centres, 32% reported the ability compared to 100% among private centres, while 18% of public centres and 77% of private centres had performed an abortion in the last 3 months. The availability of abortion services was higher at higher facility levels with better equipped and skilled personnel availability, in urban areas and in private sector facilities. Conclusions: Findings showed that availability of safe abortion care is limited, especially in rural areas. More emphasis on providing safe abortion services, particularly at primary care level, is important to more significantly dent maternal mortality in India. PMID:25797220

  5. Assessment of the Distribution of Toxic Release Inventory Facilities in Metropolitan Charleston: An Environmental Justice Case Study

    PubMed Central

    Fraser-Rahim, Herb; Williams, Edith; Zhang, Hongmei; Rice, LaShanta; Svendsen, Erik; Abara, Winston

    2012-01-01

    Objectives. We assessed spatial disparities in the distribution of Toxic Release Inventory (TRI) facilities in Charleston, SC. Methods. We used spatial methods and regression to assess burden disparities in the study area at the block and census-tract levels by race/ethnicity and socioeconomic status (SES). Results. Results revealed an inverse relationship between distance to TRI facilities and race/ethnicity and SES at the block and census-tract levels. Results of regression analyses showed a positive association between presence of TRI facilities and high percentage non-White and a negative association between number of TRI facilities and high SES. Conclusions. There are burden disparities in the distribution of TRI facilities in Charleston at the block and census-tract level by race/ethnicity and SES. Additional research is needed to understand cumulative risk in the region. PMID:22897529

  6. Assessment of the distribution of toxic release inventory facilities in metropolitan Charleston: an environmental justice case study.

    PubMed

    Wilson, Sacoby M; Fraser-Rahim, Herb; Williams, Edith; Zhang, Hongmei; Rice, LaShanta; Svendsen, Erik; Abara, Winston

    2012-10-01

    We assessed spatial disparities in the distribution of Toxic Release Inventory (TRI) facilities in Charleston, SC. We used spatial methods and regression to assess burden disparities in the study area at the block and census-tract levels by race/ethnicity and socioeconomic status (SES). Results revealed an inverse relationship between distance to TRI facilities and race/ethnicity and SES at the block and census-tract levels. Results of regression analyses showed a positive association between presence of TRI facilities and high percentage non-White and a negative association between number of TRI facilities and high SES. There are burden disparities in the distribution of TRI facilities in Charleston at the block and census-tract level by race/ethnicity and SES. Additional research is needed to understand cumulative risk in the region.

  7. Distributed computing feasibility in a non-dedicated homogeneous distributed system

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Sun, Xian-He

    1993-01-01

    The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, the task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It is proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
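
    The task-ratio idea can be illustrated with a toy simulation: a parallel job is split across workstations whose owner processes preempt the parallel tasks, and the slowdown relative to a dedicated system is observed as the per-task demand grows. The workload model (Poisson owner arrivals, exponential service, preemptive owner priority) and all parameters below are assumptions made for this sketch, not the paper's analytical model.

```python
# Toy Monte Carlo illustration of the "task ratio" effect in a non-dedicated
# cluster; all parameters are hypothetical.
import random

def simulate_task(task_demand, owner_rate, owner_mean_service, rng):
    """Wall-clock time to finish one parallel task on a workstation whose
    owner processes arrive at rate owner_rate and preempt the task."""
    t, work_left = 0.0, task_demand
    while work_left > 0:
        next_owner = rng.expovariate(owner_rate)
        run = min(next_owner, work_left)
        t += run
        work_left -= run
        if work_left > 0:                        # preempted by an owner process
            t += rng.expovariate(1.0 / owner_mean_service)
    return t

rng = random.Random(42)
P = 16                                           # hypothetical workstation count
owner_rate, owner_mean_service = 0.2, 1.0
for per_task_demand in (0.2, 2.0, 20.0):
    task_ratio = per_task_demand / owner_mean_service
    runs = []
    for _ in range(500):
        # Job finishes when the slowest of the P preempted tasks finishes.
        runs.append(max(simulate_task(per_task_demand, owner_rate,
                                      owner_mean_service, rng)
                        for _ in range(P)))
    slowdown = (sum(runs) / len(runs)) / per_task_demand
    print(f"task ratio {task_ratio:5.1f}  job slowdown vs dedicated {slowdown:.2f}")
```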

  8. Achieving production-level use of HEP software at the Argonne Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Uram, T. D.; Childers, J. T.; LeCompte, T. J.; Papka, M. E.; Benjamin, D.

    2015-12-01

    HEP's demand for computing resources has grown beyond the capacity of the Grid, and these demands will accelerate with the higher energy and luminosity planned for Run II. Mira, the ten petaFLOPs supercomputer at the Argonne Leadership Computing Facility, is a potentially significant compute resource for HEP research. Through an award of fifty million hours on Mira, we have delivered millions of events to LHC experiments by establishing the means of marshaling jobs through serial stages on local clusters, and parallel stages on Mira. We are running several HEP applications, including Alpgen, Pythia, Sherpa, and Geant4. Event generators, such as Sherpa, typically have a split workload: a small scale integration phase, and a second, more scalable, event-generation phase. To accommodate this workload on Mira we have developed two Python-based Django applications, Balsam and ARGO. Balsam is a generalized scheduler interface which uses a plugin system for interacting with scheduler software such as HTCondor, Cobalt, and TORQUE. ARGO is a workflow manager that submits jobs to instances of Balsam. Through these mechanisms, the serial and parallel tasks within jobs are executed on the appropriate resources. This approach and its integration with the PanDA production system will be discussed.
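
    As a rough illustration of the plugin pattern described for Balsam (a generalized scheduler interface with backends such as Cobalt, HTCondor and TORQUE), the sketch below defines a minimal plugin base class and a Cobalt-style backend. The class names, method signatures and qsub/qstat handling are invented for illustration and are not the actual Balsam API.

```python
# Hypothetical scheduler-plugin interface in the spirit of the Balsam design;
# names and behaviour are illustrative, not the real Balsam code.
import abc
import subprocess

class SchedulerPlugin(abc.ABC):
    """Common interface each batch-scheduler backend must implement."""

    @abc.abstractmethod
    def submit(self, script_path: str, nodes: int, walltime_min: int) -> str:
        """Submit a job script and return the scheduler-assigned job id."""

    @abc.abstractmethod
    def status(self, job_id: str) -> str:
        """Return a normalized state such as 'queued', 'running' or 'done'."""

class CobaltPlugin(SchedulerPlugin):
    def submit(self, script_path, nodes, walltime_min):
        out = subprocess.run(
            ["qsub", "-n", str(nodes), "-t", str(walltime_min), script_path],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()            # Cobalt prints the job id

    def status(self, job_id):
        # Assumption for the sketch: qstat exits non-zero once the job is gone.
        out = subprocess.run(["qstat", job_id], capture_output=True, text=True)
        return "done" if out.returncode != 0 else "queued_or_running"

REGISTRY = {"cobalt": CobaltPlugin}   # condor/torque plugins would register here

def submit_job(scheduler_name, script_path, nodes, walltime_min):
    plugin = REGISTRY[scheduler_name]()
    return plugin.submit(script_path, nodes, walltime_min)
```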

  9. How to Reduce Computational Time in Distributed Hydrological Modeling?

    NASA Astrophysics Data System (ADS)

    Khan, U.; Tuteja, N. K.; Ajami, H.; Sharma, A.

    2012-12-01

    One of the key limitations of distributed hydrologic modeling for large-scale simulations of soil moisture and land surface fluxes is the computational time spent in simulating hydrological processes. It is for this reason that applications involving assessment of model uncertainty, or simulating multiple input realizations as is often needed to assess climate change impacts on a catchment, are not attempted, and models are applied to understand hydrological processes only in small, experimental catchments. The questions asked in this presentation are (a) whether one can simulate the catchment hydrology by simulating across multiple cross-sections in a hillslope; and (b) whether one can improve these simulations further by simulating on a single (or selected few) "Equivalent" cross-sections in the catchment. This new concept of an Equivalent Cross-section informed by the catchment landform is developed for upland catchments, to reduce computational time while maintaining the same order of accuracy in simulating hydrologic fluxes. The Unsaturated Soil Moisture Movement model (U3M-2d), based on a two-dimensional solution of Richards' equation, is used to simulate hydrologic fluxes. In this method, simulations with U3M-2d are first done for a number of uniformly spaced cross-sections in each Strahler first-order sub-basin and the total fluxes are estimated (reference case). Single or multiple Equivalent Cross-sections are then derived for each Strahler first-order sub-basin and results are compared against the reference case. To formulate the Equivalent Cross-section, the catchment is divided into four major landforms using the methodology developed by Khan et al. [2009] and then a range of weighting schemes for topographic variables and soil types are investigated. The Equivalent Cross-section approach is investigated for seven first-order sub-basins of the McLaughlin catchment of the Snowy River and the Wagga Wagga experimental catchment of NSW, Australia. Simulated fluxes by the

  10. Maintaining Traceability in an Evolving Distributed Computing Environment

    NASA Astrophysics Data System (ADS)

    Collier, I.; Wartel, R.

    2015-12-01

    The management of risk is fundamental to the operation of any distributed computing infrastructure. Identifying the cause of incidents is essential to prevent them from re-occurring. In addition, it is a goal to contain the impact of an incident while keeping services operational. For the response to incidents to be acceptable, it needs to be commensurate with the scale of the problem. The minimum level of traceability for distributed computing infrastructure usage is to be able to identify the source of all actions (executables, file transfers, pilot jobs, portal jobs, etc.) and the individual who initiated them. In addition, sufficiently fine-grained controls, such as blocking the originating user and monitoring to detect abnormal behaviour, are necessary for keeping services operational. It is essential to be able to understand the cause and to fix any problems before re-enabling access for the user. The aim is to be able to answer the basic questions who, what, where, and when concerning any incident. This requires retaining all relevant information, including timestamps and the digital identity of the user, sufficient to identify, for each service instance, every security event, including at least the following: connect, authenticate, authorize (including identity changes) and disconnect. In traditional grid infrastructures (WLCG, EGI, OSG, etc.) best practices and procedures for gathering and maintaining the information required to maintain traceability are well established. In particular, sites collect and store information required to ensure traceability of events at their sites. With the increased use of virtualisation and private and public clouds for HEP workloads, established procedures, which are unable to see 'inside' running virtual machines, no longer capture all the information required. Maintaining traceability will at least involve a shift of responsibility from sites to Virtual Organisations (VOs), bringing with it new requirements for their
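
    A minimal sketch of the kind of traceability record implied above: one structured entry per security event (connect, authenticate, authorize, disconnect) carrying the who/what/where/when information. The field names and the JSON-lines storage are assumptions for illustration, not a WLCG/EGI/OSG standard.

```python
# Illustrative traceability event record; field names and format are assumptions.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TraceEvent:
    timestamp: str         # when
    service_instance: str  # where
    event: str             # what: connect | authenticate | authorize | disconnect
    user_dn: str           # who: digital identity (e.g. an X.509 subject DN)
    effective_identity: str = ""   # records identity changes on authorization
    source_ip: str = ""

def log_event(path, event: TraceEvent):
    """Append one event as a JSON line, so records can be retained and searched."""
    with open(path, "a") as fh:
        fh.write(json.dumps(asdict(event)) + "\n")

log_event("trace.log", TraceEvent(
    timestamp=datetime.now(timezone.utc).isoformat(),
    service_instance="ce01.example.org:9619",
    event="authorize",
    user_dn="/DC=org/DC=example/CN=Jane Analyst",
    effective_identity="pilot042",
    source_ip="192.0.2.17"))
```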

  11. Addressing capability computing challenges of high-resolution global climate modelling at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, Valentine; Norman, Matthew; Evans, Katherine; Taylor, Mark; Worley, Patrick; Hack, James; Mayer, Benjamin

    2014-05-01

    During 2013, high-resolution climate model simulations accounted for over 100 million "core hours" using Titan at the Oak Ridge Leadership Computing Facility (OLCF). The suite of climate modeling experiments, primarily using the Community Earth System Model (CESM) at nearly 0.25 degree horizontal resolution, generated over a petabyte of data and nearly 100,000 files, ranging in size from 20 MB to over 100 GB. Effective utilization of leadership-class resources requires careful planning and preparation. The application software, such as CESM, needs to be ported, optimized and benchmarked for the target platform in order to meet the computational readiness requirements. The model configuration needs to be "tuned and balanced" for the experiments. This can be a complicated and resource-intensive process, especially for high-resolution configurations using complex physics. The volume of I/O also increases with resolution, and new strategies may be required to manage I/O, especially for large checkpoint and restart files that may require more frequent output for resiliency. It is also essential to monitor the application performance during the course of the simulation exercises. Finally, the large volume of data needs to be analyzed to derive the scientific results, and appropriate data and information delivered to the stakeholders. Titan is currently the largest supercomputer available for open science. The computational resources, in terms of "titan core hours", are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) and ASCR Leadership Computing Challenge (ALCC) programs, both sponsored by the U.S. Department of Energy (DOE) Office of Science. Titan is a Cray XK7 system, capable of a theoretical peak performance of over 27 PFlop/s, consisting of 18,688 compute nodes, with an NVIDIA Kepler K20 GPU and a 16-core AMD Opteron CPU in every node, for a total of 299,008 Opteron cores and 18,688 GPUs offering a cumulative 560

  12. Protein folding by distributed computing and the denatured state ensemble.

    PubMed

    Marianayagam, Neelan J; Fawzi, Nicolas L; Head-Gordon, Teresa

    2005-11-15

    The distributed computing (DC) paradigm in conjunction with the folding@home (FH) client server has been used to study the folding kinetics of small peptides and proteins, giving excellent agreement with experimentally measured folding rates, although pathways sampled in these simulations are not always consistent with the folding mechanism. In this study, we use a coarse-grain model of protein L, whose two-state kinetics have been characterized in detail by using long-time equilibrium simulations, to rigorously test a FH protocol using approximately 10,000 short-time, uncoupled folding simulations starting from an extended state of the protein. We show that the FH results give non-Poisson distributions and early folding events that are unphysical, whereas longer folding events experience a correct barrier to folding but are not representative of the equilibrium folding ensemble. Using short-time, uncoupled folding simulations started from an equilibrated denatured state ensemble (DSE), we also do not get agreement with the equilibrium two-state kinetics because of overrepresented folding events arising from higher energy subpopulations in the DSE. The DC approach using uncoupled short trajectories can make contact with traditionally measured experimental rates and folding mechanism when starting from an equilibrated DSE, when the simulation time is long enough to sample the lowest energy states of the unfolded basin and the simulated free-energy surface is correct. However, the DC paradigm, together with faster time-resolved and single-molecule experiments, can also reveal the breakdown in the two-state approximation due to observation of folding events from higher energy subpopulations in the DSE.
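
    One concrete form of the consistency check discussed above: for genuine two-state kinetics, folding first-passage times collected from many short, uncoupled runs should follow a single exponential (Poisson statistics). The sketch below fits a rate to synthetic folding times and applies a Kolmogorov-Smirnov test; the data are invented, and because the rate is fitted from the same sample the p-value is only indicative.

```python
# Check a set of folding first-passage times against a single-exponential model.
# The "observed" times are synthetic; a real analysis would use trajectory data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Mostly exponential times, contaminated with a fast, unphysical early-folding
# population of the kind the FH protocol test revealed.
true_rate = 1.0 / 50.0                                   # hypothetical units
times = np.concatenate([rng.exponential(1 / true_rate, 900),
                        rng.exponential(2.0, 100)])      # early, fast events

k_fit = 1.0 / times.mean()                               # MLE of the folding rate
ks_stat, p_value = stats.kstest(times, "expon", args=(0, 1.0 / k_fit))
print(f"fitted rate {k_fit:.4f}, KS p-value {p_value:.3g}")
# A small p-value flags non-Poisson statistics, i.e. folding events that are
# not consistent with simple two-state kinetics.
```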

  13. Use of the Web by a Distributed Research group Performing Distributed Computing

    NASA Astrophysics Data System (ADS)

    Burke, David A.; Peterkin, Robert E.

    2001-06-01

    A distributed research group that uses distributed computers faces a spectrum of challenges--some of which can be met by using various electronic means of communication. The particular challenge of our group involves three physically separated research entities. We have had to link two collaborating groups at AFRL and NRL together for software development, and the same AFRL group with a LANL group for software applications. We are developing and using a pair of general-purpose, portable, parallel, unsteady, plasma physics simulation codes. The first collaboration is centered around a formal weekly video teleconference on relatively inexpensive equipment that we have set up in convenient locations in our respective laboratories. The formal virtual meetings are augmented with informal virtual meetings as the need arises. Both collaborations share research data in a variety of forms on a secure URL that is set up behind the firewall at the AFRL. Of course, a computer-generated animation is a particularly efficient way of displaying results from time-dependent numerical simulations, so we generally like to post such animations (along with proper documentation) on our web page. In this presentation, we will discuss some of our accomplishments and disappointments.

  14. High-performance computing, high-speed networks, and configurable computing environments: progress toward fully distributed computing.

    PubMed

    Johnston, W E; Jacobson, V L; Loken, S C; Robertson, D W; Tierney, B L

    1992-01-01

    The next several years will see the maturing of a collection of technologies that will enable fully and transparently distributed computing environments. Networks will be used to configure independent computing, storage, and I/O elements into "virtual systems" that are optimal for solving a particular problem. This environment will make the most powerful computing systems those that are logically assembled from network-based components and will also make those systems available to a widespread audience. Anticipating that the necessary technology and communications infrastructure will be available in the next 3 to 5 years, we are developing and demonstrating prototype applications that test and exercise the currently available elements of this configurable environment. The Lawrence Berkeley Laboratory (LBL) Information and Computing Sciences and Research Medicine Divisions have collaborated with the Pittsburgh Supercomputer Center to demonstrate one distributed application that illuminates the issues and potential of using networks to configure virtual systems. This application allows the interactive visualization of large three-dimensional (3D) scalar fields (voxel data sets) by using a network-based configuration of heterogeneous supercomputers and workstations. The specific test case is visualization of 3D magnetic resonance imaging (MRI) data. The virtual system architecture consists of a Connection Machine-2 (CM-2) that performs surface reconstruction from the voxel data, a Cray Y-MP that renders the resulting geometric data into an image, and a workstation that provides the display of the image and the user interface for specifying the parameters for the geometry generation and 3D viewing. These three elements are configured into a virtual system by using several different network technologies. This paper reviews the current status of the software, hardware, and communications technologies that are needed to enable this configurable environment. These

  15. Systems analysis of the space shuttle. [communication systems, computer systems, and power distribution

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.; Oh, S. J.; Thau, F.

    1975-01-01

    Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.
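
    To make the bit-rate-compression idea concrete, the sketch below implements basic linear delta modulation: each sample is reduced to one bit indicating whether the reconstructed signal steps up or down. The step size and test waveform are illustrative; the shuttle study used a high-speed (adaptive) variant rather than this simplest form.

```python
# Basic linear delta modulation: 1 bit per sample, fixed step size.
import math

def dm_encode(samples, step):
    bits, approx = [], 0.0
    for s in samples:
        bit = 1 if s >= approx else 0
        approx += step if bit else -step
        bits.append(bit)
    return bits

def dm_decode(bits, step):
    out, approx = [], 0.0
    for bit in bits:
        approx += step if bit else -step
        out.append(approx)
    return out

signal = [math.sin(2 * math.pi * n / 64) for n in range(256)]   # test waveform
bits = dm_encode(signal, step=0.1)                              # 1 bit/sample
recon = dm_decode(bits, step=0.1)
err = max(abs(a - b) for a, b in zip(signal, recon))
print(f"{len(bits)} bits transmitted, max reconstruction error {err:.3f}")
```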

  16. The 3-D General Geometry PIC Software for Distributed Memory MIMD Computers; EM Software Specification

    DTIC Science & Technology

    1994-09-01

    General Geometry PIC Software for Distributed Memory MIMD Computers: Task 1 Final Report. J W Eastwood, W Arter, N J Brealey, R W Hockney, September 1994. ... General geometry PIC for MIMD computers: Final report. Report RFFX(93)56,

  17. [Elderlies in street situation or social vulnerability: facilities and difficulties in the use of computational tools].

    PubMed

    Frias, Marcos Antonio da Eira; Peres, Heloisa Helena Ciqueto; Pereira, Valclei Aparecida Gandolpho; Negreiros, Maria Célia de; Paranhos, Wana Yeda; Leite, Maria Madalena Januário

    2014-01-01

    This study aimed to identify the advantages and difficulties encountered by older people living on the streets or in social vulnerability when using computers or the internet. It is exploratory qualitative research in which five elderly people attending a non-governmental organization located in the city of São Paulo participated. The discourses were analyzed using the content analysis technique and showed, among the facilitating factors, clarifying doubts with the monitors, the stimulus for new discoveries coupled with proactivity and curiosity, and developing new skills. The difficulties mentioned were related to physical or cognitive issues, lack of an instructor, and lack of knowledge of how to interact with the machine. Studies focusing on the elderly population living on the streets or in social vulnerability may contribute evidence to guide the formulation of public policies for this population.

  18. Characterizing W-2 SLSF experiment temperature oscillations using computer graphics. [Sodium Loop Safety Facility

    SciTech Connect

    Smith, D.E.

    1983-06-23

    The W-2 SLSF (Sodium Loop Safety Facility) experiment was an instrumented in-reactor test performed to characterize the failure response of full-length, preconditioned LMFBR prototypic fuel pins to slow transient overpower (TOP) conditions. Although the test results were expected to confirm analytical predictions of upper level failure and fuel expulsion, an axial midplane failure was experienced. Extensive post-test analyses were conducted to understand all of the unexpected behavior in the experiment. (1) The initial post-test effort focused on the temperature oscillations recorded by the 54 thermocouples used in the experiment. In order to synthesize the extensive data records and identify patterns of behavior in the data records, a computer-generated film was used to present the temperature data recorded during the experiment.

  19. Facility Microgrids

    SciTech Connect

    Ye, Z.; Walling, R.; Miller, N.; Du, P.; Nelson, K.

    2005-05-01

    Microgrids are receiving considerable interest from the power industry, partly because their business and technical structure shows promise as a means of taking full advantage of distributed generation. This report investigates three issues associated with facility microgrids: (1) unintentional islanding protection for multiple-distributed-generation facility microgrids, (2) facility microgrids' response to bulk grid disturbances, and (3) facility microgrids' intentional islanding.

  20. A design study for the upgraded ALICE O2 computing facility

    NASA Astrophysics Data System (ADS)

    Richter, Matthias

    2015-12-01

    An upgrade of the ALICE detector is currently prepared for the Run 3 period of the Large Hadron Collider (LHC) at CERN starting in 2020. The physics topics under study by ALICE during this period will require the inspection of all collisions at a rate of 50 kHz for minimum bias Pb-Pb and 200 kHz for pp and p-Pb collisions in order to extract physics signals embedded into a large background. The upgraded ALICE detector will produce more than 1 TByte/s of data. Both collision and data rate impose new challenges onto the detector readout and compute system. Some detectors will not use a triggered readout, which will require a continuous processing of the detector data. The challenging requirements will be met by a combined online and offline facility developed and managed by the ALICE O2 project. The combined facility will accommodate the necessary substantial increase of data taking rate. In this paper we present first results of a prototype with estimates for scalability and feasibility for a full scale system.

  1. The FOSS GIS Workbench on the GFZ Load Sharing Facility compute cluster

    NASA Astrophysics Data System (ADS)

    Löwe, P.; Klump, J.; Thaler, J.

    2012-04-01

    Compute clusters can be used as GIS workbenches, their wealth of resources allow us to take on geocomputation tasks which exceed the limitations of smaller systems. To harness these capabilities requires a Geographic Information System (GIS), able to utilize the available cluster configuration/architecture and a sufficient degree of user friendliness to allow for wide application. In this paper we report on the first successful porting of GRASS GIS, the oldest and largest Free Open Source (FOSS) GIS project, onto a compute cluster using Platform Computing's Load Sharing Facility (LSF). In 2008, GRASS6.3 was installed on the GFZ compute cluster, which at that time comprised 32 nodes. The interaction with the GIS was limited to the command line interface, which required further development to encapsulate the GRASS GIS business layer to facilitate its use by users not familiar with GRASS GIS. During the summer of 2011, multiple versions of GRASS GIS (v 6.4, 6.5 and 7.0) were installed on the upgraded GFZ compute cluster, now consisting of 234 nodes with 480 CPUs providing 3084 cores. The GFZ compute cluster currently offers 19 different processing queues with varying hardware capabilities and priorities, allowing for fine-grained scheduling and load balancing. After successful testing of core GIS functionalities, including the graphical user interface, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues. The mechanisms are based on earlier work by NETELER et al. (2008). A first application of the new GIS functionality was the generation of maps of simulated tsunamis in the Mediterranean Sea for the Tsunami Atlas of the FP-7 TRIDEC Project (www.tridec-online.eu). For this, up to 500 processing nodes were used in parallel. Further trials included the processing of geometrically complex problems, requiring significant amounts of processing time. The GIS cluster successfully completed all these tasks, with processing times
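
    A sketch of how a scripted GRASS geocomputation task might be dispatched to an LSF processing queue, in the spirit of the deployment described above. The queue name, paths, the GRASS 6.x GRASS_BATCH_JOB mechanism and the bsub options shown are assumptions for illustration, not the actual GFZ configuration.

```python
# Dispatch scripted GRASS GIS batch jobs to an LSF queue (illustrative only).
import subprocess

def submit_grass_job(batch_script, mapset_path, queue="normal", cores=1):
    """Wrap a GRASS batch script in an LSF bsub submission."""
    # GRASS 6.x runs non-interactively when GRASS_BATCH_JOB points at a script.
    cmd = (f"export GRASS_BATCH_JOB={batch_script} && "
           f"grass64 -text {mapset_path}")
    bsub = ["bsub", "-q", queue, "-n", str(cores),
            "-o", "grass_%J.out", "-e", "grass_%J.err",
            "/bin/sh", "-c", cmd]
    out = subprocess.run(bsub, capture_output=True, text=True, check=True)
    return out.stdout.strip()          # e.g. "Job <12345> is submitted ..."

# Hypothetical usage: one tsunami map per scenario, each as its own cluster job.
for scenario in ("med_001", "med_002"):
    print(submit_grass_job(f"/work/scripts/{scenario}.sh",
                           "/work/grassdata/med/PERMANENT"))
```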

  2. An Evaluation of Biosurveillance Grid—Dynamic Algorithm Distribution Across Multiple Computer Nodes

    PubMed Central

    Tsai, Ming-Chi; Tsui, Fu-Chiang; Wagner, Michael M.

    2007-01-01

    Performing fast data analysis to detect disease outbreaks plays a critical role in real-time biosurveillance. In this paper, we described and evaluated an Algorithm Distribution Manager Service (ADMS) based on grid technologies, which dynamically partitions and distributes detection algorithms across multiple computers. We compared the execution time to perform the analysis on a single computer and on a grid network (3 computing nodes) with and without using dynamic algorithm distribution. We found that algorithms with long runtimes completed approximately three times earlier in the distributed environment than on a single computer, while short-runtime algorithms performed worse in the distributed environment. A dynamic algorithm distribution approach also performed better than a static algorithm distribution approach. This pilot study shows great potential to reduce lengthy analysis time through dynamic algorithm partitioning and parallel processing, and provides the opportunity to distribute algorithms from a client to remote computers in a grid network. PMID:18693936
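
    The core scheduling idea, partitioning detection algorithms across nodes by estimated runtime, can be sketched with a simple longest-runtime-first greedy assignment, as shown below; the ADMS itself does this over grid middleware rather than in-process, and the runtimes are invented.

```python
# Greedy longest-runtime-first assignment of algorithms to compute nodes.
import heapq

def distribute(algorithms, n_nodes):
    """algorithms: list of (name, estimated_runtime_seconds)."""
    heap = [(0.0, node) for node in range(n_nodes)]   # (assigned load, node id)
    heapq.heapify(heap)
    assignment = {node: [] for node in range(n_nodes)}
    for name, runtime in sorted(algorithms, key=lambda a: -a[1]):
        load, node = heapq.heappop(heap)              # least-loaded node so far
        assignment[node].append(name)
        heapq.heappush(heap, (load + runtime, node))
    makespan = max(load for load, _ in heap)
    return assignment, makespan

algos = [("CUSUM", 40), ("EWMA", 35), ("spatial-scan", 300), ("regression", 120)]
plan, makespan = distribute(algos, n_nodes=3)
print(plan)
print("estimated completion time:", makespan, "s  (vs",
      sum(r for _, r in algos), "s on a single computer)")
```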

  3. An evaluation of biosurveillance grid--dynamic algorithm distribution across multiple computer nodes.

    PubMed

    Tsai, Ming-Chi; Tsui, Fu-Chiang; Wagner, Michael M

    2007-10-11

    Performing fast data analysis to detect disease outbreaks plays a critical role in real-time biosurveillance. In this paper, we described and evaluated an Algorithm Distribution Manager Service (ADMS) based on grid technologies, which dynamically partitions and distributes detection algorithms across multiple computers. We compared the execution time to perform the analysis on a single computer and on a grid network (3 computing nodes) with and without using dynamic algorithm distribution. We found that algorithms with long runtimes completed approximately three times earlier in the distributed environment than on a single computer, while short-runtime algorithms performed worse in the distributed environment. A dynamic algorithm distribution approach also performed better than a static algorithm distribution approach. This pilot study shows great potential to reduce lengthy analysis time through dynamic algorithm partitioning and parallel processing, and provides the opportunity to distribute algorithms from a client to remote computers in a grid network.

  4. Computational spectroscopy using the Quantum ESPRESSO distribution (Invited)

    NASA Astrophysics Data System (ADS)

    Baroni, S.; Giannozzi, P.

    2009-12-01

    Quantum ESPRESSO (QE) [1,2] is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials. QE is freely available to researchers around the world under the terms of the GNU General Public Licence. In this talk I will introduce the QE distribution, with emphasis on some of its features that may appeal to the Earth Sciences and Mineralogy communities. I will focus on the determination of vibrational frequencies to be used for spectroscopic purposes, for the determination of soft modes leading to mechanical instabilities, and as ingredients for the simulation of thermal properties in the (quasi-)harmonic approximations. I will conclude with some recent developments which allow for the simulation of electronic (absorption and photo-emission) spectroscopies, using many-body and time-dependent density-functional perturbation theories. [1] P. Giannozzi et al. J. Phys.: Condens. Matter 21, 395502 (2009); http://dx.doi.org/10.1088/0953-8984/21/39/395502 [2] http://www.quantum-espresso.org

  5. Parallelizing Sylvester-like operations on a distributed memory computer

    SciTech Connect

    Hu, D.Y.; Sorensen, D.C.

    1994-12-31

    Discretization of linear operators arising in applied mathematics often leads to matrices with the following structure: M(x) = (D ⊗ A + B ⊗ I_n + V)x, where x ∈ R^(mn), B, D ∈ R^(n×n), A ∈ R^(m×m), and V ∈ R^(mn×mn); both D and V are diagonal. For notational convenience, the authors assume that both A and B are symmetric. All the results throughout this paper can be easily extended to the cases with general A and B. The linear operator on R^(mn) defined above can be viewed as a generalization of the Sylvester operator: S(x) = (I_m ⊗ A + B ⊗ I_n)x. The authors therefore refer to it as a Sylvester-like operator. The schemes discussed in this paper therefore also apply to the Sylvester operator. In this paper, the authors present the SIMD scheme for parallelization of the Sylvester-like operator on a distributed memory computer. This scheme is designed to approach the best possible efficiency by avoiding unnecessary communication among processors.
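
    A sketch of how such an operator can be applied without assembling the full mn-by-mn matrix, using the Kronecker/vec identity (P ⊗ Q) vec(X) = vec(Q X Pᵀ) with column-major vec. For the sum to be conformable, the identity factor is taken here to be of size m (an assumption about the intended dimensions); D and V are kept diagonal as stated in the abstract.

```python
# Matrix-free application of a Sylvester-like operator via Kronecker structure.
import numpy as np

def apply_M(x, A, B, d, v):
    """y = (D ⊗ A + B ⊗ I_m + V) x, with D = diag(d), V = diag(v)."""
    m, n = A.shape[0], B.shape[0]
    X = x.reshape((m, n), order="F")           # column-major vec^{-1}
    Y = A @ X * d                              # (D ⊗ A) x  ->  A X D
    Y += X @ B.T                               # (B ⊗ I_m) x ->  X Bᵀ
    return Y.reshape(-1, order="F") + v * x    # + V x

# Check against the explicitly assembled operator on a small problem.
rng = np.random.default_rng(0)
m, n = 4, 3
A = rng.standard_normal((m, m)); A = A + A.T   # symmetric, as assumed above
B = rng.standard_normal((n, n)); B = B + B.T
d, v = rng.standard_normal(n), rng.standard_normal(m * n)
x = rng.standard_normal(m * n)
M = np.kron(np.diag(d), A) + np.kron(B, np.eye(m)) + np.diag(v)
print(np.allclose(M @ x, apply_M(x, A, B, d, v)))   # True
```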

  6. Above the cloud computing orbital services distributed data model

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2014-05-01

    Technology miniaturization and system architecture advancements have created an opportunity to significantly lower the cost of many types of space missions by sharing capabilities between multiple spacecraft. Historically, most spacecraft have been atomic entities that (aside from their communications with and tasking by ground controllers) operate in isolation. Several notable examples exist; however, these are purpose-designed systems that collaborate to achieve a single goal. The above-the-cloud computing (ATCC) concept aims to create ad-hoc collaboration between service provider and consumer craft. Consumer craft can procure processing, data transmission, storage, imaging and other capabilities from provider craft. Because of onboard storage limitations, communications link capability limitations and limited windows of communication, data relevant to or required for various operations may span multiple craft. This paper presents a model for the identification, storage and accessing of this data. This model includes appropriate identification features for this highly distributed environment. It also deals with business model constraints such as data ownership, retention and the rights of the storing craft to access, resell, transmit or discard the data in its possession. The model ensures data integrity and confidentiality (to the extent applicable to a given data item), deals with unique constraints of the orbital environment and tags data with business model (contractual) obligation data.
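
    As a purely illustrative reading of the model (not the paper's actual schema), a data item in such a network could carry identification plus business-model metadata along the following lines; all class and field names are invented.

```python
# Illustrative data-item record with ownership, retention and rights metadata.
from dataclasses import dataclass
from enum import Flag, auto

class Rights(Flag):
    ACCESS = auto()
    RESELL = auto()
    TRANSMIT = auto()
    DISCARD = auto()

@dataclass
class OrbitalDataItem:
    item_id: str                  # unique identification across the constellation
    owner_craft: str              # consumer craft that procured the product
    storing_craft: str            # provider craft currently holding the data
    created_utc: str
    retain_until_utc: str         # retention obligation
    storing_rights: Rights        # what the storing craft may do with the data
    sha256: str = ""              # integrity check
    confidential: bool = True

item = OrbitalDataItem(
    item_id="img-000123", owner_craft="consumer-7", storing_craft="provider-2",
    created_utc="2014-05-01T12:00:00Z", retain_until_utc="2014-06-01T00:00:00Z",
    storing_rights=Rights.ACCESS | Rights.TRANSMIT)
print(bool(item.storing_rights & Rights.RESELL))   # False: resale right not granted
```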

  7. Distributed Sensor Network With Collective Computation For Situational Awareness

    NASA Astrophysics Data System (ADS)

    Dreicer, Jared S.; Jorgensen, Anders M.; Dors, Eric E.

    2002-10-01

    Initiated under Laboratory Directed R&D funding we have engaged in empirical studies, theory development, and initial hardware development for a ground-based Distributed Sensor Network with Collective Computation (DSN-CC). A DSN-CC is a network that uses node-to-node communication and on-board processing to achieve gains in response time, power usage, communication bandwidth, detection resolution, and robustness. DSN-CCs are applicable to both military and civilian problems where massive amounts of data gathered over a large area must be processed to yield timely conclusions. We have built prototype hardware DSN-CC nodes. Each node has self-contained power and is 6"×10"×2". Each node contains a battery pack with power feed from a solar panel that forms the lid, a central processing board, a GPS card, and radio card. Further system properties will be discussed, as will scenarios in which the system might be used to counter Nuclear/Biological/Chemical (NBC) threats of unconventional warfare. Mid-year in FY02 this DSN-CC research project received funding from the Office of Nonproliferation Research and Engineering (NA-22), NNSA to support nuclear proliferation technology development.

  8. Automatic distribution of vision-tasks on computing clusters

    NASA Astrophysics Data System (ADS)

    Müller, Thomas; Tran, Binh An; Knoll, Alois

    2011-01-01

    In this paper, a consistent, efficient, yet convenient system for parallel computer vision, and in fact also real-time actuator control, is proposed. The system implements the multi-agent paradigm and a blackboard information storage. This, in combination with a generic interface for hardware abstraction and integration of external software components, is set up on the basis of the Message Passing Interface (MPI). The system allows for data- and task-parallel processing, and supports both synchronous communication strategies, in which data exchange is triggered by events, and asynchronous strategies, in which data are polled. Also, by duplication of processing units (agents), redundant processing is possible to achieve greater robustness. As the system automatically distributes the task units to available resources, and a monitoring concept allows for the combination of tasks and their composition into complex processes, it is easy to develop efficient parallel vision/robotics applications quickly. Multiple vision-based applications have already been implemented, including academic, research-related fields and prototypes for industrial automation. For the scientific community, the system has recently been released as open source.
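
    The underlying MPI task-distribution pattern can be sketched as a small task farm: rank 0 hands out vision tasks and collects results while workers process until told to stop. This is not the paper's agent/blackboard framework, only the MPI core it builds on; task content and tags are placeholders. Run with something like mpiexec -n 4.

```python
# task_farm.py -- minimal MPI task farm (assumes 2+ ranks and more tasks than workers).
from mpi4py import MPI

TAG_TASK, TAG_RESULT, TAG_STOP = 1, 2, 3

def process(task):                    # stand-in for a real vision operation
    return task, sum(ord(c) for c in task)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:                                        # master: distribute tasks
    tasks = [f"frame_{i:04d}" for i in range(20)]
    n_tasks, results = len(tasks), []
    status = MPI.Status()
    for worker in range(1, size):                    # prime each worker
        comm.send(tasks.pop(), dest=worker, tag=TAG_TASK)
    for _ in range(n_tasks):                         # collect, then refill or stop
        res = comm.recv(source=MPI.ANY_SOURCE, tag=TAG_RESULT, status=status)
        results.append(res)
        worker = status.Get_source()
        if tasks:
            comm.send(tasks.pop(), dest=worker, tag=TAG_TASK)
        else:
            comm.send(None, dest=worker, tag=TAG_STOP)
    print(f"collected {len(results)} results")
else:                                                # worker: process until stopped
    status = MPI.Status()
    while True:
        task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        comm.send(process(task), dest=0, tag=TAG_RESULT)
```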

  9. Client/server models for transparent, distributed computational resources

    SciTech Connect

    Hammer, K.E.; Gilman, T.L.

    1991-01-01

    Client/server models are proposed to address issues of shared resources in a distributed, heterogeneous UNIX environment. The recent development of an automated Remote Procedure Call (RPC) interface generator has simplified the development of client/server models. Previously, implementation of the models was only possible at the UNIX socket level. An overview of RPCs and the interface generator will be presented and will include a discussion of generation and installation of remote services, the RPC paradigm, and the three levels of RPC programming. Two applications, the Nuclear Plant Analyzer (NPA) and a fluids simulation using molecular modelling, will be presented to demonstrate how client/server models using RPCs and External Data Representation (XDR) have been used in production/computation situations. The NPA incorporates a client/server interface for transferring/translating TRAC or RELAP results from the UNICOS Cray to a UNIX workstation. The fluids simulation program utilizes the client/server model to access the Cray via a single function, allowing it to become a shared co-processor to the workstation application. 5 refs., 6 figs.
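
    The client/server idea can be illustrated with Python's standard-library XML-RPC standing in for ONC RPC/rpcgen and XDR: the server registers a procedure, and the client calls it as if it were local, with marshalling handled by the RPC layer. The service and function names below are invented for illustration.

```python
# RPC-style client/server sketch using the standard-library XML-RPC modules.
import threading
import time
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def translate_results(values, scale):
    """Stand-in for a server-side translation of simulation output."""
    return [v * scale for v in values]

def run_server():
    server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True,
                                logRequests=False)
    server.register_function(translate_results)
    server.serve_forever()

threading.Thread(target=run_server, daemon=True).start()
time.sleep(0.5)                       # give the server a moment to start

# Client side: the workstation application calls the remote procedure as if it
# were local; marshalling/unmarshalling is handled by the RPC layer.
proxy = ServerProxy("http://localhost:8000", allow_none=True)
print(proxy.translate_results([1.0, 2.5, 4.0], 10.0))
```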

  10. Distributed Computing Over New Technology Networks: Quality of Service for CORBA Objects.

    DTIC Science & Technology

    1996-10-01

    This was accomplished in four (largely sequential) steps: (1) Study the impact of new technology networks on distributed computing environments ... distributed programs such as C3 or collaborative planning applications; (2) Study how Distributed Computing Environments (DCEs) should support QoS in

  11. Design & implementation of distributed spatial computing node based on WPS

    NASA Astrophysics Data System (ADS)

    Liu, Liping; Li, Guoqing; Xie, Jibo

    2014-03-01

    Currently, research on SIG (Spatial Information Grid) technology mostly emphasizes spatial data sharing in the grid environment, while the importance of spatial computing resources is ignored. In order to implement the sharing and cooperation of spatial computing resources in the grid environment, this paper systematically studies the key technologies for constructing a Spatial Computing Node based on the WPS (Web Processing Service) specification by the OGC (Open Geospatial Consortium). A framework for the Spatial Computing Node is then designed according to the features of spatial computing resources. Finally, a prototype Spatial Computing Node is implemented and the relevant verification work in this environment is completed.
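
    A sketch of how a client might invoke a process published by such a WPS-based Spatial Computing Node, using the OGC WPS 1.0.0 key-value-pair Execute request. The endpoint URL, process identifier and input names below are hypothetical placeholders.

```python
# Issue a WPS 1.0.0 Execute request against a hypothetical spatial computing node.
import requests

WPS_ENDPOINT = "http://node.example.org/wps"   # hypothetical endpoint

def wps_execute(identifier, inputs):
    """inputs: dict of {input_name: value}, encoded as WPS DataInputs."""
    datainputs = ";".join(f"{k}={v}" for k, v in inputs.items())
    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": identifier,
        "DataInputs": datainputs,
    }
    resp = requests.get(WPS_ENDPOINT, params=params, timeout=60)
    resp.raise_for_status()
    return resp.text          # ExecuteResponse XML, parsed by the caller

if __name__ == "__main__":
    xml = wps_execute("ndvi", {"red_band": "scene_B4.tif",
                               "nir_band": "scene_B5.tif"})
    print(xml[:200])
```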

  12. Distribution of radon concentrations in child-care facilities in South Korea.

    PubMed

    Lee, Cheol-Min; Kwon, Myung-Hee; Kang, Dae-Ryong; Park, Tae-Hyun; Park, Si-Hyun; Kwak, Jung-Eun

    2017-02-01

    This study was conducted to provide fundamental data on the distribution of radon concentrations in child day-care facilities in South Korea and to help establish radon mitigation strategies. For this study, 230 child-care centers were randomly chosen from all child-care centers nationwide, and alpha track detectors were used to examine cumulative radon exposure concentrations from January to May 2015. The mean radon concentration measured in Korean child-care centers is approximately 52 Bq m(-3), about one-third of the upper limit of 148 Bq m(-3), which is recommended by South Korea's Indoor Air Quality Control in Public Use Facilities, etc. Act and the U.S. Environmental Protection Agency (EPA). Furthermore, this concentration is about 50% lower than 102 Bq m(-3), which is the measured concentration of radon in houses nationwide from December 2013 to February 2014. Our results indicate that the amount of ventilation, as a major determining factor for indoor radon concentrations, is strongly correlated with the fluctuation of indoor radon concentrations in Korean child-care centers. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Impact of distributed energy resources on the reliability of a critical telecommunications facility.

    SciTech Connect

    Robinson, David; Zuffranieri, Jason V.; Atcitty, Christopher B.; Arent, Douglas

    2006-03-01

    This report documents a probabilistic risk assessment of an existing power supply system at a large telecommunications office. The focus is on characterizing the increase in the reliability of power supply through the use of two alternative power configurations. Telecommunications has been identified by the Department of Homeland Security as a critical infrastructure of the United States. Failures in the power systems supporting major telecommunications service nodes are a main contributor to major telecommunications outages. A logical approach to improve the robustness of telecommunication facilities would be to increase the depth and breadth of technologies available to restore power in the face of power outages. Distributed energy resources such as fuel cells and gas turbines could provide one more onsite electric power source for backup power if batteries and diesel generators fail. The analysis is based on a hierarchical Bayesian approach and focuses on the failure probability associated with each of three possible facility configurations, along with assessment of the uncertainty or confidence level in the probability of failure. A risk-based characterization of the final best configuration is presented.
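
    A simplified stand-in for the kind of comparison the report describes: the probability that all backup sources fail on demand, for two configurations, with uncertainty propagated through Beta-distributed component failure probabilities (a loose echo of the hierarchical Bayesian treatment). All component numbers are invented, and independence between sources is assumed.

```python
# Monte Carlo comparison of loss-of-backup-power probability for two configurations.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Beta distributions on per-demand failure probability (hypothetical parameters).
battery  = rng.beta(2, 98, N)      # ~2% mean failure on demand
diesel   = rng.beta(5, 95, N)      # ~5%
fuelcell = rng.beta(3, 97, N)      # ~3% (added distributed energy resource)

baseline = battery * diesel                   # both must fail (assumed independent)
with_der = battery * diesel * fuelcell        # all three must fail

for name, p in (("battery+diesel", baseline),
                ("battery+diesel+fuel cell", with_der)):
    lo, med, hi = np.percentile(p, [5, 50, 95])
    print(f"{name:26s} median failure prob {med:.2e}  "
          f"90% interval [{lo:.2e}, {hi:.2e}]")
```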

  14. Radon gas distribution in natural gas processing facilities and workplace air environment.

    PubMed

    Al-Masri, M S; Shwiekani, R

    2008-04-01

    Evaluation was made of the distribution of radon gas and radiation exposure rates in the four main natural gas treatment facilities in Syria. The results showed that radiation exposure rates at contact of all equipment were within the natural levels (0.09-0.1 microSvh(-1)) except for the reflex pumps where a dose rate value of 3 microSvh(-1) was recorded. Radon concentrations in Syrian natural gas varied between 15.4 Bq m(-3) and 1141 Bq m(-3); natural gas associated with oil production was found to contain higher concentrations than the non-associated natural gas. In addition, radon concentrations were higher in the central processing facilities than the wellheads; these high levels are due to pressurizing and concentrating processes that enhance radon gas and its decay products. Moreover, the lowest 222Rn concentration was in the natural gas fraction used for producing sulfur; a value of 80 Bq m(-3) was observed. On the other hand, maximum radon gas and its decay product concentrations in workplace air environments were found to be relatively high in the gas analysis laboratories; a value of 458 Bq m(-3) was observed. However, all reported levels in the workplaces in the four main stations were below the action level set by IAEA for chronic exposure situations involving radon, which is 1000 Bq m(-3).

  15. First thoughts on KM3NeT on-shore data storage and distribution facilities

    NASA Astrophysics Data System (ADS)

    Stavrianakou, M.

    2009-04-01

    The KM3NeT project studies the design of an underwater neutrino telescope combined with a multidisciplinary underwater observatory in the Mediterranean. Data from the telescope will arrive on shore where they will be processed in real time at a data filter farm and subsequently stored and backed up at a central computing centre located on site. From there we propose a system whereby the data are distributed to participating institutes equipped with large computing centres for further processing, duplication and distribution to smaller centres. The data taking site hosts the central data management services, including the database servers, bookkeeping systems and file catalogue services, the data access and file transfer systems, data quality monitoring systems and transaction monitoring daemons and is equipped with fast network connection to all large computing sites. Data and service challenges in the course of the preparatory phase must be anticipated in order to test the hardware and software components in terms of robustness and performance, scalability as well as modularity and replaceability, given the rapid evolution of the market both in terms of CPU performance and storage capacity. The role of the GRID would also have to be evaluated and the appropriate implementation selected on time for an eventual test in the context of a data challenge before the start of data taking.

  16. Managing to Change: The Wharton School's Distributed Staff Model for Computing Support.

    ERIC Educational Resources Information Center

    Eleey, Michael

    1993-01-01

    The University of Pennsylvania's Wharton School introduced a "distributed" organization for managing computing support services. The hybrid structure combined elements of centralized computing and departmental computing by placing computing personnel in the departments, under central management. The program covers a wide range of support…

  17. Navier-Stokes Simulation of Air-Conditioning Facility of a Large Modern Computer Room

    NASA Technical Reports Server (NTRS)

    2005-01-01

    NASA recently assembled one of the world's fastest operational supercomputers to meet the agency's new high-performance computing needs. This large-scale system, named Columbia, consists of 20 interconnected SGI Altix 512-processor systems, for a total of 10,240 Intel Itanium-2 processors. High-fidelity CFD simulations were performed for the NASA Advanced Supercomputing (NAS) computer room at Ames Research Center. The purpose of the simulations was to assess the adequacy of the existing air handling and conditioning system and make recommendations for changes in the design of the system if needed. The simulations were performed with NASA's OVERFLOW-2 CFD code, which utilizes overset structured grids. A new set of boundary conditions was developed and added to the flow solver for modeling the room's air-conditioning and proper cooling of the equipment. Boundary condition parameters for the flow solver are based on cooler CFM (flow rate) ratings and some reasonable assumptions of flow and heat transfer data for the floor and central processing units (CPUs). The geometry modeling from blueprints and the grid generation were handled by the NASA Ames software package Chimera Grid Tools (CGT). This geometric model was developed as a CGT-scripted template, which can be easily modified to accommodate any changes in shape and size of the room, locations and dimensions of the CPU racks, disk racks, coolers, power distribution units, and mass-storage system. The compute nodes are grouped in pairs of racks with an aisle in the middle. High-speed connection cables connect the racks with overhead cable trays. The cool air from the cooling units is pumped into the computer room from a sub-floor through perforated floor tiles. The CPU cooling fans draw cool air from the floor tiles, which run along the outside length of each rack, and eject warm air into the center aisle between the racks. This warm air is eventually drawn into the cooling units located near the walls of the room. One

  18. Overview of a system for the computer-assisted operation of a small animal inhalation facility.

    PubMed Central

    Van Stee, E W; Moorman, M P

    1984-01-01

    Automatic monitoring of the concentration of test gases and other environmental variables in small animal inhalation exposure chambers, coupled with computing capability and feedback control of the concentration of test gas, allows almost fully automatic operation of the chambers with a minimal amount of human intervention. Time-varying exposure profiles may be generated repeatedly with great accuracy, thus allowing a more realistic simulation of real-life exposures than is approached by operating chambers manually at ostensibly constant concentrations of test gases. Carefully conducted, pre-experimental calibration procedures are performed, and daily calibration checks allow statistical control of daily chamber operation and longer term quality control. At the conclusion of each experiment the investigator is supplied with records that document chamber conditions that have been monitored throughout the entire experiment, with estimates of the accuracy that was achieved in creating the specified exposure profile. A purpose of this report is to help to bridge the gap between the practicing inhalation toxicologist and the engineer in order to encourage their cooperation and mutual understanding of the technical problems involved in developing computer-assistance packages for inhalation facilities. PMID:6734564

  19. Chemical fate and transport of atrazine in soil gravel materials at agrichemical distribution facilities

    USGS Publications Warehouse

    Roy, W.R.; Krapac, I.G.; Chou, S.-F.J.

    1999-01-01

    The gravel commonly used to cover parking lots and roadways at retail agrichemical facilities may contain relatively large concentrations of pesticides that resulted from past management problems. These pesticides may threaten groundwater quality. Previous studies, however, suggested that the pesticides had not moved from the gravel in several sample profiles. Excavations at a closed facility revealed tremendous variability in pesticide distribution within the site. Pesticides were present below the gravel in two profiles, but the mechanism(s) for their movement were not clear. The objective of this study was to investigate how the physical and chemical properties of the gravel influence the environmental fate of atrazine. All of the gravel samples collected and characterized contained atrazine and sufficient organic C to adsorb significant amounts of atrazine, thus retarding its movement through the gravel. Laboratory column leaching experiments, however, suggested that much of the atrazine should leach from the gravel within a year or two. A field-scale test plot was constructed to study how atrazine moves through the gravel under controlled conditions. Atrazine was "spilled" in the test plot. Atrazine moved from the gravel both vertically and horizontally. It appears that formulated product spilled on gravel will leach. A single discrete spill can give rise to phantom spills whose occurrence and distribution are not related to any specific pesticide-management practice. The apparent lack of atrazine leaching from gravel appeared to be a transient phenomenon and/or the result of sampling limitations in previous studies. The contaminated gravel clearly poses a risk to groundwater quality.

  1. Enabling Extreme Scale Earth Science Applications at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, V. G.; Mozdzynski, G.; Hamrud, M.; Deconinck, W.; Smith, L.; Hack, J.

    2014-12-01

    The Oak Ridge Leadership Computing Facility (OLCF), established at the Oak Ridge National Laboratory (ORNL) under the auspices of the U.S. Department of Energy (DOE), welcomes investigators from universities, government agencies, national laboratories and industry who are prepared to perform breakthrough research across a broad domain of scientific disciplines, including earth and space sciences. Titan, the OLCF flagship system, is currently listed as #2 in the Top500 list of supercomputers in the world, and is the largest available for open science. The computational resources are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, sponsored by the U.S. DOE Office of Science. In 2014, over 2.25 billion core hours on Titan were awarded via INCITE projects, including 14% of the allocation toward earth sciences. The INCITE competition is also open to research scientists based outside the USA. In fact, international research projects account for 12% of the INCITE awards in 2014. The INCITE scientific review panel also includes 20% participation from international experts. Recent accomplishments in earth sciences at OLCF include the world's first continuous simulation of 21,000 years of earth's climate history (2009) and an unprecedented simulation of a magnitude 8 earthquake over 125 sq. miles. One of the ongoing international projects involves scaling the ECMWF Integrated Forecasting System (IFS) model to over 200K cores of Titan. ECMWF is a partner in the EU-funded Collaborative Research into Exascale Systemware, Tools and Applications (CRESTA) project. The significance of the research carried out within this project is the demonstration of techniques required to scale current-generation petascale-capable simulation codes towards the performance levels required for running on future exascale systems. One of the techniques pursued by ECMWF is to use Fortran 2008 coarrays to overlap computations and communications and
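
    One of the techniques named above, overlapping computation with communication, is shown below in an analogous non-blocking MPI form (mpi4py) rather than the Fortran 2008 coarrays actually used in IFS; the array sizes and halo pattern are illustrative only.

```python
# halo_overlap.py -- overlap interior computation with a halo exchange.
# Run with something like: mpiexec -n 2 python halo_overlap.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

field = np.full(1_000_000, float(rank))
halo_in = np.empty(1)

# 1) Start the halo exchange without waiting for it to complete.
reqs = [comm.Isend(field[-1:], dest=right, tag=0),
        comm.Irecv(halo_in, source=left, tag=0)]

# 2) Do interior work that does not need the halo while the message is in flight.
interior_sum = field[1:-1].sum()

# 3) Complete the exchange, then do the part that depends on the halo.
MPI.Request.Waitall(reqs)
boundary = field[0] + halo_in[0]
print(f"rank {rank}: interior {interior_sum:.0f}, boundary term {boundary:.0f}")
```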

  2. Autonomous management of distributed information systems using evolutionary computation techniques

    NASA Astrophysics Data System (ADS)

    Oates, Martin J.

    1999-03-01

    As the size of typical industrial-strength information systems continues to rise, particularly in the arena of Internet-based management information systems and multimedia servers, the issue of managing data distribution over clusters or 'farms' to overcome performance and scalability issues is becoming of paramount importance. Further, where access is global, this can cause points of geographically localized load contention to 'follow the sun' during the day. Traditional site mirroring is not overly effective in addressing this contention, and so a more dynamic approach is being investigated to tackle load balancing. The general objective is to manage a self-adapting, distributed database so as to reliably and consistently provide near-optimal performance as perceived by client applications. Such a management system must ultimately be capable of operating over a range of time-varying usage profiles and fault scenarios, incorporating considerations for communications network delays, multiple updates and maintenance operations. It must also be shown to be capable of being scaled in a practical fashion to ever larger networks and databases. Two key components of such an automated system are an optimiser capable of efficiently finding new configuration options, and a suitable model of the system capable of accurately reflecting the performance (or any other required quality of service metric) of the real-world system. As conditions change in the real-world system, these are fed into the model. The optimiser is then run to find new configurations which are tested in the model prior to implementation in the real world. The model therefore forms an evaluation function which the optimiser utilises to direct its search. Whilst it has already been shown that Genetic Algorithms can provide good solutions to this problem, there are a number of issues associated with this approach. In particular, for industrial-strength applications, it must be shown that the GA employed
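
    A minimal genetic-algorithm sketch of the configuration search described above: a candidate solution assigns each data partition to a server, and fitness is the peak server load under a given access profile (standing in for the evaluation model). The load model and GA parameters are illustrative assumptions only.

```python
# Minimal GA for assigning data partitions to servers to balance load.
import random

rng = random.Random(7)
N_PART, N_SRV = 40, 5
access_rate = [rng.uniform(1, 10) for _ in range(N_PART)]   # client demand profile

def fitness(assign):                       # peak load across servers (minimise)
    load = [0.0] * N_SRV
    for part, srv in enumerate(assign):
        load[srv] += access_rate[part]
    return max(load)

def mutate(assign, p=0.05):
    return [rng.randrange(N_SRV) if rng.random() < p else s for s in assign]

def crossover(a, b):
    cut = rng.randrange(1, N_PART)
    return a[:cut] + b[cut:]

pop = [[rng.randrange(N_SRV) for _ in range(N_PART)] for _ in range(60)]
for gen in range(200):
    pop.sort(key=fitness)
    parents = pop[:20]                                  # truncation selection
    pop = parents + [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                     for _ in range(40)]
best = min(pop, key=fitness)
print("ideal balanced load:", round(sum(access_rate) / N_SRV, 1),
      " GA best peak load:", round(fitness(best), 1))
```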

  3. A techno-economic analysis of using mobile distributed pyrolysis facilities to deliver a forest residue resource.

    PubMed

    Brown, Duncan; Rowe, Andrew; Wild, Peter

    2013-12-01

    Distributed mobile conversion facilities using either fast pyrolysis or torrefaction processes can be used to convert forest residues to more energy dense substances (bio-oil, bio-slurry or torrefied wood) that can be transported as feedstock for bio-fuel facilities. Results show that the levelised delivered cost of a forest residue resource using mobile facility networks can be lower than using conventional woodchip delivery methods under appropriate conditions. Torrefied wood is the lowest cost pathway of delivering a forest residue resource when using mobile facilities. Cost savings occur against woodchip delivery for annual forest residue harvests above 2.5 million m(3) or when transport distances greater than 300 km are required. Important parameters that influence levelised delivered costs are transport distances (forest residue spatial density), haul cost factors, and initial moisture content of forest residues. Relocating mobile facilities can be optimised for lowest cost delivery as transport distances of raw biomass are reduced.

  4. Sociospatial distribution of access to facilities for moderate and vigorous intensity physical activity in Scotland by different modes of transport

    PubMed Central

    2012-01-01

    Background People living in neighbourhoods of lower socioeconomic status have been shown to have higher rates of obesity and a lower likelihood of meeting physical activity recommendations than their more affluent counterparts. This study examines the sociospatial distribution of access to facilities for moderate or vigorous intensity physical activity in Scotland and whether such access differs by the mode of transport available and by Urban Rural Classification. Methods A database of all fixed physical activity facilities was obtained from the national agency for sport in Scotland. Facilities were categorised into light, moderate and vigorous intensity activity groupings before being mapped. Transport networks were created to assess the number of each type of facility accessible from the population weighted centroid of each small area in Scotland on foot, by bicycle, by car and by bus. Multilevel modelling was used to investigate the distribution of the number of accessible facilities by small area deprivation within urban, small town and rural areas separately, adjusting for population size and local authority. Results Prior to adjustment for Urban Rural Classification and local authority, the median number of accessible facilities for moderate or vigorous intensity activity increased with increasing deprivation from the most affluent or second most affluent quintile to the most deprived for all modes of transport. However, after adjustment, the modelling results suggest that those in more affluent areas have significantly higher access to moderate and vigorous intensity facilities by car than those living in more deprived areas. Conclusions The sociospatial distributions of access to facilities for both moderate intensity and vigorous intensity physical activity were similar. However, the results suggest that those living in the most affluent neighbourhoods have poorer access to facilities of either type that can be reached on foot, by bicycle or by bus than

  5. Impact of Nitrification on the Formation of N-Nitrosamines and Halogenated Disinfection Byproducts within Distribution System Storage Facilities.

    PubMed

    Zeng, Teng; Mitch, William A

    2016-03-15

    Distribution system storage facilities are a critical, yet often overlooked, component of the urban water infrastructure. This study showed elevated concentrations of N-nitrosodimethylamine (NDMA), total N-nitrosamines (TONO), regulated trihalomethanes (THMs) and haloacetic acids (HAAs), 1,1-dichloropropanone (1,1-DCP), trichloroacetaldehyde (TCAL), haloacetonitriles (HANs), and haloacetamides (HAMs) in waters with ongoing nitrification as compared to non-nitrifying waters in storage facilities within five different chloraminated drinking water distribution systems. The concentrations of NDMA, TONO, HANs, and HAMs in the nitrifying waters further increased upon application of simulated distribution system chloramination. The addition of a nitrifying biofilm sample collected from a nitrifying facility to its non-nitrifying influent water led to increases in N-nitrosamine and halogenated DBP formation, suggesting the release of precursors from nitrifying biofilms. Periodic treatment of two nitrifying facilities with breakpoint chlorination (BPC) temporarily suppressed nitrification and reduced precursor levels for N-nitrosamines, HANs, and HAMs, as reflected by lower concentrations of these DBPs measured after re-establishment of a chloramine residual within the facilities than prior to the BPC treatment. However, BPC promoted the formation of halogenated DBPs while a free chlorine residual was maintained. Strategies that minimize application of free chlorine while preventing nitrification are needed to control DBP precursor release in storage facilities.

  6. Access Control for Agent-based Computing: A Distributed Approach.

    ERIC Educational Resources Information Center

    Antonopoulos, Nick; Koukoumpetsos, Kyriakos; Shafarenko, Alex

    2001-01-01

    Discusses the mobile software agent paradigm that provides a foundation for the development of high performance distributed applications and presents a simple, distributed access control architecture based on the concept of distributed, active authorization entities (lock cells), any combination of which can be referenced by an agent to provide…
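
    The sketch below is a minimal, hypothetical rendering of the "lock cell" idea as summarised in the abstract: an agent's request succeeds only if every authorization entity it references agrees. The class and example policy are illustrative, not the authors' API.

```python
# Minimal sketch of distributed, active authorization entities ("lock cells"),
# any combination of which an agent can reference. Illustrative only.
class LockCell:
    """An active authorization entity that votes on an agent's request."""

    def __init__(self, name, granted):
        self.name = name
        self.granted = set(granted)        # (agent_id, action) pairs this cell allows

    def authorize(self, agent_id, action):
        return (agent_id, action) in self.granted


def agent_may(agent_id, action, referenced_cells):
    # The request succeeds only if every referenced lock cell agrees, so the
    # policy stays distributed across cells rather than in one central monitor.
    return all(cell.authorize(agent_id, action) for cell in referenced_cells)


cells = [LockCell("host-A", {("agent-1", "migrate"), ("agent-2", "migrate")}),
         LockCell("service-db", {("agent-1", "migrate")})]
print(agent_may("agent-1", "migrate", cells))   # True
print(agent_may("agent-2", "migrate", cells))   # False
```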

  7. DISTRIBUTION OF LEGIONELLA PNEUMOPHILA SEROGROUPS ISOLATED FROM WATER SYSTEMS OF PUBLIC FACILITIES IN BUSAN, SOUTH KOREA.

    PubMed

    Hwang, In-Yeong; Park, Eun-Hee; Park, Yon-Koung; Park, Sun-Hee; Sung, Gyung-Hye; Park, Hye-Young; Lee, Young-Choon

    2016-05-01

    Legionella pneumophila is the major cause of legionellosis worldwide. The distribution of L. pneumophila was investigated in water systems of public facilities in Busan, South Korea during 2007 and 2013-2014. L. pneumophila was isolated from 8.3% of 3,055 samples, of which the highest isolation rate (49%) was from ships and the lowest (4%) from fountains. Serogroups of L. pneumophila isolated in 2007 were distributed among serogroups (sgs) 1-7 with the exception of sg 4, while isolates from 2013 and 2014 included 11 sgs (1, 2, 3, 4, 5, 6, 7, 8, 12, 13, 15). L. pneumophila sg 1 was predominant among isolates from fountains (75%), hotels (60%), buildings (44%), hospitals (38%), and public baths (37%), whereas sg 3 and sg 7 were the most prevalent from ships (46%) and factories (40%), respectively. The predominant serogroup of L. pneumophila isolates from hot and cooling tower water was sg 1 (35% and 46%, respectively), while that from cold water was sg 3 (29%). These results should be useful for epidemiological surveys to identify sources of outbreaks of legionellosis in Busan, South Korea.

  8. Sentinel-1 Data System at the Alaska Satellite Facility Distributed Active Archive Center

    NASA Astrophysics Data System (ADS)

    Wolf, V. G.

    2014-12-01

    The Alaska Satellite Facility Distributed Active Archive Center (ASF DAAC) has a long history of supporting international collaborations between NASA and foreign flight agencies to promote access to Synthetic Aperture Radar (SAR) data for US science research. Based on the agreement between the US and the EC, data from the Sentinel missions will be distributed by NASA through archives that mirror those established by ESA. The ASF DAAC is the designated archive and distributor for Sentinel-1 data. The data will be copied from the ESA archive to a rolling archive at the NASA Goddard center, and then pushed to a landing area at the ASF DAAC. The system at ASF DAAC will take the files as they arrive and put them through an ingest process. Ingest will populate the database with the information required to enable search and download of the data through Vertex, the ASF DAAC user interface. Metadata will be pushed to the NASA Common Metadata Repository, enabling data discovery through clients that utilize the repository. Visual metadata will be pushed to the NASA GIBS system for visualization through clients linked to that system. Data files will be archived in the DataDirect Networks (DDN) device that is the primary storage device for the ASF DAAC. A backup copy of the data will be placed in a second DDN device that serves as the disaster recovery solution for the ASF DAAC.
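
    The ingest flow described above (landing area, ingest, catalogue and metadata pushes, primary and backup archive copies) can be summarised in a short Python sketch. Every function below is a stub standing in for ASF DAAC software; no real endpoints or interfaces are implied.

```python
# Hedged sketch of the described ingest flow; all functions are placeholders.
from pathlib import Path


def extract_metadata(granule: Path) -> dict:
    # A real ingest would parse the product manifest; here we only stub it.
    size = granule.stat().st_size if granule.exists() else 0
    return {"name": granule.name, "size": size}


def register(record: dict, system: str) -> None:
    print(f"registered {record['name']} with {system}")


def archive(granule: Path, target: str) -> None:
    print(f"copied {granule.name} to {target}")


def ingest_granule(granule: Path) -> None:
    record = extract_metadata(granule)
    register(record, "Vertex search database")        # search & download UI
    register(record, "Common Metadata Repository")    # discovery clients
    register(record, "GIBS")                          # browse/visualization
    archive(granule, "primary DDN")                   # primary storage device
    archive(granule, "backup DDN")                    # disaster-recovery copy


ingest_granule(Path("S1A_IW_GRDH_example.zip"))       # hypothetical file name
```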

  9. Scientific workflow and support for high resolution global climate modeling at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, V.; Mayer, B.; Wang, F.; Hack, J.; McKenna, D.; Hartman-Baker, R.

    2012-04-01

    The Oak Ridge Leadership Computing Facility (OLCF) facilitates the execution of computational experiments that require tens of millions of CPU hours (typically using thousands of processors simultaneously) while generating hundreds of terabytes of data. A set of ultra high resolution climate experiments in progress, using the Community Earth System Model (CESM), will produce over 35,000 files, ranging in sizes from 21 MB to 110 GB each. The execution of the experiments will require nearly 70 Million CPU hours on the Jaguar and Titan supercomputers at OLCF. The total volume of the output from these climate modeling experiments will be in excess of 300 TB. This model output must then be archived, analyzed, distributed to the project partners in a timely manner, and also made available more broadly. Meeting this challenge would require efficient movement of the data, staging the simulation output to a large and fast file system that provides high volume access to other computational systems used to analyze the data and synthesize results. This file system also needs to be accessible via high speed networks to an archival system that can provide long term reliable storage. Ideally this archival system is itself directly available to other systems that can be used to host services making the data and analysis available to the participants in the distributed research project and to the broader climate community. The various resources available at the OLCF now support this workflow. The available systems include the new Jaguar Cray XK6 2.63 petaflops (estimated) supercomputer, the 10 PB Spider center-wide parallel file system, the Lens/EVEREST analysis and visualization system, the HPSS archival storage system, the Earth System Grid (ESG), and the ORNL Climate Data Server (CDS). The ESG features federated services, search & discovery, extensive data handling capabilities, deep storage access, and Live Access Server (LAS) integration. The scientific workflow enabled on

  10. Playable Serious Games for Studying and Programming Computational STEM and Informatics Applications of Distributed and Parallel Computer Architectures

    ERIC Educational Resources Information Center

    Amenyo, John-Thones

    2012-01-01

    Carefully engineered playable games can serve as vehicles for students and practitioners to learn and explore the programming of advanced computer architectures to execute applications, such as high performance computing (HPC) and complex, inter-networked, distributed systems. The article presents families of playable games that are grounded in…

  12. Computational investigation of the discharge coefficient of bellmouth flow meters in engine test facilities

    NASA Astrophysics Data System (ADS)

    Sebourn, Charles Lynn

    2002-11-01

    In this thesis computation of the discharge coefficient of bellmouth flow meters installed in engine test facilities is presented. The discharge coefficient is a critical parameter for accurately calculating flow rate in any flow meter which operates by means of creating a pressure differential. Engine airflow is a critical performance parameter and therefore, it is necessary for engine test facilities to accurately measure airflow. In this report the author investigates the use of computational fluid dynamics using finite difference methods to calculate the flow in bellmouth flow meters and hence the discharge coefficient at any measurement station desired. Experimental boundary layer and core flow data was used to verify the capability of the WIND code to calculate the discharge coefficient accurately. Good results were obtained for Reynolds numbers equal to or greater than about three million which is the primary range of interest. After verifying the WIND code performance, results were calculated for a range of Reynolds numbers and Mach numbers. Also the variation in discharge coefficient as a function of measurement location was examined. It is demonstrated that by picking the proper location for pressure measurement, sensitivity to measurement location can be minimized. Also of interest was the effect of bellmouth geometry. Calculations were performed to investigate the effect of duct to bellmouth diameter ratio and the eccentricity of the bellmouth contraction. In general the effects of the beta ratio were seen to be quite small. For the eccentricity, the variation in discharge coefficient was as high as several percent for axial locations less than half a diameter downstream from the throat. The second portion of the thesis examined the effect of a turbofan engine stationed just downstream of the bellmouth flow meter. The study approximated this effect by examining a single fan stage installed in the duct. This calculation was performed by making use of a
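
    For context, the sketch below shows how a discharge coefficient enters an airflow calculation for a pressure-differential meter: the ideal (isentropic) mass flow computed from measured total and static pressure is scaled by Cd. The relations are standard compressible-flow formulas; the numerical values, including the Cd itself, are placeholders rather than results from the thesis.

```python
# Sketch: corrected mass flow = Cd * ideal isentropic mass flow.
# Values below are illustrative, not from the thesis.
import math

GAMMA, R = 1.4, 287.05   # air


def ideal_mass_flow(p_total, p_static, T_total, area):
    # Mach number from the isentropic pressure ratio at the measurement station
    M = math.sqrt(2.0 / (GAMMA - 1.0) *
                  ((p_total / p_static) ** ((GAMMA - 1.0) / GAMMA) - 1.0))
    T = T_total / (1.0 + 0.5 * (GAMMA - 1.0) * M ** 2)   # static temperature
    rho = p_static / (R * T)
    velocity = M * math.sqrt(GAMMA * R * T)
    return rho * velocity * area


# Cd would come from CFD results such as those the thesis describes;
# here it is a placeholder value.
Cd = 0.985
m_dot = Cd * ideal_mass_flow(p_total=101325.0, p_static=95000.0,
                             T_total=288.15, area=1.2)
print(f"corrected mass flow: {m_dot:.1f} kg/s")
```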

  13. Research in Distributed Personal Computer-Based Information Systems. Volume 2

    DTIC Science & Technology

    1988-08-01

    monitored, is not addressed. Although the IPC monitoring facility has been tailored to support the needs of the Diamond and Cronus distributed systems... Thomas, E. Burke and S. Woinick, Cronus, A Distributed Operating System: Preliminary System/Subsystem Specification, UON Report 5260, February 1983

  14. Advanced Technology Airfoil Research, volume 1, part 1. [conference on development of computational codes and test facilities

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A comprehensive review of all NASA airfoil research, conducted both in-house and under grant and contract, as well as a broad spectrum of airfoil research outside of NASA is presented. Emphasis is placed on the development of computational aerodynamic codes for airfoil analysis and design, the development of experimental facilities and test techniques, and all types of airfoil applications.

  15. Medication errors in residential aged care facilities: a distributed cognition analysis of the information exchange process.

    PubMed

    Tariq, Amina; Georgiou, Andrew; Westbrook, Johanna

    2013-05-01

    Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. Existing literature offers limited insight about the gaps in the existing information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution that underlies RACF medication ordering and delivery to identify gaps in medication-related information exchange which lead to medication errors in RACFs. The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic field work over a period of five months (May-September 2011). Triangulated analysis of data primarily focused on examining the transformation and exchange of information between different media across the process. The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions which potentially contribute to the occurrence of medication errors namely: (1) design of medication charts which complicates order processing and record keeping (2) lack of coordination mechanisms between participants which results in misalignment of local practices (3) reliance on restricted communication bandwidth channels mainly telephone and fax which complicates the information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs. Application of the theoretical lens of distributed cognition can assist in enhancing our understanding of medication errors in RACFs through identification of gaps in information exchange. Understanding

  16. Genetic Algorithms in a Distributed Computing Environment Using PVM

    NASA Astrophysics Data System (ADS)

    Cronje, G. A.; Steeb, W.-H.

    The Parallel Virtual Machine (PVM) is a software system that enables a collection of heterogeneous computer systems to be used as a coherent and flexible concurrent computation resource. We show that genetic algorithms can be implemented using a Parallel Virtual Machine and C++. Problems with constraints are also discussed.

  17. Genetic algorithms in a distributed computing environment using PVM

    SciTech Connect

    Cronje, G.A.; Steeb, W.H.

    1997-04-01

    The Parallel Virtual Machine (PVM) is a software system that enables a collection of heterogeneous computer systems to be used as a coherent and flexible concurrent computation resource. We show that genetic algorithms can be implemented using a Parallel Virtual Machine and C++. Problems with constraints are also discussed.

  18. Integration of distributed computing into the drug discovery process.

    PubMed

    von Korff, Modest; Rufener, Christian; Stritt, Manuel; Freyss, Joel; Bär, Roman; Sander, Thomas

    2011-02-01

    Grid computing offers an opportunity to gain massive computing power at low cost. We give a short introduction to the drug discovery process and exemplify the use of grid computing for image processing, docking and 3D pharmacophore descriptor calculations. The principle of a grid and its architecture are briefly explained. More emphasis is laid on the issues related to a company-wide grid installation and on embedding the grid into the research process. The future of grid computing in drug discovery is discussed in the expert opinion section. Most needed, besides reliable algorithms to predict compound properties, is embedding the grid seamlessly into the discovery process. User-friendly access to powerful algorithms without restrictions such as a limited number of licenses has to be the goal of grid computing in drug discovery.

  19. Distributed network, wireless and cloud computing enabled 3-D ultrasound; a new medical technology paradigm.

    PubMed

    Meir, Arie; Rubinsky, Boris

    2009-11-19

    Medical technologies are indispensable to modern medicine. However, they have become exceedingly expensive and complex and are not available to the economically disadvantaged majority of the world population in underdeveloped as well as developed parts of the world. For example, according to the World Health Organization about two thirds of the world population does not have access to medical imaging. In this paper we introduce a new medical technology paradigm centered on wireless technology and cloud computing that was designed to overcome the problems of increasing health technology costs. We demonstrate the value of the concept with an example; the design of a wireless, distributed network and central (cloud) computing enabled three-dimensional (3-D) ultrasound system. Specifically, we demonstrate the feasibility of producing a 3-D high end ultrasound scan at a central computing facility using the raw data acquired at the remote patient site with an inexpensive low end ultrasound transducer designed for 2-D, through a mobile device and wireless connection link between them. Producing high-end 3D ultrasound images with simple low-end transducers reduces the cost of imaging by orders of magnitude. It also removes the requirement of having a highly trained imaging expert at the patient site, since the need for hand-eye coordination and the ability to reconstruct a 3-D mental image from 2-D scans, which is a necessity for high quality ultrasound imaging, is eliminated. This could enable relatively untrained medical workers in developing nations to administer imaging and a more accurate diagnosis, effectively saving the lives of people.

  20. Computer-aided design drafting/manufacturing (CADD/M) facility preparation

    SciTech Connect

    Norton, F.J.

    1980-09-23

    Computer-Aided Design, Drafting and Manufacturing (CADD/M) equipment requires careful facilities preparation before installation takes place. This paper presents what a company should consider to ensure a proper installation. This includes consideration of working conditions. To get the most out of the system, the operators must be provided with a relaxed, comfortable environment, free from noise and other distractions. Such things as temperature requirements, lighting, power, security and fire protection are discussed. Also, future expansion needs are considered so that major construction will not be required for future years. Advanced planning in these areas is needed to ensure successful implementation of a CADD/M system. This will lead to considerable cost savings, and in the long run, improve the scheduling for an entire project, from initial design to final production. This careful preparation will minimize unplanned events and problem areas. These are ambitious goals but easily realized if a logical and rational plan is adopted in the same manner as that used in a typical development program.

  1. Assess and improve the sustainability of water treatment facility using Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Tejada-Martinez, Andres; Lei, Hongxia; Zhang, Qiong

    2016-11-01

    Fluid-flow problems in the water treatment industry are often simplified or omitted, since the focus is usually on the chemical processes only. However, hydraulics also plays an important role in determining effluent water quality. Recent studies have demonstrated that computational fluid dynamics (CFD) has the ability to simulate the physical and chemical processes in reactive flows in water treatment facilities, such as in chlorine and ozone disinfection tanks. This study presents the results from CFD simulations of reactive flow in an existing full-scale ozone disinfection tank and in potential designs. Through analysis of the simulation results, we found that the baffling factor and CT10 are not optimal indicators of disinfection performance. We also found that the relationship between effluent CT (the product of disinfectant concentration and contact time) obtained from CT transport simulation and the baffling factor depends on the location of ozone release. In addition, we analyzed the environmental and economic impacts of ozone disinfection tank designs and developed a composite indicator to quantify the sustainability of an ozone disinfection tank in the technological, environmental and economic dimensions.
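
    The two conventional indicators the study questions, the baffling factor and CT10, can be computed from tracer data as sketched below under stated assumptions; the tank volume, flow rate, tracer curve and ozone residual are all made-up illustrative numbers.

```python
# Hedged sketch of two conventional disinfection indicators: the baffling
# factor (t10 divided by the theoretical detention time V/Q) and CT10
# (residual concentration times t10). Tracer data are synthetic.
import numpy as np


def t10_from_tracer(times_s, cumulative_fraction):
    # time at which 10% of a tracer pulse has exited the tank
    return float(np.interp(0.10, cumulative_fraction, times_s))


volume_m3, flow_m3_s = 1500.0, 0.5
theoretical_detention = volume_m3 / flow_m3_s            # V/Q, seconds

times = np.array([0, 600, 1200, 1800, 2400, 3000, 3600], dtype=float)
F = np.array([0.0, 0.02, 0.15, 0.45, 0.75, 0.92, 1.0])   # tracer F-curve

t10 = t10_from_tracer(times, F)
baffling_factor = t10 / theoretical_detention
residual_mg_L = 0.3                                      # ozone residual at outlet
CT10 = residual_mg_L * t10 / 60.0                        # mg·min/L

print(f"t10 = {t10:.0f} s, baffling factor = {baffling_factor:.2f}, "
      f"CT10 = {CT10:.1f} mg·min/L")
```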

  2. Formal Methods for Quality of Service Analysis in Component-Based Distributed Computing

    DTIC Science & Technology

    2003-12-01

    Component-Based Software Architecture is a promising solution for distributed computing. To develop high quality software, analysis of non-functional... based distributed computing is proposed and represented formally using Two-Level Grammar (TLG), an object-oriented formal specification language. TLG

  3. The Development of a Computer Assisted Distribution and Assignment (CADA) System for Navy Enlisted Personnel.

    ERIC Educational Resources Information Center

    Whitehead, Randall F.; And Others

    This report describes the development of a computerized system to assist Navy personnel managers in carrying out the functions associated with the distribution and assignment of enlisted personnel. This Computer Assisted Distribution and Assignment (CADA) System is aimed at the most efficient interaction between the computer and human manager to…

  4. Distributed computing and data storage in proteomics: many hands make light work, and a stronger memory.

    PubMed

    Verheggen, Kenneth; Barsnes, Harald; Martens, Lennart

    2014-03-01

    Modern day proteomics generates ever more complex data, causing the requirements on the storage and processing of such data to outgrow the capacity of most desktop computers. To cope with the increased computational demands, distributed architectures have gained substantial popularity in the recent years. In this review, we provide an overview of the current techniques for distributed computing, along with examples of how the techniques are currently being employed in the field of proteomics. We thus underline the benefits of distributed computing in proteomics, while also pointing out the potential issues and pitfalls involved.

  5. High-performance, distributed computing software libraries and services

    SciTech Connect

    Foster, Ian; Kesselman, Carl; Tuecke, Steven

    2002-01-24

    The Globus toolkit provides basic Grid software infrastructure (i.e. middleware), to facilitate the development of applications which securely integrate geographically separated resources, including computers, storage systems, instruments, immersive environments, etc.

  6. Computational methods for the control of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Cliff, E. M.; Powers, R. K.

    1986-01-01

    Finite dimensional approximation schemes that work well for distributed parameter systems are often not suitable for the analysis and implementation of feedback control systems. The relationship between approximation schemes for distributed parameter systems and their application to optimal control problems is discussed. A numerical example is given.

  7. Overview of the human brain as a distributed computing network

    SciTech Connect

    Gevins, A.S.

    1983-01-01

    The hierarchically organized human brain is viewed as a prime example of a massively parallel, adaptive information processing and process control system. A brief overview of the human brain is provided for computer architects, in hopes that the principles of massive parallelism, dense connectivity and self-organization of assemblies of processing elements will prove relevant to the design of fifth generation VLSI computing networks. 6 references.

  8. Computer Generation of Fourier Transform Libraries for Distributed Memory Architectures

    DTIC Science & Technology

    2010-12-01

    parallelization. Tensor Contraction Engine (TCE). The TCE compiler [Baumgartner et al., 2005] is an example of a project that, like SPIRAL, uses a... pages 172–184, 2007... Gerald Baumgartner, Alexander Auer, David E. Bernholdt, Alina Bibireata, Venkatesh Choppella, Daniel Cociorva, Xiaoyang Gao... Michael J. Flynn. Some computer organizations and their effectiveness. IEEE Transactions on Computing, C-21:948, 1972... MPI Forum. Message passing

  9. An inverse method for computation of structural stiffness distributions of aeroelastically optimized wings

    NASA Astrophysics Data System (ADS)

    Schuster, David M.

    1993-04-01

    An inverse method has been developed to compute the structural stiffness properties of wings given a specified wing loading and aeroelastic twist distribution. The method directly solves for the bending and torsional stiffness distribution of the wing using a modal representation of these properties. An aeroelastic design problem involving the use of a computational aerodynamics method to optimize the aeroelastic twist distribution of a fighter wing operating at maneuver flight conditions is used to demonstrate the application of the method. This exercise verifies the ability of the inverse scheme to accurately compute the structural stiffness distribution required to generate a specific aeroelastic twist under a specified aeroelastic load.

  10. Advanced Distributed Measurements and Data Processing at the Vibro-Acoustic Test Facility, GRC Space Power Facility, Sandusky, Ohio - an Architecture and an Example

    NASA Technical Reports Server (NTRS)

    Hill, Gerald M.; Evans, Richard K.

    2009-01-01

    A large-scale, distributed, high-speed data acquisition system (HSDAS) is currently being installed at the Space Power Facility (SPF) at NASA Glenn Research Center's Plum Brook Station in Sandusky, OH. This installation is being done as part of a facility construction project to add Vibro-acoustic Test Capabilities (VTC) to the current thermal-vacuum testing capability of SPF in support of the Orion Project's requirement for Space Environments Testing (SET). The HSDAS architecture is a modular design which, by utilizing fully remotely managed components, enables the system to support multiple test locations with a wide range of measurement types and a very large system channel count. The architecture of the system is presented along with details on system scalability and measurement verification. In addition, the ability of the system to automate many of its processes such as measurement verification and measurement system analysis is also discussed.

  11. Measurement of depth distributions of (3)H and (14)C induced in concrete shielding of an electron accelerator facility.

    PubMed

    Endo, Akira; Harada, Yasunori; Kawasaki, Katsuya; Kikuchi, Masamitsu

    2004-06-01

    The estimation of radioactivity induced in concrete shielding is important for the decommissioning of accelerator facilities. Concentrations of (3)H and (14)C in the concrete shielding of an electron linear accelerator were measured, and the depth distributions of (3)H and (14)C and gamma-ray emitters were discussed in relation to their formation reactions.

  12. Prevalence, distribution, and molecular characterization of Salmonella recovered from swine finishing herds and a slaughter facility in Santa Catarina, Brazil

    USDA-ARS?s Scientific Manuscript database

    Swine are a reservoir for Salmonella spp., and pork and pork products are vehicles of Salmonella infections. The objective of this investigation was to determine the distribution and types of Salmonella in 12 swine finishing herds and a slaughter facility in Santa Catarina, Brazil. A total of 1,258 ...

  13. Evaluation of Near Field Atmospheric Dispersion Around Nuclear Facilities Using a Lorentzian Distribution Methodology

    SciTech Connect

    Hawkley, Gavin

    2014-01-01

    Atmospheric dispersion modeling within the near field of a nuclear facility typically applies a building wake correction to the Gaussian plume model, whereby a point source is modeled as a plane source. The plane source results in greater near field dilution and reduces the far field effluent concentration. However, the correction does not account for the concentration profile within the near field. Receptors of interest, such as the maximally exposed individual, may exist within the near field and thus the realm of building wake effects. Furthermore, release parameters and displacement characteristics may be unknown, particularly during upset conditions. Therefore, emphasis is placed upon the need to analyze and estimate an enveloping concentration profile within the near field of a release. This investigation included the analysis of 64 air samples collected over 128 wk. Variables of importance were then derived from the measurement data, and a methodology was developed that allowed for the estimation of Lorentzian-based dispersion coefficients along the lateral axis of the near field recirculation cavity; the development of recirculation cavity boundaries; and conservative evaluation of the associated concentration profile. The results evaluated the effectiveness of the Lorentzian distribution methodology for estimating near field releases and emphasized the need to place air-monitoring stations appropriately for complete concentration characterization. Additionally, the importance of the sampling period and operational conditions were discussed to balance operational feedback and the reporting of public dose.
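
    A hedged illustration of the Lorentzian-profile idea is sketched below: a Lorentzian (Cauchy-shaped) lateral concentration profile is fitted to near-field samples and then evaluated at a receptor location. The data points and fitted parameters are synthetic, not the paper's measurements or dispersion coefficients.

```python
# Hedged sketch: fit a Lorentzian lateral concentration profile (heavier tails
# than a Gaussian) to synthetic near-field samples, then evaluate it at a
# receptor. Not the paper's data or dispersion coefficients.
import numpy as np
from scipy.optimize import curve_fit


def lorentzian(y, peak, y0, gamma):
    # gamma is the half-width at half-maximum along the lateral axis
    return peak * gamma**2 / ((y - y0)**2 + gamma**2)


# synthetic lateral samples across the recirculation cavity (m, arbitrary units)
y_obs = np.array([-40, -30, -20, -10, 0, 10, 20, 30, 40], dtype=float)
c_obs = np.array([0.9, 1.4, 2.6, 5.8, 9.7, 6.1, 2.9, 1.5, 1.0])

(peak, y0, gamma), _ = curve_fit(lorentzian, y_obs, c_obs, p0=[10.0, 0.0, 10.0])
print(f"fitted peak={peak:.2f}, centre={y0:.2f} m, HWHM={gamma:.2f} m")

# the fitted profile can then be evaluated at a receptor of interest
receptor_y = 25.0
print(f"estimated concentration at y={receptor_y} m: "
      f"{lorentzian(receptor_y, peak, y0, gamma):.2f}")
```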

  14. Analysis of neutron flux distribution for the validation of computational methods for the optimization of research reactor utilization.

    PubMed

    Snoj, L; Trkov, A; Jaćimović, R; Rogan, P; Zerovnik, G; Ravnik, M

    2011-01-01

    In order to verify and validate the computational methods for neutron flux calculations in the TRIGA research reactor, a series of experiments has been performed. The neutron activation method was used to verify the calculated neutron flux distribution in the TRIGA reactor. Aluminium (99.9 wt%)-Gold (0.1 wt%) foils (disks of 5 mm diameter and 0.2 mm thickness) were irradiated in 33 locations: 6 in the core and 27 in the carrousel facility in the reflector. The experimental results were compared to calculations performed with the Monte Carlo code MCNP using a detailed geometrical model of the reactor. The calculated and experimental normalized reaction rates in the core are in very good agreement for both isotopes, indicating that the material and geometrical properties of the reactor core are modelled well. In conclusion, one can state that our computational model describes the neutron flux and reaction rate distribution in the reactor core very well. In the reflector, however, the accuracy of the epithermal and thermal neutron flux distribution and attenuation is lower, mainly due to a lack of information about the material properties of the graphite reflector surrounding the core, but the differences between measurements and calculations are within 10%. Since our computational model properly describes the reactor core, it can be used for calculations of reactor core parameters and for optimization of research reactor utilization.

  15. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.

    1983-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F. Other mathematical functions include the Bessel function I sub o, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
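
    A present-day analogue of the kinds of routines the report describes can be sketched with scipy.stats, as below: cumulative probabilities and quantiles for several of the named distributions, plus variates generated from a uniform stream by inverse transformation. Parameter values are illustrative only.

```python
# Modern analogue of the report's routines, sketched with scipy.stats;
# parameter values are illustrative.
import numpy as np
from scipy import stats

x = 2.5
print("Gaussian   P(X<=x):", stats.norm.cdf(x, loc=0, scale=1))
print("chi-square P(X<=x), 4 dof:", stats.chi2.cdf(x, df=4))
print("gamma      P(X<=x), shape 2:", stats.gamma.cdf(x, a=2))
print("Weibull    P(X<=x), shape 1.5:", stats.weibull_min.cdf(x, c=1.5))
# Pearson Type III, used heavily in statistical hydrology
print("Pearson III quantile, skew 0.5, p=0.99:",
      stats.pearson3.ppf(0.99, skew=0.5))

# random numbers from other distributions via a uniform generator,
# as the report's auxiliary routines do
rng = np.random.default_rng(42)
u = rng.uniform(size=5)
print("gamma variates by inverse CDF:", stats.gamma.ppf(u, a=2))
```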

  16. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.H.

    1980-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F tests. Other mathematical functions include the Bessel function I (subzero), gamma and log-gamma functions, error functions and exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)

  17. Distributed Computer Systems for the Republic of Turkish Navy.

    DTIC Science & Technology

    1985-12-01

    Resource and Process Management... Scheduling... Selection and Design of Distributed... (LAN). DCS covers many areas including the management of communications, operating systems, distributed database systems, concurrency, and fault tolerance... [feature comparison covering recovery, error control, logging, sharing of connections, and network management] ...communication medium is a branching

  18. Nested Transactions: An Approach to Reliable Distributed Computing.

    DTIC Science & Technology

    1981-04-01

    by showing how to detect deadlocks among nested transactions in a distributed system, and how to make a reasonably strong guarantee that any well...given below. 3.3.3 Reliable Distributed Commitment. To abort or commit a transaction correctly, we must make sure that either all its updates are written...that the coordinator not make its record that it is completing the transaction until after all the participants have responded prepared. The rest of the
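
    The commit rule quoted in the snippet, that the coordinator records completion only after every participant has answered "prepared", is the heart of two-phase commit. The sketch below is a generic illustration of that rule, not the report's protocol definition.

```python
# Generic two-phase commit sketch: the coordinator logs "committing" only
# after all participants voted prepared. Illustrative classes only.
class Participant:
    def __init__(self, name, healthy=True):
        self.name, self.healthy, self.state = name, healthy, "active"

    def prepare(self):
        # tentatively write updates to stable storage, then vote
        self.state = "prepared" if self.healthy else "aborted"
        return self.state == "prepared"

    def finish(self, decision):
        self.state = decision


def two_phase_commit(coordinator_log, participants):
    votes = [p.prepare() for p in participants]        # phase 1: collect votes
    if all(votes):
        coordinator_log.append("committing")           # recorded only after all prepared
        decision = "committed"
    else:
        coordinator_log.append("aborting")
        decision = "aborted"
    for p in participants:                             # phase 2: propagate decision
        p.finish(decision)
    return decision


log = []
print(two_phase_commit(log, [Participant("A"), Participant("B")]))                 # committed
print(two_phase_commit(log, [Participant("A"), Participant("C", healthy=False)]))  # aborted
print(log)                                             # ['committing', 'aborting']
```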

  19. Intelligent Decentralized Control In Large Distributed Computer Systems

    DTIC Science & Technology

    1988-04-01

    of managing this complexity. Ideally, control is distributed so that each agent accepts part of the burden of control and contributes to the...important in considering present (and, even more, future) distributed systems, whose parts are often owned by different...the mind-set of an agent taking part in a decentralized control scheme. It is summed up succinctly in the phrase: think globally, act locally. An

  20. Parallel grid generation algorithm for distributed memory computers

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and implementation of multiple levels of parallelism on multiple instruction multiple data machines are indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.
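
    A hedged sketch of the algebraic idea behind homotopy-based grid generation follows: interior points are produced by blending an inner boundary curve into an outer one, and each blended band is independent, which is what makes the scheme easy to distribute with little communication. This illustration is not the iPSC/860 implementation.

```python
# Hedged sketch of algebraic grid generation by a homotopy (linear blend)
# between an inner and an outer boundary curve. Illustrative only.
import numpy as np


def homotopy_grid(inner, outer, n_layers):
    """Blend inner (N,2) and outer (N,2) boundary curves into an N x n_layers grid."""
    s = np.linspace(0.0, 1.0, n_layers)[None, :, None]        # blending parameter
    return (1.0 - s) * inner[:, None, :] + s * outer[:, None, :]


theta = np.linspace(0.0, 2.0 * np.pi, 65)
inner = np.column_stack([np.cos(theta), 0.5 * np.sin(theta)])        # body-like curve
outer = np.column_stack([4.0 * np.cos(theta), 4.0 * np.sin(theta)])  # far boundary

grid = homotopy_grid(inner, outer, n_layers=33)               # shape (65, 33, 2)
print(grid.shape)

# In a distributed-memory setting, each processor could own a contiguous band
# of theta indices and fill its band without interprocessor communication.
```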

  1. An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions

    ERIC Educational Resources Information Center

    Radhakrishnan, R.; Choudhury, Askar

    2009-01-01

    Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…
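
    The transformation idea in the abstract, mapping a multivariate normal vector to independent standard normals whose moments are trivial and then mapping back, can be sketched and checked numerically as below; the mean vector and covariance matrix are arbitrary examples.

```python
# Hedged sketch: represent X = mu + A Z with Z ~ N(0, I) via a Cholesky
# factor A, read the moments off the transformation, and verify by sampling.
import numpy as np

mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.5]])

A = np.linalg.cholesky(Sigma)          # X = mu + A Z with Z ~ N(0, I)

# Moments of Z are known (mean 0, covariance I), so
mean_X = mu + A @ np.zeros(3)          # = mu
cov_X = A @ np.eye(3) @ A.T            # = Sigma

rng = np.random.default_rng(0)
Z = rng.standard_normal((200_000, 3))
X = mu + Z @ A.T
print(np.allclose(mean_X, X.mean(axis=0), atol=0.02))
print(np.allclose(cov_X, np.cov(X, rowvar=False), atol=0.05))
```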

  3. A study of standard building blocks for the design of fault-tolerant distributed computer systems

    NASA Technical Reports Server (NTRS)

    Rennels, D. A.; Avizienis, A.; Ercegovac, M.

    1978-01-01

    This paper presents the results of a study that has established a standard set of four semiconductor VLSI building-block circuits. These circuits can be assembled with off-the-shelf microprocessors and semiconductor memory modules into fault-tolerant distributed computer configurations. The resulting multi-computer architecture uses self-checking computer modules backed up by a limited number of spares. A redundant bus system is employed for communication between computer modules.

  4. Lattice gauge theory on a massively parallel computing facility. Final report

    SciTech Connect

    Sugar, R.

    1998-08-07

    This grant provided access to the massively parallel computing facilities at Oak Ridge National Laboratory for the study of lattice gauge theory. The major project was a calculation of the weak decay constants of pseudoscalar mesons with one light and one heavy quark. A number of these constants have not yet been measured, so the calculations constituted a set of predictions which will be tested by future experiments. More importantly, f{sub B} and f{sub B{sub s}}, the decay constants of the B and B{sub s} mesons, are crucial inputs for extracting information regarding the CKM matrix element V{sub td} from experimental measurements of B-{anti B} mixing, and future measurements of B{sub s}-{anti B}{sub s} mixing planned for the B-factory currently under construction at the Stanford Linear Accelerator Center. V{sub td} is one of the least well determined parameters of the Standard Model of High Energy Physics. It does not appear likely that F{sub B} and f{sub B{sub s}} will be measured experimentally in the near future, so lattice calculations such as this will play a crucial role in extracting information about the Standard Model from the B-factory experiments. The author has carried out the most accurate calculations of the heavy-light decay constants to date within the quenched approximation, that is ignoring the effects of sea quarks. Furthermore, his was the only group to have estimated the errors in the decay constants associated with the quenched approximation.

  5. Distributed sequence alignment applications for the public computing architecture.

    PubMed

    Pellicer, S; Chen, G; Chan, K C C; Pan, Y

    2008-03-01

    The public computer architecture shows promise as a platform for solving fundamental problems in bioinformatics such as global gene sequence alignment and data mining with tools such as the basic local alignment search tool (BLAST). Our implementation of these two problems on the Berkeley open infrastructure for network computing (BOINC) platform demonstrates a runtime reduction factor of 1.15 for sequence alignment and 16.76 for BLAST. While the runtime reduction factor of the global gene sequence alignment application is modest, this value is based on a theoretical sequential runtime extrapolated from the calculation of a smaller problem. Because this runtime is extrapolated from running the calculation in memory, the theoretical sequential runtime would require 37.3 GB of memory on a single system. With this in mind, the BOINC implementation not only offers the reduced runtime, but also the aggregation of the available memory of all participant nodes. If an actual sequential run of the problem were compared, a more drastic reduction in the runtime would be seen due to an additional secondary storage I/O overhead for a practical system. Despite the limitations of the public computer architecture, most notably in communication overhead, it represents a practical platform for grid- and cluster-scale bioinformatics computations today and shows great potential for future implementations.

  6. Learning General Phonological Rules from Distributional Information: A Computational Model

    ERIC Educational Resources Information Center

    Calamaro, Shira; Jarosz, Gaja

    2015-01-01

    Phonological rules create alternations in the phonetic realizations of related words. These rules must be learned by infants in order to identify the phonological inventory, the morphological structure, and the lexicon of a language. Recent work proposes a computational model for the learning of one kind of phonological alternation, allophony…

  7. Distributed computing for autonomous on board planning and sequence validations

    NASA Technical Reports Server (NTRS)

    Ko, A. Y.; Alkalai, L.; Chau, S.; Cheung, K.; Tong, D.; Maldague, P. F.

    2002-01-01

    We propose a new conceptual approach to system-level autonomy that exploits in a synergistic way recent breakthroughs in three specific areas: automatic generation of embeddable planning and validation software, integration of telecommunications forecaster and planning tools, and fault-tolerant assignment of computing tasks to multiple processors.

  8. Polytopol computing for multi-core and distributed systems

    NASA Astrophysics Data System (ADS)

    Spaanenburg, Henk; Spaanenburg, Lambert; Ranefors, Johan

    2009-05-01

    Multi-core computing provides new challenges to software engineering. The paper addresses such issues in the general setting of polytopol computing, that takes multi-core problems in such widely differing areas as ambient intelligence sensor networks and cloud computing into account. It argues that the essence lies in a suitable allocation of free moving tasks. Where hardware is ubiquitous and pervasive, the network is virtualized into a connection of software snippets judiciously injected to such hardware that a system function looks as one again. The concept of polytopol computing provides a further formalization in terms of the partitioning of labor between collector and sensor nodes. Collectors provide functions such as a knowledge integrator, awareness collector, situation displayer/reporter, communicator of clues and an inquiry-interface provider. Sensors provide functions such as anomaly detection (only communicating singularities, not continuous observation), they are generally powered or self-powered, amorphous (not on a grid) with generation-and-attrition, field re-programmable, and sensor plug-and-play-able. Together the collector and the sensor are part of the skeleton injector mechanism, added to every node, and give the network the ability to organize itself into some of many topologies. Finally we will discuss a number of applications and indicate how a multi-core architecture supports the security aspects of the skeleton injector.

  9. Distributed UHV system for the folded tandem ion accelerator facility at BARC

    NASA Astrophysics Data System (ADS)

    Gupta, S. K.; Agarwal, A.; Singh, S. K.; Basu, A.; P, Sapna; Sarode, S. P.; Singh, V. P.; Subrahmanyam, N. B. V.; Bhatt, J. P.; Pol, S. S.; Raut, P. J.; Ware, S. V.; Singh, P.; Choudhury, R. K.; Kailas, S.

    2008-05-01

    The 6 MV Folded Tandem Ion Accelerator (FOTIA) Facility at the Nuclear Physics Division, BARC, is operational, and accelerated beams of both light and heavy ions are being used extensively for basic and applied research. An average vacuum of the order of 10⁻⁸-10⁻⁹ Torr is maintained for maximum beam transmission and minimum beam energy spread. The FOTIA vacuum system comprises about 55 m of 100 mm diameter beam lines, including various diagnostic devices, two accelerating tubes and four narrow vacuum chambers. The vacuum chamber cross sections are 14 mm × 24 mm for the 180° bending magnet, and 38 mm × 60 mm and 19 mm × 44 mm for the 70° and 90° bending magnets and the switching chambers, respectively. All the beam line components are UHV compatible, fabricated from stainless steel 304L grade material and fitted with metal gaskets. The total volume is ~5.8 × 10⁵ cm³ and the surface area 4.6 × 10⁴ cm², interspersed with a total of 18 pumping stations. The accelerating tubes are subjected to a very high voltage gradient, 20.4 kV/cm, which requires a hydrocarbon-free and clean vacuum for smooth operation of the accelerator. Vacuum interlocks are provided to various devices for safe operation of the accelerator. Specially designed sputter ion pumps rated for a higher environmental pressure of 8 atmospheres are used to pump the accelerating tubes and the vacuum chamber of the 180° bending magnet. Fast-acting valves are provided to isolate the main accelerator against accidental air inrush from the rest of the beam lines. All the vacuum readings are displayed locally and are also available remotely through a computer interface to the Control Room. Vacuum system details are described in this paper.

  10. Automation of the CFD Process on Distributed Computing Systems

    NASA Technical Reports Server (NTRS)

    Tejnil, Ed; Gee, Ken; Rizk, Yehia M.

    2000-01-01

    A script system was developed to automate and streamline portions of the CFD process. The system was designed to facilitate the use of CFD flow solvers on supercomputer and workstation platforms within a parametric design event. Integrating solver pre- and postprocessing phases, the fully automated ADTT script system marshalled the required input data, submitted the jobs to available computational resources, and processed the resulting output data. A number of codes were incorporated into the script system, which itself was part of a larger integrated design environment software package. The IDE and scripts were used in a design event involving a wind tunnel test. This experience highlighted the need for efficient data and resource management in all parts of the CFD process. To facilitate the use of CFD methods to perform parametric design studies, the script system was developed using UNIX shell and Perl languages. The goal of the work was to minimize the user interaction required to generate the data necessary to fill a parametric design space. The scripts wrote out the required input files for the user-specified flow solver, transferred all necessary input files to the computational resource, submitted and tracked the jobs using the resource queuing structure, and retrieved and post-processed the resulting dataset. For computational resources that did not run queueing software, the script system established its own simple first-in-first-out queueing structure to manage the workload. A variety of flow solvers were incorporated in the script system, including INS2D, PMARC, TIGER and GASP. Adapting the script system to a new flow solver was made easier through the use of object-oriented programming methods. The script system was incorporated into an ADTT integrated design environment and evaluated as part of a wind tunnel experiment. The system successfully generated the data required to fill the desired parametric design space. This stressed the computational
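
    The kind of first-in-first-out handling the abstract describes for hosts without queueing software is sketched below: write the solver input for each case, queue the case, run jobs in arrival order, and capture a log. Names and the stand-in solver command are placeholders; the actual ADTT scripts were written in UNIX shell and Perl.

```python
# Hedged sketch of a simple FIFO job structure for a parametric CFD study;
# "echo" stands in for the flow solver, and all names are placeholders.
import subprocess
from collections import deque
from pathlib import Path

queue = deque()


def submit(case_name, params):
    case_dir = Path(case_name)
    case_dir.mkdir(exist_ok=True)
    (case_dir / "input.txt").write_text(
        "\n".join(f"{k} = {v}" for k, v in params.items()))
    queue.append(case_dir)                       # FIFO: first submitted, first run


def run_all(solver_cmd="echo"):                  # stand-in for the flow solver
    while queue:
        case_dir = queue.popleft()
        with open(case_dir / "run.log", "w") as log:
            subprocess.run([solver_cmd, f"running {case_dir.name}"],
                           stdout=log, check=True)


# fill a small parametric design space
for mach in (0.6, 0.7, 0.8):
    for alpha in (0.0, 2.0, 4.0):
        submit(f"case_M{mach}_a{alpha}", {"mach": mach, "alpha": alpha})
run_all()
```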

  12. A lightweight distributed framework for computational offloading in mobile cloud computing.

    PubMed

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in processing potential, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.
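
    As a generic illustration of the trade-off such offloading frameworks manage (not the authors' algorithm), the sketch below offloads a component only when remote execution plus transfer overhead beats local execution on the device; all device, cloud and network parameters are invented.

```python
# Generic offloading-decision sketch (illustrative parameters only):
# offload when remote execution + transfer overhead < local execution time.
def should_offload(workload_minstr, data_mb,
                   local_mips=1_000, cloud_mips=20_000,
                   uplink_mbps=5.0, rtt_s=0.08):
    local_time = workload_minstr / local_mips              # seconds on the device
    transfer_time = data_mb * 8.0 / uplink_mbps + rtt_s    # send input + latency
    remote_time = workload_minstr / cloud_mips + transfer_time
    return remote_time < local_time, local_time, remote_time


for work, data in [(5_000, 1.0), (500, 20.0)]:
    offload, t_local, t_remote = should_offload(work, data)
    print(f"work={work} Minstr, data={data} MB -> offload={offload} "
          f"(local {t_local:.2f}s vs remote {t_remote:.2f}s)")
```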

  13. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    PubMed Central

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in processing potential, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  14. Partitioning problems in parallel, pipelined and distributed computing

    NASA Technical Reports Server (NTRS)

    Bokhari, S.

    1985-01-01

    The problem of optimally assigning the modules of a parallel program over the processors of a multiple computer system is addressed. A Sum-Bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple satellite system: partitioning multiple chain structured parallel programs, multiple arbitrarily structured serial programs and single tree structured parallel programs. In addition, the problems of partitioning chain structured parallel programs across chain connected systems and across shared memory (or shared bus) systems are also solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account and permit the efficient utilization of multiple computer architectures for a wide range of problems of practical interest.
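
    A simplified, related illustration: split a chain of module weights into contiguous blocks, one per processor, so that the most heavily loaded processor (the bottleneck) is as light as possible. The binary-search sketch below is a compact stand-in for this problem class, not Bokhari's sum-bottleneck path algorithm.

```python
# Simplified chain-partitioning sketch: minimise the bottleneck load when a
# chain of modules is split into k contiguous blocks. Illustrative only.
def min_bottleneck_chain(weights, k):
    def fits(limit):
        blocks, current = 1, 0
        for w in weights:
            if w > limit:
                return False
            if current + w > limit:
                blocks, current = blocks + 1, w
            else:
                current += w
        return blocks <= k

    lo, hi = max(weights), sum(weights)
    while lo < hi:                      # binary search on the bottleneck value
        mid = (lo + hi) // 2
        if fits(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo


module_costs = [4, 9, 3, 7, 6, 2, 8, 5]          # execution cost of each chain module
print(min_bottleneck_chain(module_costs, k=3))   # minimal achievable bottleneck (16)
```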

  15. Partitioning problems in parallel, pipelined, and distributed computing

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1988-01-01

    The problem of optimally assigning the modules of a parallel program over the processors of a multiple-computer system is addressed. A sum-bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple-satellite system: partitioning multiple chain-structured parallel programs, multiple arbitrarily structured serial programs, and single-tree structured parallel programs. In addition, the problem of partitioning chain-structured parallel programs across chain-connected systems is solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account and permit the efficient utilization of multiple-computer architectures for a wide range of problems of practical interest.

  16. Lilith: A scalable secure tool for massively parallel distributed computing

    SciTech Connect

    Armstrong, R.C.; Camp, L.J.; Evensky, D.A.; Gentile, A.C.

    1997-06-01

    Changes in high-performance computing have created the need to utilize and interrogate potentially many thousands of processors. The ASCI (Advanced Strategic Computing Initiative) program conducted by the United States Department of Energy, for example, envisions thousands of distinct operating systems connected by low-latency, gigabit-per-second networks. In addition, multiple systems of this kind will be linked via high-capacity networks with latencies as low as the speed of light will allow. Code that spans systems of this sort must be scalable, yet constructing such code, whether for applications, debugging, or maintenance, is an unsolved problem. Lilith is a research software platform that attempts to meet these needs. At present, Lilith exists as a test-bed, written in Java, for various spanning algorithms and security schemes. The test-bed software has, and enforces, hooks that allow various security schemes to be implemented and tested.
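
    Scalable control of thousands of nodes typically relies on tree-structured fan-out, in which each node runs a command locally and then dispatches it to a bounded number of children. The sketch below is a small simulation of that pattern under an assumed k-ary numbering of nodes; it illustrates the general style of spanning algorithm such a test-bed explores and is not taken from the Lilith code base.

    def children(node_id, fanout, n_nodes):
        """IDs of the nodes this node forwards the command to (k-ary numbering)."""
        first = node_id * fanout + 1
        return [c for c in range(first, first + fanout) if c < n_nodes]

    def fan_out(node_id, command, fanout, n_nodes, results):
        """Run the command locally, then dispatch it recursively down the tree."""
        results[node_id] = "node %d: %s done" % (node_id, command)
        for child in children(node_id, fanout, n_nodes):
            fan_out(child, command, fanout, n_nodes, results)

    results = {}
    fan_out(node_id=0, command="collect-status", fanout=4, n_nodes=64, results=results)
    print(len(results), "nodes reached")   # reaches all 64 nodes in three hops from the root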

  17. A support architecture for reliable distributed computing systems

    NASA Technical Reports Server (NTRS)

    Dasgupta, Partha; Leblanc, Richard J., Jr.

    1988-01-01

    The Clouds project is well underway toward its goal of building a unified distributed operating system supporting the object model. The operating system design uses the object concept to structure software at all levels of the system. The basic operating system has been developed, and work is in progress to build a usable system.

  18. Reviews of computing technology: Fiber distributed data interface

    SciTech Connect

    Johnson, A.J.

    1992-04-01

    This technology report describes Fiber Distributed Data Interface (FDDI) as a technology, looks at its applications, examines the current economics of using it, and describes activities and plans by the Information Resource Management Department to implement the technology at the Savannah River Site.

  19. Reviews of computing technology: Fiber distributed data interface. Revision

    SciTech Connect

    Johnson, A.J.

    1992-04-01

    This technology report describes Fiber Distributed Data Interface (FDDI) as a technology, looks at its applications, examines the current economics of using it, and describes activities and plans by the Information Resource Management Department to implement the technology at the Savannah River Site.

  20. School Facilities Funding and Capital-Outlay Distribution in the States

    ERIC Educational Resources Information Center

    Duncombe, William; Wang, Wen

    2009-01-01

    Traditionally, financing the construction of school facilities has been a local responsibility. In the past several decades, states have increased their support for school facilities. Using data collected from various sources, this study first classifies the design of capital aid programs in all 50 states into various categories based on the scope…